The telecommunications (telecom) sector in India is changing rapidly, with new telecom businesses emerging and many customers switching between providers. "Churn" refers to customers or subscribers stopping use of a company's services or products. Understanding the factors that influence customer retention, and using them to predict churn, is crucial for telecom companies that want to improve service quality and customer satisfaction. As the data scientist on this project, you will explore the dynamics of customer behavior and demographics in the Indian telecom sector to predict customer churn, using two comprehensive datasets from four major telecom partners: Airtel, Reliance Jio, Vodafone, and BSNL:
telecom_demographics.csv
contains information related to Indian customer demographics:
| Variable | Description |
|---|---|
| customer_id | Unique identifier for each customer. |
| telecom_partner | The telecom partner associated with the customer. |
| gender | The gender of the customer. |
| age | The age of the customer. |
| state | The Indian state in which the customer is located. |
| city | The city in which the customer is located. |
| pincode | The pincode of the customer's location. |
| registration_event | When the customer registered with the telecom partner. |
| num_dependents | The number of dependents (e.g., children) the customer has. |
| estimated_salary | The customer's estimated salary. |
telecom_usage.csv
contains information about the usage patterns of Indian customers:
| Variable | Description |
|---|---|
| customer_id | Unique identifier for each customer. |
| calls_made | The number of calls made by the customer. |
| sms_sent | The number of SMS messages sent by the customer. |
| data_used | The amount of data used by the customer. |
| churn | Binary variable indicating whether the customer has churned (1 = churned, 0 = not churned). |
# Import libraries and methods/functions
import pandas as pd
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn.linear_model import RidgeClassifier, LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix
# Start your code here!
# Load the two datasets and join them on the shared customer_id key
demo_df = pd.read_csv('telecom_demographics.csv')
usage_df = pd.read_csv('telecom_usage.csv')
churn_df = demo_df.merge(usage_df, on='customer_id')
print(churn_df.head())
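The merge above assumes each `customer_id` appears exactly once in each file. A minimal sketch (on toy frames, not the real CSVs) of how pandas can verify that assumption with the `validate` and `indicator` parameters of `merge`:

```python
import pandas as pd

# Toy frames standing in for the two CSVs (synthetic values)
demo = pd.DataFrame({'customer_id': [1, 2, 3], 'age': [25, 40, 31]})
usage = pd.DataFrame({'customer_id': [1, 2, 3], 'churn': [0, 1, 0]})

# validate='one_to_one' raises if either side has duplicate customer_ids;
# indicator=True adds a '_merge' column flagging unmatched rows
merged = pd.merge(demo, usage, on='customer_id', how='left',
                  validate='one_to_one', indicator=True)
print(merged['_merge'].value_counts())
```

If any row shows up as `left_only`, that customer has demographics but no usage record, which is worth investigating before modeling.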
# Overall churn rate (mean of the binary churn column)
churn_rate = churn_df['churn'].mean()
print('Churn rate:', churn_rate)
churn_df.info()
# Filter numeric features, excluding identifiers and the target
num_feats = churn_df.select_dtypes(include='number').columns.tolist()
for col in ['customer_id', 'pincode', 'churn']:
    num_feats.remove(col)
print(num_feats)
print(churn_df['telecom_partner'].unique())
print(churn_df['gender'].unique())
print(churn_df['registration_event'].nunique())
print(churn_df['pincode'].nunique())
#encode relevant categorical features:
churn_df = pd.get_dummies(churn_df, columns=['telecom_partner', 'gender', 'state', 'city', 'registration_event'])
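One-hot encoding high-cardinality columns such as `city` or `registration_event` can add one dummy column per unique value, which quickly inflates the feature matrix. A small sketch (synthetic data) of checking cardinality before deciding which columns to encode:

```python
import pandas as pd

# Synthetic stand-in for a few categorical columns
df = pd.DataFrame({
    'telecom_partner': ['Airtel', 'Jio', 'Airtel', 'BSNL'],
    'gender': ['M', 'F', 'F', 'M'],
    'city': ['Delhi', 'Mumbai', 'Pune', 'Chennai'],
})

# Columns with low cardinality are cheap to one-hot encode;
# each unique value in a column becomes one new dummy column
cardinality = df.nunique()
low_card = cardinality[cardinality <= 3].index.tolist()
encoded = pd.get_dummies(df, columns=low_card)
print(cardinality)
print(encoded.shape)
```

Here only `telecom_partner` and `gender` pass the (arbitrary) threshold of 3, so `city` is left unencoded; on the real data the same check would reveal how many columns `get_dummies` is about to create.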
print(churn_df[num_feats].head())
# Scale the features, excluding identifiers and the target
scaler = StandardScaler()
features = churn_df.drop(['customer_id', 'pincode', 'churn'], axis=1)
features_scaled = scaler.fit_transform(features)
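Standardizing every column, including the 0/1 dummies, is harmless but usually unnecessary. An alternative sketch (toy data, using scikit-learn's `ColumnTransformer`) that scales only the continuous columns and passes the dummies through unchanged:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler

# Toy frame: two continuous columns plus an already-encoded dummy
df = pd.DataFrame({
    'age': [25, 40, 31, 55],
    'estimated_salary': [30000.0, 90000.0, 52000.0, 120000.0],
    'gender_M': [1, 0, 0, 1],
})

# Standardize only the continuous columns; remainder='passthrough'
# leaves the dummy column untouched (it is appended after the scaled ones)
ct = ColumnTransformer(
    [('num', StandardScaler(), ['age', 'estimated_salary'])],
    remainder='passthrough')
scaled = ct.fit_transform(df)
print(np.round(scaled, 2))
```

This keeps the dummies interpretable as 0/1 indicators while still putting the continuous features on a common scale.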
target = churn_df['churn']
print(features_scaled)
print(target)
# 80/20 train/test split
X_train, X_test, y_train, y_test = train_test_split(features_scaled, target, test_size=0.2, random_state=42, shuffle=True)
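When churners are a minority class, passing `stratify=target` to `train_test_split` keeps the churn rate equal across the train and test sets. A small sketch on synthetic labels:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic labels: 20% churners
y = np.array([0] * 80 + [1] * 20)
X = np.arange(100).reshape(-1, 1)

# stratify=y preserves the 80/20 class ratio in both splits
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
print('train churn rate:', y_tr.mean())
print('test churn rate :', y_te.mean())
```

Without stratification, an unlucky shuffle could leave the test set with noticeably more or fewer churners than the training set, skewing the evaluation.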
# Logistic regression baseline
logreg = LogisticRegression(random_state=42)
logreg.fit(X_train, y_train)
logreg_pred = logreg.predict(X_test)
print(logreg.score(X_test, y_test))

# Random forest for comparison
rf = RandomForestClassifier(random_state=42)
rf.fit(X_train, y_train)
rf_pred = rf.predict(X_test)
print(rf.score(X_test, y_test))
print("Confusion matrix for LogisticRegression:\n", confusion_matrix(y_test, logreg_pred))
print("Confusion matrix for RandomForestClassifier:\n", confusion_matrix(y_test, rf_pred))
print(classification_report(y_test, rf_pred))
# Model with the higher test accuracy
higher_accuracy = 'RandomForest'
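Raw accuracy can look good on imbalanced churn data simply by always predicting the majority class. A sketch (synthetic data) comparing a model against a majority-class baseline using scikit-learn's `DummyClassifier`:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic imbalanced labels (~20% positives), random features
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
y = (rng.random(500) < 0.2).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Majority-class baseline: a useful model should beat this accuracy
baseline = DummyClassifier(strategy='most_frequent').fit(X_tr, y_tr)
model = LogisticRegression().fit(X_tr, y_tr)
print('baseline accuracy:', baseline.score(X_te, y_te))
print('model accuracy   :', model.score(X_te, y_te))
```

If a classifier's accuracy is close to the baseline's, metrics such as recall and precision from `classification_report` give a clearer picture of how well it actually finds churners.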