Predicting Potential Credit Card Customers with Machine Learning Models (XGBoost, LightGBM, and Random Forest)

With the rapid development of data science and machine learning, more and more companies are using these techniques to improve operational efficiency. In this post, I share how to use machine learning models to predict potential credit card customers. The project is based on code and files I put together, and it covers the complete workflow: data preprocessing, data visualization, model training, prediction, and saving the results.

Project Overview

This project aims to use machine learning models to predict which customers are most likely to become credit card leads. We will use three main models: XGBoost, LightGBM, and Random Forest. The main steps are:

1. Data preprocessing
2. Data visualization
3. Model training
4. Model prediction
5. Model saving

1. Data Preprocessing

Data preprocessing is a crucial step in any machine learning project. Cleaning and preparing the data improves model performance and accuracy.

import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt 
import seaborn as sns
#Loading the dataset
df_train=pd.read_csv("dataset/train_s3TEQDk.csv")
df_train["source"]="train"
df_test=pd.read_csv("dataset/test_mSzZ8RL.csv")
df_test["source"]="test"
df=pd.concat([df_train,df_test],ignore_index=True)
df.head()
         ID  Gender  Age Region_Code     Occupation Channel_Code  Vintage Credit_Product  Avg_Account_Balance Is_Active  Is_Lead source
0  NNVBBKZB  Female   73       RG268          Other           X3       43             No              1045696        No      0.0  train
1  IDD62UNG  Female   30       RG277       Salaried           X1       32             No               581988        No      0.0  train
2  HD3DSEMC  Female   56       RG268  Self_Employed           X3       26             No              1484315       Yes      0.0  train
3  BF3NC7KV    Male   34       RG270       Salaried           X1       19             No               470454        No      0.0  train
4  TEASRWXV  Female   30       RG282       Salaried           X1       33             No               886787        No      0.0  train

Checking and Cleaning the Dataset:

#Checking columns of dataset
df.columns
Index(['ID', 'Gender', 'Age', 'Region_Code', 'Occupation', 'Channel_Code',
       'Vintage', 'Credit_Product', 'Avg_Account_Balance', 'Is_Active',
       'Is_Lead', 'source'],
      dtype='object')
#Checking shape 
df.shape
(351037, 12)
#Checking unique values 
df.nunique()
ID                     351037
Gender                      2
Age                        63
Region_Code                35
Occupation                  4
Channel_Code                4
Vintage                    66
Credit_Product              2
Avg_Account_Balance    162137
Is_Active                   2
Is_Lead                     2
source                      2
dtype: int64
#Check for Null Values
df.isnull().sum()
ID                          0
Gender                      0
Age                         0
Region_Code                 0
Occupation                  0
Channel_Code                0
Vintage                     0
Credit_Product          41847
Avg_Account_Balance         0
Is_Active                   0
Is_Lead                105312
source                      0
dtype: int64

Observation:
Null values are present in the Credit_Product column. (Is_Lead is null only for the test rows, where the target is unknown.)

#Fill null values in Credit_Product feature
df['Credit_Product']= df['Credit_Product'].fillna("NA")
#Again check for null values
df.isnull().sum()
ID                          0
Gender                      0
Age                         0
Region_Code                 0
Occupation                  0
Channel_Code                0
Vintage                     0
Credit_Product              0
Avg_Account_Balance         0
Is_Active                   0
Is_Lead                105312
source                      0
dtype: int64
#Checking Datatypes and info
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 351037 entries, 0 to 351036
Data columns (total 12 columns):
 #   Column               Non-Null Count   Dtype  
---  ------               --------------   -----  
 0   ID                   351037 non-null  object 
 1   Gender               351037 non-null  object 
 2   Age                  351037 non-null  int64  
 3   Region_Code          351037 non-null  object 
 4   Occupation           351037 non-null  object 
 5   Channel_Code         351037 non-null  object 
 6   Vintage              351037 non-null  int64  
 7   Credit_Product       351037 non-null  object 
 8   Avg_Account_Balance  351037 non-null  int64  
 9   Is_Active            351037 non-null  object 
 10  Is_Lead              245725 non-null  float64
 11  source               351037 non-null  object 
dtypes: float64(1), int64(3), object(8)
memory usage: 32.1+ MB
#Changing Yes to 1 and No to 0 in Is_Active column to convert the data into float
df["Is_Active"].replace(["Yes","No"],[1,0],inplace=True)
df['Is_Active'] = df['Is_Active'].astype(float)
df.head()
         ID  Gender  Age Region_Code     Occupation Channel_Code  Vintage Credit_Product  Avg_Account_Balance  Is_Active  Is_Lead source
0  NNVBBKZB  Female   73       RG268          Other           X3       43             No              1045696        0.0      0.0  train
1  IDD62UNG  Female   30       RG277       Salaried           X1       32             No               581988        0.0      0.0  train
2  HD3DSEMC  Female   56       RG268  Self_Employed           X3       26             No              1484315        1.0      0.0  train
3  BF3NC7KV    Male   34       RG270       Salaried           X1       19             No               470454        0.0      0.0  train
4  TEASRWXV  Female   30       RG282       Salaried           X1       33             No               886787        0.0      0.0  train
#Now changing all categorical columns into numerical form using label encoding
cat_col=['Gender', 'Region_Code', 'Occupation', 'Channel_Code', 'Credit_Product']
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
for col in cat_col:
    df[col]= le.fit_transform(df[col])
df_2= df
df_2.head()
         ID  Gender  Age  Region_Code  Occupation  Channel_Code  Vintage  Credit_Product  Avg_Account_Balance  Is_Active  Is_Lead source
0  NNVBBKZB       0   73           18           1             2       43               1              1045696        0.0      0.0  train
1  IDD62UNG       0   30           27           2             0       32               1               581988        0.0      0.0  train
2  HD3DSEMC       0   56           18           3             2       26               1              1484315        1.0      0.0  train
3  BF3NC7KV       1   34           20           2             0       19               1               470454        0.0      0.0  train
4  TEASRWXV       0   30           32           2             0       33               1               886787        0.0      0.0  train
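One caveat worth flagging: the loop above re-fits a single LabelEncoder for each column, so the fitted mapping for an earlier column is overwritten as the loop advances. If the mappings ever need to be reapplied to future data, a sketch that keeps one fitted encoder per column (a hypothetical variant, not the original code):

# Keep a dedicated fitted encoder per categorical column
encoders = {}
for col in cat_col:
    enc = LabelEncoder()
    df[col] = enc.fit_transform(df[col])
    encoders[col] = enc   # reusable later, e.g. encoders['Gender'].transform(new_values)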
#Separating the train and test
df_train=df_2.loc[df_2["source"]=="train"]
df_test=df_2.loc[df_2["source"]=="test"]
df_1 = df_train
#we can drop these columns as they are irrelevant and have no effect on our data
df_1.drop(columns=['ID',"source"],inplace=True)
df_1.head()
   Gender  Age  Region_Code  Occupation  Channel_Code  Vintage  Credit_Product  Avg_Account_Balance  Is_Active  Is_Lead
0       0   73           18           1             2       43               1              1045696        0.0      0.0
1       0   30           27           2             0       32               1               581988        0.0      0.0
2       0   56           18           3             2       26               1              1484315        1.0      0.0
3       1   34           20           2             0       19               1               470454        0.0      0.0
4       0   30           32           2             0       33               1               886787        0.0      0.0

2. Data Visualization

Data visualization helps us better understand the distribution and characteristics of the data. Below are some common visualizations:

import warnings
warnings.filterwarnings("ignore")
plt.rcParams['figure.figsize']  = (10,6)
plt.rcParams['font.size']  = 16
sns.set_style("whitegrid")

sns.distplot(df['Age']);

[Figure: distribution plot of Age]

sns.distplot(df['Avg_Account_Balance'])
plt.show()

[Figure: distribution plot of Avg_Account_Balance]

#Countplot for Gender feature 
# plt.figure(figsize=(8,4))
sns.countplot(df['Gender'],palette='Accent')
plt.show()

[Figure: countplot of the Gender feature]

#Countplot for Target variable i.e 'Is_Lead'
target = 'Is_Lead'
# plt.figure(figsize=(8,4))
sns.countplot(df[target],palette='hls')
print(df[target].value_counts())
0.0    187437
1.0     58288
Name: Is_Lead, dtype: int64
plt.rcParams['figure.figsize']  = (12,6)
#Checking occupation with customers
# plt.figure(figsize=(8,4))
sns.countplot(x='Occupation',hue='Is_Lead',data=df,palette= 'magma')
plt.show()

[Figure: countplot of Occupation split by Is_Lead]

#Plot showing activity of customers in the last 3 months with respect to their occupation
# plt.figure(figsize=(8,4))
sns.catplot(y='Age',x='Occupation',hue='Is_Active',data=df,kind='bar',palette='Oranges')
plt.show()

[Figure: bar catplot of Age by Occupation, split by Is_Active]

3. Model Training

We will train three models: XGBoost, LightGBM, and Random Forest. Before fitting them, we balance the classes and benchmark a few baseline algorithms. The training code follows:

# To balance the dataset, we will apply the undersampling method
from sklearn.utils import resample
# separate the minority and majority classes
df_majority = df_1[df_1['Is_Lead']==0]
df_minority = df_1[df_1['Is_Lead']==1]
print(" The majority class values are", len(df_majority))
print(" The minority class values are", len(df_minority))
print(" The ratio of both classes are", len(df_majority)/len(df_minority))
 The majority class values are 187437
 The minority class values are 58288
 The ratio of both classes are 3.215704776283283
# undersample majority class
df_majority_undersampled = resample(df_majority, replace=True, n_samples=len(df_minority), random_state=0)
# combine minority class with undersampled majority class
df_undersampled = pd.concat([df_minority, df_majority_undersampled])
df_undersampled['Is_Lead'].value_counts()
df_1=df_undersampled
# display new class value counts
print(" The undersampled class values count is:", len(df_undersampled))
print(" The ratio of both classes are", len(df_undersampled[df_undersampled["Is_Lead"]==0])/len(df_undersampled[df_undersampled["Is_Lead"]==1]))
 The undersampled class values count is: 116576
 The ratio of both classes are 1.0
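A small aside: resample with replace=True draws majority rows with replacement, so the same row can appear more than once; strict undersampling would draw without replacement. A minimal sketch of that variant, using the same resample API:

# Strict undersampling: draw majority rows without replacement
df_majority_strict = resample(df_majority,
                              replace=False,              # no duplicated rows in the sample
                              n_samples=len(df_minority), # match the minority class size
                              random_state=0)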
# dropping target variable 
#assign the value of y for training and testing phase
xc = df_1.drop(columns=['Is_Lead'])
yc = df_1[["Is_Lead"]]
df_1.head()
    Gender  Age  Region_Code  Occupation  Channel_Code  Vintage  Credit_Product  Avg_Account_Balance  Is_Active  Is_Lead
6        1   62           32           1             2       20               0              1056750        1.0      1.0
15       1   33           18           3             1       69               0               517063        1.0      1.0
16       0   46           18           1             2       97               2              2282502        0.0      1.0
17       0   59           33           1             2       15               2              2384692        0.0      1.0
20       1   44           19           3             1       19               2              1001650        0.0      1.0
#Importing necessary libraries
from sklearn import metrics
from scipy.stats import zscore
from sklearn.preprocessing import LabelEncoder,StandardScaler
from sklearn.model_selection import train_test_split,GridSearchCV
from sklearn.decomposition import PCA
from sklearn.metrics import precision_score, recall_score, confusion_matrix, f1_score, roc_auc_score, roc_curve
from sklearn.metrics import accuracy_score,classification_report,confusion_matrix,roc_auc_score,roc_curve
from sklearn.metrics import auc
from sklearn.metrics import plot_roc_curve
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier,GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
#Import warnings
import warnings
warnings.filterwarnings('ignore')
#Standardizing x using StandardScaler (zero mean, unit variance)
sc = StandardScaler()
df_xc = pd.DataFrame(sc.fit_transform(xc),columns=xc.columns)
df_xc.head()
     Gender       Age  Region_Code  Occupation  Channel_Code   Vintage  Credit_Product  Avg_Account_Balance  Is_Active
0  0.871922  1.102987     1.080645   -1.254310      1.098078 -0.961192       -1.495172            -0.118958   1.192880
1  0.871922 -0.895316    -0.208525    1.008933     -0.052828  0.484610       -1.495172            -0.748273   1.192880
2 -1.146891  0.000475    -0.208525   -1.254310      1.098078  1.310784        1.220096             1.310361  -0.838307
3 -1.146891  0.896266     1.172729   -1.254310      1.098078 -1.108723        1.220096             1.429522  -0.838307
4  0.871922 -0.137339    -0.116441    1.008933     -0.052828 -0.990699        1.220096            -0.183209  -0.838307
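A caveat not handled in the original code: fitting the scaler on all of xc before splitting lets hold-out statistics leak into training. A minimal leak-free sketch would fit the scaler on the training split only:

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

xc_tr, xc_te, yc_tr, yc_te = train_test_split(xc, yc, test_size=0.2, random_state=42, stratify=yc)
sc_safe = StandardScaler()
xc_tr_scaled = sc_safe.fit_transform(xc_tr)   # fit statistics on the training split only
xc_te_scaled = sc_safe.transform(xc_te)       # apply the same statistics to the hold-out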
#defining a function to find fit of the model
def max_accuracy_scr(names,model_c,df_xc,yc):
    accuracy_scr_max = 0
    roc_scr_max=0
    train_xc,test_xc,train_yc,test_yc = train_test_split(df_xc,yc,random_state = 42,test_size = 0.2,stratify = yc)
    model_c.fit(train_xc,train_yc)
    pred = model_c.predict_proba(test_xc)[:, 1]
    roc_score = roc_auc_score(test_yc, pred)
    accuracy_scr = accuracy_score(test_yc,model_c.predict(test_xc))
    if roc_score> roc_scr_max:
        roc_scr_max=roc_score
        final_model = model_c
        mean_acc = cross_val_score(final_model,df_xc,yc,cv=5,scoring="accuracy").mean()
        std_dev = cross_val_score(final_model,df_xc,yc,cv=5,scoring="accuracy").std()
        cross_val = cross_val_score(final_model,df_xc,yc,cv=5,scoring="accuracy")
    print("*"*50)
    print("Results for model : ",names,'\n',
          "max roc score correspond to random state " ,roc_scr_max ,'\n',
          "Mean accuracy score is : ",mean_acc,'\n',
          "Std deviation score is : ",std_dev,'\n',
          "Cross validation scores are :  " ,cross_val)
    print(f"roc_auc_score: {roc_score}")
    print("*"*50)
#Now by using multiple Algorithms we are calculating the best Algo which performs best for our data set 
accuracy_scr_max = []
models=[]
#accuracy=[]
std_dev=[]
roc_auc=[]
mean_acc=[]
cross_val=[]
models.append(('Logistic Regression', LogisticRegression()))
models.append(('Random Forest', RandomForestClassifier()))
models.append(('Decision Tree Classifier',DecisionTreeClassifier()))
models.append(("GausianNB",GaussianNB()))for names,model_c in models:max_accuracy_scr(names,model_c,df_xc,yc)
**************************************************
Results for model :  Logistic Regression 
 max roc score correspond to random state  0.727315712597147 
 Mean accuracy score is :  0.6696918411779096 
 Std deviation score is :  0.0030322593046897828 
 Cross validation scores are :   [0.67361469 0.66566588 0.66703839 0.67239974 0.66974051]
roc_auc_score: 0.727315712597147
**************************************************
**************************************************
Results for model :  Random Forest 
 max roc score correspond to random state  0.8792762631904103 
 Mean accuracy score is :  0.8117279862602139 
 Std deviation score is :  0.002031698139189051 
 Cross validation scores are :   [0.81043061 0.81162342 0.81158053 0.81115162 0.81616985]
roc_auc_score: 0.8792762631904103
**************************************************
**************************************************
Results for model :  Decision Tree Classifier 
 max roc score correspond to random state  0.7397495282209642 
 Mean accuracy score is :  0.7426399792028343 
 Std deviation score is :  0.0025271129138200485 
 Cross validation scores are :   [0.74288043 0.74162556 0.74149689 0.73870899 0.74462792]
roc_auc_score: 0.7397495282209642
**************************************************
**************************************************
Results for model :  GausianNB 
 max roc score correspond to random state  0.7956111563031266 
 Mean accuracy score is :  0.7158677336619202 
 Std deviation score is :  0.0015884106712636206 
 Cross validation scores are :   [0.71894836 0.71550504 0.71546215 0.71443277 0.71499035]
roc_auc_score: 0.7956111563031266
**************************************************

First Attempt: Random Forest Classifier

# Estimating best n_estimator using grid search for Randomforest Classifier
parameters={"n_estimators":[1,10,100]}
rf_clf=RandomForestClassifier()
clf = GridSearchCV(rf_clf, parameters, cv=5,scoring="roc_auc")
clf.fit(df_xc,yc)
print("Best parameter : ",clf.best_params_,"\nBest Estimator : ", clf.best_estimator_,"\nBest Score : ", clf.best_score_)
Best parameter :  {'n_estimators': 100} 
Best Estimator :  RandomForestClassifier() 
Best Score :  0.8810508979668068
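For reference, the grid above only tunes n_estimators. A slightly wider grid (the values below are hypothetical, not from the original run) could also probe tree depth and leaf size, at the cost of many more fits:

parameters = {"n_estimators": [100, 200, 400],
              "max_depth": [None, 10, 20],      # cap tree depth to curb overfitting
              "min_samples_leaf": [1, 5, 10]}   # larger leaves give smoother estimates
clf = GridSearchCV(RandomForestClassifier(random_state=42), parameters,
                   cv=5, scoring="roc_auc", n_jobs=-1)
clf.fit(df_xc, yc)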
#Again running RFC with n_estimator = 100
rf_clf=RandomForestClassifier(n_estimators=100,random_state=42)
max_accuracy_scr("RandomForest Classifier",rf_clf,df_xc,yc)
**************************************************
Results for model :  RandomForest Classifier 
 max roc score correspond to random state  0.879415808805665 
 Mean accuracy score is :  0.8115392510996895 
 Std deviation score is :  0.0008997445291505284 
 Cross validation scores are :   [0.81180305 0.81136607 0.81106584 0.81037958 0.81308171]
roc_auc_score: 0.879415808805665
**************************************************
xc_train,xc_test,yc_train,yc_test=train_test_split(df_xc, yc,random_state = 80,test_size=0.20,stratify=yc)
rf_clf.fit(xc_train,yc_train)
yc_pred=rf_clf.predict(xc_test)
plt.rcParams['figure.figsize']  = (12,8)
#  Random Forest Classifier Results
pred_pb=rf_clf.predict_proba(xc_test)[:,1]
Fpr,Tpr,thresholds = roc_curve(yc_test,pred_pb,pos_label=True)
auc = roc_auc_score(yc_test,pred_pb)
print(" ROC_AUC score is ",auc)
print("accuracy score is : ",accuracy_score(yc_test,yc_pred))
print("Precision is : " ,precision_score(yc_test, yc_pred))
print("Recall is: " ,recall_score(yc_test, yc_pred))
print("F1 Score is : " ,f1_score(yc_test, yc_pred))
print("classification report \n",classification_report(yc_test,yc_pred))
#Plotting confusion matrix
cnf = confusion_matrix(yc_test,yc_pred)
sns.heatmap(cnf, annot=True, cmap = "magma")
 ROC_AUC score is  0.8804566893762799
accuracy score is :  0.8127466117687425
Precision is :  0.8397949673811743
Recall is:  0.7729456167438669
F1 Score is :  0.8049848132928354
classification report 
               precision    recall  f1-score   support

         0.0       0.79      0.85      0.82     11658
         1.0       0.84      0.77      0.80     11658

    accuracy                           0.81     23316
   macro avg       0.81      0.81      0.81     23316
weighted avg       0.81      0.81      0.81     23316
<AxesSubplot:>

[Figure: confusion matrix heatmap for the Random Forest classifier]

plt.rcParams['figure.figsize']  = (12,6)
#plotting the graph for area under curve for representing accuracy of data
plt.plot([0,1],[1,0],'g--')
plt.plot(Fpr,Tpr)
plt.xlabel('False_Positive_Rate')
plt.ylabel('True_Positive_Rate')
plt.title("Random Forest Classifier")
plt.show()

Second Attempt: XGBoost Classifier

#Trying XGBoost
import xgboost as xg
from xgboost import XGBClassifier
from sklearn.utils import class_weight

#Balanced class weights for reference (the undersampled training data is already ~1:1)
class_weight.compute_class_weight('balanced', np.unique(yc_train), yc_train["Is_Lead"])

#Note: XGBClassifier has no class_weight parameter, so XGBoost warns that it is ignored (see below)
clf2 = xg.XGBClassifier(class_weight='balanced').fit(xc_train, yc_train)
xg_pred = clf2.predict(xc_test)
[23:35:16] WARNING: /private/var/folders/fc/8d9mxh2s4ssd8k64mkmlsrj00000gn/T/pip-req-build-y40nwdrb/build/temp.macosx-10.9-x86_64-3.8/xgboost/src/learner.cc:576: 
Parameters: { "class_weight" } might not be used.This may not be accurate due to some parameters are only used in language bindings butpassed down to XGBoost core.  Or some parameters are not used but slip through thisverification. Please open an issue if you find above cases.[23:35:16] WARNING: /private/var/folders/fc/8d9mxh2s4ssd8k64mkmlsrj00000gn/T/pip-req-build-y40nwdrb/build/temp.macosx-10.9-x86_64-3.8/xgboost/src/learner.cc:1100: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
plt.rcParams['figure.figsize']  = (12,8)
#XG Boost Results
xg_pred_2=clf2.predict_proba(xc_test)[:,1]
Fpr,Tpr,thresholds = roc_curve(yc_test,xg_pred_2,pos_label=True)
auc = roc_auc_score(yc_test,xg_pred_2)
print(" ROC_AUC score is ",auc)
print("accuracy score is : ",accuracy_score(yc_test,xg_pred))
print("Precision is : " ,precision_score(yc_test, xg_pred))
print("Recall is: " ,recall_score(yc_test, xg_pred))
print("F1 Score is : " ,f1_score(yc_test, xg_pred))
print("classification report \n",classification_report(yc_test,xg_pred))cnf = confusion_matrix(yc_test,xg_pred)
sns.heatmap(cnf, annot=True, cmap = "magma")
 ROC_AUC score is  0.8706238059470456
accuracy score is :  0.8033968090581575
Precision is :  0.8246741325500275
Recall is:  0.7706296105678504
F1 Score is :  0.7967364313586378
classification report 
               precision    recall  f1-score   support

         0.0       0.78      0.84      0.81     11658
         1.0       0.82      0.77      0.80     11658

    accuracy                           0.80     23316
   macro avg       0.80      0.80      0.80     23316
weighted avg       0.80      0.80      0.80     23316
<AxesSubplot:>

[Figure: confusion matrix heatmap for the XGBoost classifier]

plt.rcParams['figure.figsize']  = (12,6)
#plotting the graph for area under curve for representing accuracy of data
plt.plot([0,1],[1,0],'g--')
plt.plot(Fpr,Tpr)
plt.xlabel('False_Positive_Rate')
plt.ylabel('True_Positive_Rate')
plt.title("XG_Boost Classifier")
plt.show()

[Figure: ROC curve for the XGBoost classifier]

Third Attempt: LGBM Model with Stratified Folds

#Trying stratification modeling
from sklearn.model_selection import KFold, StratifiedKFold

def cross_val(xc, yc, model, params, folds=10):
    skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=42)
    for fold, (train_idx, test_idx) in enumerate(skf.split(xc, yc)):
        print(f"Fold: {fold}")
        xc_train, yc_train = xc.iloc[train_idx], yc.iloc[train_idx]
        xc_test, yc_test = xc.iloc[test_idx], yc.iloc[test_idx]
        model_c= model(**params)
        model_c.fit(xc_train, yc_train,
                    eval_set=[(xc_test, yc_test)],
                    early_stopping_rounds=100, verbose=300)
        pred_y = model_c.predict_proba(xc_test)[:, 1]
        roc_score = roc_auc_score(yc_test, pred_y)
        print(f"roc_auc_score: {roc_score}")
        print("-"*50)
    return model_c
#Applying LGBM Model with 10 stratified cross-folds
from lightgbm import LGBMClassifier

lgb_params= {'learning_rate': 0.045, 'n_estimators': 10000,
             'max_bin': 84, 'num_leaves': 10, 'max_depth': 20,
             'reg_alpha': 8.457, 'reg_lambda': 6.853, 'subsample': 0.749}
lgb_model = cross_val(xc, yc, LGBMClassifier, lgb_params)
Fold: 0
Training until validation scores don't improve for 100 rounds
[300]	valid_0's binary_logloss: 0.433821
[600]	valid_0's binary_logloss: 0.433498
Early stopping, best iteration is:
[599]	valid_0's binary_logloss: 0.433487
roc_auc_score: 0.8748638095718249
--------------------------------------------------
Fold: 1
Training until validation scores don't improve for 100 rounds
[300]	valid_0's binary_logloss: 0.434881
[600]	valid_0's binary_logloss: 0.43445
Early stopping, best iteration is:
[569]	valid_0's binary_logloss: 0.43442
roc_auc_score: 0.8755631159104413
--------------------------------------------------
Fold: 2
Training until validation scores don't improve for 100 rounds
[300]	valid_0's binary_logloss: 0.431872
[600]	valid_0's binary_logloss: 0.43125
[900]	valid_0's binary_logloss: 0.430984
Early stopping, best iteration is:
[1013]	valid_0's binary_logloss: 0.430841
roc_auc_score: 0.877077541404848
--------------------------------------------------
Fold: 3
Training until validation scores don't improve for 100 rounds
[300]	valid_0's binary_logloss: 0.442048
[600]	valid_0's binary_logloss: 0.44142
[900]	valid_0's binary_logloss: 0.441142
Early stopping, best iteration is:
[895]	valid_0's binary_logloss: 0.44114
roc_auc_score: 0.8721270953106521
--------------------------------------------------
Fold: 4
Training until validation scores don't improve for 100 rounds
[300]	valid_0's binary_logloss: 0.439466
[600]	valid_0's binary_logloss: 0.438899
Early stopping, best iteration is:
[782]	valid_0's binary_logloss: 0.438824
roc_auc_score: 0.8709229804739002
--------------------------------------------------
Fold: 5
Training until validation scores don't improve for 100 rounds
[300]	valid_0's binary_logloss: 0.427545
Early stopping, best iteration is:
[445]	valid_0's binary_logloss: 0.42739
roc_auc_score: 0.8792290845510382
--------------------------------------------------
Fold: 6
Training until validation scores don't improve for 100 rounds
[300]	valid_0's binary_logloss: 0.440554
[600]	valid_0's binary_logloss: 0.439762
[900]	valid_0's binary_logloss: 0.439505
[1200]	valid_0's binary_logloss: 0.439264
Early stopping, best iteration is:
[1247]	valid_0's binary_logloss: 0.439142
roc_auc_score: 0.872610593872283
--------------------------------------------------
Fold: 7
Training until validation scores don't improve for 100 rounds
[300]	valid_0's binary_logloss: 0.423764
Early stopping, best iteration is:
[414]	valid_0's binary_logloss: 0.423534
roc_auc_score: 0.8806521642373888
--------------------------------------------------
Fold: 8
Training until validation scores don't improve for 100 rounds
[300]	valid_0's binary_logloss: 0.440673
Early stopping, best iteration is:
[409]	valid_0's binary_logloss: 0.440262
roc_auc_score: 0.8708570312002339
--------------------------------------------------
Fold: 9
Training until validation scores don't improve for 100 rounds
[300]	valid_0's binary_logloss: 0.441536
[600]	valid_0's binary_logloss: 0.441034
Early stopping, best iteration is:
[661]	valid_0's binary_logloss: 0.440952
roc_auc_score: 0.8713195377336685
--------------------------------------------------
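One thing to be aware of: cross_val returns only the model fitted on the final fold. A common variant (not what the original code does) keeps every fold's model and averages their predicted probabilities at inference time; a sketch mirroring the fit signature used above:

def cross_val_ensemble(xc, yc, model, params, folds=10):
    # Same stratified split as cross_val, but collect every fold's fitted model
    skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=42)
    fold_models = []
    for train_idx, test_idx in skf.split(xc, yc):
        model_c = model(**params)
        model_c.fit(xc.iloc[train_idx], yc.iloc[train_idx],
                    eval_set=[(xc.iloc[test_idx], yc.iloc[test_idx])],
                    early_stopping_rounds=100, verbose=False)
        fold_models.append(model_c)
    return fold_models

def predict_ensemble(fold_models, X):
    # Average the fold models' positive-class probabilities
    return np.mean([m.predict_proba(X)[:, 1] for m in fold_models], axis=0)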
#LGBM results
#Refit the final LGBM model on the training split and evaluate it on the hold-out
lgb_model.fit(xc_train,yc_train)
lgb_pred=lgb_model.predict(xc_test)
lgb_pred_2=lgb_model.predict_proba(xc_test)[:,1]
Fpr,Tpr,thresholds = roc_curve(yc_test,lgb_pred_2,pos_label=True)
auc = roc_auc_score(yc_test,lgb_pred_2)
print(" ROC_AUC score is ",auc)
print("accuracy score is : ",accuracy_score(yc_test,lgb_pred))
print("Precision is : " ,precision_score(yc_test, lgb_pred))
print("Recall is: " ,recall_score(yc_test, lgb_pred))
print("F1 Score is : " ,f1_score(yc_test, lgb_pred))
print("classification report \n",classification_report(yc_test,lgb_pred))
cnf = confusion_matrix(yc_test,lgb_pred)
sns.heatmap(cnf, annot=True, cmap = "magma")
 ROC_AUC score is  0.8706238059470456
accuracy score is :  0.8030965860353405
Precision is :  0.8258784469242829
Recall is:  0.7681420483787956
F1 Score is :  0.7959646237944981
classification report 
               precision    recall  f1-score   support

         0.0       0.78      0.84      0.81     11658
         1.0       0.83      0.77      0.80     11658

    accuracy                           0.80     23316
   macro avg       0.80      0.80      0.80     23316
weighted avg       0.80      0.80      0.80     23316
<AxesSubplot:>

[Figure: confusion matrix heatmap for the LGBM classifier]

plt.rcParams['figure.figsize']  = (12,6)
#plotting the graph for area under curve for representing accuracy of data
plt.plot([0,1],[1,0],'g--')
plt.plot(Fpr,Tpr)
plt.xlabel('False_Positive_Rate')
plt.ylabel('True_Positive_Rate')
plt.title("LGB Classifier model")
plt.show()

[Figure: ROC curve for the LGBM classifier]

4. Model Prediction

Once the models are trained, we use the test data to generate predictions:

#we can drop this column as it is irrelevant and has no effect on our data
df_3 = df_test
df_3.drop(columns=["source"],inplace=True)
df_3.head()
              ID  Gender  Age  Region_Code  Occupation  Channel_Code  Vintage  Credit_Product  Avg_Account_Balance  Is_Active  Is_Lead
245725  VBENBARO       1   29            4           1             0       25               2               742366        0.0      NaN
245726  CCMEWNKY       1   43           18           1             1       49               0               925537        0.0      NaN
245727  VK3KGA9M       1   31           20           2             0       14               1               215949        0.0      NaN
245728  TT8RPZVC       1   29           22           1             0       33               1               868070        0.0      NaN
245729  SHQZEYTZ       0   29           20           1             0       19               1               657087        0.0      NaN
# dropping target variable 
#assign the value of y for training and testing phase
xc_pred = df_3.drop(columns=['Is_Lead',"ID"])
#Standardizing x using StandardScaler (zero mean, unit variance)
sc = StandardScaler()
df_xc_pred = pd.DataFrame(sc.fit_transform(xc_pred),columns=xc_pred.columns)
lead_pred_xg=clf2.predict_proba(df_xc_pred)[:,1]
lead_pred_lgb=lgb_model.predict_proba(df_xc_pred)[:,1]
lead_pred_rf=rf_clf.predict_proba(df_xc_pred)[:,1]
print(lead_pred_xg, lead_pred_lgb, lead_pred_rf)
[0.09673516 0.9428428  0.12728807 ... 0.31698707 0.1821623  0.17593904] [0.14278614 0.94357392 0.13603912 ... 0.22251432 0.24186564 0.16873483] [0.17 0.97 0.09 ... 0.5  0.09 0.15]
#Dataframe for lead prediction
lead_pred_lgb= pd.DataFrame(lead_pred_lgb,columns=["Is_Lead"])
lead_pred_xg= pd.DataFrame(lead_pred_xg,columns=["Is_Lead"])
lead_pred_rf= pd.DataFrame(lead_pred_rf,columns=["Is_Lead"])
df_test = df_test.reset_index()
df_test.head()
    index        ID  Gender  Age  Region_Code  Occupation  Channel_Code  Vintage  Credit_Product  Avg_Account_Balance  Is_Active  Is_Lead
0  245725  VBENBARO       1   29            4           1             0       25               2               742366        0.0      NaN
1  245726  CCMEWNKY       1   43           18           1             1       49               0               925537        0.0      NaN
2  245727  VK3KGA9M       1   31           20           2             0       14               1               215949        0.0      NaN
3  245728  TT8RPZVC       1   29           22           1             0       33               1               868070        0.0      NaN
4  245729  SHQZEYTZ       0   29           20           1             0       19               1               657087        0.0      NaN
#Saving ID  and prediction to csv file for XG Model
df_pred_xg=pd.concat([df_test["ID"],lead_pred_xg],axis=1,ignore_index=True)
df_pred_xg.columns = ["ID","Is_Lead"]
print(df_pred_xg.head())
df_pred_xg.to_csv("Credit_Card_Lead_Predictions_final_xg.csv",index=False)

#Saving ID and prediction to csv file for LGB Model
df_pred_lgb=pd.concat([df_test["ID"],lead_pred_lgb],axis=1,ignore_index=True)
df_pred_lgb.columns = ["ID","Is_Lead"]
print(df_pred_lgb.head())
df_pred_lgb.to_csv("Credit_Card_Lead_Predictions_final_lgb.csv",index=False)

#Saving ID and prediction to csv file for RF model
df_pred_rf=pd.concat([df_test["ID"],lead_pred_rf],axis=1,ignore_index=True)
df_pred_rf.columns = ["ID","Is_Lead"]
print(df_pred_rf.head())
df_pred_rf.to_csv("Credit_Card_Lead_Predictions_final_rf.csv",index=False)
         ID   Is_Lead
0  VBENBARO  0.096735
1  CCMEWNKY  0.942843
2  VK3KGA9M  0.127288
3  TT8RPZVC  0.052260
4  SHQZEYTZ  0.057762
         ID   Is_Lead
0  VBENBARO  0.142786
1  CCMEWNKY  0.943574
2  VK3KGA9M  0.136039
3  TT8RPZVC  0.084144
4  SHQZEYTZ  0.055887
         ID  Is_Lead
0  VBENBARO     0.17
1  CCMEWNKY     0.97
2  VK3KGA9M     0.09
3  TT8RPZVC     0.12
4  SHQZEYTZ     0.09
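With three probability vectors in hand, a simple optional extension (not part of the original submission files) is to blend them; an unweighted average is a common first baseline before tuning weights on a validation split:

# Simple average blend of the three models' probabilities (illustrative)
blend = (lead_pred_xg["Is_Lead"] + lead_pred_lgb["Is_Lead"] + lead_pred_rf["Is_Lead"]) / 3
df_pred_blend = pd.concat([df_test["ID"], blend.rename("Is_Lead")], axis=1)
df_pred_blend.to_csv("Credit_Card_Lead_Predictions_final_blend.csv", index=False)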

5. Model Saving

So that the trained model can easily be loaded and reused later, we save it as a pickle file:

import joblib
# Save the model as a pickle file
joblib.dump(lgb_model,'lgb_model.pkl')
['lgb_model.pkl']
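To complete the round trip, the saved file can later be restored with joblib.load and applied to identically preprocessed data; a minimal sketch:

import joblib

loaded_model = joblib.load('lgb_model.pkl')                  # restore the trained LGBM model
new_scores = loaded_model.predict_proba(df_xc_pred)[:, 1]    # expects the same feature layout as training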
