Logistic Regression: Model Construction and Training Workflow
There are many example datasets for learning logistic regression. Here we use the dataset-generation function provided by scikit-learn to create our own.
See the official documentation for the full parameter list (a small illustrative call follows the overview below).
Scikit-learn is an open-source machine learning library developed in Python, widely used for data mining and data analysis.
- Features: easy to use and efficient, with a rich set of tools covering classification, regression, clustering, and other machine learning algorithms.
- Functionality: provides data preprocessing, model selection, evaluation, and more, making it easy to build a complete machine learning workflow.
- Advantages: detailed documentation and examples plus an active community, which lowers development cost and raises efficiency.
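For reference, here is a minimal, illustrative call to make_classification with a few of its commonly used parameters (the specific values are arbitrary choices for demonstration, not recommendations):

from sklearn.datasets import make_classification

# Illustrative values only; see the official docs for the full parameter list
X_demo, y_demo = make_classification(
    n_samples=200,     # number of samples to generate
    n_features=10,     # total number of features
    n_informative=5,   # features that actually carry class signal
    n_redundant=2,     # linear combinations of the informative features
    n_classes=2,       # number of classes
    random_state=42,   # fix the seed so the data is reproducible
)
print(X_demo.shape, y_demo.shape)  # (200, 10) (200,)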
# Generate classification data
from sklearn.datasets import make_classification  # returns randomly generated numpy.ndarray arrays
from sklearn.model_selection import train_test_split  # for splitting the dataset
import numpy as np

demo_X, demo_y = make_classification()  # defaults: 100 samples, 20 features, 2 classes
print(demo_X)  # features
print(demo_y)  # labels
print(demo_X.shape)
print(demo_y.shape)
[[ 0.96837399  0.69991556 -0.80719258 ...  1.07349589  0.60093101
   1.25834368]
 [ 1.54064333 -0.72874526  0.05101656 ...  1.69469224  0.68078434
  -0.22108232]
 [ 1.3130273   0.13845124 -0.17878829 ... -2.51988675  0.73565307
  -0.61197128]
 ...
 [-0.96507974  0.62850721  0.25545924 ... -1.03533221 -0.0234112
   1.86283345]
 [-1.09606837 -0.92451774 -0.59875319 ... -0.19421878  0.62418285
  -0.26886614]
 [-0.18375534  0.12046227  0.52649374 ...  0.93921941  0.89650711
   1.14815417]]
[0 0 1 1 1 0 1 1 0 1 1 1 1 1 0 0 0 1 1 1 1 1 0 0 1 1 1 1 1 0 0 0 1 0 0 1 0
 0 0 1 0 1 0 1 1 0 0 1 0 1 1 0 0 0 1 0 0 1 1 0 1 0 1 0 0 1 0 1 1 1 0 0 0 0
 0 0 0 1 1 1 1 0 0 1 1 0 1 0 0 1 1 1 0 0 1 0 0 1 1 0]
(100, 20)
(100,)
Inspecting the result, we see that make_classification returns a tuple of two arrays: by default, a feature matrix of 100 samples with 20 features, and the corresponding labels for 2 classes.
demo_X[1]
array([ 1.54064333, -0.72874526,  0.05101656,  2.66218782,  1.94089634,
       -0.10555552,  0.12877297, -0.47275342, -0.23722334, -0.24897953,
        0.29021104, -1.03756101, -0.6875083 , -1.57963226,  1.81221622,
       -0.04901801, -0.91022508,  1.69469224,  0.68078434, -0.22108232])
demo_y[1]
0
Steps
- Data preparation and parameter initialization
- Model computation (forward) function
- Loss function
- Gradient computation
- Model training
The data used to train a model usually also needs to be split into a training set and a test set, to guard against data leakage.
**Data leakage** means the training data contains information it should not have. Such information makes the model look better during evaluation than it would in a real application, overestimating its performance and giving a wrong picture of its generalization ability.
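A concrete example of leakage: fitting a preprocessing step on the whole dataset before splitting lets the test samples' statistics influence training. Below is a minimal sketch of the wrong and right orderings, reusing the demo data from above (StandardScaler is just one illustrative preprocessor; it is not part of the pipeline in this section):

from sklearn.preprocessing import StandardScaler

# Wrong order: the scaler's mean/std are computed from ALL samples,
# so test-set statistics leak into training:
# X_all_scaled = StandardScaler().fit_transform(demo_X)

# Right order: split first, then fit preprocessing on the training portion only
Xtr, Xte, ytr, yte = train_test_split(demo_X, demo_y, test_size=0.3)
scaler = StandardScaler().fit(Xtr)  # statistics come from training data only
Xtr_s, Xte_s = scaler.transform(Xtr), scaler.transform(Xte)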
# 1. Data preparation and parameter initialization
# Generate training data: 150 samples, 10 features, 2 classes
X, y = make_classification(n_samples=150, n_features=10)  # X has shape (150, 10)
# Split the data:
# - a model trained on only a local subset overfits and predicts the test set poorly (weak generalization)
# - a model trained on a representative global sample predicts the test set well (good generalization)
# - a model that performs poorly on new samples is said to generalize poorly
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)  # hold out 30% as the test set
# X_train is used to fit the model and X_test to evaluate it, so overfitting can be detected
print(X_train.shape)
print(X_test.shape)
(105, 10)
(45, 10)
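As a side note, train_test_split also accepts random_state (for reproducible splits) and stratify (to preserve the class ratio in both subsets); a possible variant of the call above:

# Optional variant: reproducible split that keeps the class ratio in both subsets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)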
# Weight parameters
theta = np.random.randn(1, 10)  # one weight per feature, randomly initialized, shape (1, 10)
# Bias parameter
bias = 0
# Hyperparameters
lr = 0.01      # learning rate
epochs = 3000  # number of training iterations
# Suppose X has shape (3, 3):
# [[x1, x2, x3],
#  [x4, x5, x6],
#  [x7, x8, x9]]
# Then X.T also has shape (3, 3):
# [[x1, x4, x7],
#  [x2, x5, x8],
#  [x3, x6, x9]]
#
# Suppose the model parameters theta have shape (1, 3):
# [[w1, w2, w3]]
# theta @ X.T = shape (1, 3) @ shape (3, 3) = shape (1, 3)
# y1 = w1*x1 + w2*x2 + w3*x3
# y2 = w1*x4 + w2*x5 + w3*x6
# y3 = w1*x7 + w2*x8 + w3*x9
# y = [[y1, y2, y3]]
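To make the shape bookkeeping above concrete, here is a quick numeric check (the matrix and weights are made-up values, used only to verify the shapes):

# Quick numeric check of the shape algebra above
X_demo3 = np.arange(1, 10).reshape(3, 3)  # rows play the role of samples
w_demo = np.array([[0.1, 0.2, 0.3]])      # theta, shape (1, 3)
z_demo = np.dot(w_demo, X_demo3.T)        # one dot product per row of X_demo3
print(z_demo.shape)                       # (1, 3)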
# 2. Model computation function (forward pass)
def forward(X, theta, bias):
    # Linear transformation
    z = np.dot(theta, X.T) + bias  # shape (1, 105)
    # Sigmoid squashes the linear output into the (0, 1) range
    y_hat = 1 / (1 + np.exp(-z))   # shape (1, 105)
    return y_hat
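One caveat: for large negative z, np.exp(-z) overflows and NumPy emits a runtime warning. If SciPy is available, a common alternative (not what the code above uses) is scipy.special.expit, a numerically stable sigmoid:

from scipy.special import expit  # numerically stable sigmoid

def forward_stable(X, theta, bias):
    z = np.dot(theta, X.T) + bias
    return expit(z)  # same result as 1 / (1 + np.exp(-z)), without overflow warnings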
# 3. Loss function
def loss(y, y_hat):
    # Binary cross-entropy loss
    e = 1e-8  # small epsilon to avoid taking log(0)
    return -y * np.log(y_hat + e) - (1 - y) * np.log(1 - y_hat + e)
# 4. Gradient computation
def calc_gradient(x, y, y_hat):
    m = x.shape[0]                          # number of samples
    delta_theta = np.dot(y_hat - y, x) / m  # gradient of theta, shape (1, 10)
    delta_bias = np.mean(y_hat - y)         # gradient of bias, a scalar
    return delta_theta, delta_bias
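The compact form above is not accidental: for the sigmoid combined with cross-entropy, the per-sample derivative of the loss with respect to z reduces to (y_hat - y), so the theta gradient averages (y_hat - y) * x over the samples and the bias gradient averages (y_hat - y). As a sketch, the analytic gradient can be sanity-checked against a finite difference (eps and the zero initialization are arbitrary choices for the check):

# Sanity check: analytic gradient vs. a one-sided finite difference
def mean_loss(theta_, bias_):
    return np.mean(loss(y_train, forward(X_train, theta_, bias_)))

eps = 1e-6
t0 = np.zeros((1, 10))
g_analytic, _ = calc_gradient(X_train, y_train, forward(X_train, t0, 0.0))
t1 = t0.copy()
t1[0, 0] += eps  # perturb only the first weight
g_numeric = (mean_loss(t1, 0.0) - mean_loss(t0, 0.0)) / eps
print(g_analytic[0, 0], g_numeric)  # the two numbers should be close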
# 5. Model training
for i in range(epochs):
    # Forward pass
    y_hat = forward(X_train, theta, bias)
    # Compute the loss
    loss_value = loss(y_train, y_hat)
    # Compute the gradients
    delta_theta, delta_bias = calc_gradient(X_train, y_train, y_hat)
    # Update the parameters
    theta = theta - lr * delta_theta
    bias = bias - lr * delta_bias
    if i % 100 == 0:
        print(f"epoch:{i},loss:{np.mean(loss_value)}")
        # Training accuracy: [False, True, True, False, True] --> [0, 1, 1, 0, 1] --> 0.6
        acc = np.mean(np.round(y_hat) == y_train)
        print(f"epoch:{i},loss:{np.mean(loss_value)},acc:{acc}")
epoch:0,loss:0.9826484121456485
epoch:0,loss:0.9826484121456485,acc:0.6
epoch:100,loss:0.28410629245685803
epoch:100,loss:0.28410629245685803,acc:0.8857142857142857
epoch:200,loss:0.24510667568678654
epoch:200,loss:0.24510667568678654,acc:0.8666666666666667
epoch:300,loss:0.23505007869724906
epoch:300,loss:0.23505007869724906,acc:0.8571428571428571
epoch:400,loss:0.23103972248220034
epoch:400,loss:0.23103972248220034,acc:0.8666666666666667
epoch:500,loss:0.22908011250548505
epoch:500,loss:0.22908011250548505,acc:0.8761904761904762
epoch:600,loss:0.2280154352157034
epoch:600,loss:0.2280154352157034,acc:0.8761904761904762
epoch:700,loss:0.22739935077291418
epoch:700,loss:0.22739935077291418,acc:0.8761904761904762
epoch:800,loss:0.2270275272465418
epoch:800,loss:0.2270275272465418,acc:0.8761904761904762
epoch:900,loss:0.22679615751708468
epoch:900,loss:0.22679615751708468,acc:0.8761904761904762
epoch:1000,loss:0.2266487526019183
epoch:1000,loss:0.2266487526019183,acc:0.8761904761904762
epoch:1100,loss:0.22655303465776838
epoch:1100,loss:0.22655303465776838,acc:0.8761904761904762
epoch:1200,loss:0.22648987077080793
epoch:1200,loss:0.22648987077080793,acc:0.8761904761904762
epoch:1300,loss:0.22644759054763966
epoch:1300,loss:0.22644759054763966,acc:0.8761904761904762
epoch:1400,loss:0.2264189111347839
epoch:1400,loss:0.2264189111347839,acc:0.8666666666666667
epoch:1500,loss:0.22639920302759006
epoch:1500,loss:0.22639920302759006,acc:0.8666666666666667
epoch:1600,loss:0.22638547862134517
epoch:1600,loss:0.22638547862134517,acc:0.8666666666666667
epoch:1700,loss:0.22637578566413857
epoch:1700,loss:0.22637578566413857,acc:0.8666666666666667
epoch:1800,loss:0.22636883499686028
epoch:1800,loss:0.22636883499686028,acc:0.8666666666666667
epoch:1900,loss:0.2263637676238141
epoch:1900,loss:0.2263637676238141,acc:0.8666666666666667
epoch:2000,loss:0.2263600065967938
epoch:2000,loss:0.2263600065967938,acc:0.8666666666666667
epoch:2100,loss:0.22635716155827576
epoch:2100,loss:0.22635716155827576,acc:0.8666666666666667
epoch:2200,loss:0.2263549665310887
epoch:2200,loss:0.2263549665310887,acc:0.8666666666666667
epoch:2300,loss:0.2263532389959844
epoch:2300,loss:0.2263532389959844,acc:0.8666666666666667
epoch:2400,loss:0.2263518527619451
epoch:2400,loss:0.2263518527619451,acc:0.8666666666666667
epoch:2500,loss:0.22635071986180172
epoch:2500,loss:0.22635071986180172,acc:0.8666666666666667
epoch:2600,loss:0.22634977840264098
epoch:2600,loss:0.22634977840264098,acc:0.8666666666666667
epoch:2700,loss:0.22634898437242262
epoch:2700,loss:0.22634898437242262,acc:0.8666666666666667
epoch:2800,loss:0.2263483060903649
epoch:2800,loss:0.2263483060903649,acc:0.8666666666666667
epoch:2900,loss:0.2263477204327887
epoch:2900,loss:0.2263477204327887,acc:0.8666666666666667
# Model inference
idx = np.random.randint(len(X_test))  # pick a random index into the test set
x = X_test[idx]  # shape (10,)
y = y_test[idx]  # scalar label
# Model prediction
predict = np.round(forward(x, theta, bias))
print(f"y:{y},predict:{predict}")
y:1,predict:[1.]
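The single-sample check above extends naturally to the entire held-out set; a minimal sketch of overall test accuracy using the trained theta and bias:

# Evaluate accuracy on the whole test set
y_hat_test = forward(X_test, theta, bias)           # shape (1, 45)
test_acc = np.mean(np.round(y_hat_test) == y_test)  # fraction of correct predictions
print(f"test accuracy: {test_acc}")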