This note records a simple framework for grid search implemented with nested for loops.
A DataFrame, df_search, records the result of every hyperparameter combination.
lgb_model.best_score_ returns the model's best score, and
lgb_model.best_iteration_ returns the model's best iteration,
i.e. the best n_estimators.
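Note that best_score_ is not a plain number: in scikit-learn-style LightGBM it is a nested dict keyed by eval-set name and then metric name. A minimal sketch of pulling out the scalar (the keys valid_0 and l1 are LightGBM's defaults for a single eval_set with the mae/l1 objective; the value 0.0471 is a made-up example):

```python
from collections import OrderedDict

# Shape of lgb_model.best_score_ after fit() with one eval_set and
# objective "mae" (example value; keys are LightGBM defaults)
best_score_ = OrderedDict(valid_0=OrderedDict(l1=0.0471))

# Extract the scalar so a plain number lands in df_search
best_score = best_score_["valid_0"]["l1"]
print(best_score)  # 0.0471
```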
import numpy as np
import pandas as pd
import lightgbm as lgb

df = pd.read_csv("this_is_train.csv")
df_search_columns = ['learning_rate', 'num_leaves', 'max_depth',
                     'subsample', 'colsample_bytree',
                     'best_iteration', 'best_score']
df_search = pd.DataFrame(columns=df_search_columns)
# colsample_bytree :0.9, learning_rate : 0.001
lgb_params = {
    "objective": "mae",  # "mae"
    "n_estimators": 6000,
    "num_leaves": 256,  # 256
    "subsample": 0.6,
    "colsample_bytree": 0.8,
    "learning_rate": 0.00571,  # 0.00871
    "max_depth": 11,  # 11
    "n_jobs": 4,
    "device": "gpu",
    "verbosity": -1,
    "importance_type": "gain",
}
# train_feats / train_target and valid_feats / valid_target are assumed
# to have been split out of df beforehand
for learning_rate in [0.001, 0.005, 0.01, 0.015, 0.05]:
    for num_leaves in [300, 256, 200, 150]:
        for max_depth in [15, 13, 11, 9, 7]:
            for subsample in [0.8, 0.6, 0.5]:
                for colsample_bytree in [0.9, 0.8, 0.7]:
                    print(f"learning_rate : {learning_rate}, num_leaves : {num_leaves}, "
                          f"max_depth : {max_depth}, subsample : {subsample}, "
                          f"colsample_bytree : {colsample_bytree}")
                    lgb_params['learning_rate'] = learning_rate
                    lgb_params['num_leaves'] = num_leaves
                    lgb_params['max_depth'] = max_depth
                    lgb_params['subsample'] = subsample
                    lgb_params['colsample_bytree'] = colsample_bytree
                    # Train a LightGBM model for the current combination
                    lgb_model = lgb.LGBMRegressor(**lgb_params)
                    lgb_model.fit(
                        train_feats,
                        train_target,
                        eval_set=[(valid_feats, valid_target)],
                        callbacks=[
                            lgb.callback.early_stopping(stopping_rounds=100),
                            lgb.callback.log_evaluation(period=100),
                        ],
                    )
                    best_iteration = lgb_model.best_iteration_
                    best_score = lgb_model.best_score_
                    cache = pd.DataFrame(
                        [[learning_rate, num_leaves, max_depth, subsample,
                          colsample_bytree, best_iteration, best_score]],
                        columns=df_search_columns)
                    df_search = pd.concat([df_search, cache], ignore_index=True, axis=0)
df_search.to_csv('grid_search.csv',index=False)
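The five nested loops above can equivalently be flattened with itertools.product, which makes adding or removing a hyperparameter a one-line change. A sketch using the same candidate values (grid and all_params are illustrative names, not from the original):

```python
from itertools import product

# Candidate values, same as the nested loops above
grid = {
    "learning_rate": [0.001, 0.005, 0.01, 0.015, 0.05],
    "num_leaves": [300, 256, 200, 150],
    "max_depth": [15, 13, 11, 9, 7],
    "subsample": [0.8, 0.6, 0.5],
    "colsample_bytree": [0.9, 0.8, 0.7],
}

# One dict per combination: 5 * 4 * 5 * 3 * 3 = 900 in total
all_params = [dict(zip(grid, combo)) for combo in product(*grid.values())]

for params in all_params:
    # Merge into the base params before training,
    # e.g. lgb_params.update(params)
    pass
```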
To use this framework, adjust the training-data (df) part, the candidate
values for the grid, and the LightGBM hyperparameters.
Each run's result is recorded with the following code:
cache = pd.DataFrame([[learning_rate, num_leaves, max_depth, subsample,
                       colsample_bytree, best_iteration, best_score]],
                     columns=df_search_columns)
df_search = pd.concat([df_search, cache], ignore_index=True, axis=0)
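Once grid_search.csv is written, it can be loaded and sorted to pick the winning combination. A minimal sketch with two synthetic rows standing in for the CSV (the values are made up; this assumes best_score holds a scalar MAE, where lower is better):

```python
import pandas as pd

# Synthetic stand-in for grid_search.csv produced by the loop above
df_search = pd.DataFrame({
    "learning_rate": [0.001, 0.01], "num_leaves": [256, 150],
    "max_depth": [11, 7], "subsample": [0.6, 0.8],
    "colsample_bytree": [0.8, 0.9], "best_iteration": [4200, 1800],
    "best_score": [0.052, 0.047],
})

# MAE: lower is better, so sort ascending and take the first row
best_row = df_search.sort_values("best_score").iloc[0]
print(best_row["learning_rate"], best_row["best_score"])
```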