Hyperparameter Tuning: General Deep Learning (Part 1)

Table of Contents

  • Deep learning hyperparameter tuning
    • Grid search
      • Example 1: grid-searching regression model hyperparameters
      • Example 2: grid search with Keras
    • Random search
    • Bayesian search
  • Hyperparameter tuning frameworks
    • Optuna: a deep learning hyperparameter optimization framework
    • NVIDIA NeMo: hyperparameter optimization for large models

Tuning theory: Black-box optimization: a survey of recent advances in hyperparameter optimization algorithms

  • All content is reposted; contact us for removal in case of infringement.

Deep learning hyperparameter tuning

  • PyTorch grid search for optimal LSTM parameters / Python grid search for parameter optimization
  • Official Keras guide to deep learning hyperparameter optimization
  • Keras deep learning hyperparameter optimization guide (CSDN blog version)
  • Hyperparameter search not efficient enough? Take a look at these strategies
  • Hyperparameter optimization for deep neural networks with Bayesian optimization

Grid search

Example 1: grid-searching regression model hyperparameters

# grid search cnn for airline passengers
from math import sqrt
from numpy import array, mean
from pandas import DataFrame, concat, read_csv
from sklearn.metrics import mean_squared_error
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv1D, MaxPooling1D

# split a univariate dataset into train/test sets
def train_test_split(data, n_test):
    return data[:-n_test], data[-n_test:]

# transform list into supervised learning format
def series_to_supervised(data, n_in=1, n_out=1):
    df = DataFrame(data)
    cols = list()
    # input sequence (t-n, ... t-1)
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
    # forecast sequence (t, t+1, ... t+n)
    for i in range(0, n_out):
        cols.append(df.shift(-i))
    # put it all together
    agg = concat(cols, axis=1)
    # drop rows with NaN values
    agg.dropna(inplace=True)
    return agg.values

# root mean squared error or rmse
def measure_rmse(actual, predicted):
    return sqrt(mean_squared_error(actual, predicted))

# difference dataset
def difference(data, order):
    return [data[i] - data[i - order] for i in range(order, len(data))]

# fit a model
def model_fit(train, config):
    # unpack config
    n_input, n_filters, n_kernel, n_epochs, n_batch, n_diff = config
    # prepare data
    if n_diff > 0:
        train = difference(train, n_diff)
    # transform series into supervised format
    data = series_to_supervised(train, n_in=n_input)
    # separate inputs and outputs
    train_x, train_y = data[:, :-1], data[:, -1]
    # reshape input data into [samples, timesteps, features]
    n_features = 1
    train_x = train_x.reshape((train_x.shape[0], train_x.shape[1], n_features))
    # define model
    model = Sequential()
    model.add(Conv1D(filters=n_filters, kernel_size=n_kernel, activation='relu', input_shape=(n_input, n_features)))
    model.add(MaxPooling1D(pool_size=2))
    model.add(Flatten())
    model.add(Dense(1))
    model.compile(loss='mse', optimizer='adam')
    # fit
    model.fit(train_x, train_y, epochs=n_epochs, batch_size=n_batch, verbose=0)
    return model

# forecast with the fit model
def model_predict(model, history, config):
    # unpack config
    n_input, _, _, _, _, n_diff = config
    # prepare data
    correction = 0.0
    if n_diff > 0:
        correction = history[-n_diff]
        history = difference(history, n_diff)
    x_input = array(history[-n_input:]).reshape((1, n_input, 1))
    # forecast
    yhat = model.predict(x_input, verbose=0)
    return correction + yhat[0]

# walk-forward validation for univariate data
def walk_forward_validation(data, n_test, cfg):
    predictions = list()
    # split dataset
    train, test = train_test_split(data, n_test)
    # fit model
    model = model_fit(train, cfg)
    # seed history with training dataset
    history = [x for x in train]
    # step over each time-step in the test set
    for i in range(len(test)):
        # fit model and make forecast for history
        yhat = model_predict(model, history, cfg)
        # store forecast in list of predictions
        predictions.append(yhat)
        # add actual observation to history for the next loop
        history.append(test[i])
    # estimate prediction error
    error = measure_rmse(test, predictions)
    print(' > %.3f' % error)
    return error

# score a model, return None on failure
def repeat_evaluate(data, config, n_test, n_repeats=10):
    # convert config to a key
    key = str(config)
    # fit and evaluate the model n times
    scores = [walk_forward_validation(data, n_test, config) for _ in range(n_repeats)]
    # summarize score
    result = mean(scores)
    print('> Model[%s] %.3f' % (key, result))
    return (key, result)

# grid search configs
def grid_search(data, cfg_list, n_test):
    # evaluate configs
    scores = [repeat_evaluate(data, cfg, n_test) for cfg in cfg_list]
    # sort configs by error, asc
    scores.sort(key=lambda tup: tup[1])
    return scores

# create a list of configs to try
def model_configs():
    # define scope of configs
    n_input = [12]
    n_filters = [64]
    n_kernels = [3, 5]
    n_epochs = [100]
    n_batch = [1, 150]
    n_diff = [0, 12]
    # create configs
    configs = list()
    for a in n_input:
        for b in n_filters:
            for c in n_kernels:
                for d in n_epochs:
                    for e in n_batch:
                        for f in n_diff:
                            cfg = [a, b, c, d, e, f]
                            configs.append(cfg)
    print('Total configs: %d' % len(configs))
    return configs

# define dataset
# download: https://raw.githubusercontent.com/jbrownlee/Datasets/master/airline-passengers.csv
series = read_csv('airline-passengers.csv', header=0, index_col=0)
data = series.values
# data split
n_test = 12
# model configs
cfg_list = model_configs()
# grid search
scores = grid_search(data, cfg_list, n_test)
print('done')
# list top 3 configs
for cfg, error in scores[:3]:
    print(cfg, error)
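The six nested loops in model_configs() are a hand-rolled Cartesian product. A minimal sketch of the same config generation with itertools.product, over the same value lists as above:

```python
from itertools import product

# same search space as model_configs() above
n_input = [12]
n_filters = [64]
n_kernels = [3, 5]
n_epochs = [100]
n_batch = [1, 150]
n_diff = [0, 12]

# product() yields every combination, replacing the nested loops
configs = [list(cfg) for cfg in product(n_input, n_filters, n_kernels,
                                        n_epochs, n_batch, n_diff)]
print('Total configs: %d' % len(configs))  # 1*1*2*1*2*2 = 8
```

The grid size is the product of the list lengths, which is why grid search blows up quickly as dimensions are added.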

Example 2: grid search with Keras

"""
调整batch size和epochs
"""# Use scikit-learn to grid search the batch size and epochs
import numpy as np
import tensorflow as tf
from sklearn.model_selection import GridSearchCV
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from scikeras.wrappers import KerasClassifier
# Function to create model, required for KerasClassifier
def create_model():# create modelmodel = Sequential()model.add(Dense(12, input_shape=(8,), activation='relu'))model.add(Dense(1, activation='sigmoid'))# Compile modelmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])return model
# fix random seed for reproducibility
seed = 7
tf.random.set_seed(seed)
# load dataset
dataset = np.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = KerasClassifier(model=create_model, verbose=0)
# define the grid search parameters
batch_size = [10, 20, 40, 60, 80, 100]
epochs = [10, 50, 100]
param_grid = dict(batch_size=batch_size, epochs=epochs)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3)
grid_result = grid.fit(X, Y)
# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):print("%f (%f) with: %r" % (mean, stdev, param))
"""
更多参考:https://machinelearningmastery.com/grid-search-hyperparameters-deep-learning-models-python-keras/
"""

Random search

# Template: random search over Keras hyperparameters with scikit-learn.
# The hparams names, dvalue defaults, and the "..." are placeholders to
# fill in with your own model's hyperparameters.
from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV
from scikeras.wrappers import KerasClassifier

# Load the dataset
X, Y = load_dataset()

# Create model for KerasClassifier
def create_model(hparams1=dvalue,
                 hparams2=dvalue,
                 ...,
                 hparamsn=dvalue):
    # Model definition
    ...

model = KerasClassifier(build_fn=create_model)

# Specify parameters and distributions to sample from
hparams1 = randint(1, 100)
hparams2 = ['elu', 'relu', ...]
...
hparamsn = uniform(0, 1)

# Prepare the dict for the search
param_dist = dict(hparams1=hparams1, hparams2=hparams2, ..., hparamsn=hparamsn)

# Search in action!
n_iter_search = 16  # Number of parameter settings that are sampled.
random_search = RandomizedSearchCV(estimator=model,
                                   param_distributions=param_dist,
                                   n_iter=n_iter_search,
                                   n_jobs=..., cv=..., verbose=...)
random_search.fit(X, Y)

# Show the results
print("Best: %f using %s" % (random_search.best_score_, random_search.best_params_))
means = random_search.cv_results_['mean_test_score']
stds = random_search.cv_results_['std_test_score']
params = random_search.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))
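The template above leaves several blanks to fill in. To make the core idea concrete without scikit-learn or Keras, here is a self-contained sketch of random search: sample a fixed budget of configurations from the distributions and keep the best one. `evaluate` is a made-up stand-in for a real train-and-validate run:

```python
import random

def sample_config(rng):
    # draw one setting: log-uniform learning rate, a categorical
    # activation, and an integer layer width
    return {
        "learning_rate": 10 ** rng.uniform(-4, -1),
        "activation": rng.choice(["elu", "relu"]),
        "hidden_units": rng.randint(8, 128),
    }

def evaluate(cfg):
    # hypothetical stand-in for training + validation score
    # (higher is better); a real run would fit a model here
    return 1.0 - (cfg["learning_rate"] - 0.01) ** 2 \
               - abs(cfg["hidden_units"] - 64) / 1000.0

rng = random.Random(7)
n_iter_search = 16  # number of parameter settings that are sampled
trials = [sample_config(rng) for _ in range(n_iter_search)]
best = max(trials, key=evaluate)
print("Best: %f using %s" % (evaluate(best), best))
```

Unlike grid search, the budget (`n_iter_search`) is fixed regardless of how many hyperparameters are searched, which is what makes random search attractive in high dimensions.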

Bayesian search

"""
准备数据
"""
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()# split into train, validation and test sets
train_x, val_x, train_y, val_y = train_test_split(train_images, train_labels, stratify=train_labels, random_state=48, test_size=0.05)
(test_x, test_y)=(test_images, test_labels)# normalize pixels to range 0-1
train_x = train_x / 255.0
val_x = val_x / 255.0
test_x = test_x / 255.0#one-hot encode target variable
train_y = to_categorical(train_y)
val_y = to_categorical(val_y)
test_y = to_categorical(test_y)# pip3 install keras-tuner
"""
调整获取最优参数(MLP版)
"""
model = Sequential()model.add(Dense(units = hp.Int('dense-bot', min_value=50, max_value=350, step=50), input_shape=(784,), activation='relu'))for i in range(hp.Int('num_dense_layers', 1, 2)):model.add(Dense(units=hp.Int('dense_' + str(i), min_value=50, max_value=100, step=25), activation='relu'))model.add(Dropout(hp.Choice('dropout_'+ str(i), values=[0.0, 0.1, 0.2])))model.add(Dense(10,activation="softmax"))hp_optimizer=hp.Choice('Optimizer', values=['Adam', 'SGD'])if hp_optimizer == 'Adam':hp_learning_rate = hp.Choice('learning_rate', values=[1e-1, 1e-2, 1e-3])
elif hp_optimizer == 'SGD':hp_learning_rate = hp.Choice('learning_rate', values=[1e-1, 1e-2, 1e-3])nesterov=Truemomentum=0.9
model.compile(optimizer = hp_optimizer, loss='categorical_crossentropy', metrics=['accuracy'])tuner_mlp = kt.tuners.BayesianOptimization(model,seed=random_seed,objective='val_loss',max_trials=30,directory='.',project_name='tuning-mlp')
tuner_mlp.search(train_x, train_y, epochs=50, batch_size=32, validation_data=(dev_x, dev_y), callbacks=callback)
best_mlp_hyperparameters = tuner_mlp.get_best_hyperparameters(1)[0]
print("Best Hyper-parameters")
# best_mlp_hyperparameters.values
"""
使用最优参数来训练模型
"""
model_mlp = Sequential()model_mlp.add(Dense(best_mlp_hyperparameters['dense-bot'], input_shape=(784,), activation='relu'))for i in range(best_mlp_hyperparameters['num_dense_layers']):model_mlp.add(Dense(units=best_mlp_hyperparameters['dense_' +str(i)], activation='relu'))model_mlp.add(Dropout(rate=best_mlp_hyperparameters['dropout_' +str(i)]))model_mlp.add(Dense(10,activation="softmax"))model_mlp.compile(optimizer=best_mlp_hyperparameters['Optimizer'], loss='categorical_crossentropy',metrics=['accuracy'])
history_mlp= model_mlp.fit(train_x, train_y, epochs=100, batch_size=32, validation_data=(dev_x, dev_y), callbacks=callback)
# model_mlp=tuner_mlp.hypermodel.build(best_mlp_hyperparameters)
# history_mlp=model_mlp.fit(train_x, train_y, epochs=100, batch_size=32, validation_data=(dev_x, dev_y), callbacks=callback)
"""
效果测试
"""
mlp_test_loss, mlp_test_acc = model_mlp.evaluate(test_x,  test_y, verbose=2)
print('\nTest accuracy:', mlp_test_acc)
# Test accuracy: 0.8823"""
CNN版
"""
"""
基线模型
"""
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten

model_cnn = Sequential()
model_cnn.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model_cnn.add(MaxPooling2D((2, 2)))
model_cnn.add(Flatten())
model_cnn.add(Dense(100, activation='relu'))
model_cnn.add(Dense(10, activation='softmax'))
model_cnn.compile(optimizer="adam", loss='categorical_crossentropy', metrics=['accuracy'])
"""
贝叶斯搜索超参数
"""
model = Sequential()model = Sequential()
model.add(Input(shape=(28, 28, 1)))for i in range(hp.Int('num_blocks', 1, 2)):hp_padding=hp.Choice('padding_'+ str(i), values=['valid', 'same'])hp_filters=hp.Choice('filters_'+ str(i), values=[32, 64])model.add(Conv2D(hp_filters, (3, 3), padding=hp_padding, activation='relu', kernel_initializer='he_uniform', input_shape=(28, 28, 1)))model.add(MaxPooling2D((2, 2)))model.add(Dropout(hp.Choice('dropout_'+ str(i), values=[0.0, 0.1, 0.2])))model.add(Flatten())hp_units = hp.Int('units', min_value=25, max_value=150, step=25)
model.add(Dense(hp_units, activation='relu', kernel_initializer='he_uniform'))model.add(Dense(10,activation="softmax"))hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3])
hp_optimizer=hp.Choice('Optimizer', values=['Adam', 'SGD'])if hp_optimizer == 'Adam':hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3])
elif hp_optimizer == 'SGD':hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3])nesterov=Truemomentum=0.9
model.compile( optimizer=hp_optimizer,loss='categorical_crossentropy', metrics=['accuracy'])tuner_cnn = kt.tuners.BayesianOptimization(model,objective='val_loss',max_trials=100,directory='.',project_name='tuning-cnn')
"""
采用最佳超参数训练模型
"""
model_cnn = Sequential()model_cnn.add(Input(shape=(28, 28, 1)))for i in range(best_cnn_hyperparameters['num_blocks']):hp_padding=best_cnn_hyperparameters['padding_'+ str(i)]hp_filters=best_cnn_hyperparameters['filters_'+ str(i)]model_cnn.add(Conv2D(hp_filters, (3, 3), padding=hp_padding, activation='relu', kernel_initializer='he_uniform', input_shape=(28, 28, 1)))model_cnn.add(MaxPooling2D((2, 2)))model_cnn.add(Dropout(best_cnn_hyperparameters['dropout_'+ str(i)]))model_cnn.add(Flatten())
model_cnn.add(Dense(best_cnn_hyperparameters['units'], activation='relu', kernel_initializer='he_uniform'))model_cnn.add(Dense(10,activation="softmax"))model_cnn.compile(optimizer=best_cnn_hyperparameters['Optimizer'], loss='categorical_crossentropy', metrics=['accuracy'])
print(model_cnn.summary())history_cnn= model_cnn.fit(train_x, train_y, epochs=50, batch_size=32, validation_data=(dev_x, dev_y), callbacks=callback)
cnn_test_loss, cnn_test_acc = model_cnn.evaluate(test_x,  test_y, verbose=2)
print('\nTest accuracy:', cnn_test_acc)# Test accuracy: 0.92
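To see why Bayesian search can beat random search, the loop can be sketched in plain Python: keep a history of (config, score) pairs, and pick each next trial by trading off a cheap surrogate prediction against an exploration bonus. This is only a toy nearest-neighbour surrogate, not the Gaussian process that Keras Tuner's BayesianOptimization actually uses, and `objective` is a made-up 1-D stand-in for a validation loss:

```python
import random

def objective(x):
    # made-up 1-D stand-in for a validation loss; minimum at x = 0.3
    return (x - 0.3) ** 2

def suggest(observed, rng, n_candidates=200, kappa=0.1):
    # score random candidates with a crude surrogate: the loss of the
    # nearest observed point, minus an exploration bonus that grows with
    # the distance to that point; return the most promising candidate
    def acquisition(x):
        nearest_x, nearest_y = min(observed, key=lambda p: abs(p[0] - x))
        return nearest_y - kappa * abs(x - nearest_x)
    candidates = [rng.uniform(0.0, 1.0) for _ in range(n_candidates)]
    return min(candidates, key=acquisition)

rng = random.Random(0)
observed = [(x, objective(x)) for x in (0.05, 0.5, 0.95)]  # warm-up trials
for _ in range(30):
    x = suggest(observed, rng)
    observed.append((x, objective(x)))

best_x, best_y = min(observed, key=lambda p: p[1])
print("best x: %.3f  best loss: %.5f" % (best_x, best_y))
```

Each trial refines the surrogate around promising regions, so later evaluations concentrate where the loss is low instead of being spent uniformly, which is the whole point of model-based search when one evaluation means a full training run.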

Hyperparameter tuning frameworks

  • Optuna: hyperparameter optimization for deep learning
  • NVIDIA NeMo: automatic hyperparameter search and analysis for large-model training
  • https://github.com/NVIDIA/NeMo-Framework-Launcher

Optuna: a deep learning hyperparameter optimization framework

import os
import optuna
import plotly
import sklearn.datasets
import sklearn.linear_model
import sklearn.model_selection
from optuna.trial import TrialState
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data
from torchvision import datasets
from torchvision import transforms
from optuna.visualization import plot_optimization_history
from optuna.visualization import plot_param_importances
from optuna.visualization import plot_slice
from optuna.visualization import plot_intermediate_values
from optuna.visualization import plot_parallel_coordinate

# The code below defines the search space for an SGDClassifier:
# alpha, max_iter, and the loss function.
def objective(trial):
    iris = sklearn.datasets.load_iris()
    classes = list(set(iris.target))
    train_x, valid_x, train_y, valid_y = sklearn.model_selection.train_test_split(
        iris.data, iris.target, test_size=0.25, random_state=0)
    # define the search space
    alpha = trial.suggest_loguniform('alpha', 1e-5, 1e-1)
    max_iter = trial.suggest_int('max_iter', 64, 192, step=64)
    loss = trial.suggest_categorical('loss', ['hinge', 'log', 'perceptron'])
    clf = sklearn.linear_model.SGDClassifier(loss=loss, alpha=alpha, max_iter=max_iter)
    # (fitting and returning the validation score are added in the snippets below)

# The code below defines the search space for the learning rate,
# the optimizer, and the number of units.
def objective(trial):
    params = {
        'learning_rate': trial.suggest_loguniform('learning_rate', 1e-5, 1e-1),
        'optimizer': trial.suggest_categorical("optimizer", ["Adam", "RMSprop", "SGD"]),
        'n_unit': trial.suggest_int("n_unit", 4, 18),
    }
    model = build_model(params)
    accuracy = train_and_evaluate(params, model)
    return accuracy

# Record intermediate values during training so that bad trials can be pruned
def objective(trial):
    iris = sklearn.datasets.load_iris()
    classes = list(set(iris.target))
    train_x, valid_x, train_y, valid_y = sklearn.model_selection.train_test_split(
        iris.data, iris.target, test_size=0.25, random_state=0)
    alpha = trial.suggest_loguniform('alpha', 1e-5, 1e-1)
    max_iter = trial.suggest_int('max_iter', 64, 192, step=64)
    loss = trial.suggest_categorical('loss', ['hinge', 'log', 'perceptron'])
    clf = sklearn.linear_model.SGDClassifier(loss=loss, alpha=alpha, max_iter=max_iter)
    for step in range(100):
        clf.partial_fit(train_x, train_y, classes=classes)
        intermediate_value = 1.0 - clf.score(valid_x, valid_y)
        trial.report(intermediate_value, step)
        if trial.should_prune():
            raise optuna.TrialPruned()
    return 1.0 - clf.score(valid_x, valid_y)

# Create the study and run the optimization (reusing the pruning objective above)
study = optuna.create_study(
    storage='path',  # placeholder: e.g. an RDB URL such as 'sqlite:///example.db'
    study_name='first',
    pruner=optuna.pruners.MedianPruner())
# study = optuna.study.load_study('first', 'path')
study.optimize(objective, n_trials=20)

pruned_trials = study.get_trials(deepcopy=False, states=[TrialState.PRUNED])
complete_trials = study.get_trials(deepcopy=False, states=[TrialState.COMPLETE])
print("Study statistics: ")
print("  Number of finished trials: ", len(study.trials))
print("  Number of pruned trials: ", len(pruned_trials))
print("  Number of complete trials: ", len(complete_trials))
print("Best trial:")
trial = study.best_trial
print("  Value: ", trial.value)
print("  Params: ")
for key, value in trial.params.items():
    print("{}:{}".format(key, value))

# Visualize the search results
optuna.visualization.plot_contour(study)
# if that does not render, try writing the plot to an HTML file:
vis_path = r'result-vis/'
graph_cout = optuna.visualization.plot_contour(study, params=['n_layers', 'lr'])
plotly.offline.plot(graph_cout, filename=vis_path + 'graph_cout.html')

plot_optimization_history(study)
# if that does not render, try:
history = plot_optimization_history(study)
plotly.offline.plot(history, filename=vis_path + 'history.html')

plot_intermediate_values(study)
# if that does not render, try:
intermed = plot_intermediate_values(study)
plotly.offline.plot(intermed, filename=vis_path + 'intermed.html')

plot_slice(study, params=['alpha', 'max_iter', 'loss'])
# if that does not render, try:
slices = plot_slice(study)
plotly.offline.plot(slices, filename=vis_path + 'slices.html')

plot_parallel_coordinate(study, params=['alpha', 'max_iter', 'loss'])
# if that does not render, try:
paraller = plot_parallel_coordinate(study)
plotly.offline.plot(paraller, filename=vis_path + 'paraller.html')
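The `trial.report` / `should_prune` pattern above can be made concrete with a plain-Python sketch of the median rule that `optuna.pruners.MedianPruner` applies: a trial is pruned at a step when its intermediate value is worse than the median of what earlier trials reported at the same step. This is an illustration of the rule, not Optuna's implementation:

```python
from statistics import median

def should_prune(step, value, histories):
    # histories: one list of intermediate values (lower is better) per
    # earlier trial; prune when `value` is worse than the median at `step`
    peers = [h[step] for h in histories if len(h) > step]
    if not peers:
        return False  # nothing to compare against yet
    return value > median(peers)

histories = [
    [0.9, 0.7, 0.5],  # a strong earlier trial
    [0.9, 0.8, 0.7],
    [1.0, 0.9, 0.8],
]
print(should_prune(1, 0.95, histories))  # True: 0.95 is worse than median 0.8
print(should_prune(1, 0.60, histories))  # False: better than the median, keep going
```

Because pruning is decided per step, a trial that starts poorly but is still above the median keeps running; only trials that fall behind the pack are stopped early, which saves most of the wasted epochs.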

NVIDIA NeMo: a hyperparameter optimization framework for large models

  • User guide: NVIDIA NeMo user guide
