Data Mining in Practice: An Energy Price Prediction Model Based on Deep Learning (RNN + CNN)

 



Contents

1. Project Background

2. Dataset Introduction

3. Tools

4. Experiment

4.1 Importing the Data

4.2 Data Preprocessing

4.3 Data Visualization

4.4 Feature Engineering

4.5 Model Building

4.6 Model Evaluation

5. Summary


1. Project Background

Forecasting energy prices has long been an important problem in economics, and accurate forecasts matter both to energy-market participants and to the industries that depend on them. As energy markets become more complex and uncertain, traditional economic models show clear limits for price forecasting, which motivates more flexible and accurate methods.

Deep learning, a major branch of artificial intelligence, is well suited to sequential data and complex features. Recurrent neural networks (RNN) and convolutional neural networks (CNN) are two widely used deep learning models: the former excels at sequence data, the latter at spatial features, so together they can capture both the temporal information and the spatial correlations in the data.

Combining RNNs and CNNs for energy price prediction therefore has both a theoretical basis and practical prospects. An RNN can capture the temporal regularities in energy prices, such as seasonal changes and periodic fluctuations, while a CNN can extract spatial features, such as price differences between regions and the spatial correlation of price distributions. Combining the two exploits their respective strengths and can improve the accuracy and stability of the forecasts.

An energy price prediction model based on deep learning with RNNs and CNNs is therefore of real research and application value. Building and tuning such a model can give market participants more accurate price forecasts to support their decisions and strategies, and it also provides a useful empirical case for applying deep learning in economics.

2. Dataset Introduction

The dataset comes from Kaggle; the raw data contains 35,064 rows and 28 variables. In today's dynamic energy market, accurately forecasting energy prices is essential for effective decision-making and resource allocation. In this project we apply advanced deep learning techniques, specifically one-dimensional convolutional neural networks (CNN) and recurrent neural networks (RNN), to this forecasting problem. By exploiting the historical patterns and dependencies in the energy price data, our goal is to build models that predict future energy prices with high accuracy.

3. Tools

Python version: 3.9

Code editor: Jupyter Notebook

4. Experiment

4.1 Importing the Data

Import the third-party libraries and load the data:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')

data = pd.read_csv('energy_dataset.csv', parse_dates=['time'])
data.head()

Check the size of the data:

data.shape

4.2 Data Preprocessing

data.time = pd.to_datetime(data.time, utc=True, infer_datetime_format=True)
data = data.set_index('time')
data.head()

data.isnull().sum()

As we can see, several columns contain a large number of missing values, so we need to deal with them.

# Count the zeros in every column of the DataFrame
for column_name in data.columns:
    column = data[column_name]
    # Count the zeros in this column
    count = (column == 0).sum()
    print(f"{column_name:{50}} : {count}")

# Drop the columns we will not use
data.drop(['generation hydro pumped storage aggregated', 'forecast wind offshore eday ahead',
           'generation wind offshore', 'generation fossil coal-derived gas',
           'generation fossil oil shale', 'generation fossil peat', 'generation marine',
           'generation geothermal'], inplace=True, axis=1)
data.isnull().sum()

plt.rcParams['figure.figsize'] = (15, 5)
plt.plot(data['total load actual'][:24*7*2])
plt.show()

# Linearly interpolate the missing values in the dataset
data.interpolate(method='linear', limit_direction='forward', inplace=True, axis=0)
data.isnull().sum()

# Create a new column holding the total generation
data['total generation'] = (data['generation biomass'] + data['generation fossil brown coal/lignite'] +
                            data['generation fossil gas'] + data['generation fossil hard coal'] +
                            data['generation fossil oil'] + data['generation hydro pumped storage consumption'] +
                            data['generation hydro run-of-river and poundage'] + data['generation hydro water reservoir'] +
                            data['generation nuclear'] + data['generation other'] +
                            data['generation other renewable'] + data['generation solar'] +
                            data['generation waste'] + data['generation wind onshore'])
data.head()

4.3 Data Visualization

sns.distplot(x=data['total generation'], kde=True, hist=True, hist_kws={'alpha': 0.5})
plt.show()

# Plot the hourly actual electricity price together with its weekly rolling mean
fig, ax = plt.subplots(1, 1)
rolling = data['price actual'].rolling(24*7, center=True).mean()
ax.plot(data['price actual'], color='#4CAF50', label='Actual Price', marker='o', markersize=3)
ax.plot(rolling, color='#2196F3', linestyle='-', linewidth=2, label='Weekly Rolling Mean')
ax.grid(True)  
plt.legend(fontsize='large')  
plt.title('Hourly Electricity Price and Weekly Rolling Mean')
plt.xlabel('Time')
plt.ylabel('Price')
plt.tight_layout() 
plt.show()

# Plot the monthly electricity price together with its one-year lag
monthly_price = data['price actual'].asfreq('M')
lagged = monthly_price.shift(12)
fig, ax = plt.subplots(1, 1)
ax.plot(monthly_price, label='Monthly Price', color='#4CAF50', linewidth=2) 
ax.plot(lagged, label='1 yr lagged', color='#2196F3', linewidth=2)      
ax.grid(True)
plt.legend(fontsize='large')
plt.title('Electricity Price (Month-wise) with 1st Year Lag')
plt.xlabel('Time (Months)')
plt.ylabel('Price')
plt.show()

Since similar peaks appear in both plots, the data appears to contain a seasonal pattern.
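To check this observation more formally, one could decompose the price series into trend, seasonal and residual components. The sketch below is not part of the original notebook and assumes the statsmodels package is installed.

from statsmodels.tsa.seasonal import seasonal_decompose

# Decompose the hourly price series with a daily period (24 hours);
# a clear repeating seasonal component supports the visual impression
result = seasonal_decompose(data['price actual'], model='additive', period=24)
result.plot()
plt.show()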

# Plot roughly three weeks of hourly data
start = 1 + 24*300
end = 1 + 24*322
plt.plot(data['price actual'][start:end])
plt.show()

sns.distplot(x=data['price actual'], kde=True)
plt.show()

Conclusion: the prices are approximately normally distributed.
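Before leaning on this conclusion, a quick statistical sanity check is possible. The sketch below assumes scipy is available; it is illustrative and not part of the original pipeline.

from scipy import stats

# D'Agostino-Pearson normality test on the price series
stat, p = stats.normaltest(data['price actual'])
print(f"normaltest statistic = {stat:.2f}, p-value = {p:.4f}")
# With ~35k samples even small departures from normality give tiny p-values,
# so "approximately normal" should be read as a visual, not strict, statement.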

4.4 Feature Engineering

# Build supervised samples: each x is a window of `size` consecutive
# values and y is the value that immediately follows the window
def prepare_dataset(data, size):
    x_data = []
    y_data = []
    l = len(data) - size
    for i in range(l):
        x = data[i:i+size]
        y = data[i+size]
        x_data.append(x)
        y_data.append(y)
    return np.array(x_data), np.array(y_data)

# Create a plotting helper that we will reuse for every model
def plot_model_rmse_and_loss(history, title):
    # Collect training and validation RMSE and loss
    train_rmse = history.history['root_mean_squared_error']
    val_rmse = history.history['val_root_mean_squared_error']
    train_loss = history.history['loss']
    val_loss = history.history['val_loss']

    plt.figure(figsize=(15, 5))

    # Plot training and validation RMSE
    plt.subplot(1, 2, 1)
    plt.plot(train_rmse, label='Training RMSE', color='blue', linestyle='-')
    plt.plot(val_rmse, label='Validation RMSE', color='orange', linestyle='--')
    plt.xlabel('Epochs')
    plt.ylabel('RMSE')
    plt.title('Epochs vs. Training and Validation RMSE')
    plt.legend()

    # Plot training and validation loss
    plt.subplot(1, 2, 2)
    plt.plot(train_loss, label='Training Loss', color='green', linestyle='-')
    plt.plot(val_loss, label='Validation Loss', color='red', linestyle='--')
    plt.xlabel('Epochs')
    plt.ylabel('Loss')
    plt.title('Epochs vs. Training and Validation Loss')
    plt.legend()

    plt.suptitle(title, fontweight='bold', fontsize=15)
    # Tighten the layout so elements do not overlap
    plt.tight_layout()
    plt.show()
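To make the windowing concrete, here is a tiny illustrative example of what prepare_dataset returns (not part of the original notebook):

# A series of 6 points with window size 3 yields 3 samples: each x holds
# 3 consecutive values and y is the value that follows them
toy = np.arange(6)      # [0, 1, 2, 3, 4, 5]
x_toy, y_toy = prepare_dataset(toy, 3)
print(x_toy)            # [[0 1 2], [1 2 3], [2 3 4]]
print(y_toy)            # [3 4 5]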
# Scale the price series to the [0, 1] range (min-max scaling)
from sklearn.preprocessing import MinMaxScaler

data_filtered = data['price actual'].values
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_data = scaler.fit_transform(data_filtered.reshape(-1, 1))
scaled_data.shape

 

# Compute the split sizes: 80% for training, and the remaining 20%
# split evenly between validation and test (10% each)
train_size = int(np.ceil(len(scaled_data) * 0.8))
test_size = int((len(scaled_data) - train_size) * 0.5)
print(train_size, test_size)

# Split into training, validation and test sets with a 25-step lookback
# window; each later split starts 25 steps early so no windows are lost
xtrain, ytrain = prepare_dataset(scaled_data[:train_size], 25)
xval, yval = prepare_dataset(scaled_data[train_size-25:train_size+test_size], 25)
xtest, ytest = prepare_dataset(scaled_data[train_size+test_size-25:], 25)
print(xtrain.shape)
print(xval.shape)
print(xtest.shape)

4.5 Model Building

Import the libraries and define the shared training settings:

import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, LSTM, Dropout, Conv1D, Flatten, SimpleRNN

loss = tf.keras.losses.MeanSquaredError()
metric = [tf.keras.metrics.RootMeanSquaredError()]
optimizer = tf.keras.optimizers.Adam()
# Stop training early if the training loss does not improve for 5 epochs
early_stopping = [tf.keras.callbacks.EarlyStopping(monitor='loss', patience=5)]

Method 1: stacked SimpleRNN

# Method 1: stacked SimpleRNN
# Create a Sequential model (a linear stack of layers)
model_SimpleRNN = Sequential()

# Add a SimpleRNN layer with 128 units that returns sequences for the next layer
# input_shape: (number of timesteps, number of features)
model_SimpleRNN.add(SimpleRNN(128, return_sequences=True, input_shape=(xtrain.shape[1], 1)))

# Add another SimpleRNN layer with 64 units that returns only the last output
model_SimpleRNN.add(SimpleRNN(64, return_sequences=False))

# Add a fully connected Dense layer with 64 units
model_SimpleRNN.add(Dense(64))

# Add a Dropout layer with a rate of 0.2 to reduce overfitting
model_SimpleRNN.add(Dropout(0.2))

# Add a fully connected Dense layer with 1 unit for the regression output
model_SimpleRNN.add(Dense(1))

# Compile the model with the shared loss, metric and optimizer
model_SimpleRNN.compile(loss=loss, metrics=metric, optimizer=optimizer)

# Train on (xtrain, ytrain), validating on (xval, yval);
# early_stopping halts training if the monitored loss stops improving
history = model_SimpleRNN.fit(xtrain, ytrain, epochs=10, validation_data=(xval, yval), callbacks=early_stopping)

plot_model_rmse_and_loss(history, "SimpleRNN")

Model evaluation:

# Generate predictions on the test data with the trained SimpleRNN model
predictions = model_SimpleRNN.predict(xtest)

# Inverse-transform both the predictions and the targets so the error is
# computed on the original price scale
predictions = scaler.inverse_transform(predictions)
ytest_actual = scaler.inverse_transform(ytest)

# Root Mean Squared Error (RMSE) between predicted and actual values
simplernn_rmse = np.sqrt(np.mean((predictions - ytest_actual) ** 2))
print(f"Root Mean Squared Error for SimpleRNN = {simplernn_rmse}")

Method 2: 1D CNN

# Method 2: 1D CNN
from tensorflow.keras.optimizers import Adam

# Create a Sequential model (a linear stack of layers)
model_CNN = Sequential()

# Add a 1D convolutional layer with 48 filters, kernel size 2, 'causal' padding and ReLU activation
# input_shape: (number of timesteps, number of features)
model_CNN.add(Conv1D(filters=48, kernel_size=2, padding='causal', activation='relu', input_shape=(xtrain.shape[1], 1)))

# Flatten the convolutional output so it can feed the Dense layers
model_CNN.add(Flatten())

# Add a fully connected Dense layer with 48 units and ReLU activation
model_CNN.add(Dense(48, activation='relu'))

# Add a Dropout layer with a rate of 0.2 to reduce overfitting
model_CNN.add(Dropout(0.2))

# Add a fully connected Dense layer with 1 unit for the regression output
model_CNN.add(Dense(1))

# Create a fresh Adam optimizer with the default learning rate
optimizer = Adam()

# Compile the model with the shared loss and metric and the new optimizer
model_CNN.compile(loss=loss, metrics=metric, optimizer=optimizer)

# Train on (xtrain, ytrain), validating on (xval, yval);
# early_stopping halts training if the monitored loss stops improving
history = model_CNN.fit(xtrain, ytrain, epochs=10, validation_data=(xval, yval), callbacks=early_stopping)

# Plot the RMSE and loss curves for the 1D CNN model
plot_model_rmse_and_loss(history, "CNN 1D")

# Generate predictions on the test data with the trained 1D CNN model
predictions = model_CNN.predict(xtest)

# Inverse-transform the predictions so the error is computed on the
# original price scale (ytest_actual was computed above)
predictions = scaler.inverse_transform(predictions)

# Root Mean Squared Error (RMSE) between predicted and actual values
CNN_rmse = np.sqrt(np.mean((predictions - ytest_actual) ** 2))
print(f"\nRoot Mean Squared Error for CNN 1D = {CNN_rmse}")

Method 3: CNN-LSTM

# Method 3: CNN-LSTM
from tensorflow.keras.optimizers import Adam

# Create a Sequential model for the CNN-LSTM architecture
model_CNN_LSTM = Sequential()

# Add a 1D convolutional layer with 100 filters, kernel size 2, 'causal' padding and ReLU activation
# input_shape: (number of timesteps, number of features)
model_CNN_LSTM.add(Conv1D(filters=100, kernel_size=2, padding='causal', activation='relu', input_shape=(xtrain.shape[1], 1)))

# Add an LSTM layer with 100 units that returns the full sequence
model_CNN_LSTM.add(LSTM(100, return_sequences=True))

# Flatten the LSTM output so it can feed the Dense layers
model_CNN_LSTM.add(Flatten())

# Add a fully connected Dense layer with 100 units and ReLU activation
model_CNN_LSTM.add(Dense(100, activation='relu'))

# Add a Dropout layer with a rate of 0.2 to reduce overfitting
model_CNN_LSTM.add(Dropout(0.2))

# Add a fully connected Dense layer with 1 unit for the regression output
model_CNN_LSTM.add(Dense(1))

# Create a fresh Adam optimizer with the default learning rate
optimizer = Adam()

# Compile the model with the shared loss and metric and the new optimizer
model_CNN_LSTM.compile(loss=loss, metrics=metric, optimizer=optimizer)

# Train on (xtrain, ytrain), validating on (xval, yval);
# early_stopping halts training if the monitored loss stops improving
history = model_CNN_LSTM.fit(xtrain, ytrain, epochs=10, validation_data=(xval, yval), callbacks=early_stopping)

# Plot the RMSE and loss curves for the CNN-LSTM model
plot_model_rmse_and_loss(history, "CNN-LSTM")

# Generate predictions on the test data with the trained CNN-LSTM model
predictions = model_CNN_LSTM.predict(xtest)

# Inverse-transform the predictions so the error is computed on the
# original price scale
predictions = scaler.inverse_transform(predictions)

# Root Mean Squared Error (RMSE) between predicted and actual values
CNN_LSTM_rmse = np.sqrt(np.mean((predictions - ytest_actual) ** 2))
print(f"\nRoot Mean Squared Error for CNN-LSTM = {CNN_LSTM_rmse}")

4.6 Model Evaluation

# Print the test RMSE of all three models side by side
print(f"Root Mean Squared Error for SimpleRNN = {simplernn_rmse}")
print(f"Root Mean Squared Error for CNN 1D = {CNN_rmse}")
print(f"Root Mean Squared Error for CNN-LSTM = {CNN_LSTM_rmse}")

5. Summary


Through these experiments we found that each approach has its own strengths and limitations. SimpleRNN offers a simple and interpretable architecture but can struggle with long-term dependencies. The 1D CNN is effective at capturing local patterns and fluctuations in the data. CNN-LSTM combines the strengths of CNNs and LSTMs, providing a powerful framework for capturing both short-term and long-term dependencies. The choice of method depends on the specific characteristics of the dataset and the forecasting task at hand.

In short, our exploration of the SimpleRNN, 1D CNN and CNN-LSTM models offers useful insight into their applicability and performance on time-series forecasting tasks. By understanding the strengths and limitations of each approach, practitioners can make an informed choice of architecture for their forecasting needs.


