政安晨: [Keras Machine Learning Examples] (51): Structured Data Learning with Wide, Deep, and Cross Networks


Goal of this post: structured data classification using Wide & Deep and Deep & Cross networks.

Contents

Introduction

Dataset

Setup

Prepare the data

Define dataset metadata

Experiment setup

Create model inputs

Encode features

Experiment 1: a baseline model

Experiment 2: Wide & Deep model

Experiment 3: Deep & Cross model

Conclusion


Introduction

This example demonstrates how to do structured data classification using two modeling techniques:

Wide & Deep models (Cheng et al., 2016)
Deep & Cross models (Wang et al., 2017)


Note that this example should be run with TensorFlow 2.5 or higher.

Dataset


This example uses the Covertype dataset from the UCI Machine Learning Repository. The task is to predict forest cover type from cartographic variables. The dataset includes 581,012 instances with 12 input features: 10 numerical features and 2 categorical features. Each instance is categorized into 1 of 7 classes.

Setup

import os

# Only the TensorFlow backend supports string inputs.
os.environ["KERAS_BACKEND"] = "tensorflow"

import math
import numpy as np
import pandas as pd
from tensorflow import data as tf_data
import keras
from keras import layers

Prepare the data


First, let's load the dataset from the UCI Machine Learning Repository into a Pandas DataFrame:

data_url = (
    "https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.data.gz"
)
raw_data = pd.read_csv(data_url, header=None)
print(f"Dataset shape: {raw_data.shape}")
raw_data.head()
Dataset shape: (581012, 55)

      0    1   2    3    4     5    6    7    8     9  ... 45 46 47 48 49 50 51 52 53 54
0  2596   51   3  258    0   510  221  232  148  6279 ...  0  0  0  0  0  0  0  0  0  5
1  2590   56   2  212   -6   390  220  235  151  6225 ...  0  0  0  0  0  0  0  0  0  5
2  2804  139   9  268   65  3180  234  238  135  6121 ...  0  0  0  0  0  0  0  0  0  2
3  2785  155  18  242  118  3090  238  238  122  6211 ...  0  0  0  0  0  0  0  0  0  2
4  2595   45   2  153   -1   391  220  234  150  6172 ...  0  0  0  0  0  0  0  0  0  5

5 rows × 55 columns

The two categorical features in the dataset are binary-encoded. We will convert this dataset representation to the typical representation, where each categorical feature is represented as a single integer value.

soil_type_values = [f"soil_type_{idx+1}" for idx in range(40)]
wilderness_area_values = [f"area_type_{idx+1}" for idx in range(4)]

soil_type = raw_data.loc[:, 14:53].apply(
    lambda x: soil_type_values[0::1][x.to_numpy().nonzero()[0][0]], axis=1
)
wilderness_area = raw_data.loc[:, 10:13].apply(
    lambda x: wilderness_area_values[0::1][x.to_numpy().nonzero()[0][0]], axis=1
)

CSV_HEADER = [
    "Elevation",
    "Aspect",
    "Slope",
    "Horizontal_Distance_To_Hydrology",
    "Vertical_Distance_To_Hydrology",
    "Horizontal_Distance_To_Roadways",
    "Hillshade_9am",
    "Hillshade_Noon",
    "Hillshade_3pm",
    "Horizontal_Distance_To_Fire_Points",
    "Wilderness_Area",
    "Soil_Type",
    "Cover_Type",
]

data = pd.concat(
    [raw_data.loc[:, 0:9], wilderness_area, soil_type, raw_data.loc[:, 54]],
    axis=1,
    ignore_index=True,
)
data.columns = CSV_HEADER

# Convert the target label indices into a range from 0 to 6 (there are 7 labels in total).
data["Cover_Type"] = data["Cover_Type"] - 1

print(f"Dataset shape: {data.shape}")
data.head().T
Dataset shape: (581012, 13)

                                               0             1             2             3             4
Elevation                                   2596          2590          2804          2785          2595
Aspect                                        51            56           139           155            45
Slope                                          3             2             9            18             2
Horizontal_Distance_To_Hydrology             258           212           268           242           153
Vertical_Distance_To_Hydrology                 0            -6            65           118            -1
Horizontal_Distance_To_Roadways              510           390          3180          3090           391
Hillshade_9am                                221           220           234           238           220
Hillshade_Noon                               232           235           238           238           234
Hillshade_3pm                                148           151           135           122           150
Horizontal_Distance_To_Fire_Points          6279          6225          6121          6211          6172
Wilderness_Area                      area_type_1   area_type_1   area_type_1   area_type_1   area_type_1
Soil_Type                           soil_type_29  soil_type_29  soil_type_12  soil_type_30  soil_type_29
Cover_Type                                     4             4             1             1             4

The shape of the DataFrame shows there are 13 columns per sample (12 for the features and 1 for the target label).

Let's split the data into training (85%) and test (15%) sets.

train_splits = []
test_splits = []

for _, group_data in data.groupby("Cover_Type"):
    random_selection = np.random.rand(len(group_data.index)) <= 0.85
    train_splits.append(group_data[random_selection])
    test_splits.append(group_data[~random_selection])

train_data = pd.concat(train_splits).sample(frac=1).reset_index(drop=True)
test_data = pd.concat(test_splits).sample(frac=1).reset_index(drop=True)

print(f"Train split size: {len(train_data.index)}")
print(f"Test split size: {len(test_data.index)}")
Train split size: 493323
Test split size: 87689

Then, store the training and test data in separate CSV files.

train_data_file = "train_data.csv"
test_data_file = "test_data.csv"

train_data.to_csv(train_data_file, index=False)
test_data.to_csv(test_data_file, index=False)

Define dataset metadata


Here, we define the metadata of the dataset that will be useful for reading and parsing the data into input features, and for encoding the input features according to their types.

TARGET_FEATURE_NAME = "Cover_Type"

TARGET_FEATURE_LABELS = ["0", "1", "2", "3", "4", "5", "6"]

NUMERIC_FEATURE_NAMES = [
    "Aspect",
    "Elevation",
    "Hillshade_3pm",
    "Hillshade_9am",
    "Hillshade_Noon",
    "Horizontal_Distance_To_Fire_Points",
    "Horizontal_Distance_To_Hydrology",
    "Horizontal_Distance_To_Roadways",
    "Slope",
    "Vertical_Distance_To_Hydrology",
]

CATEGORICAL_FEATURES_WITH_VOCABULARY = {
    "Soil_Type": list(data["Soil_Type"].unique()),
    "Wilderness_Area": list(data["Wilderness_Area"].unique()),
}

CATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURES_WITH_VOCABULARY.keys())

FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES

COLUMN_DEFAULTS = [
    [0] if feature_name in NUMERIC_FEATURE_NAMES + [TARGET_FEATURE_NAME] else ["NA"]
    for feature_name in CSV_HEADER
]

NUM_CLASSES = len(TARGET_FEATURE_LABELS)

Experiment setup


Next, let's define an input function that reads and parses the file, then converts features and labels into a tf.data.Dataset for training or evaluation.

def get_dataset_from_csv(csv_file_path, batch_size, shuffle=False):
    dataset = tf_data.experimental.make_csv_dataset(
        csv_file_path,
        batch_size=batch_size,
        column_names=CSV_HEADER,
        column_defaults=COLUMN_DEFAULTS,
        label_name=TARGET_FEATURE_NAME,
        num_epochs=1,
        header=True,
        shuffle=shuffle,
    )
    return dataset.cache()
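
Each element yielded by this dataset is a (features, labels) pair, where features is a dictionary keyed by column name. As a minimal sketch (not part of the original example, assuming the CSV files created above), one batch can be inspected like this:

sample_dataset = get_dataset_from_csv(train_data_file, batch_size=5)
for features, labels in sample_dataset.take(1):
    # Each feature is a tensor of shape (batch_size,); categorical columns are strings.
    print({name: values.shape for name, values in features.items()})
    print(labels)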

Here, we configure the parameters and implement the procedure for running a training and evaluation experiment with a given model.

learning_rate = 0.001
dropout_rate = 0.1
batch_size = 265
num_epochs = 50

hidden_units = [32, 32]


def run_experiment(model):
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
        loss=keras.losses.SparseCategoricalCrossentropy(),
        metrics=[keras.metrics.SparseCategoricalAccuracy()],
    )

    train_dataset = get_dataset_from_csv(train_data_file, batch_size, shuffle=True)
    test_dataset = get_dataset_from_csv(test_data_file, batch_size)

    print("Start training the model...")
    history = model.fit(train_dataset, epochs=num_epochs)
    print("Model training finished")

    _, accuracy = model.evaluate(test_dataset, verbose=0)
    print(f"Test accuracy: {round(accuracy * 100, 2)}%")

Create model inputs


Now, define the inputs for the model as a dictionary, where the key is the feature name, and the value is a keras.layers.Input tensor with the corresponding feature shape and data type.

def create_model_inputs():
    inputs = {}
    for feature_name in FEATURE_NAMES:
        if feature_name in NUMERIC_FEATURE_NAMES:
            inputs[feature_name] = layers.Input(
                name=feature_name, shape=(), dtype="float32"
            )
        else:
            inputs[feature_name] = layers.Input(
                name=feature_name, shape=(), dtype="string"
            )
    return inputs

Encode features


We create two representations of our input features: sparse and dense.

1. In the sparse representation, the categorical features are one-hot encoded using a CategoryEncoding-style layer. This representation can help the model memorize particular feature values to make certain predictions.
2. In the dense representation, the categorical features are encoded with low-dimensional embeddings using an Embedding layer. This representation helps the model generalize well to unseen feature combinations.
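
As a quick illustration (a toy sketch, not part of the original example, using a made-up vocabulary), the two encodings differ like this:

# A toy vocabulary; the real vocabularies come from the dataset columns.
vocab = ["area_type_1", "area_type_2", "area_type_3", "area_type_4"]

# Sparse representation: a binary one-hot vector over the vocabulary.
onehot = layers.StringLookup(
    vocabulary=vocab, mask_token=None, num_oov_indices=0, output_mode="binary"
)
print(onehot([["area_type_2"]]))  # [[0. 1. 0. 0.]]

# Dense representation: an integer index fed into a trainable embedding.
to_index = layers.StringLookup(vocabulary=vocab, mask_token=None, num_oov_indices=0)
embedding = layers.Embedding(input_dim=len(vocab), output_dim=2)
print(embedding(to_index(["area_type_2"])))  # a learned 2-dimensional vector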

def encode_inputs(inputs, use_embedding=False):
    encoded_features = []
    for feature_name in inputs:
        if feature_name in CATEGORICAL_FEATURE_NAMES:
            vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
            # Create a lookup to convert string values to integer indices.
            # Since we are not using a mask token nor expecting any out of vocabulary
            # (oov) token, we set mask_token to None and num_oov_indices to 0.
            lookup = layers.StringLookup(
                vocabulary=vocabulary,
                mask_token=None,
                num_oov_indices=0,
                output_mode="int" if use_embedding else "binary",
            )
            if use_embedding:
                # Convert the string input values into integer indices.
                encoded_feature = lookup(inputs[feature_name])
                embedding_dims = int(math.sqrt(len(vocabulary)))
                # Create an embedding layer with the specified dimensions.
                embedding = layers.Embedding(
                    input_dim=len(vocabulary), output_dim=embedding_dims
                )
                # Convert the index values to embedding representations.
                encoded_feature = embedding(encoded_feature)
            else:
                # Convert the string input values into a one hot encoding.
                encoded_feature = lookup(
                    keras.ops.expand_dims(inputs[feature_name], -1)
                )
        else:
            # Use the numerical features as-is.
            encoded_feature = keras.ops.expand_dims(inputs[feature_name], -1)
        encoded_features.append(encoded_feature)
    all_features = layers.concatenate(encoded_features)
    return all_features

Experiment 1: a baseline model


In the first experiment, let's create a multi-layer feedforward network where the categorical features are one-hot encoded.

def create_baseline_model():
    inputs = create_model_inputs()
    features = encode_inputs(inputs)

    for units in hidden_units:
        features = layers.Dense(units)(features)
        features = layers.BatchNormalization()(features)
        features = layers.ReLU()(features)
        features = layers.Dropout(dropout_rate)(features)

    outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(features)
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model


baseline_model = create_baseline_model()
keras.utils.plot_model(baseline_model, show_shapes=True, rankdir="LR")
/Users/fchollet/Library/Python/3.10/lib/python/site-packages/numpy/core/numeric.py:2468: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
  return bool(asarray(a1 == a2).all())

Let's run it:

run_experiment(baseline_model)
Start training the model...
Epoch 1/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 6s 3ms/step - loss: 1.0713 - sparse_categorical_accuracy: 0.5634
Epoch 2/50
 179/1862 ━━━━━━━━━━━━━━━━━━━━ 1s 848us/step - loss: 0.7473 - sparse_categorical_accuracy: 0.6840

/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py:153: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
  self.gen.throw(typ, value, traceback)

1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 904us/step - loss: 0.7386 - sparse_categorical_accuracy: 0.6866
Epoch 3/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 909us/step - loss: 0.7135 - sparse_categorical_accuracy: 0.6958
...
Epoch 50/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 833us/step - loss: 0.6714 - sparse_categorical_accuracy: 0.7106
Model training finished
Test accuracy: 69.5%

The baseline model achieves a test accuracy of about 69.5%.

Experiment 2: Wide & Deep model

In the second experiment, we create a Wide & Deep model. The wide part is a linear model, while the deep part is a multi-layer feedforward network.

We use the sparse representation of the input features in the wide part of the model, and the dense representation in the deep part.

Note that every input feature contributes to both parts of the model with different representations.

def create_wide_and_deep_model():
    inputs = create_model_inputs()
    wide = encode_inputs(inputs)
    wide = layers.BatchNormalization()(wide)

    deep = encode_inputs(inputs, use_embedding=True)
    for units in hidden_units:
        deep = layers.Dense(units)(deep)
        deep = layers.BatchNormalization()(deep)
        deep = layers.ReLU()(deep)
        deep = layers.Dropout(dropout_rate)(deep)

    merged = layers.concatenate([wide, deep])
    outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(merged)
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model


wide_and_deep_model = create_wide_and_deep_model()
keras.utils.plot_model(wide_and_deep_model, show_shapes=True, rankdir="LR")
/Users/fchollet/Library/Python/3.10/lib/python/site-packages/numpy/core/numeric.py:2468: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
  return bool(asarray(a1 == a2).all())

Let's run it:

run_experiment(wide_and_deep_model)
Start training the model...
Epoch 1/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 5s 2ms/step - loss: 0.8979 - sparse_categorical_accuracy: 0.6386
Epoch 2/50
 128/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6317 - sparse_categorical_accuracy: 0.7302

/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py:153: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
  self.gen.throw(typ, value, traceback)

1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6290 - sparse_categorical_accuracy: 0.7295
Epoch 3/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6130 - sparse_categorical_accuracy: 0.7350
...
Epoch 50/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5403 - sparse_categorical_accuracy: 0.7701
Model training finished
Test accuracy: 79.04%

The Wide & Deep model achieves about 79% test accuracy.

Experiment 3: Deep & Cross model

In the third experiment, we create a Deep & Cross model. The deep part of this model is the same as the deep part created in the previous experiment. The key idea of the cross part is to apply explicit feature crossing in an efficient way, where the degree of cross features grows with layer depth.
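
Concretely, each cross layer follows the Deep & Cross Network (DCN) formulation: given the input of the cross stack x_0 and the output of the previous layer x_l,

x_{l+1} = x_0 ⊙ (W_l x_l + b_l) + x_l,

where ⊙ denotes element-wise multiplication. The residual term x_l preserves the lower-order interactions, while multiplying by x_0 raises the polynomial degree of the feature crosses by one at each layer; this corresponds to the cross = x0 * x + cross line in the code below.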

def create_deep_and_cross_model():
    inputs = create_model_inputs()
    x0 = encode_inputs(inputs, use_embedding=True)

    cross = x0
    for _ in hidden_units:
        units = cross.shape[-1]
        x = layers.Dense(units)(cross)
        cross = x0 * x + cross
    cross = layers.BatchNormalization()(cross)

    deep = x0
    for units in hidden_units:
        deep = layers.Dense(units)(deep)
        deep = layers.BatchNormalization()(deep)
        deep = layers.ReLU()(deep)
        deep = layers.Dropout(dropout_rate)(deep)

    merged = layers.concatenate([cross, deep])
    outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(merged)
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model


deep_and_cross_model = create_deep_and_cross_model()
keras.utils.plot_model(deep_and_cross_model, show_shapes=True, rankdir="LR")
/Users/fchollet/Library/Python/3.10/lib/python/site-packages/numpy/core/numeric.py:2468: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
  return bool(asarray(a1 == a2).all())

Let's run it:

run_experiment(deep_and_cross_model)
Start training the model...
Epoch 1/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 5s 2ms/step - loss: 0.9221 - sparse_categorical_accuracy: 0.6235
Epoch 2/50
 116/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6388 - sparse_categorical_accuracy: 0.7257

/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py:153: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
  self.gen.throw(typ, value, traceback)

1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 2ms/step - loss: 0.6271 - sparse_categorical_accuracy: 0.7316
Epoch 3/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.6023 - sparse_categorical_accuracy: 0.7403
...
Epoch 50/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5446 - sparse_categorical_accuracy: 0.7663
Model training finished
Test accuracy: 77.98%

The Deep & Cross model achieves about 78% test accuracy.

Conclusion


You can use Keras preprocessing layers to easily handle categorical features with different encoding mechanisms, including one-hot encoding and feature embedding. In addition, different model architectures, such as wide, deep, and cross networks, have different advantages with respect to different dataset properties. You can explore using them independently or combining them to get the best result for your dataset.
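
As a starting point for combining them, here is a minimal sketch (an extension of mine, not part of the original example) that merges wide, cross, and deep components into one classifier, reusing the create_model_inputs, encode_inputs, hidden_units, dropout_rate, and NUM_CLASSES defined above:

def create_wide_deep_and_cross_model():
    inputs = create_model_inputs()

    # Wide: sparse one-hot features through a linear path.
    wide = encode_inputs(inputs)
    wide = layers.BatchNormalization()(wide)

    # Shared dense embedding for the cross and deep parts.
    x0 = encode_inputs(inputs, use_embedding=True)

    # Cross: explicit feature interactions, as in Experiment 3.
    cross = x0
    for _ in hidden_units:
        units = cross.shape[-1]
        x = layers.Dense(units)(cross)
        cross = x0 * x + cross
    cross = layers.BatchNormalization()(cross)

    # Deep: implicit interactions through a feedforward stack, as in Experiment 2.
    deep = x0
    for units in hidden_units:
        deep = layers.Dense(units)(deep)
        deep = layers.BatchNormalization()(deep)
        deep = layers.ReLU()(deep)
        deep = layers.Dropout(dropout_rate)(deep)

    merged = layers.concatenate([wide, cross, deep])
    outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(merged)
    return keras.Model(inputs=inputs, outputs=outputs)

Such a model can be trained and evaluated with the same run_experiment helper used in the experiments above.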

