JetBot Gesture Recognition Experiment

Experiment Overview

This experiment implements gesture recognition on the JetBot smart car, built on a Jetson Nano board. Using the car's onboard camera, five different hand gestures are recognized to control the car's motion and its lights.


1. Data Collection

Connect to the JupyterLab environment on the car's board and run the following code block to set up the data-collection environment. Five gesture classes are collected, with 100-200 images per class. The code below shows collection for the 'stop' class; the other four classes are collected the same way.

import traitlets
import ipywidgets.widgets as widgets
from IPython.display import display
from jetbot import Camera, bgr8_to_jpeg

camera = Camera.instance(width=224, height=224)
image = widgets.Image(format='jpeg', width=224, height=224)  # this width and height doesn't necessarily have to match the camera
camera_link = traitlets.dlink((camera, 'value'), (image, 'value'), transform=bgr8_to_jpeg)

import os

stop_dir = 'dataset/stop'

# we have this "try/except" statement because these next functions can throw an error if the directories exist already
try:
    os.makedirs(stop_dir)
except FileExistsError:
    print('Directories not created because they already exist')

button_layout = widgets.Layout(width='128px', height='64px')
stop_button = widgets.Button(description='add stop', button_style='success', layout=button_layout)
stop_count = widgets.IntText(layout=button_layout, value=len(os.listdir(stop_dir)))
display(widgets.HBox([stop_count, stop_button]))

from uuid import uuid1

def save_snapshot(directory):
    # save the current camera frame under a unique filename
    image_path = os.path.join(directory, str(uuid1()) + '.jpg')
    with open(image_path, 'wb') as f:
        f.write(image.value)

def save_stop():
    global stop_dir, stop_count
    save_snapshot(stop_dir)
    stop_count.value = len(os.listdir(stop_dir))

# attach the callbacks; we use a 'lambda' function to ignore the
# parameter that the on_click event would provide to our function
# because we don't need it
stop_button.on_click(lambda x: save_stop())

display(image)
display(widgets.HBox([stop_count, stop_button]))

2. Dataset Creation

The dataset is built by using MediaPipe to detect hand keypoints. MediaPipe detects 21 hand landmarks, each with a fixed index in a fixed ordering, which can be listed programmatically as the snippet below shows.
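As a quick reference for that ordering, this minimal sketch prints the index and name of each of the 21 landmarks using MediaPipe's HandLandmark enum (part of the public mediapipe API):

import mediapipe as mp

# Each landmark has a fixed index; HandLandmark enumerates all 21 of them,
# e.g. 0 = WRIST, 4 = THUMB_TIP, 8 = INDEX_FINGER_TIP, 20 = PINKY_TIP.
for lm in mp.solutions.hands.HandLandmark:
    print(lm.value, lm.name)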

The following is a usage example of MediaPipe on a single image:

import cv2
import mediapipe as mp

# Initialize the MediaPipe Hands module
mp_hands = mp.solutions.hands
hands = mp_hands.Hands(static_image_mode=True,
                       max_num_hands=2,
                       min_detection_confidence=0.5,
                       min_tracking_confidence=0.5)
mp_drawing = mp.solutions.drawing_utils  # utility for drawing the landmarks

# Read the image
image_path = 'dataset/0b61e88c-02ad-11ef-9c74-28dfeb422309.jpg'  # replace with your own image path
image = cv2.imread(image_path)
if image is None:
    print("Cannot find the image.")
else:
    # Convert the image from BGR to RGB
    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    # Process the image and detect hands
    results = hands.process(image_rgb)
    # Convert back from RGB to BGR for display
    image_bgr = cv2.cvtColor(image_rgb, cv2.COLOR_RGB2BGR)
    # Draw the hand landmarks
    if results.multi_hand_landmarks:
        for hand_landmarks in results.multi_hand_landmarks:
            mp_drawing.draw_landmarks(image_bgr, hand_landmarks, mp_hands.HAND_CONNECTIONS)
    # Show the image
    cv2.imshow('Hand Detection', image_bgr)
    cv2.waitKey(0)  # wait for a key press
    cv2.destroyAllWindows()
    # # Optional: save the output image
    # output_image_path = 'path_to_your_output_image.jpg'  # path of the output file
    # cv2.imwrite(output_image_path, image_bgr)
    # print("Output image is saved as", output_image_path)

# Release resources
hands.close()

The core call is the mediapipe.solutions.hands.Hands constructor, which takes the following parameters:

1. static_image_mode:
Type: bool
Default: False
When set to True, the hand detector runs full detection on every call, which suits static images. When set to False, the detector runs detection on the first frame and mainly tracks on subsequent frames, optimizing performance and efficiency for video streams (see the video-mode sketch after this list).
2. max_num_hands:
Type: int
Default: 2
The maximum number of hands detected simultaneously. Adjust this to the application, e.g. scenes where more hands may be present.
3. min_detection_confidence:
Type: float
Default: 0.5
The detection-confidence threshold. A detected hand is considered valid only if its confidence exceeds this value. The range is 0 to 1; raising it reduces false detections but may miss some correct ones.
4. min_tracking_confidence:
Type: float
Default: 0.5
Used in non-static mode, this threshold controls tracking confidence. When a tracked hand's confidence drops below this value, the detector re-runs detection on the next frame instead of continuing to track. The range is likewise 0 to 1.
5. model_complexity:
Type: int
Default: 1
Controls the complexity of the hand-landmark model. Allowed values are 0, 1, or 2. Higher complexity may give higher accuracy but requires more compute and may increase latency.
6. smooth_landmarks:
Type: bool
Default: True
Whether to apply filtering to the detected landmarks. Enabling it yields smoother landmark motion, especially when processing video streams.
7. enable_segmentation:
Type: bool
Default: False
Whether to enable hand-region segmentation. When enabled, a segmentation mask of the hand is returned alongside the landmarks and can be used for further image processing or visual effects.
8. smooth_segmentation:
Type: bool
Default: True
When segmentation is enabled, this controls whether the segmentation mask is smoothed, which helps reduce mask jitter.
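To make the static/tracking distinction concrete, here is a minimal sketch of the video-stream configuration (static_image_mode=False); the webcam index 0 is an assumption:

import cv2
import mediapipe as mp

# In video mode the detector runs full detection only when tracking is lost,
# so per-frame processing is much cheaper than static_image_mode=True.
hands = mp.solutions.hands.Hands(static_image_mode=False,
                                 max_num_hands=1,
                                 min_detection_confidence=0.5,
                                 min_tracking_confidence=0.5)

cap = cv2.VideoCapture(0)  # assumed camera index
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        print('hand detected')
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
hands.close()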

In practice, MediaPipe captures the keypoints of both an open palm and a closed fist well.


Each landmark contains the following three values:

landmark {
  x: 0.5458590388298035
  y: 0.37459805607795715
  z: -0.05105478689074516
}

Their meanings are as follows:

x:
The X coordinate gives the landmark's horizontal position in the image. It is normalized to the range 0 to 1, where 0 is the image's left edge and 1 is its right edge.
y:
The Y coordinate gives the landmark's vertical position in the image. It is likewise normalized to the range 0 to 1, where 0 is the top edge and 1 is the bottom edge (see the pixel-conversion sketch after this list).
z:
The Z coordinate represents the landmark's depth relative to the camera plane. It is normalized and measured relative to the center of the detection box. Z has no physical unit; it is a relative scale for comparing the depth of different landmarks on the same hand, i.e. which landmarks are closer to or farther from the camera. Its absolute value carries no distance meaning and is mainly useful for relative depth comparison.
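Since x and y are normalized, converting a landmark to pixel coordinates just means multiplying by the image size. A minimal sketch, assuming the image and results variables from the example above:

# Convert the wrist landmark (index 0) from normalized to pixel coordinates.
h, w = image.shape[:2]
wrist = results.multi_hand_landmarks[0].landmark[0]
px, py = int(wrist.x * w), int(wrist.y * h)
print(f'wrist at pixel ({px}, {py}), relative depth z={wrist.z:.3f}')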

Run the following code to build the dataset; the training/validation split defaults to 8/2 and is applied later, at training time:

import cv2
import mediapipe as mp
import os

def data_calculate(folder_path, class_name):
    mp_hands = mp.solutions.hands
    hands = mp_hands.Hands(static_image_mode=True,
                           max_num_hands=2,
                           min_detection_confidence=0.5,
                           min_tracking_confidence=0.5)
    fail_img = []
    for img_name in os.listdir(folder_path):
        img = cv2.imread(folder_path + '/' + img_name)
        # Flip horizontally
        img = cv2.flip(img, 1)
        # BGR to RGB
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        try:
            results = hands.process(img)
            with open('data.csv', 'a') as f:
                for i in results.multi_hand_landmarks[0].landmark:
                    f.write(f'{i.x},{i.y},{i.z},')
                f.write(class_name)
                f.write('\n')
        except:
            # no hand detected (multi_hand_landmarks is None)
            fail_img.append(img_name)
    for i in fail_img:
        print(f"Can not extract image {i}")
    print(len(fail_img))

data_calculate(folder_path='dataset/stop_img', class_name="0")
data_calculate(folder_path='dataset/forward_img', class_name="1")
data_calculate(folder_path='dataset/backward_img', class_name="2")
data_calculate(folder_path='dataset/left_img', class_name="3")
data_calculate(folder_path='dataset/right_img', class_name="4")

One detail deserves attention: during conversion, cv2.flip(img, 1) flips each original image horizontally. This is because the person faces the car while collecting data, so left and right as seen by the camera are mirrored relative to the car's left/right controls; the horizontal flip corrects for this.

The dataset is finally written to data.csv. Each row has 64 values: the first 63 columns are the x, y, z values of the 21 keypoints, and the last column is the class label.
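As a quick sanity check, a minimal sketch that loads data.csv and verifies this layout (the file has no header row, as written by the code above):

import pandas as pd

# data.csv has no header row: 63 coordinate columns + 1 label column.
df = pd.read_csv('data.csv', header=None)
print(df.shape)  # (num_samples, 64)
coords = df.iloc[:, :63].to_numpy().reshape(-1, 21, 3)  # per sample: 21 landmarks x (x, y, z)
labels = df.iloc[:, 63].to_numpy()
print(coords.shape, labels[:5])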

3. Model Selection and Training

The model chosen is the gesture-recognition model proposed in the 2018 paper "Deep Learning for Hand Gesture Recognition on Skeletal Data".
Paper: https://ieeexplore.ieee.org/document/8373818
Code: https://github.com/eddieai/Gexpress/tree/master

The core idea of the model is to use three feature-extraction branches: the first branch extracts features with three cascaded convolutions of kernel size 7, the second with three cascaded convolutions of kernel size 3, and the third with three 1-D average-pooling layers. The outputs of the three branches are concatenated and fed into a fully connected head that produces the final class scores. All convolutional and linear layers are initialized with Xavier initialization.
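To see why the fully connected head in the code below has 9 * 7 = 63 input features: each branch halves the 63-value input three times via AvgPool1d(2) (63 -> 31 -> 15 -> 7), and the concatenation stacks 4 + 4 + 1 = 9 channels. A minimal shape check:

import torch
import torch.nn as nn

x = torch.randn(1, 1, 63)  # one sample: 21 landmarks x (x, y, z)
pool3 = nn.Sequential(nn.AvgPool1d(2), nn.AvgPool1d(2), nn.AvgPool1d(2))
print(pool3(x).shape)  # torch.Size([1, 1, 7]): 63 -> 31 -> 15 -> 7
# high branch: 4 channels, low branch: 4 channels, residual branch: 1 channel
# concatenated: (4 + 4 + 1) * 7 = 63 features into the fully connected head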

The model and training code are as follows:

import itertools
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
import numpy as np
import pandas as pd
import random
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt

def load_data(filename):
    # data.csv has no header row, so header=None keeps the first sample
    readbook = pd.read_csv(f'{filename}.csv', header=None)
    nplist = readbook.T.to_numpy()
    data = nplist[0:-1].T
    data = np.float64(data)
    target = nplist[-1]
    return data, target

def random_number(data_size, key):
    number_set = []
    for i in range(data_size):
        number_set.append(i)
    if key == 1:
        random.shuffle(number_set)
    return number_set

def split_dataset(data_set, target_set, rate, ifsuf):
    train_size = int((1 - rate) * len(data_set))  # number of training samples
    data_index = random_number(len(data_set), ifsuf)
    x_train = data_set[data_index[:train_size]]
    x_test = data_set[data_index[train_size:]]
    y_train = target_set[data_index[:train_size]]
    y_test = target_set[data_index[train_size:]]
    return x_train, x_test, y_train, y_test

def inputtotensor(inputtensor, labeltensor):  # convert inputs and labels to tensors
    inputtensor = np.array(inputtensor)
    inputtensor = torch.FloatTensor(inputtensor)
    labeltensor = np.array(labeltensor)
    labeltensor = labeltensor.astype(float)
    labeltensor = torch.LongTensor(labeltensor)
    return inputtensor, labeltensor

def addbatch(data_train, data_test, batchsize):
    data = TensorDataset(data_train, data_test)
    data_loader = DataLoader(data, batch_size=batchsize, shuffle=False)
    return data_loader

# Define the neural network model
class Net(nn.Module):
    def __init__(self, n_channels=63, n_classes=5, dropout_probability=0.2):
        super(Net, self).__init__()
        self.n_channels = n_channels
        self.n_classes = n_classes
        self.dropout_probability = dropout_probability
        # High branch: three cascaded convolutions with kernel size 7
        self.all_conv_high = torch.nn.ModuleList([torch.nn.Sequential(
            torch.nn.Conv1d(in_channels=1, out_channels=8, kernel_size=7, padding=3),
            torch.nn.ReLU(),
            torch.nn.AvgPool1d(2),
            torch.nn.Conv1d(in_channels=8, out_channels=4, kernel_size=7, padding=3),
            torch.nn.ReLU(),
            torch.nn.AvgPool1d(2),
            torch.nn.Conv1d(in_channels=4, out_channels=4, kernel_size=7, padding=3),
            torch.nn.ReLU(),
            torch.nn.Dropout(p=self.dropout_probability),
            torch.nn.AvgPool1d(2)
        )])
        # Low branch: three cascaded convolutions with kernel size 3
        self.all_conv_low = torch.nn.ModuleList([torch.nn.Sequential(
            torch.nn.Conv1d(in_channels=1, out_channels=8, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.AvgPool1d(2),
            torch.nn.Conv1d(in_channels=8, out_channels=4, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.AvgPool1d(2),
            torch.nn.Conv1d(in_channels=4, out_channels=4, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.Dropout(p=self.dropout_probability),
            torch.nn.AvgPool1d(2)
        )])
        # Residual branch: three 1-D average-pooling layers
        self.all_residual = torch.nn.ModuleList([torch.nn.Sequential(
            torch.nn.AvgPool1d(2),
            torch.nn.AvgPool1d(2),
            torch.nn.AvgPool1d(2)
        )])
        self.fc = torch.nn.Sequential(
            torch.nn.Linear(in_features=9 * 7, out_features=512),
            torch.nn.ReLU(),
            torch.nn.Linear(in_features=512, out_features=n_classes)
        )
        # Xavier initialization for all convolutional and linear layers
        for module in itertools.chain(self.all_conv_high, self.all_conv_low, self.all_residual):
            for layer in module:
                if layer.__class__.__name__ == "Conv1d":
                    torch.nn.init.xavier_uniform_(layer.weight, gain=torch.nn.init.calculate_gain('relu'))
                    torch.nn.init.constant_(layer.bias, 0.1)
        for layer in self.fc:
            if layer.__class__.__name__ == "Linear":
                torch.nn.init.xavier_uniform_(layer.weight, gain=torch.nn.init.calculate_gain('relu'))
                torch.nn.init.constant_(layer.bias, 0.1)

    def forward(self, input):
        input = input.unsqueeze(1)
        high = self.all_conv_high[0](input)
        low = self.all_conv_low[0](input)
        ap_residual = self.all_residual[0](input)
        # The branch outputs are concatenated along the feature-map axis
        output = torch.cat([high, low, ap_residual], dim=1)
        N, C, F = output.size()
        output = self.fc(output.view(N, C * F))
        return output

def train_test(traininput, trainlabel, testinput, testlabel, batchsize):
    traindata = addbatch(traininput, trainlabel, batchsize)
    maxacc = 0
    start = time.time()
    for epoch in range(101):
        for step, data in enumerate(traindata):
            net.train()
            inputs, labels = data
            # Forward pass
            out = net(inputs)
            # Compute the loss
            loss = loss_func(out, labels)
            # Clear the gradients from the previous step
            optimizer.zero_grad()
            # Backward pass
            loss.backward()
            # Update the parameters
            optimizer.step()
        # Evaluate accuracy on the held-out split
        net.eval()
        testout = net(testinput)
        testloss = loss_func(testout, testlabel)
        prediction = torch.max(testout, 1)[1]
        pred_y = prediction.numpy()
        target_y = testlabel.data.numpy()
        j = 0
        for i in range(pred_y.size):
            if pred_y[i] == target_y[i]:
                j += 1
        acc = j / pred_y.size
        if epoch % 10 == 0:
            print("Accuracy at epoch", epoch, ":", acc)
        if acc > maxacc:
            torch.save(net.state_dict(), "model.pt", _use_new_zipfile_serialization=False)
            print('save ' + str(acc))
            maxacc = acc
    end = time.time()
    print(end - start)

if __name__ == "__main__":
    feature, label = load_data('data')
    split = 0.2    # fraction of the dataset held out for testing
    ifshuffle = 1  # 1 shuffles the dataset, 0 keeps the original order
    x_train, x_test, y_train, y_test = split_dataset(feature, label, split, ifshuffle)
    traininput, trainlabel = inputtotensor(x_train, y_train)
    testinput, testlabel = inputtotensor(x_test, y_test)
    traininput = nn.functional.normalize(traininput)
    testinput = nn.functional.normalize(testinput)
    LR = 0.001
    batchsize = 2
    net = Net()
    optimizer = torch.optim.Adam(net.parameters(), LR)
    loss_func = torch.nn.CrossEntropyLoss()
    train_test(traininput, trainlabel, testinput, testlabel, batchsize)

    # Evaluate the saved model on the full dataset and plot a confusion matrix
    input, label = inputtotensor(feature, label)
    input = nn.functional.normalize(input)
    model = Net()
    model.load_state_dict(torch.load("model.pt"))
    model.eval()
    output = model(input)
    pred = torch.max(output, 1)[1]
    C = confusion_matrix(label, pred, labels=[0, 1, 2, 3, 4])
    plt.matshow(C, cmap=plt.cm.Reds)
    for i in range(len(C)):
        for j in range(len(C)):
            plt.annotate(C[j, i], xy=(i, j), horizontalalignment='center', verticalalignment='center')
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.show()

After training completes, the model with the best validation accuracy is saved as model.pt.

4. Model Deployment and Motion/Light Control

Upload model.pt to the car and run the following code to deploy it:

import traitlets
from IPython.display import display
import ipywidgets.widgets as widgets
from jetbot import Camera, bgr8_to_jpeg

# camera = Camera.instance(width=224, height=224)
camera = Camera.instance(width=224, height=224, fps=20)
image = widgets.Image(format='jpeg', width=224, height=224)
camera_link = traitlets.dlink((camera, 'value'), (image, 'value'), transform=bgr8_to_jpeg)
display(widgets.HBox([image]))

import cv2
import mediapipe as mp
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import itertools

def data_calculate(image):
    # extract the 63 landmark values from one camera frame;
    # if no hand is detected, multi_hand_landmarks is None and the
    # resulting exception is handled by the caller
    mp_hands = mp.solutions.hands
    hands = mp_hands.Hands(static_image_mode=True,
                           max_num_hands=2,
                           min_detection_confidence=0.5,
                           min_tracking_confidence=0.5)
    img = cv2.flip(image, 1)
    # BGR to RGB
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    input = []
    results = hands.process(img)
    for i in results.multi_hand_landmarks[0].landmark:
        input.extend([i.x, i.y, i.z])
    return input

def inputtotensor(inputtensor):
    inputtensor = np.array(inputtensor)
    inputtensor = torch.FloatTensor(inputtensor)
    return inputtensor

# Same network definition as in Section 3
class Net(nn.Module):
    def __init__(self, n_channels=63, n_classes=5, dropout_probability=0.2):
        super(Net, self).__init__()
        self.n_channels = n_channels
        self.n_classes = n_classes
        self.dropout_probability = dropout_probability
        self.all_conv_high = torch.nn.ModuleList([torch.nn.Sequential(
            torch.nn.Conv1d(in_channels=1, out_channels=8, kernel_size=7, padding=3),
            torch.nn.ReLU(),
            torch.nn.AvgPool1d(2),
            torch.nn.Conv1d(in_channels=8, out_channels=4, kernel_size=7, padding=3),
            torch.nn.ReLU(),
            torch.nn.AvgPool1d(2),
            torch.nn.Conv1d(in_channels=4, out_channels=4, kernel_size=7, padding=3),
            torch.nn.ReLU(),
            torch.nn.Dropout(p=self.dropout_probability),
            torch.nn.AvgPool1d(2)
        )])
        self.all_conv_low = torch.nn.ModuleList([torch.nn.Sequential(
            torch.nn.Conv1d(in_channels=1, out_channels=8, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.AvgPool1d(2),
            torch.nn.Conv1d(in_channels=8, out_channels=4, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.AvgPool1d(2),
            torch.nn.Conv1d(in_channels=4, out_channels=4, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.Dropout(p=self.dropout_probability),
            torch.nn.AvgPool1d(2)
        )])
        self.all_residual = torch.nn.ModuleList([torch.nn.Sequential(
            torch.nn.AvgPool1d(2),
            torch.nn.AvgPool1d(2),
            torch.nn.AvgPool1d(2)
        )])
        self.fc = torch.nn.Sequential(
            torch.nn.Linear(in_features=9 * 7, out_features=512),
            torch.nn.ReLU(),
            torch.nn.Linear(in_features=512, out_features=n_classes)
        )
        for module in itertools.chain(self.all_conv_high, self.all_conv_low, self.all_residual):
            for layer in module:
                if layer.__class__.__name__ == "Conv1d":
                    torch.nn.init.xavier_uniform_(layer.weight, gain=torch.nn.init.calculate_gain('relu'))
                    torch.nn.init.constant_(layer.bias, 0.1)
        for layer in self.fc:
            if layer.__class__.__name__ == "Linear":
                torch.nn.init.xavier_uniform_(layer.weight, gain=torch.nn.init.calculate_gain('relu'))
                torch.nn.init.constant_(layer.bias, 0.1)

    def forward(self, input):
        input = input.unsqueeze(1)
        high = self.all_conv_high[0](input)
        low = self.all_conv_low[0](input)
        ap_residual = self.all_residual[0](input)
        # The branch outputs are concatenated along the feature-map axis
        output = torch.cat([high, low, ap_residual], dim=1)
        N, C, F = output.size()
        output = self.fc(output.view(N, C * F))
        return output

def preprocess(x):
    x = data_calculate(x)
    x = inputtotensor(x)
    x = x.view(1, 63)
    x = nn.functional.normalize(x)
    return x

model = Net()
model.load_state_dict(torch.load("model.pt"))
model.eval()

Next, motion and light control are executed according to the classification result. Because the board's compute is limited, first disconnect the live camera display stream to reduce stuttering during inference:

camera_link.unlink() 

Initialize the motors, RGB lights, and other peripherals:

from jetbot import Robot
robot = Robot()

from RGB_Lib import Programing_RGB
RGB = Programing_RGB()

import RPi.GPIO as GPIO
BEEP_pin = 6
GPIO.setmode(GPIO.BCM)
# set pin as an output pin with optional initial state of HIGH
GPIO.setup(BEEP_pin, GPIO.OUT, initial=GPIO.LOW)

import torch.nn.functional as F
import time

def update(change):
    global robot
    t1 = time.time()
    x = change['new']
    try:
        x = preprocess(x)
        output = model(x)
        y = torch.max(output, 1)[1]
        print(y)
        if y == 0:  # stop
            robot.stop()
            GPIO.output(BEEP_pin, GPIO.LOW)
            RGB.Set_ChameleonLight_RGB()
            RGB.OFF_ALL_RGB()
        if y == 1:  # forward
            robot.forward(0.4)
            GPIO.output(BEEP_pin, GPIO.LOW)
            RGB.Set_BreathSColor_RGB(2)
            RGB.Set_BreathSSpeed_RGB(1)
            RGB.Set_BreathSLight_RGB()
        if y == 2:  # backward
            robot.backward(0.4)
            RGB.OFF_ALL_RGB()
            GPIO.output(BEEP_pin, GPIO.LOW)
            RGB.Set_An_RGB(4, 0xFF, 0x00, 0x00)
        if y == 3:  # turn left
            robot.left(0.5)
            RGB.OFF_ALL_RGB()
            GPIO.output(BEEP_pin, GPIO.LOW)
            RGB.Set_An_RGB(9, 0xFF, 0x00, 0x00)
        if y == 4:  # turn right
            robot.right(0.5)
            GPIO.output(BEEP_pin, GPIO.LOW)
            RGB.Set_All_RGB(0xFF, 0x00, 0x00)
    except:
        # no hand detected in the frame: stop the car
        robot.stop()
    time.sleep(0.5)

update({'new': camera.value})  # we call the function once to initialize
camera.observe(update, names='value')
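To stop gesture control cleanly, detach the callback and halt the robot; this sketch uses unobserve, the standard traitlets counterpart of observe:

import time

camera.unobserve(update, names='value')  # stop invoking update on new frames
time.sleep(0.5)                          # let any in-flight callback finish
robot.stop()
RGB.OFF_ALL_RGB()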

Motion control relies on the car's Robot class, which already integrates the basic actions: robot.stop(), robot.forward(), robot.backward(), robot.left(), and robot.right() control stopping, moving forward, moving backward, turning left, and turning right.
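For example, a minimal sketch of a timed maneuver using only the calls above (speeds between 0 and 1, matching the values used in update; the durations are arbitrary):

import time
from jetbot import Robot

robot = Robot()
robot.forward(0.4)  # drive forward at 40% speed...
time.sleep(1.0)     # ...for one second
robot.left(0.5)     # spin left at 50% speed
time.sleep(0.5)
robot.stop()        # always stop when done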

Light control relies on the RGB_Lib file, whose contents are shown below; the purpose of each function is described in the comments:

import Adafruit_GPIO as GPIO

class Programing_RGB(object):
    # Get an I2C device instance
    def get_i2c_device(self, address, i2c, i2c_bus):
        # If an i2c instance was provided, use it to get the device
        if i2c is not None:
            return i2c.get_i2c_device(address)
        else:
            # Otherwise import Adafruit_GPIO.I2C and get the device on the given bus
            import Adafruit_GPIO.I2C as I2C
            if i2c_bus is None:
                return I2C.get_i2c_device(address)
            else:
                return I2C.get_i2c_device(address, busnum=i2c_bus)

    # Constructor: set up the default I2C device
    def __init__(self):
        # Create the I2C device instance on bus 1 at address 0x1b
        self._device = self.get_i2c_device(0x1b, None, 1)

    # Set the RGB value of all lights
    def Set_All_RGB(self, R_Value, G_Value, B_Value):
        # Try to write the RGB values to the I2C device, reporting any I2C error
        try:
            # Select all lights
            self._device.write8(0x00, 0xFF)
            # Set the red, green, and blue values respectively
            self._device.write8(0x01, R_Value)
            self._device.write8(0x02, G_Value)
            self._device.write8(0x03, B_Value)
        except:
            print('Set_All_RGB I2C error')

    # Turn off all RGB lights
    def OFF_ALL_RGB(self):
        try:
            # Setting all RGB values to 0 turns the lights off
            self.Set_All_RGB(0x00, 0x00, 0x00)
        except:
            print('OFF_ALL_RGB I2C error')

    # Set a single RGB light
    def Set_An_RGB(self, Position, R_Value, G_Value, B_Value):
        try:
            # Check that the position value is valid
            if Position <= 0x09:
                # Set the color of the light at that position
                self._device.write8(0x00, Position)
                self._device.write8(0x01, R_Value)
                self._device.write8(0x02, G_Value)
                self._device.write8(0x03, B_Value)
        except:
            print('Set_An_RGB I2C error')

    # Waterfall light effect
    def Set_WaterfallLight_RGB(self):
        try:
            self._device.write8(0x04, 0x00)
        except:
            print('Set_WaterfallLight_RGB I2C error')

    # Breathing light with cycling colors
    def Set_BreathColor_RGB(self):
        try:
            self._device.write8(0x04, 0x01)
        except:
            print('Set_BreathColor_RGB I2C error')

    # Chameleon light effect
    def Set_ChameleonLight_RGB(self):
        try:
            self._device.write8(0x04, 0x02)
        except:
            print('Set_ChameleonLight_RGB I2C error')

    # Set the breathing light's color
    def Set_BreathSColor_RGB(self, color):
        # The color value should be in the range 0-6
        try:
            self._device.write8(0x05, color)
        except:
            print('Set_BreathSColor_RGB I2C error')

    # Set the breathing light's speed
    def Set_BreathSSpeed_RGB(self, speed):
        try:
            self._device.write8(0x06, speed)
        except:
            print('Set_BreathSSpeed_RGB I2C error')

    # Single-color breathing light effect
    def Set_BreathSLight_RGB(self):
        try:
            self._device.write8(0x04, 0x03)
        except:
            print('Set_BreathSLight_RGB I2C error')

With these functions, various light effects can be set: all on, all off, individual control, waterfall, breathing, chameleon, and so on.
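A short usage sketch cycling through these effects; it only calls the methods defined above, and the timings are arbitrary:

import time
from RGB_Lib import Programing_RGB

RGB = Programing_RGB()
RGB.Set_All_RGB(0x00, 0xFF, 0x00)    # all lights green
time.sleep(2)
RGB.Set_An_RGB(4, 0xFF, 0x00, 0x00)  # light 4 red
time.sleep(2)
RGB.Set_WaterfallLight_RGB()         # waterfall effect
time.sleep(2)
RGB.Set_BreathSColor_RGB(2)          # breathing effect: color 2...
RGB.Set_BreathSSpeed_RGB(1)          # ...at speed 1
RGB.Set_BreathSLight_RGB()
time.sleep(2)
RGB.OFF_ALL_RGB()                    # all off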

Appendix: Environment Versions

absl-py                       0.9.0
Adafruit-GPIO                 1.0.4
Adafruit-MotorHAT             1.4.0
Adafruit-PureIO               0.2.3
Adafruit-SSD1306              1.6.2
apt-clone                     0.2.1
apturl                        0.5.2
asn1crypto                    0.24.0
astor                         0.8.1
attrs                         19.3.0
backcall                      0.1.0
beautifulsoup4                4.6.0
bleach                        3.1.4
blinker                       1.4
Brlapi                        0.6.6
certifi                       2018.1.18
chardet                       3.0.4
click                         7.1.1
colorama                      0.3.7
cryptography                  2.1.4
cupshelpers                   1.0
dataclasses                   0.8
decorator                     4.4.2
defer                         1.0.6
defusedxml                    0.6.0
distro-info                   0.18ubuntu0.18.04.1
entrypoints                   0.3
feedparser                    5.2.1
Flask                         1.1.1
funcsigs                      1.0.2
gast                          0.3.3
google-pasta                  0.2.0
graphsurgeon                  0.4.1
grpcio                        1.27.2
h5py                          2.10.0
html5lib                      0.999999999
httplib2                      0.9.2
idna                          2.6
importlib-metadata            1.6.0
ipykernel                     5.2.0
ipython                       7.13.0
ipython-genutils              0.2.0
ipywidgets                    7.5.1
itsdangerous                  1.1.0
jedi                          0.16.0
jetbot                        0.3.0
Jetson.GPIO                   2.0.4
jetson-stats                  2.0.0
Jinja2                        2.11.1
json5                         0.9.4
jsonschema                    3.2.0
jupyter                       1.0.0
jupyter-client                6.1.2
jupyter-console               6.1.0
jupyter-core                  4.6.3
jupyterlab                    2.0.1
jupyterlab-server             1.0.7
Keras-Applications            1.0.8
Keras-Preprocessing           1.1.0
keyring                       10.6.0
keyrings.alt                  3.0
language-selector             0.1
launchpadlib                  1.10.6
lazr.restfulclient            0.13.5
lazr.uri                      1.0.3
louis                         3.5.0
lxml                          4.2.1
macaroonbakery                1.1.3
Mako                          1.0.7
Markdown                      3.2.1
MarkupSafe                    1.0
mediapipe                     0.8
mistune                       0.8.4
mock                          4.0.2
nbconvert                     5.6.1
nbformat                      5.0.4
nodejs                        0.1.1
notebook                      6.0.3
numpy                         1.19.4
oauth                         1.0.1
oauthlib                      2.0.6
optional-django               0.1.0
PAM                           0.4.2
pandocfilters                 1.4.2
parso                         0.6.2
pbr                           5.4.4
pexpect                       4.8.0
pickleshare                   0.7.5
Pillow                        5.2.0
pip                           21.3.1
portpicker                    1.3.1
prometheus-client             0.7.1
prompt-toolkit                3.0.5
protobuf                      3.19.6
psutil                        5.7.0
ptyprocess                    0.6.0
py-cpuinfo                    5.0.0
pycairo                       1.16.2
pycrypto                      2.6.1
pycups                        1.9.73
Pygments                      2.6.1
PyGObject                     3.26.1
PyICU                         1.9.8
PyJWT                         1.5.3
pymacaroons                   0.13.0
PyNaCl                        1.1.2
pyRFC3339                     1.0
pyrsistent                    0.16.0
pyserial                      3.4
python-apt                    1.6.6
python-dateutil               2.6.1
python-debian                 0.1.32
pytz                          2018.3
pyxdg                         0.25
PyYAML                        3.12
pyzmq                         19.0.0
qtconsole                     4.7.2
QtPy                          1.9.0
requests                      2.23.0
requests-unixsocket           0.1.5
SecretStorage                 2.3.1
Send2Trash                    1.5.0
setuptools                    46.1.3
simplejson                    3.13.2
six                           1.14.0
spidev                        3.4
ssh-import-id                 5.7
system-service                0.3
systemd-python                234
tensorboard                   1.13.1
tensorflow-estimator          1.13.0
tensorflow-gpu                1.13.1+nv19.3
tensorrt                      6.0.1.10
termcolor                     1.1.0
terminado                     0.8.3
testpath                      0.4.4
testresources                 2.0.1
torch                         1.0.0a0+18eef1d
torchvision                   0.2.2.post3
tornado                       6.0.4
traitlets                     5.0.0.dev0
ubuntu-drivers-common         0.0.0
ubuntu-pro-client             8001
uff                           0.6.5
unity-scope-calculator        0.1
unity-scope-chromiumbookmarks 0.1
unity-scope-colourlovers      0.1
unity-scope-devhelp           0.1
unity-scope-firefoxbookmarks  0.1
unity-scope-manpages          0.1
unity-scope-openclipart       0.1
unity-scope-texdoc            0.1
unity-scope-tomboy            0.1
unity-scope-virtualbox        0.1
unity-scope-yelp              0.1
unity-scope-zotero            0.1
urllib3                       1.22
wadllib                       1.3.2
wcwidth                       0.1.9
webencodings                  0.5
Werkzeug                      1.0.0
wheel                         0.30.0
widgetsnbextension            3.5.1
wrapt                         1.12.1
xkit                          0.0.0
zipp                          3.1.0
zope.interface                4.3.2
