Building a Desktop Facial Motion Capture App with react-webcam, three.js, and Electron

Series

  1. React: loading glTF 3D models with three.js | a three.js primer
  2. React + three.js: rigging a 3D model's skeleton
  3. React + three.js: controlling a 3D model's facial expressions
  4. React + three.js: syncing live face capture to a 3D model's expressions
  5. Building a desktop facial motion capture app with react-webcam, three.js, and Electron

Example project (GitHub): https://github.com/couchette/simple-facial-expression-sync-desktop-app-demo

Table of Contents

  • Series
  • Preface
  • I. Preparation
    • 1. Create the project
    • 2. Install dependencies
    • 3. Local asset configuration
      • Download the AI model
      • Download the wasm runtime files
      • Copy the transcoder files
      • Download the 3D model
      • Configure the CSP to allow wasm
      • Update the webpack configuration
  • II. Implementation
    • 1. Create the detector
    • 2. Implement the WebCam component
    • 3. Create the 3D model render container
    • 4. Combine the three into a single page
  • III. Final result
  • Conclusion


Preface

Building a desktop-grade facial motion capture application is a challenging but rewarding task. This article shows how react-webcam, Three.js, and Electron can be combined into a capable desktop facial motion capture app.

We will walk through the whole process: setting up the base environment, capturing the face from a webcam, and driving a rendered 3D head, ending with a fully working desktop application. Whether you want to understand how these pieces fit together or just want to build your own motion capture tool quickly, this walkthrough aims to give you both the reasoning and the practice.


I. Preparation

1. Create the project

As described in the earlier article in this series on auto-updating cross-platform apps with GitHub and electron-react-boilerplate, create the project from the electron-react-boilerplate template. Here the fork below is used as the template project; it adds an Electron setup script (for configuring mirror registries) and a few home-grown components. If you prefer, build your own template project instead.

https://github.com/couchette/electron-react-boilerplate

2. Install dependencies

npm i
npm i react-webcam antd three @mediapipe/tasks-vision
npm i -D copy-webpack-plugin

3. Local asset configuration

Download the AI model

Download the model into the assets/ai_models directory from:

https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task

Download the wasm runtime files

Download the wasm directory into the assets/fileset_resolver directory from:

https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@0.10.0/wasm

Copy the transcoder files

Copy the node_modules/three/examples/jsm/libs/basis directory into the assets directory.

Download the 3D model

Download the 3D face model into the assets/models directory from:

https://github.com/mrdoob/three.js/blob/master/examples/models/gltf/facecap.glb

Configure the CSP to allow wasm

Modify src/renderer/index.ejs as follows. (This triggers a CSP security warning; I currently know of no other way to use wasm inside electron-react-boilerplate. If you know one, please share how to configure it.)

<!doctype html>
<html>
  <head>
    <meta charset="utf-8" />
    <meta
      http-equiv="Content-Security-Policy"
      content="script-src 'self' 'unsafe-inline' 'unsafe-eval'; worker-src 'self' blob:;"
    />
    <title>emberface</title>
  </head>
  <body>
    <div id="root"></div>
  </body>
</html>
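As an aside, recent Chromium versions support the narrower 'wasm-unsafe-eval' CSP source keyword, which permits WebAssembly compilation without also enabling JavaScript eval. Whether it works here depends on the Chromium version your Electron build ships, so treat the following variant as an untested alternative rather than the configuration used in this demo:

<meta
  http-equiv="Content-Security-Policy"
  content="script-src 'self' 'unsafe-inline' 'wasm-unsafe-eval'; worker-src 'self' blob:;"
/>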

Update the webpack configuration

Modify .erb/configs/webpack.config.base.ts as follows:

/**
 * Base webpack config used across other specific configs
 */

import path from 'path';
import webpack from 'webpack';
import TsconfigPathsPlugins from 'tsconfig-paths-webpack-plugin';
import CopyPlugin from 'copy-webpack-plugin';
import webpackPaths from './webpack.paths';
import { dependencies as externals } from '../../release/app/package.json';

const configuration: webpack.Configuration = {
  externals: [...Object.keys(externals || {})],

  stats: 'errors-only',

  module: {
    rules: [
      {
        test: /\.[jt]sx?$/,
        exclude: /node_modules/,
        use: {
          loader: 'ts-loader',
          options: {
            // Remove this line to enable type checking in webpack builds
            transpileOnly: true,
            compilerOptions: {
              module: 'esnext',
            },
          },
        },
      },
    ],
  },

  output: {
    path: webpackPaths.srcPath,
    // https://github.com/webpack/webpack/issues/1114
    library: {
      type: 'commonjs2',
    },
  },

  /**
   * Determine the array of extensions that should be used to resolve modules.
   */
  resolve: {
    extensions: ['.js', '.jsx', '.json', '.ts', '.tsx'],
    modules: [webpackPaths.srcPath, 'node_modules'],
    // There is no need to add aliases here, the paths in tsconfig get mirrored
    plugins: [new TsconfigPathsPlugins()],
  },

  plugins: [
    new CopyPlugin({
      patterns: [
        {
          from: path.join(webpackPaths.rootPath, 'assets', 'models'),
          to: path.join(webpackPaths.distRendererPath, 'models'),
        },
        {
          from: path.join(webpackPaths.rootPath, 'assets', 'ai_models'),
          to: path.join(webpackPaths.distRendererPath, 'ai_models'),
        },
        {
          from: path.join(webpackPaths.rootPath, 'assets', 'basis'),
          to: path.join(webpackPaths.distRendererPath, 'basis'),
        },
        {
          from: path.join(webpackPaths.rootPath, 'assets', 'fileset_resolver'),
          to: path.join(webpackPaths.distRendererPath, 'fileset_resolver'),
        },
      ],
    }),
    new webpack.EnvironmentPlugin({
      NODE_ENV: 'production',
    }),
  ],
};

export default configuration;
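The four CopyPlugin patterns are what make the relative paths used later in the renderer code resolve at runtime: assets/models, assets/ai_models, assets/basis, and assets/fileset_resolver are each copied next to the renderer bundle, which is why 'models/facecap.glb', 'ai_models/face_landmarker.task', 'fileset_resolver/wasm', and the '/basis/' transcoder path all load without any absolute URLs.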

II. Implementation

1. Create the detector

Create a Detector singleton class in src/renderer/components/Detector.js that wraps the model's initialization and inference methods.

import { FaceLandmarker, FilesetResolver } from '@mediapipe/tasks-vision';

export default class Detector {
  constructor() {}

  // Static accessor for the singleton instance
  static async getInstance() {
    if (!Detector.instance) {
      Detector.instance = new Detector();
      Detector.instance.results = null;
      Detector.instance.video = null;
      const filesetResolver = await FilesetResolver.forVisionTasks(
        // 'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@0.10.0/wasm',
        'fileset_resolver/wasm',
      );
      Detector.instance.faceLandmarker = await FaceLandmarker.createFromOptions(
        filesetResolver,
        {
          baseOptions: {
            modelAssetPath:
              // 'https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task',
              'ai_models/face_landmarker.task',
            delegate: 'GPU',
          },
          outputFaceBlendshapes: true,
          outputFacialTransformationMatrixes: true,
          runningMode: 'VIDEO',
          numFaces: 1,
        },
      );
    }
    return Detector.instance;
  }

  bindVideo(video) {
    this.video = video;
  }

  detect() {
    return this.faceLandmarker.detectForVideo(this.video, Date.now());
  }

  detectLoop() {
    this.results = this.detect();
    // Schedule the next detection on the next frame; calling detectLoop()
    // synchronously here would recurse without bound and overflow the stack.
    requestAnimationFrame(() => this.detectLoop());
  }

  run() {
    if (!this.video.srcObject) throw new Error('Video bind is null');
    this.detectLoop();
  }
}
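For reference, a minimal usage sketch; videoElement stands in for the <video> element that react-webcam renders, and getInstance() resolves only after the wasm runtime and the .task model have loaded:

// Minimal sketch (inside an async function); not one of the demo's components.
const detector = await Detector.getInstance();
detector.bindVideo(videoElement); // videoElement is a playing <video>
const results = detector.detect();
console.log(results.faceBlendshapes, results.facialTransformationMatrixes);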

2. Implement the WebCam component

Implement the webcam functionality with react-webcam in src/renderer/components/WebCam.js:

import React, { useState, useRef, useEffect } from 'react';
import { Button, Select, Layout } from 'antd';
import Webcam from 'react-webcam';

const { Option } = Select;
const { Content } = Layout;

function WebCam({
  isShowCanvas,
  canvasHeight,
  canvasWidth,
  stream,
  setStream,
  videoRef,
  loaded,
  setLoaded,
}) {
  const [isStartCamButtonLoading, setIsStartCamButtonLoading] = useState(false);
  const [selectedDevice, setSelectedDevice] = useState(null);
  const [devices, setDevices] = useState([]);

  const handleVideoLoad = (videoNode) => {
    const video = videoNode.target;
    videoRef.current = videoNode.target;
    if (video.readyState !== 4) return;
    if (loaded) return;
    setLoaded(true);
    setIsStartCamButtonLoading(false);
    console.log('loaded');
  };

  // Enumerate the available video input devices
  const getVideoDevices = async () => {
    const devicesInfo = await navigator.mediaDevices.enumerateDevices();
    const videoDevices = devicesInfo.filter(
      (device) => device.kind === 'videoinput',
    );
    setDevices(videoDevices);
  };

  // Switch the active camera device
  const handleDeviceChange = (deviceId) => {
    setSelectedDevice(deviceId);
  };

  // Start the camera
  const startCamera = async () => {
    setIsStartCamButtonLoading(true);
    if (!loaded) {
      const constraints = {
        video: {
          deviceId: selectedDevice ? { exact: selectedDevice } : undefined,
        },
      };
      const tempStream = await navigator.mediaDevices.getUserMedia(constraints);
      setStream(tempStream);
    }
  };

  // Stop the camera
  const stopCamera = () => {
    if (stream) {
      stream.getTracks().forEach((track) => track.stop());
      setStream(null);
      setSelectedDevice(null);
      setLoaded(false);
    }
  };

  // Fetch the device list on mount
  useEffect(() => {
    getVideoDevices();
  }, []);

  return (
    <Content
      style={{
        display: 'flex',
        flexDirection: 'column',
        alignItems: 'center',
        justifyContent: 'center',
      }}
    >
      <Select
        placeholder="Select camera"
        style={{ width: '150px', marginBottom: 16 }}
        onChange={handleDeviceChange}
      >
        {devices.map((device) => (
          <Option key={device.deviceId} value={device.deviceId}>
            {device.label || `Camera ${device.deviceId}`}
          </Option>
        ))}
      </Select>
      <div
        style={{
          display: 'flex',
          flexDirection: 'row',
          alignContent: 'center',
          justifyContent: 'center',
        }}
      >
        <Button onClick={startCamera} loading={isStartCamButtonLoading}>
          Start camera
        </Button>
        <Button onClick={stopCamera} style={{ marginLeft: 8 }}>
          Stop camera
        </Button>
      </div>
      <div
        style={{
          height: `${canvasHeight}px`,
          width: `${canvasWidth}px`,
          margin: '10px',
          position: 'relative',
        }}
      >
        {stream && (
          <Webcam
            style={{
              visibility: isShowCanvas ? 'visible' : 'hidden',
              position: 'absolute',
              top: '0',
              bottom: '0',
              left: '0',
              right: '0',
            }}
            width={canvasWidth}
            height={canvasHeight}
            videoConstraints={{
              deviceId: selectedDevice ? { exact: selectedDevice } : undefined,
            }}
            onLoadedData={handleVideoLoad}
          />
        )}
      </div>
    </Content>
  );
}

export default WebCam;
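One caveat: enumerateDevices() returns empty label strings until the user has granted camera permission, so the dropdown may only show generic "Camera <deviceId>" entries on first launch. A common workaround, sketched below under the assumption that prompting for the camera immediately is acceptable UX, is to request a throwaway stream before enumerating:

// Sketch only; the demo above does not do this.
async function getVideoDevicesWithLabels() {
  // Asking for any camera first triggers the permission prompt,
  // after which device labels become readable.
  const probe = await navigator.mediaDevices.getUserMedia({ video: true });
  probe.getTracks().forEach((track) => track.stop()); // release the camera
  const devicesInfo = await navigator.mediaDevices.enumerateDevices();
  return devicesInfo.filter((device) => device.kind === 'videoinput');
}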

3. Create the 3D model render container

Create the 3D model rendering component in src/renderer/components/ThreeContainer.js:

import * as THREE from 'three';
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';
import { RoomEnvironment } from 'three/addons/environments/RoomEnvironment.js';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';
import { KTX2Loader } from 'three/addons/loaders/KTX2Loader.js';
import { MeshoptDecoder } from 'three/addons/libs/meshopt_decoder.module.js';
import { GUI } from 'three/addons/libs/lil-gui.module.min.js';
import { useRef, useEffect, useState } from 'react';

// Mediapipe blendshape names -> facecap.glb morph target names
const blendshapesMap = {
  // '_neutral': '',
  browDownLeft: 'browDown_L',
  browDownRight: 'browDown_R',
  browInnerUp: 'browInnerUp',
  browOuterUpLeft: 'browOuterUp_L',
  browOuterUpRight: 'browOuterUp_R',
  cheekPuff: 'cheekPuff',
  cheekSquintLeft: 'cheekSquint_L',
  cheekSquintRight: 'cheekSquint_R',
  eyeBlinkLeft: 'eyeBlink_L',
  eyeBlinkRight: 'eyeBlink_R',
  eyeLookDownLeft: 'eyeLookDown_L',
  eyeLookDownRight: 'eyeLookDown_R',
  eyeLookInLeft: 'eyeLookIn_L',
  eyeLookInRight: 'eyeLookIn_R',
  eyeLookOutLeft: 'eyeLookOut_L',
  eyeLookOutRight: 'eyeLookOut_R',
  eyeLookUpLeft: 'eyeLookUp_L',
  eyeLookUpRight: 'eyeLookUp_R',
  eyeSquintLeft: 'eyeSquint_L',
  eyeSquintRight: 'eyeSquint_R',
  eyeWideLeft: 'eyeWide_L',
  eyeWideRight: 'eyeWide_R',
  jawForward: 'jawForward',
  jawLeft: 'jawLeft',
  jawOpen: 'jawOpen',
  jawRight: 'jawRight',
  mouthClose: 'mouthClose',
  mouthDimpleLeft: 'mouthDimple_L',
  mouthDimpleRight: 'mouthDimple_R',
  mouthFrownLeft: 'mouthFrown_L',
  mouthFrownRight: 'mouthFrown_R',
  mouthFunnel: 'mouthFunnel',
  mouthLeft: 'mouthLeft',
  mouthLowerDownLeft: 'mouthLowerDown_L',
  mouthLowerDownRight: 'mouthLowerDown_R',
  mouthPressLeft: 'mouthPress_L',
  mouthPressRight: 'mouthPress_R',
  mouthPucker: 'mouthPucker',
  mouthRight: 'mouthRight',
  mouthRollLower: 'mouthRollLower',
  mouthRollUpper: 'mouthRollUpper',
  mouthShrugLower: 'mouthShrugLower',
  mouthShrugUpper: 'mouthShrugUpper',
  mouthSmileLeft: 'mouthSmile_L',
  mouthSmileRight: 'mouthSmile_R',
  mouthStretchLeft: 'mouthStretch_L',
  mouthStretchRight: 'mouthStretch_R',
  mouthUpperUpLeft: 'mouthUpperUp_L',
  mouthUpperUpRight: 'mouthUpperUp_R',
  noseSneerLeft: 'noseSneer_L',
  noseSneerRight: 'noseSneer_R',
  // '': 'tongueOut'
};

function ThreeContainer({ videoHeight, videoWidth, detectorRef }) {
  const containerRef = useRef(null);
  const isContainerRunning = useRef(false);

  useEffect(() => {
    if (!isContainerRunning.current && containerRef.current) {
      isContainerRunning.current = true;
      init();
    }

    async function init() {
      const renderer = new THREE.WebGLRenderer({ antialias: true });
      renderer.setPixelRatio(window.devicePixelRatio);
      renderer.setSize(window.innerWidth, window.innerHeight);
      renderer.toneMapping = THREE.ACESFilmicToneMapping;
      const element = renderer.domElement;
      containerRef.current.appendChild(element);

      const camera = new THREE.PerspectiveCamera(
        60,
        window.innerWidth / window.innerHeight,
        1,
        100,
      );
      camera.position.z = 5;

      const scene = new THREE.Scene();
      scene.scale.x = -1; // mirror the scene so the avatar matches the user

      const environment = new RoomEnvironment(renderer);
      const pmremGenerator = new THREE.PMREMGenerator(renderer);
      scene.background = new THREE.Color(0x666666);
      scene.environment = pmremGenerator.fromScene(environment).texture;

      // const controls = new OrbitControls(camera, renderer.domElement);

      // Face
      let face, eyeL, eyeR;
      const eyeRotationLimit = THREE.MathUtils.degToRad(30);

      const ktx2Loader = new KTX2Loader()
        .setTranscoderPath('/basis/')
        .detectSupport(renderer);

      new GLTFLoader()
        .setKTX2Loader(ktx2Loader)
        .setMeshoptDecoder(MeshoptDecoder)
        .load('models/facecap.glb', (gltf) => {
          const mesh = gltf.scene.children[0];
          scene.add(mesh);

          const head = mesh.getObjectByName('mesh_2');
          head.material = new THREE.MeshNormalMaterial();

          face = mesh.getObjectByName('mesh_2');
          eyeL = mesh.getObjectByName('eyeLeft');
          eyeR = mesh.getObjectByName('eyeRight');

          // GUI
          const gui = new GUI();
          gui.close();

          const influences = head.morphTargetInfluences;
          for (const [key, value] of Object.entries(
            head.morphTargetDictionary,
          )) {
            gui
              .add(influences, value, 0, 1, 0.01)
              .name(key.replace('blendShape1.', ''))
              .listen(influences);
          }

          renderer.setAnimationLoop(() => {
            animation();
          });
        });

      // const texture = new THREE.VideoTexture(video);
      // texture.colorSpace = THREE.SRGBColorSpace;

      const geometry = new THREE.PlaneGeometry(1, 1);
      const material = new THREE.MeshBasicMaterial({
        // map: texture,
        depthWrite: false,
      });
      const videomesh = new THREE.Mesh(geometry, material);
      scene.add(videomesh);

      const transform = new THREE.Object3D();

      function animation() {
        if (detectorRef.current !== null) {
          const results = detectorRef.current.detect();

          if (results.facialTransformationMatrixes.length > 0) {
            const facialTransformationMatrixes =
              results.facialTransformationMatrixes[0].data;

            transform.matrix.fromArray(facialTransformationMatrixes);
            transform.matrix.decompose(
              transform.position,
              transform.quaternion,
              transform.scale,
            );

            const object = scene.getObjectByName('grp_transform');

            object.position.x = transform.position.x;
            object.position.y = transform.position.z + 40;
            object.position.z = -transform.position.y;

            object.rotation.x = transform.rotation.x;
            object.rotation.y = transform.rotation.z;
            object.rotation.z = -transform.rotation.y;
          }

          if (results.faceBlendshapes.length > 0) {
            const faceBlendshapes = results.faceBlendshapes[0].categories;

            // Morph values do not exist on the eye meshes, so we map the
            // eye blendshape scores into rotation values instead.
            const eyeScore = {
              leftHorizontal: 0,
              rightHorizontal: 0,
              leftVertical: 0,
              rightVertical: 0,
            };

            for (const blendshape of faceBlendshapes) {
              const categoryName = blendshape.categoryName;
              const score = blendshape.score;

              const index =
                face.morphTargetDictionary[blendshapesMap[categoryName]];

              if (index !== undefined) {
                face.morphTargetInfluences[index] = score;
              }

              // There are two blendshapes for movement on each axis
              // (up/down, in/out). Add one and subtract the other to get
              // the final score in the -1 to 1 range.
              switch (categoryName) {
                case 'eyeLookInLeft':
                  eyeScore.leftHorizontal += score;
                  break;
                case 'eyeLookOutLeft':
                  eyeScore.leftHorizontal -= score;
                  break;
                case 'eyeLookInRight':
                  eyeScore.rightHorizontal -= score;
                  break;
                case 'eyeLookOutRight':
                  eyeScore.rightHorizontal += score;
                  break;
                case 'eyeLookUpLeft':
                  eyeScore.leftVertical -= score;
                  break;
                case 'eyeLookDownLeft':
                  eyeScore.leftVertical += score;
                  break;
                case 'eyeLookUpRight':
                  eyeScore.rightVertical -= score;
                  break;
                case 'eyeLookDownRight':
                  eyeScore.rightVertical += score;
                  break;
              }
            }

            eyeL.rotation.z = eyeScore.leftHorizontal * eyeRotationLimit;
            eyeR.rotation.z = eyeScore.rightHorizontal * eyeRotationLimit;
            eyeL.rotation.x = eyeScore.leftVertical * eyeRotationLimit;
            eyeR.rotation.x = eyeScore.rightVertical * eyeRotationLimit;
          }
        }

        videomesh.scale.x = videoWidth / 100;
        videomesh.scale.y = videoHeight / 100;

        renderer.render(scene, camera);
        // controls.update();
      }

      window.addEventListener('resize', function () {
        camera.aspect = window.innerWidth / window.innerHeight;
        camera.updateProjectionMatrix();
        renderer.setSize(window.innerWidth, window.innerHeight);
      });
    }
  }, []);

  return (
    <div
      ref={containerRef}
      style={{
        position: 'relative',
        top: '0',
        left: '0',
        width: '100vw',
        height: '100vh',
      }}
    ></div>
  );
}

export default ThreeContainer;
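Note that the effect above never tears anything down. For this single-page demo that is harmless, but if the component can ever unmount you will leak the WebGL context and the resize listener. A minimal cleanup sketch, assuming the renderer, its DOM element, and a named resize handler are lifted to where the effect can return a cleanup function:

// Sketch only; identifiers are assumed to be in the effect's scope.
return () => {
  window.removeEventListener('resize', onResize);
  renderer.setAnimationLoop(null); // stop the render loop
  renderer.dispose(); // release the WebGL context
  element.remove(); // detach the canvas from the container
};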

4. Combine the three into a single page

The full code:

import React, { useEffect, useState, useRef } from 'react';
import { Page } from './Page';
import ThreeContainer from '../components/ThreeContainer';
import WebCam from '../components/WebCam';
import Detector from '../components/Detector';

function PageContent() {
  const [stream, setStream] = useState(null);
  const videoRef = useRef(null);
  const [isWebCamLoaded, setIsWebCamLoaded] = useState(false);
  const detectorRef = useRef(null);

  useEffect(() => {
    async function init() {
      Detector.getInstance().then((rtn) => {
        detectorRef.current = rtn;
        detectorRef.current.bindVideo(videoRef.current);
      });
    }
    if (isWebCamLoaded) {
      init();
    }
  }, [isWebCamLoaded]);

  return (
    <div
      style={{
        position: 'relative',
        top: '0',
        left: '0',
        bottom: '0',
        right: '0',
      }}
    >
      <ThreeContainer
        id="background"
        videoHeight={window.innerHeight}
        videoWidth={window.innerWidth}
        detectorRef={detectorRef}
      />
      <div
        id="UI"
        style={{
          position: 'absolute',
          top: '0',
          left: '0',
          bottom: '0',
          right: '0',
        }}
      >
        <WebCam
          isShowCanvas={false}
          canvasHeight={480}
          canvasWidth={640}
          stream={stream}
          setStream={setStream}
          videoRef={videoRef}
          loaded={isWebCamLoaded}
          setLoaded={setIsWebCamLoaded}
        />
      </div>
    </div>
  );
}

export function App() {
  return (
    <Page>
      <PageContent />
    </Page>
  );
}

export default App;

III. Final result

Run the project with npm start to see the final effect: the webcam feed drives the 3D face in real time.
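Since this is an Electron project, you can also build a distributable desktop binary. With electron-react-boilerplate the packaging script is typically the one below; the exact script name depends on the template fork you started from, so check the scripts section of package.json.

npm run package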


Conclusion

This article showed how to combine react-webcam, three.js, and Electron into a desktop facial motion capture application. I hope it helps. If you spot any mistakes or omissions, or have suggestions, please leave a comment.
