Digital Human Knowledge Base: Awesome-Talking-Head-Synthesis

Table of Contents

  • Digital Human Knowledge Base: Awesome-Talking-Head-Synthesis
    • Datasets
    • Survey
    • Audio-driven
    • Text-driven
    • NeRF & 3D
    • Metrics
    • Tools & Software
    • Slides & Presentations

GitHub: https://github.com/Kedreamix/Awesome-Talking-Head-Synthesis

This repository organizes papers, codes and resources related to generative adversarial networks (GANs) 🤗 and neural radiance fields (NeRF) 🎨, with a main focus on image-driven and audio-driven talking head synthesis papers and released codes. 👤

Papers for Talking Head Synthesis, released codes collections. ✍️

Most papers are linked to PDFs on “arXiv” or journal/conference websites 📚. However, some papers require an academic license to view 🔐.

🔆 This project Awesome-Talking-Head-Synthesis is ongoing - pull requests are welcome! If you have any suggestions (missing papers, new papers, key researchers or typos), please feel free to edit and submit a PR. You can also open an issue or contact me directly via email. 📩

⭐ If you find this repo useful, please give it a star! 🤩

2023.12 Update 📆

Thanks to https://github.com/Curated-Awesome-Lists/awesome-ai-talking-heads, I have added some of its content, such as the Tools & Software and Slides & Presentations sections. 🙏 I hope this will be helpful. 😊

If you have any feedback or ideas on extending this aggregated resource, please open an issue or PR - community contributions are vital to advancing this shared knowledge. 🤝

Let’s keep pushing forward to recreate ever more realistic digital human faces! 💪 We’ve come so far but still have a long way to go. With continued research 🔬 and collaboration, I’m sure we’ll get there! 🤗

Please feel free to star ⭐ and share this repo if you find it a valuable resource. Your support helps motivate me to keep maintaining and improving it. 🥰 Let me know if you have any other questions!

Datasets

| Dataset | Download Link | Description |
|---|---|---|
| Faceforensics++ | Download link | |
| CelebV | Download link | |
| VoxCeleb | Download link | VoxCeleb, a comprehensive audio-visual dataset for speaker recognition, encompasses both the VoxCeleb1 and VoxCeleb2 datasets. |
| VoxCeleb1 | Download link | VoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube. |
| VoxCeleb2 | Download link | Extracted from YouTube videos, VoxCeleb2 includes video URLs and discourse timestamps. As the largest public audio-visual dataset, it is primarily used for speaker recognition tasks, but it can also be used to train talking-head generation models. To obtain download permission and access the dataset, apply here. Requires 300 GB+ of storage space. |
| ObamaSet | Download link | ObamaSet is a specialized audio-visual dataset focused on analyzing the visual speech of former US President Barack Obama. All video samples are collected from his weekly address footage. Unlike previous datasets, it exclusively centers on Barack Obama and does not provide any human annotations. |
| TalkingHead-1KH | Download link | The dataset consists of 500k video clips, of which about 80k are greater than 512x512 resolution. Only videos under permissive licenses are included. Note that the number of videos differs from that in the original paper because a more robust preprocessing script was used to split the videos. |
| LRW (Lip Reading in the Wild) | Download link | LRW, a diverse English-speaking video dataset from the BBC program, features over 1000 speakers with various speaking styles and head poses. Each video is 1.16 seconds long (29 frames) and involves the target word along with context. |
| MEAD 2020 | Download link | MEAD 2020 is a talking-head dataset annotated with emotion labels and intensity labels. The dataset focuses on facial generation for natural emotional speech, covering eight different emotions at three intensity levels. |
| CelebV-HQ | Download link | CelebV-HQ is a high-quality video dataset comprising 35,666 clips with a resolution of at least 512x512. It includes 15,653 identities, and each clip is manually labeled with 83 facial attributes spanning appearance, action, and emotion. The dataset’s diversity and temporal coherence make it a valuable resource for tasks like unconditional video generation and video facial attribute editing. |
| HDTF | Download link | HDTF, the High-definition Talking-Face Dataset, is a large in-the-wild high-resolution audio-visual dataset consisting of approximately 362 different videos totaling 15.8 hours. Original video resolutions are 720P or 1080P, and each cropped video is resized to 512x512. |
| CREMA-D | Download link | CREMA-D is a diverse dataset with 7,442 original clips featuring 91 actors, including 48 male and 43 female actors aged 20 to 74, representing various races and ethnicities. The dataset includes recordings of actors speaking from a set of 12 sentences, expressing six different emotions (Anger, Disgust, Fear, Happy, Neutral, and Sad) at four emotion levels (Low, Medium, High, and Unspecified). Emotion and intensity ratings were gathered through crowd-sourcing, with 2,443 participants each rating 90 unique clips (30 audio, 30 visual, and 30 audio-visual). Over 95% of the clips have more than 7 ratings. For additional details on CREMA-D, refer to the paper link. |
| LRS2 | Download link | LRS2 is a lip reading dataset that includes videos recorded in diverse settings, suitable for studying lip reading and visual speech recognition. |
| GRID | Download link | The GRID dataset was recorded in a laboratory setting with 34 volunteers, each speaking 1000 phrases, totaling 34,000 utterance instances. Phrases follow specific rules, with six words randomly selected from six categories: “command,” “color,” “preposition,” “letter,” “number,” and “adverb.” Access the dataset here. |
| SAVEE | Download link | The SAVEE (Surrey Audio-Visual Expressed Emotion) database is a crucial component for developing an automatic emotion recognition system. It features recordings from 4 male actors expressing 7 different emotions, totaling 480 British English utterances. These sentences, selected from the standard TIMIT corpus, are phonetically balanced for each emotion. Recorded in a high-quality visual media lab, the data undergoes processing and labeling. Performance evaluation involves 10 subjects rating recordings under audio, visual, and audio-visual conditions. Classification systems for each modality achieve speaker-independent recognition rates of 61%, 65%, and 84% for audio, visual, and audio-visual, respectively. |
| BIWI (3D) | Download link | The Biwi 3D Audiovisual Corpus of Affective Communication serves as a compromise between data authenticity and quality, acquired at ETHZ in collaboration with SYNVO GmbH. |
| VOCA | Download link | VOCA is a 4D-face dataset with approximately 29 minutes of 4D face scans and synchronized audio from 12 speakers. It greatly facilitates research in 3D VSG. |
| Multiface (3D) | Download link | The Multiface Dataset consists of high-quality multi-view video recordings of 13 people displaying various facial expressions. It contains approximately 12,200 to 23,000 frames per subject, captured at 30 fps from around 40 to 160 camera views with uniform lighting. The dataset’s size is 65TB and includes raw images (2048x1334 resolution), tracked and meshed heads, 1024x1024 unwrapped face textures, camera calibration metadata, and audio. This repository provides code for downloading the dataset and building a codec avatar using a deep appearance model. |
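
Several of the datasets above (e.g. TalkingHead-1KH, CelebV-HQ, HDTF) are distributed as clips cropped around the face and resized to 512x512. As an illustration only (this is not any dataset's official preprocessing tool), a minimal OpenCV sketch of such a crop-and-resize step could look like the following; the hard-coded crop box is a placeholder for the output of a face detector.

```python
# Illustrative only: a generic crop-and-resize pass over a video with OpenCV.
# The fixed crop box below is a placeholder; real pipelines would take the
# box from a face detector and may also align or stabilize the face.
import cv2

def crop_and_resize(video_in: str, video_out: str,
                    box=(100, 100, 612, 612), size=512):
    """Crop each frame to `box` (x0, y0, x1, y1) and resize to size x size."""
    cap = cv2.VideoCapture(video_in)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(video_out, fourcc, fps, (size, size))
    x0, y0, x1, y1 = box
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        face = frame[y0:y1, x0:x1]                    # crop the assumed face region
        writer.write(cv2.resize(face, (size, size)))  # resize to e.g. 512x512
    cap.release()
    writer.release()

# Example call (paths are hypothetical):
# crop_and_resize("raw_clip.mp4", "clip_512.mp4")
```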

Survey

| Year | Title | Conference/Journal |
|---|---|---|
| 2023 | From Pixels to Portraits: A Comprehensive Survey of Talking Head Generation Techniques and Applications | arXiv 2023 |
| 2023 | Human-Computer Interaction System: A Survey of Talking-Head Generation | IEEE |
| 2023 | Talking human face generation: A survey | ACM |
| 2022 | Deep Learning for Visual Speech Analysis: A Survey | arXiv 2022 |
| 2020 | What comprises a good talking-head video generation?: A Survey and Benchmark | arXiv 2020 |

Audio-driven

| Year | Title | Conference/Journal | Code | Project | Keywords |
|---|---|---|---|---|---|
| 2024 | [GAIA] GAIA: Zero-shot Talking Avatar Generation | Arxiv 2024 | Code (coming) | Project | 😲😲😲 |
| 2023 | Implicit Identity Representation Conditioned Memory Compensation Network for Talking Head Video Generation | ICCV 2023 | Code | Project | - |
| 2023 | [ToonTalker] ToonTalker: Cross-Domain Face Reenactment | ICCV 2023 | - | - | - |
| 2023 | Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation | ICCV 2023 | Code | Project | - |
| 2023 | [EMMN] EMMN: Emotional Motion Memory Network for Audio-driven Emotional Talking Face Generation | ICCV 2023 | - | - | Emotion |
| 2023 | Emotional Listener Portrait: Realistic Listener Motion Simulation in Conversation | ICCV 2023 | - | - | Emotion, LHG |
| 2023 | [MODA] MODA: Mapping-Once Audio-driven Portrait Animation with Dual Attentions | ICCV 2023 | - | - | - |
| 2023 | [Facediffuser] Facediffuser: Speech-driven 3d facial animation synthesis using diffusion | ACM SIGGRAPH MIG 2023 | Code | Project | 🔥Diffusion, 3D |
| 2023 | Audio-Driven Dubbing for User Generated Contents via Style-Aware Semi-Parametric Synthesis | TCSVT 2023 | - | - | - |
| 2023 | [SadTalker] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation | CVPR 2023 | Code | Project | 3D, Single Image |
| 2023 | [EmoTalk] EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation | ICCV 2023 | Code | - | 3D, Emotion |
| 2023 | Emotional Talking Head Generation based on Memory-Sharing and Attention-Augmented Networks | InterSpeech 2023 | - | - | Emotion |
| 2023 | [DINet] DINet: Deformation Inpainting Network for Realistic Face Visually Dubbing on High Resolution Video | AAAI 2023 | Code | - | - |
| 2023 | [StyleTalk] StyleTalk: One-shot Talking Head Generation with Controllable Speaking Styles | AAAI 2023 | Code | - | Style |
| 2023 | High-fidelity Generalized Emotional Talking Face Generation with Multi-modal Emotion Space Learning | CVPR 2023 | - | - | Emotion |
| 2023 | [StyleSync] StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator | CVPR 2023 | Code | Project | - |
| 2023 | [TalkLip] TalkLip: Seeing What You Said - Talking Face Generation Guided by a Lip Reading Expert | CVPR 2023 | Code | - | - |
| 2023 | [CodeTalker] CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior | CVPR 2023 | Code | Project | 3D, codebook |
| 2023 | [EmoGen] Emotionally Enhanced Talking Face Generation | Arxiv 2023 | Code | - | Emotion |
| 2023 | [DAE-Talker] DAE-Talker: High Fidelity Speech-Driven Talking Face Generation with Diffusion Autoencoder | Arxiv 2023 | - | Project | 🔥Diffusion |
| 2023 | [READ] READ Avatars: Realistic Emotion-controllable Audio Driven Avatars | Arxiv 2023 | - | - | - |
| 2023 | [DiffTalk] DiffTalk: Crafting Diffusion Models for Generalized Talking Head Synthesis | CVPR 2023 | Code | Project | 🔥Diffusion |
| 2023 | [Diffused Heads] Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation | Arxiv 2023 | - | Project | 🔥Diffusion |
| 2022 | [MemFace] Expressive Talking Head Generation with Granular Audio-Visual Control | CVPR 2022 | - | - | - |
| 2022 | Talking Face Generation with Multilingual TTS | CVPR 2022 | Demo Track | - | - |
| 2022 | [EAMM] EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model | SIGGRAPH 2022 | - | - | Emotion |
| 2022 | [SPACEx] SPACEx 🚀: Speech-driven Portrait Animation with Controllable Expression | arXiv 2022 | - | Project | - |
| 2022 | [AV-CAT] Masked Lip-Sync Prediction by Audio-Visual Contextual Exploitation in Transformers | SIGGRAPH Asia 2022 | - | - | - |
| 2022 | [MemFace] Memories are One-to-Many Mapping Alleviators in Talking Face Generation | arXiv 2022 | - | - | - |
| 2021 | [PC-AVS] PC-AVS: Pose-Controllable Talking Face Generation by Implicitly Modularized Audio-Visual Representation | CVPR 2021 | Code | Project | - |
| 2021 | [IATS] Imitating Arbitrary Talking Style for Realistic Audio-Driven Talking Face Synthesis | ACM MM 2021 | - | - | - |
| 2021 | [Speech2Talking-Face] Speech2Talking-Face: Inferring and Driving a Face with Synchronized Audio-Visual Representation | IJCAI 2021 | - | - | - |
| 2021 | [FAU] Talking Head Generation with Audio and Speech Related Facial Action Units | BMVC 2021 | - | - | AU |
| 2021 | [EVP] Audio-Driven Emotional Video Portraits | CVPR 2021 | Code | - | Emotion |
| 2021 | [IATS] IATS: Imitating Arbitrary Talking Style for Realistic Audio-Driven Talking Face Synthesis | ACM Multimedia 2021 | - | - | - |
| 2020 | [Wav2Lip] A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild | ACM Multimedia 2020 | Code | Project | - |
| 2020 | [RhythmicHead] Talking-head Generation with Rhythmic Head Motion | ECCV 2020 | Code | - | - |
| 2020 | [MakeItTalk] Speaker-Aware Talking-Head Animation | SIGGRAPH Asia 2020 | Code | Project | - |
| 2020 | [Neural Voice Puppetry] Audio-driven Facial Reenactment | ECCV 2020 | - | Project | - |
| 2020 | [MEAD] A Large-scale Audio-visual Dataset for Emotional Talking-face Generation | ECCV 2020 | Code | Project | - |
| 2020 | Realistic Speech-Driven Facial Animation with GANs | IJCV 2020 | - | - | - |
| 2019 | [DAVS] Talking Face Generation by Adversarially Disentangled Audio-Visual Representation | AAAI 2019 | Code | - | - |
| 2019 | [ATVGnet] Hierarchical Cross-modal Talking Face Generation with Dynamic Pixel-wise Loss | CVPR 2019 | Code | - | - |
| 2018 | Lip Movements Generation at a Glance | ECCV 2018 | Code | - | - |
| 2018 | [VisemeNet] Audio-Driven Animator-Centric Speech Animation | SIGGRAPH 2018 | - | - | - |
| 2017 | [Synthesizing Obama] Learning Lip Sync From Audio | SIGGRAPH 2017 | - | Project | - |
| 2017 | [You Said That?] Synthesising Talking Faces From Audio | BMVC 2019 | Code | - | - |
| 2017 | Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion | SIGGRAPH 2017 | - | - | - |
| 2017 | A Deep Learning Approach for Generalized Speech Animation | SIGGRAPH 2017 | - | - | - |
| 2016 | [LRW] Lip Reading in the Wild | ACCV 2016 | - | - | - |

Text-driven

| Year | Title | Conference/Journal | Code/Proj |
|---|---|---|---|
| 2023 | TalkCLIP: Talking Head Generation with Text-Guided Expressive Speaking Styles | Arxiv | - |
| 2021 | Write-a-speaker: Text-based Emotional and Rhythmic Talking-head Generation | AAAI | Code |
| 2021 | Txt2vid: Ultra-low bitrate compression of talking-head videos via text | Arxiv | Code |

NeRF & 3D

| Year | Title | Conference/Journal | Code | Project | Keywords |
|---|---|---|---|---|---|
| 2024 | [SyncTalk] SyncTalk: The Devil😈 is in the Synchronization for Talking Head Synthesis | CVPR 2024? | Code | Project | 😈 |
| 2024 | [DT-NeRF] DT-NeRF: Decomposed Triplane-Hash Neural Radiance Fields for High-Fidelity Talking Portrait Synthesis | ICASSP 2024 | - | - | ER-NeRF |
| 2023 | [ER-NeRF] Efficient Region-Aware Neural Radiance Fields for High-Fidelity Talking Portrait Synthesis | ICCV 2023 | Code | Project | Tri-plane |
| 2023 | [LipNeRF] LipNeRF: What is the right feature space to lip-sync a NeRF? | FG 2023 | Code | Project | Wav2lip |
| 2023 | [SD-NeRF] SD-NeRF: Towards Lifelike Talking Head Animation via Spatially-adaptive Dual-driven NeRFs | IEEE 2023 | - | - | - |
| 2023 | [Instruct-NeuralTalker] Instruct-NeuralTalker: Editing Audio-Driven Talking Radiance Fields with Instructions | Arxiv 2023 | - | - | - |
| 2023 | [GeneFace++] Generalized and Stable Real-Time Audio-Driven 3D Talking Face Generation | Arxiv 2023 | - | Project | - |
| 2023 | [GeneFace] GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis | ICLR 2023 | Code | Project | - |
| 2022 | [RAD-NeRF] RAD-NeRF: Real-time Neural Talking Portrait Synthesis | Arxiv 2022 | Code | Project | InstantNGP |
| 2022 | [DFRF] DFRF: Learning Dynamic Facial Radiance Fields for Few-Shot Talking Head Synthesis | ECCV 2022 | Code | Project | - |
| 2022 | [DialogueNeRF] DialogueNeRF: Towards Realistic Avatar Face-to-face Conversation Video Generation | Arxiv 2022 | - | - | - |
| 2022 | [NeRFInvertor] NeRFInvertor: High Fidelity NeRF-GAN Inversion for Single-shot Real Image Animation | Arxiv 2022 | Code | Project | - |
| 2022 | [Next3D] Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars | Arxiv 2022 | Code | Project | - |
| 2022 | [3DFaceShop] 3DFaceShop: Explicitly Controllable 3D-Aware Portrait Generation | Arxiv 2022 | Code | Project | - |
| 2022 | [FNeVR] FNeVR: Neural Volume Rendering for Face Animation | Arxiv 2022 | Code | - | - |
| 2022 | [ROME] ROME: Realistic One-shot Mesh-based Head Avatars | ECCV 2022 | Code | Project | - |
| 2022 | [IMavatar] IMavatar: Implicit Morphable Head Avatars from Videos | CVPR 2022 | Code | Project | - |
| 2022 | [HeadNeRF] HeadNeRF: A Real-time NeRF-based Parametric Head Model | CVPR 2022 | Code | Project | - |
| 2022 | [SSP-NeRF] Semantic-Aware Implicit Neural Audio-Driven Video Portrait Generation | Arxiv 2022 | Code | Project | - |
| 2021 | [AD-NeRF] AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis | ICCV 2021 | Code | Project | - |
| 2021 | [NerFACE] NerFACE: Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction | CVPR 2021 Oral | Code | Project | - |
| 2021 | [DFA-NeRF] DFA-NeRF: Personalized Talking Head Generation via Disentangled Face Attributes Neural Rendering | Arxiv 2021 | Code | - | - |

Metrics

| Metrics | Paper | Link |
|---|---|---|
| PSNR (peak signal-to-noise ratio) | - | - |
| SSIM (structural similarity index measure) | Image quality assessment: from error visibility to structural similarity | - |
| CPBD (cumulative probability of blur detection) | A no-reference image blur metric based on the cumulative probability of blur detection | - |
| LPIPS (Learned Perceptual Image Patch Similarity) | The Unreasonable Effectiveness of Deep Features as a Perceptual Metric | paper |
| NIQE (Natural Image Quality Evaluator) | Making a ‘Completely Blind’ Image Quality Analyzer | paper |
| FID (Fréchet inception distance) | GANs trained by a two time-scale update rule converge to a local Nash equilibrium | - |
| LMD (landmark distance error) | Lip Movements Generation at a Glance | - |
| LRA (lip-reading accuracy) | Talking Face Generation by Conditional Recurrent Adversarial Network | paper |
| WER (word error rate) | LipNet: end-to-end sentence-level lipreading | - |
| LSE-D (Lip Sync Error - Distance) | Out of time: automated lip sync in the wild | - |
| LSE-C (Lip Sync Error - Confidence) | Out of time: automated lip sync in the wild | - |
| ACD (average content distance) | FaceNet: a unified embedding for face recognition and clustering | - |
| CSIM (cosine similarity) | ArcFace: additive angular margin loss for deep face recognition | - |
| EAR (eye aspect ratio) | Real-time eye blink detection using facial landmarks (Computer Vision Winter Workshop) | - |
| ESD (emotion similarity distance) | What comprises a good talking-head video generation?: A Survey and Benchmark | - |
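
For reference, below is a minimal sketch (not taken from any of the papers above) of how two of the listed image-quality metrics, PSNR and SSIM, can be computed per frame with scikit-image; the function name and toy arrays are illustrative.

```python
# Illustrative only: per-frame PSNR/SSIM with scikit-image (>= 0.19 for channel_axis).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def frame_psnr_ssim(generated: np.ndarray, reference: np.ndarray):
    """Return (PSNR, SSIM) for two uint8 RGB frames of the same shape."""
    psnr = peak_signal_noise_ratio(reference, generated, data_range=255)
    ssim = structural_similarity(reference, generated,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim

# Toy usage: compare a stand-in ground-truth frame with a slightly perturbed copy.
ref = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
gen = np.clip(ref.astype(np.int16) + np.random.randint(-5, 6, ref.shape),
              0, 255).astype(np.uint8)
print(frame_psnr_ssim(gen, ref))
```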

Tools & Software

| Tool/Resource | Description |
|---|---|
| LUCIA | Development of an MPEG-4 Talking Head Engine. 💻 |
| Yepic Studio | Create and dub talking head-style videos in minutes without expensive equipment. 🎥 |
| Mel McGee’s Talkbots | A complete multi-browser, multi-platform talking head application in SVG, suitable for web sites or as an avatar. 🗣️ |
| face3D_chung | Create 3D character avatar head objects with texture from a single photo for your games. 🎮 |
| CrazyTalk | Exciting features for 3D head creation and automation. 🤪 |
| tts avatar free download - SourceForge | Mel McGee’s Talkbots is a complete multi-browser, multi-platform talking head. (🔧👄) |
| Verbatim AI - Product Information, Latest Updates, and Reviews 2023 | A simple yet powerful API to generate AI “talking head” videos in near real-time with Verbatim AI. Add interest, intrigue, and dynamism to your chat bots! (🔧👄) |
| Best Open Source BASIC 3D Modeling Software | Includes talk3D_chung, a small example using obj models created with face3D_chung, and speak3D_chung_dll, a dll to load and display face3D_chung talking avatars. (🛠️🎭) |
| DVDStyler / Discussion / Help: ffmpeg-vbr or internal | Talking heads would get an unnecessarily high bitrate while using DVDStyler. (🛠️👄) |
| puffin web browser free download - SourceForge | Mel McGee’s Talkbots is a complete multi-browser, multi-platform talking head. (🔧👄) |
| 12 best AI video generators to use in 2023 Free and paid \| Product … | Whether you’re an entrepreneur, small business owner, or run a large company, AI video generators make it super easy to create high-quality videos from scratch. (🔧🎥) |

Slides & Presentations

| Presentation Title | Description |
|---|---|
| Few-Shot Adversarial Learning of Realistic Neural Talking Head Models | Presentation reviewing the few-shot adversarial learning of realistic neural talking head models. |
| Nethania Michelle’s Character | PPT: Presentation discussing the improvement of a 3D talking head for use in an avatar of a virtual meeting room. |
| Presenting you: Top tips on presenting with Prezi Video – Prezi | Article providing top tips for presenting with Prezi Video. |
| Research Presentation | PPT: Resident Research Presentation Slide Deck. |
| Adding narration to your presentation (using Prezi Video) – Prezi | Learn how to add narration to your Prezi presentation with Prezi Video. |
