Human Motion Diffusion Model Usage Notes

Contents

Dependencies:

Dependent projects:

Validation dataset

Online demo:

Inference code, cleaned up for one-click execution

Rendering the mesh

Visualization code:


GitHub - GuyTevet/motion-diffusion-model: The official PyTorch implementation of the paper "Human Motion Diffusion Model"

Dependencies:

spacy
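
A quick way to confirm the dependency is usable (a sketch: en_core_web_sm is the English model MDM's text pipeline expects per the repository's setup notes, but verify against your checkout):

import spacy

# If this fails, run: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("a person walks forward and waves")
print([(token.text, token.pos_) for token in doc])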

Dependent projects:

GitHub - EricGuo5513/HumanML3D: HumanML3D: A large and diverse 3d human motion-language dataset.

Validation dataset

HumanML3D/HumanML3D at main · EricGuo5513/HumanML3D · GitHub

Step 1: copy the HumanML3D/HumanML3D directory into the dataset directory of motion-diffusion-model.

Step 2: unzip texts.zip from the HumanML3D dataset into the texts directory (a script covering both steps follows).
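
Both steps can be scripted; a minimal sketch, assuming HumanML3D was cloned next to motion-diffusion-model (all paths are illustrative):

import shutil
import zipfile

# Step 1: copy the processed motion data into MDM's dataset directory
# (source/destination paths are assumptions -- adjust to your checkout layout).
shutil.copytree(r'../HumanML3D/HumanML3D', r'./dataset/HumanML3D', dirs_exist_ok=True)

# Step 2: extract the text annotations into the texts directory.
with zipfile.ZipFile(r'./dataset/HumanML3D/texts.zip') as zf:
    zf.extractall(r'./dataset/HumanML3D/texts')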

Online demo:

daanelson/motion_diffusion_model – Run with an API on Replicate
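
The Replicate page also exposes the model through an HTTP API. A hedged sketch using the official replicate Python client; the input key "prompt" is an assumption, so check the model page for the exact input schema:

# pip install replicate; requires REPLICATE_API_TOKEN in the environment.
import replicate

# Model identifier from the page above; the "prompt" field name is an assumption.
output = replicate.run(
    "daanelson/motion_diffusion_model",
    input={"prompt": "the person walked forward and is picking up his toolbox"},
)
print(output)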

Inference code, cleaned up for one-click execution

# This code is based on https://github.com/openai/guided-diffusion
"""
Generate a large batch of motion samples from a model and save them as a large
numpy array. This can be used to produce samples for FID evaluation.
"""
from utils.fixseed import fixseed
import os
import numpy as np
import torch
from utils.parser_util import generate_args
from utils.model_util import create_model_and_diffusion, load_model_wo_clip
from utils import dist_util
from model.cfg_sampler import ClassifierFreeSampleModel
from data_loaders.get_data import get_dataset_loader
from data_loaders.humanml.scripts.motion_process import recover_from_ric
import data_loaders.humanml.utils.paramUtil as paramUtil
from data_loaders.humanml.utils.plot_script import plot_3d_motion
import shutil
from data_loaders.tensors import collate


def main():
    args = generate_args()
    fixseed(args.seed)
    args.text_prompt = r''
    args.input_text = r'assets/example_text_prompts.txt'
    args.model_path = r'humanml_trans_enc_512/model000200000.pt'
    out_path = args.output_dir
    name = os.path.basename(os.path.dirname(args.model_path))
    niter = os.path.basename(args.model_path).replace('model', '').replace('.pt', '')
    max_frames = 196 if args.dataset in ['kit', 'humanml'] else 60
    fps = 12.5 if args.dataset == 'kit' else 20
    n_frames = min(max_frames, int(args.motion_length * fps))
    is_using_data = not any([args.input_text, args.text_prompt, args.action_file, args.action_name])
    dist_util.setup_dist(args.device)
    if out_path == '':
        out_path = os.path.join(os.path.dirname(args.model_path),
                                'samples_{}_{}_seed{}'.format(name, niter, args.seed))
        if args.text_prompt != '':
            out_path += '_' + args.text_prompt.replace(' ', '_').replace('.', '')
        elif args.input_text != '':
            out_path += '_' + os.path.basename(args.input_text).replace('.txt', '').replace(' ', '_').replace('.', '')

    # this block must be called BEFORE the dataset is loaded
    if args.text_prompt != '':
        texts = [args.text_prompt]
        args.num_samples = 1
    elif args.input_text != '':
        assert os.path.exists(args.input_text)
        with open(args.input_text, 'r') as fr:
            texts = fr.readlines()
        texts = [s.replace('\n', '') for s in texts]
        args.num_samples = len(texts)
    elif args.action_name:
        action_text = [args.action_name]
        args.num_samples = 1
    elif args.action_file != '':
        assert os.path.exists(args.action_file)
        with open(args.action_file, 'r') as fr:
            action_text = fr.readlines()
        action_text = [s.replace('\n', '') for s in action_text]
        args.num_samples = len(action_text)

    assert args.num_samples <= args.batch_size, \
        f'Please either increase batch_size({args.batch_size}) or reduce num_samples({args.num_samples})'
    # So why do we need this check? In order to protect GPU from a memory overload in the following line.
    # If your GPU can handle a batch size larger than the default, you can specify it through the --batch_size flag.
    # If it doesn't, and you still want to sample more prompts, run this script with different seeds
    # (specify through the --seed flag)
    args.batch_size = args.num_samples  # Sampling a single batch from the testset, with exactly args.num_samples

    print('Loading dataset...')
    data = load_dataset(args, max_frames, n_frames)
    total_num_samples = args.num_samples * args.num_repetitions

    print("Creating model and diffusion...")
    model, diffusion = create_model_and_diffusion(args, data)

    print(f"Loading checkpoints from [{args.model_path}]...")
    state_dict = torch.load(args.model_path, map_location='cpu')
    load_model_wo_clip(model, state_dict)

    if args.guidance_param != 1:
        model = ClassifierFreeSampleModel(model)  # wrapping model with the classifier-free sampler
    model.to(dist_util.dev())
    model.eval()  # disable random masking

    if is_using_data:
        iterator = iter(data)
        _, model_kwargs = next(iterator)
    else:
        collate_args = [{'inp': torch.zeros(n_frames), 'tokens': None, 'lengths': n_frames}] * args.num_samples
        is_t2m = any([args.input_text, args.text_prompt])
        if is_t2m:
            # t2m
            collate_args = [dict(arg, text=txt) for arg, txt in zip(collate_args, texts)]
        else:
            # a2m
            action = data.dataset.action_name_to_action(action_text)
            collate_args = [dict(arg, action=one_action, action_text=one_action_text) for
                            arg, one_action, one_action_text in zip(collate_args, action, action_text)]
        _, model_kwargs = collate(collate_args)

    all_motions = []
    all_lengths = []
    all_text = []

    for rep_i in range(args.num_repetitions):
        print(f'### Sampling [repetitions #{rep_i}]')

        # add CFG scale to batch
        if args.guidance_param != 1:
            model_kwargs['y']['scale'] = torch.ones(args.batch_size, device=dist_util.dev()) * args.guidance_param

        sample_fn = diffusion.p_sample_loop

        sample = sample_fn(
            model,
            # (args.batch_size, model.njoints, model.nfeats, n_frames),  # BUG FIX - this one caused a mismatch between training and inference
            (args.batch_size, model.njoints, model.nfeats, max_frames),  # BUG FIX
            clip_denoised=False,
            model_kwargs=model_kwargs,
            skip_timesteps=0,  # 0 is the default value - i.e. don't skip any step
            init_image=None,
            progress=True,
            dump_steps=None,
            noise=None,
            const_noise=False,
        )

        # Recover XYZ *positions* from HumanML3D vector representation
        if model.data_rep == 'hml_vec':
            n_joints = 22 if sample.shape[1] == 263 else 21
            sample = data.dataset.t2m_dataset.inv_transform(sample.cpu().permute(0, 2, 3, 1)).float()
            sample = recover_from_ric(sample, n_joints)
            sample = sample.view(-1, *sample.shape[2:]).permute(0, 2, 3, 1)

        rot2xyz_pose_rep = 'xyz' if model.data_rep in ['xyz', 'hml_vec'] else model.data_rep
        rot2xyz_mask = None if rot2xyz_pose_rep == 'xyz' else model_kwargs['y']['mask'].reshape(args.batch_size, n_frames).bool()
        sample = model.rot2xyz(x=sample, mask=rot2xyz_mask, pose_rep=rot2xyz_pose_rep, glob=True, translation=True,
                               jointstype='smpl', vertstrans=True, betas=None, beta=0, glob_rot=None,
                               get_rotations_back=False)

        if args.unconstrained:
            all_text += ['unconstrained'] * args.num_samples
        else:
            text_key = 'text' if 'text' in model_kwargs['y'] else 'action_text'
            all_text += model_kwargs['y'][text_key]

        all_motions.append(sample.cpu().numpy())
        all_lengths.append(model_kwargs['y']['lengths'].cpu().numpy())

        print(f"created {len(all_motions) * args.batch_size} samples")

    all_motions = np.concatenate(all_motions, axis=0)
    all_motions = all_motions[:total_num_samples]  # [bs, njoints, 6, seqlen]
    all_text = all_text[:total_num_samples]
    all_lengths = np.concatenate(all_lengths, axis=0)[:total_num_samples]

    if os.path.exists(out_path):
        shutil.rmtree(out_path)
    os.makedirs(out_path)

    npy_path = os.path.join(out_path, 'results.npy')
    print(f"saving results file to [{npy_path}]")
    np.save(npy_path,
            {'motion': all_motions, 'text': all_text, 'lengths': all_lengths,
             'num_samples': args.num_samples, 'num_repetitions': args.num_repetitions})
    with open(npy_path.replace('.npy', '.txt'), 'w') as fw:
        fw.write('\n'.join(all_text))
    with open(npy_path.replace('.npy', '_len.txt'), 'w') as fw:
        fw.write('\n'.join([str(l) for l in all_lengths]))

    print(f"saving visualizations to [{out_path}]...")
    skeleton = paramUtil.kit_kinematic_chain if args.dataset == 'kit' else paramUtil.t2m_kinematic_chain

    sample_files = []
    num_samples_in_out_file = 7

    sample_print_template, row_print_template, all_print_template, \
        sample_file_template, row_file_template, all_file_template = construct_template_variables(args.unconstrained)

    for sample_i in range(args.num_samples):
        rep_files = []
        for rep_i in range(args.num_repetitions):
            caption = all_text[rep_i * args.batch_size + sample_i]
            length = all_lengths[rep_i * args.batch_size + sample_i]
            motion = all_motions[rep_i * args.batch_size + sample_i].transpose(2, 0, 1)[:length]
            save_file = sample_file_template.format(sample_i, rep_i)
            print(sample_print_template.format(caption, sample_i, rep_i, save_file))
            animation_save_path = os.path.join(out_path, save_file)
            plot_3d_motion(animation_save_path, skeleton, motion, dataset=args.dataset, title=caption, fps=fps)
            # Credit for visualization: https://github.com/EricGuo5513/text-to-motion
            rep_files.append(animation_save_path)

        sample_files = save_multiple_samples(args, out_path,
                                             row_print_template, all_print_template, row_file_template,
                                             all_file_template,
                                             caption, num_samples_in_out_file, rep_files, sample_files, sample_i)

    abs_path = os.path.abspath(out_path)
    print(f'[Done] Results are at [{abs_path}]')


def save_multiple_samples(args, out_path, row_print_template, all_print_template, row_file_template,
                          all_file_template,
                          caption, num_samples_in_out_file, rep_files, sample_files, sample_i):
    all_rep_save_file = row_file_template.format(sample_i)
    all_rep_save_path = os.path.join(out_path, all_rep_save_file)
    ffmpeg_rep_files = [f' -i {f} ' for f in rep_files]
    hstack_args = f' -filter_complex hstack=inputs={args.num_repetitions}' if args.num_repetitions > 1 else ''
    ffmpeg_rep_cmd = f'ffmpeg -y -loglevel warning ' + ''.join(ffmpeg_rep_files) + f'{hstack_args} {all_rep_save_path}'
    os.system(ffmpeg_rep_cmd)
    print(row_print_template.format(caption, sample_i, all_rep_save_file))
    sample_files.append(all_rep_save_path)

    if (sample_i + 1) % num_samples_in_out_file == 0 or sample_i + 1 == args.num_samples:
        # all_sample_save_file =  f'samples_{(sample_i - len(sample_files) + 1):02d}_to_{sample_i:02d}.mp4'
        all_sample_save_file = all_file_template.format(sample_i - len(sample_files) + 1, sample_i)
        all_sample_save_path = os.path.join(out_path, all_sample_save_file)
        print(all_print_template.format(sample_i - len(sample_files) + 1, sample_i, all_sample_save_file))
        ffmpeg_rep_files = [f' -i {f} ' for f in sample_files]
        vstack_args = f' -filter_complex vstack=inputs={len(sample_files)}' if len(sample_files) > 1 else ''
        ffmpeg_rep_cmd = f'ffmpeg -y -loglevel warning ' + ''.join(ffmpeg_rep_files) + f'{vstack_args} {all_sample_save_path}'
        os.system(ffmpeg_rep_cmd)
        sample_files = []
    return sample_files


def construct_template_variables(unconstrained):
    row_file_template = 'sample{:02d}.mp4'
    all_file_template = 'samples_{:02d}_to_{:02d}.mp4'
    if unconstrained:
        sample_file_template = 'row{:02d}_col{:02d}.mp4'
        sample_print_template = '[{} row #{:02d} column #{:02d} | -> {}]'
        row_file_template = row_file_template.replace('sample', 'row')
        row_print_template = '[{} row #{:02d} | all columns | -> {}]'
        all_file_template = all_file_template.replace('samples', 'rows')
        all_print_template = '[rows {:02d} to {:02d} | -> {}]'
    else:
        sample_file_template = 'sample{:02d}_rep{:02d}.mp4'
        sample_print_template = '["{}" ({:02d}) | Rep #{:02d} | -> {}]'
        row_print_template = '[ "{}" ({:02d}) | all repetitions | -> {}]'
        all_print_template = '[samples {:02d} to {:02d} | all repetitions | -> {}]'

    return sample_print_template, row_print_template, all_print_template, \
        sample_file_template, row_file_template, all_file_template


def load_dataset(args, max_frames, n_frames):
    data = get_dataset_loader(name=args.dataset,
                              batch_size=args.batch_size,
                              num_frames=max_frames,
                              split='test',
                              hml_mode='text_only')
    if args.dataset in ['kit', 'humanml']:
        data.dataset.t2m_dataset.fixed_length = n_frames
    return data


if __name__ == "__main__":
    main()
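
Once the script finishes, all outputs are bundled into results.npy in the output directory. It stores a pickled dict (see the np.save call in main above), so it can be reloaded like this:

import numpy as np

# results.npy holds a pickled dict written by np.save in main() above.
results = np.load('results.npy', allow_pickle=True).item()
print(results['motion'].shape)   # (num_samples * num_repetitions, njoints, nfeats, seqlen)
print(results['text'][:3])
print(results['lengths'][:3])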

Rendering the mesh

import argparse
import os
from visualize import vis_utils
import shutil
from tqdm import tqdm

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--input_path", type=str,
                        default=r"E:\project\202404\motion-diffusion-model-main\humanml_trans_enc_512\samples_humanml_trans_enc_512_000200000_seed10_example_text_prompts\sample00_rep00.mp4",
                        help='stick figure mp4 file to be rendered.')
    parser.add_argument("--cuda", type=bool, default=True, help='')
    parser.add_argument("--device", type=int, default=0, help='')
    params = parser.parse_args()

    assert params.input_path.endswith('.mp4')
    parsed_name = os.path.basename(params.input_path).replace('.mp4', '').replace('sample', '').replace('rep', '')
    sample_i, rep_i = [int(e) for e in parsed_name.split('_')]
    npy_path = os.path.join(os.path.dirname(params.input_path), 'results.npy')
    out_npy_path = params.input_path.replace('.mp4', '_smpl_params.npy')
    assert os.path.exists(npy_path)
    results_dir = params.input_path.replace('.mp4', '_obj')
    if os.path.exists(results_dir):
        shutil.rmtree(results_dir)
    os.makedirs(results_dir)

    npy2obj = vis_utils.npy2obj(npy_path, sample_i, rep_i,
                                device=params.device, cuda=params.cuda)

    print('Saving obj files to [{}]'.format(os.path.abspath(results_dir)))
    for frame_i in tqdm(range(npy2obj.real_num_frames)):
        npy2obj.save_obj(os.path.join(results_dir, 'frame{:03d}.obj'.format(frame_i)), frame_i)

    print('Saving SMPL params to [{}]'.format(os.path.abspath(out_npy_path)))
    npy2obj.save_npy(out_npy_path)
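
The resulting _smpl_params.npy can be reloaded the same way as results.npy; its exact key layout is defined by vis_utils.npy2obj.save_npy, so it is safest to inspect the keys rather than assume them:

import numpy as np

# Pickled dict written by npy2obj.save_npy; print the keys before relying on any of them.
params = np.load('sample00_rep00_smpl_params.npy', allow_pickle=True).item()
print(sorted(params.keys()))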

Visualization code:

The visualization code bundled with the repository does not display on Windows; the version below has been modified so that it works:

import math
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.animation import FuncAnimation, FFMpegFileWriter
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
import mpl_toolkits.mplot3d.axes3d as p3
# import cv2
from textwrap import wrap


def list_cut_average(ll, intervals):
    if intervals == 1:
        return ll

    bins = math.ceil(len(ll) * 1.0 / intervals)
    ll_new = []
    for i in range(bins):
        l_low = intervals * i
        l_high = l_low + intervals
        l_high = l_high if l_high < len(ll) else len(ll)
        ll_new.append(np.mean(ll[l_low:l_high]))
    return ll_new


def plot_3d_motion(save_path, kinematic_tree, joints, title, dataset, figsize=(3, 3), fps=120, radius=3,
                   vis_mode='default', gt_frames=[]):
    fig = plt.figure(figsize=figsize)
    ax = fig.add_subplot(111, projection='3d')
    plt.tight_layout()
    title = '\n'.join(wrap(title, 20))
    colors_blue = ["#4D84AA", "#5B9965", "#61CEB9", "#34C1E2", "#80B79A"]  # GT color
    colors_orange = ["#DD5A37", "#D69E00", "#B75A39", "#FF6D00", "#DDB50E"]  # Generation color
    colors = colors_orange

    def init():
        ax.set_xlim3d([-radius / 2, radius / 2])
        ax.set_ylim3d([0, radius])
        ax.set_zlim3d([-radius / 3., radius * 2 / 3.])
        fig.suptitle(title, fontsize=10)
        ax.grid(False)  # grid(b=False) breaks on newer matplotlib; the b= keyword was removed
        return fig,

    def update(index):
        # Clearing artists via remove() instead of assigning ax.lines = [] keeps newer matplotlib happy
        while len(ax.lines) > 0:
            ax.lines[0].remove()
        while len(ax.collections) > 0:
            ax.collections[0].remove()
        ax.view_init(elev=120, azim=-90)
        ax.dist = 7.5

        # Draw motion
        used_colors = colors_blue if index in gt_frames else colors
        for i, (chain, color) in enumerate(zip(kinematic_tree, used_colors)):
            linewidth = 4.0 if i < 5 else 2.0
            ax.plot3D(data[index, chain, 0], data[index, chain, 1], data[index, chain, 2],
                      linewidth=linewidth, color=color)

        plt.axis('off')
        ax.set_xticklabels([])
        ax.set_yticklabels([])
        ax.set_zticklabels([])
        return fig,

    data = joints.copy().reshape(len(joints), -1, 3)
    frame_number = data.shape[0]
    ani = FuncAnimation(fig, update, frames=frame_number, interval=1000 / fps, repeat=False, init_func=init)
    ani.save(save_path, fps=fps)
    # plt.show()
    plt.close()


if __name__ == '__main__':
    save_path = '0415.mp4'
    dataset = 'humanml'
    title = 'title'
    kinematic_tree = [[0, 2, 5, 8, 11], [0, 1, 4, 7, 10], [0, 3, 6, 9, 12, 15], [9, 14, 17, 19, 21],
                      [9, 13, 16, 18, 20]]
    npz_data = np.load(r"E:\04151.npz", allow_pickle=True)
    joints = npz_data['joints_3d'].item()['data']
    joints /= 20  # scale down to fit the plotting radius
    plot_3d_motion(save_path, kinematic_tree, joints, title, dataset, figsize=(3, 3), fps=120, radius=3,
                   vis_mode='default', gt_frames=[])
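
One more Windows pitfall: ani.save needs an ffmpeg binary that matplotlib can locate. If saving fails, point matplotlib at a local ffmpeg executable before calling plot_3d_motion (the path below is only an example):

import matplotlib

# Assumed install location -- change to wherever ffmpeg.exe actually lives.
matplotlib.rcParams['animation.ffmpeg_path'] = r'C:\ffmpeg\bin\ffmpeg.exe'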
