WebRTC P2P call flow

Table of Contents

    • WebRTC P2P call flow
    • WebRTC many-to-many: mesh
    • WebRTC many-to-many: MCU
    • WebRTC many-to-many: SFU
    • WebRTC sample tests
      • getUserMedia
        • getUserMedia basic example: open the camera
        • getUserMedia + canvas: screenshot
      • Open screen sharing

WebRTC P2P call flow

Here, the STUN server covers both the STUN service and the TURN relay service, and the signaling server also provides IM and other features.
[Figure: P2P call flow between the two clients, the signaling server, and the STUN/TURN server]
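To make the flow concrete, below is a minimal caller-side sketch in TypeScript: the RTCPeerConnection is configured with the STUN/TURN servers, the local media is published, an offer is created, and the SDP plus ICE candidates are exchanged through the signaling server. The `signaling` object (with hypothetical `send`/`on` methods), the server URLs/credentials, and `remoteVideo` are assumptions standing in for the real app.

// Minimal caller-side sketch; `signaling`, `remoteVideo` and the server URLs are placeholders.
async function startCall(signaling: any, remoteVideo: HTMLVideoElement) {
  const pc = new RTCPeerConnection({
    iceServers: [
      { urls: 'stun:stun.example.com:3478' },                                      // STUN: discover the public address
      { urls: 'turn:turn.example.com:3478', username: 'user', credential: 'pass' } // TURN: relay fallback
    ]
  });

  // Hand local ICE candidates to the remote peer via the signaling server.
  pc.onicecandidate = (e) => { if (e.candidate) signaling.send({ type: 'candidate', candidate: e.candidate }); };

  // Render the remote stream once tracks arrive.
  pc.ontrack = (e) => { remoteVideo.srcObject = e.streams[0]; };

  // Publish the local camera/microphone, then create and send the offer.
  const local = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  local.getTracks().forEach((t) => pc.addTrack(t, local));
  await pc.setLocalDescription(await pc.createOffer());
  signaling.send({ type: 'offer', sdp: pc.localDescription });

  // Apply the answer and the remote candidates received from signaling.
  signaling.on('answer', (msg: any) => pc.setRemoteDescription(msg.sdp));
  signaling.on('candidate', (msg: any) => pc.addIceCandidate(msg.candidate));
  return pc;
}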

WebRTC many-to-many: mesh

Suitable for scenarios with only a few participants.

[Figure: mesh topology, with every client connected directly to every other client]
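In a mesh, every participant keeps a separate RTCPeerConnection to every other participant, so each client uploads its own media N - 1 times. A rough bookkeeping sketch, assuming the peer IDs and the signaling layer come from the application:

// Mesh sketch: one RTCPeerConnection per remote participant (peer IDs are assumed).
const peers = new Map<string, RTCPeerConnection>();

function connectToPeer(peerId: string, localStream: MediaStream): RTCPeerConnection {
  const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.example.com:3478' }] });
  // The same local stream is uploaded once per remote participant.
  localStream.getTracks().forEach((t) => pc.addTrack(t, localStream));
  peers.set(peerId, pc);
  // ...offer/answer and ICE exchange for this connection go through the signaling server...
  return pc;
}

// For a room of N participants each client ends up holding N - 1 connections:
// roomMembers.filter((id) => id !== myId).forEach((id) => connectToPeer(id, localStream));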

WebRTC many-to-many: MCU

An MCU (Multipoint Control Unit) mixes the uplink video/audio streams on the server and then distributes the composite stream. The load on clients is light and bandwidth is saved, but the server cost is high. Suitable for meetings with many participants.
[Figure: MCU topology, with the server mixing and redistributing the streams]

WebRTC many-to-many: SFU

An SFU (Selective Forwarding Unit) only forwards streams, so the load on the server is light and it does not require powerful hardware, but the bandwidth requirements and traffic consumption are high.
[Figure: SFU topology, with the server forwarding each participant's stream to the others]
In the SFU model, the communication flow between the parties is as follows:
[Figure: SFU signaling and media flow between the participants]
Looking separately at the communication between a client and the SFU, and at how the media streams are forwarded inside the SFU:
[Figure: client-to-SFU communication and the internal stream-forwarding path]
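From the client's point of view an SFU behaves like a single remote peer: the browser publishes its stream once, and every other participant's stream comes back as incoming tracks on the same connection. A sketch under that assumption (the signaling with the SFU and the `attachVideoElement` helper are hypothetical):

// SFU client sketch: one upstream publish, many downstream tracks.
async function joinRoom(attachVideoElement: (s: MediaStream) => void) {
  const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.example.com:3478' }] });

  // Publish the local stream once; the SFU forwards it to the other participants.
  const localStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  localStream.getTracks().forEach((t) => pc.addTrack(t, localStream));

  // Each remote participant arrives as a separate incoming stream.
  pc.ontrack = (e) => attachVideoElement(e.streams[0]);

  // ...offer/answer exchange with the SFU goes through its signaling channel...
  return pc;
}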

WebRTC sample tests

Samples code: https://github.com/webrtc/samples?tab=readme-ov-file

Demo page address

One thing to note: unless the page is served from localhost, HTTPS is required; otherwise the media-capture APIs (getUserMedia and friends) cannot be called.
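A quick runtime check for this: `navigator.mediaDevices` only exists in a secure context, so something like the following can warn early:

// Media capture only works in secure contexts (https:// pages or localhost).
if (!window.isSecureContext || !navigator.mediaDevices?.getUserMedia) {
  console.warn('Media capture unavailable: serve the page over HTTPS or from localhost.');
}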

There are quite a few samples in there and they are worth spending some time on. The samples index page is reproduced below:

<!DOCTYPE html>
<!--
 *  Copyright (c) 2015 The WebRTC project authors. All Rights Reserved.
 *
 *  Use of this source code is governed by a BSD-style license
 *  that can be found in the LICENSE file in the root of the source
 *  tree.
-->
<html>
<head><meta charset="utf-8"><meta name="description" content="WebRTC Javascript code samples"><meta name="viewport" content="width=device-width, user-scalable=yes, initial-scale=1, maximum-scale=1"><meta itemprop="description" content="Client-side WebRTC code samples"><meta itemprop="image" content="src/images/webrtc-icon-192x192.png"><meta itemprop="name" content="WebRTC code samples"><meta name="mobile-web-app-capable" content="yes"><meta id="theme-color" name="theme-color" content="#ffffff"><base target="_blank"><title>WebRTC samples</title><link rel="icon" sizes="192x192" href="src/images/webrtc-icon-192x192.png"><link href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700" rel="stylesheet" type="text/css"><link rel="stylesheet" href="src/css/main.css"/><style>h2 {font-size: 1.5em;font-weight: 500;}h3 {border-top: none;}section {border-bottom: 1px solid #eee;margin: 0 0 1.5em 0;padding: 0 0 1.5em 0;}section:last-child {border-bottom: none;margin: 0;padding: 0;}</style>
</head><body>
<div id="container"><h1>WebRTC samples</h1><section><p>This is a collection of small samples demonstrating various parts of the <ahref="https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API">WebRTC APIs</a>. The code for allsamples are available in the <a href="https://github.com/webrtc/samples">GitHub repository</a>.</p><p>Most of the samples use <a href="https://github.com/webrtc/adapter">adapter.js</a>, a shim to insulate appsfrom spec changes and prefix differences.</p><p><a href="https://webrtc.org/getting-started/testing" title="Command-line flags for WebRTC testing">https://webrtc.org/getting-started/testing</a>lists command line flags useful for development and testing with Chrome.</p><p>Patches and issues welcome! See <a href="https://github.com/webrtc/samples/blob/gh-pages/CONTRIBUTING.md">CONTRIBUTING.md</a>for instructions.</p><p class="warning"><strong>Warning:</strong> It is highly recommended to use headphones when testing thesesamples, as it will otherwise risk loud audio feedback on your system.</p></section><section><h2 id="getusermedia"><a href="https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia">getUserMedia():</a></h2><p class="description">Access media devices</p><ul><li><a href="src/content/getusermedia/gum/">Basic getUserMedia demo</a></li><li><a href="src/content/getusermedia/canvas/">Use getUserMedia with canvas</a></li><li><a href="src/content/getusermedia/filter/">Use getUserMedia with canvas and CSS filters</a></li><li><a href="src/content/getusermedia/resolution/">Choose camera resolution</a></li><li><a href="src/content/getusermedia/audio/">Audio-only getUserMedia() output to local audio element</a></li><li><a href="src/content/getusermedia/volume/">Audio-only getUserMedia() displaying volume</a></li><li><a href="src/content/getusermedia/record/">Record stream</a></li><li><a href="src/content/getusermedia/getdisplaymedia/">Screensharing with getDisplayMedia</a></li><li><a href="src/content/getusermedia/pan-tilt-zoom/">Control camera pan, tilt, and zoom</a></li><li><a href="src/content/getusermedia/exposure/">Control exposure</a></li></ul><h2 id="devices">Devices:</h2><p class="description">Query media devices</p><ul><li><a href="src/content/devices/input-output/">Choose camera, microphone and speaker</a></li><li><a href="src/content/devices/multi/">Choose media source and audio output</a></li></ul><h2 id="capture">Stream capture:</h2><p class="description">Stream from canvas or video elements</p><ul><li><a href="src/content/capture/video-video/">Stream from a video element to a video element</a></li><li><a href="src/content/capture/video-pc/">Stream from a video element to a peer connection</a></li><li><a href="src/content/capture/canvas-video/">Stream from a canvas element to a video element</a></li><li><a href="src/content/capture/canvas-pc/">Stream from a canvas element to a peer connection</a></li><li><a href="src/content/capture/canvas-record/">Record a stream from a canvas element</a></li><li><a href="src/content/capture/video-contenthint/">Guiding video encoding with content hints</a></li></ul><h2 id="peerconnection"><a href="https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection">RTCPeerConnection:</a></h2><p class="description">Controlling peer connectivity</p><ul><li><a href="src/content/peerconnection/pc1/">Basic peer connection demo in a single tab</a></li><li><a href="src/content/peerconnection/channel/">Basic peer connection demo between two tabs</a></li><li><a 
href="src/content/peerconnection/perfect-negotiation/">Peer connection using Perfect Negotiation</a></li><li><a href="src/content/peerconnection/audio/">Audio-only peer connection demo</a></li><li><a href="src/content/peerconnection/bandwidth/">Change bandwidth on the fly</a></li><li><a href="src/content/peerconnection/change-codecs/">Change codecs before the call</a></li><li><a href="src/content/peerconnection/upgrade/">Upgrade a call and turn video on</a></li><li><a href="src/content/peerconnection/multiple/">Multiple peer connections at once</a></li><li><a href="src/content/peerconnection/multiple-relay/">Forward the output of one PC into another</a></li><li><a href="src/content/peerconnection/munge-sdp/">Munge SDP parameters</a></li><li><a href="src/content/peerconnection/pr-answer/">Use pranswer when setting up a peer connection</a></li><li><a href="src/content/peerconnection/constraints/">Constraints and stats</a></li><li><a href="src/content/peerconnection/old-new-stats/">More constraints and stats</a></li><li><a href="src/content/peerconnection/per-frame-callback/">RTCPeerConnection and requestVideoFrameCallback()</a></li><li><a href="src/content/peerconnection/create-offer/">Display createOffer output for various scenarios</a></li><li><a href="src/content/peerconnection/dtmf/">Use RTCDTMFSender</a></li><li><a href="src/content/peerconnection/states/">Display peer connection states</a></li><li><a href="src/content/peerconnection/trickle-ice/">ICE candidate gathering from STUN/TURN servers</a></li><li><a href="src/content/peerconnection/restart-ice/">Do an ICE restart</a></li><li><a href="src/content/peerconnection/webaudio-input/">Web Audio output as input to peer connection</a></li><li><a href="src/content/peerconnection/webaudio-output/">Peer connection as input to Web Audio</a></li><li><a href="src/content/peerconnection/negotiate-timing/">Measure how long renegotiation takes</a></li><li><a href="src/content/extensions/svc/">Choose scalablilityMode before call - Scalable Video Coding (SVC) Extension </a></li></ul><h2 id="datachannel"><ahref="https://developer.mozilla.org/en-US/docs/Web/API/RTCDataChannel">RTCDataChannel:</a></h2><p class="description">Send arbitrary data over peer connections</p><ul><li><a href="src/content/datachannel/basic/">Transmit text</a></li><li><a href="src/content/datachannel/filetransfer/">Transfer a file</a></li><li><a href="src/content/datachannel/datatransfer/">Transfer data</a></li><li><a href="src/content/datachannel/channel/">Basic datachannel demo between two tabs</a></li><li><a href="src/content/datachannel/messaging/">Messaging</a></li></ul><h2 id="videoChat">Video chat:</h2><p class="description">Full featured WebRTC application</p><ul><li><a href="https://github.com/webrtc/apprtc/">AppRTC video chat client</a> that you can run out of a Docker image</li></ul><h2 id="capture">Insertable Streams:</h2><p class="description">API for processing media</p><ul><li><a href="src/content/insertable-streams/endtoend-encryption">End to end encryption using WebRTC Insertable Streams</a></li> (Experimental)<li><a href="src/content/insertable-streams/video-analyzer">Video analyzer using WebRTC Insertable Streams</a></li> (Experimental)<li><a href="src/content/insertable-streams/video-processing">Video processing using MediaStream Insertable Streams</a></li> (Experimental)<li><a href="src/content/insertable-streams/audio-processing">Audio processing using MediaStream Insertable Streams</a></li> (Experimental)<li><a 
href="src/content/insertable-streams/video-crop">Video cropping using MediaStream Insertable Streams in a Worker</a></li> (Experimental)<li><a href="src/content/insertable-streams/webgpu">Integrations with WebGPU for custom video rendering:</a></li> (Experimental)</ul>   </section></div><script src="src/js/lib/ga.js"></script></body>
</html>

getUserMedia

getUserMedia basic example: open the camera
<template><video ref="videoRef" autoplay playsinline></video><button @click="openCamera">打开摄像头</button><button @click="closeCamera">关闭摄像头</button>
</template><script lang="ts" setup name="gum">import { ref } from 'vue';const videoRef = ref()let stream = null // 打开摄像头
const openCamera = async function () {stream = await navigator.mediaDevices.getUserMedia({audio: false,video: true});const videoTracks = stream.getVideoTracks();console.log(`Using video device: ${videoTracks[0].label}`);videoRef.value.srcObject = stream}// 关闭摄像头
const closeCamera = function() {const videoTracks = stream.getVideoTracks();stream.getTracks().forEach(function(track) {track.stop();});
}</script>
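As a follow-up, the getUserMedia call inside `openCamera` can take a constraints object instead of `video: true` to request a preferred resolution or camera; the values below are only an illustration, not part of the original sample:

// Ask for a preferred resolution and the front-facing camera; the browser
// picks the closest match the hardware can actually deliver.
stream = await navigator.mediaDevices.getUserMedia({
  audio: false,
  video: { width: { ideal: 1280 }, height: { ideal: 720 }, facingMode: 'user' }
});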
getUserMedia + canvas: screenshot
<template><video ref="videoRef" autoplay playsinline></video><button @click="shootScreen">截图</button><button @click="closeCamera">关闭摄像头</button><canvas ref="canvasRef"></canvas>
</template><script lang="ts" setup name="gum">import { ref, onMounted } from 'vue';const videoRef = ref()
const canvasRef = ref()
let stream = nullonMounted(() => {canvasRef.value.width = 480;canvasRef.value.height = 360;// 打开摄像头const openCamera = async function () {stream = await navigator.mediaDevices.getUserMedia({audio: false,video: true});const videoTracks = stream.getVideoTracks();console.log(`Using video device: ${videoTracks[0].label}`);videoRef.value.srcObject = stream}openCamera()})// 截图
const shootScreen = function () {canvasRef.value.width = videoRef.value.videoWidth;canvasRef.value.height = videoRef.value.videoHeight;canvasRef.value.getContext('2d').drawImage(videoRef.value, 0, 0, canvasRef.value.width, canvasRef.value.height);
}// 关闭摄像头
const closeCamera = function() {const videoTracks = stream.getVideoTracks();stream.getTracks().forEach(function(track) {track.stop();});
}
</script>
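Once the frame is on the canvas it can also be exported as an image with the standard canvas API; a small optional addition (not part of the original sample):

// Export the captured frame as a PNG and trigger a download.
const saveShot = function () {
  const a = document.createElement('a');
  a.href = canvasRef.value.toDataURL('image/png');
  a.download = 'snapshot.png';
  a.click();
};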

Open screen sharing

<template><video ref="myVideoRef" autoPlay playsinline  width="50%"></video><button @click="openCarmera">打开共享屏幕</button>
</template><script lang="ts" setup name="App">import {ref} from 'vue'const myVideoRef = ref()// 打开共享屏幕的代码const openScreen = async ()=>{const constraints = {video: true}try{const stream = await navigator.mediaDevices.getDisplayMedia(constraints);const videoTracks = stream.getTracks();console.log('使用的设备是: ' + videoTracks[0].label)myVideoRef.value.srcObject = stream}catch(error) {}}</script>
