Transcoding MP3 Audio with Azure Functions and Azure Batch

Customer requirements

The customer operates an online music streaming system that serves mobile users MP3 songs at bitrates chosen to match their network conditions, and offers a preview clip of no more than thirty seconds for songs the user has not purchased. One requirement of such a system is obvious: batches of high-bitrate audio files obtained from music publishers must periodically be transcoded into several lower-bitrate MP3 renditions plus a preview file. Since neither the volume nor the timing of a publisher's delivery is predictable, keeping a large fleet of transcoding servers permanently deployed would clearly waste resources; yet without enough capacity standing by, a large batch cannot be turned around quickly, which is unacceptable in an internet era where time matters more than money. This is exactly the situation where an elastic, pay-as-you-go public cloud platform is a natural fit.

Technology selection

The solution combines Azure Functions, Azure Batch, and Azure Blob Storage. All three are PaaS services, so there are no servers to manage and none of the routine patching and security upkeep that servers demand.

Architecture diagram:

Implementation:

We use an Azure Function to watch for blob changes. A major strength of Azure Functions is the variety of trigger types on offer (HTTP trigger, Blob trigger, Timer trigger, Queue trigger, and more); the Blob trigger is exactly what we need here to react to files arriving in Blob storage, as sketched below.
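For reference, here is a minimal, hedged sketch of how these trigger bindings are declared in precompiled C# functions. The function names, container, queue, and schedule values are placeholders for illustration, not part of the original project:

using System.IO;
using Microsoft.Azure.WebJobs;

public static class TriggerSamples
{
    // Blob trigger: fires whenever a blob is created or updated in the "source" container.
    [FunctionName("BlobSample")]
    public static void OnBlob([BlobTrigger("source/{name}")] Stream blob, string name) { }

    // Timer trigger: fires on an NCRONTAB schedule, here every five minutes.
    [FunctionName("TimerSample")]
    public static void OnTimer([TimerTrigger("0 */5 * * * *")] TimerInfo timer) { }

    // Queue trigger: fires once per message arriving on the "jobs" storage queue.
    [FunctionName("QueueSample")]
    public static void OnQueue([QueueTrigger("jobs")] string message) { }
}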

First, create an Azure Functions project.

Then select Blob trigger as the function's trigger type.

Create the ListeningBlob function:

using System; 
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Queue;

namespace MS.CSU.mp3encoder
{
    public static class ListeningBlob
    {
        static string key_Convert = Environment.GetEnvironmentVariable("KeyConvert") ?? "-i \"{0}\" -codec:a libmp3lame -b:a {1} \"{2}\" -y";
        static string work_Dir = Path.GetTempPath();
        static string targetStorageConnection = Environment.GetEnvironmentVariable("targetStorageConnection");
        static string sourceStorageConnection = Environment.GetEnvironmentVariable("sourceStorageConnection");
        static string bitRates = Environment.GetEnvironmentVariable("bitRates") ?? "192k;128k;64k";
        static string keyPreview = Environment.GetEnvironmentVariable("keyPreview") ?? "-ss 0 -t 29 -i \"{0}\" \"{1}\"";
        static CloudBlobClient blobOutputClient;
        static string blobOutputContainerName = Environment.GetEnvironmentVariable("outputContainer") ?? "output";
        static CloudBlobContainer blobOutputContainer;
        static CloudBlobClient blobInputClient;
        static CloudBlobContainer blobInputContainer;

        [FunctionName("ListeningBlob")]
        public static void Run([BlobTrigger("source/{name}", Connection = "sourceStorageConnection")] Stream myBlob, string name, Uri uri, TraceWriter log)
        {
            AzureBatch batch = new AzureBatch(sourceStorageConnection);
            // Give every audio file its own working folder so parallel jobs cannot collide.
            Guid jobId = Guid.NewGuid();
            log.Info($"Job:{jobId},C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes,Path:{uri.ToString()}");
            // Move the source blob into the target container so it leaves the monitored
            // container and cannot re-trigger this function.
            try
            {
                initBlobClient();
                CloudBlockBlob sourceBlob = blobInputContainer.GetBlockBlobReference($"{name}");
                name = Path.GetFileNameWithoutExtension(name);
                CloudBlockBlob targetBlob = blobOutputContainer.GetBlockBlobReference($"{name}_{jobId}/{name}.mp3");
                targetBlob.StartCopy(sourceBlob);
                sourceBlob.Delete();
                uri = targetBlob.Uri;
            }
            catch (Exception err)
            {
                log.Error($"Failed to move the source blob! Err:{err}");
                return;
            }
            // Queue one transcoding task per target bitrate.
            log.Info($"Bitrates to convert: {bitRates}");
            string[] bitsRateNames = bitRates.Split(';');
            foreach (var s in bitsRateNames)
            {
                if (string.IsNullOrWhiteSpace(s))
                    continue;
                var job = new EncodeJob()
                {
                    OutputName = $"{name}{s}.mp3",
                    Name = name,
                    Command = string.Format(key_Convert, name, s, $"{name}{s}.mp3"),
                    id = jobId,
                    InputUri = uri
                };
                batch.QueueTask(job);
            }
            // Queue one extra task that clips the 29-second preview.
            var previewJob = new EncodeJob()
            {
                Name = name,
                OutputName = $"{name}preview.mp3",
                Command = string.Format(keyPreview, name, $"{name}preview.mp3"),
                InputUri = uri,
                id = jobId,
            };
            batch.QueueTask(previewJob);
        }

        static void initBlobClient()
        {
            CloudStorageAccount storageOutputAccount = CloudStorageAccount.Parse(targetStorageConnection);
            // Create a blob client for interacting with the blob service.
            blobOutputClient = storageOutputAccount.CreateCloudBlobClient();
            blobOutputContainer = blobOutputClient.GetContainerReference(blobOutputContainerName);
            blobOutputContainer.CreateIfNotExists();
            // Initialize the monitored input container.
            CloudStorageAccount storageInputAccount = CloudStorageAccount.Parse(sourceStorageConnection);
            blobInputClient = storageInputAccount.CreateCloudBlobClient();
            blobInputContainer = blobInputClient.GetContainerReference("source");
        }
    }
}
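Both classes in this post pass EncodeJob objects around, but the post never shows the type itself. Below is a minimal sketch inferred from the properties the code actually reads and writes; the original definition may differ:

using System;

namespace MS.CSU.mp3encoder
{
    // Inferred from usage in ListeningBlob and AzureBatch; not shown in the original post.
    public class EncodeJob
    {
        public Guid id { get; set; }           // per-file job id, also used as a folder suffix
        public string Name { get; set; }       // source file name with the extension stripped
        public string OutputName { get; set; } // target file name, e.g. "song128k.mp3"
        public string Command { get; set; }    // fully expanded ffmpeg argument string
        public Uri InputUri { get; set; }      // URL of the source blob to download
    }
}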


Create a Batch account and collect its connection details: the account name, the account URL (of the form https://<accountname>.<region>.batch.azure.com), and an access key. The code reads these from the batchAccount, batchAccountUrl, and batchKey settings.

Download the latest ffmpeg build from https://ffmpeg.zeranoe.com/, compress ffmpeg.exe on its own into a zip file, and upload it to the Batch account as an application package so the tasks can invoke it.
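To make the command templates concrete, here is what the two format strings from ListeningBlob expand to for an assumed source file song.mp3 (the extension is stripped before formatting, and AddTasks below prefixes the ffmpeg.exe path from the application package):

// Transcode template (key_Convert): -i "{0}" -codec:a libmp3lame -b:a {1} "{2}" -y
// For name = "song" and bitrate "128k" the task command line becomes:
//   cmd /c %AZ_BATCH_APP_PACKAGE_ffmpeg#3.4%\ffmpeg.exe -i "song" -codec:a libmp3lame -b:a 128k "song128k.mp3" -y
//
// Preview template (keyPreview): -ss 0 -t 29 -i "{0}" "{1}"
//   cmd /c %AZ_BATCH_APP_PACKAGE_ffmpeg#3.4%\ffmpeg.exe -ss 0 -t 29 -i "song" "songpreview.mp3"
//
// -b:a sets the audio bitrate, libmp3lame selects the MP3 encoder, -y overwrites any
// existing output, and -ss 0 -t 29 clips the first 29 seconds for the preview.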

Build an AzureBatch class that submits the ffmpeg conversions to Azure Batch:

using Microsoft.Azure.Batch; 
using Microsoft.Azure.Batch.Auth;
using Microsoft.Azure.Batch.Common;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace MS.CSU.mp3encoder
{
    public class AzureBatch
    {
        // ffmpeg application package info
        string env_appPackageInfo = Environment.GetEnvironmentVariable("ffmpegversion") ?? "ffmpeg 3.4";
        string appPackageId = "ffmpeg";
        string appPackageVersion = "3.4";

        // Pool and Job constants
        private const string PoolId = "WinFFmpegPool";
        private const int DedicatedNodeCount = 0;
        private const int LowPriorityNodeCount = 5;
        // VM size for the transcoding nodes
        private const string PoolVMSize = "Standard_F2";
        private const string JobName = "WinFFmpegJob";

        string outputStorageConnection;
        string outputContainerName = "output";
        string batchAccount = Environment.GetEnvironmentVariable("batchAccount");
        string batchKey = Environment.GetEnvironmentVariable("batchKey");
        string batchAccountUrl = Environment.GetEnvironmentVariable("batchAccountUrl");
        // Number of tasks each compute node runs concurrently; tune to the chosen VM size and workload.
        string strMaxTaskPerNode = Environment.GetEnvironmentVariable("MaxTaskPerNode") ?? "4";
        int maxTaskPerNode = 4;

        public AzureBatch(string storageConnection)
        {
            outputStorageConnection = storageConnection;
        }

        // Constructor used when creating the Batch object from unit tests.
        public AzureBatch(string storageConnection, string _batchAccount, string _batchAccountUrl, string _batchKey)
        {
            outputStorageConnection = storageConnection;
            batchAccount = _batchAccount;
            batchAccountUrl = _batchAccountUrl;
            batchKey = _batchKey;
            maxTaskPerNode = int.TryParse(strMaxTaskPerNode, out maxTaskPerNode) ? maxTaskPerNode : 4;
            appPackageId = env_appPackageInfo.Split(' ')[0] ?? "ffmpeg";
            appPackageVersion = env_appPackageInfo.Split(' ')[1] ?? "3.4";
        }

        /// <summary>
        /// Returns a shared access signature (SAS) URL providing the specified permissions
        /// to the specified container. The SAS URL provided is valid for 2 hours from the
        /// time this method is called. The container must already exist in Azure Storage.
        /// </summary>
        private string GetContainerSasUrl(CloudBlobClient blobClient, string containerName, SharedAccessBlobPermissions permissions)
        {
            // No start time is specified, so the signature becomes valid immediately
            // and expires in 2 hours.
            SharedAccessBlobPolicy sasConstraints = new SharedAccessBlobPolicy
            {
                SharedAccessExpiryTime = DateTime.UtcNow.AddHours(2),
                Permissions = permissions
            };
            CloudBlobContainer container = blobClient.GetContainerReference(containerName);
            string sasContainerToken = container.GetSharedAccessSignature(sasConstraints);
            // Return the URL string for the container, including the SAS token.
            return String.Format("{0}{1}", container.Uri, sasContainerToken);
        }

        /// <summary>
        /// Creates the Batch pool if it does not already exist.
        /// </summary>
        private void CreatePoolIfNotExist(BatchClient batchClient, string poolId)
        {
            try
            {
                ImageReference imageReference = new ImageReference(
                    publisher: "MicrosoftWindowsServer",
                    offer: "WindowsServer",
                    sku: "2012-R2-Datacenter-smalldisk",
                    version: "latest");
                VirtualMachineConfiguration virtualMachineConfiguration = new VirtualMachineConfiguration(
                    imageReference: imageReference,
                    nodeAgentSkuId: "batch.node.windows amd64");
                // Create an unbound pool. No pool is actually created in the Batch service
                // until CloudPool.Commit() is called, so its properties can still be modified here.
                CloudPool pool = batchClient.PoolOperations.CreatePool(
                    poolId: poolId,
                    targetDedicatedComputeNodes: DedicatedNodeCount,
                    targetLowPriorityComputeNodes: LowPriorityNodeCount,
                    virtualMachineSize: PoolVMSize,
                    virtualMachineConfiguration: virtualMachineConfiguration);
                pool.MaxTasksPerComputeNode = maxTaskPerNode;
                // Install the ffmpeg application package on the compute nodes. This assumes
                // a Windows 64-bit zip of ffmpeg (e.g. ffmpeg-3.4-win64-static.zip) has been
                // uploaded to the Batch account with application ID "ffmpeg" and version "3.4".
                pool.ApplicationPackageReferences = new List<ApplicationPackageReference>
                {
                    new ApplicationPackageReference
                    {
                        ApplicationId = appPackageId,
                        Version = appPackageVersion
                    }
                };
                pool.Commit();
            }
            catch (BatchException be)
            {
                // PoolExists is expected if the pool already exists.
                if (be.RequestInformation?.BatchError?.Code != BatchErrorCodeStrings.PoolExists)
                {
                    throw; // Any other exception is unexpected
                }
            }
        }

        /// <summary>
        /// Creates a job in the specified pool if it does not already exist.
        /// </summary>
        private void CreateJobIfNotExist(BatchClient batchClient, string jobId, string poolId)
        {
            try
            {
                Console.WriteLine("Creating job [{0}]...", jobId);
                CloudJob job = batchClient.JobOperations.CreateJob();
                job.Id = $"{JobName}";
                job.PoolInformation = new PoolInformation { PoolId = poolId };
                job.Commit();
            }
            catch (BatchException be)
            {
                // JobExists is expected if the job already exists.
                if (be.RequestInformation?.BatchError?.Code == BatchErrorCodeStrings.JobExists)
                {
                    Console.WriteLine("The job {0} already existed when we tried to create it", jobId);
                }
                else
                {
                    throw; // Any other exception is unexpected
                }
            }
        }

        /// <summary>
        /// Creates a task for the given encode job and submits it to the Batch job for
        /// execution. The SAS URL lets the task upload its output to the Storage container.
        /// </summary>
        private List<CloudTask> AddTasks(BatchClient batchClient, EncodeJob job, string outputContainerSasUrl)
        {
            List<CloudTask> tasks = new List<CloudTask>();
            var taskId = String.Format("Task{0}", Guid.NewGuid());
            // Resolve the ffmpeg application package path on the compute node and build
            // the task command line from the pre-formatted ffmpeg arguments.
            string appPath = String.Format("%AZ_BATCH_APP_PACKAGE_{0}#{1}%", appPackageId, appPackageVersion);
            string inputMediaFile = job.Name;
            string outputMediaFile = job.OutputName;
            string taskCommandLine = String.Format("cmd /c {0}\\ffmpeg.exe {1}", appPath, job.Command);
            CloudTask task = new CloudTask(taskId, taskCommandLine);
            task.ApplicationPackageReferences = new List<ApplicationPackageReference>
            {
                new ApplicationPackageReference
                {
                    ApplicationId = appPackageId,
                    Version = appPackageVersion
                }
            };
            task.ResourceFiles = new List<ResourceFile>();
            task.ResourceFiles.Add(new ResourceFile(Uri.EscapeUriString(job.InputUri.ToString()), inputMediaFile));
            // The task's output file is uploaded to the output container in Storage on success.
            List<OutputFile> outputFileList = new List<OutputFile>();
            OutputFileBlobContainerDestination outputContainer = new OutputFileBlobContainerDestination(outputContainerSasUrl, $"{job.Name}_{job.id}/{job.OutputName}");
            OutputFile outputFile = new OutputFile(outputMediaFile,
                new OutputFileDestination(outputContainer),
                new OutputFileUploadOptions(OutputFileUploadCondition.TaskSuccess));
            outputFileList.Add(outputFile);
            task.OutputFiles = outputFileList;
            tasks.Add(task);
            // Adding tasks as a collection rather than one call per task keeps the
            // underlying requests to the Batch service efficient.
            batchClient.JobOperations.AddTask($"{JobName}", tasks);
            return tasks;
        }

        private CloudBlobClient initBlobClient()
        {
            CloudStorageAccount storageOutputAccount = CloudStorageAccount.Parse(outputStorageConnection);
            // Create a blob client for interacting with the blob service.
            var blobOutputClient = storageOutputAccount.CreateCloudBlobClient();
            return blobOutputClient;
        }

        public void QueueTask(EncodeJob job)
        {
            BatchSharedKeyCredentials sharedKeyCredentials = new BatchSharedKeyCredentials(batchAccountUrl, batchAccount, batchKey);
            var blobClient = initBlobClient();
            var outputContainerSasUrl = GetContainerSasUrl(blobClient, outputContainerName, SharedAccessBlobPermissions.Write);
            using (BatchClient batchClient = BatchClient.Open(sharedKeyCredentials))
            {
                // Create the Batch pool that holds the compute nodes which execute the tasks.
                CreatePoolIfNotExist(batchClient, PoolId);
                // Create the job that runs the tasks.
                CreateJobIfNotExist(batchClient, $"{JobName}", PoolId);
                // Add the task to the Batch job, passing the SAS URL so it can upload its output.
                AddTasks(batchClient, job, outputContainerSasUrl);
            }
        }

        public async Task<Tuple<string, int>> GetStatus()
        {
            BatchSharedKeyCredentials sharedKeyCredentials = new BatchSharedKeyCredentials(batchAccountUrl, batchAccount, batchKey);
            string result = "Retrieving task information...";
            int total = 0;
            using (BatchClient batchClient = BatchClient.Open(sharedKeyCredentials))
            {
                var counts = await batchClient.JobOperations.GetJobTaskCountsAsync(JobName);
                total = counts.Active + counts.Running + counts.Completed;
                result = $"Total tasks: {total}, waiting: {counts.Active}, running: {counts.Running}, succeeded: {counts.Succeeded}, failed: {counts.Failed}";
            }
            return new Tuple<string, int>(result, total);
        }
    }
}
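As a usage illustration, the class can also be driven outside the function host through its unit-test constructor. Everything below, including account names, URLs, keys, and the blob URL, is a placeholder and not taken from the original post:

using System;
using System.Threading.Tasks;
using MS.CSU.mp3encoder;

class AzureBatchSmokeTest
{
    // Requires C# 7.1+ for async Main. All credential values are placeholders.
    static async Task Main()
    {
        var batch = new AzureBatch(
            "DefaultEndpointsProtocol=https;AccountName=<storage account>;AccountKey=<storage key>",
            "mybatchaccount",                                  // Batch account name (placeholder)
            "https://mybatchaccount.eastasia.batch.azure.com", // Batch account URL (placeholder)
            "<batch access key>");

        // Queue a single 128k transcode of an already-uploaded blob.
        batch.QueueTask(new EncodeJob
        {
            id = Guid.NewGuid(),
            Name = "song",
            OutputName = "song128k.mp3",
            Command = "-i \"song\" -codec:a libmp3lame -b:a 128k \"song128k.mp3\" -y",
            InputUri = new Uri("https://mystorage.blob.core.windows.net/output/song.mp3")
        });

        // Poll the aggregate task counts once.
        var status = await batch.GetStatus();
        Console.WriteLine(status.Item1);
    }
}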

Because the maximum timeout for an Azure Function is 10 minutes, synchronously transcoding large files would often fail with a timeout, so the function returns as soon as the Batch tasks are queued and lets them run in the background. To monitor their progress, we add a timer-triggered function (the NCRONTAB expression "0 */5 * * * *" below fires every five minutes) that queries the task counts and writes the status into status.html in the output blob container.

using System; 
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

namespace MS.CSU.mp3encoder
{
    /// <summary>
    /// Periodically updates the processing status of the Batch tasks.
    /// </summary>
    public static class StatusUpdate
    {
        static int lastTotal = 0;
        static DateTime lastSubmitTime;
        static string targetStorageConnection = Environment.GetEnvironmentVariable("targetStorageConnection");
        static CloudBlobClient blobOutputClient;
        static string blobOutputContainerName = Environment.GetEnvironmentVariable("outputContainer") ?? "output";
        static CloudBlobContainer blobOutputContainer;

        [FunctionName("StatusUpdate")]
        public async static Task Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, TraceWriter log)
        {
            string strStatus = "";
            int jobCount = 0;
            try
            {
                AzureBatch batch = new AzureBatch(targetStorageConnection);
                var result = await batch.GetStatus();
                strStatus = result.Item1;
                jobCount = result.Item2 - lastTotal;
                if (lastTotal != result.Item2)
                {
                    lastTotal = result.Item2;
                    lastSubmitTime = DateTime.Now;
                }
            }
            catch (Exception err)
            {
                strStatus = Uri.EscapeDataString(err.ToString());
            }
            initBlobContainer();
            var statusBlob = blobOutputContainer.GetBlockBlobReference("status.html");
            // Write a self-refreshing status page; times are shifted to UTC+8 for display.
            string htmlStatus = $@"<html><head><meta http-equiv=""refresh"" content=""5""><meta charset=""utf-8""></head><body><h1>{strStatus}</h1><br/><h1>Last updated: {DateTime.Now.AddHours(8)}</h1><h1>Last task submission: {lastSubmitTime.AddHours(8)}</h1><h2>{jobCount} tasks submitted in the last five minutes</h2></body></html>";
            await statusBlob.UploadTextAsync(htmlStatus);
        }

        private static void initBlobContainer()
        {
            CloudStorageAccount storageOutputAccount = CloudStorageAccount.Parse(targetStorageConnection);
            // Create a blob client for interacting with the blob service.
            blobOutputClient = storageOutputAccount.CreateCloudBlobClient();
            blobOutputContainer = blobOutputClient.GetContainerReference(blobOutputContainerName);
            blobOutputContainer.CreateIfNotExists();
        }
    }
}
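Pulling together the Environment.GetEnvironmentVariable calls from all three classes, the function app expects roughly the following settings, shown here as a local.settings.json sketch for local debugging. Every value is a placeholder; bitRates, outputContainer, ffmpegversion, MaxTaskPerNode, KeyConvert, and keyPreview are optional and fall back to the defaults hard-coded above when omitted:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<functions runtime storage connection string>",
    "sourceStorageConnection": "<connection string of the account holding the source container>",
    "targetStorageConnection": "<connection string of the account holding the output container>",
    "batchAccount": "mybatchaccount",
    "batchAccountUrl": "https://mybatchaccount.eastasia.batch.azure.com",
    "batchKey": "<batch access key>",
    "bitRates": "192k;128k;64k",
    "outputContainer": "output",
    "ffmpegversion": "ffmpeg 3.4",
    "MaxTaskPerNode": "4"
  }
}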

Final result:


References:

Azure Blob storage bindings for Azure Functions

Timer trigger for Azure Functions

Azure Batch .NET File Processing with ffmpeg

FFmpeg MP3 Encoding Guide

Original article: http://www.cnblogs.com/wing-ms/p/8423221.html

