A Look at PowerJob's Log Reporting and Storage

This article examines how PowerJob reports and stores task instance logs.

OmsLoggerFactory.build

tech/powerjob/worker/log/OmsLoggerFactory.java

public class OmsLoggerFactory {

    public static OmsLogger build(Long instanceId, String logConfig, WorkerRuntime workerRuntime) {
        LogConfig cfg;
        if (StringUtils.isEmpty(logConfig)) {
            cfg = new LogConfig();
        } else {
            try {
                cfg = JsonUtils.parseObject(logConfig, LogConfig.class);
            } catch (Exception ignore) {
                cfg = new LogConfig();
            }
        }
        switch (LogType.of(cfg.getType())) {
            case LOCAL:
                return new OmsLocalLogger(cfg);
            case STDOUT:
                return new OmsStdOutLogger(cfg);
            case NULL:
                return new OmsNullLogger();
            case LOCAL_AND_ONLINE:
                return new OmsServerAndLocalLogger(cfg, instanceId, workerRuntime.getOmsLogHandler());
            default:
                return new OmsServerLogger(cfg, instanceId, workerRuntime.getOmsLogHandler());
        }
    }
}

By default logConfig is null, so cfg is a plain new LogConfig(), and the logger that gets built is an OmsServerLogger.

OmsServerLogger

tech/powerjob/worker/log/impl/OmsServerLogger.java

public class OmsServerLogger extends AbstractOmsLogger {

    private final long instanceId;
    private final OmsLogHandler omsLogHandler;

    public OmsServerLogger(LogConfig logConfig, long instanceId, OmsLogHandler omsLogHandler) {
        super(logConfig);
        this.instanceId = instanceId;
        this.omsLogHandler = omsLogHandler;
    }

    @Override
    public void debug0(String messagePattern, Object... args) {
        process(LogLevel.DEBUG, messagePattern, args);
    }

    @Override
    public void info0(String messagePattern, Object... args) {
        process(LogLevel.INFO, messagePattern, args);
    }

    @Override
    public void warn0(String messagePattern, Object... args) {
        process(LogLevel.WARN, messagePattern, args);
    }

    @Override
    public void error0(String messagePattern, Object... args) {
        process(LogLevel.ERROR, messagePattern, args);
    }

    private void process(LogLevel level, String messagePattern, Object... args) {
        String logContent = genLogContent(messagePattern, args);
        omsLogHandler.submitLog(instanceId, level, logContent);
    }
}

OmsServerLogger's process method renders the message pattern via genLogContent and then delegates to OmsLogHandler's submitLog method.
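The messagePattern arguments use slf4j-style `{}` placeholders, which genLogContent expands before the content is submitted. Below is a minimal, self-contained sketch of that substitution; the `format` helper is hypothetical and only illustrates the contract, it is not PowerJob's actual genLogContent.

```java
// Hypothetical sketch of slf4j-style "{}" placeholder substitution,
// illustrating the contract genLogContent follows.
public class LogPatternDemo {

    static String format(String pattern, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIdx = 0;
        int i = 0;
        while (i < pattern.length()) {
            int j = pattern.indexOf("{}", i);
            if (j < 0 || argIdx >= args.length) {
                // no more placeholders (or no more args): copy the rest verbatim
                sb.append(pattern.substring(i));
                break;
            }
            sb.append(pattern, i, j).append(args[argIdx++]);
            i = j + 2;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // prints: instance 123 finished with SUCCESS
        System.out.println(format("instance {} finished with {}", 123L, "SUCCESS"));
    }
}
```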

submitLog

tech/powerjob/worker/background/OmsLogHandler.java

@Slf4j
public class OmsLogHandler {

    private final String workerAddress;
    private final Transporter transporter;
    private final ServerDiscoveryService serverDiscoveryService;

    // submission task; needs to be started via a thread
    public final Runnable logSubmitter = new LogSubmitter();
    // report lock; only one thread needs to report at a time
    private final Lock reportLock = new ReentrantLock();
    // producer-consumer pattern: upload logs asynchronously
    private final BlockingQueue<InstanceLogContent> logQueue = Queues.newLinkedBlockingQueue(10240);

    // number of entries carried per report
    private static final int BATCH_SIZE = 20;
    // local accumulation threshold
    private static final int REPORT_SIZE = 1024;

    public OmsLogHandler(String workerAddress, Transporter transporter, ServerDiscoveryService serverDiscoveryService) {
        this.workerAddress = workerAddress;
        this.transporter = transporter;
        this.serverDiscoveryService = serverDiscoveryService;
    }

    /**
     * Submit a log entry
     * @param instanceId task instance ID
     * @param logContent log content
     */
    public void submitLog(long instanceId, LogLevel logLevel, String logContent) {
        if (logQueue.size() > REPORT_SIZE) {
            // a thread's lifecycle cannot be recycled: a finished Thread object cannot be
            // started again, so a new one is created (and destroyed) each time
            new Thread(logSubmitter).start();
        }
        InstanceLogContent tuple = new InstanceLogContent(instanceId, System.currentTimeMillis(), logLevel.getV(), logContent);
        boolean offerRet = logQueue.offer(tuple);
        if (!offerRet) {
            log.warn("[OmsLogHandler] [{}] submit log failed, maybe your log speed is too fast!", instanceId);
        }
    }

    // ......
}

OmsLogHandler's submitLog method first checks whether logQueue has grown beyond REPORT_SIZE (1024); if so, it spawns a new thread running logSubmitter to drain it. Either way, the log entry is then offered to logQueue (non-blocking), and a warning is logged if the queue is full and the entry has to be dropped.
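The producer side can be sketched with a plain bounded LinkedBlockingQueue: offer() never blocks, and entries beyond capacity are simply dropped. This is a hypothetical, self-contained sketch; the class, method names, and capacity are illustrative, not PowerJob's.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of the producer side of submitLog: a bounded queue absorbs
// log lines, offer() never blocks, and overflow is dropped with a warning.
public class BoundedLogQueueDemo {

    /** Offers every line to the queue; returns how many were dropped because it was full. */
    static int submitAll(BlockingQueue<String> queue, String... lines) {
        int dropped = 0;
        for (String line : lines) {
            if (!queue.offer(line)) {
                // mirrors the "submit log failed, maybe your log speed is too fast" branch
                dropped++;
            }
        }
        return dropped;
    }

    public static void main(String[] args) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(4); // the real capacity is 10240
        int dropped = submitAll(queue, "a", "b", "c", "d", "e", "f");
        // prints: queued=4 dropped=2
        System.out.println("queued=" + queue.size() + " dropped=" + dropped);
    }
}
```

Dropping on overflow (rather than blocking the business thread) is the same trade-off the worker makes: log delivery is best-effort and must never stall task execution.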

LogSubmitter

tech/powerjob/worker/background/OmsLogHandler.java

    private class LogSubmitter implements Runnable {

        @Override
        public void run() {
            boolean lockResult = reportLock.tryLock();
            if (!lockResult) {
                return;
            }
            try {
                final String currentServerAddress = serverDiscoveryService.getCurrentServerAddress();
                // no server currently available
                if (StringUtils.isEmpty(currentServerAddress)) {
                    if (!logQueue.isEmpty()) {
                        logQueue.clear();
                        log.warn("[OmsLogHandler] because there is no available server to report logs which leads to queue accumulation, oms discarded all logs.");
                    }
                    return;
                }
                List<InstanceLogContent> logs = Lists.newLinkedList();
                while (!logQueue.isEmpty()) {
                    try {
                        InstanceLogContent logContent = logQueue.poll(100, TimeUnit.MILLISECONDS);
                        logs.add(logContent);
                        if (logs.size() >= BATCH_SIZE) {
                            WorkerLogReportReq req = new WorkerLogReportReq(workerAddress, Lists.newLinkedList(logs));
                            // unreliable request; web logs do not pursue guaranteed delivery
                            TransportUtils.reportLogs(req, currentServerAddress, transporter);
                            logs.clear();
                        }
                    } catch (Exception ignore) {
                        break;
                    }
                }
                if (!logs.isEmpty()) {
                    WorkerLogReportReq req = new WorkerLogReportReq(workerAddress, logs);
                    TransportUtils.reportLogs(req, currentServerAddress, transporter);
                }
            } finally {
                reportLock.unlock();
            }
        }
    }

LogSubmitter keeps polling entries from logQueue and, each time logs reaches BATCH_SIZE (20) entries, reports them to the server via TransportUtils.reportLogs; any remaining partial batch is flushed after the queue is drained.
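The batching loop above can be sketched without the networking: drain the queue, flush every batchSize entries, and flush the trailing partial batch at the end. In this self-contained sketch the flush counter stands in for the TransportUtils.reportLogs call.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hedged sketch of LogSubmitter's batching loop; the flush counter replaces
// the TransportUtils.reportLogs network call.
public class BatchDrainDemo {

    /** Drains the queue in batches and returns the number of flushes (reports). */
    static int drain(Queue<String> queue, int batchSize) {
        List<String> batch = new ArrayList<>();
        int flushes = 0;
        while (!queue.isEmpty()) {
            batch.add(queue.poll());
            if (batch.size() >= batchSize) {
                flushes++;        // a full batch goes out
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            flushes++;            // whatever is left also goes out
        }
        return flushes;
    }

    public static void main(String[] args) {
        Queue<String> q = new ArrayDeque<>();
        for (int i = 0; i < 7; i++) {
            q.add("line-" + i);
        }
        // 7 entries with batchSize 3 -> two full batches plus one partial: prints flushes=3
        System.out.println("flushes=" + drain(q, 3));
    }
}
```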

reportLogs

tech/powerjob/worker/common/utils/TransportUtils.java

    public static void reportLogs(WorkerLogReportReq req, String address, Transporter transporter) {
        final URL url = easyBuildUrl(ServerType.SERVER, S4W_PATH, S4W_HANDLER_REPORT_LOG, address);
        transporter.tell(url, req);
    }

reportLogs sends the request to the server's S4W_HANDLER_REPORT_LOG handler (a fire-and-forget tell, matching the "unreliable request" comment above).

processWorkerLogReport

tech/powerjob/server/core/handler/AbWorkerRequestHandler.java

    @Override
    @Handler(path = S4W_HANDLER_REPORT_LOG, processType = ProcessType.NO_BLOCKING)
    public void processWorkerLogReport(WorkerLogReportReq req) {
        WorkerLogReportEvent event = new WorkerLogReportEvent()
                .setWorkerAddress(req.getWorkerAddress())
                .setLogNum(req.getInstanceLogContents().size());
        try {
            processWorkerLogReport0(req, event);
            event.setStatus(WorkerLogReportEvent.Status.SUCCESS);
        } catch (RejectedExecutionException re) {
            event.setStatus(WorkerLogReportEvent.Status.REJECTED);
        } catch (Throwable t) {
            event.setStatus(WorkerLogReportEvent.Status.EXCEPTION);
            log.warn("[WorkerRequestHandler] process worker report failed!", t);
        } finally {
            monitorService.monitor(event);
        }
    }

On the server side, processWorkerLogReport receives the WorkerLogReportReq and delegates to the processWorkerLogReport0 method.

WorkerRequestHandlerImpl

tech/powerjob/server/core/handler/WorkerRequestHandlerImpl.java

    protected void processWorkerLogReport0(WorkerLogReportReq req, WorkerLogReportEvent event) {
        // this should not be a bottleneck... just some checks plus a Map#get
        instanceLogService.submitLogs(req.getWorkerAddress(), req.getInstanceLogContents());
    }

WorkerRequestHandlerImpl's processWorkerLogReport0 simply calls instanceLogService.submitLogs.

submitLogs

tech/powerjob/server/core/instance/InstanceLogService.java

    @Async(value = PJThreadPool.LOCAL_DB_POOL)
    public void submitLogs(String workerAddress, List<InstanceLogContent> logs) {
        List<LocalInstanceLogDO> logList = logs.stream().map(x -> {
            instanceId2LastReportTime.put(x.getInstanceId(), System.currentTimeMillis());

            LocalInstanceLogDO y = new LocalInstanceLogDO();
            BeanUtils.copyProperties(x, y);
            y.setWorkerAddress(workerAddress);
            return y;
        }).collect(Collectors.toList());

        try {
            CommonUtils.executeWithRetry0(() -> localInstanceLogRepository.saveAll(logList));
        } catch (Exception e) {
            log.warn("[InstanceLogService] persistent instance logs failed, these logs will be dropped: {}.", logs, e);
        }
    }

InstanceLogService's submitLogs is an asynchronous method: it converts each InstanceLogContent into a LocalInstanceLogDO (recording the last report time per instance along the way), then persists the batch via localInstanceLogRepository.saveAll, wrapped in a retry.
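CommonUtils.executeWithRetry0 retries the save a bounded number of times before giving up. Below is an illustrative sketch of that retry idea; the helper and its signature are hypothetical, not PowerJob's actual CommonUtils implementation.

```java
import java.util.concurrent.Callable;

// Illustrative sketch of the executeWithRetry0 idea: retry a persistence call a
// bounded number of times, rethrowing the last failure if all attempts fail.
public class RetryDemo {

    static <T> T executeWithRetry(Callable<T> call, int maxTimes) throws Exception {
        Exception last = null;
        for (int i = 0; i < maxTimes; i++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e; // remember the failure and try again
            }
        }
        throw last;
    }

    /** Simulates a call that fails `failures` times; returns what happened. */
    static String attempt(int failures, int maxTimes) {
        int[] attempts = {0};
        try {
            return executeWithRetry(() -> {
                if (++attempts[0] <= failures) {
                    throw new IllegalStateException("transient failure");
                }
                return "saved@" + attempts[0];
            }, maxTimes);
        } catch (Exception e) {
            return "gave-up@" + attempts[0];
        }
    }

    public static void main(String[] args) {
        System.out.println(attempt(2, 5)); // succeeds on the third try: saved@3
        System.out.println(attempt(9, 3)); // exhausts all retries: gave-up@3
    }
}
```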

LocalJpaConfig

tech/powerjob/server/persistence/config/LocalJpaConfig.java

@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(
        // repository package
        basePackages = LocalJpaConfig.LOCAL_PACKAGES,
        // entity manager bean name
        entityManagerFactoryRef = "localEntityManagerFactory",
        // transaction manager bean name
        transactionManagerRef = "localTransactionManager"
)
public class LocalJpaConfig {

    public static final String LOCAL_PACKAGES = "tech.powerjob.server.persistence.local";

    private static Map<String, Object> genDatasourceProperties() {
        JpaProperties jpaProperties = new JpaProperties();
        jpaProperties.setOpenInView(false);
        jpaProperties.setShowSql(false);

        HibernateProperties hibernateProperties = new HibernateProperties();
        // drop the data on every startup (after a restart the original instances have been
        // failed over to another server, so the old log data is useless anyway)
        hibernateProperties.setDdlAuto("create");
        return hibernateProperties.determineHibernateProperties(jpaProperties.getProperties(), new HibernateSettings());
    }

    @Bean(name = "localEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean initLocalEntityManagerFactory(
            @Qualifier("omsLocalDatasource") DataSource omsLocalDatasource,
            EntityManagerFactoryBuilder builder) {
        return builder
                .dataSource(omsLocalDatasource)
                .properties(genDatasourceProperties())
                .packages(LOCAL_PACKAGES)
                .persistenceUnit("localPersistenceUnit")
                .build();
    }

    @Bean(name = "localTransactionManager")
    public PlatformTransactionManager initLocalTransactionManager(
            @Qualifier("localEntityManagerFactory") LocalContainerEntityManagerFactoryBean localContainerEntityManagerFactoryBean) {
        return new JpaTransactionManager(Objects.requireNonNull(localContainerEntityManagerFactoryBean.getObject()));
    }

    @Bean(name = "localTransactionTemplate")
    public TransactionTemplate initTransactionTemplate(@Qualifier("localTransactionManager") PlatformTransactionManager ptm) {
        TransactionTemplate tt = new TransactionTemplate(ptm);
        // set the isolation level
        tt.setIsolationLevel(TransactionDefinition.ISOLATION_DEFAULT);
        return tt;
    }
}

LocalJpaConfig wires the repositories under tech.powerjob.server.persistence.local to the omsLocalDatasource data source.

MultiDatasourceConfig

tech/powerjob/server/persistence/config/MultiDatasourceConfig.java

@Configuration
public class MultiDatasourceConfig {

    private static final String H2_DRIVER_CLASS_NAME = "org.h2.Driver";
    private static final String H2_JDBC_URL_PATTERN = "jdbc:h2:file:%spowerjob_server_db";
    private static final int H2_MIN_SIZE = 4;
    private static final int H2_MAX_ACTIVE_SIZE = 10;

    @Primary
    @Bean("omsRemoteDatasource")
    @ConfigurationProperties(prefix = "spring.datasource.core")
    public DataSource initOmsCoreDatasource() {
        return DataSourceBuilder.create().build();
    }

    @Bean("omsLocalDatasource")
    public DataSource initOmsLocalDatasource() {
        String h2Path = OmsFileUtils.genH2WorkPath();
        HikariConfig config = new HikariConfig();
        config.setDriverClassName(H2_DRIVER_CLASS_NAME);
        config.setJdbcUrl(String.format(H2_JDBC_URL_PATTERN, h2Path));
        config.setAutoCommit(true);
        // minimum number of idle connections in the pool
        config.setMinimumIdle(H2_MIN_SIZE);
        // maximum number of connections in the pool
        config.setMaximumPoolSize(H2_MAX_ACTIVE_SIZE);
        // delete the file on JVM shutdown
        try {
            FileUtils.forceDeleteOnExit(new File(h2Path));
        } catch (Exception ignore) {
        }
        return new HikariDataSource(config);
    }
}

MultiDatasourceConfig defines two data sources: a remote one (e.g. MySQL) and a local H2 data source backed by a file that is deleted on JVM shutdown.

processFinishedInstance

tech/powerjob/server/core/instance/InstanceManager.java

    public void processFinishedInstance(Long instanceId, Long wfInstanceId, InstanceStatus status, String result) {
        log.info("[Instance-{}] process finished, final status is {}.", instanceId, status.name());

        // report log data
        HashedWheelTimerHolder.INACCURATE_TIMER.schedule(() -> instanceLogService.sync(instanceId), 60, TimeUnit.SECONDS);

        // special handling for workflows
        if (wfInstanceId != null) {
            // a manual stop inside a workflow is also treated as a failure (in theory this should not happen)
            workflowInstanceManager.move(wfInstanceId, instanceId, status, result);
        }

        // alerting
        if (status == InstanceStatus.FAILED) {
            alert(instanceId, result);
        }

        // proactively evict the cache to reduce memory usage
        instanceMetadataService.invalidateJobInfo(instanceId);
    }

InstanceManager's processFinishedInstance method schedules instanceLogService.sync(instanceId) to run 60 seconds later.
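The "run sync N time units after completion" pattern can be sketched with a plain ScheduledExecutorService in place of PowerJob's hashed wheel timer. This is an assumption-laden demo: the delay is shrunk from 60 seconds to milliseconds, and the class and method are illustrative only.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch of delayed one-shot scheduling; a plain ScheduledExecutorService stands in
// for HashedWheelTimerHolder.INACCURATE_TIMER, and the delay is in milliseconds.
public class DelayedSyncDemo {

    static String syncAfter(long delayMillis, long instanceId) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        try {
            ScheduledFuture<String> f =
                    timer.schedule(() -> "synced-" + instanceId, delayMillis, TimeUnit.MILLISECONDS);
            return f.get(5, TimeUnit.SECONDS); // wait for the delayed task to fire
        } catch (Exception e) {
            throw new IllegalStateException(e);
        } finally {
            timer.shutdown();
        }
    }

    public static void main(String[] args) {
        // prints: synced-42
        System.out.println(syncAfter(20, 42L));
    }
}
```

PowerJob uses an "inaccurate" wheel timer here because a few seconds of drift on a best-effort log sync is harmless and the wheel is cheaper at scale than a scheduled thread pool.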

sync

tech/powerjob/server/core/instance/InstanceLogService.java

    @Async(PJThreadPool.BACKGROUND_POOL)
    public void sync(Long instanceId) {
        Stopwatch sw = Stopwatch.createStarted();
        try {
            // persist to a local file first
            File stableLogFile = genStableLogFile(instanceId);
            // push the file to MongoDB (or whichever DFS implementation is active)
            FileLocation dfsFL = new FileLocation().setBucket(Constants.LOG_BUCKET).setName(genMongoFileName(instanceId));
            try {
                dFsService.store(new StoreRequest().setLocalFile(stableLogFile).setFileLocation(dfsFL));
                log.info("[InstanceLog-{}] push local instanceLogs to mongoDB succeed, using: {}.", instanceId, sw.stop());
            } catch (Exception e) {
                log.warn("[InstanceLog-{}] push local instanceLogs to mongoDB failed.", instanceId, e);
            }
        } catch (Exception e) {
            log.warn("[InstanceLog-{}] sync local instanceLogs failed.", instanceId, e);
        }
        // delete the data from the local database
        try {
            instanceId2LastReportTime.remove(instanceId);
            CommonUtils.executeWithRetry0(() -> localInstanceLogRepository.deleteByInstanceId(instanceId));
            log.info("[InstanceLog-{}] delete local instanceLog successfully.", instanceId);
        } catch (Exception e) {
            log.warn("[InstanceLog-{}] delete local instanceLog failed.", instanceId, e);
        }
    }

InstanceLogService's sync method first persists the instance's logs to a local log file on the server via genStableLogFile, then stores that file through dFsService, which has oss, gridfs, minio, and mysql implementations; which one is active depends on the server configuration, and with the mysql implementation the file ends up in the powerjob_files table. Finally, it removes the instance's rows from the LOCAL_INSTANCE_LOG table in H2 via localInstanceLogRepository.deleteByInstanceId.

genStableLogFile

    private File genStableLogFile(long instanceId) {
        String path = genLogFilePath(instanceId, true);
        int lockId = ("stFileLock-" + instanceId).hashCode();
        try {
            segmentLock.lockInterruptibleSafe(lockId);
            return localTransactionTemplate.execute(status -> {
                File f = new File(path);
                if (f.exists()) {
                    return f;
                }
                try {
                    // create the parent directories (the file itself is created when the stream is opened)
                    FileUtils.forceMkdirParent(f);
                    // data exists locally: persist from the local database (the SYNC case)
                    if (instanceId2LastReportTime.containsKey(instanceId)) {
                        try (Stream<LocalInstanceLogDO> allLogStream = localInstanceLogRepository.findByInstanceIdOrderByLogTime(instanceId)) {
                            stream2File(allLogStream, f);
                        }
                    } else {
                        FileLocation dfl = new FileLocation().setBucket(Constants.LOG_BUCKET).setName(genMongoFileName(instanceId));
                        Optional<FileMeta> dflMetaOpt = dFsService.fetchFileMeta(dfl);
                        if (!dflMetaOpt.isPresent()) {
                            OmsFileUtils.string2File("SYSTEM: There is no online log for this job instance.", f);
                            return f;
                        }
                        dFsService.download(new DownloadRequest().setTarget(f).setFileLocation(dfl));
                    }
                    return f;
                } catch (Exception e) {
                    CommonUtils.executeIgnoreException(() -> FileUtils.forceDelete(f));
                    throw new RuntimeException(e);
                }
            });
        } finally {
            segmentLock.unlock(lockId);
        }
    }

    private static String genLogFilePath(long instanceId, boolean stable) {
        if (stable) {
            return OmsFileUtils.genLogDirPath() + String.format("%d-stable.log", instanceId);
        } else {
            return OmsFileUtils.genLogDirPath() + String.format("%d-temporary.log", instanceId);
        }
    }

genStableLogFile first checks whether the server already has the stable log file for this instance (~/powerjob/server/online_log/%d-stable.log) and returns it directly if so. Otherwise, if instanceId2LastReportTime still contains the instance, it pulls the log rows from localInstanceLogRepository and writes them to the file; if not, it fetches the file meta via dFsService.fetchFileMeta and downloads the file locally before returning it.
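The stream2File step boils down to materializing an ordered stream of log rows into a stable file, reusing the file if it already exists. Below is a hypothetical, self-contained sketch using java.nio; the method names and file layout are illustrative, not PowerJob's actual helpers.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Stream;

// Hypothetical sketch of the stream2File idea: materialize an ordered stream of
// log rows into a "stable" file, reusing the file if it already exists.
public class Stream2FileDemo {

    static Path persist(Stream<String> rows, Path target) throws IOException {
        if (Files.exists(target)) {
            return target;                           // already materialized, reuse it
        }
        Files.createDirectories(target.getParent()); // mirrors FileUtils.forceMkdirParent
        Files.write(target, (Iterable<String>) rows::iterator);
        return target;
    }

    /** Writes the given lines to a temp "stable" file and reads them back. */
    static List<String> roundTrip(String... lines) {
        try {
            Path dir = Files.createTempDirectory("online_log");
            Path f = persist(Stream.of(lines), dir.resolve("123-stable.log"));
            return Files.readAllLines(f);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("[INFO] start", "[INFO] done"));
    }
}
```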

Related table schemas

LOCAL_INSTANCE_LOG

CREATE TABLE PUBLIC.LOCAL_INSTANCE_LOG (
    ID BIGINT NOT NULL AUTO_INCREMENT,
    INSTANCE_ID BIGINT,
    LOG_CONTENT CHARACTER VARYING,
    LOG_LEVEL INTEGER,
    LOG_TIME BIGINT,
    WORKER_ADDRESS CHARACTER VARYING(255),
    CONSTRAINT CONSTRAINT_8 PRIMARY KEY (ID)
);
CREATE INDEX IDXPJ6CD8W5EAW8QBKMD84I8KYS7 ON PUBLIC.LOCAL_INSTANCE_LOG (INSTANCE_ID);
CREATE UNIQUE INDEX PRIMARY_KEY_8 ON PUBLIC.LOCAL_INSTANCE_LOG (ID);

powerjob_files

CREATE TABLE `powerjob_files` (
    `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'ID',
    `bucket` varchar(255) COLLATE utf8mb4_general_ci NOT NULL COMMENT 'bucket',
    `name` varchar(255) COLLATE utf8mb4_general_ci NOT NULL COMMENT 'file name',
    `version` varchar(255) COLLATE utf8mb4_general_ci NOT NULL COMMENT 'version',
    `meta` varchar(255) COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'metadata',
    `length` bigint NOT NULL COMMENT 'length',
    `status` int NOT NULL COMMENT 'status',
    `data` longblob NOT NULL COMMENT 'file content',
    `extra` varchar(255) COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'extra info',
    `gmt_create` datetime NOT NULL COMMENT 'created time',
    `gmt_modified` datetime DEFAULT NULL COMMENT 'modified time',
    PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci;

Summary

  • On the worker side, OmsServerLogger's process method calls OmsLogHandler's submitLog, which first checks whether logQueue has exceeded REPORT_SIZE (1024) and, if so, starts a LogSubmitter thread; the log entry is then offered to logQueue. LogSubmitter keeps polling entries from logQueue and reports them to the server via TransportUtils.reportLogs each time a batch reaches BATCH_SIZE (20).
  • On the server side, AbWorkerRequestHandler's processWorkerLogReport receives the WorkerLogReportReq and runs processWorkerLogReport0, which calls instanceLogService.submitLogs. submitLogs is asynchronous: it converts each InstanceLogContent into a LocalInstanceLogDO and persists them via localInstanceLogRepository.saveAll. The server has two data sources, MySQL and H2, and the local instance logs go to the LOCAL_INSTANCE_LOG table in H2.
  • When a task instance finishes, the server's InstanceManager.processFinishedInstance schedules instanceLogService.sync(instanceId) with a 60-second delay. sync first persists the logs to a local stable log file via genStableLogFile, then pushes that file to dFsService (with oss, gridfs, minio, and mysql implementations, chosen by the server configuration; with mysql the file is stored in the powerjob_files table), and finally clears the instance's rows from H2's LOCAL_INSTANCE_LOG table via localInstanceLogRepository.deleteByInstanceId.
