A First Look at Spark SQL's Catalog Schema Caching

The conclusion up front: the table schema cached in Spark SQL's catalog is, in general, not refreshed automatically when the underlying source changes.

The experiment:

  1. In PostgreSQL, create a table t1 with a single column c1 int, and insert one row.
  2. Register the table in Spark SQL and query it:
    1. ./bin/spark-sql --driver-class-path postgresql-42.7.1.jar --jars postgresql-42.7.1.jar
    2. spark-sql (default)> create table c1v using org.apache.spark.sql.jdbc options (url "jdbc:postgresql://localhost:5432/postgres", dbtable "public.t1", user 'postgres', password 'postgres');
    3. spark-sql (default)> select * from c1v;
    4. Result: one row.
  3. In PostgreSQL, add a column c2 to t1 and insert a row (2, 2).
  4. Query again in Spark SQL:
    1. spark-sql (default)> select * from c1v;
    2. Result: two rows, but no c2 column.
  5. In PostgreSQL, drop column c1.
  6. Query again in Spark SQL:
    1. spark-sql (default)> select * from c1v;
    2. Result: error, column c1 does not exist.
  7. The PostgreSQL query log shows the same thing: Spark keeps sending SELECT "c1" FROM public.wkt (wkt being the actual name of the experiment table, written as t1 above), i.e. Spark is entirely unaware of the schema changes.
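The failure mode in the steps above is not JDBC-specific: any layer that snapshots a source table's column list once at registration time and keeps building queries from that snapshot will behave the same way. A minimal stand-in sketch in Python, using an in-memory sqlite3 database instead of PostgreSQL (table and column names chosen for illustration only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (c1 INTEGER)")
conn.execute("INSERT INTO t1 VALUES (1)")

# "Register" the table: snapshot the column list once, like a catalog would.
cached_columns = [row[1] for row in conn.execute("PRAGMA table_info(t1)")]
assert cached_columns == ["c1"]

def query_with_cached_schema():
    # Queries are always built from the cached snapshot, never re-fetched.
    sql = f'SELECT {", ".join(cached_columns)} FROM t1'
    return conn.execute(sql).fetchall()

print(query_with_cached_schema())  # [(1,)]

# The source evolves: c1 goes away, c2 appears
# (recreate the table to keep the sketch portable across sqlite versions).
conn.execute("DROP TABLE t1")
conn.execute("CREATE TABLE t1 (c2 INTEGER)")
conn.execute("INSERT INTO t1 VALUES (2)")

try:
    query_with_cached_schema()
except sqlite3.OperationalError as e:
    print(e)  # no such column: c1
```

Like Spark in the experiment, the query path fails only when the generated SQL reaches the database, because nothing in the path ever re-reads the live schema.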

Spark reports the following error:

spark-sql (default)> select * from c1v;
24/01/09 20:18:04 ERROR Executor: Exception in task 0.0 in stage 5.0 (TID 5)
org.postgresql.util.PSQLException: ERROR: column "c1" does not exist
  Hint: Perhaps you meant to reference the column "wkt.c2".
  Position: 8
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2712)
	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2400)
	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:367)
	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:498)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:415)
	at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:190)
	at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:134)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute(JDBCRDD.scala:320)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
	at org.apache.spark.scheduler.Task.run(Task.scala:139)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
24/01/09 20:18:04 WARN TaskSetManager: Lost task 0.0 in stage 5.0 (TID 5) (172.18.203.110 executor driver): org.postgresql.util.PSQLException: ERROR: column "c1" does not exist
	...
24/01/09 20:18:04 ERROR TaskSetManager: Task 0 in stage 5.0 failed 1 times; aborting job
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5.0 failed 1 times, most recent failure: Lost task 0.0 in stage 5.0 (TID 5) (172.18.203.110 executor driver): org.postgresql.util.PSQLException: ERROR: column "c1" does not exist
Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2785)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2720)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1206)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:971)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2263)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:1018)
	at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:448)
	at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:69)
	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:415)
	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:307)
	...
Caused by: org.postgresql.util.PSQLException: ERROR: column "c1" does not exist
	...

Digging further into the Spark error, the failure point is org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute (JDBCRDD.scala:320). The code shows that Spark assembles the SQL statement directly from the table schema cached in the catalog and pushes it down to the source; the fact that column c1 no longer exists is only discovered when PostgreSQL actually executes the statement.

    var builder = dialect.getJdbcSQLQueryBuilder(options)
      .withColumns(columns)
      .withPredicates(predicates, part)
      .withSortOrders(sortOrders)
      .withLimit(limit)
      .withOffset(offset)
    groupByColumns.foreach { groupByKeys =>
      builder = builder.withGroupByColumns(groupByKeys)
    }
    sample.foreach { tableSampleInfo =>
      builder = builder.withTableSample(tableSampleInfo)
    }
    val sqlText = builder.build()
    stmt = conn.prepareStatement(sqlText,
      ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)
    stmt.setFetchSize(options.fetchSize)
    stmt.setQueryTimeout(options.queryTimeout)
    rs = stmt.executeQuery()
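The Scala above can be reduced to the essence of the builder pattern it uses: the SELECT text is assembled entirely from driver-side state (the cached column list, predicates, limits), so nothing on this path can notice a schema change at the source. A hypothetical minimal builder in Python, with names loosely modeled on JdbcSQLQueryBuilder (this is not Spark's actual API):

```python
class SQLQueryBuilder:
    """Hypothetical stand-in for Spark's JdbcSQLQueryBuilder: every input
    comes from driver-side state; the source database is never consulted."""

    def __init__(self, table):
        self.table = table
        self.columns = ["*"]
        self.predicates = []
        self.limit = None

    def with_columns(self, columns):
        self.columns = columns  # typically the catalog's cached schema
        return self

    def with_predicates(self, predicates):
        self.predicates = predicates
        return self

    def with_limit(self, limit):
        self.limit = limit
        return self

    def build(self):
        sql = f'SELECT {", ".join(self.columns)} FROM {self.table}'
        if self.predicates:
            sql += " WHERE " + " AND ".join(self.predicates)
        if self.limit is not None:
            sql += f" LIMIT {self.limit}"
        return sql

# The column list comes from the catalog snapshot taken at registration
# time, so the generated text still references the dropped column.
cached_columns = ['"c1"']
print(SQLQueryBuilder("public.wkt").with_columns(cached_columns).build())
# SELECT "c1" FROM public.wkt
```

This reproduces exactly the statement seen in the PostgreSQL query log: a stale column reference that only the database itself can reject.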

