Spark Streaming Job Creation, Scheduling, and Submission


The previous post analyzed, from the source code, how data received by the Receiver is handed over to the BlockManager; the whole data-receiving pipeline is now up and running. Let us return to the post that analyzes JobScheduler.

// JobScheduler.scala line 62
def start(): Unit = synchronized {
  if (eventLoop != null) return // scheduler has already been started

  logDebug("Starting JobScheduler")
  eventLoop = new EventLoop[JobSchedulerEvent]("JobScheduler") {
    override protected def onReceive(event: JobSchedulerEvent): Unit = processEvent(event)

    override protected def onError(e: Throwable): Unit = reportError("Error in job scheduler", e)
  }
  eventLoop.start()

  // attach rate controllers of input streams to receive batch completion updates
  for {
    inputDStream <- ssc.graph.getInputStreams
    rateController <- inputDStream.rateController
  } ssc.addStreamingListener(rateController)

  listenerBus.start(ssc.sparkContext)
  receiverTracker = new ReceiverTracker(ssc)
  inputInfoTracker = new InputInfoTracker(ssc)
  receiverTracker.start()
  jobGenerator.start()
  logInfo("Started JobScheduler")
}

The last several posts all branched out from receiverTracker.start(). With that covered, we continue with the next step:

// JobScheduler.scala line 83
jobGenerator.start()

The instantiation of jobGenerator was analyzed earlier. Digging into its start method, we find that it:

  1. Instantiates an eventLoop. Note that this eventLoop is not the one in JobScheduler; it is parameterized with a different event type.
  2. Calls EventLoop.start.
  3. Since no checkpoint is present on the first start, calls startFirstTime.
// JobGenerator.scala line 78
/** Start generation of jobs */
def start(): Unit = synchronized {
  if (eventLoop != null) return // generator has already been started

  // Call checkpointWriter here to initialize it before eventLoop uses it to avoid a deadlock.
  // See SPARK-10125
  checkpointWriter

  eventLoop = new EventLoop[JobGeneratorEvent]("JobGenerator") {
    override protected def onReceive(event: JobGeneratorEvent): Unit = processEvent(event)

    override protected def onError(e: Throwable): Unit = {
      jobScheduler.reportError("Error in job generator", e)
    }
  }
  eventLoop.start()

  if (ssc.isCheckpointPresent) {
    restart()
  } else {
    startFirstTime()
  }
}
// JobGenerator.scala line 189
/** Starts the generator for the first time */
private def startFirstTime() {
  val startTime = new Time(timer.getStartTime())
  graph.start(startTime - graph.batchDuration)
  timer.start(startTime.milliseconds)
  logInfo("Started JobGenerator at " + startTime)
}
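A small detail worth noting: timer.getStartTime() aligns the first batch to the next multiple of the batch duration, so every batch falls on a period boundary. Paraphrased (and slightly simplified) from RecurringTimer in the 1.6 source:

// RecurringTimer.scala -- how the first batch time is chosen (simplified)
def getStartTime(): Long = {
  // round the current clock time up to the next multiple of the period (the batch duration)
  (math.floor(clock.getTimeMillis().toDouble / period) + 1).toLong * period
}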

DStreamGraph.start is then invoked, which:

  1. Initializes all outputStreams, setting their first execution time; the DStreams they depend on are initialized as well.
  2. If a remember duration is set, calls remember on all outputStreams, again propagated to their dependencies.
  3. Validates before starting, mainly checking that the checkpoint settings and the various Durations are consistent.
  4. Starts all inputStreams. Scanning InputDStream and all of its subclasses in version 1.6.0, their start methods do nothing; as the earlier posts showed, the inputStreams are already managed by ReceiverTracker.
// DStreamGraph.scala line 39
def start(time: Time) {
  this.synchronized {
    require(zeroTime == null, "DStream graph computation already started")
    zeroTime = time
    startTime = time
    outputStreams.foreach(_.initialize(zeroTime))
    outputStreams.foreach(_.remember(rememberDuration))
    outputStreams.foreach(_.validateAtStart)
    inputStreams.par.foreach(_.start())
  }
}

So far only some simple initialization has been done; no data is actually being processed yet.

Back to JobGenerator. At this point the recurring timer is started:

// JobGenerator.scala line 193
timer.start(startTime.milliseconds)

The recurring timer starts. Does this look familiar? Haven't we seen this recurring timer somewhere before?

Indeed: BlockGenerator.scala lines 105 and 109 create two threads, one of which is a recurring timer that periodically moves received data into the queue of blocks waiting to be pushed.
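As a reminder, those two threads in BlockGenerator look roughly like this (paraphrased from the 1.6 source, details trimmed):

// BlockGenerator.scala -- the two threads mentioned above (simplified)
private val blockIntervalTimer =
  new RecurringTimer(clock, blockIntervalMs, updateCurrentBuffer, "BlockGenerator") // rolls the current buffer into blocksForPushing
private val blockPushingThread = new Thread() {
  override def run() { keepPushingBlocks() } // drains blocksForPushing and hands blocks to the listener
}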

// RecurringTimer.scala line 59
def start(startTime: Long): Long = synchronized {
  nextTime = startTime
  thread.start()
  logInfo("Started timer for " + name + " at time " + nextTime)
  nextTime
}

The actual work is done by the function passed in at construction time: longTime => eventLoop.post(GenerateJobs(new Time(longTime))).

Its input is a Long, and its body is eventLoop.post(GenerateJobs(new Time(longTime))).

// JobGenerator.scala line 58
private val timer = new RecurringTimer(clock, ssc.graph.batchDuration.milliseconds,
  longTime => eventLoop.post(GenerateJobs(new Time(longTime))), "JobGenerator")

As long as the timer has not been stopped, it keeps looping:

  1. At construction, the function above is passed in; callback: (Long) => Unit is bound to longTime => eventLoop.post(GenerateJobs(new Time(longTime))).
  2. On start, thread.start() is called; the thread's run method executes loop.
  3. loop calls triggerActionForNextInterval.
  4. triggerActionForNextInterval invokes the callback supplied at construction, i.e. longTime => eventLoop.post(GenerateJobs(new Time(longTime))).
private[streaming]
class RecurringTimer(clock: Clock, period: Long, callback: (Long) => Unit, name: String)
  extends Logging {

  // RecurringTimer.scala line 27
  private val thread = new Thread("RecurringTimer - " + name) {
    setDaemon(true)
    override def run() { loop }
  }

  // RecurringTimer.scala line 56
  /**
   * Start at the given start time.
   */
  def start(startTime: Long): Long = synchronized {
    nextTime = startTime
    thread.start()
    logInfo("Started timer for " + name + " at time " + nextTime)
    nextTime
  }

  // RecurringTimer.scala line 92
  private def triggerActionForNextInterval(): Unit = {
    clock.waitTillTime(nextTime)
    callback(nextTime)
    prevTime = nextTime
    nextTime += period
    logDebug("Callback for " + name + " called at time " + prevTime)
  }

  // RecurringTimer.scala line 100
  /**
   * Repeatedly call the callback every interval.
   */
  private def loop() {
    try {
      while (!stopped) {
        triggerActionForNextInterval()
      }
      triggerActionForNextInterval()
    } catch {
      case e: InterruptedException =>
    }
  }
// ... some code omitted
}

So a GenerateJobs event is posted on every batch interval; eventLoop.post simply puts the event into eventQueue:

// EventLoop.scala line 102
def post(event: E): Unit = {
  eventQueue.put(event)
}

Meanwhile, the other key member of this EventLoop, eventThread, keeps taking events off the queue and calls onReceive with each event as its argument. This onReceive was overridden when the EventLoop was instantiated:

// JobGenerator.scala line 86
eventLoop = new EventLoop[JobGeneratorEvent]("JobGenerator") {
  override protected def onReceive(event: JobGeneratorEvent): Unit = processEvent(event)

  override protected def onError(e: Throwable): Unit = {
    jobScheduler.reportError("Error in job generator", e)
  }
}
eventLoop.start()
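The eventThread itself lives inside EventLoop and is not shown in this post; simplified from the Spark source (imports and error handling trimmed), it looks roughly like this:

// EventLoop.scala -- the consuming thread, simplified sketch
private val eventQueue: BlockingQueue[E] = new LinkedBlockingDeque[E]()
private val stopped = new AtomicBoolean(false)

private val eventThread = new Thread(name) {
  setDaemon(true)
  override def run(): Unit = {
    while (!stopped.get) {
      val event = eventQueue.take()   // blocks until an event is posted
      try {
        onReceive(event)              // the override defined by JobGenerator runs here
      } catch {
        case NonFatal(e) => onError(e)
      }
    }
  }
}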

And onReceive delegates to processEvent:

// JobGenerator.scala line 177
/** Processes all events */
private def processEvent(event: JobGeneratorEvent) {
  logDebug("Got event " + event)
  event match {
    case GenerateJobs(time) => generateJobs(time)
    // other case classes omitted
  }
}

A GenerateJobs event is matched and handled by generateJobs(time: Time), which:

  1. Fetches all the Blocks that ReceiverTracker has collected for the current batch time and allocates them to the batch; if the WAL is enabled, the allocation is also written to the WAL.
  2. Has DStreamGraph generate the jobs.
  3. Submits the jobs.
  4. If checkpointing is configured, performs the checkpoint (a DoCheckpoint event is posted).
// JobGenerator.scala line 240
/** Generate jobs and perform checkpoint for the given `time`. */
private def generateJobs(time: Time) {
  // Set the SparkEnv in this thread, so that job generation code can access the environment
  // Example: BlockRDDs are created in this thread, and it needs to access BlockManager
  // Update: This is probably redundant after threadlocal stuff in SparkEnv has been removed.
  SparkEnv.set(ssc.env)
  Try {
    jobScheduler.receiverTracker.allocateBlocksToBatch(time) // allocate received blocks to batch
    graph.generateJobs(time) // generate jobs using allocated block
  } match {
    case Success(jobs) =>
      val streamIdToInputInfos = jobScheduler.inputInfoTracker.getInfo(time)
      jobScheduler.submitJobSet(JobSet(time, jobs, streamIdToInputInfos))
    case Failure(e) =>
      jobScheduler.reportError("Error generating jobs for time " + time, e)
  }
  eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = false))
}
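A side note on step 1: jobScheduler.receiverTracker.allocateBlocksToBatch(time) delegates to ReceivedBlockTracker, which drains each stream's queue of unallocated blocks and binds them to this batch time. A simplified paraphrase of the 1.6 source (writeToLog only actually writes when the WAL is enabled):

// ReceivedBlockTracker.scala -- what allocateBlocksToBatch does, simplified
def allocateBlocksToBatch(batchTime: Time): Unit = synchronized {
  if (lastAllocatedBatchTime == null || batchTime > lastAllocatedBatchTime) {
    // drain every stream's queue of unallocated blocks and bind them to this batch time
    val streamIdToBlocks = streamIds.map { streamId =>
      (streamId, getReceivedBlockQueue(streamId).dequeueAll(x => true))
    }.toMap
    val allocatedBlocks = AllocatedBlocks(streamIdToBlocks)
    if (writeToLog(BatchAllocationEvent(batchTime, allocatedBlocks))) {
      timeToAllocatedBlocks.put(batchTime, allocatedBlocks)
      lastAllocatedBatchTime = batchTime
    }
  }
}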

The generateJobs code above is not particularly easy to read at first. At a glance it looks like try {} catch { case ... }, but a closer look shows it is actually Try {} match {}.

Tracing the code, Try (with a capital T) is a companion object whose apply takes a by-name parameter and returns a Try instance. The code in scala.util.Try is as follows:

// scala.util.Try.scala line 155
object Try {
  /** Constructs a `Try` using the by-name parameter.  This
   * method will ensure any non-fatal exception is caught and a
   * `Failure` object is returned.
   */
  def apply[T](r: => T): Try[T] =
    try Success(r) catch {
      case NonFatal(e) => Failure(e)
    }
}

Try has two subclasses, both case classes: Success and Failure.
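A tiny, self-contained example of the same Try {} match {} pattern (not from the Spark source, just an illustration):

import scala.util.{Try, Success, Failure}

Try { "42".toInt } match {
  case Success(n) => println("parsed: " + n)   // prints: parsed: 42
  case Failure(e) => println("failed: " + e)   // reached if the block threw a non-fatal exception
}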

Back at the call site, the last expression evaluated inside the Try block is graph.generateJobs(time). Let's follow it:

It returns the jobs produced by calling outputStream.generateJob(time) on each output stream:

// DStreamGraph.scala line 111
def generateJobs(time: Time): Seq[Job] = {
  logDebug("Generating jobs for time " + time)
  val jobs = this.synchronized {
    outputStreams.flatMap { outputStream =>
      val jobOption = outputStream.generateJob(time)
      jobOption.foreach(_.setCallSite(outputStream.creationSite))
      jobOption
    }
  }
  logDebug("Generated " + jobs.length + " jobs for time " + time)
  jobs
}

As established earlier, the outputStreams are in fact all ForEachDStream instances. ForEachDStream overrides generateJob:

  1. parent.getOrCompute(time) returns an Option[RDD[T]].
  2. If an RDD is produced, the result is Some(new Job(time, jobFunc)).
// ForEachDStream.scala line 46
override def generateJob(time: Time): Option[Job] = {
  parent.getOrCompute(time) match {
    case Some(rdd) =>
      val jobFunc = () => createRDDWithLocalProperties(time, displayInnerRDDOps) {
        foreachFunc(rdd, time)
      }
      Some(new Job(time, jobFunc))
    case None => None
  }
}

So what is the parent of this ForEachDStream? Let's look at our example:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Durations, StreamingContext}

object StreamingWordCountSelfScala {
  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setMaster("spark://master:7077").setAppName("StreamingWordCountSelfScala")
    val ssc = new StreamingContext(sparkConf, Durations.seconds(5)) // harvest data every 5 seconds
    val lines = ssc.socketTextStream("localhost", 9999) // listen on local socket port 9999
    val words = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _) // flatMap, then reduce
    words.print() // print the results
    ssc.start() // start the streaming context
    ssc.awaitTermination()
    ssc.stop(true)
  }
}

As described earlier, the DStream dependency chain in this example is SocketInputDStream << FlatMappedDStream << MappedDStream << ShuffledDStream << ForEachDStream; the mapping from each operator to the DStream it creates is sketched right below.
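To make the chain concrete, this is roughly which DStream each operator in the example creates (type parameters annotated from the 1.6.0 source):

// Which DStream each operator in the word-count example produces
val lines  = ssc.socketTextStream("localhost", 9999)   // SocketInputDStream[String]
val words  = lines.flatMap(_.split(" "))               // FlatMappedDStream[String, String]
val pairs  = words.map((_, 1))                         // MappedDStream[String, (String, Int)]
val counts = pairs.reduceByKey(_ + _)                  // ShuffledDStream[String, Int, Int]
counts.print()                                         // registers a ForEachDStream as an outputStream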

Scanning DStream and all of its subclasses, only DStream itself defines getOrCompute and no subclass overrides it. So what runs here is ShuffledDStream's inherited getOrCompute.

In the usual case the RDD for this batch does not exist yet, so the orElse block is executed:

// DStream.scala line 338
/**
 * Get the RDD corresponding to the given time; either retrieve it from cache
 * or compute-and-cache it.
 */
private[streaming] final def getOrCompute(time: Time): Option[RDD[T]] = {
  // If RDD was already generated, then retrieve it from HashMap,
  // or else compute the RDD
  generatedRDDs.get(time).orElse {
    // Compute the RDD if time is valid (e.g. correct time in a sliding window)
    // of RDD generation, else generate nothing.
    if (isTimeValid(time)) {

      val rddOption = createRDDWithLocalProperties(time, displayInnerRDDOps = false) {
        // Disable checks for existing output directories in jobs launched by the streaming
        // scheduler, since we may need to write output to an existing directory during checkpoint
        // recovery; see SPARK-4835 for more details. We need to have this call here because
        // compute() might cause Spark jobs to be launched.
        PairRDDFunctions.disableOutputSpecValidation.withValue(true) {
          compute(time)  // line 352
        }
      }

      rddOption.foreach { case newRDD =>
        // Register the generated RDD for caching and checkpointing
        if (storageLevel != StorageLevel.NONE) {
          newRDD.persist(storageLevel)
          logDebug(s"Persisting RDD ${newRDD.id} for time $time to $storageLevel")
        }
        if (checkpointDuration != null && (time - zeroTime).isMultipleOf(checkpointDuration)) {
          newRDD.checkpoint()
          logInfo(s"Marking RDD ${newRDD.id} for time $time for checkpointing")
        }
        generatedRDDs.put(time, newRDD)
      }
      rddOption
    } else {
      None
    }
  }
}

ShuffledDStream.compute in turn calls parent.getOrCompute:

// ShuffledDStream.scala line 40
override def compute(validTime: Time): Option[RDD[(K, C)]] = {
  parent.getOrCompute(validTime) match {
    case Some(rdd) => Some(rdd.combineByKey[C](
      createCombiner, mergeValue, mergeCombiner, partitioner, mapSideCombine))
    case None => None
  }
}

MappedDStream.compute again calls its parent's getOrCompute, which in turn calls compute, and so on up the chain:

// MappedDStream.scala line 34
override def compute(validTime: Time): Option[RDD[U]] = {
  parent.getOrCompute(validTime).map(_.map[U](mapFunc))
}

FlatMappedDStream.compute does the same: it calls its parent's getOrCompute, which again calls compute:

// FlatMappedDStream.scala line 34
override def compute(validTime: Time): Option[RDD[U]] = {
  parent.getOrCompute(validTime).map(_.flatMap(flatMapFunc))
}

This continues until the DStream is the SocketInputDStream, i.e. the input stream, whose compute is inherited from its parent class ReceiverInputDStream.

Ignore the if branch for now and go straight to the else block, which calls createBlockRDD:

// ReceiverInputDStream.scala line 69
override def compute(validTime: Time): Option[RDD[T]] = {
  val blockRDD = {

    if (validTime < graph.startTime) {
      // If this is called for any time before the start time of the context,
      // then this returns an empty RDD. This may happen when recovering from a
      // driver failure without any write ahead log to recover pre-failure data.
      new BlockRDD[T](ssc.sc, Array.empty)
    } else {
      // Otherwise, ask the tracker for all the blocks that have been allocated to this stream
      // for this batch
      val receiverTracker = ssc.scheduler.receiverTracker
      val blockInfos = receiverTracker.getBlocksOfBatch(validTime).getOrElse(id, Seq.empty)

      // Register the input blocks information into InputInfoTracker
      val inputInfo = StreamInputInfo(id, blockInfos.flatMap(_.numRecords).sum)
      ssc.scheduler.inputInfoTracker.reportInfo(validTime, inputInfo)

      // Create the BlockRDD
      createBlockRDD(validTime, blockInfos)
    }
  }
  Some(blockRDD)
}
At new BlockRDD[T](ssc.sc, validBlockIds) (line 127 below), the RDD is finally instantiated:
// ReceiverInputDStream.scala line 94
private[streaming] def createBlockRDD(time: Time, blockInfos: Seq[ReceivedBlockInfo]): RDD[T] = {

  if (blockInfos.nonEmpty) {
    val blockIds = blockInfos.map { _.blockId.asInstanceOf[BlockId] }.toArray

    // Are WAL record handles present with all the blocks
    val areWALRecordHandlesPresent = blockInfos.forall { _.walRecordHandleOption.nonEmpty }

    if (areWALRecordHandlesPresent) {
      // If all the blocks have WAL record handle, then create a WALBackedBlockRDD
      val isBlockIdValid = blockInfos.map { _.isBlockIdValid() }.toArray
      val walRecordHandles = blockInfos.map { _.walRecordHandleOption.get }.toArray
      new WriteAheadLogBackedBlockRDD[T](
        ssc.sparkContext, blockIds, walRecordHandles, isBlockIdValid)
    } else {
      // Else, create a BlockRDD. However, if there are some blocks with WAL info but not
      // others then that is unexpected and log a warning accordingly.
      if (blockInfos.find(_.walRecordHandleOption.nonEmpty).nonEmpty) {
        if (WriteAheadLogUtils.enableReceiverLog(ssc.conf)) {
          logError("Some blocks do not have Write Ahead Log information; " +
            "this is unexpected and data may not be recoverable after driver failures")
        } else {
          logWarning("Some blocks have Write Ahead Log information; this is unexpected")
        }
      }
      val validBlockIds = blockIds.filter { id =>
        ssc.sparkContext.env.blockManager.master.contains(id)
      }
      if (validBlockIds.size != blockIds.size) {
        logWarning("Some blocks could not be recovered as they were not found in memory. " +
          "To prevent such data loss, enabled Write Ahead Log (see programming guide " +
          "for more details.")
      }
      new BlockRDD[T](ssc.sc, validBlockIds) // line 127
    }
  } else {
    // If no block is ready now, creating WriteAheadLogBackedBlockRDD or BlockRDD
    // according to the configuration
    if (WriteAheadLogUtils.enableReceiverLog(ssc.conf)) {
      new WriteAheadLogBackedBlockRDD[T](
        ssc.sparkContext, Array.empty, Array.empty, Array.empty)
    } else {
      new BlockRDD[T](ssc.sc, Array.empty)
    }
  }
}

This BlockRDD is a subclass of Spark Core's RDD and has no parent RDDs it depends on. At this point, instantiation of the RDD is complete.

// BlockRDD.scala line 30
private[spark]
class BlockRDD[T: ClassTag](sc: SparkContext, @transient val blockIds: Array[BlockId])
  extends RDD[T](sc, Nil)

// RDD.scala line 74
abstract class RDD[T: ClassTag](
    @transient private var _sc: SparkContext,
    @transient private var deps: Seq[Dependency[_]]
  ) extends Serializable with Logging

Unwinding the whole chain (and dropping the Option.map wrappers), the RDD lineage that is ultimately reconstructed is:

new BlockRDD[T](ssc.sc, validBlockIds).flatMap(flatMapFunc).map(mapFunc).combineByKey[C](createCombiner, mergeValue, mergeCombiner, partitioner, mapSideCombine)

In our example, this becomes:

new BlockRDD[String](ssc.sc, validBlockIds).flatMap(_.split(" ")).map((_, 1)).combineByKey[Int](v => v, (v1, v2) => v1 + v2, (c1, c2) => c1 + c2, partitioner, mapSideCombine = true)

And the final print job boils down to the function:

() => foreachFunc(new BlockRDD[String](ssc.sc, validBlockIds).flatMap(_.split(" ")).map((_, 1)).combineByKey[Int](v => v, (v1, v2) => v1 + v2, (c1, c2) => c1 + c2, partitioner, mapSideCombine = true), time)

where foreachFunc is the function defined at DStream.scala line 766, i.e. the one built by print().
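For reference, the foreachFunc that print() constructs looks roughly like this in the 1.6 source (lightly paraphrased; num defaults to 10):

// DStream.scala -- the foreachFunc built by print(num), simplified
def foreachFunc: (RDD[T], Time) => Unit = {
  (rdd: RDD[T], time: Time) => {
    val firstNum = rdd.take(num + 1)   // this take() is what actually triggers a Spark job on the cluster
    println("-------------------------------------------")
    println("Time: " + time)
    println("-------------------------------------------")
    firstNum.take(num).foreach(println)
    if (firstNum.length > num) println("...")
    println()
  }
}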

At this point the RDD has been fully instantiated through the DStream chain. Looking back, it should now be easier to see in what sense a DStream is a template for RDDs.

But we are not done yet: back in ForEachDStream.scala line 46, the function above is passed as a constructor argument to a Job; a trimmed-down sketch of that Job class follows.
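The Job class itself is a thin wrapper around that function; roughly (a simplified sketch -- the real class also carries ids, call sites, and other bookkeeping):

// Job.scala (spark streaming) -- simplified sketch of what ForEachDStream hands to the scheduler
private[streaming] class Job(val time: Time, func: () => _) {
  private var _result: Try[_] = null

  def run() {
    _result = Try(func())   // the jobFunc built above executes here, on the JobScheduler's job executor pool
  }

  def result: Try[_] = {
    if (_result == null) {
      throw new IllegalStateException("Cannot access result before job finishes")
    }
    _result
  }
}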

 

-------------- Divider --------------

As a supplement, here is the flow chart of Job creation, taken from a student blog of the customized-version course, with minor modifications.

Another supplement: the flow chart of how the RDD DAG is built by tracing the lineage back from the OutputDStream, also from a student blog of the customized-version course.

And a flow chart for this specific example of building the RDD DAG by tracing the lineage back from the OutputDStream, likewise from a student blog of the customized-version course.

The next post will analyze Job submission from the source code. Stay tuned.

 

Reposted from: https://my.oschina.net/corleone/blog/672999
