Spark Source Code Analysis: Executor Startup and Task Submission

Task Submission Flow

Overview

The previous posts covered the startup flows of Spark's Master and Worker. The next thing to run is the Executor process on the Worker, so this post continues with the full Executor startup and task submission flow.

Spark-submit

A job is submitted to the cluster with spark-submit, which launches the application's main class through a startup script. Taking WordCount as the example:

    spark-submit --class cn.apache.spark.WordCount

  1. bin/spark-class -> org.apache.spark.deploy.SparkSubmit; the main method of this class is invoked
  2. The doRunMain method receives the main class of the user-defined Spark application (class cn.apache.spark.WordCount)
  3. A reference to the class is obtained via reflection: mainClass = Utils.classForName(childMainClass)
  4. The main method of class cn.apache.spark.WordCount is then invoked via reflection

Let's look at SparkSubmit's main method:

    def main(args: Array[String]): Unit = {
      val appArgs = new SparkSubmitArguments(args)
      if (appArgs.verbose) {
        printStream.println(appArgs)
      }
      // Match the action type
      appArgs.action match {
        case SparkSubmitAction.SUBMIT => submit(appArgs)
        case SparkSubmitAction.KILL => kill(appArgs)
        case SparkSubmitAction.REQUEST_STATUS => requestStatus(appArgs)
      }
    }

The action here is SUBMIT, so the submit method is called:

    private[spark] def submit(args: SparkSubmitArguments): Unit = {
      val (childArgs, childClasspath, sysProps, childMainClass) = prepareSubmitEnvironment(args)

      def doRunMain(): Unit = {
        // ...
        try {
          proxyUser.doAs(new PrivilegedExceptionAction[Unit]() {
            override def run(): Unit = {
              // childMainClass is the fully qualified name of your own App's main class
              runMain(childArgs, childClasspath, sysProps, childMainClass, args.verbose)
            }
          })
        } catch {
          // ...
        }
      }
      // ...
      // Call the doRunMain defined above
      doRunMain()
    }

Inside submit, doRunMain() is called, which in turn calls runMain. Let's look at runMain:

    private def runMain(...) {
      // ...
      try {
        // Load the class via reflection
        mainClass = Class.forName(childMainClass, true, loader)
      } catch {
        // ...
      }
      // Get a handle to the main method via reflection
      val mainMethod = mainClass.getMethod("main", new Array[String](0).getClass)
      if (!Modifier.isStatic(mainMethod.getModifiers)) {
        throw new IllegalStateException("The main method in the given main class must be static")
      }
      // ...
      try {
        // Invoke the App's main method
        mainMethod.invoke(null, childArgs.toArray)
      } catch {
        case t: Throwable =>
          throw findCause(t)
      }
    }

The key flow is right here: as the comments above show, our application's main method is invoked via reflection, and that completes the broad picture of job submission.
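To make the reflection step concrete, here is a minimal, self-contained sketch of the same launch pattern. This is not Spark's actual code; the class name is illustrative and assumed to be on the classpath:

    import java.lang.reflect.Modifier

    object ReflectiveLauncher {
      def main(args: Array[String]): Unit = {
        // Load the user's main class by name, as SparkSubmit does with childMainClass
        val mainClass = Class.forName("cn.apache.spark.WordCount")
        // Look up the static main(String[]) method
        val mainMethod = mainClass.getMethod("main", classOf[Array[String]])
        if (!Modifier.isStatic(mainMethod.getModifiers)) {
          throw new IllegalStateException("The main method in the given main class must be static")
        }
        // The first argument is null because the method is static
        mainMethod.invoke(null, args)
      }
    }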

SparkSubmit Sequence Diagram

Executor Startup Flow

After SparkSubmit invokes our program's main method via reflection, our own code starts executing. Every Spark program needs to create a SparkContext object, so that is where we begin.

The SparkContext constructor is very long; the parts to focus on are as follows:

    class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationClient {
      // ...
      private[spark] def createSparkEnv(
          conf: SparkConf,
          isLocal: Boolean,
          listenerBus: LiveListenerBus): SparkEnv = {
        // Create the driver-side environment via SparkEnv.createDriverEnv
        SparkEnv.createDriverEnv(conf, isLocal, listenerBus)
      }

      // createSparkEnv is called here and returns a SparkEnv object, which holds many
      // important properties, the most important being the ActorSystem
      private[spark] val env = createSparkEnv(conf, isLocal, listenerBus)
      SparkEnv.set(env)

      // Create and start the scheduler
      private[spark] var (schedulerBackend, taskScheduler) =
        SparkContext.createTaskScheduler(this, master)

      // Create the DAGScheduler
      dagScheduler = new DAGScheduler(this)

      // Start the TaskScheduler
      taskScheduler.start()
      // ...
    }

The SparkContext constructor mainly does three things: it creates a SparkEnv, a taskScheduler, and a dagScheduler. Let's first look at what createTaskScheduler does:

    // Create a TaskScheduler for the given master URL
    private def createTaskScheduler(...) = {
      // ...
      // Match the URL to choose the deployment mode
      master match {
        // ...
        // Spark Standalone mode
        case SPARK_REGEX(sparkUrl) =>
          // First create the TaskScheduler
          val scheduler = new TaskSchedulerImpl(sc)
          val masterUrls = sparkUrl.split(",").map("spark://" + _)
          // Very important
          val backend = new SparkDeploySchedulerBackend(scheduler, sc, masterUrls)
          // Initialize the scheduling pool, FIFO by default
          scheduler.initialize(backend)
          (backend, scheduler)
        // ...
      }
    }

Matching the master URL selects Standalone mode, and then a SparkDeploySchedulerBackend and a TaskSchedulerImpl are created. These two objects are the core of task-scheduling startup. Finally, scheduler.initialize(backend) is called to initialize the scheduler.
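The mode selection itself is ordinary Scala regex pattern matching. Below is a minimal standalone sketch of the idea, using a simplified version of Spark's SPARK_REGEX; the URLs are placeholders:

    object MasterUrlMatcher {
      // Simplified form of Spark's SPARK_REGEX extractor
      val SPARK_REGEX = """spark://(.*)""".r

      def main(args: Array[String]): Unit = {
        val master = "spark://host1:7077,host2:7077"
        master match {
          case SPARK_REGEX(sparkUrl) =>
            // A comma-separated list yields one URL per master in HA setups
            val masterUrls = sparkUrl.split(",").map("spark://" + _)
            masterUrls.foreach(println)
          case _ =>
            println(s"Unsupported master URL: $master")
        }
      }
    }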

With the TaskScheduler initialized, we return to the SparkContext constructor, which goes on to call taskScheduler.start() to start the TaskScheduler. Let's look at the start method:

    override def start() {
      // Call the backend implementation's start method
      backend.start()
      if (!isLocal && conf.getBoolean("spark.speculation", false)) {
        logInfo("Starting speculative execution thread")
        import sc.env.actorSystem.dispatcher
        sc.env.actorSystem.scheduler.schedule(SPECULATION_INTERVAL milliseconds,
            SPECULATION_INTERVAL milliseconds) {
          Utils.tryOrExit { checkSpeculatableTasks() }
        }
      }
    }

The backend here is the SparkDeploySchedulerBackend, so its start method is invoked:

    override def start() {
      // CoarseGrainedSchedulerBackend's start method creates the DriverActor
      super.start()

      // The endpoint for executors to talk to us
      // Below we prepare the arguments for launching the Java child process
      val driverUrl = AkkaUtils.address(
        AkkaUtils.protocol(actorSystem),
        SparkEnv.driverActorSystemName,
        conf.get("spark.driver.host"),
        conf.get("spark.driver.port"),
        CoarseGrainedSchedulerBackend.ACTOR_NAME)
      val args = Seq(
        "--driver-url", driverUrl,
        "--executor-id", "{{EXECUTOR_ID}}",
        "--hostname", "{{HOSTNAME}}",
        "--cores", "{{CORES}}",
        "--app-id", "{{APP_ID}}",
        "--worker-url", "{{WORKER_URL}}")
      val extraJavaOpts = sc.conf.getOption("spark.executor.extraJavaOptions")
        .map(Utils.splitCommandString).getOrElse(Seq.empty)
      val classPathEntries = sc.conf.getOption("spark.executor.extraClassPath")
        .map(_.split(java.io.File.pathSeparator).toSeq).getOrElse(Nil)
      val libraryPathEntries = sc.conf.getOption("spark.executor.extraLibraryPath")
        .map(_.split(java.io.File.pathSeparator).toSeq).getOrElse(Nil)

      // When testing, expose the parent class path to the child. This is processed by
      // compute-classpath.{cmd,sh} and makes all needed jars available to child processes
      // when the assembly is built with the "*-provided" profiles enabled.
      val testingClassPath =
        if (sys.props.contains("spark.testing")) {
          sys.props("java.class.path").split(java.io.File.pathSeparator).toSeq
        } else {
          Nil
        }

      // Start executors with a few necessary configs for registering with the scheduler
      val sparkJavaOpts = Utils.sparkJavaOpts(conf, SparkConf.isExecutorStartupConf)
      val javaOpts = sparkJavaOpts ++ extraJavaOpts
      // Assemble the command; it will ultimately launch an
      // org.apache.spark.executor.CoarseGrainedExecutorBackend child process
      val command = Command("org.apache.spark.executor.CoarseGrainedExecutorBackend",
        args, sc.executorEnvs, classPathEntries ++ testingClassPath, libraryPathEntries, javaOpts)
      val appUIAddress = sc.ui.map(_.appUIAddress).getOrElse("")
      // Wrap the important parameters in an ApplicationDescription
      val appDesc = new ApplicationDescription(sc.appName, maxCores, sc.executorMemory, command,
        appUIAddress, sc.eventLogDir, sc.eventLogCodec)
      // The ClientActor is created inside AppClient
      client = new AppClient(sc.env.actorSystem, masters, appDesc, this, conf)
      // Start the ClientActor
      client.start()
      waitForRegistration()
    }

This assembles the parameters for launching the Executor: the class name plus its arguments are wrapped into an ApplicationDescription, which is then handed to a newly created AppClient whose start method is called.
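As a rough mental model, the payload handed to the AppClient can be pictured as two value objects. This is a trimmed-down sketch, not the real class definitions; the real ones carry more fields:

    // Simplified models of the launch payload, with field lists trimmed for illustration
    case class Command(
        mainClass: String,                // e.g. org.apache.spark.executor.CoarseGrainedExecutorBackend
        arguments: Seq[String],           // --driver-url, --executor-id, ... with {{...}} placeholders
        environment: Map[String, String],
        classPathEntries: Seq[String],
        javaOpts: Seq[String])

    case class ApplicationDescription(
        name: String,                     // application name
        maxCores: Option[Int],            // requested cores across the cluster
        memoryPerExecutor: Int,           // executor memory in MB
        command: Command)                 // how to launch the executor process

    object LaunchPayloadModel {
      def main(args: Array[String]): Unit = {
        val command = Command(
          "org.apache.spark.executor.CoarseGrainedExecutorBackend",
          Seq("--cores", "{{CORES}}"), Map.empty, Nil, Nil)
        println(ApplicationDescription("WordCount", Some(4), 1024, command))
      }
    }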

AppClient Creation Sequence Diagram

AppClient's start Method

Next we focus on the start method:

    def start() {
      // Just launch an actor; it will call back into the listener.
      actor = actorSystem.actorOf(Props(new ClientActor))
    }

The start method creates the ClientActor that talks to the Master; its preStart lifecycle method is then called to register with the Master. Let's look at preStart:

    override def preStart() {
      context.system.eventStream.subscribe(self, classOf[RemotingLifecycleEvent])
      try {
        // The ClientActor registers with the Master
        registerWithMaster()
      } catch {
        case e: Exception =>
          logWarning("Failed to connect to master", e)
          markDisconnected()
          context.stop(self)
      }
    }

Eventually this method is called to register with every Master:

    def tryRegisterAllMasters() {
      for (masterAkkaUrl <- masterAkkaUrls) {
        logInfo("Connecting to master " + masterAkkaUrl + "...")
        // Obtain a reference to the Master via actorSelection
        val actor = context.actorSelection(masterAkkaUrl)
        // Send the asynchronous RegisterApplication message to the Master
        actor ! RegisterApplication(appDescription)
      }
    }
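For readers unfamiliar with the actor idiom here, this is a stripped-down sketch of the same register-with-a-remote-actor pattern. It assumes classic Akka actors (akka-actor 2.x) on the classpath; the message type, actor path, and names are illustrative, and the remoting configuration is omitted:

    import akka.actor.{Actor, ActorSystem, Props}

    // Illustrative message; Spark's real RegisterApplication carries an ApplicationDescription
    case class RegisterApplication(appName: String)

    class ClientActor(masterAkkaUrls: Seq[String]) extends Actor {
      override def preStart(): Unit = {
        for (masterAkkaUrl <- masterAkkaUrls) {
          // Look up the remote Master by its actor path and fire an async message
          val master = context.actorSelection(masterAkkaUrl)
          master ! RegisterApplication("WordCount")
        }
      }
      def receive = {
        case msg => println(s"Reply from master: $msg")
      }
    }

    object ClientDemo extends App {
      val system = ActorSystem("client")
      // Actor path format for a remote actor in classic Akka; host and port are placeholders
      system.actorOf(Props(new ClientActor(
        Seq("akka.tcp://sparkMaster@127.0.0.1:7077/user/Master"))), "clientActor")
    }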

The RegisterApplication message sent by the ClientActor carries the ApplicationDescription, which contains the requested resources, the Executor class to launch, and its arguments. In the Master's receiver:

    case RegisterApplication(description) => {
      if (state == RecoveryState.STANDBY) {
        // ignore, don't send response
      } else {
        logInfo("Registering app " + description.name)
        // Create the app; sender is the ClientActor
        val app = createApplication(description, sender)
        // Register the app
        registerApplication(app)
        logInfo("Registered app " + description.name + " with ID " + app.id)
        // Persist the app
        persistenceEngine.addApplication(app)
        // Reply to the ClientActor that the app registered successfully
        sender ! RegisteredApplication(app.id, masterUrl)
        // Schedule resources
        schedule()
      }
    }

Inside registerApplication(app):

    def registerApplication(app: ApplicationInfo): Unit = {
      val appAddress = app.driver.path.address
      if (addressToApp.contains(appAddress)) {
        logInfo("Attempted to re-register application at same address: " + appAddress)
        return
      }
      // Put the app into the Master's bookkeeping collections
      applicationMetricsSystem.registerSource(app.appSource)
      apps += app
      idToApp(app.id) = app
      actorToApp(app.driver) = app
      addressToApp(appAddress) = app
      waitingApps += app
    }

The Master saves the received information into its collections, persists it, and sends a RegisteredApplication message back to the ClientActor as confirmation. It then runs the schedule() method, which iterates over the workers collection and calls launchExecutor (Spark's actual launchExecutor follows the sketch below).
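A heavily simplified model of that resource-carving loop is sketched below. It is illustrative only; the real schedule() also handles driver placement, spreading cores across workers, and worker liveness checks:

    // Minimal stand-ins for the Master's bookkeeping types
    case class WorkerInfo(id: String, var coresFree: Int, var memoryFree: Int)
    case class ExecutorDesc(id: Int, cores: Int, memory: Int)

    object ScheduleSketch {
      def main(args: Array[String]): Unit = {
        val workers = Seq(WorkerInfo("w1", 8, 4096), WorkerInfo("w2", 4, 2048))
        var coresWanted = 10               // the app's outstanding core demand
        val coresPerExecutor = 2
        val memoryPerExecutor = 1024
        var nextExecId = 0

        // Walk the workers, carving out executors while resources and demand remain
        for (worker <- workers if coresWanted > 0) {
          while (worker.coresFree >= coresPerExecutor &&
                 worker.memoryFree >= memoryPerExecutor && coresWanted > 0) {
            val exec = ExecutorDesc(nextExecId, coresPerExecutor, memoryPerExecutor)
            nextExecId += 1
            worker.coresFree -= exec.cores
            worker.memoryFree -= exec.memory
            coresWanted -= exec.cores
            // This is the point where the real Master calls launchExecutor(worker, exec)
            println(s"launchExecutor on ${worker.id}: $exec")
          }
        }
      }
    }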

    def launchExecutor(worker: WorkerInfo, exec: ExecutorDesc) {
      logInfo("Launching executor " + exec.fullId + " on worker " + worker.id)
      // Record how many resources this worker has in use
      worker.addExecutor(exec)
      // The Master sends the Worker the message to launch an Executor
      worker.actor ! LaunchExecutor(masterUrl,
        exec.application.id, exec.id, exec.application.desc, exec.cores, exec.memory)
      // The Master tells the ClientActor that the executor has been launched
      exec.application.driver ! ExecutorAdded(
        exec.id, worker.id, worker.hostPort, exec.cores, exec.memory)
    }

Here the Master sends the Worker the message to launch an Executor:

`worker.actor ! LaunchExecutor(masterUrl, exec.application.id, exec.id, exec.application.desc, exec.cores, exec.memory)`

application.desc contains the launch information for the Executor class.

    case LaunchExecutor(masterUrl, appId, execId, appDesc, cores_, memory_) =>
      // ...
      appDirectories(appId) = appLocalDirs
      // Create an ExecutorRunner; this is important, as it holds the Executor's
      // launch configuration and arguments
      val manager = new ExecutorRunner(
        appId,
        execId,
        appDesc.copy(command = Worker.maybeUpdateSSLSettings(appDesc.command, conf)),
        cores_,
        memory_,
        self,
        workerId,
        host,
        webUi.boundPort,
        publicAddress,
        sparkHome,
        executorDir,
        akkaUrl,
        conf,
        appLocalDirs,
        ExecutorState.LOADING)
      executors(appId + "/" + execId) = manager
      // Start the ExecutorRunner
      manager.start()
      // ...

The Worker's receiver has received the message to launch an Executor; the appDesc object holds the Command, the Executor's implementation class, and its arguments.

manager.start() creates a thread:

    def start() {
      // Launch a thread
      workerThread = new Thread("ExecutorRunner for " + fullId) {
        // Use a child thread to help the Worker launch the Executor child process
        override def run() { fetchAndRunExecutor() }
      }
      workerThread.start()
      // Shutdown hook that kills actors on shutdown.
      shutdownHook = new Thread() {
        override def run() {
          killProcess(Some("Worker shutting down"))
        }
      }
      Runtime.getRuntime.addShutdownHook(shutdownHook)
    }

The thread invokes the fetchAndRunExecutor() method. Let's look at it:

    def fetchAndRunExecutor() {
      try {
        // Launch the process
        val builder = CommandUtils.buildProcessBuilder(appDesc.command, memory,
          sparkHome.getAbsolutePath, substituteVariables)
        // Build the command
        val command = builder.command()
        logInfo("Launch command: " + command.mkString("\"", "\" \"", "\""))

        builder.directory(executorDir)
        builder.environment.put("SPARK_LOCAL_DIRS", appLocalDirs.mkString(","))
        // In case we are running this from within the Spark Shell, avoid creating a "scala"
        // parent process for the executor command
        builder.environment.put("SPARK_LAUNCH_WITH_SCALA", "0")

        // Add webUI log urls
        val baseUrl =
          s"http://$publicAddress:$webUiPort/logPage/?appId=$appId&executorId=$execId&logType="
        builder.environment.put("SPARK_LOG_URL_STDERR", s"${baseUrl}stderr")
        builder.environment.put("SPARK_LOG_URL_STDOUT", s"${baseUrl}stdout")

        // Launch the child process
        process = builder.start()
        val header = "Spark Executor Command: %s\n%s\n\n".format(
          command.mkString("\"", "\" \"", "\""), "=" * 40)

        // Redirect its stdout and stderr to files
        val stdout = new File(executorDir, "stdout")
        stdoutAppender = FileAppender(process.getInputStream, stdout, conf)
        val stderr = new File(executorDir, "stderr")
        Files.write(header, stderr, UTF_8)
        stderrAppender = FileAppender(process.getErrorStream, stderr, conf)

        // Wait for it to exit; executor may exit with code 0 (when driver instructs it to shutdown)
        // or with nonzero exit code
        val exitCode = process.waitFor()
        // ...
      }
    }

Here the class name and arguments are assembled into the final command; the exact assembly details do not matter. Ultimately builder.start() launches a child process through the JDK's ProcessBuilder, and the main class of that process is CoarseGrainedExecutorBackend. At this point the Executor process is up and running.
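The launching mechanism itself is the plain JDK ProcessBuilder API. Here is a minimal sketch of starting a child JVM and redirecting its output to files, in the spirit of fetchAndRunExecutor; the main class and paths are illustrative:

    import java.io.File

    object ChildProcessSketch {
      def main(args: Array[String]): Unit = {
        // Assemble a java command, as CommandUtils.buildProcessBuilder does for the executor
        val javaBin = new File(new File(System.getProperty("java.home"), "bin"), "java").getPath
        val builder = new ProcessBuilder(
          javaBin, "-cp", System.getProperty("java.class.path"),
          "org.apache.spark.executor.CoarseGrainedExecutorBackend") // illustrative main class
        builder.directory(new File("."))
        builder.environment.put("SPARK_LOCAL_DIRS", "/tmp")
        // Redirect stdout and stderr to files instead of pumping the streams by hand
        builder.redirectOutput(new File("stdout"))
        builder.redirectError(new File("stderr"))

        val process = builder.start()
        // Block until the child exits, like process.waitFor() in ExecutorRunner
        val exitCode = process.waitFor()
        println(s"Child exited with code $exitCode")
      }
    }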

Executor Creation Sequence Diagram

Starting the Executor's Task-Scheduling Objects

Once the Executor process starts, the first thing it executes is the main method, whose code is shown below:

    // Entry point of the Executor process
    def main(args: Array[String]) {
      // ...
      // Parse the arguments
      while (!argv.isEmpty) {
        argv match {
          case ("--driver-url") :: value :: tail =>
            driverUrl = value
            argv = tail
          case ("--executor-id") :: value :: tail =>
            executorId = value
            argv = tail
          case ("--hostname") :: value :: tail =>
            hostname = value
            argv = tail
          case ("--cores") :: value :: tail =>
            cores = value.toInt
            argv = tail
          case ("--app-id") :: value :: tail =>
            appId = value
            argv = tail
          case ("--worker-url") :: value :: tail =>
            // Worker url is used in spark standalone mode to enforce fate-sharing with worker
            workerUrl = Some(value)
            argv = tail
          case ("--user-class-path") :: value :: tail =>
            userClassPath += new URL(value)
            argv = tail
          case Nil =>
          case tail =>
            System.err.println(s"Unrecognized options: ${tail.mkString(" ")}")
            printUsageAndExit()
        }
      }
      if (driverUrl == null || executorId == null || hostname == null || cores <= 0 ||
          appId == null) {
        printUsageAndExit()
      }
      // Start running the Executor
      run(driverUrl, executorId, hostname, cores, appId, workerUrl, userClassPath)
    }
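The argument loop above is the standard Scala idiom of pattern matching on a List of strings. A self-contained sketch of the same technique with just two flags:

    object ArgParseSketch {
      def main(args: Array[String]): Unit = {
        var driverUrl: String = null
        var cores = 0
        var argv = args.toList
        // Peel off "--flag value" pairs by matching on the head of the list
        while (argv.nonEmpty) {
          argv match {
            case "--driver-url" :: value :: tail =>
              driverUrl = value
              argv = tail
            case "--cores" :: value :: tail =>
              cores = value.toInt
              argv = tail
            case unknown :: _ =>
              System.err.println(s"Unrecognized option: $unknown")
              sys.exit(1)
            case Nil => // unreachable while argv is non-empty
          }
        }
        println(s"driverUrl=$driverUrl cores=$cores")
      }
    }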

main then executes the run method:

    private def run(
        driverUrl: String,
        executorId: String,
        hostname: String,
        cores: Int,
        appId: String,
        workerUrl: Option[String],
        userClassPath: Seq[URL]) {
      // ...
      // Create the CoarseGrainedExecutorBackend actor through the actorSystem;
      // CoarseGrainedExecutorBackend talks to the DriverActor
      env.actorSystem.actorOf(
        Props(classOf[CoarseGrainedExecutorBackend],
          driverUrl, executorId, sparkHostPort, cores, userClassPath, env),
        name = "Executor")
      // ...
      env.actorSystem.awaitTermination()
    }

The run method creates the CoarseGrainedExecutorBackend actor, which prepares to communicate with the DriverActor; its preStart lifecycle method is then invoked:

    override def preStart() {
      logInfo("Connecting to driver: " + driverUrl)
      // The Executor establishes a connection to the DriverActor
      driver = context.actorSelection(driverUrl)
      // The Executor sends the registration message to the DriverActor
      driver ! RegisterExecutor(executorId, hostPort, cores, extractLogUrls)
      context.system.eventStream.subscribe(self, classOf[RemotingLifecycleEvent])
    }

The Executor sends the registration message to the DriverActor:

`driver ! RegisterExecutor(executorId, hostPort, cores, extractLogUrls)`

When the DriverActor's receiver gets the message:

    def receiveWithLogging = {
      // Registration message sent by the Executor to the DriverActor
      case RegisterExecutor(executorId, hostPort, cores, logUrls) =>
        Utils.checkHostPort(hostPort, "Host port expected " + hostPort)
        if (executorDataMap.contains(executorId)) {
          sender ! RegisterExecutorFailed("Duplicate executor ID: " + executorId)
        } else {
          logInfo("Registered executor: " + sender + " with ID " + executorId)
          // The DriverActor replies to the Executor that registration succeeded
          sender ! RegisteredExecutor
          addressToExecutorId(sender.path.address) = executorId
          totalCoreCount.addAndGet(cores)
          totalRegisteredExecutors.addAndGet(1)
          val (host, _) = Utils.parseHostPort(hostPort)
          // Wrap up the Executor's information
          val data = new ExecutorData(sender, sender.path.address, host, cores, cores, logUrls)
          // This must be synchronized because variables mutated
          // in this block are read when requesting executors
          CoarseGrainedSchedulerBackend.this.synchronized {
            // Add the Executor's info object to the map
            executorDataMap.put(executorId, data)
            if (numPendingExecutors > 0) {
              numPendingExecutors -= 1
              logDebug(s"Decremented number of pending executors ($numPendingExecutors left)")
            }
          }
          listenerBus.post(SparkListenerExecutorAdded(System.currentTimeMillis(), executorId, data))
          // Later used to offer resources for the actual work
          makeOffers()
        }
    }
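Stripped of Spark's types, the core of this handler is a duplicate-ID check plus a synchronized map update. A self-contained sketch follows; ExecutorData here is a simplified stand-in for Spark's internal class:

    import scala.collection.mutable

    // Simplified stand-in for Spark's internal ExecutorData record
    case class ExecutorData(host: String, totalCores: Int, var freeCores: Int)

    object RegistrationSketch {
      private val executorDataMap = mutable.HashMap[String, ExecutorData]()

      // Returns false when the executor ID is already taken, mirroring RegisterExecutorFailed
      def register(executorId: String, host: String, cores: Int): Boolean = synchronized {
        if (executorDataMap.contains(executorId)) {
          false
        } else {
          executorDataMap.put(executorId, ExecutorData(host, cores, cores))
          true
        }
      }

      def main(args: Array[String]): Unit = {
        println(register("0", "worker-1", 4)) // true: first registration wins
        println(register("0", "worker-1", 4)) // false: duplicate executor ID
      }
    }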

In its receiver the DriverActor wraps the Executor's information into the map for safekeeping and sends the feedback message `sender ! RegisteredExecutor` back to the CoarseGrainedExecutorBackend.

    override def receiveWithLogging = {
      case RegisteredExecutor =>
        logInfo("Successfully registered with driver")
        val (hostname, _) = Utils.parseHostPort(hostPort)
        executor = new Executor(executorId, hostname, env, userClassPath, isLocal = false)
      // ...
    }

On receiving that message, the CoarseGrainedExecutorBackend creates an Executor object to prepare for task execution. With that, Executor creation is complete; the next post will cover task scheduling.
