Background
When submitting a Flink job to YARN through YarnClient from our real-time computing platform, the submission kept hanging, and the client kept printing the following log:
(YarnClusterDescriptor.java:1036)- Deployment took more than 60 seconds. Please check if the requested resources are available in the YARN cluster
My first reaction was that this was a YARN resource allocation problem, so I opened the YARN web UI and found that Apps Pending under Cluster Metrics was 1. What? Why would a newly submitted job be stuck in the pending state?
1. First, check the CPU and memory situation:
As the metrics show, CPU and memory are both plentiful, so no problem there.
2. Check which scheduler is in use
The scheduler configured for the cluster is shown in the figure below:
As you can see, the cluster uses the Capacity Scheduler, i.e. capacity scheduling. This scheduler is designed for securely sharing a large cluster among multiple tenants, so that each gets resources in a timely manner within its allocated capacity limits. It is built around queues: jobs are submitted to a queue, each queue can be assigned a share of the cluster's resources, and hierarchical queues, access control, user limits, reservations and so on are all supported. Tuning the capacity shares, however, takes some experience. On the upside, it avoids the problem of the FIFO Scheduler, where one large job can monopolize resources and leave every other job pending.
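As a quick illustration of the queue model, here is a minimal capacity-scheduler.xml sketch that splits the root queue into two child queues with a 40/60 capacity split; the queue names and percentages are made up for illustration and are not the settings of the cluster in this post:

```xml
<!-- Hypothetical example: two child queues under root with a 40/60 capacity split -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>dev,prod</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.dev.capacity</name>
  <value>40</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.prod.capacity</name>
  <value>60</value>
</property>
```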
3. Check the job queue
The screenshot above shows that Configured Minimum User Limit Percent is 100%. Because the cluster is still fairly small, we have not carved out per-tenant queues; everything runs in the default queue. The queue is only at 38.2% of its capacity, and it can hold up to 10000 applications while the actual number is far below that, so seemingly nothing is wrong here either.
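For reference, the two values seen here correspond roughly to the following entries in capacity-scheduler.xml; this is a sketch based on the stock defaults (both values observed above match the defaults):

```xml
<!-- Default-queue user limit and the cluster-wide application cap (default values) -->
<property>
  <name>yarn.scheduler.capacity.root.default.minimum-user-limit-percent</name>
  <value>100</value>
</property>
<property>
  <name>yarn.scheduler.capacity.maximum-applications</name>
  <value>10000</value>
</property>
```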
4. Check the ResourceManager log
The log reads as follows:
2020-11-26 19:33:46,669 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 317
2020-11-26 19:33:48,874 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 317 submitted by user root
2020-11-26 19:33:48,874 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1593338489799_0317
2020-11-26 19:33:48,874 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root IP=x.x.x.x OPERATION=Submit Application Request TARGET=ClientRMService RESULT=SUCCESS APPID=application_1593338489799_0317
2020-11-26 19:33:48,874 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1593338489799_0317 State change from NEW to NEW_SAVING on event=START
2020-11-26 19:33:48,875 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1593338489799_0317
2020-11-26 19:33:48,875 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1593338489799_0317 State change from NEW_SAVING to SUBMITTED on event=APP_NEW_SAVED
2020-11-26 19:33:48,875 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1593338489799_0317 user: root leaf-queue of parent: root #applications: 16
2020-11-26 19:33:48,875 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1593338489799_0317 from user: root, in queue: default
2020-11-26 19:33:48,875 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1593338489799_0317 State change from SUBMITTED to ACCEPTED on event=APP_ACCEPTED
2020-11-26 19:33:48,875 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1593338489799_0317_000001
2020-11-26 19:33:48,875 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1593338489799_0317_000001 State change from NEW to SUBMITTED
2020-11-26 19:33:48,875 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: not starting application as amIfStarted exceeds amLimit
2020-11-26 19:33:48,877 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1593338489799_0317 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@6c0d5b4d, leaf-queue: default #user-pending-applications: 1 #user-active-applications: 15 #queue-pending-applications: 1 #queue-active-applications: 15
The log shows the complete flow of an application being accepted and scheduled on YARN; this job simply ended up in the pending queue for some reason. The lines relevant to our problem are these:
2020-11-26 19:33:48,875 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: not starting application as amIfStarted exceeds amLimit
2020-11-26 19:33:48,877 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1593338489799_0317 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@6c0d5b4d, leaf-queue: default #user-pending-applications: 1 #user-active-applications: 15 #queue-pending-applications: 1 #queue-active-applications: 15
Right, the problem is "not starting application as amIfStarted exceeds amLimit". So what causes this? Here is the explanation from stackoverflow[1]:
So what does the yarn.scheduler.capacity.maximum-am-resource-percent parameter actually mean? In plain terms, it is the maximum fraction of cluster resources that may be used to run ApplicationMasters, which effectively limits the number of concurrently running applications. Its default value is 0.1.
The cluster currently runs about 15 jobs, each with a JobManager of roughly 1 GB (the JobManager is an ApplicationMaster-type application), about 15 GB in total. The cluster has 144 GB of memory in all, and 15 GB > 144 GB × 0.1 = 14.4 GB, so the new JobManager could not be started and it stayed pending.
5. Fix and verify
Change the yarn.scheduler.capacity.maximum-am-resource-percent setting in capacity-scheduler.xml as follows:
yarn.scheduler.capacity.maximum-am-resource-percent 0.5
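Written out as a full entry in capacity-scheduler.xml, the change is roughly the following (a sketch; only this property is modified, the rest of the file stays the same):

```xml
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.5</value>
  <description>Maximum percent of cluster resources that can be used to run ApplicationMasters.</description>
</property>
```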
Apart from reducing the number of queues at runtime, the other settings in capacity-scheduler.xml can be updated dynamically, using the command:
yarn rmadmin -refreshQueues
After running the command, the ResourceManager log shows the following output:
2020-11-27 09:37:56,340 INFO org.apache.hadoop.conf.Configuration: found resource capacity-scheduler.xml at file:/work/hadoop-2.7.4/etc/hadoop/capacity-scheduler.xml
2020-11-27 09:37:56,356 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Re-initializing queues...
---------------------------------------------------------------------------
2020-11-27 09:37:56,371 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized queue mappings, override: false
2020-11-27 09:37:56,372 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root IP=x.x.x.x OPERATION=refreshQueues TARGET=AdminService RESULT=SUCCESS
A careful look at the log confirms that the refresh succeeded. Re-submitting the job from the platform then worked, and the problem was solved.
YARN queue configuration
Resource Allocation
| Property | Description |
| --- | --- |
| `yarn.scheduler.capacity.<queue-path>.capacity` | Queue capacity in percentage (%) as a float (e.g. 12.5) OR as absolute resource queue minimum capacity. The sum of capacities for all queues, at each level, must be equal to 100. However, if absolute resources are configured, the sum of the absolute resources of the child queues can be less than its parent's absolute resource capacity. Applications in the queue may consume more resources than the queue's capacity if there are free resources, providing elasticity. |
| `yarn.scheduler.capacity.<queue-path>.maximum-capacity` | Maximum queue capacity in percentage (%) as a float OR as absolute resource queue maximum capacity. This limits the elasticity for applications in the queue. 1) Value is between 0 and 100. 2) Admin needs to make sure absolute maximum capacity >= absolute capacity for each queue. Also, setting this value to -1 sets maximum capacity to 100%. |
| `yarn.scheduler.capacity.<queue-path>.minimum-user-limit-percent` | Each queue enforces a limit on the percentage of resources allocated to a user at any given time, if there is demand for resources. The user limit can vary between a minimum and maximum value. The former (the minimum value) is set to this property value and the latter (the maximum value) depends on the number of users who have submitted applications. For example, suppose the value of this property is 25. If two users have submitted applications to a queue, no single user can use more than 50% of the queue's resources. If a third user submits an application, no single user can use more than 33% of the queue's resources. With 4 or more users, no user can use more than 25% of the queue's resources. A value of 100 implies no user limits are imposed. The default is 100. Value is specified as an integer. |
| `yarn.scheduler.capacity.<queue-path>.user-limit-factor` | The multiple of the queue capacity which can be configured to allow a single user to acquire more resources. By default this is set to 1, which ensures that a single user can never take more than the queue's configured capacity irrespective of how idle the cluster is. Value is specified as a float. |
| `yarn.scheduler.capacity.<queue-path>.maximum-allocation-mb` | The per-queue maximum limit of memory to allocate to each container request at the ResourceManager. This setting overrides the cluster configuration `yarn.scheduler.maximum-allocation-mb`. This value must be smaller than or equal to the cluster maximum. |
| `yarn.scheduler.capacity.<queue-path>.maximum-allocation-vcores` | The per-queue maximum limit of virtual cores to allocate to each container request at the ResourceManager. This setting overrides the cluster configuration `yarn.scheduler.maximum-allocation-vcores`. This value must be smaller than or equal to the cluster maximum. |
| `yarn.scheduler.capacity.<queue-path>.user-settings.<user-name>.weight` | This floating point value is used when calculating the user limit resource values for users in a queue. This value will weight each user more or less than the other users in the queue. For example, if user A should receive 50% more resources in a queue than users B and C, this property will be set to 1.5 for user A. Users B and C will default to 1.0. |
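To make a few of these per-queue settings concrete, here is a hedged sketch for a hypothetical root.dev queue; the queue name and values are illustrative only and not taken from the cluster discussed in this post:

```xml
<!-- Illustrative per-queue limits for a hypothetical root.dev queue -->
<property>
  <name>yarn.scheduler.capacity.root.dev.maximum-capacity</name>
  <value>60</value>      <!-- elasticity cap: dev may grow to at most 60% of the cluster -->
</property>
<property>
  <name>yarn.scheduler.capacity.root.dev.user-limit-factor</name>
  <value>2</value>       <!-- a single user may take up to 2x the queue's configured capacity -->
</property>
<property>
  <name>yarn.scheduler.capacity.root.dev.maximum-allocation-mb</name>
  <value>8192</value>    <!-- no single container request in this queue may exceed 8 GB -->
</property>
```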
Resource Allocation using Absolute Resources configuration
The CapacityScheduler supports configuration of absolute resources instead of providing queue capacity in percentage. As mentioned in the configuration section above for `yarn.scheduler.capacity.<queue-path>.capacity` and `yarn.scheduler.capacity.<queue-path>.maximum-capacity`, an administrator can specify an absolute resource value like `[memory=10240,vcores=12]`. This is a valid configuration which indicates 10 GB of memory and 12 vcores.
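For instance, a queue's capacity could be expressed as an absolute resource instead of a percentage; a sketch using the same hypothetical root.dev queue name as above:

```xml
<property>
  <name>yarn.scheduler.capacity.root.dev.capacity</name>
  <value>[memory=10240,vcores=12]</value>  <!-- 10 GB of memory and 12 vcores -->
</property>
```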
Running and Pending Application Limits
The CapacityScheduler supports the following parameters to control the running and pending applications:
| Property | Description |
| --- | --- |
| `yarn.scheduler.capacity.maximum-applications` / `yarn.scheduler.capacity.<queue-path>.maximum-applications` | Maximum number of applications in the system which can be concurrently active, both running and pending. Limits on each queue are directly proportional to their queue capacities and user limits. This is a hard limit, and any applications submitted when this limit is reached will be rejected. Default is 10000. This can be set for all queues with `yarn.scheduler.capacity.maximum-applications` and can also be overridden on a per-queue basis by setting `yarn.scheduler.capacity.<queue-path>.maximum-applications`. Integer value expected. |
| `yarn.scheduler.capacity.maximum-am-resource-percent` / `yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent` | Maximum percent of resources in the cluster which can be used to run application masters - controls the number of concurrent active applications. Limits on each queue are directly proportional to their queue capacities and user limits. Specified as a float, i.e. 0.5 = 50%. Default is 10%. This can be set for all queues with `yarn.scheduler.capacity.maximum-am-resource-percent` and can also be overridden on a per-queue basis by setting `yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent`. |
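Both limits can also be overridden per queue rather than set globally; a sketch with a hypothetical queue name and illustrative values:

```xml
<property>
  <name>yarn.scheduler.capacity.root.dev.maximum-applications</name>
  <value>5000</value>    <!-- hard cap on running + pending applications in this queue -->
</property>
<property>
  <name>yarn.scheduler.capacity.root.dev.maximum-am-resource-percent</name>
  <value>0.2</value>     <!-- up to 20% of this queue's resources may go to ApplicationMasters -->
</property>
```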
More configuration
See: https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html
Appendix
If you want to refresh Hadoop configuration dynamically without restarting the cluster, try the following:
1. Refresh the HDFS configuration
Run on both NameNode nodes (taking a three-node cluster as an example):
hdfs dfsadmin -fs hdfs://node1:9000 -refreshSuperUserGroupsConfiguration
hdfs dfsadmin -fs hdfs://node2:9000 -refreshSuperUserGroupsConfiguration
2. Refresh the YARN configuration
Run on both ResourceManager nodes (taking a three-node cluster as an example):
yarn rmadmin -fs hdfs://node1:9000 -refreshSuperUserGroupsConfiguration
yarn rmadmin -fs hdfs://node2:9000 -refreshSuperUserGroupsConfiguration
Further reading
•https://stackoverflow.com/questions/33465300/why-does-yarn-job-not-transition-to-running-state
•https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html
•https://stackoverflow.com/questions/29917540/capacity-scheduler
•https://cloud.tencent.com/developer/article/1357111
•https://cloud.tencent.com/developer/article/1194501
References
[1] stackoverflow: https://stackoverflow.com/questions/33465300/why-does-yarn-job-not-transition-to-running-state