If you already have a YARN cluster, deploying Spark only requires deploying a client.
1. Prerequisites
- A YARN cluster is already deployed; for the deployment steps see: https://blog.csdn.net/weixin_39750084/article/details/136750613?spm=1001.2014.3001.5502. The Hadoop version I deployed is 3.3.6.
- JDK 1.8 is installed. If it is missing or the version is wrong, see part six of https://blog.csdn.net/weixin_39750084/article/details/138674399?spm=1001.2014.3001.5502, which covers installing the JDK during client deployment.
2. Deploy the Spark client
Download link:
https://mirrors.aliyun.com/apache/spark/spark-3.4.3/spark-3.4.3-bin-hadoop3.tgz
mkdir spark
cd spark
wget https://mirrors.aliyun.com/apache/spark/spark-3.4.3/spark-3.4.3-bin-hadoop3.tgz
tar -zxvf spark-3.4.3-bin-hadoop3.tgz
vi /etc/profile    # add the following lines
export HIVE_HOME=/mnt/admin/apache-hive-3.1.3-bin
export HADOOP_HOME=/mnt/admin/hadoop-3.3.6
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre
export HADOOP_CONF_DIR=/mnt/admin/hadoop-3.3.6/etc/hadoop
export YARN_CONF_DIR=/mnt/admin/hadoop-3.3.6/etc/hadoop
export SPARK_HOME=/mnt/admin/spark/spark-3.4.3-bin-hadoop3
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native
export PATH=$PATH:$HIVE_HOME/bin:$HADOOP_HOME/bin:$JAVA_HOME/bin:$SPARK_HOME/bin
# make the environment variables take effect
source /etc/profile
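Note that a Jupyter/notebook kernel started before /etc/profile was edited may not see these variables. A minimal sketch, assuming the same install paths as above, of setting them for the current session from Python:
import os
# Only needed if the kernel does not inherit the variables from /etc/profile;
# the paths assume the layout used in the exports above.
os.environ.setdefault("SPARK_HOME", "/mnt/admin/spark/spark-3.4.3-bin-hadoop3")
os.environ.setdefault("HADOOP_CONF_DIR", "/mnt/admin/hadoop-3.3.6/etc/hadoop")
os.environ.setdefault("YARN_CONF_DIR", "/mnt/admin/hadoop-3.3.6/etc/hadoop")
# Quick check that the client can see its configuration.
print(os.environ.get("SPARK_HOME"), os.environ.get("HADOOP_CONF_DIR"))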
yarn-site.xml needs to be configured with the following content:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop-hadoop-yarn-rm-0.hadoop-hadoop-yarn-rm.default</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop-hadoop-yarn-rm-0.hadoop-hadoop-yarn-rm.default:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop-hadoop-yarn-rm-0.hadoop-hadoop-yarn-rm.default:8030</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ,HADOOP_MAPRED_HOME</value>
  </property>
</configuration>
In this yarn-site.xml you must configure not only yarn.resourcemanager.address but also yarn.resourcemanager.hostname. Previously, with only the address configured, pyspark worked when master was set to local but failed when master was set to yarn, precisely because the hostname had not been set.
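Before moving on, it is worth confirming that the client can actually submit to YARN rather than just run locally. A minimal smoke-test sketch that avoids Hive entirely (the app name is arbitrary):
from pyspark.sql import SparkSession
# Start a session on YARN and run a trivial job; if the ResourceManager
# hostname/address above are wrong, the failure shows up here.
spark = SparkSession.builder.master("yarn").appName("yarn-smoke-test").getOrCreate()
print(spark.sparkContext.parallelize(range(100)).sum())  # expect 4950
spark.stop()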
Depending on your needs, you can also configure the spark-defaults.conf file with the following content:
spark.driver.port 10022
spark.blockManager.port 10023
spark.driver.bindAddress 0.0.0.0
spark.driver.host 192.168.3.100
Here 10022 is the port you can check in the terminal with echo $PORT1, and 10023 the one you can check with echo $PORT2.
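If editing spark-defaults.conf is inconvenient (for example inside a notebook container), the same settings can be passed programmatically. A sketch, assuming the same port values and driver IP as above:
from pyspark import SparkConf
from pyspark.sql import SparkSession
# Equivalent to the spark-defaults.conf entries above, set on the builder instead.
conf = (SparkConf()
        .set("spark.driver.port", "10022")
        .set("spark.blockManager.port", "10023")
        .set("spark.driver.bindAddress", "0.0.0.0")
        .set("spark.driver.host", "192.168.3.100"))
spark = SparkSession.builder.master("yarn").config(conf=conf).getOrCreate()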
3. Test the client
Run bin/spark-sql and check whether SQL statements execute correctly, whether data can be written to HDFS normally, and so on.
Then test in Jupyter whether it can be used correctly (covered in the next section).
4. Using PySpark
I am using a notebook on the cube studio platform, which ships with pyspark, so I did not install it separately. If you need to install pyspark, see this article: https://blog.csdn.net/weixin_46560589/article/details/132857521
To test in a Jupyter notebook whether pyspark works correctly, you can use the following example code, provided that a table has already been created in Hive and data written to it:
import os
from pyspark.sql import SparkSession

# The equivalent Java/Scala form of the session setup, for reference:
"""
SparkSession ss = SparkSession
    .builder()
    .appName("Hive example")
    .config("hive.metastore.uris", "thrift://localhost:9083")
    .enableHiveSupport()
    .getOrCreate();
"""
# If the Jupyter kernel does not pick up /etc/profile, set these explicitly:
# os.environ['HADOOP_CONF_DIR'] = '/mnt/admin/hadoop-3.3.6/etc/hadoop'
# os.environ['YARN_CONF_DIR'] = '/mnt/admin/hadoop-3.3.6/etc/hadoop'

# Note: if conf=SparkConf() is also passed to config(), the key/value pair
# is ignored, so the metastore URI is set on its own here.
spark = (SparkSession.builder
         .master("yarn")
         .appName('example-pyspark-read-and-write-from-hive')
         .config("hive.metastore.uris", "thrift://hive-service.default:9083")
         .enableHiveSupport()
         .getOrCreate())

df_load = spark.sql('select * from demo limit 2')
df_load.show()
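To also exercise the write path mentioned in the test section, a minimal sketch that writes a small DataFrame back to Hive; the table name demo_spark_write is just an illustrative placeholder:
# Write a tiny DataFrame back through the metastore to confirm the client
# can write to HDFS; 'demo_spark_write' is a placeholder table name.
df_out = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df_out.write.mode("overwrite").saveAsTable("demo_spark_write")
spark.sql("select * from demo_spark_write").show()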
References:
https://blog.csdn.net/weixin_46560589/article/details/132898417
https://blog.csdn.net/weixin_46560589/article/details/132857521