How to Deploy a Spark Application
Updated: HHH   Date: 2023-1-7


This article walks through the ways a Spark application can be deployed: building Spark from source, generating a deployment package, setting up a standalone cluster (including high availability), and submitting applications with spark-submit.

Spark application deployment modes
- local
- Spark standalone
- Hadoop YARN
- Apache Mesos
- Amazon EC2
This article focuses on Spark standalone cluster deployment and standalone HA.
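Each of these modes corresponds to a different `--master` URL when launching a shell or submitting a job. A quick sketch (hostnames are placeholders, and the YARN syntax shown is the Spark 1.x form):

```shell
# Master URLs for the deployment modes above (hostnames are placeholders).
bin/spark-shell --master local[4]                # local mode, 4 worker threads
bin/spark-shell --master spark://master:7077     # Spark standalone cluster
bin/spark-shell --master yarn-client             # Hadoop YARN (Spark 1.x syntax)
bin/spark-shell --master mesos://master:5050     # Apache Mesos
```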
Building Spark from source
SBT build:
SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly
Maven build:
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
Generating a deployment package with make-distribution.sh
--hadoop VERSION: Hadoop version number; defaults to 1.0.4 if omitted
--with-yarn: enable Hadoop YARN support; disabled if omitted
--with-hive: enable Hive support in Spark SQL; disabled if omitted
--skip-tachyon: skip support for the Tachyon in-memory file system
--tgz: generate a .tgz package; if omitted, only the /dist directory is produced
--name NAME: combined with --tgz, produces a package named spark-$VERSION-bin-$NAME.tgz; NAME defaults to the Hadoop version number
Generating the package
Build a package with YARN support on Hadoop 2.2.0:
./make-distribution.sh --hadoop 2.2.0 --with-yarn --tgz
Build a package with YARN and Hive support:
./make-distribution.sh --hadoop 2.2.0 --with-yarn --with-hive --tgz


Verify the assembly jar inside the unpacked distribution:
[root@localhost lib]# ls /root/soft/spark-1.4.0-bin-hadoop2.6/lib/spark-assembly-1.4.0-hadoop2.6.0.jar
/root/soft/spark-1.4.0-bin-hadoop2.6/lib/spark-assembly-1.4.0-hadoop2.6.0.jar

[root@localhost conf]# vi slaves    (lists the slave nodes; in pseudo-distributed mode this is just:)
localhost

[root@localhost conf]# cp spark-env.sh.template spark-env.sh
[root@localhost conf]# vi spark-env.sh    (copy this file to every node)
Contents of conf/spark-env.sh:
export SPARK_MASTER_IP=localhost
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=1g
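The values above are the minimum for a pseudo-distributed setup. As a hypothetical sizing for a larger worker node (say 8 cores and 16 GB of RAM), one might instead use:

```shell
# Hypothetical spark-env.sh sizing for an 8-core / 16 GB node; adjust to your hardware.
export SPARK_MASTER_IP=master            # placeholder master hostname
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=8              # cores this worker offers to executors
export SPARK_WORKER_INSTANCES=1          # worker JVMs per node
export SPARK_WORKER_MEMORY=12g           # leave headroom for the OS
```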

[root@localhost conf]# ../sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /root/soft/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-localhost.localdomain.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /root/soft/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out
localhost: failed to launch org.apache.spark.deploy.worker.Worker:
localhost:   JAVA_HOME is not set
localhost: full log in /root/soft/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out
Note the failure above: the worker did not launch because JAVA_HOME is not set. Export JAVA_HOME (in spark-env.sh or the shell environment) and rerun start-all.sh, then visit http://192.168.141.10:8080/ to see the master web UI.


[root@localhost conf]# ../bin/spark-shell --master  spark://localhost:7077

Visit http://192.168.141.10:8080/ again; an application ID has now been generated for the shell.

Spark standalone HA deployment
File-system-based HA
Set spark.deploy.recoveryMode to FILESYSTEM
spark.deploy.recoveryDirectory: the directory where Spark stores its recovery state
Set SPARK_DAEMON_JAVA_OPTS in spark-env.sh:
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=FILESYSTEM -Dspark.deploy.recoveryDirectory=$dir"
ZooKeeper-based HA
Set spark.deploy.recoveryMode to ZOOKEEPER
spark.deploy.zookeeper.url: the ZooKeeper connection URL
spark.deploy.zookeeper.dir: the ZooKeeper directory for recovery state (defaults to /spark)
Set SPARK_DAEMON_JAVA_OPTS in spark-env.sh:
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop1:2181,hadoop2:2181 -Dspark.deploy.zookeeper.dir=$DIR"
Run start-all.sh on one master node, then run start-master.sh on the other master node.
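The two-master flow can be sketched as follows (hadoop1 and hadoop2 mirror the ZooKeeper URL above; commands assume the Spark installation's sbin directory):

```shell
# On hadoop1: start the primary master plus all workers listed in conf/slaves.
sbin/start-all.sh

# On hadoop2 (with an identical spark-env.sh): start a standby master; it
# registers with ZooKeeper and waits in STANDBY state.
sbin/start-master.sh

# To test failover, kill the Master process on hadoop1; after a short delay
# the hadoop2 master's web UI (port 8080) should switch from STANDBY to ALIVE.
kill $(jps | awk '/Master$/ {print $1}')
```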

[root@localhost ~]# jps
4609 Jps
4416 SparkSubmit
4079 Master
4291 SparkSubmit

Passwordless SSH setup
[root@localhost ~]# ssh-keygen -t rsa -P ''

[root@localhost ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@localhost ~]# chmod 600 ~/.ssh/authorized_keys

[root@localhost conf]# ../bin/spark-shell --master  spark://localhost:7077 --executor-memory 2g

Overview of the Spark tools
Interactive shell: spark-shell
Application deployment tool: spark-submit
Options:
--master MASTER_URL: spark://host:port, mesos://host:port, yarn, or local
--deploy-mode DEPLOY_MODE: where the driver runs; client runs it on the local machine, cluster runs it inside the cluster
--class CLASS_NAME: the main class of the application to run
--name NAME: the application name
--jars JARS: comma-separated list of local jars to include on the driver and executor classpaths
--py-files PY_FILES: comma-separated list of .zip, .egg, or .py files to place on the PYTHONPATH for Python applications
--properties-file FILE: file to load application properties from; defaults to conf/spark-defaults.conf
--driver-memory MEM: driver memory size (default 512m)
--driver-java-options: extra Java options for the driver
--driver-library-path: library path entries for the driver
--driver-class-path: extra classpath entries for the driver
--executor-memory MEM: executor memory size (default 1g)
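Putting the options together, a hypothetical submission to the standalone master set up earlier might look like this (the jar path and class name are placeholders, not part of the original walkthrough):

```shell
# Sketch: submit a packaged word-count application to the standalone master.
bin/spark-submit \
  --master spark://localhost:7077 \
  --deploy-mode client \
  --class com.example.WordCount \
  --name wordcount \
  --driver-memory 512m \
  --executor-memory 1g \
  target/wordcount-1.0.jar \
  hdfs://localhost.localdomain:9000/20140824/test-data.csv
```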
[root@localhost sbin]# sh start-dfs.sh
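Before the word count can read the file, it has to exist in HDFS. A sketch of uploading it (the local filename is assumed to match the HDFS path used below):

```shell
# Upload the sample CSV into HDFS so spark-shell can read it.
hdfs dfs -mkdir -p /20140824
hdfs dfs -put test-data.csv /20140824/
```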
scala>  val rdd=sc.textFile("hdfs://localhost.localdomain:9000/20140824/test-data.csv")
scala> val rdd2=rdd.flatMap(_.split(" ")).map(x=>(x,1)).reduceByKey(_+_)
