Apache Spark is a fast, general-purpose computing engine designed for large-scale data processing, and it has grown into a rapidly evolving, widely used ecosystem.

## Comparing Spark's three distributed deployment modes

Apache Spark currently supports three distributed deployment modes: standalone, Spark on Mesos, and Spark on YARN (see the referenced comparison for details).

## Spark standalone distributed deployment

### Environment

| Host  | Services                                        |
|-------|-------------------------------------------------|
| tvm11 | zookeeper                                       |
| tvm12 | zookeeper                                       |
| tvm13 | zookeeper, spark (master), spark (slave), Scala |
| tvm14 | spark (backup master), spark (slave), Scala     |
| tvm15 | spark (slave), Scala                            |

### Notes

- Scala dependency:

  > Note that support for Java 7, Python 2.6 and old Hadoop versions before 2.6.5 were removed as of Spark 2.2.0. Support for Scala 2.10 was removed as of 2.3.0. Support for Scala 2.11 is deprecated as of Spark 2.4.1 and will be removed in Spark 3.0.

- ZooKeeper: the standalone Master is a single point of failure, so ZooKeeper is used together with at least two Master nodes to achieve high availability. The configuration is fairly simple.

### Installing Scala

As the note above shows, Spark is fairly strict about the Scala version. The Spark 2.4.5 build used here targets Scala 2.12.x, so install Scala 2.12.x first; this deployment uses Scala 2.12.10.

Install from the binary release:

```bash
$ wget https://downloads.lightbend.com/scala/2.12.10/scala-2.12.10.tgz
$ tar zxvf scala-2.12.10.tgz -C /path/to/scala_install_dir
```

If the system environment should also use this Scala version, add it to the user environment variables (.bashrc or .bash_profile).

### Installing Spark

Set up passwordless SSH for the work user across the three Spark machines, then download the installation package to the master machine tvm13. Pay attention to the download notes and to the Hadoop version (it should match the existing environment; if no prebuilt package matches, take the non-prebuilt source release and build it yourself). Then simply extract the package into the installation directory.

### Configuring Spark

Spark has two main service configuration files: spark-env.sh and slaves. spark-env.sh sets the environment variables Spark runs with; slaves lists the worker servers.

#### spark-env.sh

```bash
cp spark-env.sh.template spark-env.sh
```

```bash
export JAVA_HOME=/data/template/j/java/jdk1.8.0_201
export SCALA_HOME=/data/template/s/scala/scala-2.12.10
export SPARK_WORKER_MEMORY=2048m
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=2
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=tvm11:2181,tvm12:2181,tvm13:2181 -Dspark.deploy.zookeeper.dir=/data/template/s/spark"

# About SPARK_DAEMON_JAVA_OPTS:
# -Dspark.deploy.recoveryMode=ZOOKEEPER  # recover the master through ZooKeeper on failure
# -Dspark.deploy.zookeeper.url=...       # the ZooKeeper ensemble addresses
# -Dspark.deploy.zookeeper.dir=...       # the ZooKeeper directory where Spark stores recovery state
# Other parameters: https://blog.csdn.net/u010199356/article/details/89056304
```

#### slaves

```bash
cp slaves.template slaves
```

```
# A Spark Worker will be started on each of the machines listed below.
tvm13
tvm14
tvm15
```

#### spark-defaults.conf

This file holds the default settings used when Spark runs tasks (they can also be overridden dynamically on the command line):

```
# http://spark.apache.org/docs/latest/configuration.html#configuring-logging
spark.app.name                                YunTuSpark
spark.driver.cores                            2
spark.driver.memory                           2g
spark.master                                  spark://tvm13:7077,tvm14:7077
spark.eventLog.enabled                        true
spark.eventLog.dir                            hdfs://cluster01/tmp/event/logs
spark.serializer                              org.apache.spark.serializer.KryoSerializer
spark.serializer.objectStreamReset            100
spark.executor.logs.rolling.time.interval     daily
spark.executor.logs.rolling.maxRetainedFiles  30
spark.ui.enabled                              true
spark.ui.killEnabled                          true
spark.ui.liveUpdate.period                    100ms
spark.ui.liveUpdate.minFlushPeriod            3s
spark.ui.port                                 4040
spark.history.ui.port                         18080
spark.ui.retainedJobs                         100
spark.ui.retainedStages                       100
spark.ui.retainedTasks                        1000
spark.ui.showConsoleProgress                  true
spark.worker.ui.retainedExecutors             100
spark.worker.ui.retainedDrivers               100
spark.sql.ui.retainedExecutions               100
spark.streaming.ui.retainedBatches            100
spark.ui.retainedDeadExecutors                100
# spark.executor.extraJavaOptions             -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
```

### HDFS preparation

Because spark.eventLog.dir points at HDFS storage, the corresponding directory has to be created in HDFS beforehand:

```bash
hdfs dfs -mkdir -p hdfs://cluster01/tmp/event/logs
```

### System environment variables

Edit ~/.bashrc:

```bash
export SPARK_HOME=/data/template/s/spark/spark-2.4.5-bin-hadoop2.7
export PATH=$SPARK_HOME/bin/:$PATH
```

### Distribution

Once the configuration above is done, distribute /path/to/spark-2.4.5-bin-hadoop2.7 to every slave node and set the environment variables on each node.
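A minimal sketch of that distribution step, assuming the work user already has passwordless SSH to tvm14 and tvm15 and that Spark lives under /data/template/s/spark as configured above (hostnames and paths are taken from this setup; rsync is one option, scp works as well):

```bash
# Push the configured Spark directory to the other worker nodes (paths assumed from above).
for host in tvm14 tvm15; do
    rsync -a /data/template/s/spark/spark-2.4.5-bin-hadoop2.7/ \
          "${host}:/data/template/s/spark/spark-2.4.5-bin-hadoop2.7/"
done
# Remember to add the same SPARK_HOME/PATH exports to ~/.bashrc on each node as well.
```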
### Starting the cluster

First start all services on the master node:

```bash
./sbin/start-all.sh
```

Then start the master service separately on the backup node:

```bash
./sbin/start-master.sh
```

### Checking status

After startup, check the web UIs:

- master (port 8081): `Status: ALIVE`
- backup (port 8080): `Status: STANDBY`

Once the master reports ALIVE and the backup reports STANDBY, the deployment is complete.
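As a quick sanity check (an addition to the original procedure), one option is to submit the SparkPi example that ships with the Spark distribution against the HA master URL configured above; the examples jar name varies with the Scala/Spark build, hence the wildcard:

```bash
# Submit the bundled SparkPi example to the standalone cluster.
# The master list matches spark.master from spark-defaults.conf.
$SPARK_HOME/bin/spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master spark://tvm13:7077,tvm14:7077 \
    "$SPARK_HOME"/examples/jars/spark-examples_*.jar 100
```

If the driver prints an approximate value of Pi and the application shows up as FINISHED in the master UI, the cluster is accepting and running jobs.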
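To exercise the ZooKeeper-based failover, a rough test (also not part of the original text, assuming it is safe to restart services) is to stop the active master and watch the standby take over:

```bash
# On the active master (tvm13): stop the master; ZooKeeper should promote the standby.
$SPARK_HOME/sbin/stop-master.sh

# After a short delay the backup node's web UI should switch to Status: ALIVE;
# applications that were already running are not interrupted by the failover.

# Bring the original master back; it rejoins as STANDBY.
$SPARK_HOME/sbin/start-master.sh
```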