When studying Hadoop, we should not stay immersed in theory alone; theory must be combined with practice, so we need a Hadoop environment in which we can carry out all kinds of operations to verify what we learn from books. The smallest environment needs at least one Linux server, on which we can deploy the simplest single-node setup: a pseudo-distributed cluster.

1. Software and hardware preparation

A pseudo-distributed cluster is a Hadoop cluster deployed on a single Linux server, i.e. a single-node deployment. This installation uses Ubuntu 16 with the IP set to 192.168.1.11; system resources are CPU: 4 cores; memory: 32 GB; disk: 1 TB for the / partition and 3.6 TB for the /data partition.

Download the installation package from the Hadoop archive; this walkthrough uses hadoop-2.10.0:

https://archive.apache.org/dist/hadoop/common/hadoop-2.10.0/

which gives hadoop-2.10.0.tar.gz. Upload it to the target Linux server and create the installation directory:

mkdir /opt/bigdata

Put hadoop-2.10.0.tar.gz under /opt/bigdata and unpack it:

tar zxvf hadoop-2.10.0.tar.gz
cd hadoop-2.10.0/

Create the subdirectories we will need:

mkdir dfs
mkdir logs
mkdir tmp

2. Configuration

All configuration files and environment scripts live under etc/hadoop/. Enter the configuration directory:

cd hadoop-2.10.0/etc/hadoop/

There are many configuration files here, but only four .xml configuration files and two .sh scripts need to be modified:

core-site.xml
hdfs-site.xml
mapred-site.xml
yarn-site.xml
hadoop-env.sh
yarn-env.sh

The default templates are largely empty and must be filled in according to the official documentation and the actual installation. To lower the barrier, the six files used in the test environment are reproduced below; for a real deployment it is simpler to start from these files and adjust.

2.1 core-site.xml

Two items need changing. First, set the installation path to the actual directory, e.g. /opt/bigdata/hadoop-2.10.0:

<property>
  <name>hadoop.home</name>
  <value>/opt/bigdata/hadoop-2.10.0</value>
</property>

Second, set the HDFS IP and port:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.1.11:9000</value>
</property>

The complete file:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property><name>hadoop.home</name><value>/opt/bigdata/hadoop-2.10.0</value></property>
  <property><name>fs.defaultFS</name><value>hdfs://192.168.1.11:9000</value></property>
  <property><name>io.file.buffer.size</name><value>131072</value></property>
  <property><name>hadoop.tmp.dir</name><value>${hadoop.home}/tmp/hdfs</value></property>
</configuration>

2.2 hdfs-site.xml

Two items need changing. Set the installation path to the actual directory, e.g. /opt/bigdata/hadoop-2.10.0:

<property>
  <name>hadoop.home</name>
  <value>/opt/bigdata/hadoop-2.10.0</value>
</property>

and the node IP:

<property>
  <name>namenode.ip</name>
  <value>192.168.1.11</value>
</property>

The complete file (the Apache license header is the same as in core-site.xml and is elided below):

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property><name>hadoop.home</name><value>/opt/bigdata/hadoop-2.10.0</value></property>
  <property><name>namenode.ip</name><value>192.168.1.11</value></property>
  <property><name>dfs.replication</name><value>1</value></property>
  <property><name>dfs.namenode.name.dir</name><value>${hadoop.home}/dfs/name</value></property>
  <property><name>dfs.hosts</name><value></value></property>
  <property><name>dfs.hosts.exclude</name><value></value></property>
  <property><name>dfs.blocksize</name><value>134217728</value></property>
  <property><name>dfs.namenode.handler.count</name><value>10</value></property>
  <property><name>dfs.datanode.data.dir</name><value>${hadoop.home}/dfs/data</value></property>
  <property><name>dfs.permissions.enabled</name><value>false</value></property>
  <property><name>dfs.webhdfs.enabled</name><value>true</value></property>
  <property>
    <name>dfs.http.address</name>
    <value>${namenode.ip}:50070</value>
    <description>Web UI address of the HDFS NameNode</description>
  </property>
  <property>
    <name>dfs.secondary.http.address</name>
    <value>${namenode.ip}:50090</value>
    <description>Web UI address of the HDFS SecondaryNameNode</description>
  </property>
</configuration>

2.3 mapred-site.xml

Only the IP needs changing:

<property>
  <name>master.ip</name>
  <value>192.168.1.11</value>
</property>

The complete file (license header elided as above):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property><name>master.ip</name><value>192.168.1.11</value></property>
  <!-- The framework MapReduce jobs run on -->
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
  <!-- RPC address of the job history service -->
  <property><name>mapreduce.jobhistory.address</name><value>${master.ip}:10020</value></property>
  <!-- Web UI address of the job history service -->
  <property><name>mapreduce.jobhistory.webapp.address</name><value>${master.ip}:19888</value></property>
</configuration>

2.4 yarn-site.xml

Change the node IP:

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>192.168.1.11</value>
</property>

and set the installation path to the actual directory, e.g. /opt/bigdata/hadoop-2.10.0:

<property>
  <name>hadoop.home</name>
  <value>/opt/bigdata/hadoop-2.10.0</value>
</property>

The following five port numbers only need changing if they are already occupied by another program or service; otherwise leave them alone:

<property><name>yarn.resourcemanager.address</name><value>${yarn.resourcemanager.hostname}:8032</value></property>
<property><name>yarn.resourcemanager.scheduler.address</name><value>${yarn.resourcemanager.hostname}:8030</value></property>
<property><name>yarn.resourcemanager.resource-tracker.address</name><value>${yarn.resourcemanager.hostname}:8031</value></property>
<property><name>yarn.resourcemanager.admin.address</name><value>${yarn.resourcemanager.hostname}:8033</value></property>
<property><name>yarn.resourcemanager.webapp.address</name><value>${yarn.resourcemanager.hostname}:8088</value></property>

Task resource scheduling policy: 1) CapacityScheduler schedules by queue; 2) FairScheduler divides resources evenly. This is an important setting whose principles are well worth understanding.

<property>
  <description>The class to use as the resource scheduler.</description>
  <name>yarn.resourcemanager.scheduler.class</name>
  <!-- <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value> -->
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>

Minimum memory (MB) a single container can request:

<property>
  <description>The minimum allocation for every container request at the RM
  in MBs. Memory requests lower than this will be set to the value of this
  property. Additionally, a node manager that is configured to have less memory
  than this value will be shut down by the resource manager.</description>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>

Maximum memory (MB) a single container can request:

<property>
  <description>The maximum allocation for every container request at the RM
  in MBs. Memory requests higher than this will throw an
  InvalidResourceRequestException.</description>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>

Minimum number of virtual CPU cores a single container can request:

<property>
  <description>The minimum allocation for every container request at the RM
  in terms of virtual CPU cores. Requests lower than this will be set to the
  value of this property. Additionally, a node manager that is configured to
  have fewer virtual cores than this value will be shut down by the resource
  manager.</description>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>
</property>

Maximum number of virtual CPU cores a single container can request:

<property>
  <description>The maximum allocation for every container request at the RM
  in terms of virtual CPU cores. Requests higher than this will throw an
  InvalidResourceRequestException.</description>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>4</value>
</property>

Maximum memory available to the NodeManager node; set it according to the physical memory actually present. The maximum number of containers on the node follows from it:

max(containers) = yarn.nodemanager.resource.memory-mb / yarn.scheduler.maximum-allocation-mb

<property>
  <description>Amount of physical memory, in MB, that can be allocated
  for containers. If set to -1 and
  yarn.nodemanager.resource.detect-hardware-capabilities is true, it is
  automatically calculated (in case of Windows and Linux).
  In other cases, the default is 8192MB.</description>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>24576</value>
</property>

Number of virtual CPU cores YARN may use on the node. The default is 8, and the recommendation is to match the number of physical cores. If the node has fewer than 8 cores, this value must be lowered: YARN does not detect the physical core count on its own. On a machine with good performance it can be set to twice the number of physical cores.

<property>
  <description>Number of vcores that can be allocated
  for containers. This is used by the RM scheduler when allocating
  resources for containers. This is not used to limit the number of
  CPUs used by YARN containers. If it is set to -1 and
  yarn.nodemanager.resource.detect-hardware-capabilities is true, it is
  automatically determined from the hardware in case of Windows and Linux.
  In other cases, number of vcores is 8 by default.</description>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>30</value>
</property>

The complete file (license header elided as above; the stock descriptions for the scheduler and sizing properties are the ones quoted in the snippets just shown):

<?xml version="1.0"?>
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property><name>yarn.resourcemanager.hostname</name><value>192.168.1.11</value></property>
  <property><name>hadoop.home</name><value>/opt/bigdata/hadoop-2.10.0</value></property>
  <property><name>yarn.acl.enable</name><value>false</value></property>
  <property><name>yarn.admin.acl</name><value>*</value></property>
  <property><name>yarn.log-aggregation-enable</name><value>false</value></property>
  <property><name>yarn.resourcemanager.address</name><value>${yarn.resourcemanager.hostname}:8032</value></property>
  <property><name>yarn.resourcemanager.scheduler.address</name><value>${yarn.resourcemanager.hostname}:8030</value></property>
  <property><name>yarn.resourcemanager.resource-tracker.address</name><value>${yarn.resourcemanager.hostname}:8031</value></property>
  <property><name>yarn.resourcemanager.admin.address</name><value>${yarn.resourcemanager.hostname}:8033</value></property>
  <property><name>yarn.resourcemanager.webapp.address</name><value>${yarn.resourcemanager.hostname}:8088</value></property>
  <property>
    <description>A comma separated list of services where service name should only
    contain a-zA-Z0-9_ and can not start with numbers</description>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <description>The class to use as the resource scheduler.</description>
    <name>yarn.resourcemanager.scheduler.class</name>
    <!-- <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value> -->
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <property><name>yarn.scheduler.minimum-allocation-mb</name><value>1024</value></property>
  <property><name>yarn.scheduler.maximum-allocation-mb</name><value>8192</value></property>
  <property><name>yarn.scheduler.minimum-allocation-vcores</name><value>1</value></property>
  <property><name>yarn.scheduler.maximum-allocation-vcores</name><value>4</value></property>
  <property><name>yarn.nodemanager.resource.memory-mb</name><value>24576</value></property>
  <property><name>yarn.nodemanager.resource.cpu-vcores</name><value>30</value></property>
  <property>
    <description>Ratio between virtual memory to physical memory when
    setting memory limits for containers. Container allocations are
    expressed in terms of physical memory, and virtual memory usage
    is allowed to exceed this allocation by this ratio.</description>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
  </property>
  <property><name>hadoop.tmp.dir</name><value>${hadoop.home}/tmp</value></property>
  <property><name>yarn.log.dir</name><value>${hadoop.home}/logs/yarn</value></property>
  <property>
    <description>List of directories to store localized files in. An application's
    localized file directory will be found in:
    ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.
    Individual containers' work directories, called container_${contid}, will
    be subdirectories of this.</description>
    <name>yarn.nodemanager.local-dirs</name>
    <value>${hadoop.tmp.dir}/yarn/nm-local-dir</value>
  </property>
  <property>
    <description>Where to store container logs. An application's localized log directory
    will be found in ${yarn.nodemanager.log-dirs}/application_${appid}.
    Individual containers' log directories will be below this, in directories
    named container_{$contid}. Each container directory will contain the files
    stderr, stdin, and syslog generated by that container.</description>
    <name>yarn.nodemanager.log-dirs</name>
    <value>${yarn.log.dir}/userlogs</value>
  </property>
  <property>
    <description>Time in seconds to retain user logs. Only applicable if
    log aggregation is disabled</description>
    <name>yarn.nodemanager.log.retain-seconds</name>
    <value>10800</value>
  </property>
  <property>
    <description>Whether physical memory limits will be enforced for containers.</description>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>Whether virtual memory limits will be enforced for containers.</description>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
</configuration>

2.5 hadoop-env.sh

Adjust the paths in the following entries to the actual installation:

HOME=/opt/bigdata
export JAVA_HOME=$HOME/jdk1.8.0_144
export HADOOP_PREFIX=$HOME/hadoop-2.10.0
export HADOOP_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_LOG_DIR=${HADOOP_HOME}/logs/hdfs

The complete script (the stock ASF license header at the top is unchanged and elided):

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME. All others are
# optional. When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.
#export JAVA_HOME=${JAVA_HOME}
HOME=/opt/bigdata
export JAVA_HOME=$HOME/jdk1.8.0_144
export HADOOP_PREFIX=$HOME/hadoop-2.10.0
export HADOOP_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

# The jsvc implementation to use. Jsvc is required to run secure datanodes
# that bind to privileged ports to provide authentication of data transfer
# protocol. Jsvc is not required if SASL is configured for authentication of
# data transfer protocol using non-privileged ports.
#export JSVC_HOME=${JSVC_HOME}

HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

# Extra Java CLASSPATH elements. Automatically insert capacity-scheduler.
for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
  if [ "$HADOOP_CLASSPATH" ]; then
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
  else
    export HADOOP_CLASSPATH=$f
  fi
done

# The maximum amount of heap to use, in MB. Default is 1000.
#export HADOOP_HEAPSIZE=
#export HADOOP_NAMENODE_INIT_HEAPSIZE=""

# Enable extra debugging of Hadoop's JAAS binding, used to set up
# Kerberos security.
# export HADOOP_JAAS_DEBUG=true

# Extra Java runtime options. Empty by default.
# For Kerberos debugging, an extended option set logs more information
# export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true -Dsun.security.krb5.debug=true -Dsun.security.spnego.debug"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"

export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"

export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"

# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS"
# set heap args when HADOOP_HEAPSIZE is empty
if [ "$HADOOP_HEAPSIZE" = "" ]; then
  export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
fi
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"

# On secure datanodes, user to run the datanode as after dropping privileges.
# This **MUST** be uncommented to enable secure HDFS if using privileged ports
# to provide authentication of data transfer protocol. This **MUST NOT** be
# defined if SASL is configured for authentication of data transfer protocol
# using non-privileged ports.
export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}

# Where log files are stored. $HADOOP_HOME/logs by default.
#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER
export HADOOP_LOG_DIR=${HADOOP_HOME}/logs/hdfs

# Where log files are stored in the secure data environment.
#export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}

###
# HDFS Mover specific parameters
###
# Specify the JVM options to be used when starting the HDFS Mover.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_MOVER_OPTS=""

###
# Router-based HDFS Federation specific parameters
# Specify the JVM options to be used when starting the RBF Routers.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_DFSROUTER_OPTS=""
###

###
# Advanced Users Only!
###

# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
#       the user that will run the hadoop daemons. Otherwise there is the
#       potential for a symlink attack.
export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}

# A string representing this instance of hadoop. $USER by default.
export HADOOP_IDENT_STRING=$USER

2.6 yarn-env.sh

Adjust the paths in the following entries to the actual installation:

HOME=/opt/bigdata
export JAVA_HOME=$HOME/jdk1.8.0_144
export HADOOP_PREFIX=$HOME/hadoop-2.10.0
export HADOOP_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
HADOOP_YARN_USER=test
YARN_CONF_DIR=$HADOOP_YARN_HOME/etc/hadoop
YARN_LOG_DIR=$HADOOP_YARN_HOME/logs/yarn

The complete script (ASF license header elided as above):

HOME=/opt/bigdata
export JAVA_HOME=$HOME/jdk1.8.0_144
export HADOOP_PREFIX=$HOME/hadoop-2.10.0
export HADOOP_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

# User for YARN daemons
HADOOP_YARN_USER=test
export HADOOP_YARN_USER=${HADOOP_YARN_USER:-yarn}

# resolve links - $0 may be a softlink
YARN_CONF_DIR=$HADOOP_YARN_HOME/etc/hadoop
export YARN_CONF_DIR="${YARN_CONF_DIR:-$HADOOP_YARN_HOME/conf}"

# some Java parameters
# export JAVA_HOME=/home/y/libexec/jdk1.6.0/
if [ "$JAVA_HOME" != "" ]; then
  #echo "run java in $JAVA_HOME"
  JAVA_HOME=$JAVA_HOME
fi

if [ "$JAVA_HOME" = "" ]; then
  echo "Error: JAVA_HOME is not set."
  exit 1
fi

JAVA=$JAVA_HOME/bin/java
JAVA_HEAP_MAX=-Xmx1000m

# For setting YARN specific HEAP sizes please use this
# Parameter and set appropriately
# YARN_HEAPSIZE=1000

# check envvars which might override default args
if [ "$YARN_HEAPSIZE" != "" ]; then
  JAVA_HEAP_MAX="-Xmx""$YARN_HEAPSIZE""m"
fi

# Resource Manager specific parameters

# Specify the max Heapsize for the ResourceManager using a numerical value
# in the scale of MB. For example, to specify an jvm option of -Xmx1000m, set
# the value to 1000.
# This value will be overridden by an Xmx setting specified in either YARN_OPTS
# and/or YARN_RESOURCEMANAGER_OPTS.
# If not specified, the default value will be picked from either YARN_HEAPMAX
# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.
#export YARN_RESOURCEMANAGER_HEAPSIZE=1000

# Specify the max Heapsize for the timeline server (same form as above).
#export YARN_TIMELINESERVER_HEAPSIZE=1000

# Specify the JVM options to be used when starting the ResourceManager.
# These options will be appended to the options specified as YARN_OPTS
# and therefore may override any similar flags set in YARN_OPTS
#export YARN_RESOURCEMANAGER_OPTS=

# Node Manager specific parameters

# Specify the max Heapsize for the NodeManager (same form as above).
#export YARN_NODEMANAGER_HEAPSIZE=1000

# Specify the JVM options to be used when starting the NodeManager.
# These options will be appended to the options specified as YARN_OPTS
# and therefore may override any similar flags set in YARN_OPTS
#export YARN_NODEMANAGER_OPTS=

# so that filenames w/ spaces are handled correctly in loops below
IFS=

# default log directory & file
if [ "$YARN_LOG_DIR" = "" ]; then
  YARN_LOG_DIR="$HADOOP_YARN_HOME/logs/yarn"
fi
if [ "$YARN_LOGFILE" = "" ]; then
  YARN_LOGFILE='yarn.log'
fi

# default policy file for service-level authorization
if [ "$YARN_POLICYFILE" = "" ]; then
  YARN_POLICYFILE="hadoop-policy.xml"
fi

# restore ordinary behaviour
unset IFS

YARN_OPTS="$YARN_OPTS -Dhadoop.log.dir=$YARN_LOG_DIR"
YARN_OPTS="$YARN_OPTS -Dyarn.log.dir=$YARN_LOG_DIR"
YARN_OPTS="$YARN_OPTS -Dhadoop.log.file=$YARN_LOGFILE"
YARN_OPTS="$YARN_OPTS -Dyarn.log.file=$YARN_LOGFILE"
YARN_OPTS="$YARN_OPTS -Dyarn.home.dir=$YARN_COMMON_HOME"
YARN_OPTS="$YARN_OPTS -Dyarn.id.str=$YARN_IDENT_STRING"
YARN_OPTS="$YARN_OPTS -Dhadoop.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
YARN_OPTS="$YARN_OPTS -Dyarn.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
  YARN_OPTS="$YARN_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH"
fi
YARN_OPTS="$YARN_OPTS -Dyarn.policy.file=$YARN_POLICYFILE"

###
# Router specific parameters
###

# Specify the JVM options to be used when starting the Router.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# See ResourceManager for some examples
#
#export YARN_ROUTER_OPTS=

3. Passwordless ssh login

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys

4. Format HDFS

bin/hdfs namenode -format

5. Start/stop the NameNode daemon and DataNode daemon

To start the services, run:

sbin/start-dfs.sh

Once they are up, the management UI is available at http://192.168.1.11:50070/. To stop the services, run:

sbin/stop-dfs.sh

6. Start/stop the ResourceManager daemon and NodeManager daemon

To start the services, run:

sbin/start-yarn.sh

Once they are up, the management UI is available at http://192.168.1.11:8088/. To stop the services, run:

sbin/stop-yarn.sh
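As a quick sanity check of the yarn-site.xml sizing in section 2.4, the container-capacity rule of thumb (node memory divided by the per-container maximum, further bounded by vcores) can be evaluated directly. This is a small illustrative helper, not part of the Hadoop distribution; the function name `max_containers` is our own:

```python
def max_containers(nm_memory_mb, max_alloc_mb, nm_vcores, max_alloc_vcores):
    """Upper bound on concurrent containers one NodeManager can host,
    using the formula from section 2.4 plus the analogous vcore bound."""
    by_memory = nm_memory_mb // max_alloc_mb      # yarn.nodemanager.resource.memory-mb / yarn.scheduler.maximum-allocation-mb
    by_vcores = nm_vcores // max_alloc_vcores     # same ratio applied to virtual cores
    return min(by_memory, by_vcores)

# Values from this tutorial's yarn-site.xml:
#   yarn.nodemanager.resource.memory-mb      = 24576
#   yarn.scheduler.maximum-allocation-mb     = 8192
#   yarn.nodemanager.resource.cpu-vcores     = 30
#   yarn.scheduler.maximum-allocation-vcores = 4
print(max_containers(24576, 8192, 30, 4))  # -> 3 (memory is the binding limit)
```

With this configuration the node is memory-bound: 24576/8192 allows 3 containers while 30/4 would allow 7, so at most 3 maximum-size containers run at once.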