This article explains the installation steps for Storm 0.9.4. The walkthrough below is kept simple and easy to follow; work through it step by step.
Environment: three virtual machines running CentOS 6.5
1. Disable the firewall and configure /etc/hosts, adding hostname-to-IP mappings for all nodes in the cluster (firewall commands are sketched after the hosts listing below)
[grid@hadoop4 ~]$ cat /etc/hosts
127.0.0.1 localhost
::1 localhost
192.168.0.106 hadoop4
192.168.0.107 hadoop5
192.168.0.108 hadoop6
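The original listing does not show the firewall commands; the following is a minimal sketch for CentOS 6.x, run as root on every node. If the machines are reachable from untrusted networks, open only the required ports instead of disabling the firewall outright.
[root@hadoop4 ~]# service iptables stop      ## stop the firewall immediately
[root@hadoop4 ~]# chkconfig iptables off     ## keep it disabled across reboots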
2. Install Java (JDK 6 or later) and set the JAVA_HOME and CLASSPATH environment variables
[grid@hadoop4 ~]$ cat .bash_profile
JAVA_HOME=/usr/java/jdk1.7.0_72
JRE_HOME=$JAVA_HOME/jre
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export JAVA_HOME JRE_HOME PATH CLASSPATH
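After editing .bash_profile, reload it and confirm that the intended JDK is picked up (the version reported depends on the JDK you actually installed):
[grid@hadoop4 ~]$ source .bash_profile
[grid@hadoop4 ~]$ java -version      ## should report the JDK installed under /usr/java/jdk1.7.0_72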
3. Install Python
First check the Python version that ships with the system; if it is 2.6.6 or later, nothing needs to be installed.
[grid@hadoop4 ~]$ python
Python 2.6.6 (r266:84292, Jan 22 2014, 09:42:36)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
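For a quick non-interactive check, the version can also be printed directly:
[grid@hadoop4 ~]$ python -V
Python 2.6.6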
4. Set up the ZooKeeper cluster
## Download and extract ##
[grid@hadoop4 ~]$ wget http://mirrors.cnnic.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
[grid@hadoop4 ~]$ tar -zxf zookeeper-3.4.6.tar.gz
## Edit the configuration file ##
[grid@hadoop4 ~]$ cd zookeeper-3.4.6/conf/
[grid@hadoop4 conf]$ cp -p zoo_sample.cfg zoo.cfg
[grid@hadoop4 conf]$ vi zoo.cfg
# The number of milliseconds of each tick
tickTime=2000 ## heartbeat interval, in milliseconds
# The number of ticks that the initial
# synchronization phase can take
initLimit=10 ## time (in ticks) allowed for followers to connect and sync with the leader during the initial synchronization / leader-election phase
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5 ## maximum heartbeat lag tolerated between leader and follower; if a follower does not respond within syncLimit*tickTime, the leader considers it dead and removes it from the server list
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/grid/zookeeper-3.4.6/data ## data directory; must be created manually
# dataLogDir= ## transaction log directory; if not set, the same location as dataDir is used
# the port at which the clients will connect
clientPort=2181 ## port on which client connections are accepted
## server.id=host:port:port, where id is a number identifying the server (the same number is written to that server's myid file); host is the ZooKeeper server's IP or hostname; the first port is used for leader-follower communication, the second for leader election
server.1=hadoop4:2888:3888
server.2=hadoop5:2888:3888
server.3=hadoop6:2888:3888
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
## Create the data directory manually ##
[grid@hadoop4 conf]$ cd /home/grid/zookeeper-3.4.6
[grid@hadoop4 zookeeper-3.4.6]$ mkdir data
## Distribute ZooKeeper to the other nodes ##
[grid@hadoop4 zookeeper-3.4.6]$ cd ..
[grid@hadoop4 ~]$ scp -rp zookeeper-3.4.6 grid@hadoop5:/home/grid/
[grid@hadoop4 ~]$ scp -rp zookeeper-3.4.6 grid@hadoop6:/home/grid/
## Create a myid file in each node's data directory, containing the id that identifies that host ##
[grid@hadoop4 ~]$ echo "1" > zookeeper-3.4.6/data/myid
[grid@hadoop5 ~]$ echo "2" > zookeeper-3.4.6/data/myid
[grid@hadoop6 ~]$ echo "3" > zookeeper-3.4.6/data/myid
## Start ZooKeeper ##
[grid@hadoop4 ~]$ zookeeper-3.4.6/bin/zkServer.sh start
[grid@hadoop5 ~]$ zookeeper-3.4.6/bin/zkServer.sh start
[grid@hadoop6 ~]$ zookeeper-3.4.6/bin/zkServer.sh start
## Check ZooKeeper status ##
[grid@hadoop4 ~]$ zookeeper-3.4.6/bin/zkServer.sh status
JMX enabled by default
Using config: /home/grid/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
[grid@hadoop5 ~]$ zookeeper-3.4.6/bin/zkServer.sh status
JMX enabled by default
Using config: /home/grid/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader
[grid@hadoop6 ~]$ zookeeper-3.4.6/bin/zkServer.sh status
JMX enabled by default
Using config: /home/grid/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
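As an optional sanity check (not part of the original steps), connect to any member of the ensemble with the bundled CLI client; connection log output is omitted below, and a fresh ensemble should contain only the /zookeeper node.
[grid@hadoop4 ~]$ zookeeper-3.4.6/bin/zkCli.sh -server hadoop5:2181
...
[zk: hadoop5:2181(CONNECTED) 0] ls /
[zookeeper]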
5. Install Storm
## Download and extract ##
[grid@hadoop4 ~]$ wget http://mirrors.cnnic.cn/apache/storm/apache-storm-0.9.4/apache-storm-0.9.4.tar.gz
[grid@hadoop4 ~]$ tar -zxf apache-storm-0.9.4.tar.gz
[grid@hadoop4 ~]$ mv apache-storm-0.9.4 storm-0.9.4
## Edit the configuration ##
[grid@hadoop4 ~]$ cd storm-0.9.4/conf/
[grid@hadoop4 conf]$ vim storm.yaml
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
########### These MUST be filled in for a storm configuration
## ZooKeeper ensemble used by the cluster
storm.zookeeper.servers:
- "hadoop4"
- "hadoop5"
- "hadoop6"
storm.zookeeper.port: 2181
#
## Host running the cluster's Nimbus daemon
nimbus.host: "hadoop4"
## Local disk directory used by the Nimbus and Supervisor processes to store small amounts of state such as jars and confs; create the directory in advance and give it sufficient access permissions
storm.local.dir: "/home/grid/storm-0.9.4/data"
## For each Supervisor worker node, configure how many workers that node may run. Each worker uses a dedicated port to receive messages, and this option lists the ports available to workers. By default each node can run 4 workers, on ports 6700, 6701, 6702 and 6703.
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
#
#
# ##### These may optionally be filled in:
#
## List of custom serializations
# topology.kryo.register:
# - org.mycompany.MyType
# - org.mycompany.MyType2: org.mycompany.MyType2Serializer
#
## List of custom kryo decorators
# topology.kryo.decorators:
# - org.mycompany.MyDecorator
#
## Locations of the drpc servers
# drpc.servers:
# - "server1"
# - "server2"
## Metrics Consumers
# topology.metrics.consumer.register:
# - class: "backtype.storm.metric.LoggingMetricsConsumer"
# parallelism.hint: 1
# - class: "org.mycompany.MyMetricsConsumer"
# parallelism.hint: 1
# argument:
# - endpoint: "metrics-collector.mycompany.org"
## Other configuration options are documented at: https://github.com/nathanmarz/storm/blob/master/conf/defaults.yaml ##
## Create the data directory ##
[grid@hadoop4 conf]$ cd /home/grid/storm-0.9.4/
[grid@hadoop4 storm-0.9.4]$ mkdir data
## Distribute Storm to the other nodes ##
[grid@hadoop4 storm-0.9.4]$ cd ..
[grid@hadoop4 ~]$ scp -rp storm-0.9.4/ grid@hadoop5:/home/grid/
[grid@hadoop4 ~]$ scp -rp storm-0.9.4/ grid@hadoop6:/home/grid/
## Set environment variables ##
[grid@hadoop4 ~]$ vim .bash_profile
export STORM_HOME=/home/grid/storm-0.9.4
export PATH=$PATH:$STORM_HOME/bin
[grid@hadoop4 ~]$ source .bash_profile
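The supervisor nodes also need STORM_HOME and PATH set, since the next step runs the storm command on hadoop5 and hadoop6. One option (an assumption, the original only edits hadoop4, and copying overwrites the remote profiles) is to push the profile out; otherwise add the same two export lines to each node's .bash_profile by hand.
[grid@hadoop4 ~]$ scp -p .bash_profile grid@hadoop5:/home/grid/   ## overwrites the existing profile on hadoop5
[grid@hadoop4 ~]$ scp -p .bash_profile grid@hadoop6:/home/grid/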
## Start Storm (make sure ZooKeeper is already running) ##
[grid@hadoop4 ~]$ storm nimbus & ## run the Nimbus daemon on the master node
[grid@hadoop5 ~]$ storm supervisor & ## run the Supervisor daemon on each worker node
[grid@hadoop6 ~]$ storm supervisor &
[grid@hadoop4 ~]$ storm ui & ## run the Storm UI on the master node; once started, open http://<master-ip>:8080 in a browser (port 8080 by default)
[grid@hadoop4 ~]$ storm logviewer & ## run the LogViewer daemon; to view a worker's log from the UI by clicking that worker, the logviewer must also be running on the node that hosts the worker (normally the Supervisor nodes)
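Daemons started with a plain & will exit when the login shell closes. If you are not using a process supervisor, a simple alternative (not part of the original article) is to start them with nohup, for example:
[grid@hadoop4 ~]$ nohup storm nimbus > nimbus.out 2>&1 &        ## survives logout; output goes to nimbus.out
[grid@hadoop5 ~]$ nohup storm supervisor > supervisor.out 2>&1 &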
[grid@hadoop4 ~]$ jps
2959 QuorumPeerMain
3310 logviewer
3414 Jps
3228 nimbus
3289 core
[grid@hadoop5 ~]$ jps
2907 QuorumPeerMain
3215 Jps
3154 supervisor
[grid@hadoop6 ~]$ jps
3248 Jps
2935 QuorumPeerMain
3186 supervisor
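To confirm that the client can reach Nimbus, list the running topologies; on a fresh cluster the list is empty. A topology is then submitted with storm jar. The jar path and class name below are placeholders for illustration, not files shipped with this installation.
[grid@hadoop4 ~]$ storm list      ## lists running topologies (none yet on a fresh cluster)
[grid@hadoop4 ~]$ storm jar /path/to/mytopology.jar com.example.MyTopology myTopologyName   ## placeholder jar and main class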
That covers the installation steps for Storm 0.9.4. How the cluster behaves in your own environment still needs to be verified in practice.