The cluster can be shut down from the Cloudera Manager home page.
It can also be stopped with the following commands:
/opt/cm-5.8.0/etc/init.d/cloudera-scm-server stop
/opt/cm-5.8.0/etc/init.d/cloudera-scm-agent stop
Force-kill any remaining cluster processes:
ps -ef | grep cloudera | grep -v grep | cut -b10-15 | xargs kill -9
ps -ef | grep supervisord | grep -v grep | cut -b10-15 | xargs kill -9
1. Clean-up must be performed on every node of the cluster being uninstalled. Run on all nodes:
umount /opt/cm-5.8.0/run/cloudera-scm-agent/process
2. Delete the Cloudera Manager data, the database storage path, the Cloudera Manager lock file, the user data, and the installation files. (Paths can differ depending on how CM was installed and which services were deployed, so check carefully before deleting to avoid removing unrelated files; the paths below are the defaults.) Note: run on all nodes.
rm -rf /usr/share/cmf /var/lib/cloudera* /var/cache/yum/cloudera* /var/log/cloudera* /var/run/cloudera* /etc/alternatives/yarn*
rm -rf /var/lib/cloudera-scm-server-db /etc/alternatives/sentry* /etc/alternatives/solr*
rm -rf /tmp/.scm_prepare_node.lock /tmp/hsperfdata_hadoop /tmp/hsperfdata_hdfs /tmp/hsperfdata_mapred /tmp/hsperfdata_zookeeper /tmp/hsperfdata_cloudera-scm /tmp/hadoop-root
rm -rf /var/lib/flume-ng /var/lib/hadoop* /var/lib/hue /var/lib/navigator /var/lib/oozie /var/lib/solr /var/lib/sqoop* /var/lib/zookeeper
rm -rf /dfs /mapred /yarn /etc/alternatives/avro* /etc/alternatives/beeline*
rm -rf /etc/cloudera* /etc/alternatives/bigtop* /etc/alternatives/catalogd* /etc/alternatives/cli_* /etc/alternatives/hue* /etc/alternatives/sqoop*
rm -rf /var/cache/yum/x86_64/6/cloudera* /etc/alternatives/llama* /etc/alternatives/*spark* /etc/alternatives/pig* /etc/alternatives/mahout*
rm -rf /var/lib/hadoop-* /var/lib/impala /var/lib/solr /var/lib/zookeeper /var/lib/hue /var/lib/oozie /var/lib/pgsql /etc/alternatives/impala*
rm -rf /var/lib/sqoop2 /data/dfs/ /data/impala/ /data/yarn/ /dfs/ /impala/ /yarn/ /etc/alternatives/oozie* /etc/alternatives/mapred* /etc/alternatives/mahout*
rm -rf /var/run/hadoop-*/ /var/run/hdfs-*/ /usr/bin/hadoop* /usr/bin/zookeeper* /usr/bin/hbase* /etc/alternatives/hadoop* /etc/alternatives/hbase*
rm -rf /usr/bin/hive* /usr/bin/hdfs /usr/bin/mapred /usr/bin/yarn /usr/bin/sqoop* /usr/bin/oozie /etc/alternatives/hcat* /etc/alternatives/hdfs*
rm -rf /etc/hadoop* /etc/zookeeper* /etc/hive* /etc/hue /etc/impala /etc/sqoop* /etc/oozie /etc/hbase* /etc/hcatalog /etc/alternatives/hive*
rm -rf /var/run/zookeeper /etc/alternatives/flume* /etc/alternatives/zookeeper* /etc/alternatives/parquet* /etc/alternatives/whirr
rm -rf /tmp/scm_prepare_node* /tmp/.scm_prepare_node.lock /etc/alternatives/bigtop* /etc/alternatives/yarn*
3. Delete the distributed and extracted parcel files. Note: check for anything important before deleting.
rm -rf /tmp/A* /tmp/cmf* /tmp/J* /tmp/jffi* /tmp/q* /tmp/scm* /opt/cloudera/*
rm -rf /opt/cloudera /opt/cm-5.8.0
The uninstall is now complete.
Operating system: CentOS 7.2
JDK: 1.8.0_131 (Oracle/Sun JDK)
MySQL database
Python 2.6 or later
| Internal IP | Hostname | Role |
| --- | --- | --- |
| 172.16.105.118 | master01 | namenode |
| 172.16.105.119 | slave01 | datanode |
| 172.16.105.120 | slave02 | datanode |
| 172.16.105.121 | slave03 | datanode |
| 172.16.105.122 | slave04 | datanode |
| 172.16.105.123 | slave05 | datanode |
| 172.16.105.124 | slave06 | datanode |
| 172.16.105.125 | mysql01 | mysql |
Details omitted here.
Run the script:
sh ssh_key.sh 172.16.105.117 172.16.105.118 172.16.105.119 172.16.105.120 ...
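The ssh_key.sh script itself is omitted above. A minimal sketch of what such a key-distribution script might look like (the use of sshpass and a shared root password are assumptions, not part of the original script):
#!/bin/bash
# ssh_key.sh (hypothetical sketch): push the local root public key to every host
# passed on the command line so later steps can ssh without a password.
ROOT_PASS="123456"   # assumption: replace with the real root password
# Generate a key pair once if one does not exist yet
[ -f ~/.ssh/id_rsa.pub ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for host in "$@"; do
    # Copy the public key to each host without interactive prompts
    sshpass -p "$ROOT_PASS" ssh-copy-id -o StrictHostKeyChecking=no root@"$host"
done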
1. Create the databases
# Create the hive database
create database hive default charset latin1;
# Create the cluster monitoring (amon) database
create database amon default charset utf8;
# Create the hue database
create database hue default charset utf8;
# Create the oozie database
create database oozie default charset utf8;
2. Grant database privileges
# Grant privileges on the hive database
grant all privileges on hive.* to 'cdh'@'172.16.105.%' identified by '123456' with grant option;
# Grant privileges on the amon database
grant all privileges on amon.* to 'cdh'@'172.16.105.%' identified by '123456' with grant option;
# Grant privileges on the hue database
grant all privileges on hue.* to 'cdh'@'172.16.105.%' identified by '123456' with grant option;
# Grant privileges on the oozie database
grant all privileges on oozie.* to 'cdh'@'172.16.105.%' identified by '123456' with grant option;
# Flush privileges
flush privileges;
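A quick way to confirm the databases and grants took effect, run from any 172.16.105.x node with the mysql client installed (host, user, and password taken from the grants above):
# List the databases visible to the cdh account
mysql -h mysql01 -ucdh -p123456 -e "show databases;"
# Show the privileges granted to the cdh account
mysql -h mysql01 -ucdh -p123456 -e "show grants for current_user();"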
[root@master01 opt]# wget http://archive.cloudera.com/cm5/cm/5/cloudera-manager-centos7-cm5.8.5_x86_64.tar.gz
[root@master01 opt]# tar xf cloudera-manager-centos7-cm5.8.5_x86_64.tar.gz
After extraction the path is /opt/cm-5.8.5.
# Create the CM service user
useradd --system --home=/opt/cm-5.8.5/run/cloudera-scm-server --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm
# Create the local CM server log directory and give ownership to the cloudera-scm user
mkdir /var/log/cloudera-scm-server
chown cloudera-scm:cloudera-scm /var/log/cloudera-scm-server
Note: to remove the user later: sudo userdel cloudera-scm
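To confirm the service account was created as intended:
# Should show the cloudera-scm user with home /opt/cm-5.8.5/run/cloudera-scm-server and shell /bin/false
id cloudera-scm
grep cloudera-scm /etc/passwd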
vim /opt/cm-5.8.5/etc/cloudera-scm-agent/config.ini
Change the following setting:
# Hostname of the CM server.
server_host=master01
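A quick sanity check on each agent node after editing, to confirm the setting is in place and the server hostname resolves (it should be listed in /etc/hosts):
# Confirm the agent points at the CM server
grep '^server_host' /opt/cm-5.8.5/etc/cloudera-scm-agent/config.ini
# Confirm master01 resolves from this node
ping -c 1 master01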
# Copy mysql-connector-java.jar into the expected directory on every node (note the jar file name)
cp mysql-connector-java-5.1.36-bin.jar /usr/share/java/mysql-connector-java.jar
# Run the following script
# Note: the database account must have privileges to create and use databases, otherwise the script fails. Run on the master node.
sudo /opt/cm-5.8.5/share/cmf/schema/scm_prepare_database.sh mysql scm -h mysql01 -ucdh -p --scm-host master01 cm cm cm
Then enter the database password. The message "com.cloudera.enterprise.dbutil.DbCommandExecutor - Successfully connected to database. All done, your SCM database is configured correctly!" indicates success; if an error appears, find the cause and fix it.
Notes:
Script parameters:
-h          IP address or hostname of the MySQL server
-u          MySQL username
-p          MySQL password
--scm-host  IP address or hostname of the host running the CM service, usually the same node MySQL is installed on
URL:http://archive.cloudera.com/cdh5/parcels/5.8.5/
Download the following three files:
CDH-5.8.5-1.cdh5.8.5.p0.5-el7.parcel
CDH-5.8.5-1.cdh5.8.5.p0.5-el7.parcel.sha1
manifest.json
wget http://archive.cloudera.com/cdh5/parcels/5.8.5/CDH-5.8.5-1.cdh5.8.5.p0.5-el7.parcel
wget http://archive.cloudera.com/cdh5/parcels/5.8.5/CDH-5.8.5-1.cdh5.8.5.p0.5-el7.parcel.sha1
wget http://archive.cloudera.com/cdh5/parcels/5.8.5/manifest.json
Rename CDH-5.8.5-1.cdh5.8.5.p0.5-el7.parcel.sha1
to CDH-5.8.5-1.cdh5.8.5.p0.5-el7.parcel.sha
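As a command (filenames match the 5.8.5 parcel downloaded above):
# Rename the checksum file in the directory it was downloaded to
mv CDH-5.8.5-1.cdh5.8.5.p0.5-el7.parcel.sha1 CDH-5.8.5-1.cdh5.8.5.p0.5-el7.parcel.sha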
Create the parcel repository directory on the CM master node:
mkdir -p /opt/cloudera/parcel-repo
Upload the files downloaded above (section 1.4.1) into this directory.
Create the parcels directory on every node:
mkdir -p /opt/cloudera/parcels
Explanation: Cloudera Manager takes the CDH parcel from /opt/cloudera/parcel-repo on the master node, then distributes, extracts, and activates it into /opt/cloudera/parcels on each node.
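Before starting the server it is worth checking that the checksum in the .sha file matches the parcel, since a corrupted download will make the distribution step fail (a manual comparison; the .sha file is expected to contain just the hash):
cd /opt/cloudera/parcel-repo
# The two hashes printed below should be identical
sha1sum CDH-5.8.5-1.cdh5.8.5.p0.5-el7.parcel
cat CDH-5.8.5-1.cdh5.8.5.p0.5-el7.parcel.sha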
Start the server (run on the master node only):
sudo /opt/cm-5.8.5/etc/init.d/cloudera-scm-server start
Use stop to stop it and status to check its state.
Start the agent (run on every node):
sudo /opt/cm-5.8.5/etc/init.d/cloudera-scm-agent start
## Error
/opt/cm-5.8.5/etc/init.d/cloudera-scm-server start
/opt/cm-5.8.5/etc/init.d/cloudera-scm-server: line 109: pstree: command not found
cloudera-scm-server is already running
yum install psmisc
Installing psmisc (which provides pstree) resolves the error.
Then start the agent: sudo /opt/cm-5.8.5/etc/init.d/cloudera-scm-agent start
Use stop to stop it and status to check its state.
Configure CM to start at boot:
cp /opt/cm-5.8.5/etc/init.d/cloudera-scm-server /etc/init.d/cloudera-scm-server
vi /etc/init.d/cloudera-scm-server
Change the CMF_DEFAULTS assignment. The script derives it relative to its own location:
CMF_DEFAULTS=$(readlink -e $(dirname ${BASH_SOURCE-$0})/../default)
Replace the ${CMF_DEFAULTS:-/etc/default} expression with a hardcoded path:
CMF_DEFAULTS=/opt/cm-5.8.5/etc/default
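With the script in /etc/init.d and CMF_DEFAULTS hardcoded, registering it for boot is the usual SysV step (the chkconfig commands are an assumption, not from the original notes; CentOS 7 still honors them for init.d scripts):
# Register the copied init script and enable it at boot
chkconfig --add cloudera-scm-server
chkconfig cloudera-scm-server on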
Possible errors:
Starting cloudera-scm-agent: [FAILED]
Check the log:
cat /opt/cm-5.8.5/log/cloudera-scm-agent/cloudera-scm-agent.out
[20/Oct/2017 11:48:40 +0000] 33120 MainThread agent INFO SCM Agent Version: 5.8.5
Unable to create the pidfile.
Fix: run mkdir /opt/cm-5.8.5/run/cloudera-scm-agent. The agent does not create the run/cloudera-scm-agent directory automatically on startup, so it has to be created by hand.
error: [Errno 111] Connection refused
Fix: delete all files under /opt/cm-5.8.5/lib/cloudera-scm-agent/ and restart the agent service on all nodes (commands below). Cause: each agent keeps its uuid in that directory; clearing it lets every node generate its own fresh uuid.
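The fix as commands (paths match the cm-5.8.5 layout used here):
# On every node: remove the agent's stored state, including its cached uuid,
# then restart the agent so a fresh uuid is generated
rm -rf /opt/cm-5.8.5/lib/cloudera-scm-agent/*
/opt/cm-5.8.5/etc/init.d/cloudera-scm-agent restart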
After the steps above, open http://61.233.112.222:7180 in a browser to continue the installation. The username and password are both admin; after logging in, carry on with the cluster installation and configuration.
Log in with admin / admin.
Accept the license agreement and click Next.
Choose the edition (free or licensed) and continue.
Start the installation.
Select the currently managed hosts, check all of the machines, and proceed with the deployment.
Choose parcel installation, pick the downloaded version, and leave everything else at the defaults.
The installer distributes the parcel to the cluster; this takes several minutes, depending on the cluster's network speed.
At this step the distribution stayed stuck, so check the agent log:
[20/Nov/2017 06:16:19 +0000] 89382 MainThread agent ERROR Caught unexpected exception in main loop.
Traceback (most recent call last):
  File "/opt/cm-5.8.0/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.8.0-py2.7.egg/cmf/agent.py", line 688, in start
    self._init_after_first_heartbeat_response(heartbeat_response["data"])
  File "/opt/cm-5.8.0/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.8.0-py2.7.egg/cmf/agent.py", line 818, in _init_after_first_heartbeat_response
    self.client_configs.load()
  File "/opt/cm-5.8.0/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.8.0-py2.7.egg/cmf/client_configs.py", line 682, in load
    new_deployed.update(self._lookup_alternatives(fname))
  File "/opt/cm-5.8.0/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.8.0-py2.7.egg/cmf/client_configs.py", line 432, in _lookup_alternatives
    return self._parse_alternatives(alt_name, out)
  File "/opt/cm-5.8.0/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.8.0-py2.7.egg/cmf/client_configs.py", line 444, in _parse_alternatives
    path, _, _, priority_str = line.rstrip().split(" ")
ValueError: too many values to unpack
[20/Nov/2017 06:16:24 +0000] 89382 MainThread agent INFO Using parcels directory from server provided value: /opt/cloudera/parcels
[20/Nov/2017 06:16:24 +0000] 89382 MainThread parcel INFO Agent does create users/groups and apply file permissions
[20/Nov/2017 06:16:24 +0000] 89382 MainThread agent ERROR Failed to configure inotify. Parcel repository will not auto-refresh.
Traceback (most recent call last):
  File "/opt/cm-5.8.0/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.8.0-py2.7.egg/cmf/agent.py", line 791, in _init_after_first_heartbeat_response
    self.inotify = self.repo.configure_inotify()
  File "/opt/cm-5.8.0/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.8.0-py2.7.egg/cmf/parcel.py", line 415, in configure_inotify
    wm = pyinotify.WatchManager()
  File "/opt/cm-5.8.0/lib64/cmf/agent/build/env/lib/python2.7/site-packages/pyinotify-0.9.3-py2.7.egg/pyinotify.py", line 1706, in __init__
    raise OSError(err % self._inotify_wrapper.str_errno())
OSError: Cannot initialize new instance of inotify, Errno=Too many open files (EMFILE)
[20/Nov/2017 06:16:24 +0000] 89382 MainThread parcel_cache INFO Using /opt/cloudera/parcel-cache for parcel cache
Cause: a JDK version problem. Check whether OpenJDK is installed on the system; if it is, uninstall it, and if the Oracle/Sun JDK is missing, install it (see the commands below).
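A sketch of the check and cleanup (the OpenJDK package names vary by system, so adjust them to whatever the query returns):
# List any installed OpenJDK packages
rpm -qa | grep -i openjdk
# Remove them if present, adjusting names to the query output
yum remove -y java-1.8.0-openjdk java-1.8.0-openjdk-headless
# Confirm the Oracle/Sun JDK is now the active java
java -version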
Next comes the host inspection step.
Two warnings appear; following the inspector's suggestions, fix them as follows.
Run the following commands as root on the hosts that show the warnings:
echo 10 > /proc/sys/vm/swappiness
echo never > /sys/kernel/mm/transparent_hugepage/defrag
Apply the settings: sysctl -p
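The two echo commands above only change the running kernel. To keep the settings across reboots, swappiness can be written into /etc/sysctl.conf (which is what sysctl -p reloads) and the hugepage setting into /etc/rc.local; a sketch:
# Persist swappiness so sysctl -p and reboots pick it up
echo 'vm.swappiness = 10' >> /etc/sysctl.conf
sysctl -p
# Re-apply the transparent hugepage setting on every boot
echo 'echo never > /sys/kernel/mm/transparent_hugepage/defrag' >> /etc/rc.local
chmod +x /etc/rc.local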
Next, role selection: choose Custom and assign roles as needed.
Database configuration.
An error occurred here.
Because the OS was a minimal install, run yum install -y python-lxml on the host where Hue is installed.
Next, cluster settings: the defaults are generally fine.
Click Continue to start the cluster's first run.
After the run completes:
Note: if errors occur during the run, check the logs and resolve them one by one.
(o゜▽゜)o☆[BINGO!]
1. Stop the server and agent services on all hosts:
/opt/cm-5.8.5/etc/init.d/cloudera-scm-server stop
/opt/cm-5.8.5/etc/init.d/cloudera-scm-agent stop
2. Edit /etc/hosts and add each host's new IP address and hostname.
3. Update the corresponding IPs in the MySQL database (see the sketch after this list).
Connect to the cm database and change IP_ADDRESS in the HOSTS table to the new internal IPs.
4. Restart all services, open the UI, and redeploy the client configuration to all hosts.
5. Restart all components (services).
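A sketch of the database change in step 3, assuming the CM database, user, and password are the cm/cm/cm values passed to scm_prepare_database.sh above; verify the table and column names against your schema before running:
# Inspect the host records, then update the address that changed
mysql -h mysql01 -ucm -pcm cm -e "select * from HOSTS;"
mysql -h mysql01 -ucm -pcm cm -e "update HOSTS set IP_ADDRESS='<new internal IP>' where IP_ADDRESS='<old IP>';"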
Generating and deploying the client configuration
First failure: the client configuration (id=3) on host slave02 (id=3) exited with 1, while 0 was expected.
Fix, on all nodes: locate and edit the deploy script:
find / -type f -name "*cc.sh"
/opt/cm-5.8.5/lib64/cmf/service/client/deploy-cc.sh
vim /opt/cm-5.8.5/lib64/cmf/service/client/deploy-cc.sh
Add the JAVA_HOME setting to the script (the JDK path below is the one used on these hosts):
JAVA_HOME=/home/deploy/jdk8
export JAVA_HOME=/home/deploy/jdk8
Also create a symlink so the JDK is available at /usr/java/default:
ln -s /home/deploy/jdk8 /usr/java/default