Setting up MySQL, Hadoop, Hive, HBase, Sqoop, and Pig on CentOS

2019-06-14
Contents:

Preparation

Installing MySQL on CentOS

Installing Hadoop on CentOS

Installing Hive on CentOS

Connecting to Hive remotely over JDBC

Integrating HBase with Hive

Installing HBase on CentOS

Installing Sqoop on CentOS

Installing Pig on CentOS

Preparation:

Configure /etc/hosts:

  127.0.0.1   localhost localhost.localdomain
  192.168.0.120 centos

Then edit /etc/profile:

  export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
  export JRE_HOME=/usr/lib/jvm/jdk1.7.0_25/jre
  export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
  export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
  export HADOOP_HOME=/usr/local/hadoop-2.2.0
  export HBASE_HOME=/usr/local/hbase
  export HIVE_HOME=/usr/local/hive
  export PATH=$PATH:$HADOOP_HOME:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin:$HIVE_HOME/bin

Other checks:

  1. Synchronize the cluster clocks: keep the time difference between nodes under 30 seconds, otherwise the HBase RegionServers will throw errors later.
  2. As the hadoop user, verify the environment variables, the hostname, the firewall state (service iptables status), and so on.
Installing MySQL on CentOS

Download from:
http://dev.mysql.com/downloads/
http://mysql.mirror.kangaroot.net/Downloads/MySQL-5.7/

Create the mysql user and group:

  # groupadd mysql
  # useradd -r -g mysql mysql

Install via yum (the yum version is fairly old; download a newer release manually if you need one):

  yum -y install mysql-server

Configure the character set and remote login:

  vim /etc/my.cnf

  [client]
  port=3306
  socket=/var/lib/mysql/mysql.sock
  default-character-set=utf8

  [mysqld]
  datadir=/var/lib/mysql
  socket=/var/lib/mysql/mysql.sock
  user=mysql
  # Disabling symbolic-links is recommended to prevent assorted security risks
  symbolic-links=0
  default-character-set=utf8
  character-set-server=utf8
  collation-server=utf8_general_ci
  init_connect='SET collation_connection=utf8_general_ci'
  init_connect='SET NAMES utf8'

  [mysqld_safe]
  log-error=/var/log/mysqld.log
  pid-file=/var/run/mysqld/mysqld.pid
  default-character-set=utf8

Enable start on boot:

  chkconfig mysqld on
  chkconfig --list mysqld
  /etc/rc.d/init.d/mysqld start

Set a password for root:

  mysql -uroot
  select user,host,password from mysql.user;
  set password for root@localhost = password('123456');
  exit;

Log in again to check that the change took effect.

Enable remote connections:

  mysql> use mysql;
  mysql> desc user;
  mysql> GRANT ALL PRIVILEGES ON *.* TO root@"%" IDENTIFIED BY "root";        -- allow root to connect remotely
  mysql> update user set Password = password('123456') where User='root';     -- set the root password
  mysql> select Host,User,Password from user where User='root';
  mysql> flush privileges;
  mysql> exit

If remote connections still fail, stop the firewall:

  /etc/rc.d/init.d/iptables stop

To install a newer version manually, see:
http://www.cnblogs.com/zhoulf/archive/2013/01/25/zhoulf.html
http://www.cnblogs.com/xiongpq/p/3384681.html

Installing Hadoop on CentOS

To access Hadoop remotely, first deal with the firewall:

Check status: /etc/init.d/iptables status
Stop it: /etc/init.d/iptables stop
Disable it permanently (then reboot):

  chkconfig --level 35 iptables off

Or open ports selectively (effective after restart, or edit /etc/sysconfig/iptables):

  /sbin/iptables -I INPUT -p tcp --dport 80 -j ACCEPT
  /sbin/iptables -I INPUT -p tcp --dport 22 -j ACCEPT

Install Hadoop:

  cd /usr/local
  tar -zxvf hadoop-2.2.0.tar.gz
  ln -s hadoop-2.2.0 hadoop

Set up passwordless SSH login:

  ssh-keygen -t rsa    # press Enter at every prompt
  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  chmod 700 ~/.ssh
  chmod 644 ~/.ssh/authorized_keys

That is all it takes; in addition, make sure /etc/hosts maps each IP to its hostname.

Edit hadoop-env.sh:

  export HADOOP_IDENT_STRING=$USER
  export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_25
  export HADOOP_HOME=/usr/local/hadoop-2.2.0
  export PATH=$PATH:/usr/local/hadoop-2.2.0/bin

hdfs-site.xml:

  <configuration>
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
    <property>
      <name>dfs.name.dir</name>
      <value>/usr/local/hadoop-2.2.0/hdfs/name</value>
    </property>
    <property>
      <name>dfs.data.dir</name>
      <value>/usr/local/hadoop-2.2.0/hdfs/data</value>
    </property>
    <property>
      <name>dfs.permissions</name>
      <value>false</value>
    </property>
    <property>
      <name>dfs.datanode.max.xcievers</name>
      <value>4096</value>
    </property>
  </configuration>

Note: dfs.replication only applies to files added after the system restarts; to change the replication factor of existing files, set it from the command line.

Remember to create the data directories:

  mkdir /usr/local/hadoop-2.2.0/hdfs/
  mkdir /usr/local/hadoop-2.2.0/hdfs/data
  mkdir /usr/local/hadoop-2.2.0/hdfs/name

core-site.xml:

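Per the note that follows, core-site.xml mainly needs fs.defaultFS. A minimal sketch, assuming the hostname centos from /etc/hosts above and the conventional port 9000:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- assumed host/port; must match /etc/hosts and the HBase rootdir later on -->
    <value>hdfs://centos:9000</value>
  </property>
</configuration>
```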

Note: fs.defaultFS specifies the HDFS address and port. HBase can then use this address directly as the root of its data files; if it is not configured, the default lives under /tmp and is gone after a reboot.
mapred-site.xml

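On Hadoop 2.2.0, mapred-site.xml usually only needs to point MapReduce at YARN; a sketch of the typical content:

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```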

yarn-site.xml:

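For yarn-site.xml, the one setting MapReduce jobs generally require is the shuffle auxiliary service; a sketch:

```xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```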

slaves:

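For a single-node setup like this one, slaves typically lists just this host; the hostname is the one assumed in /etc/hosts above:

```
centos
```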

Run start-all.sh, then check with jps:
the NameNode, DataNode, ResourceManager, NodeManager, and SecondaryNameNode processes must all be present.

Notes:
For a cluster setup, copy the directory to every datanode: scp -r…
On the first start, remember to format the filesystem on the Hadoop master: hdfs namenode -format, then verify with hdfs dfsadmin -report.

To connect MyEclipse or Eclipse to Hadoop remotely, see:
"HDP2.0.6+hadoop2.2.0+eclipse (Windows and Linux) debugging environment setup"

Installing Hive on CentOS

  cd /usr/local
  tar -zxvf hive-0.12.0.tar.gz
  ln -s hive-0.12.0 hive    # create a symlink

Edit the configuration files:

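The config files are normally created from the templates shipped with the 0.12.0 release; a plausible set of commands (template file names per that release) is:

```shell
cd /usr/local/hive/conf
cp hive-env.sh.template hive-env.sh
cp hive-default.xml.template hive-site.xml
```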

Note: the hive-default.xml.template in the official 0.12.0 release has a bug at line 2000:
    <value>auth</auth>  should be  <value>auth</value>

hive-env.sh: append at the end:

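A hive-env.sh addition consistent with the paths used elsewhere in this guide might look like:

```shell
export HADOOP_HOME=/usr/local/hadoop-2.2.0
export HIVE_CONF_DIR=/usr/local/hive/conf
```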

hive-site.xml: the following key/value pairs need to be changed; leave the rest at their defaults:

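The keys that normally change when pointing the metastore at MySQL are the JDBC connection settings. The values below (database name hive, user root, password 123456) are assumptions consistent with the rest of this guide:

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.0.120:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123456</value>
</property>
```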

Create the MySQL metastore database:
Hive needs a relational database to store its metadata; here only MySQL is covered. When creating the MySQL database, remember to set the character set to latin1, otherwise table creation will fail later.
The default metastore database is Derby; to avoid Derby's concurrency and performance problems, the metastore is usually switched to MySQL.

  mysql -uroot -p123456

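Inside the mysql client, creating the metastore database with the latin1 character set (as required above) might look like this sketch; the grant mirrors the credentials assumed in hive-site.xml:

```sql
CREATE DATABASE hive DEFAULT CHARACTER SET latin1;
GRANT ALL PRIVILEGES ON hive.* TO 'root'@'%' IDENTIFIED BY '123456';
FLUSH PRIVILEGES;
```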

If you hit the error "Specified key was too long; max key length is 767 bytes":
fix it by running, in the hive database: alter database hive character set latin1; the hive metastore database must use the latin1 character set.

Configure the HDFS directories and their permissions:
    hive.metastore.warehouse.dir: the data directory (on HDFS)
    hive.exec.scratchdir: the temporary-file directory (on HDFS)
    hive.metastore.warehouse.dir defaults to /user/hive/warehouse
    hive.exec.scratchdir defaults to /tmp/hive-${user.name}

These are all defaults in hive-site.xml and can be left alone for now. Create the corresponding directories on HDFS:

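Creating the default warehouse and scratch directories named above, with group-writable permissions, might look like:

```shell
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -mkdir -p /tmp
hdfs dfs -chmod g+w /user/hive/warehouse
hdfs dfs -chmod g+w /tmp
```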

Since MySQL is the metastore, don't forget to copy a MySQL JDBC driver jar, e.g. mysql-connector-java-3.1.12-bin.jar, into hive-0.12.0/lib as well.

Run Hive from the CLI and test it.
If you hit: Permission denied: user=root, access=EXECUTE, inode="/tmp":hadoop:supergroup:drwx-w----
fix it with: hdfs dfs -chmod 777 /tmp

Connecting to Hive remotely over JDBC

To connect to Hive via JDBC/ODBC, the metastore service must be running, so hive.metastore.uris has to be configured.
(hive.aux.jars.path lists the jars needed for HBase integration and must be added as well.)
If started without a port (hive --service metastore), the default listening port is 9083; the port configured on the client must match the port the service listens on.
Once the server side is up, clients can run Hive operations.
  hive --service metastore -p <port_num>    # a custom listening port can also be given

Start the Hive services in debug mode:
  hive --service metastore --hiveconf hive.root.logger=DEBUG,console
  hive --service hiveserver --hiveconf hive.root.logger=DEBUG,console

With the service started, jdbc:hive connections work on the default port 10000; the database part at the end of the URL must be included, otherwise Eclipse cannot connect.
Once it is up, you can connect from Eclipse using jdbc:hive, for example:

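A minimal Eclipse-side client for the hiveserver on port 10000 might look like this sketch (driver class and URL scheme per the Hive 0.12 hiveserver1 JDBC driver; the host is the one assumed earlier in this guide):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcDemo {
    public static void main(String[] args) throws Exception {
        // hiveserver1 driver shipped with Hive 0.12
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        // note the trailing database part ("/default") is required
        Connection con = DriverManager.getConnection(
                "jdbc:hive://192.168.0.120:10000/default", "", "");
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("show tables");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        con.close();
    }
}
```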

In practice this feels much like an ordinary database, apart from some differences in table-creation syntax.
Note: with hiveserver2 the startup is similar, but the connection interface differs:
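For hiveserver2, the driver class and URL scheme change; roughly:

```java
// hiveserver2 equivalents
Class.forName("org.apache.hive.jdbc.HiveDriver");
Connection con = DriverManager.getConnection(
        "jdbc:hive2://192.168.0.120:10000/default", "", "");
```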


Integrating HBase with Hive

hive.aux.jars.path lists the jars needed for HBase integration and must be set.
Copy the HBase jars into hive/lib:

  # cd /usr/lib/hbase/lib/
  # cp ./hbase-*.jar ./protobuf-*.jar ./zookeeper*.jar ./htrace-core-2.04.jar ../hive/lib

Note: if hive/lib already contains zookeeper.jar or protobuf.jar, the HBase versions take precedence; move the Hive copies to a backup directory.

Edit the Hive configuration to load the jars Hive needs to connect to HBase.
On Ambari this can be changed directly under hive -> Configs.

The content is:

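A hive.aux.jars.path entry of the kind described here would look roughly like the following; the exact jar names and versions are assumptions and must match what was copied into hive/lib:

```xml
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///usr/local/hive/lib/hive-hbase-handler-0.12.0.jar,file:///usr/local/hive/lib/protobuf-java-2.4.1.jar,file:///usr/local/hive/lib/zookeeper.jar</value>
</property>
```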

Alternatively, add the corresponding property in hive-site.xml.
Copy hbase-site.xml into hive/conf.

Test:
First start Hadoop and HBase, then start the Hive metastore:

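The startup sequence just described amounts to something like:

```shell
start-all.sh                 # Hadoop
start-hbase.sh               # HBase
hive --service metastore     # Hive metastore; leave it running
```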

The metastore process will appear to hang after starting; leave it running and open another terminal window.
If you run hive directly without the metastore service, it fails with an error.


Example:

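A typical integration example (table, column, and mapping names hypothetical) creates a Hive table backed by HBase through the HBase storage handler:

```sql
CREATE TABLE hbase_table_1(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz");
```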

Fix the permissions of the hbase directory on HDFS:

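Relaxing the permissions as described might be done with the following; the path /hbase is an assumption based on the rootdir convention used in this guide:

```shell
hdfs dfs -chmod -R 777 /hbase
```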


Installing HBase on CentOS

  tar zxvf hbase-0.92.0.tar.gz
  ln -s hbase-0.92.0 hbase

The advantage of the symlink is that an upgrade only requires recreating the link, with no configuration changes.

Edit the configuration files (pseudo-distributed mode):

hbase-env.sh:

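For a pseudo-distributed setup, hbase-env.sh usually only needs the JDK path and the ZooKeeper switch; a sketch using the paths from this guide:

```shell
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
export HBASE_MANAGES_ZK=true   # let HBase run its own ZooKeeper
```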

hbase-site.xml:

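Per the note below, hbase.rootdir must sit under the fs.defaultFS address; a pseudo-distributed sketch (host and port assumed as earlier):

```xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <!-- must match fs.defaultFS in core-site.xml, plus the new hbase directory -->
    <value>hdfs://centos:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
```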

Note: the address in hbase.rootdir must match the fs.defaultFS configured for hadoop-2.2.0, with the new hbase directory appended.

Start HBase from the command line:
  start-hbase.sh
  stop-hbase.sh
  Web UI: http://master:60010/
  jps: three new processes appear: HRegionServer, HQuorumPeer, HMaster

Note: if HBase is to manage ZooKeeper itself, set export HBASE_MANAGES_ZK=true; otherwise, download and configure ZooKeeper separately.
Test HBase:

1) Log in to the HBase shell client.
2) Create a table and insert 3 records.
3) Scan the inserted data.
4) Read a single record.
5) Disable and drop the table.
6) Exit.
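The six test steps above might run as the following shell session (table and column-family names hypothetical):

```shell
hbase shell                              # 1) log in to the client
create 't1', 'cf'                        # 2) create a table ...
put 't1', 'row1', 'cf:a', 'value1'
put 't1', 'row2', 'cf:b', 'value2'
put 't1', 'row3', 'cf:c', 'value3'       #    ... and insert 3 records
scan 't1'                                # 3) view the inserted data
get 't1', 'row1'                         # 4) read a single record
disable 't1'
drop 't1'                                # 5) disable and drop the table
exit                                     # 6) exit
```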

Excerpt (configuration changes for a fully distributed HBase)

1. Edit conf/hbase-env.sh to add JDK support:

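Adding JDK support is typically just the JAVA_HOME export, using the path from earlier in this guide:

```shell
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
```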

2. Edit conf/hbase-site.xml:

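The four properties explained below would look roughly like this; all hostnames here are hypothetical:

```xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode-host:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>master-host:60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1,zk2,zk3</value>
  </property>
</configuration>
```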


hbase.rootdir sets HBase's directory on HDFS; the hostname is that of the HDFS namenode
hbase.cluster.distributed set to true marks this as a fully distributed HBase cluster
hbase.master sets the HBase master hostname and port
hbase.zookeeper.quorum sets the ZooKeeper hosts; an odd number is recommended

3. Edit conf/hdfs-site.xml in the Hadoop directory:

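The usual HBase-motivated additions to Hadoop's hdfs-site.xml in this era are the datanode transceiver limit (as in the Hadoop section above) and append support; this is a sketch, not the post's original listing:

```xml
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
```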

4. Edit conf/regionservers:
Add all the datanodes to this file, analogous to the slaves file in Hadoop.
5. Copy hbase to every node.
Other settings:
In hbase-env.sh, to use HBase's bundled ZooKeeper, set:
export HBASE_MANAGES_ZK=true    # let HBase manage ZooKeeper

Installing Sqoop on CentOS

Very simple; it takes about five minutes.

  # cd /usr/local
  # cp /home/kang/Desktop/temp/sqoop-1.4.5.bin__hadoop-2.0.4-alpha.tar.gz ./

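The unpack-and-link steps mirror the other installs, and the MySQL connector jar must also go into Sqoop's lib; paths here are assumptions:

```shell
tar -zxvf sqoop-1.4.5.bin__hadoop-2.0.4-alpha.tar.gz
ln -s sqoop-1.4.5.bin__hadoop-2.0.4-alpha sqoop
cp mysql-connector-java-3.1.12-bin.jar /usr/local/sqoop/lib/
```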

Sqoop needs no daemon; it can be used straight away, e.g. to verify the MySQL connection:

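Verifying the MySQL link with list-databases might look like this, with the host and credentials assumed from the MySQL section above:

```shell
sqoop list-databases --connect jdbc:mysql://192.168.0.120:3306/ \
    --username root --password 123456
```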

Installing Pig on CentOS
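A Pig install on this layout would follow the same pattern as the other components; the version number is hypothetical:

```shell
cd /usr/local
tar -zxvf pig-0.12.0.tar.gz
ln -s pig-0.12.0 pig
export PIG_HOME=/usr/local/pig
export PATH=$PATH:$PIG_HOME/bin
pig -x local    # quick smoke test in local mode
```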
