
HBase master stops with a "Connection refused" error


This happens in both pseudo-distributed and fully distributed mode. When I try to start HBase, all three services start at first: the master, the region server, and the quorum peer. Within a minute, however, the master stops. This is the trace from the log:

2013-05-06 20:10:25,525 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: :9000. Already tried 0 time(s).
2013-05-06 20:10:26,528 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: :9000. Already tried 1 time(s).
2013-05-06 20:10:27,530 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: :9000. Already tried 2 time(s).
2013-05-06 20:10:28,533 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: :9000. Already tried 3 time(s).
2013-05-06 20:10:29,535 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: :9000. Already tried 4 time(s).
2013-05-06 20:10:30,538 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: :9000. Already tried 5 time(s).
2013-05-06 20:10:31,540 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: :9000. Already tried 6 time(s).
2013-05-06 20:10:32,543 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: :9000. Already tried 7 time(s).
2013-05-06 20:10:33,544 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: :9000. Already tried 8 time(s).
2013-05-06 20:10:34,547 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: :9000. Already tried 9 time(s).
2013-05-06 20:10:34,550 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call to :9000 failed on connection exception: java.net.ConnectException: Connection refused
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1179)
        at org.apache.hadoop.ipc.Client.call(Client.java:1155)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
        at $Proxy9.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:132)
        at org.apache.hadoop.hdfs.DFSClient.&lt;init&gt;(DFSClient.java:259)
        at org.apache.hadoop.hdfs.DFSClient.&lt;init&gt;(DFSClient.java:220)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1611)
        at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:68)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1645)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1627)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:183)
        at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:363)
        at org.apache.hadoop.hbase.master.MasterFileSystem.&lt;init&gt;(MasterFileSystem.java:86)
        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:368)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:301)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:519)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:484)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:468)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:575)
        at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:212)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1292)
        at org.apache.hadoop.ipc.Client.call(Client.java:1121)
        ... 18 more
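The fatal frame is DFSClient trying to open an RPC connection to the HDFS NameNode on port 9000, so the first thing worth confirming is whether a NameNode process is actually listening there. A minimal check sequence (a sketch, assuming a CDH3-style single node, run as the hadoop user):

jps                        # should list NameNode (plus HMaster, HRegionServer, HQuorumPeer)
netstat -tln | grep 9000   # is anything listening on the NameNode RPC port?
hadoop fs -ls /            # does a plain HDFS client call from the same box succeed?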

Steps I have taken, none of which solved the problem:

- Downgraded from distributed mode to pseudo-distributed mode. Same problem.
- Tried standalone mode. No luck.
- Used the same user (hadoop) for both Hadoop and HBase, and set up passwordless ssh for hadoop. Same problem.
- Edited the /etc/hosts file and changed localhost/servername and 127.0.0.1 to the actual IP address, following SO and other sources. Still the same problem.
- Restarted the server.

Here are the conf files.

hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://:9000/hbase</value>
    <description>The directory shared by regionservers.</description>
  </property>

  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value></value>
  </property>

  <property>
    <name>hbase.master</name>
    <value>:60000</value>
    <description>The host and port that the HBase master runs at.</description>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>The replication count for HLog and HFile storage. Should not be greater than HDFS datanode count.</description>
  </property>
</configuration>
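Note that the host:port in hbase.rootdir must be exactly the NameNode address HDFS was started with; on Hadoop 0.20.x that address is fs.default.name in core-site.xml. A minimal sketch of the matching entry (master-host is a placeholder, since the real hostname was lost from this post):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master-host:9000</value>  <!-- hypothetical hostname; must match hbase.rootdir -->
  </property>
</configuration>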

/etc/hosts file

127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

What am I doing wrong here?

Hadoop version: Hadoop 0.20.2-cdh3u5. HBase version: 0.90.6-cdh3u5.



1> Tariq..:

Looking at the configuration files, I assume you are using the actual hostname in them. If that is the case, add the hostname and the machine's IP to the /etc/hosts file. Also make sure it matches the hostname in Hadoop's core-site.xml. Proper name resolution is vital for HBase to function correctly.
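As an illustration of that advice, /etc/hosts would gain a line mapping the machine's real NIC address to the hostname used in the configs (the IP and hostname below are made up; substitute your own):

127.0.0.1      localhost.localdomain localhost
::1            localhost6.localdomain6 localhost6
192.168.1.10   master-host.example.com master-host   # hypothetical: real IP plus the configured hostname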

If you still face any issues, follow the steps mentioned here properly. I have tried to explain the procedure in detail, and hopefully, if you follow all the steps carefully, you will be able to get it running.
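Once the addresses agree everywhere, a quick way to verify the fix with the stock scripts (a sketch; log paths vary by install):

start-hbase.sh                         # or restart the master however your install manages it
tail -n 50 logs/hbase-*-master-*.log   # the "Retrying connect ... Connection refused" loop should be gone
echo status | hbase shell              # prints the live server count once the master finishes initializing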

HTH
