A summary of common port usage configurations for BI eco-circle

Source: Internet
Author: User
Tags: stack trace

The components of a Hadoop cluster typically use many ports: some for communication between daemons, others for RPC and HTTP access. As the number of peripheral components in the Hadoop ecosystem grows, it becomes impossible to remember which port belongs to which application, so it is worth collecting them in one place for easy lookup. The components covered here are HDFS, YARN, HBase, Hive, and ZooKeeper:
Component | Node | Default port | Configuration | Description
HDFS | DataNode | 50010 | dfs.datanode.address | DataNode service port, used for data transfer
HDFS | DataNode | 50075 | dfs.datanode.http.address | HTTP service port
HDFS | DataNode | 50475 | dfs.datanode.https.address | HTTPS service port
HDFS | DataNode | 50020 | dfs.datanode.ipc.address | IPC service port
HDFS | NameNode | 50070 | dfs.namenode.http-address | HTTP service port
HDFS | NameNode | 50470 | dfs.namenode.https-address | HTTPS service port
HDFS | NameNode | 8020 | fs.defaultFS | RPC port that accepts client connections for file system metadata operations
HDFS | JournalNode | 8485 | dfs.journalnode.rpc-address | RPC service
HDFS | JournalNode | 8480 | dfs.journalnode.http-address | HTTP service
HDFS | ZKFC | 8019 | dfs.ha.zkfc.port | ZooKeeper FailoverController, used for NameNode HA
YARN | ResourceManager | 8032 | yarn.resourcemanager.address | RM ApplicationsManager (ASM) port
YARN | ResourceManager | 8030 | yarn.resourcemanager.scheduler.address | IPC port of the scheduler component
YARN | ResourceManager | 8031 | yarn.resourcemanager.resource-tracker.address | IPC
YARN | ResourceManager | 8033 | yarn.resourcemanager.admin.address | IPC
YARN | ResourceManager | 8088 | yarn.resourcemanager.webapp.address | HTTP service port
YARN | NodeManager | 8040 | yarn.nodemanager.localizer.address | Localizer IPC
YARN | NodeManager | 8042 | yarn.nodemanager.webapp.address | HTTP service port
YARN | NodeManager | 8041 | yarn.nodemanager.address | Port of the container manager in the NodeManager
YARN | JobHistory Server | 10020 | mapreduce.jobhistory.address | IPC
YARN | JobHistory Server | 19888 | mapreduce.jobhistory.webapp.address | HTTP service port
HBase | Master | 60000 | hbase.master.port | IPC
HBase | Master | 60010 | hbase.master.info.port | HTTP service port
HBase | RegionServer | 60020 | hbase.regionserver.port | IPC
HBase | RegionServer | 60030 | hbase.regionserver.info.port | HTTP service port
HBase | HQuorumPeer | 2181 | hbase.zookeeper.property.clientPort | HBase-managed ZooKeeper mode; the port is not used when a standalone ZooKeeper cluster is deployed
HBase | HQuorumPeer | 2888 | hbase.zookeeper.peerport | HBase-managed ZooKeeper mode; the port is not used when a standalone ZooKeeper cluster is deployed
HBase | HQuorumPeer | 3888 | hbase.zookeeper.leaderport | HBase-managed ZooKeeper mode; the port is not used when a standalone ZooKeeper cluster is deployed
Hive | Metastore | 9083 | /etc/default/hive-metastore | export PORT=<port> to change the default port
Hive | HiveServer2 | 10000 | /etc/hive/conf/hive-env.sh | export HIVE_SERVER2_THRIFT_PORT=<port> to change the default port
ZooKeeper | Server | 2181 | clientPort in /etc/zookeeper/conf/zoo.cfg | Port that serves clients
ZooKeeper | Server | 2888 | the first port in server.x=[hostname]:nnnnn[:nnnnn] in /etc/zookeeper/conf/zoo.cfg | Used by followers to connect to the leader; only the leader listens on this port
ZooKeeper | Server | 3888 | the second port in server.x=[hostname]:nnnnn[:nnnnn] in /etc/zookeeper/conf/zoo.cfg | Used for leader election; required only when electionAlg is 3 (the default)
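With the defaults collected in one table, a quick connectivity check against a node is straightforward. The sketch below probes a host for a small, illustrative subset of the ports above; the host name and the subset chosen are assumptions, not part of the original summary.

```python
# Minimal sketch: probe a host for some of the default Hadoop ecosystem ports.
# The port subset and target host are illustrative assumptions.
import socket

DEFAULT_PORTS = {
    50070: "NameNode HTTP (dfs.namenode.http-address)",
    8020:  "NameNode RPC (fs.defaultFS)",
    50075: "DataNode HTTP (dfs.datanode.http.address)",
    8088:  "ResourceManager web UI (yarn.resourcemanager.webapp.address)",
    2181:  "ZooKeeper client port (clientPort)",
}

def probe(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host):
    """Return a {port: is_open} map for every default port we track."""
    return {port: probe(host, port) for port in DEFAULT_PORTS}

if __name__ == "__main__":
    for port, is_open in scan("localhost").items():
        state = "open" if is_open else "closed"
        print(f"{port:5d}  {state:6s}  {DEFAULT_PORTS[port]}")
```

Remember that these are only the defaults: if a cluster overrides any of the configuration keys in the table, the probe list must be adjusted to match.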

All of these ports use TCP. Every Hadoop daemon that exposes a Web UI (HTTP service) also serves the following URLs:

/logs — list of log files for download and viewing
/loglevel — set the log4j logging level, similar to the hadoop daemonlog command
/stacks — stack traces of all threads, helpful for debugging
/jmx — server-side metrics, output in JSON format. /jmx?qry=Hadoop:* returns all Hadoop-related metrics, and /jmx?get=MXBeanName::AttributeName queries the value of a specific bean attribute; for example, /jmx?get=Hadoop:service=NameNode,name=NameNodeInfo::ClusterId returns the ClusterId. Requests to this path are handled by org.apache.hadoop.jmx.JMXJsonServlet.

In addition, each daemon has its own URL paths with daemon-specific information.

NameNode (http://<namenode>:50070/):
/dfshealth.jsp — HDFS information page, with links to browse the file system
/dfsnodelist.jsp?whatNodes=(DEAD|LIVE) — show DataNodes in the dead or live state
/fsck — run the fsck command; not recommended while the cluster is busy

DataNode (http://<datanode>:50075/):
/blockScannerReport — block validation information that each DataNode verifies at a configured interval
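The /jmx endpoint described above is easy to consume programmatically. The sketch below fetches it with the standard library and pulls one attribute out of the NameNodeInfo bean; the base URL is an assumption, and any daemon with a Web UI serves the same endpoint.

```python
# Minimal sketch: query a Hadoop daemon's /jmx endpoint and read one attribute.
# The target URL is an assumption; substitute your own daemon's host and port.
import json
from urllib.request import urlopen

def jmx_query(base_url, qry="Hadoop:*"):
    """Fetch /jmx?qry=... from a daemon and return the parsed 'beans' list."""
    with urlopen(f"{base_url}/jmx?qry={qry}") as resp:
        return json.load(resp)["beans"]

def cluster_id(namenode_url):
    """Read ClusterId from the NameNodeInfo bean, as /jmx?get=... would."""
    beans = jmx_query(namenode_url, "Hadoop:service=NameNode,name=NameNodeInfo")
    return beans[0]["ClusterId"] if beans else None

if __name__ == "__main__":
    # e.g. cluster_id("http://<namenode>:50070")
    pass
```

The same pattern works for any bean the daemon exports, which makes /jmx a convenient base for lightweight monitoring without a full metrics stack.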

