Installing the Java Runtime Environment
1. Test machine information:
# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
# uname -r
3.10.0-327.el7.x86_64
2. Configure the EPEL repository and install OpenJDK with yum
yum search java | grep -i jdk
yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel
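This assumes the EPEL repository is already configured; if it is not, it can usually be enabled first with a single package (a minimal sketch; on CentOS 7 the epel-release package ships in the base Extras repository):
yum install -y epel-release    # registers the EPEL repo files under /etc/yum.repos.d/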
3. Set the JAVA_HOME environment variable
# cat /etc/profile.d/java_home.sh
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64
export PATH=$PATH:$JAVA_HOME/bin
Make the configuration take effect:
source /etc/profile.d/java_home.sh    # or equivalently: . /etc/profile.d/java_home.sh
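Note that the JAVA_HOME path above is pinned to one specific OpenJDK build, so it silently breaks whenever yum updates the package. A hedged alternative sketch derives the path from the javac binary on the PATH instead (assumes java-1.8.0-openjdk-devel is installed):
export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which javac))))    # resolve the /etc/alternatives symlink chain
export PATH=$PATH:$JAVA_HOME/bin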
4. Verify that the Java installation and configuration succeeded
# java -version
openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
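Since the -devel package was installed as well, the compiler is worth checking in the same way; its version should match the runtime:
# javac -version    # should report javac 1.8.0_161 for this build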
5. Write a small Java program, compile it, and print Hello World
# cat HelloWorld.java
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("hello world!");
    }
}
# javac HelloWorld.java    # compiling produces a HelloWorld.class file
# java HelloWorld          # run it
hello world!
- How do I run Java applications packaged as .jar or .war files? An executable jar is launched directly (a plain .war is normally deployed to a servlet container such as Tomcat instead):
java -jar /path/to/app.jar [arg1] [arg2]
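As a hedged sketch of where such a runnable jar comes from, the HelloWorld class from step 5 can be packaged with a Main-Class entry in its manifest (hello.jar is a hypothetical name):
# jar cfe hello.jar HelloWorld HelloWorld.class    # c=create, f=jar file name, e=entry-point class
# java -jar hello.jar
hello world!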
#############################################################################
Next, a first look at Hadoop. Official website: http://hadoop.apache.org/
What is Apache Hadoop?
The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing.
The Apache Hadoop software library is a framework that allows large data sets to be processed across clusters of computers using simple programming models.
It is designed to scale from a single server up to thousands of machines, each offering local computation and storage.
Rather than relying on hardware to provide high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may fail.
Running Hadoop in standalone mode
Download the binary package, extract it to the /usr/local directory, create a soft link named hadoop in the same directory, configure the PATH variable, and make it take effect:
# cat /etc/profile.d/hadoop.sh
export PATH=$PATH:/usr/local/hadoop/bin:/usr/local/hadoop/sbin
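The preceding steps as a minimal sketch (the archive.apache.org URL and the 3.1.0 version are assumptions; pick a current release and mirror from the Hadoop download page):
# wget https://archive.apache.org/dist/hadoop/common/hadoop-3.1.0/hadoop-3.1.0.tar.gz
# tar -xzf hadoop-3.1.0.tar.gz -C /usr/local          # unpack the binary package
# ln -s /usr/local/hadoop-3.1.0 /usr/local/hadoop     # soft link, so the PATH entry survives upgrades
# source /etc/profile.d/hadoop.sh                     # make the new PATH take effect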
# hadoop
Usage: hadoop [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
 or    hadoop [OPTIONS] CLASSNAME [CLASSNAME OPTIONS]
  where CLASSNAME is a user-provided Java class

  OPTIONS is none or any of:

buildpaths                       attempt to add class files from build tree
--config dir                     Hadoop config directory
--debug                          turn on shell script debug mode
--help                           usage information
hostnames list[,of,host,names]   hosts to use in slave mode
hosts filename                   list of hosts to use in slave mode
loglevel level                   set the log4j level for this command
workers                          turn on worker mode

  SUBCOMMAND is one of:

    Admin Commands:

daemonlog     get/set the log level for each daemon

    Client Commands:

archive       create a Hadoop archive
checknative   check native Hadoop and compression libraries availability
classpath     prints the class path needed to get the Hadoop jar and the required libraries
conftest      validate configuration XML files
credential    interact with credential providers
distch        distributed metadata changer
distcp        copy file or directories recursively
dtutil        operations related to delegation tokens
envvars       display computed Hadoop environment variables
fs            run a generic filesystem user client
gridmix       submit a mix of synthetic job, modeling a profiled from production load
jar <jar>     run a jar file. NOTE: please use "yarn jar" to launch YARN applications, not this command.
jnipath       prints the java.library.path
kdiag         diagnose Kerberos problems
kerbname      show auth_to_local principal conversion
key           manage keys via the KeyProvider
rumenfolder   scale a rumen input trace
rumentrace    convert logs into a rumen trace
s3guard       manage metadata on S3
trace         view and modify Hadoop tracing settings
version       print the version

    Daemon Commands:

kms           run KMS, the Key Management Server

SUBCOMMAND may print help when invoked w/o parameters or with -h.
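A quick way to confirm the installation is the version subcommand listed above:
# hadoop version    # the first line names the installed release, e.g. Hadoop 3.1.0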
By default Hadoop is configured to run in non-distributed mode, as a single Java process, which is convenient for debugging. You can run the bundled grep example to get a feel for how Hadoop operates: it takes the files in the input folder as input, counts the occurrences of words matching the regular expression wo[a-z.]+, and writes the result to the output folder.
If you need to run the example again, delete the output folder first (Hadoop does not overwrite existing result files by default):
# cd /usr/local/hadoop/
# mkdir input
# cp etc/hadoop/*.xml input
# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar grep input output 'wo[a-z.]+'
# cat output/*
1       work
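As noted above, the output directory must be removed before the next run; in standalone mode it is just a local directory:
# rm -rf output    # Hadoop refuses to start a job whose output directory already exists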
# hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar grep /etc/passwd output 'root'
# cat output/*
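The MapReduce result is easy to cross-check with plain grep (-o prints each match on its own line):
# grep -o 'root' /etc/passwd | wc -l    # should match the count the job wrote under output/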