I recently planned to learn Hadoop and Spark, which requires installing the Java runtime environment. The system I use is CentOS.
First, install the JDK
1. Enter the /usr directory
cd /usr
2. Create the java installation directory under /usr
mkdir java
3. Copy jdk-8u45-linux-x64.rpm to the java directory
cp /root/hadoop_home/jdk-8u45-linux-x64.rpm /usr/java/
4. Install the JDK
cd /usr/java
rpm -ivh jdk-8u45-linux-x64.rpm
5. After the installation completes, create a symbolic link to shorten the long versioned path
ln -s /usr/java/jdk1.8.0_45/ /usr/jdk
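The symlink in step 5 can be tried out safely without touching /usr; a minimal sketch using a throwaway temporary directory (the directory names below only stand in for the real install locations):

```shell
#!/bin/sh
# Sketch: link a short name to a long versioned directory, as in step 5.
# The mktemp paths are illustrative stand-ins, not the real /usr locations.
set -e
base=$(mktemp -d)
mkdir -p "$base/jdk1.8.0_45"           # stand-in for /usr/java/jdk1.8.0_45
ln -s "$base/jdk1.8.0_45" "$base/jdk"  # stand-in for /usr/jdk
readlink "$base/jdk"                   # shows where the short name points
ls -d "$base/jdk/"                     # the link behaves like the real directory
rm -rf "$base"
```

Because /usr/jdk is a link, upgrading later only means re-pointing the link at a new jdk1.8.0_xx directory; nothing that references /usr/jdk has to change.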
6. Edit the configuration file
vim /etc/profile
Add the following content:
JAVA_HOME=/usr/jdk
CLASSPATH=.:$JAVA_HOME/lib/
PATH=$PATH:$JAVA_HOME/bin
export PATH JAVA_HOME CLASSPATH
7. Restart the machine, or run source /etc/profile to reload the file
sudo shutdown -r now
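The three profile lines above can be dry-run in a subshell to see exactly what they do to the environment, without rebooting or editing /etc/profile; a minimal sketch (the /usr/jdk path is the link created in step 5):

```shell
#!/bin/sh
# Sketch: apply the same assignments as the /etc/profile snippet and
# inspect the result. Run in a subshell so the real environment is untouched.
JAVA_HOME=/usr/jdk
CLASSPATH=.:$JAVA_HOME/lib/
PATH=$PATH:$JAVA_HOME/bin
export PATH JAVA_HOME CLASSPATH
echo "JAVA_HOME=$JAVA_HOME"
echo "CLASSPATH=$CLASSPATH"
# PATH now ends with the JDK's bin directory, so java/javac are found:
echo "$PATH"
```

Note that PATH is appended to, not replaced, so existing system commands keep working; CLASSPATH starts with "." so classes in the current directory are found first.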
8. Check the installation
java -version
java version "1.7.0_25"
OpenJDK Runtime Environment (rhel-2.3.10.4.el6_4-x86_64)
OpenJDK 64-Bit Server VM (build 23.7-b01, mixed mode)
Note: if the output still shows the system's OpenJDK 1.7 rather than the newly installed JDK 8u45, the PATH change has not taken effect yet; re-run source /etc/profile or log in again.
Second, output Hello World
1. Create HelloWorld.java with the following content
class Test {
    public static void main(String[] args) {
        System.out.println("test input main arguments");
        System.out.println(args[0] + " " + args[1]);
        System.out.println("end of main");
    }
}
2. Compile and run
1) javac HelloWorld.java
This generates Test.class (the .class file is named after the class, Test, not after the source file).
2) java Test Hello World (pass the class name Test to java, not the file name Test.class)
The output is:
-bash-4.1# java Test Hello World
test input main arguments
Hello World
end of main
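How many arguments the Test class receives depends on shell quoting, not on Java: the shell splits the command line before java ever sees it. Since args[1] is printed above, passing only one argument would throw an ArrayIndexOutOfBoundsException. A java-free sketch of the splitting, using a shell function (count_args is a made-up stand-in for the Test class):

```shell
#!/bin/sh
# Sketch: the shell splits arguments before any program (java included)
# sees them. count_args is a hypothetical stand-in for "java Test".
count_args() { echo "got $# arguments: $*"; }
count_args Hello World    # two arguments, as in the run above
count_args "Hello World"  # one argument: args[1] would be missing
```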