Cloudera VM 5.4.2 How to start Hadoop services
1. Install location: the Hadoop stack components (hadoop, spark, hbase, hive, impala, mahout) live under /usr/lib.
2. Boot sequence: init starts automatically as the first process, reads /etc/inittab, and enters runlevel 5.
Boot step 6 -- the init process executes rc.sysinit
Once the runlevel is set, Linux runs the first user-level file, the /etc/rc.d/rc.sysinit script. It does a lot of work, including setting the PATH, applying the network configuration (/etc/sysconfig/network), activating swap partitions, mounting /proc, and so on.
Boot step 7 -- load kernel modules
Specifically, kernel modules are loaded according to /etc/modules.conf or the files in the /etc/modules.d directory.
Boot step 8 -- execute the scripts for the current runlevel
Depending on the runlevel, the system runs the matching scripts from /etc/rc.d/rc0.d through rc6.d to perform the appropriate initialization and start the appropriate services.
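The per-runlevel processing can be sketched as follows. This is a simplified stand-in for what /etc/rc.d/rc does (the real script also tracks lock files and previous runlevels), run here against a throwaway mock directory so it is safe to execute anywhere; the link names are illustrative, not taken from the actual VM.

```shell
#!/bin/sh
# Simplified sketch of /etc/rc.d/rc for one runlevel directory:
# run every K* link with "stop", then every S* link with "start".
run_rc_dir() {
  dir=$1
  # The two-digit number after K/S fixes the order within each group.
  for f in "$dir"/K*; do
    if [ -x "$f" ]; then "$f" stop; fi
  done
  for f in "$dir"/S*; do
    if [ -x "$f" ]; then "$f" start; fi
  done
}

# Demo on a mock rc5.d with fake service links.
RCDIR=$(mktemp -d)
for name in K15httpd S10network S90hadoop-hdfs-namenode; do
  printf '#!/bin/sh\necho "%s $1"\n' "$name" > "$RCDIR/$name"
  chmod +x "$RCDIR/$name"
done
run_rc_dir "$RCDIR"   # K15httpd stop, then S10network start, then S90... start
rm -rf "$RCDIR"
```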
Boot step 9 -- execute /etc/rc.d/rc.local
rc.local is where Linux leaves room for user customization after all other initialization is done. You can put anything you want set up or started at boot here.
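As a concrete illustration, a hypothetical rc.local might look like the fragment below. The limit value and log path are made-up examples, not part of the Cloudera VM image.

```shell
#!/bin/sh
# /etc/rc.d/rc.local -- executed once, after all other init scripts.
# Hypothetical example entries; adjust or remove for a real system.
ulimit -n 65536                                   # raise the open-file limit
echo "booted at $(date)" >> /var/log/boot-times.log
```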
3. rc5.d start the Cloudera services
All of the Hadoop and Spark services live here and are started at boot. Each link's name begins with S or K: an S link starts the service when the runlevel is entered, and a K link stops (kills) it. To make a service start at boot, change the leading K of its link to S; conversely, changing S back to K prevents the service from starting at boot.

Reference:
1. Linux boot process in detail, http://blog.chinaunix.net/uid-26495963-id-3066282.html
2. [Original] Linux system start-up process analysis, http://blog.chinaunix.net/uid-23069658-id-3142047.html
3. Linux startup process, http://www.ruanyifeng.com/blog/2013/08/linux_boot_process.html
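The K-to-S rename described above can be sketched as a small helper. It is demonstrated on a throwaway mock directory; on the real VM the directory would be /etc/rc.d/rc5.d, and a tool like chkconfig is the safer way to toggle services.

```shell
#!/bin/sh
# Sketch: enable boot-time start for a service by renaming its K?? link
# to S?? in an rcN.d directory, keeping the two-digit priority.
enable_boot_start() {   # $1 = rcN.d directory, $2 = service name
  for k in "$1"/K[0-9][0-9]"$2"; do
    [ -e "$k" ] || continue
    mv "$k" "$1/S${k##*/K}"     # K15name -> S15name
  done
}

# Demo on a mock rc5.d directory.
RC5=$(mktemp -d)                            # stand-in for /etc/rc.d/rc5.d
touch "$RC5/K15hadoop-hdfs-namenode"        # service currently disabled
enable_boot_start "$RC5" hadoop-hdfs-namenode
ls "$RC5"                                   # S15hadoop-hdfs-namenode
rm -rf "$RC5"
```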