Some Linux commands

Source: Internet
Author: User
Tags: egrep

Transferred from: http://www.fx114.net/qa-81-151600.aspx

Some miscellaneous commands, recorded here because they may be useful later; anything else I come across can be appended to this list.


Find the most CPU-intensive threads in a process:

ps -Lfp pid   # list all threads in the process (-L threads, -f full format, -p by process id)
ps -mp pid -o THREAD,tid,time

top -Hp pid            # find the most CPU-intensive thread ID in the process
printf "%x\n" tid      # convert the thread ID to hexadecimal
jstack pid | grep tid  # locate the most CPU-intensive thread in the stack dump
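The hex-conversion step is the glue between `top -Hp` and `jstack`: `jstack` prints native thread IDs in hexadecimal as `nid=0x...`. A minimal sketch, with `tid=21711` as a made-up example value:

```shell
# Sketch: convert the busiest thread's decimal TID (taken from top -Hp)
# into the hex "nid" form that jstack prints. 21711 is a placeholder.
tid=21711
nid=$(printf "0x%x" "$tid")
echo "$nid"    # then search the jstack output for: nid=$nid
```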


Export a Java process's heap with jmap and analyze it with jhat:

jmap -dump:format=b,file=/tmp/dump.dat 21711
jhat -J-Xmx512m -port 9998 /tmp/dump.dat


Storm process start commands:

nohup ./storm nimbus >/dev/null 2>&1 &
nohup ./storm supervisor >/dev/null 2>&1 &
nohup ./storm ui >/dev/null 2>&1 &
nohup ./storm logviewer >/dev/null 2>&1 &


JStorm process start commands:

nohup $JSTORM_HOME/bin/jstorm nimbus >/dev/null 2>&1 &
nohup $JSTORM_HOME/bin/jstorm supervisor >/dev/null 2>&1 &


Storm kill-process commands:

kill `ps aux | egrep '(daemon\.nimbus)|(storm\.ui\.core)' | fgrep -v egrep | awk '{print $2}'`
kill `ps aux | fgrep storm | fgrep -v fgrep | awk '{print $2}'`
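On systems with procps, `pkill -f` matches a pattern against the full command line, which can replace the whole `ps | egrep | fgrep -v | awk | kill` pipeline in one step; a sketch reusing the same pattern as above:

```shell
# Sketch: kill Storm daemons by full-command-line pattern.
# -f matches against the whole argument list, like the egrep above;
# pkill returns nonzero when nothing matched.
pkill -f 'daemon\.nimbus|storm\.ui\.core' || echo "no matching processes"
```

Unlike the backtick pipeline, `pkill` never needs the `fgrep -v fgrep` self-exclusion trick, because it skips its own process automatically.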


Hive process start commands:

nohup ./hive --service hiveserver2 > hiveserver2.log 2>&1 &
nohup ./hive --service metastore > metastore.log 2>&1 &
nohup ./hive --service hwi > hwi.log 2>&1 &


Find the list of files with the specified string in the directory:

find . -type f -name "*.sh" -exec grep -nH "xxxxxx" {} \;
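With GNU grep, the same file list can be produced without `find` at all; a sketch assuming GNU grep's `-r` and `--include` options (the string `xxxxxx` is the placeholder from above):

```shell
# Sketch: list only the *.sh files under . that contain the string.
# -r recurse, -l print matching filenames only, --include limits to *.sh
grep -rl --include='*.sh' "xxxxxx" .
```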


Linux memory cleanup (drop the page cache; requires root):

sync && echo 3 > /proc/sys/vm/drop_caches


List the lines before and after each line containing the specified string in a file:

grep -n -A 10 -B 10 "xxxx" file
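When the before/after window is the same size, GNU grep's `-C` sets both at once; a quick sketch with the same placeholders:

```shell
# Sketch: -C 10 is shorthand for -A 10 -B 10 (10 lines of context on
# each side of every match). With -n, context lines use a '-' separator
# instead of ':' in the output.
grep -n -C 10 "xxxx" file
```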


tcpdump packet-capture examples:

tcpdump -i eth1 -XvvS -s 0 tcp port 10020
tcpdump -s 0 -nn -vvv -i eth1 port 10020


Spark job submit example:

./spark-submit --deploy-mode cluster --master spark://10.49.133.77:6066 \
  --jars hdfs://10.49.133.77:9000/spark/guava-14.0.1.jar \
  --class spark.itil.video.ItilData \
  --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:-UseGCOverheadLimit" \
  hdfs://10.49.133.77:9000/spark/sparktest2-0.0.1-jar-with-dependencies.jar


Start a Spark worker instance:

./spark-daemon.sh start org.apache.spark.deploy.worker.Worker 1 --webui-port 8081 --port 8092 spark://100.65.32.215:8070,100.65.32.212:8070


Examples of Spark SQL operations:

export SPARK_CLASSPATH=$SPARK_CLASSPATH:/data/webitil/hive/lib/mysql-connector-java-5.0.8-bin.jar
./spark-sql --master spark://10.49.133.77:8070
./spark-sql --master spark://10.49.133.77:8070 --jars /data/webitil/hive/lib/mysql-connector-java-5.0.8-bin.jar
./spark-shell --jars /data/webitil/hive/lib/mysql-connector-java-5.0.8-bin.jar
./spark-shell --packages com.databricks:spark-csv_2.11:1.4.0
ADD_JARS=../elasticsearch-hadoop-2.1.0.beta1/dist/elasticsearch-spark_2.10-2.1.0.beta1.jar ./bin/spark-shell

./spark-shell
import org.apache.spark.sql.SQLContext
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
val url = "jdbc:mysql://10.198.30.118:3311/logplatform"
val table = "(select * from t_log_stat limit 5) as tb1"
val reader = sqlContext.read.format("jdbc")
reader.option("url", url)
reader.option("dbtable", table)
reader.option("driver", "com.mysql.jdbc.Driver")
reader.option("user", "logplat_w")
reader.option("password", "rm5bey6x")
val df = reader.load()
df.show()



Maven: install your own jar into the local repository:

mvn install:install-file -DgroupId=com.tencent.omg.itil.net -DartifactId=ipservicejni -Dversion=1.0 -Dpackaging=jar -Dfile=D:\storm\ipservicejni-1.0.jar
