Count the number of occurrences of each word in the README.md file in the Spark directory. First, the complete code, so everyone can get an overall picture:

val textFile = sc.textFile("file:/data/install/spark-2.0.0-bin-hadoop2.7/README.md")
val wordCounts = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
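To inspect the result, you can sort the pairs by frequency and print the most common words. This is a minimal sketch under the same assumptions as above (the spark-shell sc and the README.md path); the limit of 10 is an arbitrary choice for illustration:

// Swap to (count, word), sort descending, swap back, and print the top ten
wordCounts.map { case (word, count) => (count, word) }
  .sortByKey(ascending = false)
  .map { case (count, word) => (word, count) }
  .take(10)
  .foreach(println)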
The Java version of the Spark big data Chinese word segmentation statistics program was completed; after a week of effort, the Scala version of the same program is also done. I am sharing it here with friends who want to learn Spark. The following shows the final interface of the program.
The text effect in this tutorial has a fairly complex production process and needs many parts: background, hollow text, metal relief, sparks, and so on. The hollow-text and spark parts are a bit involved; set the parameters step by step according to the author's hints, and be patient.
Final effect
1. Create a new 1024 * 786 px document and pull a radial gradient…
There are many ways to produce a spark text effect; working with paths and layer styles is relatively fast. The process: first create a path (or convert the text to a path), then stroke the path with the brush you set up to get a preliminary spark, and finally use layer styles to add the flame effect.
Final effect
1. Create a new document at 1024 * 1024 pixels…
5. Create new black text, using a bold font at a size of 500 pixels.
6. Double-click the text layer and add an Outer Glow: blending mode "Light", color #a6dc6b, size 10, range 100%.
7. Change the fill of the text layer to 0%; the text will then have a very subtle halo effect.
8. Right-click the text layer and select Create Work Path.
9. Download the Diamond Spark
… right-click to create a work path; set the foreground color to #fff7e5 and the background color to #363636. Stroke the path, right-click and delete the path, run Filter > Distort > Wave, and change the layer blending mode to Pin Light.
14. Then set up the brush as follows.
RDDs, Spark SQL built-in functions, window functions, UDFs, UDAFs, the Spark Streaming Kafka direct API, updateStateByKey, transform, sliding windows, foreachRDD performance optimization, integration with Spark SQL, persistence, checkpointing, fault tolerance, and transactions. 7. Multiple complex cases extracted from real enterprise needs: daily UV…
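As a taste of one of the topics listed above, here is a hedged sketch of updateStateByKey, which keeps a running word count across batches. It assumes a StreamingContext with a checkpoint directory already set (required for stateful operations) and a DStream of (word, 1) tuples like the pairs stream built further down this page:

// Merge this batch's counts for a word into the running total kept in state
val updateFunc = (newValues: Seq[Int], runningCount: Option[Int]) =>
  Some(newValues.sum + runningCount.getOrElse(0))

val totalCounts = pairs.updateStateByKey[Int](updateFunc)
totalCounts.print()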
The effect of this tutorial is truly admirable. The methods and tools used are ones we use all the time, yet the result is unexpectedly good; both the colors and the overall look are very realistic and vivid, worthy of a master's work. Adobe Illustrator may be needed to make the 3D text effect; if it is not installed, you can work directly from the author's diagram. The tutorial requires Photoshop CS2 or a later version.
spark://hadoop1:7077 --executor-memory 512m --driver-memory 500m

3.1.5 Running the WordCount Script

Here is the execution script for WordCount, written in Scala as a one-line implementation:

scala> sc.textFile("hdfs://hadoop1:9000/user/hadoop/testdata/core-site.xml").flatMap(_.split(" ")).map(x => (x, 1)).reduceByKey(_ + _).map(x => (x._2, x._1)).sortByKey(false).map(x => (x._2, x._1)).take(10)

In order to see the implementation…
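For readers new to the RDD API, the same one-liner reads more easily when split into named steps. A sketch under the same assumptions (the spark-shell sc and the HDFS path above); each intermediate value is itself an RDD:

// Read the file; each element of the RDD is one line
val lines = sc.textFile("hdfs://hadoop1:9000/user/hadoop/testdata/core-site.xml")
// Split every line into words
val words = lines.flatMap(_.split(" "))
// Pair each word with 1, then sum the 1s per word
val counts = words.map(x => (x, 1)).reduceByKey(_ + _)
// Swap to (count, word), sort descending, swap back, take the ten most frequent
val top10 = counts.map(x => (x._2, x._1)).sortByKey(false).map(x => (x._2, x._1)).take(10)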
]"). Setappname ("Networkwordcount")
Val SSC = new StreamingContext (conf, Seconds (1))
Create a DStream that would connect to Hostname:port, like localhost:9999
Val lines = Ssc.sockettextstream ("localhost", 9999)
Split each line into words
Val words = Lines.flatmap (_.split (""))
Import Org.apache.spark.streaming.streamingcontext._
Count each word in each batch
Val pairs = Words.map (Word = + (
.
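As in the official NetworkWordCount example, nothing happens until the context is started; the following lines print each batch's counts and block until the stream stops. For testing, a text source can be provided with netcat (nc -lk 9999) in another terminal:

wordCounts.print()      // Print the first ten elements of each batch's counts
ssc.start()             // Start the computation
ssc.awaitTermination()  // Wait for the computation to terminate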
Next, read the README.md file:
We save the content that was read into the file variable. In fact, file is a MappedRDD; in Spark programming, everything is based on RDDs.
Next, we filter from the file everything that contains the word "spark". This generates a FilteredRDD.
Next, let's count the total number of "Spark" occurrences:
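Put together, the steps just described look like this in spark-shell. A minimal sketch that counts the lines containing "spark", assuming README.md is in the current directory; file and sparkLines are illustrative names:

// Read README.md; file is an RDD with one element per line
val file = sc.textFile("README.md")
// Keep only the lines that mention "spark"
val sparkLines = file.filter(line => line.contains("spark"))
// Count the matching lines
sparkLines.count()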
From the execution results, we…