Restart the MySQL database with `service mysqld restart`, then run `netstat -ntlp` to check whether MySQL is listening on port 3306. I tested the remote connection locally (using the MySQL Workbench client). If it cannot connect, remember to toggle the firewall while testing with `service iptables restart/stop/start`; if the connection succeeds once the firewall is off, the problem is the port. Scheme 1: `/sbin/iptables -I INPUT -p tcp --dport 3306 -j ACCEPT`
...response time: users sometimes report that a picture fails to upload, when in fact the backend simply has not returned yet ...
So I decided to upload the image to the backend as a Base64 string.
The structure is similar to the original, with just one extra canvas element.
$ ("#uploadPic"). On (' Change ', function (event) {Event.preventdefault (); Console.log ($ (this) [0].files); var file = $ (
This) [0].files[0]; if (file.size>2097152) {alert ("Upload picture please less than 2M"); return false;} if (!/i
```c
        /* ... (the start of this program is truncated in the source;
           maxx[i] appears to hold per-subarray maxima) */
        if (maxxx < maxx[i]) {
            maxxx = maxx[i];
        }
    }
    printf("\n\nThe maximum value of all the sub-arrays of the array: %d\n\n", maxxx);
    return 0;
}
```
III. Experimental test results
Array one: 7, -3, 5, -10, -12
Array two: 100, 3, -20, -10, -12
IV. Harvest and experience after pair development
This pair development benefited me a lot. First of all, the two of us had the same goal; although we had different ideas, through communication we
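The fragment above ends a maximum-subarray exercise. As a compact sketch of the same computation (Kadane's algorithm; the object and method names here are mine, not the article's), using the two test arrays from the results section:

```scala
object MaxSubarray {
  // Kadane's algorithm: maximum sum over all contiguous subarrays, O(n).
  def maxSubarraySum(a: Array[Int]): Int = {
    var best = a(0)
    var cur = a(0)
    for (x <- a.tail) {
      cur = math.max(x, cur + x)     // extend the current run or restart at x
      best = math.max(best, cur)
    }
    best
  }

  def main(args: Array[String]): Unit = {
    println(maxSubarraySum(Array(7, -3, 5, -10, -12)))    // prints 9 (7 - 3 + 5)
    println(maxSubarraySum(Array(100, 3, -20, -10, -12))) // prints 103 (100 + 3)
  }
}
```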
Step 1: Test Spark through the Spark shell
Step 1: Start the Spark cluster. This is covered in detail in Part 3. After the Spark cluster is started, the WebUI looks as follows:
Step 2: Start the Spark shell:
At this point, you can view the shell in the Web console:
Step 3: Copy "README.md" from the Spark installation directory to HDFS.
Start a new command terminal on the master node and go to the Spark installation directory.
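As a sketch of this test (the HDFS URI and host name are assumptions; the article's actual commands are only in its screenshots), once README.md is in HDFS you can load and inspect it from the Spark shell:

```scala
// Inside spark-shell, where `sc` is the pre-built SparkContext.
// The HDFS path below is illustrative, not taken from the article.
val readme = sc.textFile("hdfs://master:9000/user/root/README.md")
println(readme.count())                             // total number of lines
val sparkLines = readme.filter(line => line.contains("Spark"))
println(sparkLines.count())                         // lines that mention "Spark"
```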
"Test_policy".650) this.width=650; "Src=" http://7xo6kd.com1.z0.glb.clouddn.com/ Upload-ueditor-image-20161129-1480374307243096152.jpg "/>650) this.width=650; "Src=" http://7xo6kd.com1.z0.glb.clouddn.com/ Upload-ueditor-image-20161129-1480374307384025725.jpg "/>In the dropdown box, select Rule "Allow SSH" and click "Save Changes".650) this.width=650; "Src=" http://7xo6kd.com1.z0.glb.clouddn.com/ Upload-ueditor-image-20161129-1480374310706029995.jpg "/>As you can see, "Allow SSH" has been succes
For the past few days I have been studying how to produce these charts. After several days of thinking I finally wrote a simple operations-monitoring prototype, which I paste below to share. Because my data volume is small, I collect the data directly on the clients with a script and send it to a remote MySQL server; Django is deployed on that MySQL server and, combined with a charting tool, draws the charts.
Monitoring top
3. Modify the environment variables:
Open the configuration file, as shown:
Press "i" to enter insert mode and add the Scala environment configuration, as shown:
In the configuration file we set "SCALA_HOME" and add Scala's bin directory to "PATH".
Press the "Esc" key to return to normal mode, then save and exit the configuration file:
Run the following command so that the modified configuration takes effect:
4. Display the installed Scala version in the terminal:
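As a sketch, assuming Scala was unpacked to /usr/lib/scala (the actual install path appears only in the article's screenshots), the added configuration and the verification steps would look roughly like:

```sh
export SCALA_HOME=/usr/lib/scala   # assumed install location
export PATH=$PATH:$SCALA_HOME/bin

# then reload the profile and check the version:
source /etc/profile
scala -version
```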
Practice of setting a route table to restrict website access
The website covered in this experiment (hereafter "the website"): ping the website's domain name to obtain its IP address; the IP address ends in .69. Then run tracert to see how many hops the route takes. The engineers use SolarWinds to scan the website's IP address segment. We know that there is
Python practice every day (1): calculate the most frequently used words in each article in a folder.
```python
# coding: utf-8
import os, re

path = 'test'
files = os.listdir(path)

def count_word(words):
    dic = {}
    max_count = 0      # renamed from `max` to avoid shadowing the builtin
    marked_key = ''
    # count how many times each word appears
    for word in words:
        if word not in dic:      # dict.has_key() is Python 2 only
            dic[word] = 1
        else:
            dic[word] = dic[word] + 1
    # ... (the original snippet is truncated here; presumably it goes on to
    # track the most frequent word in max_count / marked_key)
```
Step 4: Build and test the Spark development environment through the Spark IDE
Step 1: Import the Spark-Hadoop package: select "File" > "Project Structure" > "Libraries", then click "+" and import the jar matching your Spark and Hadoop versions:
Click "OK" to confirm:
Click "OK":
After IDEA finishes processing, we will find that the Spark jar package has been imported into our project:
Step 2: Develop the first Spark program. Open the ...
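The program itself appears only in the article's screenshots. As a minimal sketch of a typical first Spark program of this era (the object name and input path are my own placeholders), a standalone word count looks like:

```scala
import org.apache.spark.SparkContext

// A minimal first Spark program (object name and input path are illustrative).
object FirstSparkApp {
  def main(args: Array[String]): Unit = {
    // "local" runs Spark inside the IDE process; a cluster URL could go here instead.
    val sc = new SparkContext("local", "FirstSparkApp")
    val lines = sc.textFile("README.md")            // input path is an assumption
    val counts = lines.flatMap(_.split("\\s+"))     // split lines into words
                      .map(word => (word, 1))
                      .reduceByKey(_ + _)           // add up counts per word
    counts.take(10).foreach(println)
    sc.stop()
  }
}
```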
C++ practice: implementation of a lightweight array class
Note: this array class can be regarded as a simplified version of the standard library's vector: it supports the usual operations on arrays, supports copying and assignment, and supports resizing. Multithreading is not considered, and no extra space is pre-allocated as a performance optimization.
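The article's implementation is in C++ and is not reproduced in this digest. Purely as an illustration of the same design (element get/set, deep copy, resize; all names here are mine), a Scala sketch might look like:

```scala
// A lightweight resizable array: the Scala analogue of the C++ class described above.
class LightArray(initialSize: Int) {
  private var data = new Array[Int](initialSize)

  def size: Int = data.length
  def apply(i: Int): Int = data(i)                 // read an element: arr(i)
  def update(i: Int, v: Int): Unit = data(i) = v   // write an element: arr(i) = v

  // Resize: keep the elements that still fit, zero-fill any new slots.
  def resize(newSize: Int): Unit = {
    val next = new Array[Int](newSize)
    Array.copy(data, 0, next, 0, math.min(data.length, newSize))
    data = next
  }

  // Deep copy, mirroring a C++ copy constructor / assignment operator.
  def copy(): LightArray = {
    val c = new LightArray(size)
    Array.copy(data, 0, c.data, 0, size)
    c
  }
}
```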
Main topic: very simple. Just find the maximum depth of a binary tree; there seem to be no particular time or space requirements.
Solution method: also simple. A breadth-first traversal is enough: here I use one queue, A, to hold the nodes waiting to be expanded, a queue B to collect the newly expanded nodes, and an intermediate variable T to swap the two.
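A minimal Scala sketch of that two-queue scheme (the node type and all names are mine; the original's code is not shown in this digest):

```scala
import scala.collection.mutable

// Binary tree node (illustrative definition).
case class TreeNode(value: Int,
                    left: TreeNode = null,
                    right: TreeNode = null)

// Maximum depth via level-order traversal with two queues,
// mirroring the two-queue-plus-swap idea described above.
def maxDepth(root: TreeNode): Int = {
  if (root == null) return 0
  var current = mutable.Queue(root)            // queue A: nodes to expand
  var next = mutable.Queue.empty[TreeNode]     // queue B: newly expanded nodes
  var depth = 0
  while (current.nonEmpty) {
    val node = current.dequeue()
    if (node.left != null) next.enqueue(node.left)
    if (node.right != null) next.enqueue(node.right)
    if (current.isEmpty) {                     // finished one level
      depth += 1
      val t = current; current = next; next = t   // swap via temp T, as in the text
    }
  }
  depth
}
```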
Step 5: Test the Spark IDE development environment
The following error message is displayed when we directly select SparkPi and run it:
The message shows that the master machine running Spark cannot be found.
In this case, you need to configure the SparkPi execution environment:
Select "Edit Configurations" to go to the configuration page:
In "Program arguments", enter "local":
This configuration means that our program runs in local mode.
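Passing the master as a program argument matches how the early SparkPi example was written; the following is a simplified sketch of that pattern (not the exact example source):

```scala
import org.apache.spark.SparkContext
import scala.math.random

// Simplified SparkPi pattern: the master URL (here "local", as entered
// in "Program arguments") arrives as the first argument.
object SparkPiSketch {
  def main(args: Array[String]): Unit = {
    val master = if (args.nonEmpty) args(0) else "local"
    val sc = new SparkContext(master, "SparkPi")
    val n = 100000
    // Monte Carlo estimate: fraction of random points inside the unit circle.
    val count = sc.parallelize(1 to n).map { _ =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    println(s"Pi is roughly ${4.0 * count / n}")
    sc.stop()
  }
}
```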
The previous section applied a virtual firewall whose policy had no rules, so it did not allow any traffic to pass through. Today we will add a rule to the firewall to allow SSH. Finally, we will compare security groups with FWaaS.
Let's add a firewall rule: Allow SSH.
Click the "Add Rule" button on the Firewall Rules tab page.
Name the new rule "Allow SSH", set Protocol to "TCP", Action to "Allow", and "Destination Port/Port Range" to "22".
in /etc/hosts on slave2. The configuration is as follows:
Save and exit.
At this point, we can ping both master and slave1.
Finally, configure the mapping between host names and IP addresses in /etc/hosts on the master. The configuration is as follows:
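The actual addresses appear only in the article's screenshots; as a sketch with assumed IPs, the /etc/hosts mapping on each machine would look like:

```
192.168.1.100   master    # addresses are assumptions, for illustration only
192.168.1.101   slave1
192.168.1.102   slave2
```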
Now run ping on the master to check connectivity with the slave1 and slave2 machines:
Both slave machines can be pinged successfully.
Finally, let's test
Article title: a simple practice of NAT using iptables on the RHEL 5 operating system.
Implementation: the Linux host performs routing to provide shared Internet access for the subnet.
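As a sketch of the core of such a setup, assuming the LAN is 192.168.1.0/24 and the Internet-facing interface is eth0 (the subnet and interface names are assumptions, not the article's), the essential commands are:

```sh
# enable IPv4 forwarding so the host routes between interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward

# source NAT: masquerade LAN traffic leaving through the external interface
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE
```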