Installation Environment
Mac OS X 10.9.5 (64-bit), sbt, git
1. Install Jupyter and Python
Use Anaconda to install Jupyter; Anaconda ships with either Python 2.7 or Python 3.5.
Anaconda Download Address
The Anaconda website provides downloads for each operating system, including Windows, Linux, and OS X, in both 32-bit and 64-bit versions; choose the appropriate one.
#bash Anaconda3-2.5.0-MacOSX-x86_64.sh
On both Linux and OS X, please run the installer with the bash command. Running sh Anaconda3-2.5.0-Linux-x86_64.sh or ./Anaconda3-2.5.0-Linux-x86_64.sh will not install successfully; I have tested this myself.
Welcome to Anaconda3 2.5.0 (by Continuum Analytics Inc.)
In order to continue the installation process, please review the license
agreement.
Please, press ENTER to continue
>>>
Keep pressing Enter to page through the license.
Do you approve the license terms? [yes|no]
>>>
Type yes here, of course.
Anaconda3 will now be installed into this location:
/Users/zhangsan/anaconda3
- Press ENTER to confirm the location
- Press CTRL-C to abort the installation
- Or specify a different location below
[/Users/zhangsan/anaconda3] >>>
Anaconda's default installation path is /Users/zhangsan/anaconda3/; of course, you can also specify your own installation directory by entering it here.
Open a new terminal after setup completes and run:
#jupyter notebook
Jupyter will start and automatically open http://localhost:8888/tree# in the browser:
[I 17:37:52.287 NotebookApp] Serving notebooks from local directory: /Users/zhangsan/anaconda3
[I 17:37:52.287 NotebookApp] 0 active kernels
[I 17:37:52.287 NotebookApp] The Jupyter Notebook is running at: http://localhost:8888/
[I 17:37:52.287 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
Of course, you can also set the port and web UI address manually and then open the UI yourself, using the following command:
#jupyter notebook --no-browser --port=5678 --ip=9.21.63.204
This tells Jupyter not to open a browser, sets the port to 5678, and binds to the IP address of the machine where Anaconda is installed; you then open 9.21.63.204:5678 in the browser manually.
At this point we have successfully installed IPython/Jupyter Notebook and started it.
Write some code to test it. Here is a Python plotting program; click Run Cell and the result is a plot of the sin and cos functions.
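A minimal sketch of such a test cell (assuming the NumPy and matplotlib packages bundled with Anaconda):

    %matplotlib inline
    import numpy as np
    import matplotlib.pyplot as plt

    # Plot sin and cos over one full period
    x = np.linspace(0, 2 * np.pi, 200)
    plt.plot(x, np.sin(x), label="sin(x)")
    plt.plot(x, np.cos(x), label="cos(x)")
    plt.legend()
    plt.show()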
2. Install a kernel other than Python (taking Scala as an example)
#jupyter kernelspec list
This lists the kernels that are already installed:
Available kernels:
  python3    /Users/daheng/anaconda3/lib/python3.5/site-packages/ipykernel/resources
You can see that only python3 is present.
Jupyter supports many programming languages besides Python; available kernels include R, Go, Scala, and so on, but all of them must be installed manually.
Select the Scala version you want to download; builds are provided for Scala 2.10 and Scala 2.11. Here we take 2.10 as the example.
Jupyter-scala Download Address
Extract it to any directory.
#cd jupyter-scala_2.10.5-0.2.0-SNAPSHOT/bin
#bash jupyter-scala
Again, use bash here.
Generated ~/.ipython/kernels/scala210/kernel.json
Run ipython console with this kernel with
  ipython console --kernel scala210
Use this kernel from IPython notebook, running
  ipython notebook
and selecting the "Scala 2.10" kernel.
After the installation, list the kernels again and you will see that scala210 has been added:
#jupyter kernelspec list
Available kernels:
  scala210   /Users/daheng/.ipython/kernels/scala210
  python3    /Users/daheng/anaconda3/lib/python3.5/site-packages/ipykernel/resources
Start the notebook again:
#jupyter notebook
You can see that a new Scala notebook type is available.
We now have both Python and Scala notebooks.
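To verify the Scala kernel, you can run a small expression in a new Scala 2.10 notebook, for example:

    // A quick sanity check for the Scala kernel
    val squares = (1 to 5).map(n => n * n)
    println(squares.mkString(", "))   // prints: 1, 4, 9, 16, 25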
3. Install the Spark kernel
This is the key part, and also the hard part. The official documentation is too complicated to extract a suitable installation method from, and it took me a long time of trial and error before I successfully got Jupyter and Spark working together.
First download Spark; I used spark-1.5.1-bin-hadoop2.6.
Spark Download Address
#tar -xvf spark-1.5.1-bin-hadoop2.6.tgz
Extract it to the directory you want, such as /Users/zhangsan/Documents
Set SPARK_HOME:
#export SPARK_HOME=/Users/zhangsan/Documents/spark-1.5.1-bin-hadoop2.6
#cd /Users/zhangsan/anaconda3/
#git clone https://github.com/apache/incubator-toree.git
This step requires Git on your machine; if you don't have it, install Git yourself, which is not covered here.
#cd incubator-toree
Next we compile the Spark kernel. If you open the Makefile you can see that it builds against Spark 1.5.1 by default. You can also specify a version, as long as it matches the Spark version in your local SPARK_HOME:
#export APACHE_SPARK_VERSION=1.6.1
This compiles the Spark kernel against the specified Spark version.
Compilation requires sbt, so make sure sbt is properly installed.
Compile with the following command:
#make build
This automatically downloads and resolves the required dependencies, so it takes quite a while; it is best done somewhere with a good network connection.
[success] Total time: 274 s, completed 2016-3-29 11:05:52
A final [success] line like the above means the compilation succeeded.
#make dist
#cd dist/toree/bin
#ls
You will see a run.sh file; remember this path and file name, as the configuration below uses them.
#mkdir -p /Users/zhangsan/.ipython/kernels/spark
#cd /Users/zhangsan/.ipython/kernels/spark
#vim kernel.json
{
  "display_name": "Spark 1.5.1 (Scala 2.10.4)",
  "language_info": { "name": "scala" },
  "argv": [
    "/Users/zhangsan/anaconda3/incubator-toree/dist/toree/bin/run.sh",
    "--profile",
    "{connection_file}"
  ],
  "codemirror_mode": "scala",
  "env": {
    "SPARK_OPTS": "--master=local[2] --driver-java-options=-Xms1024M --driver-java-options=-Xmx4096M --driver-java-options=-Dlog4j.logLevel=info",
    "MAX_INTERPRETER_THREADS": "16",
    "CAPTURE_STANDARD_OUT": "true",
    "CAPTURE_STANDARD_ERR": "true",
    "SEND_EMPTY_OUTPUT": "false",
    "SPARK_HOME": "/Users/zhangsan/Documents/spark-1.5.1-bin-hadoop2.6",
    "PYTHONPATH": "/Users/zhangsan/Documents/spark-1.5.1-bin-hadoop2.6/python:/Users/zhangsan/Documents/spark-1.5.1-bin-hadoop2.6/python/lib/py4j-0.8.2.1-src.zip"
  }
}
#jupyter kernelspec list
Available kernels:
  scala210   /Users/daheng/.ipython/kernels/scala210
  spark      /Users/daheng/.ipython/kernels/spark
  python3    /Users/daheng/anaconda3/lib/python3.5/site-packages/ipykernel/resources
At this point the Spark kernel is installed.
#jupyter notebook
Open the notebook again; you can now create a new Spark notebook.
Write some code to test it.
Here is a SparkPi program; click Run Cell to see the result.
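A minimal SparkPi sketch for this kernel (it assumes the preconfigured SparkContext that the Spark kernel exposes as sc):

    // Estimate Pi by sampling random points in the unit square;
    // sc is the SparkContext preconfigured by the Spark kernel.
    val n = 100000
    val count = sc.parallelize(1 to n).map { _ =>
      val x = math.random * 2 - 1
      val y = math.random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    println(s"Pi is roughly ${4.0 * count / n}")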
With that, the Spark kernel is successfully installed.
This is my first technical blog post; comments and questions are welcome.