The Linux 6.7 virtual machine ships with Python 2.6; if you do not need to upgrade the Python version, you can install IPython directly:
# python -V                      ---- view the Python version
# yum install python-pip
# pip install ipython
# pip install ipython==1.2.1     ---- specify a version
Earlier articles covered some of Python's basics; starting with this one we begin to formally learn modular programming in Python. First, let's explain what a module is. I have already talked about how to define a function. If you use an interactive Python session (the standard interpreter or IPython) to define a function, then exit the session and try to use that function again, it obviously will no longer exist.
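A minimal sketch of the fix: put the function in a module file (the filename greetings.py is hypothetical), and it survives across sessions:

# greetings.py -- save this file in the current directory
def hello(name):
    # the function now lives on disk, not only in the interactive session
    return "Hello, %s!" % name

# In a new interpreter session it is still available:
#   >>> import greetings
#   >>> greetings.hello("Python")
#   'Hello, Python!'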
Anaconda's default installation path is /Users/zhangsan/anaconda3/; of course you can also specify the installation directory yourself, and enter it here.
Open a new terminal after setup completes and run:
# jupyter notebook
Jupyter will start and automatically open http://localhost:8888/tree in the browser:
[I 17:37:52.287 NotebookApp] Serving notebooks from local directory: /Users/zhangsan/anaconda3
[I 17:37:52.287 NotebookApp] 0 active kernels
[I 17:37:52.287 NotebookApp] The Jupyter Notebook is running at: http://localhost:8888/
Brief introduction:
Let's take a look at the installation steps for the Python environment packages.
1. Upgrade Python to 2.7.10 (default 2.6.6)
shell > yum -y install epel-release
shell > yum -y install gcc wget readline-devel zlib-devel openssl-devel
shell > wget https://www.python.org/ftp/python/2.7.10/Python-2.7.10.tgz
shell > tar zxf Python-2.7.10.tgz
shell > cd Python-2.7.10; ./configure --prefix=/usr/local/python2.7; make; make install
shell > mv /usr/bin/python /usr/bin/old_python
shell > ln -s /usr/local/python2.7/bin/python /usr/bin/python
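To confirm the upgrade took effect, a quick check from inside the interpreter (a minimal sketch; works on any Python):

import sys
print(sys.version)   # should now start with "2.7.10"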
Installing IPython 3.0 on CentOS 6.6
1. Install Python 2.7.8
Because IPython only supports Python 2.7.8 and later, you must first install Python 2.7.8.
Download the source package Python-2.7.8.tgz from the following address: https://www.python.org/downloads/release/python-278/
1. Decompress:
[root@centos_1 ~]# tar xf Python-2.7.8.tgz
2. Go to the decompressed Python-2.7.8 folder and compile it:
[root@centos_1 Python-2.7.8]# ./configure --prefix=/usr/local/python
Test: at the cmd command line, run python; it enters the Python interactive interface.
Mac
Recent Mac systems come with a Python environment; you can also download the latest version from http://www.python.org/download/ and install it.
Linux
The Python that comes with Linux is not very friendly (for example, the TAB key does not complete).
Source: https://www.reddit.com/r/IPython/comments/3lf81w/using_python35_in_ubuntu_trusty/
Note: after installing Python 3.5 according to this scheme, many system programs stop working, because the system programs use 3.4. You can only restore the system Python version:
$ sudo rm /usr/bin/python3
$ sudo mv /usr/bin/python3-old /usr/bin/python3
$ wget https://bootstrap.pypa.io/get-pip.py
$ sudo python3 get-pip.py
$ sudo python get-pip.py
$ sudo pip3 install
# PYTHONPATH: add Spark's pyspark module to the Python environment
export PYTHONPATH=/opt/spark-hadoop/python
Restarting the computer makes /etc/profile take effect permanently; to apply it temporarily, open a command window and execute source /etc/profile, which takes effect in the current window.
Test the installation result:
Open a command window and switch to the Spark root directory.
Run ./bin/spark-shell to open the console.
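Besides the Scala spark-shell, a small PySpark job can verify the Python side. A minimal sketch, assuming the PYTHONPATH export above and a local-mode Spark (run it inside ./bin/pyspark or via ./bin/spark-submit):

from pyspark import SparkContext

sc = SparkContext("local[2]", "install-check")   # local mode, 2 threads
print(sc.parallelize(range(1, 101)).sum())       # expect 5050
sc.stop()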
export PATH=${SCALA_HOME}/bin:$PATH
# Set the Spark environment variable
export SPARK_HOME=/opt/spark-hadoop/
# PYTHONPATH: add Spark's pyspark module to the Python environment
export PYTHONPATH=/opt/spark-hadoop/python
I recently wrote a machine learning program on Spark using the RDD programming model. The machine learning algorithm API provided by Spark is too limited; could scikit-learn be used within Spark's programming model? Reply: unlike the answer above, I think it is possible.
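One common pattern (a hedged sketch, not an official Spark API): train a scikit-learn model on the driver, broadcast it, and apply it per partition. The names sc, train_X, train_y, and features_rdd are assumptions standing in for an existing SparkContext, training data, and an RDD of feature vectors:

from sklearn.linear_model import LogisticRegression
import numpy as np

model = LogisticRegression().fit(train_X, train_y)  # fit once on the driver
bc = sc.broadcast(model)                            # ship the model to executors

def predict_partition(rows):
    # batch the whole partition so sklearn predicts vectorized, not row by row
    batch = np.array(list(rows))
    return bc.value.predict(batch).tolist() if len(batch) else []

predictions = features_rdd.mapPartitions(predict_partition)

Note the trade-off: the model must fit in executor memory, and training still happens on a single machine; only prediction is distributed.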
The JetBrains family of IDEs is really pleasant to use, a kind of love at first sight. Third-party libraries are essential during development, and an IDE with full code completion saves time checking documentation. For example: give PyCharm the environment variables for PySpark and set up code completion. The end result should look like this. The first configuration is the compilation (interpretation) support for the third-party libraries.
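A sketch of the same idea done in code rather than IDE settings (the paths below are assumptions; adjust them to your install):

import os
import sys

os.environ.setdefault("SPARK_HOME", "/opt/spark-hadoop")
spark_python = os.path.join(os.environ["SPARK_HOME"], "python")
sys.path.insert(0, spark_python)  # lets the interpreter (and IDE) resolve pyspark
# at runtime the py4j zip under $SPARK_HOME/python/lib is also needed on sys.path

from pyspark import SparkConf  # should now import (and auto-complete)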
Tags: spark, python
Preparation conditions:
A deployed Hadoop cluster
A deployed Spark cluster
Python installed (I installed Anaconda3; its Python is 3.6)
Configure the environment variables: vi .bashrc  # add the following content
export SPARK_HOME=/opt/spark/current
export PYTHONPATH=$SPARK_HOME/python/:$SPARK_HOME/python/lib/py4j-0.10.4-src.zip
PS: Spark ships with a pyspark module, but mine is the official Spark 2.1 download
1. Download Spark to the D: drive. Add SPARK_HOME = D:\spark-2.3.0-bin-hadoop2.7,
and add %SPARK_HOME%\bin to the PATH environment variable.
Then go to the command line and enter the pyspark command. If it executes successfully, the environment variables are set correctly.
Locate PyCharm's site-packages directory and right-click to enter it. Under the D:\spark-2.3.0-bin-hadoop2.7 directory above there is a python/ directory
the PYTHONPATH: the Spark installation directory
4. Copy the pyspark package
Write the Spark program, copy the pyspark package, and add code completion. In order to have code hints and completion when writing Spark programs in PyCharm, we need to make Spark's pyspark package importable from Python. Spark's distribution includes a Python package called pyspark. The pyspark package
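As an alternative to copying the pyspark package into site-packages, the third-party findspark library (a swapped-in technique, not from the original text; pip install findspark) patches sys.path at runtime:

import findspark
findspark.init("D:\\spark-2.3.0-bin-hadoop2.7")  # point at the Spark root directory

import pyspark
print(pyspark.__version__)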
search, cross-validation, metrics.
- Preprocessing: feature extraction, standardization.
statsmodels: a statistical analysis package containing classical statistics and econometrics algorithms, with the following sub-modules:
- Regression models: linear regression, generalized linear models, robust linear models, linear mixed-effects models, and so on.
- Analysis of variance (ANOVA)
- Time series analysis: AR, ARMA, ARIMA, VAR, and other models
- Nonparametric methods: kernel density estimation, kernel regression
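For instance, the regression-model sub-module in action; a minimal OLS sketch with synthetic data (variable names are illustrative):

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
x = np.random.randn(100)
y = 1.0 + 2.0 * x + 0.1 * np.random.randn(100)  # y = 1 + 2x + noise

X = sm.add_constant(x)        # add the intercept column
fit = sm.OLS(y, X).fit()      # classical linear regression
print(fit.params)             # roughly [1.0, 2.0]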
Matplotlib: plotting
Thanks
Many thanks to Bill Wing and Christoph Deil for review and corrections.
Authors: Nicolas Rougier, Mike Müller, Gaël Varoquaux
Contents of this chapter:
Introduction
Simple plotting
Graphics, subgraphs, axes, and scales
Other types of graphics: examples and exercises
Content not included in the tutorial
Quick Reference
4.1 Introduction
Matplotlib may be the most commonly used Python package for two-dimensional graphics. It provides both a very quick way to visualize data from Python and publication-quality figures in many formats.
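A minimal taste of what the chapter builds up to (the classic sine/cosine figure; exact styling omitted):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-np.pi, np.pi, 256)
plt.plot(x, np.cos(x), label="cosine")
plt.plot(x, np.sin(x), label="sine")
plt.legend()
plt.show()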