The JetBrains family of IDEs is genuinely a pleasure to use; once you try one, you wish you had discovered it sooner.
Third-party libraries are essential in development, and a fully configured IDE saves you a lot of the time you would otherwise spend digging through documentation.
As an example, let's give PyCharm the environment variables PySpark needs and set up code completion. The end result should look like this:
The first step is configuring interpretation (compilation) support for the third-party library, under Run → Edit Configurations. Click the + sign to add a new configuration.
In the Configuration tab, find Environment variables under the Environment section and edit them as follows:
PYTHONPATH is the python directory of your Spark installation, for example: /usr/local/spark/spark-1.6.2-bin-hadoop2.6/python
SPARK_HOME is the root directory of your Spark installation, for example: /usr/local/spark/spark-1.6.2-bin-hadoop2.6
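For reference, these two variables correspond to what you would export in a shell before running the same script outside PyCharm. The paths below are this article's example paths; adjust them to your own installation:

```shell
# Example paths from this article; change them to match your Spark install.
export SPARK_HOME=/usr/local/spark/spark-1.6.2-bin-hadoop2.6
export PYTHONPATH="$SPARK_HOME/python"
```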
Click OK all the way through to save. At this point the Spark program you wrote can already be interpreted (compiled) and run, but there is no code completion yet. Next, set up code completion.
PyCharm finds your third-party libraries in the dist-packages and site-packages directories of the Python interpreter version you have selected, then analyzes the contents of those directories to provide code completion.
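You can ask the interpreter itself where those directories are. A small sketch using only the standard library (no PySpark required), run with the same interpreter you selected in PyCharm:

```python
import site
import sys

# The interpreter's own view of its package directories;
# PyCharm scans these (site-packages / dist-packages) to build completion.
print(site.getsitepackages())

# Cross-check: sys.path entries that look like package directories.
package_dirs = [p for p in sys.path
                if "site-packages" in p or "dist-packages" in p]
print(package_dirs)
```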
The next step is to find this site-packages directory, which can be done with locate:
locate /lib/python2.7/site-packages
This produces a lot of output; just press Ctrl+C to stop it. Somewhere in that output is the directory you need; mine, for example, is at: /usr/local/lib/python2.7/site-packages/
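If locate's database has not been built on your machine (it may need an updatedb run first), an alternative is to ask the interpreter directly. Shown here with python3; use whichever interpreter you selected in PyCharm (python2.7 in this article's setup):

```shell
# Print the interpreter's site-packages directories directly,
# avoiding the locate database entirely.
python3 -c "import site; print('\n'.join(site.getsitepackages()))"
```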
Next, simply create a soft link pointing to the pyspark directory and place it inside python2.7/site-packages:
ln -s /usr/local/spark/spark-1.6.2-bin-hadoop2.6/python/pyspark /usr/local/lib/python2.7/site-packages/
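The general pattern is `ln -s <target> <link-directory>/`. A self-contained sketch of the same technique, using throwaway temporary directories rather than the real Spark and site-packages paths, shows what the command produces:

```shell
# Demo of the symlink technique with temporary directories, so it can
# run anywhere; substitute the real Spark and site-packages paths.
src=$(mktemp -d)   # stands in for .../spark-1.6.2-bin-hadoop2.6/python
dst=$(mktemp -d)   # stands in for /usr/local/lib/python2.7/site-packages
mkdir "$src/pyspark"

ln -s "$src/pyspark" "$dst/"   # same form as the command above

ls -l "$dst/pyspark"           # shows the link: pyspark -> $src/pyspark
```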
Switch back to PyCharm and you will see it re-indexing these updated directories; once indexing finishes, completion works automatically.
[JetBrains series] Linking external third-party libraries + code completion settings