Python machine learning: 6.1 Creating workflows from pipelines

Source: Internet
Author: User

When we apply different preprocessing techniques, such as feature standardization or principal component analysis, we need to reuse the parameters learned from the training set: for example, the test set must be standardized with the same mean and scale that were fitted on the training set.

In this section, you'll learn a very useful tool: the pipeline. This is not the pipeline from Linux, but the Pipeline class in sklearn, although it works in much the same way.

Reading the Breast Cancer Wisconsin dataset

In this chapter, we will use a new binary classification dataset, Breast Cancer Wisconsin, which contains 569 samples. The first two columns of each row are a unique ID and the corresponding class label (M = malignant tumor, B = benign tumor), and columns 3-32 hold 30 real-valued features.

Without further ado, let's read the dataset first and then encode y as 0 and 1:
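The original text loads the raw wdbc.data file with pandas from the UCI repository; as a self-contained sketch, we can use sklearn's bundled copy of the same dataset and reconstruct the string labels to show the encoding step (the use of load_breast_cancer here is an assumption, not the text's exact code):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import LabelEncoder

# sklearn ships a copy of the Breast Cancer Wisconsin data, which
# spares us downloading wdbc.data from the UCI repository.
data = load_breast_cancer()
X = data.data                                 # 569 x 30 real-valued features

# Rebuild the original 'M'/'B' string labels (in sklearn's copy,
# target 0 means malignant) so we can demonstrate the encoding step:
y_str = np.where(data.target == 0, 'M', 'B')

le = LabelEncoder()
y = le.fit_transform(y_str)                   # 'B' -> 0, 'M' -> 1
```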

Then create the training set and the test set:
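A minimal sketch using train_test_split (the 80/20 split ratio and random_state are assumptions):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Stand-in for the data loaded above.
X, y = load_breast_cancer(return_X_y=True)

# Hold out 20% of the samples as a test set; stratify=y keeps the
# class proportions roughly equal in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=1)
```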

Putting transformers and estimators in the same pipeline

As mentioned in the previous chapters, many machine learning algorithms require features on the same scale. Therefore, we will standardize each column of the BCW dataset before feeding it to a linear classifier. In addition, we want to compress the original 30-dimensional features down to 2 dimensions, a job we hand to PCA.

Whereas previously we performed each of these steps one at a time, we will now learn to chain StandardScaler, PCA, and LogisticRegression together using a pipeline:
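A sketch of such a pipeline (the step names and hyperparameters here are illustrative, not necessarily the original code):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=1)

# Chain scaler -> PCA -> classifier; each step is a (name, object) pair.
pipe_lr = Pipeline([('scl', StandardScaler()),
                    ('pca', PCA(n_components=2)),
                    ('clf', LogisticRegression(random_state=1))])

pipe_lr.fit(X_train, y_train)                  # runs the whole chain
print('Test accuracy: %.3f' % pipe_lr.score(X_test, y_test))
```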

The Pipeline object receives a list of tuples as input; the first element of each tuple is an arbitrary step name, and the second element is a transformer or estimator from sklearn.

Every intermediate step of a pipeline must be a sklearn transformer, and the final step must be an estimator. In our example, the pipeline contains two intermediate steps, a StandardScaler and a PCA, both of which are transformers, and the LogisticRegression classifier is the estimator.

When we call the fit method of the pipeline pipe_lr, StandardScaler first executes fit and transform and passes the converted data to PCA, which also performs fit and transform. Finally, the data reaches LogisticRegression, which trains an LR model.

How many transformers can sit in the middle of a pipeline? Any number. The way a pipeline works can be demonstrated step by step (be sure to note that when the pipeline's fit method runs, each intermediate transformer performs fit_transform, while only the final estimator performs a plain fit):
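To see that the pipeline is just shorthand for this chain of calls, we can replay the same steps by hand and check that the predictions match (a sketch using the same illustrative hyperparameters as above):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=1)

pipe_lr = Pipeline([('scl', StandardScaler()),
                    ('pca', PCA(n_components=2)),
                    ('clf', LogisticRegression(random_state=1))])
pipe_lr.fit(X_train, y_train)

# Replay what fit does: every intermediate transformer runs
# fit_transform, and only the final estimator runs plain fit.
scl = StandardScaler()
pca = PCA(n_components=2)
clf = LogisticRegression(random_state=1)

X_t = pca.fit_transform(scl.fit_transform(X_train))
clf.fit(X_t, y_train)

# At predict time only transform (no refitting) is applied:
manual_pred = clf.predict(pca.transform(scl.transform(X_test)))
pipe_pred = pipe_lr.predict(X_test)
print((manual_pred == pipe_pred).all())
```

Because both paths fit identical objects on identical data, the manual chain and the pipeline produce the same predictions.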

Python Machine learning Chinese catalog (http://www.aibbt.com/a/20787.html)

When reprinting, please cite the source: Python Machine Learning (http://www.aibbt.com/a/pythonmachinelearning/)
