If you reprint this article, please credit the source: http://www.codelast.com/
Logistic Regression (also called logit regression), abbreviated as LR, is a very common algorithm/method/model in the field of machine learning.
You can find 100,000 articles about logistic regression on the Internet, so mine adds little. But, as with the optimization series I wrote, I will still try to explain it in plain "words" (perhaps unprofessional, but easy to read). Articles that open with page after page of mathematical formulas are the most annoying, aren't they?
So this article is written for people who have never heard of logistic regression. I believe that after reading it, you will be able to apply logistic regression in practice almost from scratch.
Logistic regression is a classification algorithm. Classification means dividing a group of things (or problems, or data points) into several categories, for example: male/female/shemale; loves her/doesn't love her; it will rain today/it won't rain today.
Logistic regression is most commonly used for "binary classification" problems, in which there are only two categories. For example, "loves her/doesn't love her" is binary classification, while "male/female/shemale" is not. Of course, logistic regression can also handle multi-class problems, in which case it is called "multiclass logistic regression", but that is beyond the scope of this article.
So, to put it simply: given a piece of data, logistic regression determines which of two categories that data belongs to.
Logistic regression is very useful in the real world. For example, it can be used to predict whether a user will click on an ad (clicked/not clicked), or whether two people will fall in love (love/not love), and so on.
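To give a feel for what such a binary prediction looks like in code, here is a minimal sketch. It assumes the model has already been trained: logistic regression scores an input with a linear function, squashes the score into a probability with the sigmoid function, and then applies a threshold. The weights, bias, and features below are made-up illustrative numbers, not real trained values.

```python
import math

def sigmoid(z):
    # The sigmoid (logistic) function maps any real number into (0, 1),
    # so the result can be read as a probability.
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, bias, features, threshold=0.5):
    # Linear score: w . x + b
    z = sum(w * x for w, x in zip(weights, features)) + bias
    p = sigmoid(z)
    # Class 1 (e.g. "clicked") if the probability clears the threshold.
    return 1 if p >= threshold else 0

# Hypothetical trained parameters and one hypothetical input:
print(predict([2.0, -1.0], 0.5, [1.0, 3.0]))  # score -0.5 -> class 0
```

How the weights and bias are actually learned from historical data is exactly what the rest of the article builds toward.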
The main purpose of machine learning is to compute the values of unknown parameters from historical data (that is, to "learn"), and then use them to draw conclusions about new data. Take a very simple function, y = ax + b, and suppose we know several sets of historical data (x, y):
(1, 5.5)
(1.5, 7)
(2, 6.5)
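The article does not yet say how a and b would be estimated from these points; ordinary least squares is one standard choice, and the sketch below applies its closed-form solution to the three data points above.

```python
# Estimating the unknown parameters a and b of y = a*x + b
# from the historical data points given in the article,
# using the closed-form ordinary-least-squares formulas.
data = [(1.0, 5.5), (1.5, 7.0), (2.0, 6.5)]

n = len(data)
sum_x = sum(x for x, _ in data)
sum_y = sum(y for _, y in data)
sum_xy = sum(x * y for x, y in data)
sum_x2 = sum(x * x for x, _ in data)

# Slope and intercept that minimize the squared prediction error.
a = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
b = (sum_y - a * sum_x) / n

print(a, b)  # a = 1.0, b = 4.833...
```

With a and b in hand, the "learned" function y = ax + b can be used to predict y for any new x, which is the pattern the rest of the article follows for logistic regression as well.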