1. Background
During an internship outside the company, an expert once told me that learning computer science largely comes down to applying Bayes' formula. Well, it finally gets used here. The naive Bayes classifier is said to be widely used in spam- and content-filtering software, and Bayes' formula itself is fairly simple; it came up all the time in university probability problems. The core idea is to find the class that the observed feature values most likely point to. The formula is as follows:

P(class | features) = P(features | class) * P(class) / P(features)
What makes it "naive" Bayes is the assumption that the feature values are independent of each other. Bayesian methods come in many variants; here I start with a simple one and will write about the more complex ones when I run into them.
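As a quick numeric sanity check of Bayes' formula, here is a small sketch. All the numbers are made up for illustration: suppose 30% of sentences are abusive, the word "stupid" appears in 60% of abusive sentences and in 5% of normal ones.

```python
# Hypothetical numbers, for illustration only.
p_abusive = 0.3             # P(abusive): prior fraction of abusive sentences
p_word_given_abusive = 0.6  # P('stupid' | abusive)
p_word_given_normal = 0.05  # P('stupid' | normal)

# P('stupid') by the law of total probability
p_word = (p_word_given_abusive * p_abusive
          + p_word_given_normal * (1 - p_abusive))

# Bayes' formula: P(abusive | 'stupid') = P('stupid' | abusive) * P(abusive) / P('stupid')
p_abusive_given_word = p_word_given_abusive * p_abusive / p_word
print(round(p_abusive_given_word, 3))
```

So even a modest prior can yield a high posterior when a word is far more common in one class than the other, which is exactly what the classifier below exploits.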
2. Data sets
The data set comes from Machine Learning in Action.
[['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],        0
 ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],    1
 ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],       0
 ['stop', 'posting', 'stupid', 'worthless', 'garbage'],             1
 ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'], 0
 ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]          1
The above is six sentences; a label of 0 marks a normal sentence, and a label of 1 marks a sentence containing foul language. By analyzing, for every word in each sentence, the probability that it appears in an abusive sentence versus a normal one, we can find out which words are swear words.
3. Code
# Create the data set as a matrix
def loadDataSet():
    postingList = [['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],
                   ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
                   ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
                   ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
                   ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
                   ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
    classVec = [0, 1, 0, 1, 0, 1]  # 1 is abusive, 0 not
    return postingList, classVec
# Collect every word in the matrix into one list; set() drops duplicate elements
def createVocabList(dataSet):
    vocabSet = set([])  # create an empty set
    for document in dataSet:
        vocabSet = vocabSet | set(document)  # union of the two sets
    return list(vocabSet)
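A quick check of createVocabList on a tiny input of my own (not from the book): 'dog' appears in both sentences but shows up only once in the vocabulary.

```python
def createVocabList(dataSet):
    vocabSet = set([])  # start with an empty set
    for document in dataSet:
        vocabSet = vocabSet | set(document)  # union keeps each word once
    return list(vocabSet)

# tiny made-up input; sort the result because set order is arbitrary
vocab = createVocabList([['my', 'dog'], ['dog', 'park']])
print(sorted(vocab))  # -> ['dog', 'my', 'park']
```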
# Mark, for each word in the vocabulary list, whether it appears in the input sentence
def setOfWords2Vec(vocabList, inputSet):
    returnVec = [0] * len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] = 1
        else:
            print("the word: %s is not in my vocabulary!" % word)
    return returnVec
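To see what setOfWords2Vec produces, here is a small run against a four-word vocabulary of my own choosing: the output vector has a 1 in the slot of each vocabulary word that occurs in the sentence.

```python
def setOfWords2Vec(vocabList, inputSet):
    returnVec = [0] * len(vocabList)  # one slot per vocabulary word
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] = 1
        else:
            print("the word: %s is not in my vocabulary!" % word)
    return returnVec

vocab = ['my', 'dog', 'park', 'stupid']
vec = setOfWords2Vec(vocab, ['stupid', 'dog'])
print(vec)  # -> [0, 1, 0, 1]
```

Note this records only presence or absence, not counts, which is why the book calls it the set-of-words model.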
def trainNB0(trainMatrix, trainCategory):
    numTrainDocs = len(trainMatrix)
    numWords = len(trainMatrix[0])
    pAbusive = sum(trainCategory) / float(numTrainDocs)  # fraction of abusive sentences
    p0Num = zeros(numWords); p1Num = zeros(numWords)  # zeros() comes with NumPy; zeros(i) is an array of i zeros
    p0Denom = 0.0; p1Denom = 0.0
    for i in range(numTrainDocs):
        if trainCategory[i] == 1:  # abusive sentence: add one to p1Num for each of its words
            p1Num += trainMatrix[i]
            p1Denom += sum(trainMatrix[i])
        else:
            p0Num += trainMatrix[i]
            p0Denom += sum(trainMatrix[i])
    p1Vect = p1Num / p1Denom  # per-word probability in abusive sentences
    p0Vect = p0Num / p0Denom
    return p0Vect, p1Vect, pAbusive
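To show how trainNB0's output can be used, here is a sketch on a hypothetical two-document training set (the trainMat data and the classify helper are my own, not from the book). It multiplies each class's per-word probabilities with the class prior and picks the larger product. Note that because the counts start from zeros(), a word never seen in a class zeroes that class's whole score; that is the weakness a later refinement (starting counts at one and summing log probabilities) is meant to fix.

```python
from numpy import array, zeros

def trainNB0(trainMatrix, trainCategory):
    numTrainDocs = len(trainMatrix)
    numWords = len(trainMatrix[0])
    pAbusive = sum(trainCategory) / float(numTrainDocs)
    p0Num = zeros(numWords); p1Num = zeros(numWords)
    p0Denom = 0.0; p1Denom = 0.0
    for i in range(numTrainDocs):
        if trainCategory[i] == 1:
            p1Num += trainMatrix[i]
            p1Denom += sum(trainMatrix[i])
        else:
            p0Num += trainMatrix[i]
            p0Denom += sum(trainMatrix[i])
    return p0Num / p0Denom, p1Num / p1Denom, pAbusive

# Hypothetical vocabulary: ['my', 'dog', 'cute', 'stupid', 'worthless']
trainMat = [array([1, 1, 1, 0, 0]),   # 'my dog cute'          -> label 0
            array([0, 1, 0, 1, 1])]   # 'dog stupid worthless' -> label 1
p0V, p1V, pAb = trainNB0(trainMat, [0, 1])

# Naive Bayes decision: multiply per-word probabilities of present words
# with the class prior, then take the larger score.
def classify(vec, p0V, p1V, pAb):
    p1 = pAb
    p0 = 1.0 - pAb
    for i, present in enumerate(vec):
        if present:
            p1 *= p1V[i]
            p0 *= p0V[i]
    return 1 if p1 > p0 else 0

print(classify([0, 1, 0, 1, 0], p0V, p1V, pAb))  # 'dog stupid' -> 1
```

Here 'stupid' never occurs in the normal document, so the normal-class score collapses to zero and the sentence is labeled abusive.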