Python implementation of a decision tree: the ID3 algorithm

Decision Tree:
ID3 algorithm. 1. Shannon entropy: if the items to be classified can fall into several classes, the information of x_i is defined as l(x_i) = -log2 P(x_i), where P(x_i) is the probability of choosing that class. Entropy is defined as the expected value of the information, computed as H = -sum_{i=1..n} P(x_i) * log2 P(x_i). The higher the entropy, the more the different classes are mixed together and the more disordered the data set is. The Shannon entropy is computed over the class labels in the last column of each record in the dataSet (featVec[-1]); the code is as follows:
import math

def calcShannonEnt(dataSet):
    numEntries = len(dataSet)
    labelCounts = {}
    # count how many records fall into each class (the class label is the last column)
    for featVec in dataSet:
        currentLabel = featVec[-1]
        labelCounts[currentLabel] = labelCounts.get(currentLabel, 0) + 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * math.log(prob, 2)
    return shannonEnt
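As a quick sanity check, a minimal sketch of calling calcShannonEnt is shown below; the small toy data set myDat is an assumption made up for illustration and does not come from the original code:

# a hypothetical toy data set: two binary features plus a class label in the last column
myDat = [[1, 1, 'yes'],
         [1, 1, 'yes'],
         [1, 0, 'no'],
         [0, 1, 'no'],
         [0, 1, 'no']]

print(calcShannonEnt(myDat))   # roughly 0.971 for a 2-versus-3 class split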
2. Splitting the data set according to a given feature:
def splitDataSet(dataSet, axis, value):
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            # keep the row, but strip out the feature column used for the split
            reducedFeatVec = featVec[:axis]
            reducedFeatVec.extend(featVec[axis+1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet
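Continuing with the hypothetical myDat list from the sketch above, splitting on feature 0 could look like this (illustration only):

print(splitDataSet(myDat, 0, 1))   # [[1, 'yes'], [1, 'yes'], [0, 'no']]
print(splitDataSet(myDat, 0, 0))   # [[1, 'no'], [1, 'no']]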
This is somewhat like the filter function in Excel: filtering the matrix (the list of lists) keeps only the rows where featVec[1] == 'prescript'.

Step 1: select the column and the value to filter on (if featVec[axis] == value). Step 2: Excel automatically returns the list of rows whose chosen column contains the value, but in Python the matching row has to be sliced and recombined, as follows:
reducedFeatVec = featVec[:axis]
reducedFeatVec.extend(featVec[axis+1:])
retDataSet.append(reducedFeatVec)
Tips: the difference between list.append and list.extend. Both add elements at the end of a list:
>>> a = [1, 2, 3]
>>> b = [4, 5, 6]
>>> a.append(b)
>>> a
[1, 2, 3, [4, 5, 6]]
With append, the list gains a fourth element, and that fourth element is itself a list.
>>> a = [1, 2, 3]
>>> a.extend(b)
>>> a
[1, 2, 3, 4, 5, 6]
With extend, you get a single list containing all the elements of a and b.
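To make the slice-and-extend step concrete, here is a minimal sketch; the row contents and the axis value are made up for illustration:

featVec = ['young', 'prescript', 'yes', 'normal', 'soft']   # a hypothetical row
axis = 1                                   # drop the column used for the split
reducedFeatVec = featVec[:axis]            # ['young']
reducedFeatVec.extend(featVec[axis+1:])    # ['young', 'yes', 'normal', 'soft']
print(reducedFeatVec)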
3. Choosing the best way to split the data set. Requirement 1: the data must be a list of list elements, and all the inner lists must have the same length. Requirement 2: the last column of the data, i.e. the last element of each instance, must be the class label of that instance.
def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1   # the last column holds the class label
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):
        # build the list of unique values taken by this feature
        featList = [example[i] for example in dataSet]
        uniqueVals = set(featList)
        newEntropy = 0.0
        # compute the entropy of each way of splitting on this feature
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy
        # keep the feature with the best information gain
        if infoGain > bestInfoGain:
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature
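Putting the three functions together, a minimal sketch of a run on the hypothetical myDat data set from earlier could look like this; the result 0 simply means the first feature gives the largest information gain on that toy data:

print(chooseBestFeatureToSplit(myDat))   # 0: splitting on feature 0 gives the highest gain here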