One application direction of machine learning is teaching machines to understand images: identifying, tracking, and measuring the objects in an image.
Applications: driverless cars, face recognition, license plate recognition, gesture recognition (e.g., for games).
PIL: for static images
OpenCV: for dynamic input (video)
import pandas as pd
import numpy as np
from PIL import Image

train = pd.read_csv('train.csv')
for ind, row in train.iloc[1:10].iterrows():
    # the first column of each row is the label
    i = row[0]
    arr = np.array(row[1:], dtype=np.uint8)
    arr.resize((28, 28))
    im = Image.fromarray(arr)
    # ind is the image's index, i is the digit the image shows
    # saving fails if the './train_pics' folder does not exist
    im.save('./train_pics/%s-%s.png' % (ind, i))
OpenCV video input and output. This program was not run here because of environment configuration issues:
##################### camera input, output #####################
# -*- coding: utf-8 -*-
import cv2

clicked = False

def onMouse(event, x, y, flags, param):
    # event:
    #   CV_EVENT_MOUSEMOVE      0  move
    #   CV_EVENT_LBUTTONDOWN    1  left button down
    #   CV_EVENT_RBUTTONDOWN    2  right button down
    #   CV_EVENT_MBUTTONDOWN    3  middle button down
    #   CV_EVENT_LBUTTONUP      4  left button up
    #   CV_EVENT_RBUTTONUP      5  right button up
    #   CV_EVENT_MBUTTONUP      6  middle button up
    #   CV_EVENT_LBUTTONDBLCLK  7  left button double-click
    #   CV_EVENT_RBUTTONDBLCLK  8  right button double-click
    #   CV_EVENT_MBUTTONDBLCLK  9  middle button double-click
    # x, y: mouse position at the time of the event
    # flags:
    #   CV_EVENT_FLAG_LBUTTON   1  left button dragged
    #   CV_EVENT_FLAG_RBUTTON   2  right button dragged
    #   CV_EVENT_FLAG_MBUTTON   4  middle button dragged
    #   CV_EVENT_FLAG_CTRLKEY   8  (8~15)  Ctrl held during the event
    #   CV_EVENT_FLAG_SHIFTKEY  16 (16~31) Shift held during the event
    #   CV_EVENT_FLAG_ALTKEY    32 (32~39) Alt held during the event
    # param: user-defined data
    global clicked
    if event == cv2.cv.CV_EVENT_LBUTTONUP:
        clicked = True

# read camera input
cameraCapture = cv2.VideoCapture(0)
cv2.namedWindow('MyCamera')
# bind the mouse callback
cv2.setMouseCallback('MyCamera', onMouse)
print u'Click the window or press any key to exit.'
success, frame = cameraCapture.read()
while cv2.waitKey(1) == -1 and not clicked:
    if frame is not None:
        cv2.imshow('MyCamera', frame)
    success, frame = cameraCapture.read()
cv2.destroyWindow('MyCamera')
1. Haar cascade classifier
Haar cascade classifier = Haar-like feature detection + AdaBoost
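A Haar-like feature is just the difference between the pixel sums of adjacent rectangles, and it can be evaluated in constant time from an integral image (summed-area table). The sketch below shows that mechanism on an illustrative 4x4 array; it is a simplified assumption of how such features work, not the OpenCV implementation.

```python
import numpy as np

def integral_image(img):
    # ii gets an extra zero row/column so rect_sum needs no edge cases;
    # ii[r, c] holds the sum of img[:r, :c]
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    # sum over img[r:r+h, c:c+w] from just four table lookups
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect(ii, r, c, h, w):
    # two-rectangle "edge" feature: left half minus right half (w even)
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)

img = np.arange(16).reshape(4, 4)   # toy "image"
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))     # pixels 5+6+9+10 -> 30
print(haar_two_rect(ii, 0, 0, 4, 4))  # 52 - 68 -> -16
```

Because each rectangle sum costs four lookups regardless of its size, the detector can evaluate huge numbers of features at every window position and scale.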
The key idea of AdaBoost: combine several weak classifiers into one strong classifier.
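A minimal sketch of that combining step (not the OpenCV training pipeline): each round trains one threshold "stump" on re-weighted samples, upweights the samples it got wrong, and gives the stump a vote proportional to its accuracy. The 1-D dataset, stump form, and round count here are illustrative assumptions.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=3):
    """Tiny AdaBoost with threshold stumps on 1-D data (illustrative)."""
    n = len(X)
    w = np.ones(n) / n               # sample weights, start uniform
    stumps = []                      # list of (threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        # pick the stump minimizing the weighted error
        for thr in X:
            for pol in (1, -1):
                pred = np.where(pol * (X - thr) >= 0, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, pol, pred)
        err, thr, pol, pred = best
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # this stump's vote weight
        stumps.append((thr, pol, alpha))
        # upweight the samples this stump misclassified
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
    return stumps

def predict(stumps, X):
    # weighted vote of all stumps
    score = sum(alpha * np.where(pol * (X - thr) >= 0, 1, -1)
                for thr, pol, alpha in stumps)
    return np.where(score >= 0, 1, -1)

X = np.array([0., 1., 2., 3., 4., 5.])
y = np.array([1, 1, -1, -1, 1, 1])   # no single stump can separate this
stumps = train_adaboost(X, y, n_rounds=3)
print(predict(stumps, X))
```

No single threshold splits this labeling, so every individual stump is weak, yet after three rounds the weighted vote classifies all six points correctly.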
# -*- coding: utf-8 -*-
# Comment by Heibanke
import cv2

# Instantiate the classifiers.
# The argument is an XML file holding a pre-trained strong classifier.
face_cascade = cv2.CascadeClassifier('./xml/haarcascade_frontalface.xml')
eye_cascade = cv2.CascadeClassifier('./xml/haarcascade_eye.xml')

# open the image and convert it to grayscale
img = cv2.imread('./pics/test_faces.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# gray = cv2.imread('test1.jpg', cv2.CV_LOAD_IMAGE_GRAYSCALE)

# detectMultiScale parameters:
#   gray: grayscale image to detect on
#   scaleFactor = 1.2: how much the search window grows between two
#     successive scans; e.g. 1.1 enlarges the window by 10% each pass
#   minNeighbors = 2: minimum number of neighboring rectangles that make up
#     a detection (default 1). A region backed by fewer than minNeighbors-1
#     neighboring rectangles is rejected; a region with only one matching
#     rectangle may be a false positive, so it is not treated as a face.
#     With 0, all candidate rectangles are returned unfiltered.
#     We choose 2, keeping only rectangles confirmed by their neighbors.
faces = face_cascade.detectMultiScale(gray, 1.2, 2)
for (x, y, w, h) in faces:
    # cv2.rectangle parameters:
    #   (x, y) is the top-left corner of the rectangle
    #   (x+w, y+h) is the bottom-right corner
    #   (255, 0, 0) is the color (OpenCV uses BGR, so this is blue)
    #   2 is the line width of the drawn rectangle
    cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
    roi_gray = gray[y:y+h, x:x+w]
    roi_color = img[y:y+h, x:x+w]
    # detect the eyes inside each detected face
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)

cv2.imshow('img', img)
k = cv2.waitKey(0)
if k == 27:          # Esc
    cv2.destroyWindow('img')