Since you are already familiar with socket programming, you can write a similar simple server/client program. The Raspberry Pi again acts as the server and the computer as the client; only now the Raspberry Pi sends and the computer receives. The data is no longer control commands but image frames captured from a USB camera attached to the Raspberry Pi (implemented with the OpenCV and Python Imaging (PIL) libraries). This program runs alongside the previous one, simply on a separate port.
The basic implementation: OpenCV grabs each frame from the camera as a string of pixel data, PIL compresses it to JPEG format, and the result is sent out over a socket. The computer receives each chunk of data, turns the string back into a frame image, and displays it in a window. (The "video" you see is really a rapid succession of still photos.)
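Because each JPEG frame is delivered one per line, any raw newline bytes inside the JPEG data must be escaped before sending and restored on receipt, which is what the `replace` calls in the code do. A minimal standalone sketch of that escape/unescape round trip (written in modern Python 3 with bytes; the function names are mine, not from the original code):

```python
def escape_frame(jpeg):
    # Replace raw newlines with the 3-byte marker b"\-n" (backslash,
    # dash, n), then append a real newline as the frame delimiter.
    return jpeg.replace(b"\n", b"\\-n") + b"\n"

def unescape_frame(line):
    # Strip the trailing delimiter and restore the escaped newlines.
    return line[:-1].replace(b"\\-n", b"\n")

payload = b"\xff\xd8\x12\n\x34\xff\xd9"     # fake JPEG bytes containing b"\n"
wire = escape_frame(payload)
assert b"\n" not in wire[:-1]               # no stray delimiters mid-frame
assert unescape_frame(wire) == payload      # lossless round trip
```

Note that this scheme assumes the three-byte marker never occurs literally in the raw JPEG data; if it did, unescaping would corrupt the frame.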
If you are familiar with OpenCV, you can also process the images here, for example using Haar cascades for detection (face detection, human-body detection). This can be done either before sending, on the Raspberry Pi, or after receiving, on the computer.
Since the final requirement of this project is for the robot to run autonomously, most of the processing should be done on the Raspberry Pi, which raises performance concerns. The Raspberry Pi can run Haar-cascade face or upper-torso detection, but its performance is not sufficient and it only reaches a low frame rate, so colour recognition was eventually adopted instead. It runs much more smoothly in practice, as described in detail later.
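To illustrate why colour recognition is so much cheaper than cascade detection, here is a toy sketch of the idea: scan the frame for pixels inside a target RGB range and return the blob's centroid and size. In practice this would be done with `cv2.inRange` on an HSV image; the `find_color_blob` helper below is hypothetical, not part of the project code:

```python
def find_color_blob(pixels, lo, hi):
    """Find a colour blob in a frame.

    pixels: list of rows, each row a list of (r, g, b) tuples.
    lo/hi:  inclusive per-channel RGB bounds for the target colour.
    Returns (centroid_x, centroid_y, pixel_count), or None if no match.
    """
    xs, ys = [], []
    for y, row in enumerate(pixels):
        for x, (r, g, b) in enumerate(row):
            if lo[0] <= r <= hi[0] and lo[1] <= g <= hi[1] and lo[2] <= b <= hi[2]:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    # Integer centroid of all matching pixels.
    return (sum(xs) // len(xs), sum(ys) // len(ys), len(xs))
```

A single threshold-and-average pass like this is linear in the number of pixels, with no classifier windows to slide, which is why it keeps the frame rate up on the Pi.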
Code to run on the Raspberry Pi in the car:
#!/usr/bin/python
import socket, time
import cv
import cv2
import Image, StringIO

cap = cv2.VideoCapture(0)
ret = cap.set(3, 320)    # CV_CAP_PROP_FRAME_WIDTH
ret = cap.set(4, 240)    # CV_CAP_PROP_FRAME_HEIGHT

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", 9996))
sock.listen(2)
dst, dst_addr = sock.accept()
print "Destination Connected by", dst_addr

while True:
    ret, frame = cap.read()
    img = cv.fromarray(frame)
    pi = Image.fromstring("RGB", cv.GetSize(img), img.tostring())
    buf = StringIO.StringIO()
    pi.save(buf, format="JPEG")
    jpeg = buf.getvalue()
    buf.close()
    # Escape newlines so a bare "\n" can delimit frames on the wire.
    transfer = jpeg.replace("\n", "\-n")
    #print len(transfer), transfer[-1]
    try:
        dst.sendall(transfer + "\n")
        time.sleep(0.04)
    except Exception as ex:
        # Client dropped; wait for it to reconnect.
        dst, dst_addr = sock.accept()
        print "Destination Connected Again by", dst_addr
    except KeyboardInterrupt:
        print "Interrupted"
        break

dst.close()
sock.close()
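The newline-escaping protocol used here works, but it is fragile: if the escape marker itself ever appeared in the JPEG bytes, the frame would be corrupted on the receiving side. A more robust alternative is to prefix each frame with a fixed-size length header. A sketch in modern Python 3 (the helper names are my own, not part of the project code):

```python
import socket
import struct

def send_frame(sock, jpeg):
    # 4-byte big-endian length header followed by the raw JPEG payload;
    # no escaping needed, any byte value may appear in the body.
    sock.sendall(struct.pack(">I", len(jpeg)) + jpeg)

def recv_exact(sock, n):
    # Loop until exactly n bytes arrive (recv may return short reads).
    chunks = []
    while n:
        chunk = sock.recv(n)
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        chunks.append(chunk)
        n -= len(chunk)
    return b"".join(chunks)

def recv_frame(sock):
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# Demonstrate the round trip over an in-process socket pair.
a, b = socket.socketpair()
send_frame(a, b"\xff\xd8 bytes with \n inside \xff\xd9")
assert recv_frame(b) == b"\xff\xd8 bytes with \n inside \xff\xd9"
```

The length prefix also removes the need for `makefile()`/`readline()` on the receiving side.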
Code to run on the computer:
#!/usr/bin/python
import cv2.cv as cv
import cv2
import socket, time, Image, StringIO
import numpy as np

HOST, PORT = "10.0.1.13", 9996

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((HOST, PORT))
f = sock.makefile()

cv.NamedWindow("camera_server")
while True:
    msg = f.readline()
    if not msg:
        break
    # Undo the server's newline escaping, drop the trailing delimiter.
    jpeg = msg.replace("\-n", "\n")
    buf = StringIO.StringIO(jpeg[0:-1])
    buf.seek(0)
    pi = Image.open(buf)
    # 320x240 matches the frame size set on the server side.
    img = cv.CreateImageHeader((320, 240), cv.IPL_DEPTH_8U, 3)
    cv.SetData(img, pi.tostring())
    buf.close()
    frame_cvmat = cv.GetMat(img)
    frame = np.asarray(frame_cvmat)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

sock.close()
cv2.destroyAllWindows()