Python socket (socket)


Socket module: I. Overview

A socket describes an IP address and a port and is a handle to a communication channel; applications usually issue requests to the network, or respond to network requests, through sockets.

Sockets originate from UNIX, and one of the basic philosophies of Unix/Linux is "everything is a file": a file is operated on with an open, read/write, close pattern. A socket is an implementation of this pattern: it is a special kind of file, and the socket functions are operations on it (read/write I/O, open, close).

The difference between a socket and a file:

    • The file module performs "open", "read/write" and "close" on a specified file
    • The socket module performs "open", "read/write" and "close" on a server-side or client socket

Socket server:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import socket

ip_port = ('127.0.0.1', 9999)

sk = socket.socket()
sk.bind(ip_port)
sk.listen(5)

while True:
    print 'server waiting...'
    conn, addr = sk.accept()
    client_data = conn.recv(1024)
    print client_data
    conn.sendall("Don't answer, don't answer, don't answer")
    conn.close()
Socket client:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import socket

ip_port = ('127.0.0.1', 9999)

sk = socket.socket()
sk.connect(ip_port)
sk.sendall('request to occupy the earth')
server_reply = sk.recv(1024)
print server_reply
sk.close()

Web Service Applications:

#!/usr/bin/env python
# coding: utf-8
import socket

def handle_request(client):
    buf = client.recv(1024)
    client.send("HTTP/1.1 200 OK\r\n\r\n")
    client.send("Hello, World")

def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(('localhost', 8080))
    sock.listen(5)
    while True:
        connection, address = sock.accept()
        handle_request(connection)
        connection.close()

if __name__ == '__main__':
    main()
II. Explanation

sk = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)

Parameter one, the address family:

    • socket.AF_INET: IPv4 (default)
    • socket.AF_INET6: IPv6
    • socket.AF_UNIX: can only be used for inter-process communication on a single UNIX system

Parameter two, the socket type:

    • socket.SOCK_STREAM: stream socket, for TCP (default)
    • socket.SOCK_DGRAM: datagram socket, for UDP
    • socket.SOCK_RAW: raw socket. An ordinary socket cannot handle network messages such as ICMP or IGMP, but SOCK_RAW can, and it can also handle special IPv4 packets. In addition, with a raw socket the user can construct the IP header via the IP_HDRINCL socket option. Raw sockets provide low-level access to the underlying protocols and are used when special operations are required, such as sending ICMP packets; they are typically used only by advanced users or by programs run by administrators.
    • socket.SOCK_RDM: a reliable form of UDP that guarantees delivery of datagrams but does not guarantee their order.
    • socket.SOCK_SEQPACKET: reliable, sequenced packet service

Parameter three, the protocol:

    • 0 (default): a protocol related to the given address family; if 0, the system automatically selects a suitable protocol based on the address format and socket category.
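For example, a minimal sketch of the constructor with these parameters (the defaults are written out explicitly for clarity):

import socket

# IPv4 + TCP; equivalent to socket.socket() with the default arguments
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# IPv4 + UDP; protocol 0 lets the system pick a suitable protocol
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, 0)

tcp_sock.close()
udp_sock.close()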

sk.bind(address)

sk.bind() binds the socket to address. The format of address depends on the address family; under AF_INET, address is a tuple (host, port).

sk.listen(backlog)

Starts listening for incoming connections. backlog specifies the maximum number of connections that can be pending before new connections are refused.

    • A backlog of 5 means the kernel will queue at most 5 connection requests that the server has not yet handled with accept().
    • This value cannot be unlimited, because the pending-connection queue is maintained in the kernel.

sk.setblocking(bool)

Sets whether the socket blocks (default True). If set to False, accept() and recv() raise an error when no data is available.
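A minimal illustrative sketch (the server address is assumed to be the one from the examples above): with setblocking(False), recv() raises socket.error immediately when no data has arrived, instead of waiting.

import socket

sk = socket.socket()
sk.connect(('127.0.0.1', 9999))   # assumed server address
sk.setblocking(False)
try:
    data = sk.recv(1024)          # raises socket.error if no data is available yet
    print data
except socket.error:
    print 'no data available yet'
sk.close()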

sk.accept()

Accepts a connection and returns (conn, address), where conn is a new socket object that can be used to receive and send data, and address is the address of the connecting client. Waits for an incoming TCP client connection (blocking).

sk.connect(address)

Connects the socket to address. Generally, address is a tuple (hostname, port); if the connection fails, a socket.error is raised.

sk.connect_ex(address)

Same as above, but with a return value: 0 when the connection succeeds, an error code when it fails, for example 10061.
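A minimal sketch (the address is assumed) of checking the return value of connect_ex():

import socket

sk = socket.socket()
ret = sk.connect_ex(('127.0.0.1', 9999))   # assumed address
if ret == 0:
    print 'connected'
else:
    # a non-zero return is an OS error code, e.g. 10061 on Windows, 111 on Linux
    print 'connect failed, error code:', ret
sk.close()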

sk.close()

Closes the socket.

sk.recv(bufsize[, flag])

Receives data from the socket. The data is returned as a string; bufsize specifies the maximum amount that can be received. flag provides additional information about the message and can usually be omitted.

sk.recvfrom(bufsize[, flag])

Similar to recv(), but the return value is (data, address), where data is a string containing the received data and address is the address of the socket that sent it.

sk.send(string[, flag])

Sends the data in string to the connected socket. The return value is the number of bytes sent, which may be less than the length of string.

sk.sendall(string[, flag])

Sends the data in string to the connected socket, but tries to send all of it before returning. Returns None on success; raises an exception on failure.

sk.sendto(string[, flag], address)

Sends data to the socket; address is a tuple (ipaddr, port) specifying the remote address. The return value is the number of bytes sent. This function is mainly used with the UDP protocol.

sk.settimeout(timeout)

Sets the timeout for socket operations; timeout is a floating-point number in seconds. A value of None means no timeout. In general, timeouts should be set right after the socket is created, since they may be used by connection operations (for example, a client waiting at most 5 s to connect).
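A minimal sketch (address assumed): a client that waits at most 5 seconds for connect()/recv() and catches socket.timeout.

import socket

sk = socket.socket()
sk.settimeout(5)                      # applies to connect() and recv()
try:
    sk.connect(('127.0.0.1', 9999))   # assumed address
    print sk.recv(1024)
except socket.timeout:
    print 'operation timed out'
finally:
    sk.close()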

sk.getpeername()

Returns the remote address to which the socket is connected, typically a tuple (ipaddr, port).

sk.getsockname()

Returns the socket's own address, typically a tuple (ipaddr, port).
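A minimal sketch (server address assumed) showing both address helpers on a connected client:

import socket

sk = socket.socket()
sk.connect(('127.0.0.1', 9999))     # assumed server address
print 'remote:', sk.getpeername()   # (ip, port) of the server
print 'local :', sk.getsockname()   # (ip, port) picked locally by the OS
sk.close()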

sk.fileno()

Returns the socket's file descriptor.
UDP demo:

# UDP server
import socket

ip_port = ('127.0.0.1', 9999)

sk = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, 0)
sk.bind(ip_port)

while True:
    data = sk.recv(1024)
    print data

# UDP client
import socket

ip_port = ('127.0.0.1', 9999)

sk = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, 0)

while True:
    inp = raw_input('data: ').strip()
    if inp == 'exit':
        break
    sk.sendto(inp, ip_port)

sk.close()
III. Example: intelligent robot
Server:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import socket

ip_port = ('127.0.0.1', 8888)

sk = socket.socket()
sk.bind(ip_port)
sk.listen(5)

while True:
    conn, address = sk.accept()
    conn.sendall('Welcome to call 10086; press 1xxx, or 0 for manual service.')
    flag = True
    while flag:
        data = conn.recv(1024)
        if data == 'exit':
            flag = False
        elif data == '0':
            conn.sendall('This call may be recorded. Blah blah blah...')
        else:
            conn.sendall('Please re-enter.')
    conn.close()
Client:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import socket

ip_port = ('127.0.0.1', 8888)

sk = socket.socket()
sk.connect(ip_port)
sk.settimeout(5)

while True:
    data = sk.recv(1024)
    print 'receive:', data
    inp = raw_input('please input: ')
    sk.sendall(inp)
    if inp == 'exit':
        break

sk.close()
SocketServer module: I. Usage and source-code analysis

The plain socket server shown above handles client requests one at a time, in a blocking manner; the SocketServer module makes it possible to handle multiple requests at the same time.

Server:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import SocketServer

class MyServer(SocketServer.BaseRequestHandler):

    def handle(self):
        # print self.request, self.client_address, self.server
        conn = self.request
        conn.sendall('Welcome to call 10086; press 1xxx, or 0 for manual service.')
        flag = True
        while flag:
            data = conn.recv(1024)
            if data == 'exit':
                flag = False
            elif data == '0':
                conn.sendall('This call may be recorded. Blah blah blah...')
            else:
                conn.sendall('Please re-enter.')

if __name__ == '__main__':
    server = SocketServer.ThreadingTCPServer(('127.0.0.1', 8009), MyServer)
    server.serve_forever()
Client:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import socket

ip_port = ('127.0.0.1', 8009)

sk = socket.socket()
sk.connect(ip_port)
sk.settimeout(5)

while True:
    data = sk.recv(1024)
    print 'receive:', data
    inp = raw_input('please input: ')
    sk.sendall(inp)
    if inp == 'exit':
        break

sk.close()

Tracing the execution of the SocketServer source code, it can be streamlined to the following:

import socket
import threading
import select

def process(request, client_address):
    print request, client_address
    conn = request
    conn.sendall('Welcome to call 10086; press 1xxx, or 0 for manual service.')
    flag = True
    while flag:
        data = conn.recv(1024)
        if data == 'exit':
            flag = False
        elif data == '0':
            conn.sendall('This call may be recorded. Blah blah blah...')
        else:
            conn.sendall('Please re-enter.')

sk = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sk.bind(('127.0.0.1', 8002))
sk.listen(5)

while True:
    r, w, e = select.select([sk], [], [], 1)
    print 'looping'
    if sk in r:
        print 'get request'
        request, client_address = sk.accept()
        t = threading.Thread(target=process, args=(request, client_address))
        t.daemon = False
        t.start()

sk.close()

As the streamlined code shows, SocketServer handles requests concurrently thanks to two things, select and threading: the server essentially creates one thread per client, and that thread handles all requests from its client, which is how N clients can be connected at the same time (long connections).

FTP upload file (server side):

#!/usr/bin/env python
# coding: utf-8
import SocketServer
import os

class MyServer(SocketServer.BaseRequestHandler):

    def handle(self):
        base_path = 'g:/temp'
        conn = self.request
        print 'connected...'
        while True:
            pre_data = conn.recv(1024)
            # get the command, file name and file size
            cmd, file_name, file_size = pre_data.split('|')

            # to prevent packet sticking, send a confirmation signal to the client
            conn.sendall('nothing')

            # size of the file received so far
            recv_size = 0
            # build the path of the uploaded file
            file_dir = os.path.join(base_path, file_name)
            f = file(file_dir, 'wb')
            flag = True
            while flag:
                # upload not finished yet
                if int(file_size) > recv_size:
                    # receive at most 1024 bytes; possibly less
                    data = conn.recv(1024)
                    recv_size += len(data)
                    # write to the file
                    f.write(data)
                # upload finished, exit the loop
                else:
                    recv_size = 0
                    flag = False
            print 'upload successed.'
            f.close()

instance = SocketServer.ThreadingTCPServer(('127.0.0.1', 9999), MyServer)
instance.serve_forever()
FTP upload file (client):

#!/usr/bin/env python
# coding: utf-8
import socket
import sys
import os

ip_port = ('127.0.0.1', 9999)

sk = socket.socket()
sk.connect(ip_port)

container = {'key': '', 'data': ''}
while True:
    # the client enters the path of the file to upload
    path = raw_input('path: ')
    # get the file name from the path
    file_name = os.path.basename(path)
    # get the file size
    file_size = os.stat(path).st_size
    # send the file name and file size
    sk.send(file_name + '|' + str(file_size))
    # to prevent packet sticking, wait for the server's confirmation signal
    # before sending the file content
    sk.recv(1024)

    send_size = 0
    f = file(path, 'rb')
    flag = True
    while flag:
        if send_size + 1024 > file_size:
            data = f.read(file_size - send_size)
            flag = False
        else:
            data = f.read(1024)
            send_size += 1024
        sk.send(data)
    f.close()

sk.close()
Handling large files: send() writes to the send buffer only once, and the data passed in is not necessarily sent in full, so its return value is the number of bytes actually sent. For example:

    1023M = send(1G of data)   # only 1023M is actually sent; the remaining 1M is lost

sendall() internally keeps calling send() until all the data has been written to the buffer. For example:

    sendall(1G of data)
    first call:  send(1023M)
    second call: send(1M)

When sending a large file, it is not feasible to read the whole 1G into memory; you need to open the file, read it piece by piece, and send each piece in turn:
# size of the large file
file_size = os.stat(path).st_size

# open the large file
f = file(path, 'rb')

# data already sent
send_size = 0

while flag:
    # less than 1024 bytes of the file are left; everything else has been sent
    if send_size + 1024 > file_size:
        # read the remaining bytes (possibly only a few) from the file
        data = f.read(file_size - send_size)
        flag = False
    else:
        # read 1024 bytes from the file
        data = f.read(1024)
        # record how many bytes have been sent
        send_size += 1024
    # send the chunk to the buffer, at most 1024 bytes at a time
    sk.sendall(data)
II. select

select, poll and epoll on Linux are all mechanisms for I/O multiplexing.

I/O multiplexing means monitoring multiple descriptors and, as soon as one of them is ready (usually read-ready or write-ready), notifying the program so it can perform the corresponding read or write.

select: select first appeared in 4.2BSD in 1983. A select() system call monitors an array of file descriptors; when select() returns, the kernel has set the flag bits of the ready descriptors in the array, so the process can find them and perform the subsequent reads and writes. select is supported on almost all platforms, and this good cross-platform support is one of its advantages; in fact, it is by now one of the few advantages it has left. One disadvantage is that the number of file descriptors a single process can monitor is limited, 1024 on Linux, although the limit can be raised by modifying a macro definition or even recompiling the kernel. In addition, the data structure maintained by select() stores many file descriptors, and the cost of copying it grows linearly with their number. At the same time, network latency leaves many TCP connections inactive, yet select() still scans every socket linearly, which wastes further overhead.

poll: poll appeared in System V Release 3 in 1986. It does not differ substantially from select, except that poll has no limit on the maximum number of file descriptors. poll and select share the drawback that the whole array of file descriptors is copied between user space and kernel space regardless of whether the descriptors are ready, and this cost grows linearly with the number of descriptors. Also, once select() or poll() has reported a descriptor as ready, if the process performs no I/O on it, the descriptor is reported again on the next call, so ready notifications are generally not lost; this behaviour is called level triggering (level-triggered).

epoll: not until Linux 2.6 was there an implementation supported directly by the kernel, epoll, which has essentially all of the advantages mentioned above and is recognized as the best-performing multiplexed I/O readiness-notification method on Linux 2.6. epoll supports both level triggering and edge triggering (edge-triggered mode tells the process only which descriptors have just become ready, and it says so only once; if no action is taken, the notification is not repeated; this is theoretically faster, but the code is considerably more complex). epoll also reports only the descriptors that are ready: when epoll_wait() is called, the return value is not the actual descriptors but a count of ready descriptors, and the caller simply fetches that many descriptors from an array provided by epoll. Memory mapping (mmap) is used here, which eliminates the cost of copying the descriptors during the system call. Another essential improvement is that epoll uses event-based readiness notification: with select/poll the kernel scans all monitored descriptors only when the call is made, whereas epoll registers descriptors in advance via epoll_ctl(), and once a descriptor becomes ready the kernel activates it through a callback mechanism, so the process is notified as soon as it calls epoll_wait().
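The examples in this article use select (shown below); on Linux the same idea can also be written with epoll via the standard select module. A minimal, illustrative echo-server sketch (not from the original article; the address is assumed, level-triggered mode):

import socket
import select

sk = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sk.bind(('127.0.0.1', 8002))      # assumed address
sk.listen(5)
sk.setblocking(False)

ep = select.epoll()
ep.register(sk.fileno(), select.EPOLLIN)
conns = {}                        # fileno -> connection object

try:
    while True:
        for fd, event in ep.poll(1):
            if fd == sk.fileno():                 # listening socket readable: new client
                conn, addr = sk.accept()
                conn.setblocking(False)
                ep.register(conn.fileno(), select.EPOLLIN)
                conns[conn.fileno()] = conn
            elif event & select.EPOLLIN:          # a client socket is readable
                conn = conns[fd]
                data = conn.recv(1024)
                if data:
                    conn.sendall(data)            # echo the data back
                else:                             # empty read: client closed the connection
                    ep.unregister(fd)
                    conn.close()
                    del conns[fd]
finally:
    ep.close()
    sk.close()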

Python's select can be used to listen on multiple file descriptors:

Server:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import socket
import threading
import select

def process(request, client_address):
    print request, client_address
    conn = request
    conn.sendall('Welcome to call 10086; press 1xxx, or 0 for manual service.')
    flag = True
    while flag:
        data = conn.recv(1024)
        if data == 'exit':
            flag = False
        elif data == '0':
            conn.sendall('This call may be recorded. Blah blah blah...')
        else:
            conn.sendall('Please re-enter.')

s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(('127.0.0.1', 8020))
s1.listen(5)

s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s2.bind(('127.0.0.1', 8021))
s2.listen(5)

while True:
    r, w, e = select.select([s1, s2], [], [], 1)
    print 'looping'
    for s in r:
        print 'get request'
        request, client_address = s.accept()
        t = threading.Thread(target=process, args=(request, client_address))
        t.daemon = False
        t.start()

s1.close()
s2.close()
Client (port 8020):

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import socket

ip_port = ('127.0.0.1', 8020)

sk = socket.socket()
sk.connect(ip_port)
sk.settimeout(5)

while True:
    data = sk.recv(1024)
    print 'receive:', data
    inp = raw_input('please input: ')
    sk.sendall(inp)
    if inp == 'exit':
        break

sk.close()
Client (port 8021):

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import socket

ip_port = ('127.0.0.1', 8021)

sk = socket.socket()
sk.connect(ip_port)
sk.settimeout(5)

while True:
    data = sk.recv(1024)
    print 'receive:', data
    inp = raw_input('please input: ')
    sk.sendall(inp)
    if inp == 'exit':
        break

sk.close()
III. threading

Questions to think about:

    • What is the relationship between an application, a process and a thread?
    • Why use multiple CPUs?
    • Why use multithreading?
    • Why use multiple processes?
    • What is the difference between multithreading in Java or C# and multithreading in Python?
    • What is the Python GIL?
    • Choosing between threads and processes: compute-intensive versus IO-intensive programs (IO operations do not consume CPU); a small timing sketch follows this list.
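A minimal, illustrative timing sketch (not from the original article) for the last point: it runs the same CPU-bound function in two threads and then in two processes. Because of the GIL, the threaded version gains little over serial execution, while the process version can use two CPU cores; IO-bound work, by contrast, benefits from threads because the GIL is released while waiting on IO.

import time
import threading
from multiprocessing import Process

def burn():
    # pure CPU work, no IO
    n = 0
    for i in xrange(10 ** 7):
        n += i

def timed(label, workers):
    start = time.time()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print label, round(time.time() - start, 2), 'seconds'

if __name__ == '__main__':
    timed('threads  :', [threading.Thread(target=burn) for _ in range(2)])
    timed('processes:', [Process(target=burn) for _ in range(2)])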

1. Python Thread

The threading module provides thread-related operations; a thread is the smallest unit of work in an application.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import threading
import time

def show(arg):
    time.sleep(1)
    print 'thread ' + str(arg)

for i in range(10):
    t = threading.Thread(target=show, args=(i,))
    t.start()

print 'main thread stop'

The code above creates 10 "foreground" threads; control is then handed over to the CPU, which schedules them according to its algorithm and executes their instructions in slices.

More methods:

    • start: the thread is ready and waits for CPU scheduling
    • setName: set the thread's name
    • getName: get the thread's name
    • setDaemon: set the thread as a background thread or a foreground thread (the default); see the sketch after this list.
      If it is a background thread, it runs while the main thread runs, and once the main thread finishes, the background thread stops whether or not it has completed.
      If it is a foreground thread, it runs while the main thread runs, and after the main thread finishes, the program waits for the foreground thread to finish before stopping.
    • join: execute each thread one by one and continue only after it has finished.
    • run: this method is executed automatically after the thread is scheduled by the CPU.
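A small illustrative sketch (not from the original article) of setName()/getName(), setDaemon() and join():

import threading
import time

def worker(n):
    time.sleep(1)
    print 'worker', n, 'done'

t1 = threading.Thread(target=worker, args=(1,))
t1.setDaemon(True)          # background thread: killed if the main thread exits first
t1.start()

t2 = threading.Thread(target=worker, args=(2,))
t2.setName('front-worker')  # foreground thread (the default)
t2.start()
print t2.getName()

t2.join()                   # block until t2 has finished
print 'main thread stop'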
2. Thread Lock

Because threads are scheduled randomly, and a thread may execute only n instructions before the CPU switches to another thread, problems like the following can occur:

Without a thread lock:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import threading
import time

gl_num = 0

def show(arg):
    global gl_num
    time.sleep(1)
    gl_num += 1
    print gl_num

for i in range(10):
    t = threading.Thread(target=show, args=(i,))
    t.start()

print 'main thread stop'
With a thread lock:

#!/usr/bin/env python
# coding: utf-8
import threading
import time

gl_num = 0
lock = threading.RLock()

def func():
    lock.acquire()
    global gl_num
    gl_num += 1
    time.sleep(1)
    print gl_num
    lock.release()

for i in range(10):
    t = threading.Thread(target=func)
    t.start()
Extension: processes

1. Create a multi-process program

from multiprocessing import Process
import threading
import time

def foo(i):
    print 'say hi', i

for i in range(10):
    p = Process(target=foo, args=(i,))
    p.start()

Note: because data must be held separately by each process, creating a process carries a very large overhead.

2. Sharing data between processes

Each process holds its own copy of the data; by default, data is not shared between processes:

#!/usr/bin/env python
# coding: utf-8
from multiprocessing import Process
from multiprocessing import Manager
import time

li = []

def foo(i):
    li.append(i)
    print 'say hi', li

for i in range(10):
    p = Process(target=foo, args=(i,))
    p.start()

print 'ending', li
Sharing data between processes:

# Method 1: Array
from multiprocessing import Process, Array

temp = Array('i', [11, 22, 33, 44])

def foo(i):
    temp[i] = 100 + i
    for item in temp:
        print i, '----->', item

for i in range(2):
    p = Process(target=foo, args=(i,))
    p.start()
    p.join()

# Method 2: Manager.dict() shared data
from multiprocessing import Process, Manager

manage = Manager()
dic = manage.dict()

def foo(i):
    dic[i] = 100 + i
    print dic.values()

for i in range(2):
    p = Process(target=foo, args=(i,))
    p.start()
    p.join()
3. Process Pool
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from multiprocessing import Process, Pool
import time

def foo(i):
    time.sleep(2)
    return i + 100

def bar(arg):
    print arg

pool = Pool(5)
# print pool.apply(foo, (1,))
# print pool.apply_async(func=foo, args=(1,)).get()

for i in range(10):
    pool.apply_async(func=foo, args=(i,), callback=bar)

print 'end'
pool.close()
pool.join()

