Developing a Bitcoin Miner (Part 1)


Development is split into two parts. Part A: LSP (the Live Sequence Protocol); Part B: a distributed Bitcoin miner.

Project location: https://github.com/modiziri/p1

Body:

"First of all, the low-level network protocols, called low-level because this IP can only provide unreliable data delivery services, that is, this simple data transfer can easily lead to delay, packet loss and duplication." Also, there is the maximum byte limit. Fortunately, however, the transmission below 1500 bytes is relatively safe, but if it is over, it is very easy to have the above problem.

Almost no application transmits data directly over IP; instead, applications use UDP or TCP.

UDP, the User Datagram Protocol. This is also an unreliable datagram service, but it lets packets be delivered to different ports on the same machine, so one computer can run several clients or servers at once. This technique is called multiplexing.

TCP, the Transmission Control Protocol. Unlike UDP, this protocol provides a reliable, ordered stream service. It works by breaking a stream of data into separate packets for transmission and reassembling them at the receiving end. TCP handles packet loss and duplicate packets, and it keeps a fast sender from overwhelming the receiver, managing both the network's bandwidth and the receiver's buffer.
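Since LSP will be implemented on top of UDP, a tiny Go sketch of UDP's send/receive model may help ground the discussion. The address, port, and payload here are arbitrary placeholders; the snippet only shows that UDP delivers whole datagrams with no reliability guarantees, which is exactly the gap LSP has to fill.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// A UDP "server": bind a local port and wait for one datagram.
	serverAddr, err := net.ResolveUDPAddr("udp", "127.0.0.1:9999")
	if err != nil {
		panic(err)
	}
	server, err := net.ListenUDP("udp", serverAddr)
	if err != nil {
		panic(err)
	}
	defer server.Close()

	// A UDP "client": send one datagram to the server's port.
	client, err := net.DialUDP("udp", nil, serverAddr)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	client.Write([]byte("hello over UDP")) // may be lost, duplicated, or delayed

	// UDP preserves datagram boundaries, but nothing else: no ordering,
	// no retransmission, no connection state. That is what LSP must add.
	buf := make([]byte, 1500) // stay at or below the usual 1500-byte limit
	n, remote, err := server.ReadFromUDP(buf)
	if err != nil {
		panic(err)
	}
	fmt.Printf("received %q from %v\n", buf[:n], remote)
}
```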

This time, however, we are going to build LSP (the Live Sequence Protocol), which resembles both of these and yet differs from them.

Features: unlike UDP and TCP, LSP provides a client-server communication model, which saves a great deal of engineering effort.

The server is connected to a group of clients (here you can finally see a hint of the mining machine), and each client connection gets its own connection ID.

Communication in each direction of a client-server connection consists of a sequence of discrete (non-continuous) messages.

Message size is limited by the size of a UDP packet, roughly 1000 bytes.

Delivery is reliable: each message is received exactly once, and in the order it was sent.

The client and server both monitor the connection, and if either side is lost, the other finds out promptly. (Another layer of safety.)

LSP messages: each message carries up to four fields:

Message type: one of only three kinds:

Connect: sent by a client to establish a connection with the server

Data: carries information sent by a client or the server

Ack: sent by the client or server to acknowledge a Connect or Data message. (A small aside: think of it as something being left for you in a public place; you have to ACK before the other side will hand over more, which is quite similar to the relationship between add and commit in Git.)

Connection ID: a non-zero positive integer used to distinguish client-server connections

Sequence number: messages on the same connection carry incrementing sequence numbers, which establish their order; 0 marks the initial connection request

Payload: the data being carried, generally a sequence of bytes whose format is decided by the application. (The application decides what, and how much, it wants to send.)

These messages are written in the following notation (a Go sketch of the message structure follows the list):

(Connect, 0, 0): a connection request; the first 0 is the connection ID and the second is the sequence number (both are normally 0 when establishing a connection).

(Data, id, sn, D): a data message, where id is the connection ID, sn is the sequence number, and D is the payload.

(Ack, id, sn): acknowledges a Data or Connect message; id and sn mean the same as above. When sn is 0, it naturally acknowledges the connection itself.
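As a rough illustration only (none of these names come from the actual starter code), the three message forms above could be modeled in Go like this:

```go
package lspsketch // illustrative package name, not the real starter code

// MsgType is one of the three LSP message types.
type MsgType int

const (
	MsgConnect MsgType = iota // connection request
	MsgData                   // carries a payload
	MsgAck                    // acknowledges a Connect or Data message
)

// Message mirrors the (type, connection ID, sequence number, payload)
// tuples described above. Field names are guesses for illustration.
type Message struct {
	Type    MsgType
	ConnID  int    // 0 in a Connect request, non-zero once assigned by the server
	SeqNum  int    // 0 for the initial request, then incrementing per direction
	Payload []byte // nil for Connect and Ack messages
}

// Constructors matching the three forms in the text.
func NewConnect() *Message { // (Connect, 0, 0)
	return &Message{Type: MsgConnect}
}

func NewData(id, sn int, d []byte) *Message { // (Data, id, sn, D)
	return &Message{Type: MsgData, ConnID: id, SeqNum: sn, Payload: d}
}

func NewAck(id, sn int) *Message { // (Ack, id, sn)
	return &Message{Type: MsgAck, ConnID: id, SeqNum: sn}
}
```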

Let me walk through the whole connection process step by step:

Establishing a connection: before any data can be transferred, a connection must be established. The connection request is sent by the client to the server. In response, the server generates a unique connection ID for the new connection, then packages that ID with sequence number 0 and a nil payload and sends it back to the client as the acceptance signal. In short, the client sends (Connect, 0, 0) and the server replies with (Ack, id, 0).

How connection IDs are chosen is not rigidly specified; in this project we simply start the IDs at 1.
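A minimal sketch of the server's side of this handshake, continuing in the same illustrative package and building on the Message helpers above (the struct and its fields are hypothetical, not the project's real layout):

```go
// server tracks the connections it has accepted, keyed by remote address.
type server struct {
	nextConnID int            // zero-valued; the first assigned ID becomes 1
	conns      map[string]int // remote "host:port" -> assigned connection ID
}

// handleConnect processes a (Connect, 0, 0) from remoteAddr and returns the
// (Ack, id, 0) that accepts it. A duplicate Connect gets the same ID again.
func (s *server) handleConnect(remoteAddr string) *Message {
	if id, ok := s.conns[remoteAddr]; ok {
		return NewAck(id, 0) // already connected: just re-acknowledge
	}
	s.nextConnID++
	s.conns[remoteAddr] = s.nextConnID
	return NewAck(s.nextConnID, 0) // sequence number 0, nil payload
}
```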

Sending and acknowledging: once the connection is established, messages can flow in both directions, each one stamped with a sequence number as described above. Assuming all messages are legal, the client and the server each maintain their own set of sequence numbers, which keeps the two directions apart. Why is that useful? First, the server and the client signal each other asynchronously: you may still be waiting for one acknowledgment when you send the next message, or the next message may go out before the previous acknowledgment has arrived. A single shared sequence could not cope with that at all. Second, a message sent earlier may well arrive after one sent later; without sequence numbers the order would get scrambled.
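A sketch of the per-connection bookkeeping this implies, with each direction keeping its own counter (names are again invented, continuing the sketches above):

```go
// connState holds one endpoint's view of a single connection.
type connState struct {
	connID      int
	nextSendSeq int // sequence number for the next outgoing Data message
	lastRecvSeq int // highest in-order sequence number received so far
}

// nextOutgoing stamps payload d with this side's own send counter; the other
// direction's counter is never touched, so the two streams stay independent.
// The receiving side is sketched further below.
func (c *connState) nextOutgoing(d []byte) *Message {
	c.nextSendSeq++
	return NewData(c.connID, c.nextSendSeq, d)
}
```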

Like TCP, LSP uses a sliding window protocol. The name is a literal translation and a little awkward; what it really amounts to is a cap on how much may be in flight at once. (Personally I think the name is awful...) Put figuratively, the protocol sets a ceiling on outstanding messages. We know that every message between client and server must be acknowledged one by one, and they accumulate; at any moment a message may still be on its way, not yet processed, or lost (ignore that case for now). So we set a transmission limit to make the most of the link: if the limit is a = 1, then after the first message goes out, the second cannot be sent until the acknowledgment comes back; if a = 2, two messages may be outstanding, and the third cannot be sent until one of them is acknowledged.

Of course, sends follow the sequence numbers: a message with sequence number 3 can never be processed ahead of the one with sequence number 1; everything stays in order. In other words, once the message with sequence number n is stuck unacknowledged, the highest sequence number that may be sent is n + a - 1.

So if a message gets stuck, is that the end of it? No: a stuck message will automatically be resent after a while, and if it is still stuck it keeps being resent, though the window stays blocked in the meantime... This sliding window protocol applies to both the client and the server. (A sketch of the window bookkeeping follows.)
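Here is one way that bookkeeping could look under the assumptions above: a is the window size, n the oldest unacknowledged sequence number, and only sequence numbers up to n + a - 1 may be in flight. All names are made up for illustration.

```go
// slidingWindow decides which Data messages may be sent right now.
type slidingWindow struct {
	size          int          // the window size, "a" in the text
	oldestUnacked int          // smallest sequence number sent but not yet acknowledged, "n"
	nextSeq       int          // sequence number of the next message to send
	pending       map[int]bool // sequence numbers sent but not yet acknowledged
}

func newWindow(size int) *slidingWindow {
	// Data sequence numbers start at 1 (0 is the connection request).
	return &slidingWindow{size: size, oldestUnacked: 1, nextSeq: 1, pending: map[int]bool{}}
}

// canSend reports whether the next message may go out: at most `size`
// messages may be unacknowledged, so the ceiling is oldestUnacked + size - 1.
func (w *slidingWindow) canSend() bool {
	return w.nextSeq < w.oldestUnacked+w.size
}

// markSent records that the message with the current nextSeq went out.
func (w *slidingWindow) markSent() {
	w.pending[w.nextSeq] = true
	w.nextSeq++
}

// onAck removes an acknowledged message and slides the window forward
// past every sequence number that is no longer pending.
func (w *slidingWindow) onAck(sn int) {
	delete(w.pending, sn)
	for w.oldestUnacked < w.nextSeq && !w.pending[w.oldestUnacked] {
		w.oldestUnacked++
	}
}
```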

Here is an important improvement:

In-order transmission guarantees safety: no packet is lost without eventually being resent. But it also has a problem: if a packet is lost or some other failure occurs, the client, the server, or both simply stop making progress, waiting for the lost packet.

To make LSP more functional and robust, we use the following mechanism.

We first give both the client and the server a simple periodic timer that fires at regular intervals, slicing time into a series of epochs. Call the epoch length B; the default is 2000 milliseconds, though this value is configurable.

Each time an epoch fires, the client does the following (a sketch follows the list below):

1. If its connection request has not yet been acknowledged by the server, resend the connection request.

2. If the connection request has been sent and acknowledged, but no data has been received yet, send an acknowledgment with sequence number 0. (Explanation: if the server has acknowledged the connection but we have received no data, there are only two possibilities: either there simply is no data yet, which is harmless, or a response was lost. Resending the sequence-0 acknowledgment, i.e. the reply to Connect, keeps a lost packet from stalling the connection.)

3. Resend every data message that has been sent but not yet acknowledged.

4. If things are still stuck, resend acknowledgments for the last a data messages received (a is the transmission cap defined above); note that only acknowledgments are sent.
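Here is a rough sketch of how a client might act on these four rules when its epoch timer fires (for example from a time.Ticker running every B milliseconds). Every identifier below is invented for illustration and continues the earlier sketches; the real starter code will differ.

```go
// client holds just enough state to express the four epoch rules.
type client struct {
	connID      int
	connected   bool           // has (Ack, id, 0) arrived yet?
	lastRecvSeq int            // highest in-order data sequence number received
	unackedSent map[int][]byte // data messages sent but not yet acknowledged
	windowSize  int            // the window size "a"
	send        func(*Message) // writes one message to the UDP socket
}

// lastReceived lists up to n of the most recently received sequence numbers.
func (c *client) lastReceived(n int) []int {
	var sns []int
	for sn := c.lastRecvSeq; sn > 0 && len(sns) < n; sn-- {
		sns = append(sns, sn)
	}
	return sns
}

// onEpoch runs once per epoch and mirrors rules 1-4 above.
func (c *client) onEpoch() {
	if !c.connected {
		c.send(NewConnect()) // rule 1: connection request still unanswered
		return
	}
	if c.lastRecvSeq == 0 {
		c.send(NewAck(c.connID, 0)) // rule 2: connected, but no data seen yet
	}
	for sn, payload := range c.unackedSent { // rule 3: resend unacknowledged data
		c.send(NewData(c.connID, sn, payload))
	}
	for _, sn := range c.lastReceived(c.windowSize) { // rule 4: re-ack the last a messages
		c.send(NewAck(c.connID, sn))
	}
}
```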

The server sets up a similar per-connection mechanism (again, a sketch follows the list):

1. If no data has been received from the client yet, resend the acknowledgment of its connection request.

2. Resend every data message that has been sent but not yet acknowledged.

3. If things are still stuck, resend acknowledgments for the last a data messages received (a is the transmission cap defined above); again, only acknowledgments are sent.
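And the matching per-connection sketch on the server side, under the same illustrative assumptions:

```go
// serverConn is the server's state for one client connection.
type serverConn struct {
	connID      int
	gotData     bool           // has any data arrived from this client yet?
	lastRecvSeq int            // highest in-order data sequence number received
	unackedSent map[int][]byte // data messages sent but not yet acknowledged
	windowSize  int            // the window size "a"
	send        func(*Message)
}

// onEpoch runs once per epoch for each connection and mirrors rules 1-3.
func (sc *serverConn) onEpoch() {
	if !sc.gotData {
		sc.send(NewAck(sc.connID, 0)) // rule 1: re-acknowledge the connection request
	}
	for sn, payload := range sc.unackedSent { // rule 2: resend unacknowledged data
		sc.send(NewData(sc.connID, sn, payload))
	}
	// Rule 3: re-acknowledge the last `windowSize` data messages received.
	for sn := sc.lastRecvSeq; sn > 0 && sn > sc.lastRecvSeq-sc.windowSize; sn-- {
		sc.send(NewAck(sc.connID, sn))
	}
}
```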

The epoch timer may still not be entirely clear, so here is an example. Suppose the client sends data message i, but the acknowledgment for it is dropped (a problem on the server's side of the exchange). At the same time the server wants to send data message j, but that packet is lost in transit and never arrives; unlike the previous case, this is a problem with the server's outgoing traffic. When the epoch fires on the client, the client resends the acknowledgment of data message j-1 (the last data message it actually received); note that the trigger is on the client, so it can only re-acknowledge j-1, not j. The server then receives an acknowledgment again, and the client also resends data message i.

If the epoch also fires on the server, note that the server runs asynchronously from the client, so the two triggers will most likely discover the same situation: both sides see a problem, and both sides fix it at roughly the same time (with different remedies, each taking its own measures). The server's remedy is to resend the acknowledgment of data message i and then resend data message j. Look closely and you will see that either side alone could resolve things, and both are resolving the same problem.

The example above also shows how duplicate packets arise, which are really duplicate-delivery errors. In most cases the sequence numbers take care of this: each end keeps a counter, compares the sequence numbers of incoming packets against it, and discards the duplicates. Duplicate connection requests are another example: a client may resend its connection request several times. The server must therefore keep track of the sender's address, remember each connection request it has seen, and discard any request from a host whose connection has already been established.
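Continuing the connState sketch from earlier, the duplicate-discarding logic for incoming data could look roughly like this (a duplicate is dropped but re-acknowledged, so the sender can still make progress):

```go
// onData handles one incoming Data message. It reports whether the payload
// should be delivered to the application and which Ack, if any, to send back.
func (c *connState) onData(m *Message) (deliver bool, reply *Message) {
	if m.SeqNum <= c.lastRecvSeq {
		// Duplicate: we already delivered this one. Discard it, but resend
		// its Ack in case the earlier Ack was the packet that got lost.
		return false, NewAck(c.connID, m.SeqNum)
	}
	if m.SeqNum != c.lastRecvSeq+1 {
		// Out of order: ignore for now; the sender will retransmit at a
		// later epoch and in-order delivery will catch up.
		return false, nil
	}
	c.lastRecvSeq++
	return true, NewAck(c.connID, m.SeqNum) // new in-order data: deliver and acknowledge
}
```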

Epoch triggers were introduced above, but how should the epoch length be chosen? In general we design it so that at least one message can be in transit within each epoch. Epochs also have another important role: for each connection, we count how many epochs have gone by (that is, empty epochs) since the last message was received from the other end, up to the current epoch. Once this count exceeds a limit, defined here as K, we assume the connection has been lost. In our implementation the default value of K is 5, so if an established connection has received nothing within a span of K*B, we can conclude the connection is lost. Reminder: B is the epoch interval defined above, with a default of 2000 milliseconds.
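A sketch of that lost-connection rule, with K and B as defined above (the type and method names are invented for illustration):

```go
// livenessTracker counts epochs during which nothing arrived on a connection.
type livenessTracker struct {
	epochLimit   int // K, default 5
	silentEpochs int // epochs elapsed since the last incoming message
}

// onMessageReceived is called whenever anything arrives on the connection.
func (t *livenessTracker) onMessageReceived() {
	t.silentEpochs = 0
}

// onEpoch is called when the epoch timer fires and reports whether the
// connection should now be declared lost (silent for K epochs, i.e. K*B).
func (t *livenessTracker) onEpoch() (lost bool) {
	t.silentEpochs++
	return t.silentEpochs >= t.epochLimit
}
```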

(To be continued)

