Build Your Own Blockchain Part 3 - Writing Nodes that Mine and Talk


Hello all and welcome to Part 3 of building the JACKBLOCKCHAIN (JBC). Quick recap: in Part 1 I coded and went over the top-level math and requirements for a single node to mine its own blockchain. It creates new blocks with valid information, saves them to a folder, and then starts mining a new block. Part 2 covered multiple nodes and the ability to sync: if Node 1 was doing the mining on its own and Node 2 wanted to grab Node 1's blockchain, it now can.

For Part 3, read the TL;DR right below to see what we have going for us, and then read the rest of the post to get a (hopefully) good sense of how this happened.

Other posts in this series:
Part 1 - Creating, Storing, Syncing, Displaying, Mining, and Proving Work
Part 2 - Syncing Chains From Different Nodes
Part 4.1 - Bitcoin Proof of Work Difficulty Explained
Part 4.2 - Ethereum Proof of Work Difficulty Explained

TL;DR

Nodes will compete to see who gets credit for mining a block. It's a race! To do this, we're adjusting mine.py to check whether we have a valid block by only checking a section of nonce values, rather than running through all the nonces until a match. Then APScheduler handles running the mining jobs with the different nonce ranges. We shift the mining to the background so node.py can mine as well as being a web service. By the end, we can have different nodes that are competing to mine and broadcast their mined blocks.

Before we start, here's the code on Github if you want to check out the whole thing. There are code segments in this post to illustrate what I did, but if you want the entire code, look there. The code works for me, but I'm also working on cleaning everything up and writing a usable README so people can clone and run it themselves. I'm on Twitter if you want to get in contact.

Mining with APScheduler, and Mining Again

The "I" is "to adjust mining to have" ability to stop if a different node has found the blocks with the index That's it ' s working on. From Part 1, the mining was a while loop which'll only Whenz it finds a valid nonce. We need the ability to stop the mining if we are notified of a different node ' s success.

I'm not going to lie, it took a while for me to figure out the best way to do this. Read the list below for the different thoughts I had and why they didn't work out.

APScheduler - run the mining as a job, and when the Flask app gets hit with a block from a different node, stop the mining job for that index and start mining for index + 1. This seemed like the thing to do, but I found out that APScheduler's remove_job() function doesn't work if the job has already been taken off the queue and is running. So I can't stop the mining that way.

Celery? Celery would definitely work and I looked into it, but frankly, the amount of code and config required to get started with a basic use case is pretty large. Check out the intro page and you can see what I'm talking about. Celery is definitely an option, and I'm not saying it isn't worth it, but in this case it's too much overhead.

Gevent? I've used gevent in the past and am definitely a fan of using it for scraping, where I grab a bunch of pages at once and don't have to wait for each request to return. It makes gathering pages much quicker. It could work here, but it's mostly used for queues of work rather than single jobs. I'm not mining a bunch of blocks at once, so I don't need multiple jobs for gevent to run; I only have one.

RQ? Nope, same issue with stopping jobs that are already running.

After doing some digging and looking at other implementations, like the Python Ethereum libraries, I saw that the way they mine uses the terms rounds and starting_nonce. Instead of an entire while loop that goes through nonces until finding a correct one, we only check nonces in the range [starting_nonce, starting_nonce + rounds). If that's successful, we move on to the next block. If not, we run the job again with a different starting_nonce value. In other words, when a mining job is called, it only checks a certain number of nonces at one time. If successful, it returns the valid nonce and hash. If not, it returns None. Combining that idea with APScheduler, we have a deal. Then I realized that since we have these functions, I could use any of the libraries listed above to do the mining; once the mining is split into bounded chunks like this, they'd all work. Actually, a good follow-up post would be to implement the mining using all the libraries that work. Let me know if you'd want that.
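To make the bounded-nonce idea concrete before the full mine.py rewrite below, here is a minimal sketch of it on its own. The try_nonce_window name, the serialized block_header string, and the NUM_ZEROS value are stand-ins for illustration; the real code does its hashing through the Block class shown next.

# minimal sketch of a bounded nonce search (illustrative, not from the repo)
import hashlib

NUM_ZEROS = 5            # required number of leading zeros (illustrative difficulty)
STANDARD_ROUNDS = 100000

def try_nonce_window(block_header, start_nonce=0, rounds=STANDARD_ROUNDS):
  # Check only the nonces in [start_nonce, start_nonce + rounds).
  # Return (nonce, hash) on success, or (None, None) so the caller can
  # reschedule the job with start_nonce + rounds.
  for nonce in range(start_nonce, start_nonce + rounds):
    guess = hashlib.sha256(('%s%s' % (block_header, nonce)).encode()).hexdigest()
    if guess[:NUM_ZEROS] == '0' * NUM_ZEROS:
      return nonce, guess
  return None, None

Because each call only does a bounded amount of work, a scheduler (or any of the libraries above) can run it repeatedly and check the return value in between runs, which is exactly what the listener below does.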

The Code

Below is the rewritten code in mine.py. The main attractions are as follows. Since running mine.py means you only want to do the mining, the APScheduler scheduler we use here is the BlockingScheduler; later, in node.py, we'll use the BackgroundScheduler. There are multiple functions below for mining from different starting points: you can mine for the next block of a chain, mine for the block after a given block, or mine to find the valid nonce for a block you just generated. Besides the scheduled job for mining a block, we also have a listener which checks the return values to decide whether we keep mining for the current block with the next range of nonces described above, or move on to the next block.

# mine.py
import apscheduler
from apscheduler.schedulers.blocking import BlockingScheduler

import sync   # from Part 2, used to gather the local chain
import utils  # holds create_new_block_from_prev, shown later in this post
# num_zeros, the number of leading zeros required for a valid hash, comes from Part 1

# If we're running mine.py directly, we don't want the scheduler in the background,
# because the script would return right after starting it.
# So we want the BlockingScheduler to run.
sched = BlockingScheduler(standalone=True)

import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)

standard_rounds = 100000

def mine_for_block(chain=None, rounds=standard_rounds, start_nonce=0):
  if not chain:
    chain = sync.sync_local()  # gather the local chain
  prev_block = chain.most_recent_block()
  return mine_from_prev_block(prev_block, rounds=rounds, start_nonce=start_nonce)

def mine_from_prev_block(prev_block, rounds=standard_rounds, start_nonce=0):
  # create the new, not-yet-valid block on top of prev_block
  new_block = utils.create_new_block_from_prev(prev_block=prev_block)
  return mine_block(new_block, rounds=rounds, start_nonce=start_nonce)

def mine_block(new_block, rounds=standard_rounds, start_nonce=0):
  # Attempt to find a valid nonce matching the required difficulty
  # of leading zeros. We only try `rounds` nonces at a time.
  nonce_range = [i + start_nonce for i in range(rounds)]
  for nonce in nonce_range:
    new_block.nonce = nonce
    new_block.update_self_hash()
    if str(new_block.hash[0:num_zeros]) == '0' * num_zeros:
      print "Block %s mined. Nonce: %s" % (new_block.index, new_block.nonce)
      assert new_block.is_valid()
      return new_block, rounds, start_nonce
  # couldn't find a valid hash; return rounds and start_nonce
  # so we know what we tried
  return None, rounds, start_nonce

def mine_for_block_listener(event):
  new_block, rounds, start_nonce = event.retval
  # if we didn't mine, new_block is None;
  # we use rounds and start_nonce to know what the next mining job should be
  if new_block:
    print "Mined a new block"
    new_block.self_save()
    sched.add_job(mine_from_prev_block, args=[new_block], kwargs={'rounds': standard_rounds, 'start_nonce': 0}, id='mine_for_block')  # add the block again
  else:
    print "No dice mining a new block. Restarting with a different nonce range"
    sched.add_job(mine_for_block, kwargs={'rounds': rounds, 'start_nonce': start_nonce + rounds}, id='mine_for_block')  # add the block again
  sched.print_jobs()

if __name__ == '__main__':
  sched.add_job(mine_for_block, kwargs={'rounds': standard_rounds, 'start_nonce': 0}, id='mine_for_block')  # add the block again
  sched.add_listener(mine_for_block_listener, apscheduler.events.EVENT_JOB_EXECUTED)
  sched.start()

When we run this, the node mines successfully, but in separate jobs rather than one long-running loop. Great starting point.

Node Mining

The next part I want to add is the ability for the node.py Flask node to run the mining as well. Like I said above, running mine.py will only do the mining, but here we need the mining to run in the background behind the Flask node. For this, we load the BackgroundScheduler, tell the imported mine module that we're using our scheduler instead of the one in that file, and then add the job and listener as before.

When we run this, we'll see the same output as before, with logging of the jobs being run, and we also have the ability to go to /blockchain.json, reload it, and see the new blocks as they come in.

# node.py
...
import mine
...
from apscheduler.schedulers.background import BackgroundScheduler
sched = BackgroundScheduler(standalone=True)
...

if __name__ == '__main__':

  ...

  mine.sched = sched  # override the BlockingScheduler; in this case, sched is the BackgroundScheduler
  sched.add_job(mine.mine_for_block, kwargs={'rounds': standard_rounds, 'start_nonce': 0}, id='mine_for_block')  # add the block again
  sched.add_listener(mine.mine_for_block_listener, apscheduler.events.EVENT_JOB_EXECUTED)
  sched.start()

  node.run(host='127.0.0.1', port=port)
Argparse

Intermission time! Previously, in order to say what port we wanted, I simply checked whether I had passed an argument, and if I did, that's the port.

if __name__ == '__main__':
  if len(sys.argv) >= 2:
    port = sys.argv[1]
  else:
    port = 5000

Simple, but too simple. Now that the node is able to mine as well, I want to be able to specify whether or not the node should mine.

Enter argparse. It's very nice, and it only takes a few lines to define what args I'm looking for. We want to be able to say what port to run on, defaulting to 5000, and also have the ability to mine if we say we want to.

if __name__ == '__main__':
  # args!
  parser = argparse.ArgumentParser(description='JBC Node')
  parser.add_argument('--port', '-p', default='5000', help='what port we will run the node on')
  parser.add_argument('--mine', '-m', dest='mine', action='store_true')
  args = parser.parse_args()

  # only mine if we want to
  if args.mine:
    mine.sched = sched  # override the BlockingScheduler; in this case, sched is the BackgroundScheduler
    sched.add_job(mine.mine_for_block, kwargs={'rounds': standard_rounds, 'start_nonce': 0}, id='mining')  # add the block again
    sched.add_listener(mine.mine_for_block_listener, apscheduler.events.EVENT_JOB_EXECUTED)
    sched.start()  # want this to start so we can validate on the schedule and not rely on Flask

  # now we know what port to use
  node.run(host='127.0.0.1', port=args.port)

For example, python node.py -m will run the node on port 5000 and mine as well. python node.py -p 5001 will run the node on port 5001 and not mine. python node.py --port=5002 -m will run on port 5002 and mine as well. You get the idea.

Alright, mining is sorted. Now for the nodes to listen.

Listening Nodes

In order for a node to be broadcast to, we need a Flask endpoint that accepts block dicts. We don't want the endpoint to run the validation itself; we want the validation job to be thrown into the schedule.

Initially, the validation will check whether it's a valid block, and if it is, save it.

# node.py
@node.route('/mined', methods=['POST'])
def mined():
  possible_block_dict = request.get_json()
  sched.add_job(mine.validate_possible_block, args=[possible_block_dict], id='validate_possible_block')  # add the block again
  return jsonify(received=True)

# mine.py
def validate_possible_block(possible_block_dict):
  possible_block = Block(possible_block_dict)
  if possible_block.is_valid():
    possible_block.self_save()
    return True
  return False

To test, we run a node on one port to do the mining, and another node that sits there waiting for broadcasts. We watch the chaindata directory of the non-mining node and we'll see the blocks mined by its peer show up.
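The snippets above cover the receiving side; the broadcasting side isn't shown in this section, so here is a minimal sketch of what the winning node could do after a successful mine. The PEERS list and the to_dict() serializer are assumptions for illustration; in the real project the peer addresses would come from wherever Part 2's syncing keeps them.

# sketch of broadcasting a freshly mined block to peers (illustrative, not from the repo)
import requests

# assumed list of peer node base URLs
PEERS = ['http://127.0.0.1:5001', 'http://127.0.0.1:5002']

def broadcast_mined_block(new_block):
  # POST the block's dict form to every peer's /mined endpoint
  block_info_dict = new_block.to_dict()  # assumes the Block class can serialize itself to a dict
  for peer in PEERS:
    try:
      requests.post('%s/mined' % peer, json=block_info_dict, timeout=5)
    except requests.exceptions.RequestException:
      # a peer being down shouldn't stop the broadcast to the others
      pass

A natural place to call something like this would be right after new_block.self_save() in mine_for_block_listener, so the other nodes get the block as soon as it is saved locally.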

Side note: the way we're communicating, using Flask and HTTP, is pretty simple compared to other blockchains. Check out how Ethereum lets nodes talk. I wanted to mention this so people reading don't assume all blockchains spit simple JSON back and forth over HTTP. There are better ways.

Competing Mining Nodes

Time for the finale! The whole point of this is to have multiple nodes that are all mining, where the first to find a valid block broadcasts it to the other nodes, they all receive the block, and they all start mining for the next one.

Remember right above when I said we wanted validate_possible_block to be a job? This is because we want to run validation when there isn't a mining job running. If the block is valid, we want to remove the mining job from the schedule queue. Since the listener already inserted the next mining job with the increased nonce range into the queue, we remove that one, then insert a mining job for the block after the new valid block, and go from there.

def validate_possible_block(possible_block_dict):
  possible_block = Block(possible_block_dict)
  if possible_block.is_valid():
    # this means someone else won
    possible_block.self_save()
    # we want to kill and restart the mining job so it knows it lost
    try:
      sched.remove_job('mining')
      print "removed running mining job while validating possible block"
    except apscheduler.jobstores.base.JobLookupError:
      print "mining job didn't exist when validating possible block"
    print "readding mining job after validating possible block"
    sched.add_job(mine_for_block, kwargs={'rounds': standard_rounds, 'start_nonce': 0}, id='mining')  # add the block again
    return True
  return False

When running this and looking at the nodes, it's not easy to tell which node won the mining of each block. This post isn't about what data should go into a block, but we need a simple way to tell the world which node won.

In node.py we're going to write a data.txt file which holds the data for blocks mined by the node on that port. Then in utils.py, where we create an unmined, invalid block, we read the data file and put its contents into the block's data field.

# node.py
if __name__ == '__main__':
  ...
  filename = '%sdata.txt' % (CHAINDATA_DIR)
  with open(filename, 'w') as data_file:
    data_file.write("Mined by node on port %s" % args.port)
  ...

# utils.py
def create_new_block_from_prev(prev_block=None):
  if not prev_block:
    # index zero and arbitrary previous hash
    index = 0
    prev_hash = ''
  else:
    index = int(prev_block.index) + 1
    prev_hash = prev_block.hash
  filename = '%sdata.txt' % (CHAINDATA_DIR)
  with open(filename, 'r') as data_file:
    data = data_file.read()
  nonce = 0
  timestamp = datetime.datetime.utcnow().strftime('%Y%m%d%H%M%S%f')
  block_info_dict = dict_from_block_attributes(index=index, timestamp=timestamp, data=data, prev_hash=prev_hash, nonce=nonce)
  print block_info_dict
  return Block(block_info_dict)