This example comes from a scenario I ran into in real work; the actual production code is more robust and complete than what is shown here.
Application scenario:
Update or write 100,000 rows (or 1 million, or 100 million+) to a database table.
A beginner would process the rows one at a time, which is obviously too slow. In my actual project I used batch updates, and even a single-threaded batch update was not fast enough, so I combined multiple threads with batch updates.
How is it done?
Example: with 100,000 rows, processing them 1,000 at a time gives 100,000 / 1,000 = 100 batches.
Here I use 100 threads at the same time, each thread responsible for 1,000 rows. The key point: no row may be processed twice!
Here is a code example. The thread pool size can be calculated as total rows / rows per thread:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class Dyschedule {

    // Shared counter: each call to getLine() atomically claims the next 1,000 rows.
    private static AtomicInteger line = new AtomicInteger(0);

    static ExecutorService pool = Executors.newFixedThreadPool(100);

    public static int getLine() {
        return line.addAndGet(1000);
    }

    public static void doJob() {
        for (int i = 0; i < 100; i++) {
            Thread thread = new MyThread();
            pool.execute(thread);
        }
        pool.shutdown();
    }

    public static void main(String[] args) {
        Dyschedule.doJob();
    }
}
Here is what each thread does:
public class MyThread extends Thread {

    @Override
    public void run() {
        System.out.println("Thread: " + Thread.currentThread().getName());
        int num = Dyschedule.getLine();
        System.out.println("startline = " + (num - 1000) + ", endline = " + num);
    }
}
Program Run Result:
Thread: pool-1-thread-1
startline = 0, endline = 1000
Thread: pool-1-thread-2
startline = 1000, endline = 2000
Thread: pool-1-thread-5
Thread: pool-1-thread-3
startline = 2000, endline = 3000
startline = 3000, endline = 4000
Thread: pool-1-thread-4
startline = 4000, endline = 5000
Thread: pool-1-thread-6
Thread: pool-1-thread-7
startline = 6000, endline = 7000
startline = 5000, endline = 6000
Thread: pool-1-thread-9
startline = 7000, endline = 8000
Thread: pool-1-thread-8
startline = 8000, endline = 9000
Thread: pool-1-thread-10
startline = 9000, endline = 10000
Thread: pool-1-thread-12
startline = 10000, endline = 11000
Thread: pool-1-thread-11
startline = 11000, endline = 12000
Thread: pool-1-thread-16
startline = 12000, endline = 13000
Thread: pool-1-thread-15
Thread: pool-1-thread-19
startline = 14000, endline = 15000
startline = 13000, endline = 14000
Thread: pool-1-thread-20
startline = 15000, endline = 16000
...
(Some lines are interleaved because the threads print to the console concurrently.)
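To make the key property above (no range claimed twice, none skipped) easy to see, here is a minimal self-contained variant of the same counter idea. The class and method names (`RangeScheduler`, `claimRange`) are my own for illustration, not from the original code; the program verifies at the end that the 100 claimed ranges are exactly 0, 1000, ..., 99000 with no gap or duplicate:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class RangeScheduler {
    static final int BATCH = 1000;
    static final int BATCHES = 100;

    private static final AtomicInteger line = new AtomicInteger(0);
    // Thread-safe list collecting the start line of every claimed range.
    static final List<Integer> claimedStarts =
            Collections.synchronizedList(new ArrayList<>());

    // Atomically claims the next 1,000-row range and returns its start line.
    static int claimRange() {
        int end = line.addAndGet(BATCH);
        return end - BATCH;
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < BATCHES; i++) {
            pool.execute(() -> claimedStarts.add(claimRange()));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        // Sorted, the starts must be 0, 1000, ..., 99000: no gaps, no duplicates.
        List<Integer> starts = new ArrayList<>(claimedStarts);
        Collections.sort(starts);
        for (int i = 0; i < BATCHES; i++) {
            if (starts.get(i) != i * BATCH) {
                throw new AssertionError("gap or duplicate at batch " + i);
            }
        }
        System.out.println("claimed " + starts.size() + " non-overlapping batches");
    }
}
```

The guarantee comes entirely from `AtomicInteger.addAndGet`: the increment is atomic, so two threads can never receive the same range even without any locking.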
The code above only computes the row range (or partition number) each thread should handle. For example:
Thread: pool-1-thread-1
startline = 0, endline = 1000  (this thread processes database rows 0-1000)
Thread: pool-1-thread-2
startline = 1000, endline = 2000  (this thread processes rows 1000-2000)
The batch task each thread then performs is straightforward. The key point, once again: make sure no row is processed twice, and none is missed!
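As a sketch of what the per-range batch task might look like with JDBC, assuming a hypothetical table `t_user` with `id` and `status` columns and a placeholder JDBC URL (none of these names are from the original article), each thread could page its range with LIMIT/OFFSET and push the updates in a single JDBC batch:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class BatchWorker {
    static final int BATCH = 1000;

    // Builds the paged SELECT for one claimed range [start, start + BATCH).
    static String pagedSelect(int start) {
        return "SELECT id, status FROM t_user ORDER BY id LIMIT " + BATCH
                + " OFFSET " + start;
    }

    // Processes one range: reads the rows, then updates them in one JDBC batch.
    // jdbcUrl / user / pass are placeholders; real code reads them from config.
    static void processRange(String jdbcUrl, String user, String pass, int start)
            throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, pass)) {
            conn.setAutoCommit(false);
            try (PreparedStatement select = conn.prepareStatement(pagedSelect(start));
                 PreparedStatement update = conn.prepareStatement(
                         "UPDATE t_user SET status = ? WHERE id = ?");
                 ResultSet rs = select.executeQuery()) {
                while (rs.next()) {
                    update.setInt(1, 1);                  // example new status value
                    update.setLong(2, rs.getLong("id"));
                    update.addBatch();                    // queue, no per-row round trip
                }
                update.executeBatch();                    // one round trip per range
                conn.commit();
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(pagedSelect(0));
    }
}
```

One design caveat: `LIMIT ... OFFSET` gets slower as the offset grows on very large tables, so partitioning by primary-key range (`WHERE id >= ? AND id < ?`) is usually the faster way to split the work.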
This approach handles batch jobs at the millions or even tens of millions scale without problems.
But what if you need to batch-process billions or tens of billions of rows? Don't worry, lad. As a senior programmer, there is still a way:
extend the idea above into a distributed, multi-task, multi-threaded scheduler.
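One way to sketch that distributed extension, purely as my own illustration of the idea rather than the author's actual design, is a two-level split: first across worker machines, then across threads on each worker. If range assignment is a pure function of (workerId, threadIndex), every worker can compute its own ranges with no coordination and no risk of overlap (the numbers below are made up for the example):

```java
public class DistributedPartition {
    // Illustrative sizes: 10 billion rows, 50 machines, 100 threads each.
    static final long TOTAL_ROWS = 10_000_000_000L;
    static final int WORKERS = 50;
    static final int THREADS_PER_WORKER = 100;

    // Rows handled by one thread (a real system would also handle the remainder
    // when TOTAL_ROWS does not divide evenly).
    static long rowsPerThread() {
        return TOTAL_ROWS / ((long) WORKERS * THREADS_PER_WORKER);
    }

    // Start row for a given (worker, thread) pair. Deterministic, so no two
    // threads anywhere in the cluster can ever claim the same range.
    static long startRow(int workerId, int threadIdx) {
        long globalThread = (long) workerId * THREADS_PER_WORKER + threadIdx;
        return globalThread * rowsPerThread();
    }

    public static void main(String[] args) {
        System.out.println("rows per thread = " + rowsPerThread());
        System.out.println("worker 1, thread 0 starts at " + startRow(1, 0));
    }
}
```

Because the mapping is deterministic, the "no duplicates, no misses" guarantee from the single-machine version carries over to the cluster without any shared counter or distributed lock.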
That's all for now. If you spot any shortcomings, please leave a comment.
Big Data multithreading efficient batch processing