A multicore processor, unlike a single-core one, can execute multiple threads truly in parallel; in theory a dual-core processor delivers twice the performance of a single core. Usually we do not need to care whether the device is multicore or single-core, because using Thread or AsyncTask to move work off the main thread is sufficient: if the processor is multicore, those threads will run on different cores. Sometimes, however, you need to exploit every core to reach acceptable performance, especially for algorithms that parallelize well.
The implementation below does the following:
1. Divides the original problem into simpler sub-problems.
2. Combines the results of the sub-problems to work out the solution of the original problem (using Future and ExecutorService).
3. Avoids repeated computation of the same Fibonacci numbers (using ConcurrentHashMap, a highly concurrent, high-throughput, thread-safe map, as the cache).
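The "divide" step rests on the Fibonacci fast-doubling identities, which let F(n) be computed from two values of roughly half the index (for odd n, m = (n+1)/2 so n = 2m-1; for even n, m = n/2 so n = 2m):

```latex
F(2m-1) = F(m)^2 + F(m-1)^2
F(2m)   = \bigl(2F(m-1) + F(m)\bigr)\,F(m)
```

These two formulas are exactly the odd and even branches that appear in the code below.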
The code is as follows:
```java
private static final int PROC = Runtime.getRuntime().availableProcessors();
private static final ExecutorService executorService =
        Executors.newFixedThreadPool(PROC + 2);

public static BigInteger recursiveFasterBigInteger(int n) {
    if (n > 1) {
        int m = (n / 2) + (n & 1);
        BigInteger fM = recursiveFasterBigInteger(m);
        BigInteger fM_1 = recursiveFasterBigInteger(m - 1);
        // combine the sub-results to compute the solution of the original problem
        if ((n & 1) == 1) {
            return fM.pow(2).add(fM_1.pow(2));          // F(2m-1) = F(m)^2 + F(m-1)^2
        } else {
            return fM_1.shiftLeft(1).add(fM).multiply(fM); // F(2m) = (2F(m-1) + F(m)) * F(m)
        }
    }
    return (n == 0) ? BigInteger.ZERO : BigInteger.ONE;
}

private static BigInteger recursiveFasterWithCache(int n) {
    HashMap<Integer, BigInteger> cache = new HashMap<Integer, BigInteger>();
    return recursiveFasterWithCache(n, cache);
}

private static BigInteger recursiveFasterWithCache(int n, Map<Integer, BigInteger> cache) {
    if (n > 92) { // fib(92) is the largest Fibonacci number that fits in a long
        BigInteger fN = cache.get(n);
        if (fN == null) {
            int m = (n / 2) + (n & 1);
            BigInteger fM = recursiveFasterWithCache(m, cache);
            BigInteger fM_1 = recursiveFasterWithCache(m - 1, cache);
            if ((n & 1) == 1) {
                fN = fM.pow(2).add(fM_1.pow(2));
            } else {
                fN = fM_1.shiftLeft(1).add(fM).multiply(fM);
            }
            cache.put(n, fN);
        }
        return fN;
    }
    return BigInteger.valueOf(iterativeFaster(n)); // long-based iterative version (not shown)
}

public static BigInteger recursiveFasterWithCacheAndThread(int n) {
    int proc = Runtime.getRuntime().availableProcessors();
    if (n < 128 || proc <= 1) {
        return recursiveFasterWithCache(n);
    }
    final ConcurrentHashMap<Integer, BigInteger> cache =
            new ConcurrentHashMap<Integer, BigInteger>();
    final int m = (n / 2) + (n & 1);
    Callable<BigInteger> callable = new Callable<BigInteger>() {
        @Override
        public BigInteger call() throws Exception {
            return recursiveFasterWithCache(m, cache);
        }
    };
    Future<BigInteger> fFm = executorService.submit(callable);
    callable = new Callable<BigInteger>() {
        @Override
        public BigInteger call() throws Exception {
            return recursiveFasterWithCache(m - 1, cache);
        }
    };
    Future<BigInteger> fFm_1 = executorService.submit(callable);
    // get the result of each sub-problem, then merge them
    BigInteger fM, fM_1, fN;
    try {
        fM = fFm.get(); // result of the first sub-problem (blocking call)
    } catch (Exception e) {
        // if an exception was thrown, compute fM in the current thread
        fM = recursiveFasterBigInteger(m);
    }
    try {
        fM_1 = fFm_1.get(); // result of the second sub-problem (blocking call)
    } catch (Exception e) {
        // if an exception was thrown, compute fM_1 in the current thread
        fM_1 = recursiveFasterBigInteger(m - 1);
    }
    if ((n & 1) != 0) {
        fN = fM.pow(2).add(fM_1.pow(2));
    } else {
        fN = fM_1.shiftLeft(1).add(fM).multiply(fM);
    }
    return fN;
}
```
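The listing calls an iterativeFaster(n) helper that is not shown. A minimal sketch of what it presumably looks like (the name comes from the listing; the body here is an assumption): an iterative Fibonacci over primitive longs, valid for 0 &lt;= n &lt;= 92, since fib(93) overflows a 64-bit long. This is why the cached version switches to BigInteger only for n &gt; 92.

```java
public class FibIterative {
    // Hypothetical reconstruction of the iterativeFaster helper used above:
    // iterative Fibonacci on primitive longs, valid for 0 <= n <= 92.
    static long iterativeFaster(int n) {
        long a = 0, b = 1; // F(0), F(1)
        for (int i = 0; i < n; i++) {
            long next = a + b;
            a = b;
            b = next;
        }
        return a; // F(n)
    }

    public static void main(String[] args) {
        System.out.println(iterativeFaster(10)); // prints 55
        System.out.println(iterativeFaster(92)); // largest Fibonacci number that fits in a long
    }
}
```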
In fact, much of the time breaking a problem into sub-problems and assigning them to different threads brings no performance improvement at all: there may be dependencies between the data that force synchronization, and the threads may then spend most of their time waiting for each other. Therefore, in practice it is common to use multithreading for unrelated tasks, avoiding the need for synchronization altogether.
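A minimal sketch of that preferred pattern (class and task bodies are illustrative, not from the original): two unrelated tasks are submitted to the same pool; neither touches data the other uses, so no synchronization is required and neither thread ever waits on the other except at the final blocking get() calls.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class IndependentTasks {
    // Run two independent computations on the pool and collect both results.
    static long[] runTasks() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        // Task 1: sum of 1..1_000_000 -- touches only its own local state.
        Future<Long> sum = pool.submit(new Callable<Long>() {
            @Override
            public Long call() {
                long s = 0;
                for (int i = 1; i <= 1_000_000; i++) s += i;
                return s;
            }
        });
        // Task 2: 20! -- also entirely self-contained, so no locks are needed.
        Future<Long> factorial = pool.submit(new Callable<Long>() {
            @Override
            public Long call() {
                long p = 1;
                for (int i = 1; i <= 20; i++) p *= i;
                return p;
            }
        });
        long[] results = { sum.get(), factorial.get() }; // blocking calls
        pool.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        long[] r = runTasks();
        System.out.println(r[0]); // 500000500000
        System.out.println(r[1]); // 2432902008176640000
    }
}
```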
Android App Performance Optimization (III): Multi-core Multithreaded Computing