Java concurrent programming testing (3)
Generating more interleavings
Because errors in concurrent code are usually low-probability events, a test has to run many times before it is likely to encounter one. There are, however, ways to raise the probability of finding these errors. As mentioned earlier, a multiprocessor system whose processor count is lower than the number of active threads generates more interleavings than either a single-processor system or a system with many processors.
A useful trick for increasing the number of interleavings, and therefore exploring the program's state space more effectively, is to call Thread.yield during operations that access shared state. If code that accesses shared state uses insufficient synchronization, it harbors timing-sensitive errors, and calling yield in the middle of an operation gives those errors a chance to surface. The drawback of this approach is that the yield calls must be added by hand for testing and removed again from the production code.
public synchronized void transferCredits(Account from, Account to, int amount) {
    from.setBalance(from.getBalance() - amount);
    // Yield occasionally between the two updates to encourage a context switch
    // while the accounts are in an inconsistent intermediate state.
    if (random.nextInt(1000) > THRESHOLD) {
        Thread.yield();
    }
    to.setBalance(to.getBalance() + amount);
}
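Instead of hand-editing the production code to remove these calls, one option (not part of the original text; the class name, system property, and threshold value below are assumptions) is to centralize the yield in a small helper that is enabled only when a test property is set, so the call sites can stay in place:

import java.util.Random;

public final class TestingYield {
    // Enabled only when the JVM is started with -Dconcurrency.testing.yield=true.
    private static final boolean ENABLED = Boolean.getBoolean("concurrency.testing.yield");
    private static final int THRESHOLD = 998;
    private static final Random random = new Random();

    private TestingYield() { }

    // Call between the steps of a compound action under test; a no-op in production.
    public static void maybeYield() {
        if (ENABLED && random.nextInt(1000) > THRESHOLD) {
            Thread.yield();
        }
    }
}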
Performance Testing
A performance test should match the scenarios in which the program is actually used; ideally, it should reflect how the tested object is used in the real application.
A second goal is to tune various bounds empirically, such as the number of threads or the capacity of a buffer. Because these bounds depend on platform characteristics, they must be chosen carefully so that the program runs well across a wide range of systems.
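As an illustration of how such bounds can be chosen empirically, the sketch below (an assumption of this article, not code from the original text) times a fixed amount of producer/consumer work on a bounded queue for several candidate thread counts and capacities; the best-performing combination on the target platform is then a reasonable default:

import java.util.concurrent.*;

public class BoundSizingTest {
    // Time how long 'pairs' producer/consumer pairs take to move
    // pairs * itemsPerThread elements through the given queue.
    static long timeRun(BlockingQueue<Integer> queue, int pairs, int itemsPerThread)
            throws Exception {
        // One extra barrier party for the timing thread.
        CyclicBarrier barrier = new CyclicBarrier(pairs * 2 + 1);
        ExecutorService pool = Executors.newCachedThreadPool();
        for (int i = 0; i < pairs; i++) {
            pool.execute(() -> {
                try {
                    barrier.await();
                    for (int n = 0; n < itemsPerThread; n++) queue.put(n);
                    barrier.await();
                } catch (Exception e) { throw new RuntimeException(e); }
            });
            pool.execute(() -> {
                try {
                    barrier.await();
                    for (int n = 0; n < itemsPerThread; n++) queue.take();
                    barrier.await();
                } catch (Exception e) { throw new RuntimeException(e); }
            });
        }
        barrier.await();                       // release all workers
        long start = System.nanoTime();
        barrier.await();                       // wait for all workers to finish
        long elapsed = System.nanoTime() - start;
        pool.shutdown();
        return elapsed;
    }

    public static void main(String[] args) throws Exception {
        for (int pairs : new int[] {1, 2, 4, 8}) {
            for (int capacity : new int[] {1, 10, 100, 1000}) {
                long t = timeRun(new ArrayBlockingQueue<>(capacity), pairs, 100_000);
                System.out.printf("pairs=%d capacity=%d time=%dms%n",
                        pairs, capacity, t / 1_000_000);
            }
        }
    }
}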
Comparison of multiple algorithms
The test results show that LinkedBlockingQueue scales better than ArrayBlockingQueue. This seems odd at first: a linked queue must allocate a link node object for each insertion and therefore appears to do more work than the array-based queue. Even so, it permits more concurrent access than an array-based queue, because a well-optimized linked queue algorithm can update the head node and the tail node independently. And since allocation is usually thread-local, an algorithm that reduces contention at the cost of doing more memory allocation generally scales better.
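Using a harness like the timeRun sketch above, the head-to-head comparison could be driven as follows (again only an illustration under the same assumptions, not the measurement reported here):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueComparison {
    public static void main(String[] args) throws Exception {
        // Compare throughput of the two queue classes as the number of
        // producer/consumer pairs grows, to see how each one scales.
        for (int pairs : new int[] {1, 2, 4, 8, 16}) {
            long array  = BoundSizingTest.timeRun(new ArrayBlockingQueue<>(1000), pairs, 100_000);
            long linked = BoundSizingTest.timeRun(new LinkedBlockingQueue<>(1000), pairs, 100_000);
            System.out.printf("pairs=%d  ArrayBlockingQueue=%dms  LinkedBlockingQueue=%dms%n",
                    pairs, array / 1_000_000, linked / 1_000_000);
        }
    }
}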