Java Review-basic4

Source: Internet
Author: User

1. HashMap vs Hashtable vs ConcurrentHashMap

1) Thread safety: ConcurrentHashMap is thread-safe, i.e. only one thread at a time can modify a given portion of the map, so it can be shared safely across threads. Hashtable is also thread-safe, while HashMap is not.

2) Synchronization: a HashMap can be synchronized by wrapping it with Collections.synchronizedMap(hashMap). This method returns a map roughly equivalent to a Hashtable: every modification locks the entire map object. ConcurrentHashMap instead synchronizes or locks only a portion of the map. To optimize performance, the map is divided into segments according to the concurrency level, so there is no need to lock the whole map object.

3) Null keys: ConcurrentHashMap does not allow null keys or null values. HashMap allows one null key (and null values).

4) Performance: HashMap is usually faster than ConcurrentHashMap even in a multi-threaded environment, because ConcurrentHashMap lets only one thread access a given portion of the map at a time, which reduces throughput, whereas HashMap performs no synchronization at all and any number of threads can access it at the same time (without any safety guarantee).

Personal understanding: this question is mainly about the differences between HashMap, Hashtable and ConcurrentHashMap, on the following points:

1. Thread safety: Hashtable and ConcurrentHashMap are thread-safe; within a locked region only one thread can access the data at a time. HashMap is not thread-safe.

2. A HashMap can be made thread-safe by wrapping it with Collections.synchronizedMap, which locks the whole map. ConcurrentHashMap is divided into segments according to the concurrency level, so threads working on different segments do not block each other, which improves efficiency.

3. ConcurrentHashMap does not allow null keys or values; HashMap allows a null key.

4. HashMap is faster in a multi-threaded environment because it does no locking, so any number of threads can access it at the same time (without safety guarantees), while ConcurrentHashMap allows only one thread per segment.
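
A minimal sketch of these differences, assuming only standard JDK classes (class and variable names are just for illustration):

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class MapComparison {
        public static void main(String[] args) {
            // HashMap: not thread-safe, but allows one null key and null values.
            Map<String, String> plain = new HashMap<>();
            plain.put(null, "ok");

            // Collections.synchronizedMap: every call locks the whole map,
            // roughly equivalent to Hashtable.
            Map<String, String> locked = Collections.synchronizedMap(new HashMap<>());
            locked.put("a", "1");

            // ConcurrentHashMap: locks only a portion of the map, but rejects nulls.
            Map<String, String> concurrent = new ConcurrentHashMap<>();
            concurrent.put("a", "1");
            try {
                concurrent.put(null, "boom");
            } catch (NullPointerException expected) {
                System.out.println("ConcurrentHashMap rejects null keys");
            }
        }
    }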

2. Synchronous vs Asynchronous

Synchronous means "connected" or "dependent" in some way. In other words, two synchronous tasks must be aware of one another, and one must execute in some way that depends on the other; in most cases that means one cannot start until the other has completed. Asynchronous means they are totally independent, and neither one must consider the other in any way, either in initiation or in execution.

As an aside, I should mention that, technically, the concept of synchronous vs. asynchronous does not really have anything to do with threads. Although it would generally be unusual to find asynchronous tasks running on the same thread, it is possible, and it is common to find two or more tasks executing synchronously on separate threads. The concept of synchronous/asynchronous has to do solely with whether a second or subsequent task can be initiated before the other task has completed, or whether it must wait. That is all. Which thread (or threads), processes, CPUs, or indeed what hardware the tasks are executed on is not relevant.

Personal understanding: synchronous and asynchronous describe the relationship between tasks, whether they run on multiple threads or on a single thread:

Synchronous means the tasks affect each other: one must finish before the other can execute. Asynchronous tasks are relatively independent and execute on their own, without affecting each other.
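
A minimal sketch contrasting the two, using CompletableFuture as one possible way to run a task asynchronously in Java (the method slowSquare is an invented example):

    import java.util.concurrent.CompletableFuture;

    public class SyncVsAsync {
        static int slowSquare(int x) {
            try { Thread.sleep(500); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            return x * x;
        }

        public static void main(String[] args) {
            // Synchronous: the caller cannot continue until slowSquare returns.
            int sync = slowSquare(3);
            System.out.println("synchronous result: " + sync);

            // Asynchronous: the task is started and the caller continues immediately.
            CompletableFuture<Integer> async = CompletableFuture.supplyAsync(() -> slowSquare(4));
            System.out.println("asynchronous task started, doing other work...");
            System.out.println("asynchronous result: " + async.join()); // wait only when the result is needed
        }
    }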

3. Thread Contention

Essentially, thread contention is a condition where one thread is waiting for a lock/object that is currently being held by another thread. The waiting thread cannot use that object until the other thread has released (unlocked) it.

Personal understanding: a thread waits for a resource that is locked by another thread.
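
A minimal sketch of contention, assuming a single shared lock object (names are illustrative); while the first thread holds the monitor, the second typically shows up as BLOCKED:

    public class ContentionDemo {
        private static final Object LOCK = new Object();

        public static void main(String[] args) throws InterruptedException {
            Thread holder = new Thread(() -> {
                synchronized (LOCK) {
                    try { Thread.sleep(2000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                }
            }, "holder");

            Thread waiter = new Thread(() -> {
                synchronized (LOCK) {          // blocks until "holder" releases the monitor
                    System.out.println("waiter finally acquired the lock");
                }
            }, "waiter");

            holder.start();
            Thread.sleep(100);                 // let "holder" grab the lock first
            waiter.start();
            Thread.sleep(100);
            System.out.println("waiter state: " + waiter.getState()); // typically BLOCKED
            holder.join();
            waiter.join();
        }
    }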

4. Race Conditions/debug them

A race condition occurs when two or more threads can access shared data and try to change it at the same time. Because the thread scheduling algorithm can swap between threads at any time, you don't know the order in which the threads will attempt to access the shared data. The result of the change therefore depends on the thread scheduling algorithm, i.e. both threads are "racing" to access/change the data. To prevent race conditions, you would typically put a lock around the shared data to ensure only one thread can access it at a time.

Race Conditions:

Two threads access a resource at the same time and try to change it at the same time; the workaround is to lock the shared resource.
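
A minimal sketch of a race condition on an unsynchronized counter, together with one common fix (AtomicInteger here; a synchronized block would work as well):

    import java.util.concurrent.atomic.AtomicInteger;

    public class RaceDemo {
        static int unsafeCounter = 0;                           // shared, unsynchronized
        static AtomicInteger safeCounter = new AtomicInteger(); // shared, atomic

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    unsafeCounter++;               // read-modify-write: not atomic, updates can be lost
                    safeCounter.incrementAndGet(); // atomic: no lost updates
                }
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join(); t2.join();

            System.out.println("unsafe counter (often < 200000): " + unsafeCounter);
            System.out.println("safe counter   (always 200000):  " + safeCounter.get());
        }
    }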

5. Deadlocks

A deadlock is when two or more threads are blocked waiting to obtain locks that other threads in the deadlock are holding. Deadlock can occur when multiple threads need the same locks at the same time but obtain them in different order. For instance, if thread 1 locks A and tries to lock B, and thread 2 has already locked B and tries to lock A, a deadlock arises. Thread 1 can never get B, and thread 2 can never get A. In addition, neither of them will ever know: they will remain blocked, each on its own object, A and B, forever. This situation is a deadlock.

Personal understanding: deadlock is a common problem that occurs when two or more threads each hold a resource and at the same time try to obtain a resource held by the other. Neither can proceed, so a deadlock occurs.
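
A minimal sketch of the thread 1 / thread 2 scenario described above, assuming two plain lock objects A and B (names are illustrative); if run as-is, the program normally hangs forever:

    public class DeadlockDemo {
        private static final Object A = new Object();
        private static final Object B = new Object();

        public static void main(String[] args) {
            Thread t1 = new Thread(() -> {
                synchronized (A) {
                    sleep(100);              // give t2 time to lock B
                    synchronized (B) {       // waits forever: t2 holds B
                        System.out.println("t1 got A then B");
                    }
                }
            });
            Thread t2 = new Thread(() -> {
                synchronized (B) {
                    sleep(100);              // give t1 time to lock A
                    synchronized (A) {       // waits forever: t1 holds A
                        System.out.println("t2 got B then A");
                    }
                }
            });
            t1.start();
            t2.start();
        }

        private static void sleep(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }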

6. How to prevent deadlocks

1) Lock Ordering

Deadlock occurs when multiple threads need the same locks but obtain them in different order. Lock ordering prevents this: if every thread always acquires the locks in the same fixed order, the circular wait cannot form and deadlock cannot occur.
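
A minimal sketch of lock ordering, reusing the two lock objects A and B from the example above: both threads take A before B, so no cycle is possible:

    public class LockOrderingDemo {
        private static final Object A = new Object();
        private static final Object B = new Object();

        // Every caller acquires A first, then B, so the circular wait cannot form.
        static void doWork(String name) {
            synchronized (A) {
                synchronized (B) {
                    System.out.println(name + " holds A and B");
                }
            }
        }

        public static void main(String[] args) {
            new Thread(() -> doWork("t1")).start();
            new Thread(() -> doWork("t2")).start();
        }
    }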

2) Lock Timeout

Another deadlock prevention mechanism is to put a timeout on lock attempts, meaning a thread trying to obtain a lock will only try for so long before giving up. If a thread does not succeed in taking all necessary locks within the given timeout, it will back up, free all locks taken, wait for a random amount of time and then retry. The random wait gives other threads trying to take the same locks a chance to take all of them, and thus lets the application continue running without deadlocking.
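
A minimal sketch of a lock timeout using ReentrantLock.tryLock (the synchronized keyword has no timeout, so explicit Lock objects are assumed here; lock and method names are illustrative):

    import java.util.concurrent.ThreadLocalRandom;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    public class LockTimeoutDemo {
        private static final ReentrantLock LOCK_A = new ReentrantLock();
        private static final ReentrantLock LOCK_B = new ReentrantLock();

        static void workWithBothLocks(String name) throws InterruptedException {
            while (true) {
                if (LOCK_A.tryLock(1, TimeUnit.SECONDS)) {
                    try {
                        if (LOCK_B.tryLock(1, TimeUnit.SECONDS)) {
                            try {
                                System.out.println(name + " acquired both locks");
                                return;
                            } finally {
                                LOCK_B.unlock();
                            }
                        }
                    } finally {
                        LOCK_A.unlock();
                    }
                }
                // Could not take both locks: back off for a random time, then retry.
                Thread.sleep(ThreadLocalRandom.current().nextInt(50, 200));
            }
        }

        public static void main(String[] args) {
            Runnable task = () -> {
                try {
                    workWithBothLocks(Thread.currentThread().getName());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            };
            new Thread(task, "t1").start();
            new Thread(task, "t2").start();
        }
    }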

3) Deadlock Detection

Deadlock detection is a heavier mechanism: the locks that threads take are tracked, and when a deadlock is detected the threads involved can react. One option is to have every deadlocked thread release its locks, back up and retry, as with lock timeouts. A better option is to assign a priority to the threads so that only one (or a few) of them backs up; the rest continue taking the locks they need as if no deadlock had occurred. If the priority assigned to the threads is fixed, the same threads will always be given higher priority; to avoid that, assign the priority randomly whenever a deadlock is detected.
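
In Java the JVM can report deadlocked threads through the standard management API; a minimal detection sketch with ThreadMXBean:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class DeadlockDetector {
        // Returns a description of deadlocked threads, or a note that none were found.
        public static String detect() {
            ThreadMXBean bean = ManagementFactory.getThreadMXBean();
            long[] ids = bean.findDeadlockedThreads();   // null if no deadlock
            if (ids == null) {
                return "no deadlock detected";
            }
            StringBuilder sb = new StringBuilder("deadlocked threads:\n");
            for (ThreadInfo info : bean.getThreadInfo(ids)) {
                sb.append("  ").append(info.getThreadName())
                  .append(" waiting on ").append(info.getLockName()).append('\n');
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            System.out.println(detect());
        }
    }

Calling detect() periodically from a monitoring thread is one way to notice deadlocks in a running application.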

Personal understanding: deadlocks can be prevented as follows:

1. Fix the order in which locks are acquired, so every thread takes them in the same order.

2. Put a timeout on lock attempts; when it expires, release all locks already taken and retry after a random wait.

3. Detect deadlocks at runtime and let one or a few of the involved threads back off and retry.

7. Thread Confinement


Thread confinement is the practice of ensuring that data is only accessible from one thread. Such data is called thread-local, as it is local, or specific, to a single thread. Thread-local data is thread-safe, because only one thread can get at the data, which eliminates the risk of races. And because races are nonexistent, thread-local data doesn't need locking. Thus thread confinement is a practice that makes your code safer (by eliminating a huge source of programming error) and more scalable (by eliminating locking). Most languages don't have mechanisms to enforce thread confinement; it is a higher-level programming pattern, not a language or OS feature. Functionality such as thread-local storage (TLS) makes thread confinement easier, but the programmer must still work to ensure that references to the data do not escape the owning thread.

Personal understanding: thread confinement means the data is owned by a single thread; there are no race conditions and it is thread-safe, so it doesn't need to be locked.
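
A minimal sketch using java.lang.ThreadLocal, Java's built-in support for thread-confined data; a common example is giving each thread its own SimpleDateFormat, which is not thread-safe by itself:

    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class ThreadConfinementDemo {
        // Each thread gets its own SimpleDateFormat instance, so no locking is needed.
        private static final ThreadLocal<SimpleDateFormat> FORMAT =
                ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));

        public static void main(String[] args) {
            Runnable task = () -> {
                String now = FORMAT.get().format(new Date());
                System.out.println(Thread.currentThread().getName() + ": " + now);
            };
            new Thread(task, "worker-1").start();
            new Thread(task, "worker-2").start();
        }
    }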

8. Cache coherence

When multiple processors with separate caches share a common memory, it is necessary to keep the caches in a state of coherence by ensuring that any shared operand changed in one cache is changed throughout the entire system.

This is done in either of two ways: through a directory-based system or a snooping system.

In a directory-based system, the data being shared is placed in a common directory that maintains the coherence between caches. The directory acts as a filter through which a processor must ask permission to load an entry from primary memory into its cache. When an entry is changed, the directory either updates or invalidates the copies of that entry in the other caches.

In a snooping system, all caches monitor (or snoop) the bus to determine whether they have a copy of the block of data that is requested on the bus. Every cache keeps the sharing status of every block of physical memory it holds. Cache misses and memory traffic due to shared data blocks limit the performance of parallel computing in multiprocessor computers or systems. Cache coherence aims to solve the problems associated with sharing data.

Personal Understanding:

Cache coherence means that different caches hold copies of the same memory; when one cache changes a value, the rest of the system needs to see that change.

There are two ways of doing this:

1. Directory-based: a change to a cache entry goes through a common directory, which updates or invalidates the copies in the other caches.

2. Snooping: every cache monitors the bus, and when a change appears on the bus the other caches update or invalidate their own copies to keep the state consistent.

9. False Sharing


Memory is stored within the cache system in units known as cache lines. Cache lines are a power-of-2 number of contiguous bytes, typically 32-256 bytes in size; the most common cache line size is 64 bytes. False sharing is a term which applies when threads unwittingly impact the performance of each other while modifying independent variables that share the same cache line. Write contention on cache lines is the single most limiting factor on achieving scalability for parallel threads of execution in an SMP system. I've heard false sharing described as the silent performance killer, because it is far from obvious when looking at code.

To achieve linear scalability with the number of threads, we must ensure that no two threads write to the same variable or cache line. Threads writing to the same variable can be tracked down at the code level. To know whether independent variables share the same cache line we need to know the memory layout, or we can get a tool to tell us; Intel VTune is one such profiling tool. In this article I'll explain how memory is laid out for Java objects and how we can pad out our cache lines to avoid false sharing.

In multi-processor, multi-threaded scenarios, if two threads run on different CPUs and one of them modifies an element in a cache line, cache coherence forces the copy of that line held by the other CPU to be declared invalid, so the other thread's next access to that line misses, even though the element it actually uses was never changed. This is the false sharing problem.
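
A minimal sketch of the usual mitigation in Java: padding a hot field so that counters written by different threads do not land in the same cache line. The padding fields are only illustrative (field layout is ultimately up to the JVM), and the JDK also offers the @Contended annotation for this purpose, which for application code requires the -XX:-RestrictContended flag:

    public class FalseSharingDemo {
        // Without padding, the two counters' "value" fields could easily end up
        // in the same 64-byte cache line, so writes from two threads would keep
        // invalidating each other's caches.
        static final class PaddedCounter {
            volatile long value;
            // Best-effort padding, assuming a 64-byte cache line.
            long p1, p2, p3, p4, p5, p6, p7;
        }

        static final PaddedCounter a = new PaddedCounter();
        static final PaddedCounter b = new PaddedCounter();

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> { for (long i = 0; i < 50_000_000L; i++) a.value++; });
            Thread t2 = new Thread(() -> { for (long i = 0; i < 50_000_000L; i++) b.value++; });
            long start = System.nanoTime();
            t1.start(); t2.start();
            t1.join();  t2.join();
            System.out.println("elapsed ms: " + (System.nanoTime() - start) / 1_000_000);
        }
    }

Removing the padding fields and comparing the elapsed time is one rough way to observe the effect, though results vary by JVM and hardware.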
