What is multithreading in Java?
Java provides a mechanism for handling multiple tasks concurrently (simultaneously and independently). Multiple threads coexist in the same JVM process and share the same memory space, so communication between threads is lighter-weight than communication between processes. In my understanding, Java multithreading exists entirely to improve CPU utilization. A Java thread has four states: New, Runnable, Blocked and Dead. The key one is Blocked: blocking means waiting, and a blocked thread takes no part in the thread scheduler's allocation of time slices, so naturally it does not use the CPU. In a multithreaded environment, the threads that are not blocked keep running and make full use of the CPU.
Summary of 40 Questions
1. What is multithreading good for?
To many people this probably looks like a pointless question: I need to use multithreading, so why should I care what it is good for? In my view, that answer is the pointless part. Knowing "how" is merely knowing that something works; knowing "why" is understanding why it works, and only with both can you claim to handle a piece of knowledge with ease. OK, here is my view on this question:
(1) Taking advantage of multi-core CPUs
As the industry advances, laptops, desktops and even commercial application servers are now at least dual-core, and 4-core, 8-core or 16-core machines are not uncommon. A single-threaded program wastes 50% of a dual-core CPU and 75% of a 4-core one. So-called "multithreading" on a single-core CPU is fake multithreading: the processor handles only one piece of logic at any moment, and fast switching between threads merely makes them look as if they run simultaneously. Multithreading on a multi-core CPU is the real thing: it lets multiple pieces of logic work at the same instant, truly exploiting the multiple cores and achieving full CPU utilization.
(2) Preventing blocking
From the standpoint of program efficiency, a single-core CPU not only gains nothing from multithreading, it loses: running multiple threads on a single core causes thread context switching, which lowers overall efficiency. Yet we still use multithreading on single-core CPUs, precisely to prevent blocking. Imagine a single-threaded program on a single-core CPU: if that one thread blocks, say on a remote data read where the peer has not yet responded and no timeout is set, the whole program stops working until the data comes back. Multithreading prevents this problem: with several threads running, even if the thread doing the data read blocks, the other tasks keep executing.
(3) Easier modeling
This is another, less obvious advantage. Suppose you have one large task A. Programming it single-threaded means building a model of the whole process, which is a lot of trouble. But break the large task A into several small tasks — task B, task C, task D — build a program model for each, and run them on separate threads, and everything becomes much simpler.
2. Ways to create a thread
A fairly common question, with generally two answers:
(1) inheriting the Thread class
(2) implementing the Runnable interface
As for which is better: the latter, without question, because implementing an interface is more flexible than inheriting a class and reduces coupling between components — and "program to an interface" is one of the six core principles of design patterns.
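To make the two approaches concrete, here is a minimal runnable sketch (the class and method names are mine, purely illustrative):

```java
// A minimal sketch of the two ways to create a thread (names are illustrative).
public class ThreadCreation {

    // (1) Inheriting Thread -- uses up the single superclass slot
    static class MyThread extends Thread {
        @Override
        public void run() {
            System.out.println("running in " + getName());
        }
    }

    // (2) Implementing Runnable (here as a lambda) -- the preferred way
    static int runAndJoin() {
        final int[] result = new int[1];
        Thread t = new Thread(() -> result[0] = 21 * 2);
        t.start();                       // schedule the task on a new thread
        try {
            t.join();                    // wait for it to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return result[0];
    }

    public static void main(String[] args) {
        new MyThread().start();
        System.out.println(runAndJoin()); // 42
    }
}
```

The Runnable version also leaves the task class free to extend something else, which is exactly the flexibility argued above.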
3. Difference between the start() method and the run() method
Only by calling the start() method do you get multithreaded behavior, with the code inside different threads' run() methods executing alternately. If you just call the run() method, the code executes synchronously: one thread must finish everything inside its run() method before another thread can execute the code inside its own run() method.
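A small sketch of the difference (names are mine): calling run() directly executes the body on the calling thread, while start() hands it to a new thread:

```java
public class StartVsRun {

    // run() called directly: executes synchronously in the calling thread
    static String directRunThreadName() {
        final String[] name = new String[1];
        Thread t = new Thread(() -> name[0] = Thread.currentThread().getName());
        t.run();                     // no new thread is started here
        return name[0];              // the *caller's* thread name
    }

    // start(): the run() body executes on the new thread
    static String startedThreadName() {
        final String[] name = new String[1];
        Thread t = new Thread(() -> name[0] = Thread.currentThread().getName(), "worker");
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return name[0];              // "worker"
    }

    public static void main(String[] args) {
        System.out.println(directRunThreadName());
        System.out.println(startedThreadName()); // worker
    }
}
```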
4. Difference between the Runnable interface and the Callable interface
A slightly deeper question, and one that reveals the breadth of a Java programmer's knowledge.
The run() method of the Runnable interface returns void; it does nothing but execute the code inside run(). The call() method of the Callable interface returns a value — a generic type — and, combined with Future or FutureTask, can be used to obtain the result of an asynchronous execution.
This is actually a very useful feature, because a major reason multithreading is harder than single-threading is that multithreading is full of unknowns. Is a given thread still running? How long has it been executing? Has the data we expect it to produce been written yet? We cannot know; all we can do is wait for the multithreaded task to finish. With Callable + Future/FutureTask, however, you can check the state of a task, obtain its result, or cancel it — so you no longer have to wait blindly for the data you need.
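A minimal sketch of the Callable + FutureTask combination described above (class and method names are mine):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class CallableDemo {

    static int sumAsync() {
        Callable<Integer> task = () -> 1 + 2 + 3;    // returns a value, may throw
        FutureTask<Integer> future = new FutureTask<>(task);
        new Thread(future).start();                  // FutureTask is also a Runnable
        try {
            return future.get();                     // blocks until the result is ready
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(sumAsync()); // 6
    }
}
```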
5. Difference between CyclicBarrier and CountDownLatch
Two classes under java.util.concurrent that look somewhat alike: both can make code wait at a certain point. The differences between them:
(1) With a CyclicBarrier, a thread that reaches the barrier point stops running until all threads have reached that point, whereupon all of them resume. With a CountDownLatch that is not so: a thread reaching the point simply decrements the count by 1 and keeps running.
(2) A CyclicBarrier can trigger only one follow-up task (the barrier action); a CountDownLatch can release multiple waiting tasks.
(3) A CyclicBarrier is reusable; a CountDownLatch is not — once its count reaches 0, that CountDownLatch cannot be used again.
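As a small illustration of the CountDownLatch side of the comparison, here is a sketch (names are mine) where the main thread waits for n workers via await()/countDown():

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {

    static int runWorkers(int n) {
        CountDownLatch done = new CountDownLatch(n);
        AtomicInteger sum = new AtomicInteger();
        for (int i = 1; i <= n; i++) {
            final int v = i;
            new Thread(() -> {
                sum.addAndGet(v);
                done.countDown();        // decrement the count; never blocks the worker
            }).start();
        }
        try {
            done.await();                // main thread waits until the count reaches 0
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sum.get();
    }

    public static void main(String[] args) {
        System.out.println(runWorkers(4)); // 10
    }
}
```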
6. The role of the volatile keyword
A hugely important question that every Java programmer who studies or uses multithreading must master. Understanding the volatile keyword presupposes understanding the Java memory model, which is not covered here — see question 31. The volatile keyword plays two main roles:
(1) Multithreading revolves chiefly around two properties, visibility and atomicity. A variable modified with the volatile keyword is guaranteed to be visible across threads: every read of a volatile variable sees the latest written value.
(2) The code that actually executes underneath is not as simple as the high-level Java we write. Its path is: Java code -> bytecode -> C/C++ code executed by the JVM for the bytecode -> compilation to assembly -> interaction with the hardware. To get better performance, the JVM may reorder instructions, which can cause unexpected problems under multithreading. volatile semantics forbid such reordering — which, of course, also costs some efficiency.
From a practical angle, one important role of volatile is to combine with CAS to guarantee atomicity; for details see the classes under the java.util.concurrent.atomic package, such as AtomicInteger.
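As a concrete taste of the volatile + CAS combination, a sketch (names are mine) showing AtomicInteger counting correctly across threads where an unsynchronized int++ could lose updates:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {

    static int increment(int threads, int perThread) {
        AtomicInteger counter = new AtomicInteger(); // a volatile value plus a CAS retry loop
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) counter.incrementAndGet();
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return counter.get(); // always threads * perThread, unlike a plain shared int++
    }

    public static void main(String[] args) {
        System.out.println(increment(4, 1000)); // 4000
    }
}
```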
7. What is thread safety?
Again a theoretical question, with many different answers. I'll give the one I think explains it best: if your code always obtains the same result when executed by multiple threads as when executed by a single thread, then your code is thread-safe.
One thing worth mentioning about this question is that thread safety comes in several levels:
(1) Immutability
Classes like String, Integer and Long are all final: no thread can change their value; to change one, a new instance must be created. Such immutable objects can therefore be used directly in a multithreaded environment without any synchronization measures.
(2) Absolute thread safety
No matter the runtime environment, callers need no additional synchronization measures. Achieving this usually costs a great deal, and most of the classes Java itself marks as thread-safe are in fact not absolutely thread-safe. Still, absolutely thread-safe classes do exist in Java — CopyOnWriteArrayList and CopyOnWriteArraySet, for example.
(3) Relative thread safety
Relative thread safety is thread safety in the usual sense: in Vector, for example, the add and remove methods are atomic operations that will not be interrupted. But that is as far as it goes: if one thread is traversing a Vector while another thread is simultaneously adding to it, a ConcurrentModificationException will occur in 99% of cases — that is the fail-fast mechanism.
(4) Not thread-safe
Nothing much to say here: ArrayList, LinkedList and HashMap are all non-thread-safe classes.
8. How to obtain a thread dump file in Java
For endless loops, deadlocks, blocking, slow page loads and similar problems, taking a thread dump is the best way to diagnose the issue. A thread dump is just the thread stacks, and obtaining them takes two steps:
(1) Get the pid of the Java process, using the jps command; in a Linux environment you can also use ps -ef | grep java.
(2) Print the thread stacks, using the jstack pid command; in a Linux environment you can also use kill -3 pid.
One further note: the Thread class provides a getStackTrace() method that can also obtain a thread's stack. It is an instance method, so it is bound to a specific thread instance, and each call obtains the stack of that specific thread as it is currently running.
9. What happens if an exception occurs while a thread is running?
If the exception is not caught, the thread stops executing. Another important point: if the thread held the monitor of some object, that object's monitor is released immediately.
10. How to share data between two threads
Simply share an object between the threads, then coordinate waiting and waking with wait/notify/notifyAll or await/signal/signalAll. The blocking queue BlockingQueue, for example, is designed precisely for sharing data between threads.
11. What is the difference between the sleep method and the wait method?
A frequently asked question. Both the sleep method and the wait method can be used to give up the CPU for some time. The difference: if the thread holds an object's monitor, the sleep method does not release that object's monitor, whereas the wait method does.
12. What is the producer-consumer model good for?
A very theoretical but very important question:
(1) It improves the running efficiency of the whole system by balancing the producers' production capacity against the consumers' consumption capacity — the most important role of the producer-consumer model.
(2) Decoupling, a side benefit of the model. Decoupling means producers and consumers have fewer points of contact with each other, and the fewer the points of contact, the more each side can be developed independently without being constrained by the other.
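The model above can be sketched with the BlockingQueue mentioned in question 10 — the bounded queue itself does the balancing and the decoupling (names are mine, purely illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {

    static int consumeAll(int n) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // small bounded buffer
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= n; i++) queue.put(i);  // blocks while the buffer is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int sum = 0;
        try {
            for (int i = 0; i < n; i++) sum += queue.take(); // blocks while the buffer is empty
            producer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(consumeAll(5)); // 15
    }
}
```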
13, ThreadLocal what use
Simply put ThreadLocal is a kind of space for time approach, in which each Thread maintains a ThreadLocal to open address method to achieve.ThreadLocalMap, data isolation, data is not shared, naturally, there is no problem of security thread
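A minimal sketch of that isolation (names are mine): each thread writing to the same ThreadLocal sees only its own copy:

```java
public class ThreadLocalDemo {

    // each thread that touches LOCAL gets its own independent StringBuilder
    static final ThreadLocal<StringBuilder> LOCAL =
            ThreadLocal.withInitial(StringBuilder::new);

    static String isolatedValue() {
        LOCAL.get().append("main-data");
        Thread t = new Thread(() -> LOCAL.get().append("worker-data")); // writes its own copy
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return LOCAL.get().toString();   // unaffected by the worker thread's write
    }

    public static void main(String[] args) {
        System.out.println(isolatedValue()); // main-data
    }
}
```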
14. Why must wait() and notify()/notifyAll() be called inside a synchronized block?
The JDK mandates it: a thread must first acquire the object's lock before calling that object's wait(), notify() or notifyAll() method.
15. How do wait() and notify()/notifyAll() differ in giving up the object monitor?
The difference lies in when the monitor is released: the wait() method releases the object's monitor immediately, while notify()/notifyAll() release it only after the thread finishes executing the remaining code in the synchronized block.
16. Why use a thread pool?
To avoid frequently creating and destroying threads by reusing thread objects. In addition, a thread pool gives a project flexible control over the number of concurrent threads.
17. How to detect whether a thread holds an object's monitor
I only learned from an online collection of multithreading interview questions that there is a way to tell whether a thread holds an object's monitor: the Thread class provides a holdsLock(Object obj) method, which returns true if and only if the monitor of the object obj is held by some thread. Note that it is a static method, and the "some thread" it refers to is the current thread.
18, the difference between synchronized and ReentrantLock
is synchronized and if, else, for, while the same keyword, ReentrantLock is class, which is the essential difference between the two.Since ReentrantLock is a class, then it provides more flexibility than the synchronized characteristics can be inherited, there are ways you can, you can have a variety of class variables, scalability synchronized ReentrantLock than reflected in the points:
(1) ReentrantLock can set the waiting time to acquire the lock, thus avoiding the deadlock
(2) ReentrantLock may acquire various information lock
(3) ReentrantLock flexibility to achieve multi-channel notification
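Point (1) can be sketched with tryLock and a timeout (names are mine; the 50 ms figure is arbitrary):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {

    static boolean acquireWithTimeout() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                          // the main thread holds the lock and keeps it
        final boolean[] got = new boolean[1];
        Thread t = new Thread(() -> {
            try {
                // give up after 50 ms instead of blocking forever -- one way to avoid deadlock
                got[0] = lock.tryLock(50, TimeUnit.MILLISECONDS);
                if (got[0]) lock.unlock();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        lock.unlock();
        return got[0];                        // false: the lock never became free in time
    }

    public static void main(String[] args) {
        System.out.println(acquireWithTimeout()); // false
    }
}
```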
In addition, the two use different locking mechanisms underneath: ReentrantLock ultimately calls Unsafe's park method to block, while synchronized presumably operates on the mark word in the object header — though of that I cannot be certain.
19. What is the concurrency level of ConcurrentHashMap?
The concurrency level of ConcurrentHashMap is the number of segments, 16 by default, meaning up to 16 threads can operate on a ConcurrentHashMap simultaneously. That is ConcurrentHashMap's biggest advantage over Hashtable — under no circumstances can two threads fetch data from a Hashtable at the same time.
20, what is ReadWriteLock
First clear look at, not to say ReentrantLock bad, but sometimes there are limitations ReentrantLock.If you use ReentrantLock, may itself be in order to prevent the thread A write data, data inconsistencies thread B in the read data caused, but this way, if the thread C in reading data, the thread D also read data, the read data will not change the data, it is not necessary lock, but still locked, and reduces the performance of the program.
Because of this, before the birth of read-write locks ReadWriteLock.ReadWriteLock interfaces is a read-write lock, is a specific implementation ReentrantReadWriteLock with ReadWriteLock interface, realizes the separation of read and write, a read lock is shared, exclusive write lock is not mutually exclusive write and read, read and write, will mutually exclusive between the write and read, write, and write, read and write performance improves.
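A minimal sketch of the read/write separation (names are mine):

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockDemo {

    private static final ReadWriteLock rw = new ReentrantReadWriteLock();
    private static int value;

    static void write(int v) {
        rw.writeLock().lock();       // exclusive: blocks all readers and writers
        try {
            value = v;
        } finally {
            rw.writeLock().unlock();
        }
    }

    static int read() {
        rw.readLock().lock();        // shared: many readers may hold it at once
        try {
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        write(7);
        System.out.println(read()); // 7
    }
}
```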
21. What is FutureTask?
This was in fact mentioned earlier: FutureTask represents a task of asynchronous computation. A FutureTask can be given a concrete implementation of Callable, and then lets you wait for the result of the asynchronous computation, check whether the task has completed, and cancel the task. And since FutureTask is also an implementation of the Runnable interface, it can be submitted to a thread pool as well.
22. How to find, in a Linux environment, the thread that has used the most CPU time
A practical question — the kind I find quite meaningful. You can do this:
(1) get the pid of the project's JVM, with jps or ps -ef | grep java, as discussed earlier
(2) top -H -p pid — the order of the options cannot be changed
This prints the project's threads with each thread's percentage of CPU time. Note that what it prints is the LWP, i.e. the thread id of the native thread at the operating-system level. I don't have a Java project deployed in a Linux environment on my laptop, so I have no way to show a demo screenshot; if your company deploys projects on Linux, you can try it out.
With "top -H -p pid" + "jstack pid" you can easily locate the thread stack of a thread with high CPU usage and thereby pinpoint the cause of the high CPU usage — usually an endless loop produced by careless code.
One last note: the LWP printed by "top -H -p pid" is decimal, while the native thread id printed by "jstack pid" is hexadecimal. Convert between them and you can find the current thread stack of the CPU-hungry thread.
23. How a Java programmer writes a program that deadlocks
The first time I saw this question, I thought it an excellent one. Plenty of people know what deadlock is: thread A and thread B each wait for the lock held by the other, so the program spins forever. Of course, that is as far as it goes: ask them to write a deadlocking program and they are lost. This situation really means the person doesn't understand what a deadlock is, because understanding the theory alone doesn't cut it — and they won't recognize a deadlock when they hit one in practice either.
Once you really understand what deadlock is, the question is not hard. A few steps:
(1) The two threads hold two Object locks, lock1 and lock2, which serve as the locks of synchronized blocks;
(2) Thread 1's run() method first acquires the object lock lock1 in a synchronized block, then calls Thread.sleep(xxx) — the time need not be long, 50 milliseconds is about right — and then acquires the object lock lock2. The sleep is mainly there to prevent thread 1 from starting up and grabbing both the lock1 and lock2 object locks in one go;
(3) Thread 2's run() method first acquires the object lock lock2 in a synchronized block, then tries to acquire the object lock lock1 — which by then is held by thread 1, so thread 2 must certainly wait for thread 1 to release it.
Thus when thread 1 finishes "sleeping", thread 2 has already acquired the object lock lock2; thread 1 now tries to acquire lock2 and blocks — and a deadlock has formed. I won't write out the code here, as it would take a bit too much space; the article "Java multithreading 7: Deadlock" contains code implementing the steps above.
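For completeness, here is my own compact sketch of those steps (not the code from that article; names and timings are illustrative). It additionally uses ThreadMXBean to confirm the deadlock, and daemon threads so the JVM can still exit:

```java
import java.lang.management.ManagementFactory;

public class DeadlockDemo {

    static final Object lock1 = new Object();
    static final Object lock2 = new Object();

    static boolean createDeadlock() {
        Thread t1 = new Thread(() -> {
            synchronized (lock1) {
                pause(50);                    // give t2 time to grab lock2
                synchronized (lock2) { }      // blocks forever: t2 holds lock2
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (lock2) {
                pause(50);
                synchronized (lock1) { }      // blocks forever: t1 holds lock1
            }
        });
        t1.setDaemon(true);                   // daemon: JVM may exit despite the deadlock
        t2.setDaemon(true);
        t1.start();
        t2.start();
        pause(300);                           // wait for the deadlock to form
        long[] ids = ManagementFactory.getThreadMXBean().findDeadlockedThreads();
        return ids != null && ids.length >= 2;
    }

    private static void pause(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        System.out.println(createDeadlock() ? "deadlock detected" : "no deadlock");
    }
}
```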
24. How to wake up a blocked thread
If the thread is blocked because it called the wait(), sleep() or join() method, you can interrupt it, waking it up by way of a thrown InterruptedException. If the thread is blocked on IO, nothing can be done, because IO is implemented by the operating system and Java code has no way to reach into the operating system directly.
25. How do immutable objects help with multithreading?
As mentioned in an earlier question, immutable objects guarantee the memory visibility of the object: reading an immutable object requires no additional synchronization measures, which improves the efficiency of code execution.
26. What is a multithreaded context switch?
A multithreaded context switch is the process by which control of the CPU is transferred from one already-running thread to another thread that is ready and waiting to obtain the CPU.
27. What happens if you submit a task when the thread pool's queue is full?
If you are using a LinkedBlockingQueue, an unbounded queue, nothing special happens: tasks keep being added to the blocking queue to await execution, since a LinkedBlockingQueue can almost be regarded as an infinite queue that stores tasks without limit. If you are using a bounded queue, say an ArrayBlockingQueue, tasks are first added to the ArrayBlockingQueue; once it is full, the rejection policy, a RejectedExecutionHandler, is applied to the overflow tasks — AbortPolicy by default.
28. What thread scheduling algorithm does Java use?
Preemptive scheduling. After a thread uses up its CPU time, the operating system computes an overall priority from data such as thread priority and thread starvation, and assigns the next time slice to a particular thread for execution.
29. What is Thread.sleep(0) for?
This question relates to the one above, so I take them together. Because Java uses preemptive thread scheduling, it can happen that one thread keeps obtaining control of the CPU. To let lower-priority threads also get a turn at the CPU, you can call Thread.sleep(0), which manually triggers one round of the operating system's time-slice allocation — a way of balancing control of the CPU.
30. What is spinning?
Much synchronized code consists of only a few very simple statements that execute extremely fast. Blocking every thread that waits for the lock may then be a poor trade, because thread blocking involves switching between user mode and kernel mode. Since the code inside the synchronized block executes so quickly, it can be better not to block the waiting threads but to let them busy-loop at the boundary of the synchronized block — that is spinning. If many busy loops pass without the lock being obtained, blocking after all may be the better strategy.
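A spin lock can be sketched with an AtomicBoolean and a busy CAS loop (illustrative, non-production code; `Thread.onSpinWait()` assumes JDK 9+):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {

    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // busy-wait (spin) instead of blocking: cheap when critical sections are tiny
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();   // JDK 9+ hint to the CPU that we are in a spin loop
        }
    }

    void unlock() {
        locked.set(false);
    }

    static int runDemo() {
        SpinLock lock = new SpinLock();
        final int[] counter = {0};
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                lock.lock();
                try {
                    counter[0]++;     // tiny critical section -- the spin-friendly case
                } finally {
                    lock.unlock();
                }
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start();
        b.start();
        try {
            a.join();
            b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter[0];
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // 2000
    }
}
```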
31. What is the Java memory model?
The Java memory model defines a specification for multithreaded access to memory in Java. Explaining the Java memory model completely takes more than a few sentences, so I will summarize it briefly.
Several parts of the Java memory model:
(1) The Java memory model divides memory into main memory and working memory. The state of a class — i.e. the variables shared between threads — is stored in main memory. Whenever a Java thread uses these variables, it reads them from main memory into copies in its own working memory, and the thread's own code then operates on those copies. After the thread's code finishes, it writes the latest values back to main memory.
(2) It defines several atomic operations for manipulating variables in main memory and working memory.
(3) It defines the rules for using volatile variables.
(4) happens-before, i.e. the "occurs before" principle, defines the rules under which operation A is bound to occur before operation B. For example, within the same thread, code earlier in the control flow happens-before code later in it; an unlock action on a lock happens-before a subsequent lock action on the same lock; and so on. As long as these rules are satisfied, no additional synchronization measures are needed; if a piece of code satisfies none of the happens-before rules, then that code must be non-thread-safe.
32. What is CAS?
CAS stands for Compare and Swap, i.e. compare-and-set. Suppose three operands: the memory value V, the old expected value A, and the value to set, B. If and only if the expected value A equals the memory value V is the memory value changed to B and true returned; otherwise nothing is done and false is returned. Of course, a variable used with CAS must be volatile, to guarantee that each read obtains the latest value in main memory — otherwise the old expected value A would, for some thread, be an A that never changes, and as long as one CAS operation failed, it would never succeed.
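A concrete sketch of the compare-and-swap semantics using AtomicInteger (names are mine):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {

    static int casTwice() {
        AtomicInteger v = new AtomicInteger(5);     // memory value V = 5
        boolean first = v.compareAndSet(5, 9);      // expected A = 5 matches V: set to 9, true
        boolean second = v.compareAndSet(5, 7);     // expected A = 5, but V is now 9: false
        return (first && !second) ? v.get() : -1;   // 9
    }

    public static void main(String[] args) {
        System.out.println(casTwice()); // 9
    }
}
```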
33. What are optimistic and pessimistic locking?
(1) Optimistic locking: as the name suggests, it takes an optimistic attitude toward the thread-safety problems produced by concurrent operations. Optimistic locking assumes contention does not always occur, so it does not hold a lock; it treats compare-and-set as one atomic operation that attempts to modify the variable in memory, and a failure signals a conflict, for which there should be corresponding retry logic.
(2) Pessimistic locking: again as the name suggests, it takes a pessimistic attitude toward the thread-safety problems produced by concurrent operations. Pessimistic locking assumes contention will always occur, so every time it operates on a resource it holds an exclusive lock — like synchronized: no matter what, it locks the resource before operating on it.
34. What is AQS?
A brief word about AQS. AQS is short for AbstractQueuedSynchronizer, which should translate as abstract queued synchronizer.
If CAS is the foundation of java.util.concurrent, then AQS is the core of the whole Java concurrency package; ReentrantLock, CountDownLatch, Semaphore and others all use it. AQS in effect links all the entries together in the form of a doubly linked queue. In ReentrantLock, for instance, all waiting threads are placed as entries in the doubly linked queue; if some thread in front is using the ReentrantLock, the entry at the head of the queue is in fact the one running.
AQS defines all the operations on this doubly linked queue, but opens only methods such as tryAcquire and tryRelease to developers, who can override them according to their own implementation to realize their own concurrency utilities.
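To illustrate, here is a minimal non-reentrant mutex built on AQS by overriding tryAcquire/tryRelease — a textbook-style sketch of mine, not production code:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class Mutex {

    // Minimal non-reentrant lock built on AQS: state 0 = free, 1 = held
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            return compareAndSetState(0, 1);   // atomically claim the lock via CAS
        }

        @Override
        protected boolean tryRelease(int arg) {
            setState(0);                       // free it; AQS then wakes a queued thread
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }   // enqueues the thread if tryAcquire fails
    public void unlock() { sync.release(1); }

    static int runDemo() {
        Mutex m = new Mutex();
        final int[] n = {0};
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                m.lock();
                try {
                    n[0]++;
                } finally {
                    m.unlock();
                }
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start();
        b.start();
        try {
            a.join();
            b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return n[0];
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // 2000
    }
}
```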
35. Thread safety of the singleton pattern
A commonplace question. The first thing to say is what a thread-safe singleton means: an instance of the class is created once and only once, even in a multithreaded environment. There are many ways to write the singleton pattern; let me summarize:
(1) the eager (hungry) singleton: thread-safe
(2) the lazy singleton: not thread-safe
(3) the double-checked-locking singleton: thread-safe
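For reference, writing (3) looks like this (a sketch, names are mine) — note the volatile, without which the double check is broken:

```java
public class Singleton {

    // volatile is essential: it forbids the reordering that could publish
    // a half-constructed instance to another thread
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no lock (fast path)
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance()); // true
    }
}
```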
36. What is Semaphore for?
A Semaphore is a semaphore; its role is to limit the concurrency of a block of code. Semaphore has a constructor taking an int n, meaning at most n threads may access that code at once; beyond n, a thread must wait until one of the current threads finishes the block before the next thread can enter. It follows that if the int passed to the Semaphore constructor is n = 1, it degenerates into a synchronized block.
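A sketch (names and timings are mine) showing that with n permits the observed concurrency never exceeds n:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class SemaphoreDemo {

    static int maxObservedConcurrency(int permits, int threads) {
        Semaphore sem = new Semaphore(permits);
        AtomicInteger active = new AtomicInteger();
        AtomicInteger max = new AtomicInteger();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                try {
                    sem.acquire();                            // blocks once all permits are taken
                    try {
                        int now = active.incrementAndGet();
                        max.accumulateAndGet(now, Math::max); // record the peak concurrency
                        Thread.sleep(20);                     // pretend to do some work
                        active.decrementAndGet();
                    } finally {
                        sem.release();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return max.get();   // never exceeds the number of permits
    }

    public static void main(String[] args) {
        System.out.println(maxObservedConcurrency(2, 6) <= 2); // true
    }
}
```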
37. Hashtable's size() method clearly contains only the single statement "return count" — why does it still need synchronization?
This puzzled me for a while; I wonder whether you have thought about it too. If a method has several statements all operating on a shared class variable, then leaving it unlocked in a multithreaded environment inevitably causes thread-safety problems — that much is easy to understand. But the size() method clearly has only one statement: why lock it?
Through working and studying over time, I came to understand. There are two main reasons:
(1) Only one thread at a time may execute the synchronized methods of a given instance, but its non-synchronized methods can be accessed by multiple threads simultaneously. So a problem can arise: thread A may be adding data via Hashtable's put method while thread B can still call the size() method normally and read the current element count of the Hashtable — and the value read may be stale. Thread A may have finished adding the data but not yet executed size++, so the size that thread B reads is certainly inaccurate. Making size() synchronized means thread B may call it only after thread A's put call completes, which guarantees thread safety.
(2) The CPU executes machine code, not Java code — this is crucial and must be remembered. Java code is eventually translated into assembly code for execution, the code that can truly interact with the hardware. Even if you see only one line of Java code, and even if the bytecode compiled from that line is only one instruction, that does not mean the underlying operation is a single step. Suppose "return count" is translated into three assembly instructions: it is entirely possible for the thread to be switched out after executing the first of them.
38. Which thread calls a Thread object's constructor and static block?
A very tricky, crafty question. Remember: a Thread object's constructor and static block are called by the thread in which that Thread object is new-ed, while the code inside its run() method is called by the thread itself.
If that statement confuses you, an example: suppose Thread1 is new-ed inside Thread2, and Thread2 is new-ed in the main function. Then:
(1) Thread2's constructor and static block are called by the main thread; Thread2's run() method is called by Thread2 itself
(2) Thread1's constructor and static block are called by Thread2; Thread1's run() method is called by Thread1 itself
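A sketch (names are mine) that makes the point observable: the constructor records the thread that runs it, while the run() body sees the new thread:

```java
public class WhoCallsWhat {

    static String constructorThread;   // which thread ran the constructor

    static class Worker extends Thread {
        Worker() {
            // runs in whichever thread news the Worker, NOT in the Worker thread
            constructorThread = Thread.currentThread().getName();
        }
    }

    static String[] demo() {
        new Worker();                  // constructor executes on the current thread
        final String[] runThread = new String[1];
        Thread t = new Thread(() -> runThread[0] = Thread.currentThread().getName(), "worker");
        t.start();                     // the run() body executes on the new thread
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return new String[] { constructorThread, runThread[0] };
    }

    public static void main(String[] args) {
        String[] r = demo();
        System.out.println(r[0] + " / " + r[1]); // e.g. "main / worker"
    }
}
```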
39. Synchronized block or synchronized method — which is the better choice?
The synchronized block: it means the code outside the block runs asynchronously, which improves overall efficiency compared with synchronizing the whole method. Please remember one principle: the smaller the scope of synchronization, the better.
With that, let me add one extra point: although a smaller synchronized scope is better, the Java virtual machine has an optimization called lock coarsening that goes the other way and enlarges the synchronized scope. And it is useful. Take StringBuffer: it is a thread-safe class, and naturally its most commonly used method, append(), is a synchronized method. When we write code we append strings repeatedly, which means repeated lock -> unlock cycles — bad for performance, since it forces the Java virtual machine to switch this thread repeatedly between kernel mode and user mode. So the Java virtual machine coarsens the lock across the repeated append() calls, extending the lock to before the first append and after the last, turning them into one large synchronized block. That reduces the number of lock -> unlock cycles and effectively improves the efficiency of the code.
40. How should a thread pool be used for high-concurrency business with short task execution times? For low-concurrency business with long task execution times? For high-concurrency business with long task execution times?
This is a question I saw on a concurrent-programming site. I put it last in the hope that everyone reads it and thinks about it, because it is an excellent, very practical, very professional question. On this question, my personal view:
(1) High concurrency, short task execution time: set the pool's thread count to about the number of CPU cores + 1, reducing thread context switching.
(2) Low concurrency, long task execution time: it depends on the kind of business:
a) If the time is spent mostly on IO operations — an IO-intensive task — then since IO does not occupy the CPU, don't let the CPU sit idle: increase the number of threads in the pool so the CPU can handle more business.
b) If the time is spent mostly on computation — a CPU-intensive task — there is nothing clever to do; as in (1), keep the number of threads in the pool small, reducing thread context switching.
(3) High concurrency, long task execution time: the key to this kind of task lies not in the thread pool but in the overall architecture. See whether some of the business data can be cached (step one) and whether servers can be added (step two); the thread-pool settings then follow (2). Finally, long execution times may themselves need analysis: see whether middleware can be used to split the tasks and decouple them.
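Points (1) and (2) can be sketched as thread-pool configurations — a sketch under the sizing heuristics above; the queue size, multiplier and names are illustrative, not definitive values:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizing {

    // CPU-bound work: roughly cores + 1 threads, minimising context switches
    static ExecutorService cpuBoundPool() {
        int n = Runtime.getRuntime().availableProcessors() + 1;
        return new ThreadPoolExecutor(n, n, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(100),          // bounded queue: rejects instead of growing forever
                new ThreadPoolExecutor.AbortPolicy());  // default rejection policy (question 27)
    }

    // IO-bound work: threads mostly wait, so use more threads than cores
    static ExecutorService ioBoundPool() {
        return Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() * 2);
    }

    static int demo() {
        ExecutorService pool = cpuBoundPool();
        try {
            return pool.submit(() -> 6 * 7).get();      // run one small task and fetch its result
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 42
    }
}
```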
Types of Java thread blocking (Blocked):
(1) Sleeping: a call to a sleep function, e.g. Thread.sleep(1000) or TimeUnit.SECONDS.sleep(1); sleeping does not release any lock.
(2) Waiting for an event, in two flavors — (wait, notify, notifyAll) and (await, signal, signalAll) — detailed later. wait and await release the lock, and must be called in an environment where the lock has already been acquired.
(3) Waiting for a lock: in a synchronized or Lock environment, the lock has been taken by another thread, and the thread waits to acquire it.
(4) IO blocking, such as network latency, opening a file, or reading from the console: System.in.read().