1, Processes & Threads
Most implementations of the Java virtual machine run as a single process.
Threads exist within a process — every process has at least one. Threads share the process's resources, including memory and open files. This makes for efficient, but potentially problematic, communication.
From the application programmer's point of view, you start with just one thread, called the main thread. This thread has the ability to create additional threads, as we'll demonstrate in the next section.
2, Interrupt
Interruption in Java is not pre-emptive. Put another way, both threads have to cooperate in order to process the interrupt properly. If the target thread does not poll the interrupted status, the interrupt is effectively ignored.
Polling occurs via the Thread.interrupted() method, which returns the current thread's interrupted status AND clears that interrupt flag. Usually the thread might then do something such as throw InterruptedException.
Some API methods have built-in interrupt handling. Off the top of my head, these include:
- Object.wait() / Thread.sleep()
- Most java.util.concurrent structures
- Java NIO (but not java.io), and it does NOT use InterruptedException, instead using ClosedByInterruptException.
Interrupts are not explicitly handled by synchronization - synchronized blocks only guarantee that, while executing, the block cannot be entered by another thread.
Summary: Methods like sleep() throw InterruptedException themselves. If the code you call can throw this exception, either catch it and handle the interrupt, or simply declare throws InterruptedException on the current method.
Alternatively, call Thread.interrupted() yourself to detect the interrupted status and handle it.
An ordinary synchronized method does not automatically handle the interrupt and exit.
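A minimal sketch of both approaches, assuming a hypothetical doChunkOfWork() unit of work; everything else is standard java.lang API:

public class InterruptDemo implements Runnable {
    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) { // poll the interrupted status
            doChunkOfWork();
            try {
                Thread.sleep(100); // sleep() throws InterruptedException if the thread is interrupted
            } catch (InterruptedException e) {
                // sleep() clears the interrupted flag before throwing, so restore it and stop
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    private void doChunkOfWork() { /* hypothetical unit of work */ }
}

A caller would run this with new Thread(new InterruptDemo()).start() and later call interrupt() on that Thread to request a stop.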
3, Java memory model
Summary: Even in a single-threaded program the compiler will reorder instructions to optimize running time, as long as executing the reordered instructions produces the same result as the original order.
In a multithreaded program, however, the compiler's reordering of thread 1's instructions can affect the result thread 2 observes. So for multiple threads sharing memory, to achieve the happens-before effect [i.e., thread 1's results are visible to thread 2], the Java memory model defines a set of rules. Behind these rules, the Java compiler inserts memory barrier instructions at appropriate points in the generated instruction sequence to forbid particular kinds of processor reordering.
The reason: On modern platforms, code is frequently not executed in the order it was written. It is reordered by the compiler, the processor and the memory subsystem to achieve maximum performance. On multiprocessor architectures, individual processors may have their own local caches that are out of sync with main memory. It is generally undesirable to require threads to remain perfectly in sync with one another because this would be too costly from a performance point of view. This means that at any given time, different threads may see different values for the same shared data.
| Thread 1 | Thread 2 |
|---|---|
| x = 1; | int r1 = y; |
| y = 2; | int r2 = x; |
If no reorderings are performed, and the read of y in Thread 2 returns the value 2, then the subsequent read of x should return the value 1, because the write to x was performed before the write to y. However, if the two writes are reordered, then the read of y can return the value 2, and the read of x can return the value 0.
Therefore: The Java Memory Model (JMM) defines the allowable behavior of multithreaded programs, and therefore describes when such reorderings are possible. It places execution-time constraints on the relationship between threads and main memory in order to achieve consistent and reliable Java applications.
Rules that Java Memory Model uses to guarantee happens-before:
1. Program order rule: within a single thread, an operation that occurs earlier in program order happens-before an operation that occurs later.
2. Monitor lock rule: an unlock operation happens-before a later (in time, same below) lock operation on the same lock.
3. Volatile variable rule: a write to a volatile variable happens-before a later read of that same variable.
...
The basic rules imply that individual actions can be reordered, as long as the as-if-serial semantics of the thread are not violated, and actions that imply communication between threads, such as the acquisition or release of a lock, ensure that actions that happen prior to them are seen by other threads that see their effects. For example, everything that happens before the release of a lock will be seen to be ordered before and visible to everything that happens after a subsequent acquisition of that same lock.
In Java specifically, a happens-before relationship is a guarantee that memory written to by statement A is visible to statement B, that is, that statement A completes its write before statement B starts its read.
A memory barrier, also known as a membar, memory fence or fence instruction, is a type of barrier instruction which causes a central processing unit (CPU) or compiler to enforce an ordering constraint on memory operations issued before and after the barrier instruction. This typically means that operations issued prior to the barrier are guaranteed to be performed before operations issued after the barrier.
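As a concrete sketch of the table above (class and field names here are illustrative): with plain fields, the outcome r1 == 2 and r2 == 0 is allowed; declaring y volatile rules it out, because the write to x happens-before the volatile write to y, which happens-before a read of y that sees 2.

public class ReorderingDemo {
    static int x = 0;
    static volatile int y = 0; // without volatile, r1 == 2 && r2 == 0 would be a legal result

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { x = 1; y = 2; });
        Thread t2 = new Thread(() -> {
            int r1 = y;
            int r2 = x;
            System.out.println("r1=" + r1 + ", r2=" + r2); // with volatile y, r1 == 2 implies r2 == 1
        });
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}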
4, Synchronized vs. Volatile
A read of a volatile variable always sees the most recent write to that volatile variable by any thread.
Locking (i.e., synchronization) guarantees both visibility and atomicity, whereas a volatile variable only guarantees visibility. The reason is that if the next value of a volatile variable depends on its current value, the volatile keyword does not help; in other words, expressions such as "count++" and "count = count + 1" are not atomic operations.
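A small sketch of that point (class and field names are illustrative): count++ on a volatile field is a three-step read-modify-write, so concurrent increments can be lost; a synchronized method (or an atomic class, see section 5) restores atomicity.

public class Counters {
    private volatile long volatileCount = 0;
    private long syncCount = 0;

    // NOT atomic: two threads can read the same value and both write value + 1,
    // losing an increment, even though the field is volatile.
    public void unsafeIncrement() {
        volatileCount++;
    }

    // Atomic: the intrinsic lock makes the read-modify-write a single indivisible step.
    public synchronized void safeIncrement() {
        syncCount++;
    }
}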
How to synchronize a HashMap?
Map m = Collections.synchronizedMap(new HashMap());
...
Set s = m.keySet();  // Needn't be in synchronized block
...
synchronized (m) {  // Synchronizing on m, not s!
    Iterator i = s.iterator(); // Must be in synchronized block
    while (i.hasNext())
        foo(i.next());
}
5, Atomic operations
In concurrent programming, an operation (or set of operations) is atomic, linearizable, indivisible or uninterruptible if it appears to the rest of the system to occur instantaneously.
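The java.util.concurrent.atomic classes expose such operations directly; a brief sketch (the method names recordHit/resetIfEquals are illustrative):

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    private final AtomicInteger hits = new AtomicInteger(0);

    public int recordHit() {
        // getAndIncrement() is one atomic read-modify-write (built on compare-and-swap),
        // so no increment is lost under concurrent calls.
        return hits.getAndIncrement();
    }

    public boolean resetIfEquals(int expected) {
        // compareAndSet atomically writes 0 only if the current value still equals `expected`.
        return hits.compareAndSet(expected, 0);
    }
}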
6, Intrinsic locks
Intrinsic locks are the foundation of synchronization. A synchronized method acquires the intrinsic lock of the current object, while a synchronized block acquires the intrinsic lock of a specified object.
Synchronization is built around an internal entity known as the intrinsic lock. Intrinsic locks play a role in both aspects of synchronization: enforcing exclusive access to an object's state and establishing happens-before relationships that are essential to visibility. Every object has an intrinsic lock associated with it.
public class MsLunch {
    private long c1 = 0;
    private long c2 = 0;
    private Object lock1 = new Object();
    private Object lock2 = new Object();

    public void inc1() {
        synchronized(lock1) {
            c1++;
        }
    }

    public void inc2() {
        synchronized(lock2) {
            c2++;
        }
    }
}
When should you prefer a synchronized block?
When you only want to synchronize part of the code in a method, or, as in the example above, when c1 and c2 are never used together, so there is no need to block inc1() while inc2() is executing; using two separate lock objects improves throughput.
7, Blocking queue
http://tutorials.jenkov.com/java-concurrency/blocking-queues.html
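A minimal producer/consumer sketch along the lines of that tutorial (the capacity of 10 and the message strings are arbitrary): put() blocks while the queue is full, take() blocks while it is empty.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueueDemo {
    public static void main(String[] args) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put("message-" + i); // blocks if the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    System.out.println(queue.take()); // blocks if the queue is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}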
8, "double-checked locking" pattern
9, ThreadLocal
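A brief sketch of typical ThreadLocal usage (the SimpleDateFormat use case is just one common illustration): each thread gets its own lazily created copy, so a non-thread-safe object can be reused without locking.

import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatter {
    // Each thread gets its own SimpleDateFormat, created on that thread's first call,
    // because SimpleDateFormat itself is not thread-safe.
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String format(Date date) {
        return FORMAT.get().format(date);
    }
}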
10, Volatile
When a field is declared volatile, the compiler and runtime are put on notice that this variable is shared and that operations on it should not be reordered with other memory operations. Volatile variables are not cached in registers or in caches where they are hidden from other processors, so a read of a volatile variable always returns the most recent write by any thread.
When thread A writes to a volatile variable and subsequently thread B reads that same variable, the values of all variables that were visible to A prior to writing to the volatile variable become visible to B after reading the volatile variable. So from a memory visibility perspective, writing a volatile variable is like exiting a synchronized block and reading a volatile variable is like entering a synchronized block.
You can use volatile variables only when all the following criteria are met:
- Writes to the variable do not depend on its current value, or you can ensure that only a single thread ever updates the value;
- The variable does not participate in invariants with other state variables;
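A typical use that satisfies both criteria is a status flag written by one thread and read by another; a sketch with illustrative names:

public class Worker implements Runnable {
    // Writes do not depend on the current value, and the flag is not tied to other state,
    // so volatile alone provides the needed visibility.
    private volatile boolean shutdownRequested = false;

    public void shutdown() {
        shutdownRequested = true; // called from another thread
    }

    @Override
    public void run() {
        while (!shutdownRequested) {
            // do one unit of work
        }
    }
}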
11, Mutex vs Semaphore
A mutex provides mutual exclusion, either producer or consumer can have the key (mutex) and proceed with their work. As long as the buffer is filled by producer, the consumer needs to wait, and vice versa.
At any point in time, only one thread can work with the entire buffer. The concept can be generalized using a semaphore.
A semaphore is a generalized mutex. Semaphore can be counted, while mutex can only count to 1.
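In Java, java.util.concurrent.Semaphore expresses the counted version directly; a small sketch (the permit count of 3 and the class name are arbitrary):

import java.util.concurrent.Semaphore;

public class ConnectionLimiter {
    // At most 3 threads may hold a "connection" at once; new Semaphore(1) would behave like a mutex.
    private final Semaphore permits = new Semaphore(3);

    public void useConnection() throws InterruptedException {
        permits.acquire();     // blocks while all permits are taken
        try {
            // work with the shared resource
        } finally {
            permits.release(); // always hand the permit back
        }
    }
}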
12, java.util.concurrent.locks package
Using a Lock is a more flexible alternative to synchronized; it lets you add more logic around locking.
Interface Lock [methods: lock(), unlock()]. Implementations include ReentrantLock; the related ReadWriteLock interface (implemented by ReentrantReadWriteLock) maintains a pair of read/write locks.
Vs. synchronized: more extensive locking operations, more flexible structuring, possibly quite different properties, and support for multiple associated Condition objects. A ReadWriteLock may allow concurrent read access to a shared resource.
ReentrantLock is not the same thing as reentrant synchronization [synchronized is already reentrant by default].
package com.journaldev.threads.lock;

public class SynchronizedLockExample implements Runnable {

    private Resource resource;

    public SynchronizedLockExample(Resource r) {
        this.resource = r;
    }

    @Override
    public void run() {
        synchronized (resource) {
            resource.doSomething();
        }
        resource.doLogging();
    }
}

package com.journaldev.threads.lock;

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ConcurrencyLockExample implements Runnable {

    private Resource resource;
    private Lock lock;

    public ConcurrencyLockExample(Resource r) {
        this.resource = r;
        this.lock = new ReentrantLock();
    }

    @Override
    public void run() {
        try {
            // Here, better than synchronized: you can set a timeout so the thread does not wait forever.
            if (lock.tryLock(10, TimeUnit.SECONDS)) {
                try {
                    resource.doSomething();
                } finally {
                    // Release the lock. More involved than synchronized: you must use try/finally,
                    // and only unlock if tryLock actually acquired the lock.
                    lock.unlock();
                }
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        resource.doLogging();
    }
}
13, ExecutorService
The java.util.concurrent.ExecutorService interface represents an asynchronous execution mechanism which is capable of executing tasks in the background. An ExecutorService is thus very similar to a thread pool.
ExecutorService executorService = Executors.newFixedThreadPool(10);
executorService.execute(new Runnable() {
    public void run() {
        System.out.println("Asynchronous task");
    }
});
executorService.shutdown(); // Tells the ExecutorService to shut down; it lets the tasks already submitted finish and then stops. If you never call shutdown(), the JVM will not be able to exit.
Creating an ExecutorService:
ExecutorService executorService1 = Executors.newSingleThreadExecutor();
ExecutorService executorService2 = Executors.newFixedThreadPool(10);
ExecutorService executorService3 = Executors.newScheduledThreadPool(10);
ExecutorService methods:
- execute(Runnable)
- submit(Runnable): returns a Future; future.get() returns null if the task has finished correctly.
- submit(Callable): returns a Future; future.get() returns the result of callable.call().
- invokeAny(...) (see the sketch below)
- invokeAll(...) (see the sketch below)
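A brief sketch of the last two (the task bodies are arbitrary): invokeAll runs every Callable and returns all the Futures, while invokeAny returns the result of one task that completed successfully and cancels the rest.

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvokeDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(3);
        List<Callable<String>> tasks = List.of(
                () -> "task 1",
                () -> "task 2",
                () -> "task 3");

        List<Future<String>> all = executor.invokeAll(tasks); // waits for every task to finish
        for (Future<String> f : all) {
            System.out.println(f.get());
        }

        String any = executor.invokeAny(tasks); // result of one task that succeeded
        System.out.println(any);

        executor.shutdown();
    }
}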
Callable: works like Runnable, except that it returns a result.
import java.util.concurrent.Callable;

public class MyCallable implements Callable<Long> { // Callable is a generic interface
    @Override
    public Long call() throws Exception { // must override the T call() method
        long sum = 0;
        for (long i = 0; i <= 100; i++) {
            sum += i;
        }
        return sum;
    }
}
Callable<Long> worker = new MyCallable();
Future<Long> submit = executor.submit(worker);
public interface Future<V>
The executor framework presented above works with Runnables. A Runnable does not return a result.
In case you expect your threads to return a computed result, you can use java.util.concurrent.Callable. A Callable allows a value to be returned after completion.
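Tying it together, a minimal usage sketch for the MyCallable class above:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableMain {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<Long> future = executor.submit(new MyCallable());

        Long sum = future.get(); // blocks until call() has completed
        System.out.println("Sum = " + sum);

        executor.shutdown();     // let the JVM exit once the task is done
    }
}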