Caches Explained
Example
LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
    .maximumSize(1000)
    .expireAfterWrite(10, TimeUnit.MINUTES)
    .removalListener(MY_LISTENER)
    .build(
        new CacheLoader<Key, Graph>() {
          public Graph load(Key key) throws AnyException {
            return createExpensiveGraph(key);
          }
        });
Applicability
Caches are tremendously useful in a wide variety of use cases. For example, you should consider using caches when a value is expensive to compute or retrieve, and you will need its value on a certain input more than once.
A Cache is similar to ConcurrentMap, but not quite the same. The most fundamental difference is that a ConcurrentMap persists all elements that are added to it until they are explicitly removed. A Cache on the other hand is generally configured to evict entries automatically, in order to constrain its memory footprint. In some cases a LoadingCache can be useful even if it doesn't evict entries, due to its automatic cache loading.
Generally, the Guava caching utilities are applicable whenever:
- You are willing to spend some memory to improve speed.
- You expect that keys will sometimes get queried more than once.
- Your cache will not need to store more data than what would fit in RAM. (Guava caches are local to a single run of your application. They do not store data in files, or on outside servers. If this does not fit your needs, consider a tool like Memcached.)
If each of these apply to your use case, then the Guava caching utilities could be right for you!
Obtaining a Cache is done using the CacheBuilder builder pattern as demonstrated by the example code above, but customizing your cache is the interesting part.
Note: If you do not need the features of a Cache, ConcurrentHashMap is more memory-efficient -- but it is extremely difficult or impossible to duplicate most Cache features with any old ConcurrentMap.
Population
The first question to ask yourself about your cache is: is there some sensible default function to load or compute a value associated with a key? If so, you should use a CacheLoader. If not, or if you need to override the default, but you still want atomic "get-if-absent-compute" semantics, you should pass a Callable into a get call. Elements can be inserted directly, using Cache.put, but automatic cache loading is preferred as it makes it easier to reason about consistency across all cached content.
From a CacheLoader
A LoadingCache is a Cache built with an attached CacheLoader. Creating a CacheLoader is typically as easy as implementing the method V load(K key) throws Exception. So, for example, you could create a LoadingCache with the following code:
LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
    .maximumSize(1000)
    .build(
        new CacheLoader<Key, Graph>() {
          public Graph load(Key key) throws AnyException {
            return createExpensiveGraph(key);
          }
        });
...
try {
  return graphs.get(key);
} catch (ExecutionException e) {
  throw new OtherException(e.getCause());
}
The canonical way to query a LoadingCache is with the method get(K). This will either return an already cached value, or else use the cache's CacheLoader to atomically load a new value into the cache. Because CacheLoader might throw an Exception, LoadingCache.get(K) throws ExecutionException. If you have defined a CacheLoader that does not declare any checked exceptions then you can perform cache lookups using getUnchecked(K); however care must be taken not to call getUnchecked on caches whose CacheLoaders declare checked exceptions.
LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
    .expireAfterAccess(10, TimeUnit.MINUTES)
    .build(
        new CacheLoader<Key, Graph>() {
          public Graph load(Key key) { // no checked exception
            return createExpensiveGraph(key);
          }
        });
...
return graphs.getUnchecked(key);
Bulk lookups can be performed with the method getAll(Iterable<? extends K>). By default, getAll will issue a separate call to CacheLoader.load for each key which is absent from the cache. When bulk retrieval is more efficient than many individual lookups, you can override CacheLoader.loadAll to exploit this. The performance of getAll(Iterable) will improve accordingly.
Note that you can write a CacheLoader.loadAll implementation that loads values for keys that were not specifically requested. For example, if computing the value of any key from some group gives you the value for all keys in the group, loadAll might load the rest of the group at the same time.
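If your backing store supports batching, a loadAll override might look roughly like the sketch below; createExpensiveGraphs is a hypothetical batch helper (not part of Guava) that returns a Map<Key, Graph> for the requested keys.
LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
    .maximumSize(1000)
    .build(
        new CacheLoader<Key, Graph>() {
          public Graph load(Key key) {
            return createExpensiveGraph(key);
          }

          @Override
          public Map<Key, Graph> loadAll(Iterable<? extends Key> keys) {
            // one batched computation instead of a separate load(key) per absent key
            return createExpensiveGraphs(keys);
          }
        });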
From a Callable
All Guava caches, loading or not, support the method get(K, Callable<V>). This method returns the value associated with the key in the cache, or computes it from the specified Callable and adds it to the cache. No observable state associated with this cache is modified until loading completes. This method provides a simple substitute for the conventional "if cached, return; otherwise create, cache and return" pattern.
Cache<Key, Value> cache = CacheBuilder.newBuilder()
    .maximumSize(1000)
    .build(); // look Ma, no CacheLoader
...
try {
  // If the key wasn't in the "easy to compute" group, we need to
  // do things the hard way.
  cache.get(key, new Callable<Value>() {
    @Override
    public Value call() throws AnyException {
      return doThingsTheHardWay(key);
    }
  });
} catch (ExecutionException e) {
  throw new OtherException(e.getCause());
}
Inserted Directly
Values may be inserted into the cache directly with cache.put(key, value). This overwrites any previous entry in the cache for the specified key. Changes can also be made to a cache using any of the ConcurrentMap methods exposed by the Cache.asMap() view. Note that no method on the asMap view will ever cause entries to be automatically loaded into the cache. Further, the atomic operations on that view operate outside the scope of automatic cache loading, so Cache.get(K, Callable<V>) should always be preferred over Cache.asMap().putIfAbsent in caches which load values using either CacheLoader or Callable.
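For contrast, a minimal sketch of the two approaches (Key, Value, value and computeValue are placeholder names, not Guava API):
Cache<Key, Value> cache = CacheBuilder.newBuilder().maximumSize(1000).build();

// Direct insertion: unconditionally replaces any existing entry for this key.
cache.put(key, value);

// Preferred "get-if-absent-compute": call() runs only if the key is absent, atomically.
try {
  cache.get(key, new Callable<Value>() {
    public Value call() {
      return computeValue(key);
    }
  });
} catch (ExecutionException e) {
  throw new OtherException(e.getCause());
}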
Eviction
The cold hard reality is that we almost certainly don't have enough memory to cache everything we could cache. You must decide: when is it not worth keeping a cache entry? Guava provides three basic types of eviction: size-based eviction, time-based eviction, and reference-based eviction.
Size-based Eviction
If your cache should not grow beyond a certain size, just use CacheBuilder.maximumSize(long). The cache will try to evict entries that haven't been used recently or very often. Warning: the cache may evict entries before this limit is exceeded -- typically when the cache size is approaching the limit.
Alternately, if different cache entries have different "weights" -- for example, if your cache values have radically different memory footprints -- you may specify a weight function with CacheBuilder.weigher(Weigher) and a maximum cache weight with CacheBuilder.maximumWeight(long). In addition to the same caveats as maximumSize requires, be aware that weights are computed at entry creation time, and are static thereafter.
LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
    .maximumWeight(100000)
    .weigher(new Weigher<Key, Graph>() {
      public int weigh(Key k, Graph g) {
        return g.vertices().size();
      }
    })
    .build(
        new CacheLoader<Key, Graph>() {
          public Graph load(Key key) { // no checked exception
            return createExpensiveGraph(key);
          }
        });
Timed Eviction
CacheBuilder provides two approaches to timed eviction:
- expireAfterAccess(long, TimeUnit) Only expire entries after the specified duration has passed since the entry was last accessed by a read or a write. Note that the order in which entries are evicted will be similar to that of size-based eviction.
- expireAfterWrite(long, TimeUnit) Expire entries after the specified duration has passed since the entry was created, or the most recent replacement of the value. This could be desirable if cached data grows stale after a certain amount of time.
Timed expiration is performed with periodic maintenance during writes and occasionally during reads, as discussed below.
Testing Timed Eviction
Testing timed eviction doesn't have to be painful...and doesn't actually have to take you two seconds to test a two-second expiration. Use the Ticker interface and the CacheBuilder.ticker(Ticker) method to specify a time source in your cache builder, rather than having to wait for the system clock.
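A minimal sketch of such a test follows; the FakeTicker class here is hand-rolled for illustration (guava-testlib ships a similar helper), and Key/Value are placeholder types.
class FakeTicker extends Ticker {
  private final AtomicLong nanos = new AtomicLong();

  @Override
  public long read() {
    return nanos.get();
  }

  void advance(long duration, TimeUnit unit) {
    nanos.addAndGet(unit.toNanos(duration));
  }
}

FakeTicker ticker = new FakeTicker();
Cache<Key, Value> cache = CacheBuilder.newBuilder()
    .expireAfterWrite(2, TimeUnit.SECONDS)
    .ticker(ticker)
    .build();

cache.put(key, value);
ticker.advance(3, TimeUnit.SECONDS); // no real two-second wait needed
// cache.getIfPresent(key) now returns null: the entry has expired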
Reference-based Eviction
Guava allows you to set up your cache to allow the garbage collection of entries, by using weak references for keys or values, and by using soft references for values.
- CacheBuilder.weakKeys() stores keys using weak references. This allows entries to be garbage-collected if there are no other (strong or soft) references to the keys. Since garbage collection depends only on identity equality, this causes the whole cache to use identity (==) equality to compare keys, instead of equals().
- CacheBuilder.weakValues() stores values using weak references. This allows entries to be garbage-collected if there are no other (strong or soft) references to the values. Since garbage collection depends only on identity equality, this causes the whole cache to use identity (==) equality to compare values, instead of equals().
- CacheBuilder.softValues() wraps values in soft references. Softly referenced objects are garbage-collected in a globally least-recently-used manner, in response to memory demand. Because of the performance implications of using soft references, we generally recommend using the more predictable maximum cache size instead. Use of softValues() will cause values to be compared using identity (==) equality instead of equals().
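For example, a cache whose entries can be reclaimed by the garbage collector might be configured roughly like this (Key and Value are placeholder types):
Cache<Key, Value> cache = CacheBuilder.newBuilder()
    .weakKeys()   // keys compared with ==; entries GC-able once the key is otherwise unreachable
    .softValues() // values reclaimed in globally LRU order when memory runs low
    .build();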
Explicit Removals
At any time, you may explicitly invalidate cache entries rather than waiting for entries to be evicted. This can be done:
- individually, using Cache.invalidate(key)
- in bulk, using Cache.invalidateAll(keys)
- to all entries, using Cache.invalidateAll()
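For example (key, k1 and k2 are placeholder keys):
cache.invalidate(key);                         // discard a single entry
cache.invalidateAll(ImmutableList.of(k1, k2)); // discard the given keys
cache.invalidateAll();                         // discard every entry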
Removal Listeners
You may specify a removal listener for your cache to perform some operation when an entry is removed, via CacheBuilder.removalListener(RemovalListener). The RemovalListener gets passed a RemovalNotification, which specifies the RemovalCause, key, and value.
Note that any exceptions thrown by the RemovalListener are logged (using Logger) and swallowed.
CacheLoader<Key, DatabaseConnection> loader = new CacheLoader<Key, DatabaseConnection>() {
  public DatabaseConnection load(Key key) throws Exception {
    return openConnection(key);
  }
};
RemovalListener<Key, DatabaseConnection> removalListener = new RemovalListener<Key, DatabaseConnection>() {
  public void onRemoval(RemovalNotification<Key, DatabaseConnection> removal) {
    DatabaseConnection conn = removal.getValue();
    conn.close(); // tear down properly
  }
};
return CacheBuilder.newBuilder()
    .expireAfterWrite(2, TimeUnit.MINUTES)
    .removalListener(removalListener)
    .build(loader);
Warning: removal listener operations are executed synchronously by default, and since cache maintenance is normally performed during normal cache operations, expensive removal listeners can slow down normal cache function! If you have an expensive removal listener, use RemovalListeners.asynchronous(RemovalListener, Executor) to decorate a RemovalListener to operate asynchronously.
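A minimal sketch, reusing removalListener and loader from the example above and assuming we own the executor:
Executor listenerExecutor = Executors.newSingleThreadExecutor();
return CacheBuilder.newBuilder()
    .expireAfterWrite(2, TimeUnit.MINUTES)
    // the expensive close() calls now run on listenerExecutor, not on cache threads
    .removalListener(RemovalListeners.asynchronous(removalListener, listenerExecutor))
    .build(loader);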
When Does Cleanup Happen?
Caches built with CacheBuilder do not perform cleanup and evict values "automatically," or instantly after a value expires, or anything of the sort. Instead, they perform small amounts of maintenance during write operations, or during occasional read operations if writes are rare.
The reason for this is as follows: if we wanted to perform Cache maintenance continuously, we would need to create a thread, and its operations would be competing with user operations for shared locks. Additionally, some environments restrict the creation of threads, which would make CacheBuilder unusable in that environment.
Instead, we put the choice in your hands. If your cache is high-throughput, then you don't have to worry about performing cache maintenance to clean up expired entries and the like. If your cache does writes only rarely and you don't want cleanup to block cache reads, you may wish to create your own maintenance thread that calls Cache.cleanUp() at regular intervals.
If you want to schedule regular cache maintenance for a cache which only rarely has writes, just schedule the maintenance using ScheduledExecutorService.
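A minimal sketch, assuming cache is the Cache instance in scope and a one-minute interval is acceptable:
ScheduledExecutorService maintenanceExecutor = Executors.newSingleThreadScheduledExecutor();
maintenanceExecutor.scheduleAtFixedRate(new Runnable() {
  @Override
  public void run() {
    cache.cleanUp(); // performs any pending eviction and expiration work
  }
}, 1, 1, TimeUnit.MINUTES);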
Refresh
Refreshing is not quite the same as eviction. As specified in LoadingCache.refresh(K), refreshing a key loads a new value for the key, possibly asynchronously. The old value (if any) is still returned while the key is being refreshed, in contrast to eviction, which forces retrievals to wait until the value is loaded anew.
If an exception is thrown while refreshing, the old value is kept, and the exception is logged and swallowed.
A CacheLoader may specify smart behavior to use on a refresh by overriding CacheLoader.reload(K, V), which allows you to use the old value in computing the new value.
// Some keys don't need refreshing, and we want refreshes to be done asynchronously.
LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
    .maximumSize(1000)
    .refreshAfterWrite(1, TimeUnit.MINUTES)
    .build(
        new CacheLoader<Key, Graph>() {
          public Graph load(Key key) { // no checked exception
            return getGraphFromDatabase(key);
          }

          public ListenableFuture<Graph> reload(final Key key, Graph prevGraph) {
            if (neverNeedsRefresh(key)) {
              return Futures.immediateFuture(prevGraph);
            } else {
              // asynchronous!
              ListenableFutureTask<Graph> task = ListenableFutureTask.create(new Callable<Graph>() {
                public Graph call() {
                  return getGraphFromDatabase(key);
                }
              });
              executor.execute(task);
              return task;
            }
          }
        });
Automatically timed refreshing can be added to a cache using CacheBuilder.refreshAfterWrite(long, TimeUnit). In contrast to expireAfterWrite, refreshAfterWrite will make a key eligible for refresh after the specified duration, but a refresh will only be actually initiated when the entry is queried. (If CacheLoader.reload is implemented to be asynchronous, then the query will not be slowed down by the refresh.) So, for example, you can specify both refreshAfterWrite and expireAfterWrite on the same cache, so that the expiration timer on an entry isn't blindly reset whenever an entry becomes eligible for a refresh; if an entry isn't queried after it becomes eligible for refreshing, it is allowed to expire.
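A minimal sketch of combining the two settings (loader stands for any CacheLoader<Key, Graph>, and the durations are illustrative):
LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
    .refreshAfterWrite(1, TimeUnit.MINUTES)  // entries become refreshable after one minute
    .expireAfterWrite(10, TimeUnit.MINUTES)  // entries that are never queried still expire
    .build(loader);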
Features
Statistics
By using CacheBuilder.recordStats(), you can turn on statistics collection for Guava caches. The Cache.stats() method returns a CacheStats object, which provides statistics such as
- hitRate(), which returns the ratio of hits to requests
- averageLoadPenalty(), the average time spent loading new values, in nanoseconds
- evictionCount(), the number of cache evictions
and many more statistics besides. These statistics are critical in cache tuning, and we advise keeping an eye on these statistics in performance-critical applications.
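A minimal sketch of recording and reading statistics (loader stands for any CacheLoader<Key, Graph>):
LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
    .recordStats()
    .maximumSize(1000)
    .build(loader);
...
CacheStats stats = graphs.stats();
double hitRate = stats.hitRate();                // fraction of requests served from the cache
double loadPenalty = stats.averageLoadPenalty(); // average time spent loading new values, in nanoseconds
long evictions = stats.evictionCount();          // number of entries evicted so far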
asMap
You can view any Cache as a ConcurrentMap using its asMap view, but how the asMap view interacts with the Cache requires some explanation.
- cache.asMap() contains all entries that are currently loaded in the cache. So, for example, cache.asMap().keySet() contains all the currently loaded keys.
- asMap().get(key) is essentially equivalent to cache.getIfPresent(key), and never causes values to be loaded. This is consistent with the Map contract.
- Access time is reset by all cache read and write operations (including Cache.asMap().get(Object) and Cache.asMap().put(K, V)), but not by containsKey(Object), nor by operations on the collection-views of Cache.asMap(). So, for example, iterating through cache.entrySet() does not reset access time for the entries you retrieve.
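A short sketch of the view (graphs is a LoadingCache<Key, Graph> as in the earlier examples):
ConcurrentMap<Key, Graph> map = graphs.asMap();
Set<Key> loadedKeys = map.keySet();   // only the keys currently loaded
Graph g = map.get(key);               // like getIfPresent(key): never triggers loading
boolean known = map.containsKey(key); // does not reset the entry's access time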
Interruption
Loading methods (like get) never throw InterruptedException. We could have designed these methods to support InterruptedException, but our support would have been incomplete, forcing its costs on all users but its benefits on only some. For details, read on.
get calls that request uncached values fall into two broad categories: those that load the value and those that await another thread's in-progress load. The two differ in our ability to support interruption. The easy case is waiting for another thread's in-progress load: Here we could enter an interruptible wait. The hard case is loading the value ourselves. Here we're at the mercy of the user-supplied CacheLoader. If it happens to support interruption, we can support interruption; if not, we can't.
So why not support interruption when the supplied CacheLoader does? In a sense, we do (but see below): If the CacheLoader throws InterruptedException, all get calls for the key will return promptly (just as with any other exception). Plus, get will restore the interrupt bit in the loading thread. The surprising part is that the InterruptedException is wrapped in an ExecutionException.
In principle, we could unwrap this exception for you. However, this forces all LoadingCache users to handle InterruptedException, even though the majority of CacheLoader implementations never throw it. Maybe that's still worthwhile when you consider that all non-loading threads' waits could still be interrupted. But many caches are used only in a single thread. Their users must still catch the impossible InterruptedException. And even those users who share their caches across threads will be able to interrupt their get calls only sometimes, based on which thread happens to make a request first.
Our guiding principle in this decision is for the cache to behave as though all values are loaded in the calling thread. This principle makes it easy to introduce caching into code that previously recomputed its values on each call. And if the old code wasn't interruptible, then it's probably OK for the new code not to be, either.
I said that we support interruption "in a sense." There's another sense in which we don't, making LoadingCache a leaky abstraction. If the loading thread is interrupted, we treat this much like any other exception. That's fine in many cases, but it's not the right thing when multiple get calls are waiting for the value. Although the operation that happened to be computing the value was interrupted, the other operations that need the value might not have been. Yet all of these callers receive the InterruptedException (wrapped in an ExecutionException), even though the load didn't so much "fail" as "abort." The right behavior would be for one of the remaining threads to retry the load. We have a bug filed for this. However, a fix could be risky. Instead of fixing the problem, we may put additional effort into a proposed AsyncLoadingCache, which would return Future objects with correct interruption behavior.
Test code
/*
 * Copyright (c) 2013 Qunar.com. All Rights Reserved.
 */
import java.io.IOException;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.*;
import com.google.common.collect.ImmutableList;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;

/**
 * @author zhenwei.liu created on 2013-9-9 6:09 PM
 * @version $Id$
 */
public class Test {

    public static void main(String[] args) throws ExecutionException, IOException, InterruptedException {
        LoadingCache<String, Integer> caches = CacheBuilder.newBuilder()
                // Maximum number of entries; least-recently-used entries may be evicted
                // before the limit is reached, typically as the size approaches the limit
                // .maximumSize(1000)
                // Maximum total weight; entries may likewise be evicted before the limit is reached
                // .maximumWeight(100000)
                // Function used to compute the weight of each entry
                // .weigher(new Weigher<String, Integer>() {
                //     @Override
                //     public int weigh(String key, Integer value) {
                //         return value + key.length();
                //     }
                // })
                /*
                 * Expiration time after a value is written or replaced (e.g. 2s). Note that an entry
                 * is not removed the moment it expires; it only enters the expired state. Expired
                 * entries can no longer be read, but they stay in the cache until some cache
                 * operation (get, put, refresh, ...) actually triggers their removal.
                 */
                .expireAfterWrite(2, TimeUnit.SECONDS)
                // Removal listener
                .removalListener(new RemovalListener<String, Integer>() {
                    /**
                     * Prints each removed entry. Note that merely expiring does not trigger
                     * onRemoval; the listener runs synchronously and serially, so its output
                     * may appear at any position in the program output.
                     *
                     * @param notification
                     */
                    @Override
                    public void onRemoval(RemovalNotification<String, Integer> notification) {
                        System.err.println("remove: " + notification.getKey() + " & " + notification.getValue());
                    }
                })
                // Periodic refresh
                // .refreshAfterWrite(2, TimeUnit.SECONDS)
                // Ticker for testing expiration
                // .ticker(new Ticker() {
                //     @Override
                //     public long read() {
                //         return 30 * 1000 * 1000 * 1000;
                //     }
                // })
                // Default value used when a key is missing
                .build(new CacheLoader<String, Integer>() {
                    /**
                     * If the key is present, a refresh returns the reloaded value;
                     * if the key is absent, load is used to cache a new value.
                     *
                     * @param key
                     * @param oldValue
                     * @return
                     * @throws Exception
                     */
                    @Override
                    public ListenableFuture<Integer> reload(String key, Integer oldValue) throws Exception {
                        return Futures.immediateFuture(oldValue + 1);
                    }

                    /**
                     * When get cannot find a value for the key, this default value is returned and cached.
                     *
                     * @param key
                     * @return
                     * @throws Exception
                     */
                    @Override
                    public Integer load(String key) throws Exception {
                        return 10086;
                    }
                });

        /*
         * put() get() getAll()
         */
        caches.put("k1", 999);
        System.out.println("k1 val: " + caches.get("k1"));
        System.out.println("k2 val" + caches.get("k2"));
        // When get cannot find a value, call() is invoked to produce a new value, which is then cached
        System.out.println(caches.get("k3", new Callable<Integer>() {
            @Override
            public Integer call() throws Exception {
                return 10010;
            }
        }));
        System.out.println("get all: " + caches.getAll(ImmutableList.of("k1", "k2", "k3", "k4")));

        // asMap
        ConcurrentMap<String, Integer> cacheMap = caches.asMap();
        // put() overwrites k1, triggering onRemoval()
        cacheMap.put("k1", 888);
        System.out.println("map associated with the cache : map val | caches val: " + cacheMap.get("k1") + " | " + caches.get("k1"));
        System.out.println("map won't create value automatically: " + cacheMap.get("k5"));

        caches.put("k5", 10);
        caches.put("k6", 100);
        System.out.println("before triggering the eviction : " + caches.asMap());

        // Sleep 3s so the entries become expired (and refreshable).
        // This only marks them expired: they can no longer be read, but they still sit in the cache.
        System.out.println("----- start sleeping -----");
        TimeUnit.SECONDS.sleep(3);
        System.out.println("----- end sleeping -----");

        // The original entries have expired
        System.out.println("after triggering the eviction: " + caches.asMap());

        /*
         * Any access to the cache triggers the actual removal of expired entries.
         * Eviction cleans one LocalCache$Segment at a time, so here k4, k5 and k6
         * are removed because they live in the same segment. The refresh, put or get
         * below produces:
         * remove: k4 & 10086
         * remove: k5 & 10
         * remove: k6 & 100
         */
        // caches.refresh("k8");
        // caches.put("k8", 123);
        System.out.println(caches.get("k8"));
        System.out.println("after refresh 'k000' manually: " + caches.asMap());
    }
}
Output
k1 val: 999
k2 val10086
10010
get all: {k1=999, k2=10086, k3=10010, k4=10086}
map associated with the cache : map val | caches val: 888 | 888
map won't create value automatically: null
remove: k1 & 999
before triggering the eviction : {k6=100, k4=10086, k5=10, k3=10010, k1=888, k2=10086}
----- start sleeping -----
remove: k4 & 10086
remove: k5 & 10
remove: k6 & 100
----- end sleeping -----
after triggering the eviction: {}
10086
after refresh 'k000' manually: {k8=10086}