Fetching Services: the Server-Side Flow
Let's start with the diagram below. It gives a rough picture of what happens on the Eureka Server when a Eureka Client calls the server's HTTP endpoint for fetching services. The diagram also includes the rough service-registration flow, because registration and service fetching share some related pieces, so the two flows are drawn together.
Eureka's Two-Level Cache
Let's first look at the structure of Eureka's two-level cache:
// level-1 cache: the read-only cache
private final ConcurrentMap<Key, Value> readOnlyCacheMap = new ConcurrentHashMap<Key, Value>();
// level-2 cache: the read-write cache
private final LoadingCache<Key, Value> readWriteCacheMap;
As you can see above, the level-1 cache is a plain JDK ConcurrentHashMap, while the level-2 cache is a LoadingCache from Google's Guava library. Next, let's look at how the level-2 cache readWriteCacheMap is initialized:
this.readWriteCacheMap =
CacheBuilder.newBuilder().initialCapacity(1000)
.expireAfterWrite(serverConfig.getResponseCacheAutoExpirationInSeconds(), TimeUnit.SECONDS)
.removalListener(new RemovalListener<Key, Value>() {
@Override
public void onRemoval(RemovalNotification<Key, Value> notification) {
Key removedKey = notification.getKey();
if (removedKey.hasRegions()) {
Key cloneWithNoRegions = removedKey.cloneWithoutRegions();
regionSpecificKeys.remove(cloneWithNoRegions, removedKey);
}
}
})
.build(new CacheLoader<Key, Value>() {
@Override
public Value load(Key key) throws Exception {
if (key.hasRegions()) {
Key cloneWithNoRegions = key.cloneWithoutRegions();
regionSpecificKeys.put(cloneWithNoRegions, key);
}
Value value = generatePayload(key);
return value;
}
});
If you are not familiar with Guava's LoadingCache, it is worth reading up on it. In short, the code above configures cache entries to expire automatically 180 seconds after they are written (the default of responseCacheAutoExpirationInSeconds), and if get() is called for a key that has no cached entry, the load() method is invoked to build the value for that key.
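To make that behavior concrete, here is a minimal, standalone Guava sketch; the 180-second TTL and the String key/value types are illustrative stand-ins, not Eureka's actual Key/Value classes:
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

import java.util.concurrent.TimeUnit;

public class LoadingCacheDemo {
    public static void main(String[] args) throws Exception {
        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .initialCapacity(1000)
                // entries silently expire 180 seconds after they were written
                .expireAfterWrite(180, TimeUnit.SECONDS)
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        // called only on a cache miss
                        System.out.println("loading value for " + key);
                        return "payload-for-" + key;
                    }
                });

        // first get(): miss -> load() runs and the result is cached
        System.out.println(cache.get("ALL_APPS"));
        // second get(): hit -> load() is NOT called again
        System.out.println(cache.get("ALL_APPS"));
    }
}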
Server-Side Source Code for Fetching Services
The fetch flow is just as described at the top of this article: the server first checks the read-only (level-1) cache (provided the read-only cache has not been disabled in the configuration). If the read-only cache has no entry for the key, it falls back to the read-write (level-2) cache. If the read-write cache misses as well, the LoadingCache's load() method is triggered, which builds the payload from the instance information held in the in-memory registry.
Now let's walk through the whole flow in the source code:
private final ResponseCache responseCache;
@GET
public Response getContainers(@PathParam("version") String version,
@HeaderParam(HEADER_ACCEPT) String acceptHeader,
@HeaderParam(HEADER_ACCEPT_ENCODING) String acceptEncoding,
@HeaderParam(EurekaAccept.HTTP_X_EUREKA_ACCEPT) String eurekaAccept,
@Context UriInfo uriInfo,
@Nullable @QueryParam("regions") String regionsStr) {
boolean isRemoteRegionRequested = null != regionsStr && !regionsStr.isEmpty();
String[] regions = null;
if (!isRemoteRegionRequested) {
EurekaMonitors.GET_ALL.increment();
} else {
regions = regionsStr.toLowerCase().split(",");
Arrays.sort(regions); // So we don't have different caches for same regions queried in different order.
EurekaMonitors.GET_ALL_WITH_REMOTE_REGIONS.increment();
}
// Check if the server allows the access to the registry. The server can
// restrict access if it is not
// ready to serve traffic depending on various reasons.
if (!registry.shouldAllowAccess(isRemoteRegionRequested)) {
return Response.status(Status.FORBIDDEN).build();
}
CurrentRequestVersion.set(Version.toEnum(version));
KeyType keyType = Key.KeyType.JSON;
String returnMediaType = MediaType.APPLICATION_JSON;
if (acceptHeader == null || !acceptHeader.contains(HEADER_JSON_VALUE)) {
keyType = Key.KeyType.XML;
returnMediaType = MediaType.APPLICATION_XML;
}
Key cacheKey = new Key(Key.EntityType.Application,
ResponseCacheImpl.ALL_APPS,
keyType, CurrentRequestVersion.get(), EurekaAccept.fromString(eurekaAccept), regions
);
Response response;
if (acceptEncoding != null && acceptEncoding.contains(HEADER_GZIP_VALUE)) {
// fetch the gzip-compressed cached payload
response = Response.ok(responseCache.getGZIP(cacheKey))
.header(HEADER_CONTENT_ENCODING, HEADER_GZIP_VALUE)
.header(HEADER_CONTENT_TYPE, returnMediaType)
.build();
} else {
// fetch the cached payload
response = Response.ok(responseCache.get(cacheKey))
.build();
}
return response;
}
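For reference, this handler is what a client hits when it fetches the registry. Below is a hedged illustration of such a request using Java's built-in HttpClient; the address http://localhost:8761 and the /eureka/apps path are assumptions based on a default standalone setup. Note how Accept drives the JSON/XML cache key type and Accept-Encoding: gzip selects the getGZIP() branch:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FetchAppsDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                        // assumed address of a locally running Eureka Server
                        URI.create("http://localhost:8761/eureka/apps"))
                // JSON Accept header -> KeyType.JSON cache key on the server
                .header("Accept", "application/json")
                // gzip Accept-Encoding -> the responseCache.getGZIP(cacheKey) branch
                .header("Accept-Encoding", "gzip")
                .GET()
                .build();
        // the body arrives gzip-compressed; HttpClient does not decompress it for you
        HttpResponse<byte[]> response =
                client.send(request, HttpResponse.BodyHandlers.ofByteArray());
        System.out.println("status=" + response.statusCode()
                + ", compressed bytes=" + response.body().length);
    }
}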
The difference between responseCache.getGZIP(cacheKey) and responseCache.get(cacheKey) is that getGZIP returns the gzip-compressed instance payload; both methods ultimately call getValue(), as shown below:
public byte[] getGZIP(Key key) {
Value payload = getValue(key, shouldUseReadOnlyResponseCache);
if (payload == null) {
return null;
}
return payload.getGzipped();
}
public String get(final Key key) {
return get(key, shouldUseReadOnlyResponseCache);
}
@VisibleForTesting
String get(final Key key, boolean useReadOnlyCache) {
Value payload = getValue(key, useReadOnlyCache);
if (payload == null || payload.getPayload().equals(EMPTY_PAYLOAD)) {
return null;
} else {
return payload.getPayload();
}
}
Let's now look at the getValue method:
// read-only cache, backed by a ConcurrentHashMap
private final ConcurrentMap<Key, Value> readOnlyCacheMap = new ConcurrentHashMap<Key, Value>();
// read-write cache, backed by Guava's LoadingCache
private final LoadingCache<Key, Value> readWriteCacheMap;
@VisibleForTesting
Value getValue(final Key key, boolean useReadOnlyCache) {
Value payload = null;
try {
if (useReadOnlyCache) {
final Value currentPayload = readOnlyCacheMap.get(key);
if (currentPayload != null) {
payload = currentPayload;
} else {
payload = readWriteCacheMap.get(key);
readOnlyCacheMap.put(key, payload);
}
} else {
payload = readWriteCacheMap.get(key);
}
} catch (Throwable t) {
logger.error("Cannot get value for key : {}", key, t);
}
return payload;
}
As you can see, fetching instance data starts with a check of whether the read-only cache is enabled. If it is not, the value is read directly from the read-write cache. If it is enabled, the server first tries the read-only cache; on a miss it reads from the read-write cache and then puts the result into the read-only cache.
Note: if readWriteCacheMap.get(key) finds nothing in the underlying LoadingCache, it invokes the load() method to build the value for that key, and that value is what gets returned.
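The difference between a lookup that can trigger load() and one that cannot is easy to miss, so here is a tiny, hypothetical Guava snippet (not Eureka code) that makes it explicit:
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class GetVsGetIfPresent {
    public static void main(String[] args) throws Exception {
        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        System.out.println("load() called for " + key);
                        return "value-for-" + key;
                    }
                });

        // getIfPresent() never triggers load(): a miss simply returns null
        System.out.println(cache.getIfPresent("app")); // null, no load()
        // get() triggers load() on a miss, just like readWriteCacheMap.get(key)
        System.out.println(cache.get("app"));          // load() runs, value returned
    }
}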
Next, let's see how the read-only cache is kept in sync with the read-write cache:
ResponseCacheImpl(EurekaServerConfig serverConfig, ServerCodecs serverCodecs, AbstractInstanceRegistry registry) {
this.serverConfig = serverConfig;
this.serverCodecs = serverCodecs;
// whether the read-only cache is enabled
this.shouldUseReadOnlyResponseCache = serverConfig.shouldUseReadOnlyResponseCache();
this.registry = registry;
// interval for refreshing the read-only cache (driven by a timer, every 30 seconds by default)
long responseCacheUpdateIntervalMs = serverConfig.getResponseCacheUpdateIntervalMs();
// build the read-write cache; entries expire 180 seconds after write by default
this.readWriteCacheMap =
CacheBuilder.newBuilder().initialCapacity(1000)
.expireAfterWrite(serverConfig.getResponseCacheAutoExpirationInSeconds(), TimeUnit.SECONDS)
.removalListener(new RemovalListener<Key, Value>() {
@Override
public void onRemoval(RemovalNotification<Key, Value> notification) {
Key removedKey = notification.getKey();
if (removedKey.hasRegions()) {
Key cloneWithNoRegions = removedKey.cloneWithoutRegions();
regionSpecificKeys.remove(cloneWithNoRegions, removedKey);
}
}
})
// cache loader: on a miss, load() is invoked to build the value, which is cached and returned
.build(new CacheLoader<Key, Value>() {
@Override
public Value load(Key key) throws Exception {
if (key.hasRegions()) {
Key cloneWithNoRegions = key.cloneWithoutRegions();
regionSpecificKeys.put(cloneWithNoRegions, key);
}
Value value = generatePayload(key);
return value;
}
});
// if the read-only cache is enabled, start a timer task that copies data from readWriteCacheMap into readOnlyCacheMap
if (shouldUseReadOnlyResponseCache) {
timer.schedule(getCacheUpdateTask(),
new Date(((System.currentTimeMillis() / responseCacheUpdateIntervalMs) * responseCacheUpdateIntervalMs)
+ responseCacheUpdateIntervalMs),
responseCacheUpdateIntervalMs);
}
try {
Monitors.registerObject(this);
} catch (Throwable e) {
logger.warn("Cannot register the JMX monitor for the InstanceRegistry", e);
}
}
// copy data from readWriteCacheMap into readOnlyCacheMap
private TimerTask getCacheUpdateTask() {
return new TimerTask() {
@Override
public void run() {
logger.debug("Updating the client cache from response cache");
for (Key key : readOnlyCacheMap.keySet()) {
if (logger.isDebugEnabled()) {
logger.debug("Updating the client cache from response cache for key : {} {} {} {}",
key.getEntityType(), key.getName(), key.getVersion(), key.getType());
}
try {
CurrentRequestVersion.set(key.getVersion());
Value cacheValue = readWriteCacheMap.get(key);
Value currentCacheValue = readOnlyCacheMap.get(key);
if (cacheValue != currentCacheValue) {
readOnlyCacheMap.put(key, cacheValue);
}
} catch (Throwable th) {
logger.error("Error while updating the client cache from response cache for key {}", key.toStringCompact(), th);
}
}
}
};
}
As the code shows, the server keeps the two caches fresh in two different ways. The read-only cache is refreshed by a scheduled task built on java.util.Timer: every 30 seconds (by default) it reads the current values from the read-write cache and uses them to update the entries in the read-only cache. The read-write cache itself is Google Guava's LoadingCache, whose entries simply expire 180 seconds after they are written.
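To tie the whole mechanism together, here is a simplified, self-contained sketch of the same two-level pattern. The class and method names are illustrative, the payload is a toy String rather than Eureka's Value type, and the 30-second refresh and 180-second TTL mirror the defaults discussed above:
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

import java.util.Map;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class TwoLevelCacheSketch {

    // level-1: read-only cache, a plain ConcurrentHashMap
    private final Map<String, String> readOnlyCache = new ConcurrentHashMap<>();

    // level-2: read-write cache with a 180-second TTL and a loader as fallback
    private final LoadingCache<String, String> readWriteCache = CacheBuilder.newBuilder()
            .expireAfterWrite(180, TimeUnit.SECONDS)
            .build(new CacheLoader<String, String>() {
                @Override
                public String load(String key) {
                    // stand-in for generatePayload(key): build from the in-memory registry
                    return "payload-for-" + key + "@" + System.currentTimeMillis();
                }
            });

    public TwoLevelCacheSketch() {
        // every 30 seconds, refresh the read-only cache from the read-write cache
        new Timer("readonly-cache-refresh", true).schedule(new TimerTask() {
            @Override
            public void run() {
                for (String key : readOnlyCache.keySet()) {
                    try {
                        String fresh = readWriteCache.get(key);
                        if (!fresh.equals(readOnlyCache.get(key))) {
                            readOnlyCache.put(key, fresh);
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        }, 30_000, 30_000);
    }

    // lookup order mirrors getValue(): read-only first, then read-write (which may load)
    public String get(String key) throws Exception {
        String value = readOnlyCache.get(key);
        if (value == null) {
            value = readWriteCache.get(key);
            readOnlyCache.put(key, value);
        }
        return value;
    }
}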