Managing Cloud Native Applications
Cloud native applications are designed to be maintained by infrastructure. As we’ve shown in the previous chapters, cloud native infrastructure is designed to be maintained by applications.
With traditional infrastructure, the majority of the work to schedule, maintain, and upgrade applications is done by humans. This can include manually running services on individual hosts or defining a snapshot of infrastructure and applications in automation tools.
But if infrastructure can be managed by applications and at the same time manage the applications, then infrastructure tooling becomes just another application. The responsibilities engineers have with infrastructure can be expressed in the reconciler pattern and built into applications that run on that infrastructure.
We just spent the past three chapters explaining how we build applications that can manage infrastructure. This chapter will address how to run those applications, or any application, on the infrastructure.
As discussed earlier, it’s important to keep your infrastructure and applications simple. A common way to manage complexity in applications is to break them apart into small, understandable components. We usually accomplish this by creating single-purpose services, or breaking out code into a series of event-triggered functions.
The proliferation of smaller, deployable units can be overwhelming for even the most automated infrastructure. The only way to manage a large number of applications is to have them take on the operational functionality described in Chapter 1. The applications need to become cloud native before they can be managed at scale.
This chapter will not help you build the next great app, but it should give you some starting points in making your applications work well when running on cloud native infrastructure.
Application Design
There are many books that discuss how applications should be architected. This book is not intended to be one of them. However, it is still important to understand how application architecture influences effective infrastructure designs.
As we discussed in Chapter 1, we are going to assume the applications are designed to be cloud native because they gain the most benefits from cloud native infrastructure. Fundamentally, cloud native means the applications are designed to be managed by software, not humans.
The design of an application is a separate consideration from how it is packaged. Applications can be cloud native and packaged as RPM or DEB files and deployed to VMs instead of containers. They can be monolithic or microservices, written in Java or Go.
These implementation details do not make an application designed to run in the cloud.
As an example, let’s pretend we have an application written in Go and packaged in a container. We can even say the container runs on Kubernetes and would be considered a microservice by whatever definition you choose.
Is this pretend application “cloud native”?
What if the application logs all activity to a file and hardcodes the database IP address? Maybe it doesn’t accept runtime configuration and stores state on the local disk. What if it doesn’t exit in a predictable manner, or hangs and waits for a human to debug it?
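To make the contrast concrete, here is a minimal sketch (not from the book) of what accepting runtime configuration and logging to standard output can look like in Go. The DATABASE_ADDR variable name and the port are assumptions chosen for illustration.

```go
package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	// Read the database address from the environment at startup instead of
	// hardcoding an IP address. DATABASE_ADDR is a hypothetical variable name.
	dbAddr := os.Getenv("DATABASE_ADDR")
	if dbAddr == "" {
		log.Fatal("DATABASE_ADDR is not set; refusing to start")
	}

	// Log to stdout so the infrastructure can collect the output,
	// rather than writing to a file on local disk.
	log.SetOutput(os.Stdout)
	log.Printf("connecting to database at %s", dbAddr)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok\n"))
	})

	// Exit predictably if the listener fails instead of hanging.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```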
This pretend application may appear cloud native from the chosen language and packaging, but it is most definitely not. A framework such as Kubernetes can help manage this application through various features, but even if you’re able to make it run, the application is clearly designed to be maintained and run by humans.
Some of the features that make an application run better on cloud native infrastructure are explained in more detail in Chapter 1. Even with the features prescribed there, there is still another consideration for applications: how do we effectively manage them?
Implementing Cloud Native Patterns
Patterns such as resiliency, service discovery, configuration, logging, health checks, and metrics can all be implemented in applications in different ways. A common practice for implementing these patterns is through standard language libraries imported into the applications. Netflix OSS and Twitter’s Finagle are very good examples of implementing these features in Java language libraries.
With this model, an application can import the library and automatically get many of these features without extra code. This makes a lot of sense when there are few supported languages within an organization, and it allows best practices to be the easy thing to do.
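As a rough illustration of what such libraries wire in for you, the following sketch implements two of the patterns, a health check endpoint and a request counter metric, directly in an application using only the Go standard library. This is an illustrative assumption, not how Netflix OSS or Finagle work; a shared library would register this kind of instrumentation on the application’s behalf.

```go
package main

import (
	"expvar"
	"net/http"
)

func main() {
	// A request counter exposed at /debug/vars via the expvar package;
	// a shared library would typically register metrics like this for you.
	requests := expvar.NewInt("http_requests_total")

	// A liveness endpoint the platform can probe.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("ok\n"))
	})

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requests.Add(1)
		w.Write([]byte("hello\n"))
	})

	http.ListenAndServe(":8080", nil)
}
```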
When organizations start implementing microservices, they tend to drift toward polyglot services. This allows for freedom to choose the right language for each service, but makes it very difficult to maintain libraries for all the languages.
Another way to get some of these features is through what is known as the “sidecar” pattern. This pattern bundles the application with separate processes that implement the various management features. It is often implemented as a separate container, but you can also implement it by running another daemon on a VM. Examples of sidecars include the following:
- Envoy proxy: Adds resiliency and metrics to services
- Registrator: Registers services with an external service discovery
- Configuration: Watches for configuration changes and notifies the service process to reload
- Health endpoint: Provides HTTP endpoints for checking the health of the application (a minimal sketch follows this list)
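The sketch below shows what a health endpoint sidecar could look like: a small process that runs next to the application (in the same pod or on the same VM), probes it over localhost, and exposes the result on its own port. It is an assumption made for illustration, not a real project; the ports and paths are made up.

```go
// A minimal sketch of a health endpoint sidecar. It probes the co-located
// application over localhost and exposes /health on a separate port, so the
// application itself does not need to know about the platform's health checks.
package main

import (
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}

	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		// Hypothetical application port; adjust for the real service.
		resp, err := client.Get("http://127.0.0.1:8080/")
		if err != nil {
			http.Error(w, "unhealthy", http.StatusServiceUnavailable)
			return
		}
		resp.Body.Close()
		if resp.StatusCode >= 500 {
			http.Error(w, "unhealthy", http.StatusServiceUnavailable)
			return
		}
		w.Write([]byte("ok\n"))
	})

	// The sidecar listens on its own port, separate from the application.
	http.ListenAndServe(":9090", nil)
}
```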
Sidecar containers can even be used to adapt polyglot containers, exposing the language-specific endpoints needed to interact with applications that use the standard libraries. Prana from Netflix does just that for applications that don’t use their standard Java library.
Sidecar patterns make sense when centralized teams manage specific sidecar processes. If an engineer wants to expose metrics in their service, they can build that into the application, or a separate team can provide a sidecar that processes the logging output and exposes the calculated metrics for them.
In both cases, the service can have functionality added with less effort than rewriting the application. Once the ability to manage applications with software is available, let’s look at how the application’s life cycle should be managed.