Zircon
Source: https://github.com/zhangpf/fuchsia-docs-zh-CN/tree/master/docs/the-book
Mirror: https://hexang.org/mirrors/fuchsia.git
Fuchsia is not Linux
A modular, capability-based operating system
This document is a collection of articles describing the Fuchsia operating system, organized around particular subsystems. Individual sections will be populated over time.
Table of Contents
Zircon Kernel
Zircon is the microkernel that underlies the rest of Fuchsia. Zircon also provides core drivers and Fuchsia's libc implementation.
Zircon Core
- Device Manager & Device Hosts
- Device Driver Development (DDK)
- C Library (libc)
- POSIX I/O (libfdio)
- Process Creation/ELF Loading (liblaunchpad)
Framework
Storage
Networking
Graphics
Media
- Audio
- Video
- Digital Rights Management (DRM)
Intelligence
- Context
- Agent Framework
- Suggestions
User Interface
- Device, user, and story shells
- Stories and modules
Backwards Compatibility
- POSIX Lite (what subset of POSIX we support and why)
Update and Recovery
- Verified boot
- Updater
C++ in Zircon
A subset of the C++14 language is used in the Zircon tree. This includes both the upper layers of the kernel (above the lk layer) and some userspace code. In particular, Zircon does not use the C++ standard library, and many language features are not used or allowed.
Language features
- Not allowed
  - Exceptions
  - RTTI and `dynamic_cast`
  - Operator overloading
  - Default parameters
  - Virtual inheritance
  - Statically constructed objects
  - Trailing return type syntax
    - Exception: when necessary for lambdas with otherwise unutterable return types
  - Initializer lists
  - `thread_local` in kernel code
- Allowed
  - Pure interface inheritance
  - Lambdas
  - `constexpr`
  - `nullptr`
  - `enum class`es
  - `template`s
  - Plain old classes
  - `auto`
  - Multiple implementation inheritance
    - But be judicious. This is used widely for e.g. intrusive container mixins.
- Needs more ruling TODO(cpu)
  - Global constructors
    - Currently we have these for global data structures.
Quick Start Guide
Checking out the Zircon code
Note: the Fuchsia source also includes Zircon; see Fuchsia's Getting Started guide. Follow this document only if you intend to work on Zircon by itself.
The Zircon git repository is located at: https://fuchsia.googlesource.com/zircon (translator's note: a GitHub mirror is also available).
Assuming the $SRC variable is set in your environment (translator's note: i.e., the Fuchsia working directory), clone the Zircon repository locally:
git clone https://fuchsia.googlesource.com/zircon $SRC/zircon
# or
git clone https://github.com/fuchsia-mirror/zircon $SRC/zircon
For the rest of this document, we assume Zircon has been checked out into the $SRC/zircon directory, and that the toolchains, QEMU, and so on will be built under $SRC. Several make commands below are invoked with the -j32 option; if that level of parallelism is too much for your machine, try -j16 or -j8.
Preparing the build environment
Ubuntu
On Ubuntu, the following command installs the required dependencies:
sudo apt-get install texinfo libglib2.0-dev autoconf libtool libsdl-dev build-essential
macOS
Install the Xcode Command Line Tools:
xcode-select --install
Install the other dependencies:
- Using Homebrew:
brew install wget pkg-config glib autoconf automake libtool
- Using MacPorts:
port install autoconf automake libtool libpixman pkgconfig glib2
Installing the toolchains
If you are developing on Linux or macOS, prebuilt toolchains are available for download; just run the following script from your Zircon working directory:
./scripts/download-prebuilt
If you would rather build the toolchains yourself, follow the steps later in this document.
Building Zircon
The build results are placed in $SRC/zircon/build-{arm64,x64}. For a given build target, the $BUILDDIR variable in the examples below refers to that build's output directory.
cd $SRC/zircon
# for aarch64
make -j32 arm64
# for x64
make -j32 x64
Using Clang
If you would like to use Clang as the toolchain to build Zircon, pass USE_CLANG=true when invoking make.
cd $SRC/zircon
# for aarch64
make -j32 USE_CLANG=true arm64
# for x86-64
make -j32 USE_CLANG=true x64
Building Zircon for all target architectures
# The -r option also builds release versions
./scripts/buildall -r
Please build for all target architectures before submitting code changes, to ensure the code works on all architectures.
QEMU
You can skip this step if you are only testing on real hardware, but the emulator makes quick local testing easy, so it is usually worth setting up.
For building and using QEMU with Zircon, see the corresponding documentation.
Building the toolchains (optional)
If the prebuilt toolchains do not work for your system, there are also scripts to download and build suitable gcc toolchains for building Zircon on ARM64 and x86-64:
cd $SRC
git clone https://fuchsia.googlesource.com/third_party/gcc_none_toolchains toolchains
cd toolchains
./do-build --target arm-none
./do-build --target aarch64-none
./do-build --target x86_64-none
Adding the toolchains to your PATH
If you are using the prebuilt toolchains, the build will find them automatically, so you can skip this step.
# on Linux
export PATH=$PATH:$SRC/toolchains/aarch64-elf-5.3.0-Linux-x86_64/bin
export PATH=$PATH:$SRC/toolchains/x86_64-elf-5.3.0-Linux-x86_64/bin
# on Mac
export PATH=$PATH:$SRC/toolchains/aarch64-elf-5.3.0-Darwin-x86_64/bin
export PATH=$PATH:$SRC/toolchains/x86_64-elf-5.3.0-Darwin-x86_64/bin
Copying files to and from Zircon
With a local IPv6 network configured, you can use the host tool ./build-zircon-ARCH/tools/netcp to copy files.
# copy the file myprogram to Zircon
netcp myprogram :/tmp/myprogram
# copy the file myprogram back from Zircon to the development host
netcp :/tmp/myprogram myprogram
Including additional userspace files
The Zircon build produces a bootfs image containing the userspace components required to boot (the device manager, some device drivers, etc.). In addition, the kernel can accept an extra image in ramdisk form, provided by QEMU or the bootloader.
To create such a bootfs image, use the mkbootfs tool, which is generated as part of the build. mkbootfs can assemble a bootfs image in two ways: from a target directory (in which case every file and subdirectory is included), or from a manifest file that explicitly lists the files to include, one per line:
$BUILDDIR/tools/mkbootfs -o extra.bootfs @/path/to/directory
echo "issue.txt=/etc/issue" > manifest
echo "etc/hosts=/etc/hosts" >> manifest
$BUILDDIR/tools/mkbootfs -o extra.bootfs manifest
Once Zircon has booted, the files in the bootfs image will appear under /boot, so the "hosts" file from the example above would be found at /boot/etc/hosts.
For QEMU, use the -x option of the run-zircon-* scripts to specify an extra bootfs image.
Network booting
Two mechanisms support booting Zircon over the network: Gigaboot and Zirconboot. Gigaboot is an EFI-based bootloader, while Zirconboot allows a minimal Zircon system to serve as a bootloader for Zircon itself.
Both are supported on hardware that can boot via EFI (such as Acer and NUC devices). On other systems, Zirconboot may be the only option for network booting.
Booting with Gigaboot
The GigaBoot20x6-based bootloader uses a simple UDP-over-IPv6 network boot protocol that requires no special server configuration or permissions.
It takes advantage of IPv6 link-local addressing and multicast, allowing the target device to advertise its bootability and the development host to send a boot image to the target.
If you have hardware running GigaBoot20x6 (for example an Intel NUC with a Broadwell or Skylake CPU), first create a bootable USB drive, either by hand or with the script (Linux only). Then run:
$BUILDDIR/tools/bootserver $BUILDDIR/zircon.bin
# If you have an extra bootfs image (see above):
$BUILDDIR/tools/bootserver $BUILDDIR/zircon.bin /path/to/extra.bootfs
By default the bootserver runs forever; whenever it detects a netboot request, it sends the kernel (and the bootfs, if provided) to the requesting device. If you pass the -1 option when starting the bootserver, it will exit after one successful boot instead.
Booting with Zirconboot
Zirconboot is a mechanism that allows a Zircon system to serve as the bootloader for Zircon itself. Zirconboot speaks the same boot protocol as Gigaboot, described above.
To use Zirconboot, pass netsvc.netboot=true on the kernel command line. When Zirconboot starts, it attempts to fetch and boot a Zircon system from a bootserver running on the attached development host.
Viewing logs over the network
By default, the Zircon build includes a network log service that multicasts the system log over UDP on the local IPv6 link. Be aware that this is a quick-and-dirty mechanism, and the protocol will certainly change at some point.
For now, if you run Zircon in QEMU with the -N option, or on hardware with a supported Ethernet interface (an ASIX USB Ethernet adapter, or the Intel Ethernet on a NUC), the loglistener tool will receive these logs over the local network:
$BUILDDIR/tools/loglistener
Debugging
For assorted tips on debugging in the Zircon environment, see the Debugging section.
Contributing changes
- See the Contributing section.
Zircon Driver Development Kit (DDK)
Zircon Device Model
Introduction
In Zircon, device drivers are implemented as ELF shared libraries (DSOs) which are loaded into Device Host (devhost) processes. The Device Manager (devmgr) process contains the Device Coordinator, which keeps track of drivers and devices, manages the discovery of drivers and the creation and direction of Device Host processes, and maintains the Device Filesystem (devfs), the mechanism through which userspace services and applications (constrained by their namespaces) gain access to devices.
The Device Coordinator views devices as part of a single unified tree. The branches (and sub-branches) of that tree consist of some number of devices within a Device Host process. The decision as to how to sub-divide the overall tree among Device Hosts is based on system policy for isolating drivers for security or stability reasons and colocating drivers for performance reasons.
NOTE: The current policy is simple (each device representing a physical bus-master capable hardware device and its children are placed into a separate devhost). It will evolve to provide finer-grained partitioning.
Devices, Drivers, and Device Hosts
Here's a (slightly trimmed for clarity) dump of the tree of devices in Zircon running on Qemu x86-64:
$ dm dump
[root]
<root> pid=1509
[null] pid=1509 /boot/driver/builtin.so
[zero] pid=1509 /boot/driver/builtin.so
[misc]
<misc> pid=1645
[console] pid=1645 /boot/driver/console.so
[dmctl] pid=1645 /boot/driver/dmctl.so
[ptmx] pid=1645 /boot/driver/pty.so
[i8042-keyboard] pid=1645 /boot/driver/pc-ps2.so
[hid-device-001] pid=1645 /boot/driver/hid.so
[i8042-mouse] pid=1645 /boot/driver/pc-ps2.so
[hid-device-002] pid=1645 /boot/driver/hid.so
[sys]
<sys> pid=1416 /boot/driver/bus-acpi.so
[acpi] pid=1416 /boot/driver/bus-acpi.so
[pci] pid=1416 /boot/driver/bus-acpi.so
[00:00:00] pid=1416 /boot/driver/bus-pci.so
[00:01:00] pid=1416 /boot/driver/bus-pci.so
<00:01:00> pid=2015 /boot/driver/bus-pci.proxy.so
[bochs_vbe] pid=2015 /boot/driver/bochs-vbe.so
[framebuffer] pid=2015 /boot/driver/framebuffer.so
[00:02:00] pid=1416 /boot/driver/bus-pci.so
<00:02:00> pid=2052 /boot/driver/bus-pci.proxy.so
[intel-ethernet] pid=2052 /boot/driver/intel-ethernet.so
[ethernet] pid=2052 /boot/driver/ethernet.so
[00:1f:00] pid=1416 /boot/driver/bus-pci.so
[00:1f:02] pid=1416 /boot/driver/bus-pci.so
<00:1f:02> pid=2156 /boot/driver/bus-pci.proxy.so
[ahci] pid=2156 /boot/driver/ahci.so
[00:1f:03] pid=1416 /boot/driver/bus-pci.so
The names in square brackets are devices. The names in angle brackets are proxy devices, which are instantiated in the "lower" devhost when process isolation is being provided. The pid= field indicates the process object id of the devhost process that device is contained within. The path indicates which driver implements that device.
Above, for example, the pid 1416 devhost contains the pci bus driver, which has created devices for each PCI device in the system. PCI device 00:02:00 happens to be an intel ethernet interface, which we have a driver for (intel-ethernet.so). A new devhost (pid 2052) is created, set up with a proxy device for PCI 00:02:00, and the intel ethernet driver is loaded and bound to it.
Proxy devices are invisible within the Device Filesystem, so this ethernet device appears as /dev/sys/pci/00:02:00/intel-ethernet.
Protocols, Interfaces, and Classes
Devices may implement Protocols, which are C ABIs used by child devices to interact with parent devices in a device-specific manner. The PCI Protocol, USB Protocol, Block Core Protocol, and Ethermac Protocol are examples of these. Protocols are usually in-process interactions between devices in the same devhost, but in cases of driver isolation, they may take place via RPC to a "higher" devhost.
Devices may implement Interfaces, which are RPC protocols that clients (services, applications, etc.) use to interact with the device. The base device interface supports POSIX-style open/close/read/write IO. Currently, Interfaces are supported via the ioctl operation in the base device interface. In the future, Fuchsia's interface definition language and bindings (FIDL) will be supported.
In many cases a Protocol is used to allow drivers to be simpler by taking advantage of a common implementation of an Interface. For example, the "block" driver implements the common block interface and binds to devices implementing the Block Core Protocol, and the "ethernet" driver does the same thing for the Ethernet Interface and Ethermac Protocol. Some protocols, such as the two cited here, make use of shared memory and non-RPC signaling to achieve better efficiency, lower latency, and higher throughput than would otherwise be possible.
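To make the "C ABI" nature of a Protocol concrete, below is a minimal sketch of the usual shape: an ops table plus an opaque context pointer. The sample_protocol and do_txn names and the ZX_PROTOCOL_SAMPLE id are hypothetical, not part of the real DDK headers; device_get_protocol() is the call a child device uses to obtain a parent's protocol.
#include <ddk/device.h>
#include <zircon/types.h>

typedef struct sample_protocol_ops {
    // One function pointer per protocol method; ctx is the parent's opaque state.
    zx_status_t (*do_txn)(void* ctx, const void* buf, size_t len);
} sample_protocol_ops_t;

typedef struct sample_protocol {
    sample_protocol_ops_t* ops;
    void* ctx;
} sample_protocol_t;

// A child driver obtains its parent's protocol and calls through it in-process:
static zx_status_t child_do_txn(zx_device_t* parent, const void* buf, size_t len) {
    sample_protocol_t proto;
    zx_status_t status = device_get_protocol(parent, ZX_PROTOCOL_SAMPLE, &proto);
    if (status != ZX_OK) {
        return status;
    }
    return proto.ops->do_txn(proto.ctx, buf, len);
}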
Classes represent a promise that a device implements an Interface or Protocol. Devices exist in the Device Filesystem under a topological path, like /sys/pci/00:02:00/intel-ethernet. If they are a specific class, they also appear as an alias under /dev/class/CLASSNAME/... The intel-ethernet driver implements the Ethermac interface, so it also shows up at /dev/class/ethermac/000. The names within class directories are unique but not meaningful, and are assigned on demand.
NOTE: Currently names in class directories are 3 digit decimal numbers, but they are likely to change form in the future. Clients should not assume there is any specific meaning to a class alias name.
Device Driver Lifecycle
Device drivers are loaded into devhost processes when it is determined they are needed. What determines whether they are loaded is the Binding Program, a description of which devices a driver can bind to. Binding Programs are defined using macros in ddk/binding.h.
An example Binding Program from the Intel Ethernet driver:
ZIRCON_DRIVER_BEGIN(intel_ethernet, intel_ethernet_driver_ops, "zircon", "0.1", 9)
BI_ABORT_IF(NE, BIND_PROTOCOL, ZX_PROTOCOL_PCI),
BI_ABORT_IF(NE, BIND_PCI_VID, 0x8086),
BI_MATCH_IF(EQ, BIND_PCI_DID, 0x100E), // Qemu
BI_MATCH_IF(EQ, BIND_PCI_DID, 0x15A3), // Broadwell
BI_MATCH_IF(EQ, BIND_PCI_DID, 0x1570), // Skylake
BI_MATCH_IF(EQ, BIND_PCI_DID, 0x1533), // I210 standalone
BI_MATCH_IF(EQ, BIND_PCI_DID, 0x15b7), // Skull Canyon NUC
BI_MATCH_IF(EQ, BIND_PCI_DID, 0x15b8), // I219
BI_MATCH_IF(EQ, BIND_PCI_DID, 0x15d8), // Kaby Lake NUC
ZIRCON_DRIVER_END(intel_ethernet)
The ZIRCON_DRIVER_BEGIN and _END macros include the necessary compiler directives to put the binding program into an ELF NOTE section, which allows it to be inspected by the Device Coordinator without needing to fully load the driver into its process. The second parameter to the _BEGIN macro is a pointer to a zx_driver_ops_t structure (defined in ddk/driver.h), which defines the init, bind, create, and release methods.
init() is invoked when a driver is loaded into a Device Host process and allows for any global initialization. Typically none is required. If the init() method is implemented and fails, the driver load will fail.
bind() is invoked to offer the driver a device to bind to. The device is one that has matched the bind program the driver has published. If the bind() method succeeds, the driver must create a new device and add it as a child of the device passed in to the bind() method. See Device Lifecycle for more information.
create() is invoked for platform/system bus drivers or proxy drivers. For the vast majority of drivers, this method is not required.
release() is invoked before the driver is unloaded, after all devices it may have created in bind() and elsewhere have been destroyed. Currently this method is never invoked. Drivers, once loaded, remain loaded for the life of a Device Host process.
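As a sketch of how these hooks are supplied (assuming the C DDK of this era; the my_* names are hypothetical), a driver defines a zx_driver_ops_t and passes it to ZIRCON_DRIVER_BEGIN, just as the Intel Ethernet example above passes intel_ethernet_driver_ops:
#include <ddk/driver.h>

static zx_status_t my_bind(void* ctx, zx_device_t* parent) {
    // Probe the hardware, allocate per-device state, then publish a child
    // with device_add() (see Device Lifecycle below).
    return ZX_OK;
}

static zx_driver_ops_t my_driver_ops = {
    .version = DRIVER_OPS_VERSION,
    .bind = my_bind,
    // init, create, and release may be left NULL when not needed.
};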
Device Lifecycle
Within a Device Host process, devices exist as a tree of zx_device_t structures which are opaque to the driver. These are created with device_add(), to which the driver provides a zx_protocol_device_t structure. The methods defined by the function pointers in this structure are the "device ops". The various structures and functions are defined in device.h.
The device_add() function creates a new device, adding it as a child of the provided parent device. That parent device must be either the device passed in to the bind() method of a device driver, or another device created by the same device driver.
A side-effect of device_add() is that the newly created device will be added to the global Device Filesystem maintained by the Device Coordinator. If the device is created with the DEVICE_ADD_INVISIBLE flag, it will not be accessible via opening its node in devfs until device_make_visible() is invoked. This is useful for drivers that have to do extended initialization or probing and do not want to visibly publish their device(s) until that succeeds (and quietly remove them if it fails).
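As a minimal sketch of publishing a device, assuming the device_add_args_t API from ddk/driver.h (the my_* names are hypothetical):
#include <ddk/device.h>
#include <ddk/driver.h>
#include <stdlib.h>

typedef struct my_device {
    zx_device_t* zxdev;
} my_device_t;

static void my_release(void* ctx) {
    free(ctx);  // last chance to free per-device state
}

static zx_protocol_device_t my_device_ops = {
    .version = DEVICE_OPS_VERSION,
    .release = my_release,
    // other "device ops" (open, read, write, ioctl, unbind, ...) go here
};

static zx_status_t my_bind(void* ctx, zx_device_t* parent) {
    my_device_t* dev = calloc(1, sizeof(my_device_t));
    if (dev == NULL) {
        return ZX_ERR_NO_MEMORY;
    }
    device_add_args_t args = {
        .version = DEVICE_ADD_ARGS_VERSION,
        .name = "my-device",
        .ctx = dev,                        // handed back to the device ops as ctx
        .ops = &my_device_ops,
        // .flags = DEVICE_ADD_INVISIBLE,  // defer devfs publication if desired
    };
    return device_add(parent, &args, &dev->zxdev);
}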
Devices are reference counted. When a driver creates one with device_add(), it then holds a reference on that device until it eventually calls device_remove(). If a device is opened by a remote process via the Device Filesystem, a reference is acquired there as well. When a device's parent is removed, its unbind() method is invoked. This signals to the driver that it should start shutting the device down and remove any child devices it has created by calling device_remove() on them.
Since a child device may have work in progress when its unbind() method is called, it's possible that the parent device which just called device_remove() on the child could continue to receive device method calls or protocol method calls on behalf of that child. It is advisable that, before removing its children, the parent device arrange for these methods to return errors, so that calls from a child before the child removal is completed do not start more work or cause unexpected interactions.
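For example, an unbind() hook might look like the following sketch (my_device_t and its lock/dead fields are hypothetical, continuing the sketch above):
#include <threads.h>

static void my_unbind(void* ctx) {
    my_device_t* dev = ctx;
    // Refuse new work first, so device or protocol calls arriving from
    // children during teardown fail cleanly instead of starting more work.
    mtx_lock(&dev->lock);
    dev->dead = true;
    mtx_unlock(&dev->lock);
    // Schedule removal; release() runs once all references are gone.
    device_remove(dev->zxdev);
}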
From the moment that device_add() is called without the DEVICE_ADD_INVISIBLE flag, or device_make_visible() is called on an invisible device, other device ops may be called by the Device Host.
The release() method is only called after the creating driver has called device_remove() on the device, all open instances of that device have been closed, and all children of that device have been removed and released. This is the last opportunity for the driver to destroy or free any resources associated with the device. It is not valid to refer to the zx_device_t for that device after release() returns. Calling any device methods or protocol methods for protocols obtained from the parent device past this point is illegal and will likely result in a crash.