https://research.google.com/archive/spanner.html
Abstract

Spanner is Google’s scalable, multi-version, globally-distributed, and synchronously-replicated database. It is the first system to distribute data at global scale and support externally-consistent distributed transactions. This paper describes how Spanner is structured, its feature set, the rationale underlying various design decisions, and a novel time API that exposes clock uncertainty. This API and its implementation are critical to supporting external consistency and a variety of powerful features: non-blocking reads in the past, lock-free read-only transactions, and atomic schema changes, across all of Spanner.
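The time API referred to above is TrueTime, described later in the paper: it represents time not as a single scalar but as a bounded interval guaranteed to contain the absolute time of the call. The following Go sketch illustrates the interval semantics of its three methods; the fixed uncertainty bound is purely illustrative (in the real system it is derived from GPS and atomic-clock references), and Go itself is a stand-in, not the implementation language.

```go
package truetime

import "time"

// Interval is a bounded clock reading: the absolute time at which
// Now was invoked is guaranteed to lie in [Earliest, Latest].
type Interval struct {
	Earliest time.Time
	Latest   time.Time
}

// epsilon is the instantaneous uncertainty bound. A fixed value here
// is purely illustrative; the real bound varies with clock sync state.
const epsilon = 5 * time.Millisecond

// Now returns an interval that contains the absolute time of the call.
func Now() Interval {
	t := time.Now()
	return Interval{Earliest: t.Add(-epsilon), Latest: t.Add(epsilon)}
}

// After reports whether t has definitely passed.
func After(t time.Time) bool { return t.Before(Now().Earliest) }

// Before reports whether t has definitely not yet arrived.
func Before(t time.Time) bool { return t.After(Now().Latest) }
```

External consistency rests on this interval: per the paper's commit-wait rule, a transaction's commit timestamp is not made visible until After(timestamp) holds.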
1 Introduction
Spanner is a scalable, globally-distributed database designed, built, and deployed at Google. At the highest level of abstraction, it is a database that shards data across many sets of Paxos [21] state machines in datacenters spread all over the world. Replication is used for global availability and geographic locality; clients automatically fail over between replicas. Spanner automatically reshards data across machines as the amount of data or the number of servers changes, and it automatically migrates data across machines (even across datacenters) to balance load and in response to failures. Spanner is designed to scale up to millions of machines across hundreds of datacenters and trillions of database rows.
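To make the sharding structure concrete, here is a hedged Go sketch of a key-range to Paxos-group mapping with a client-side lookup. All names (ShardMap, PaxosGroup, Lookup) are hypothetical, and the table is static, whereas Spanner reshards and migrates these assignments automatically.

```go
package sharding

import "sort"

// PaxosGroup identifies one replicated state machine; its replicas
// may live in different datacenters. Fields are illustrative.
type PaxosGroup struct {
	ID       int
	Replicas []string // replica addresses (hypothetical)
}

// shard maps a contiguous key range [Start, End) to one Paxos group.
type shard struct {
	Start, End string
	Group      *PaxosGroup
}

// ShardMap holds the range-to-group mapping, kept sorted by Start.
type ShardMap struct {
	shards []shard // sorted by Start
}

// Lookup finds the group responsible for key, or nil if unassigned.
func (m *ShardMap) Lookup(key string) *PaxosGroup {
	// First shard whose Start is strictly greater than key; the
	// candidate owner is the shard just before it.
	i := sort.Search(len(m.shards), func(i int) bool {
		return m.shards[i].Start > key
	})
	if i == 0 {
		return nil
	}
	if s := m.shards[i-1]; key < s.End {
		return s.Group
	}
	return nil
}
```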
Applications can use Spanner for high availability, even in the face of wide-area natural disasters, by replicating their data within or even across continents. Our initial customer was F1 [35], a rewrite of Google’s advertising backend. F1 uses five replicas spread across the United States. Most other applications will probably replicate their data across 3 to 5 datacenters in one geographic region, but with relatively independent failure modes. That is, most applications will choose lower latency over higher availability, as long as they can survive 1 or 2 datacenter failures.
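The 3-to-5-replica figure lines up with standard Paxos majority quorums (an assumption on our part; the paper does not spell out this arithmetic here): n replicas tolerate floor((n-1)/2) simultaneous failures, so 3 replicas survive 1 datacenter loss and 5 survive 2.

```go
package main

import "fmt"

// tolerated returns how many replica failures a majority-quorum
// protocol such as Paxos can survive with n replicas.
func tolerated(n int) int { return (n - 1) / 2 }

func main() {
	for _, n := range []int{3, 5} {
		fmt.Printf("%d replicas: quorum %d, survives %d failure(s)\n",
			n, n/2+1, tolerated(n))
	}
	// Output:
	// 3 replicas: quorum 2, survives 1 failure(s)
	// 5 replicas: quorum 3, survives 2 failure(s)
}
```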
Spanner’s main focus is managing cross-datacenter replicated data, but we have also spent a great deal of time in designing and implementing important database features on top of our distributed-systems infrastructure. Even though many projects happily use Bigtable [9], we have also consistently received complaints from users that Bigtable can be difficult to use for some kinds of applications: those that have complex, evolving schemas, or those that want strong consistency in the presence of wide-area replication. (Similar claims have been made by other authors [37].) Many applications at Google have chosen to use Megastore [5] because of its semi-relational data model and support for synchronous replication, despite its relatively poor write throughput. As a consequence, Spanner has evolved from a Bigtable-like versioned key-value store into a temporal multi-version database. Data is stored in schematized semi-relational tables; data is versioned, and each version is automatically timestamped with its commit time; old versions of data are subject to configurable garbage-collection policies; and applications can read data at old timestamps. Spanner supports general-purpose transactions, and provides a SQL-based query language.
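The temporal multi-version model can be made concrete with a small sketch. This toy Go store illustrates the semantics only (not Spanner’s storage format or API): each write carries its commit timestamp, a read at an old timestamp returns the newest version at or before it, and a configurable horizon drives garbage collection.

```go
package mvstore

import (
	"sort"
	"time"
)

// version is one timestamped value of a key.
type version struct {
	Commit time.Time
	Value  string
}

// Store is a toy multi-version map: per key, versions sorted by
// commit time. We assume writes arrive in commit-time order.
type Store struct {
	data map[string][]version
}

func NewStore() *Store { return &Store{data: make(map[string][]version)} }

// Write records a new version stamped with its commit timestamp.
func (s *Store) Write(key, value string, commit time.Time) {
	s.data[key] = append(s.data[key], version{commit, value})
}

// ReadAt returns the value of key as of timestamp t: the newest
// version whose commit time is at or before t.
func (s *Store) ReadAt(key string, t time.Time) (string, bool) {
	vs := s.data[key]
	i := sort.Search(len(vs), func(i int) bool { return vs[i].Commit.After(t) })
	if i == 0 {
		return "", false
	}
	return vs[i-1].Value, true
}

// GC applies a configurable retention policy: every version that is
// not the newest and committed before horizon is discarded.
func (s *Store) GC(horizon time.Time) {
	for k, vs := range s.data {
		keep := vs[:0]
		for i, v := range vs {
			if i == len(vs)-1 || !v.Commit.Before(horizon) {
				keep = append(keep, v)
			}
		}
		s.data[k] = keep
	}
}
```

A read at an old timestamp touches only versions that can no longer change, which is why such reads need no locks; this immutability is what underlies the non-blocking reads in the past and lock-free read-only transactions listed in the abstract.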