• Facebook's architecture


    Reposted:

    From various readings and conversations I had, my understanding of Facebook’s current architecture is:

    * Web front-end written in PHP. Facebook's HipHop [1] converts it to C++ and compiles it with g++, providing a high-performance templating and Web-logic execution layer
    * Business logic is exposed as services using Thrift [2]. These services are implemented in PHP, C++ or Java depending on their requirements (some other languages are probably used as well); see the Thrift client sketch after this list
    * Services implemented in Java don't use a standard enterprise application server but rather Facebook's own custom application server. At first glance this may look like reinventing the wheel, but since these services are exposed and consumed only (or mostly) via Thrift, the overhead of Tomcat, or even Jetty, was probably too high with no significant added value for their needs.
    * Persistence is done using MySQL, Memcached [3], Cassandra [4] (originally developed at Facebook) and Hadoop's HBase [5]. Memcached is used as a cache for MySQL as well as a general-purpose cache; see the look-aside cache sketch after this list. Facebook engineers admit that their use of Cassandra is currently decreasing, as they now prefer HBase for its simpler consistency model and its MapReduce ability.
    * Offline processing is done using Hadoop and Hive
    * Data such as logs, clicks and feeds transit through Scribe [6] and are aggregated and stored in HDFS using Scribe-HDFS [7], thus allowing extended analysis using MapReduce
    * BigPipe [8] is their custom technology for accelerating page rendering using pipelining logic; see the pagelet-streaming sketch after this list
    * Varnish Cache [9] is used for HTTP proxying. They preferred it for its high performance and efficiency [10].
    * The storage of the billions of photos posted by users is handled by Haystack, an ad-hoc storage solution developed by Facebook which brings low-level optimizations and append-only writes [11].
    * Facebook Messages uses its own architecture, notably based on infrastructure sharding and dynamic cluster management. Business logic and persistence are encapsulated in so-called 'Cells'. Each Cell handles a subset of users; new Cells can be added as popularity grows [12] (see the Cell-routing sketch after this list). Persistence is achieved using HBase [13].
    * Facebook Messages' search engine is built with an inverted index stored in HBase [14]; see the inverted-index sketch after this list
    * The implementation details of Facebook's search engine are, as far as I know, not public
    * The typeahead search uses custom storage and retrieval logic [15]
    * Chat is based on an Epoll server developed in Erlang and accessed using Thrift [16]; see the event-driven server sketch after this list
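
    To make the Thrift service layer concrete, here is a minimal Python client sketch. Only the transport and protocol plumbing is the real Apache Thrift API; the UserService interface, its getProfile method and the host/port are hypothetical stand-ins, since Facebook's actual IDL is not public.

        # Minimal Thrift client sketch. The transport/protocol classes are the
        # standard Apache Thrift Python bindings; everything named "UserService"
        # is hypothetical and assumes code generated from an IDL file with:
        #   thrift --gen py user_service.thrift
        from thrift.transport import TSocket, TTransport
        from thrift.protocol import TBinaryProtocol
        from user_service import UserService  # hypothetical generated module

        def fetch_profile(user_id):
            # Framed transport over a plain socket to an assumed endpoint.
            transport = TTransport.TFramedTransport(
                TSocket.TSocket("user-service.local", 9090))
            client = UserService.Client(TBinaryProtocol.TBinaryProtocol(transport))
            transport.open()
            try:
                return client.getProfile(user_id)  # hypothetical RPC method
            finally:
                transport.close()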
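
    The MySQL-plus-Memcached combination is the classic look-aside cache pattern: read from the cache first, fall back to the database on a miss, and repopulate the cache on the way out. A minimal sketch using the python-memcached client; the table layout, key scheme and expiry time are illustrative, not Facebook's actual ones.

        import json
        import memcache  # python-memcached client

        mc = memcache.Client(["127.0.0.1:11211"])

        def get_user(user_id, db):
            """Look-aside cache: try Memcached first, fall back to MySQL."""
            key = "user:%d" % user_id
            cached = mc.get(key)
            if cached is not None:
                return json.loads(cached)

            # Cache miss: read from the database (cursor API is illustrative).
            cur = db.cursor()
            cur.execute("SELECT id, name FROM users WHERE id = %s", (user_id,))
            row = cur.fetchone()
            if row is None:
                return None

            user = {"id": row[0], "name": row[1]}
            mc.set(key, json.dumps(user), time=300)  # expire after 5 minutes
            return user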
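
    BigPipe's core idea is that the server flushes an early HTML skeleton and then streams each "pagelet" as its data becomes ready, instead of blocking until the whole page is built. The generator below only illustrates that ordering; the real implementation is PHP and JavaScript, streams pagelets in completion order, and is far more involved.

        def render_page(pagelets):
            """Yield an HTML skeleton first, then each pagelet as it renders.

            `pagelets` maps placeholder ids to render functions. A real
            BigPipe response is flushed chunk by chunk over one HTTP
            connection, with client-side JS placing each payload.
            """
            # 1. Flush the skeleton right away: empty divs the browser can
            #    lay out and style before any pagelet content is ready.
            yield "<html><body>"
            for pid in pagelets:
                yield '<div id="%s"></div>' % pid

            # 2. Stream each pagelet's payload as soon as it is rendered.
            for pid, render in pagelets.items():
                yield ('<script>document.getElementById("%s").innerHTML = %s;'
                       '</script>' % (pid, repr(render())))

            yield "</body></html>"

        # Illustrative usage: two pagelets rendered independently.
        for chunk in render_page({"profile": lambda: "<b>profile box</b>",
                                  "feed": lambda: "<ul><li>story</li></ul>"}):
            print(chunk)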
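
    The Messages Cell design amounts to a routing layer: a directory maps each user to the Cell that owns their data, and capacity is added by bringing up new Cells. The sketch below is a guess at the shape of that logic; the sticky newest-Cell placement is illustrative, not Facebook's documented scheme.

        class CellDirectory:
            """Maps each user to the Cell that owns their messages.

            A Cell bundles business logic and HBase-backed persistence.
            This directory is illustrative: new users land on the newest
            Cell, and assignments are sticky so adding a Cell never
            moves existing users.
            """

            def __init__(self, cells):
                self.cells = list(cells)   # e.g. ["cell-1", "cell-2"]
                self.assignment = {}       # user_id -> cell name

            def add_cell(self, name):
                """Add capacity as the user base grows."""
                self.cells.append(name)

            def cell_for(self, user_id):
                if user_id not in self.assignment:
                    self.assignment[user_id] = self.cells[-1]
                return self.assignment[user_id]

        # Illustrative usage:
        directory = CellDirectory(["cell-1"])
        assert directory.cell_for("alice") == "cell-1"
        directory.add_cell("cell-2")
        assert directory.cell_for("bob") == "cell-2"    # new user, new Cell
        assert directory.cell_for("alice") == "cell-1"  # existing user stays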
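
    An inverted index maps each term to the set of messages containing it, so a query is just an intersection of posting sets. The toy in-memory version below illustrates the data structure only; how the index is laid out in HBase rows and columns is not spelled out publicly, so no schema is assumed here.

        from collections import defaultdict

        class MessageSearchIndex:
            """Toy per-user inverted index: term -> set of message ids."""

            def __init__(self):
                self.index = defaultdict(lambda: defaultdict(set))

            def add_message(self, user_id, message_id, text):
                # Naive tokenization; a real indexer would normalize further.
                for term in text.lower().split():
                    self.index[user_id][term].add(message_id)

            def search(self, user_id, query):
                """Return ids of messages containing every query term."""
                postings = [self.index[user_id][t]
                            for t in query.lower().split()]
                return set.intersection(*postings) if postings else set()

        # Illustrative usage:
        idx = MessageSearchIndex()
        idx.add_message("alice", "m1", "lunch tomorrow at noon")
        idx.add_message("alice", "m2", "lunch was great")
        print(idx.search("alice", "lunch"))           # {'m1', 'm2'}
        print(idx.search("alice", "lunch tomorrow"))  # {'m1'}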
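
    What made an Epoll server attractive for Chat is that one thread can multiplex a huge number of mostly idle, long-lived connections. The same event-driven pattern is sketched here with Python's standard selectors module (epoll-backed on Linux); this illustrates the pattern, not Facebook's Erlang implementation.

        import selectors
        import socket

        sel = selectors.DefaultSelector()  # uses epoll on Linux

        def accept(server):
            conn, _addr = server.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ, handle)

        def handle(conn):
            data = conn.recv(4096)
            if data:
                conn.sendall(data)  # echo; a chat server would route it
            else:
                sel.unregister(conn)
                conn.close()

        def serve(port=8007):
            server = socket.socket()
            server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            server.bind(("", port))
            server.listen(128)
            server.setblocking(False)
            sel.register(server, selectors.EVENT_READ, accept)
            while True:
                # One thread, many connections: dispatch each ready socket
                # to the callback stored at registration time.
                for key, _events in sel.select():
                    key.data(key.fileobj)

        if __name__ == "__main__":
            serve()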

    As for the resources provisioned for each of these components, some information and numbers are known:

    * Facebook is estimated to own more than 60,000 servers [17]. Their recent datacenter in Prineville, Oregon is built entirely on self-designed hardware [18] that was recently unveiled as the Open Compute Project [19].
    * 300 TB of data is stored in Memcached processes [20]
    * Their Hadoop and Hive cluster is made up of 3,000 servers, each with 8 cores, 32 GB of RAM and 12 TB of disk, for a total of 24,000 cores, 96 TB of RAM and 36 PB of disk [20] (multiplied out in the snippet after this list)
    * 100 billion hits per day, 50 billion photos, 3 trillion cached objects and 130 TB of logs per day as of July 2010 [21]
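
    As a quick sanity check, the per-node figures multiply out to exactly the quoted cluster totals (decimal units):

        nodes = 3000
        print(nodes * 8, "cores")            # 24000 cores
        print(nodes * 32 // 1000, "TB RAM")  # 96 TB of RAM
        print(nodes * 12 // 1000, "PB disk") # 36 PB of disk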

    [1] HipHop for PHP: http://developers.facebook.com/blog/post/358
    [2] Thrift: http://thrift.apache.org/
    [3] Memcached: http://memcached.org/
    [4] Cassandra: http://cassandra.apache.org/
    [5] HBase: http://hbase.apache.org/
    [6] Scribe: https://github.com/facebook/scribe
    [7] Scribe-HDFS: http://hadoopblog.blogspot.com/2009/06/hdfs-scribe-integration.html
    [8] BigPipe: http://www.facebook.com/notes/facebook-engineering/bigpipe-pipelining-web-pages-for-high-performance/389414033919
    [9] Varnish Cache: http://www.varnish-cache.org/
    [10] Facebook goes for Varnish: http://www.varnish-software.com/customers/facebook
    [11] Needle in a haystack: efficient storage of billions of photos: http://www.facebook.com/note.php?note_id=76191543919
    [12] Scaling the Messages Application Back End: http://www.facebook.com/note.php?note_id=10150148835363920
    [13] The Underlying Technology of Messages: https://www.facebook.com/note.php?note_id=454991608919
    [14] The Underlying Technology of Messages Tech Talk: http://www.facebook.com/video/video.php?v=690851516105
    [15] Facebook’s typeahead search architecture: http://www.facebook.com/video/video.php?v=432864835468
    [16] Facebook Chat: http://www.facebook.com/note.php?note_id=14218138919
    [17] Who has the most Web Servers?: http://www.datacenterknowledge.com/archives/2009/05/14/whos-got-the-most-web-servers/
    [18] Building Efficient Data Centers with the Open Compute Project: http://www.facebook.com/note.php?note_id=10150144039563920
    [19] Open Compute Project: http://opencompute.org/
    [20] Facebook’s architecture presentation at Devoxx 2010: http://www.devoxx.com
    [21] Scaling Facebook to 500 millions users and beyond: http://www.facebook.com/note.php?note_id=409881258919

    Successful cases are always very instructive; study more, read more.
