• Configuring the socket PHP-FPM listens on


    A typical PHP web stack today is LNMP (Linux + Nginx + MySQL + PHP), where the Linux distribution might be CentOS, Ubuntu, etc., and the database might be MySQL, PostgreSQL, SQL Server, and so on.

    After installing PHP-FPM and Nginx on the server, we configure Nginx's http block so that requests for .php files are forwarded by Nginx to PHP-FPM for processing; PHP-FPM's result is then sent back to the browser in the HTTP response, completing one HTTP request.

    When configuring the Nginx http block, it usually looks like this:

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass 127.0.0.1:9000;
    }

    Or like this:

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }

    So what is the difference between these two approaches?

    That is the question this post sets out to answer. Below I walk through the underlying mechanics. What follows is my own understanding; if anything is wrong, corrections are very welcome.

    PHP-FPM can listen on multiple sockets: it can listen on Unix sockets or on TCP sockets. Let's see how this works and how to ensure Nginx is properly sending requests to PHP-FPM.

    Command Rundown

    Default Configuration

    Edit PHP-FPM configuration

    # Configure PHP-FPM default resource pool
    sudo vim /etc/php5/fpm/pool.d/www.conf
    

    PHP-FPM Listen configuration:

    # Stuff omitted
    listen = /var/run/php5-fpm.sock
    listen.owner = www-data
    listen.group = www-data
    

    Also edit Nginx and see where it's sending requests to PHP-FPM:

    # Files: /etc/nginx/sites-available/default
    
    # ... stuff omitted
    
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
    

    We can see above that Nginx is sending requests to PHP-FPM via a unix socket (faux file) at /var/run/php5-fpm.sock. This is also where the www.conf file is setting PHP-FPM to listen for connections.
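    A unix socket really is a file on disk, so it can be created, inspected, and connected to like one. The sketch below is a minimal illustration using a throwaway temp path (standing in for /var/run/php5-fpm.sock, which it does not touch): it creates a unix socket the way PHP-FPM's listen directive does, confirms the filesystem entry has type "socket", and connects to it the way Nginx's fastcgi_pass unix:... does.

```python
import os
import socket
import stat
import tempfile

# Create a unix socket the way PHP-FPM's "listen" directive does.
# The path is a throwaway temp path standing in for /var/run/php5-fpm.sock.
sock_path = os.path.join(tempfile.mkdtemp(), "fpm-demo.sock")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)       # bind() creates the "faux file" on disk
server.listen(1)

# The socket is a real filesystem entry, of type "socket".
mode = os.stat(sock_path).st_mode
print(stat.S_ISSOCK(mode))   # True

# Connect to it the way Nginx's "fastcgi_pass unix:..." would.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(sock_path)
conn, _ = server.accept()
client.sendall(b"hello fpm")
data = conn.recv(16)
print(data)                  # b'hello fpm'

client.close(); conn.close(); server.close()
os.unlink(sock_path)         # the file persists until explicitly removed
```

    Note the final unlink: unlike a TCP port, a unix socket path persists after the server exits, which is why daemons remove or re-bind their socket file on startup.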

    Unix Sockets

    These are secure in that they are file-based and can't be read by remote servers. We can further use Linux permissions to control who can read from and write to this socket file.

    Nginx is run as user/group www-data. PHP-FPM's unix socket therefore needs to be readable/writable by this user.

    If we change the Unix socket owner to user/group ubuntu, Nginx will then return a bad gateway error, as it can no longer communicate with the socket file. We would have to change Nginx to run as user "ubuntu" as well, or set the socket file to allow "other" (neither user nor group) to read and write it, which is insecure.

    # Stuff omitted
    listen = /var/run/php5-fpm.sock
    listen.owner = ubuntu
    listen.group = ubuntu
    

    So, file permissions are the security mechanism for PHP-FPM when using a unix socket. The faux-file's user/group and its user/group/other permission bits determine which local users and processes can read from and write to the PHP-FPM socket.
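    To make the permission mechanism concrete, here is a small sketch, again against a throwaway temp socket rather than the real PHP-FPM one, that sets and inspects the mode bits on a unix socket file. This is effectively what the listen.owner / listen.group / listen.mode pool directives control.

```python
import os
import socket
import stat
import tempfile

sock_path = os.path.join(tempfile.mkdtemp(), "fpm-demo.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)

# Restrict the socket to its owner only, like listen.mode = 0600:
# only the owning user (e.g. www-data) may connect.
os.chmod(sock_path, 0o600)
perms_strict = stat.S_IMODE(os.stat(sock_path).st_mode)
print(oct(perms_strict))  # 0o600

# Opening it up to "other", like listen.mode = 0666, is the insecure option.
os.chmod(sock_path, 0o666)
perms_open = stat.S_IMODE(os.stat(sock_path).st_mode)
print(oct(perms_open))    # 0o666

server.close()
os.unlink(sock_path)
```

    One caveat: whether connect() actually enforces the socket file's permission bits is OS-dependent (Linux requires write permission on the socket file; some BSDs historically ignored it), so on non-Linux systems the directory permissions matter too.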

    TCP Sockets

    Setting the listen directive to a TCP socket (IP address and port) makes PHP-FPM listen over the network rather than on a unix socket. Remote servers can then connect to PHP-FPM (and local clients can still connect over the loopback network).

    Change listen to listen = 127.0.0.1:9000 to make PHP-FPM listen on the loopback network. For security, we can use the listen.allowed_clients directive rather than setting the owner/group of the socket.

    PHP-FPM:

    # Listen on localhost port 9000
    listen = 127.0.0.1:9000
    # Ensure only localhost can connect to PHP-FPM
    listen.allowed_clients = 127.0.0.1
    

    Nginx:

    # Files: /etc/nginx/sites-available/default
    
    # ... stuff omitted
    
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass 127.0.0.1:9000;
    }

    http://lists.freebsd.org/pipermail/freebsd-performance/2005-February/001143.html

    unix domain sockets vs. internet sockets

    Robert Watson rwatson at FreeBSD.org 
    Fri Feb 25 02:29:14 PST 2005

    On Fri, 25 Feb 2005, Baris Simsek wrote:
    
    > I am coding a daemon program. I am not sure about which type of sockets
    > i should use. Could you compare ip sockets and unix domain sockets? My
    > main criterions are performance and protocol load. What are the
    > differences between implementations of them at kernel level?
    
    There are a few differences that might be of interest, in addition to the
    already pointed out difference that if you start out using IP sockets, you
    don't have to migrate to them later when you want inter-machine
    connectivity: 
    
    - UNIX domain sockets use the file system as the address name space.  This
      means you can use UNIX file permissions to control access to communicate
      with them.  I.e., you can limit what other processes can connect to the
      daemon -- maybe one user can, but the web server can't, or the like.
      With IP sockets, the ability to connect to your daemon is exposed off
      the current system, so additional steps may have to be taken for
      security.  On the other hand, you get network transparency.  With UNIX
      domain sockets, you can actually retrieve the credential of the process
      that created the remote socket, and use that for access control also,
      which can be quite convenient on multi-user systems.
    
    - IP sockets over localhost are basically looped back network on-the-wire
      IP.  There is intentionally "no special knowledge" of the fact that the
      connection is to the same system, so no effort is made to bypass the
      normal IP stack mechanisms for performance reasons.  For example,
      transmission over TCP will always involve two context switches to get to
      the remote socket, as you have to switch through the netisr, which
      occurs following the "loopback" of the packet through the synthetic
      loopback interface.  Likewise, you get all the overhead of ACKs, TCP
      flow control, encapsulation/decapsulation, etc.  Routing will be
      performed in order to decide if the packets go to the localhost.
      Large sends will have to be broken down into MTU-size datagrams, which
      also adds overhead for large writes.  It's really TCP, it just goes over
      a loopback interface by virtue of a special address, or discovering that
      the address requested is served locally rather than over an ethernet
      (etc). 
    
    - UNIX domain sockets have explicit knowledge that they're executing on
      the same system.  They avoid the extra context switch through the
      netisr, and a sending thread will write the stream or datagrams directly
      into the receiving socket buffer.  No checksums are calculated, no
      headers are inserted, no routing is performed, etc.  Because they have
      access to the remote socket buffer, they can also directly provide
      feedback to the sender when it is filling, or more importantly,
      emptying, rather than having the added overhead of explicit
      acknowledgement and window changes.  The one piece of functionality that
      UNIX domain sockets don't provide that TCP does is out-of-band data.  In
      practice, this is an issue for almost noone.
    
    In general, the argument for implementing over TCP is that it gives you
    location independence and immediate portability -- you can move the client
    or the daemon, update an address, and it will "just work".  The sockets
    layer provides a reasonable abstraction of communications services, so
    it's not hard to write an application so that the connection/binding
    portion knows about TCP and UNIX domain sockets, and all the rest just
    uses the socket it's given.  So if you're looking for performance locally,
    I think UNIX domain sockets probably best meet your need.  Many people
    will code to TCP anyway because performance is often less critical, and
    the network portability benefit is substantial.
    
    Right now, the UNIX domain socket code is covered by a subsystem lock; I
    have a version that used more fine-grain locking, but have not yet
    evaluated the performance impact of those changes.  If you're running in
    an SMP environment with four processors, it could be that those changes
    might positively impact performance, so if you'd like the patches, let me
    know.  Right now they're on my schedule to start testing, but not on the
    path for inclusion in FreeBSD 5.4.  The primary benefit of greater
    granularity would be if you had many pairs of threads/processes
    communicating across processors using UNIX domain sockets, and as a result
    there was substantial contention on the UNIX domain socket subsystem lock. 
    The patches don't increase the cost of normal send/receive operations, but
    do add extra mutex operations in the listen/accept/connect/bind paths.
    
    Robert N M Watson
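    Watson's points about loopback TCP overhead versus unix domain sockets can be illustrated with a small round-trip sketch. This is a toy measurement (one process, tiny messages, a few thousand pings), not a rigorous benchmark; absolute numbers vary by system, and on a lightly loaded machine the gap may be small.

```python
import os
import socket
import tempfile
import threading
import time

def recv_exact(sock, n):
    """Read exactly n bytes (TCP may split a message across recv calls)."""
    buf = b""
    while len(buf) < n:
        buf += sock.recv(n - len(buf))
    return buf

def echo_server(server):
    conn, _ = server.accept()
    while True:
        data = conn.recv(64)
        if not data:
            break
        conn.sendall(data)
    conn.close()

def round_trips(client, n=1000):
    start = time.perf_counter()
    for _ in range(n):
        client.sendall(b"ping")
        assert recv_exact(client, 4) == b"ping"
    return time.perf_counter() - start

# Unix domain socket pair.
path = os.path.join(tempfile.mkdtemp(), "bench.sock")
us = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
us.bind(path); us.listen(1)
threading.Thread(target=echo_server, args=(us,), daemon=True).start()
uc = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
uc.connect(path)

# TCP loopback pair.
ts = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ts.bind(("127.0.0.1", 0)); ts.listen(1)
threading.Thread(target=echo_server, args=(ts,), daemon=True).start()
tc = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tc.connect(ts.getsockname())

unix_time = round_trips(uc)
tcp_time = round_trips(tc)
print(f"unix: {unix_time:.4f}s  tcp: {tcp_time:.4f}s")

uc.close(); tc.close(); os.unlink(path)
```

    For a single Nginx and PHP-FPM on the same host, this latency difference is the main argument for the unix-socket configuration; the TCP configuration wins the moment PHP-FPM moves to another machine.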
    
    
     
  • Original article: https://www.cnblogs.com/oxspirt/p/5109249.html