    Nginx, Websockets, SSL and Socket.IO deployment

    I’ve spent some time recently figuring out the options for deploying Websockets with SSL and load balancing – and more specifically, Socket.IO – while allowing for dual stacks (e.g. Node.js and another dev platform). Since there seems to be very little concrete guidance on this topic, here are my notes – I’d love to hear from you about your implementation (leave a comment or write about it and link back)…

    The goal here is to:

    1. Expose Socket.io and your main application from a single port — avoiding cross-domain communication
    2. Support HTTPS for both connections — enabling secure messaging
    3. Support the Websockets and Flashsockets transports from Socket.io — for performance
    4. Perform load balancing for both the backends somewhere — for performance

    Socket.io’s various transports

    Socket.io supports multiple different transports:

    • WebSockets — which start out as long-lived HTTP 1.1 requests and, after a handshake, upgrade to the Websockets protocol
    • Flash sockets — which are plain TCP sockets with optional SSL support (but Flash seems to use an older SSL encryption method)
    • various kinds of polling — which work over long-lived HTTP 1.0 requests

    Starting point: Nginx and Websockets

    Nginx is generally the first recommendation for Node.js deployments. It’s a high-performance server and even includes support for proxying requests via the HttpProxyModule.

    However — and this should be made much more obvious to people starting with Socket.io — the problem is that while Nginx can talk HTTP/1.1 to the client (browser), it talks HTTP/1.0 to the backend server. Nginx’s default HttpProxyModule does not support HTTP/1.1, which is needed for Websockets.

    Websockets (draft 76) requires support for HTTP/1.1, as the handshake mechanism is not compatible with HTTP/1.0. What this means is that if Nginx is used to reverse proxy a Websockets server (like Socket.io), then the WS connections will fail. So no Websockets for you if you’re behind Nginx.
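    For concreteness, a draft 76 Websockets handshake from the client looks roughly like this (the path and key values are illustrative, loosely modelled on the draft’s sample handshake). An HTTP/1.0 proxy will not forward the Upgrade/Connection headers or the trailing 8-byte key, so the server never gets to answer with the “HTTP/1.1 101 WebSocket Protocol Handshake” response that completes the upgrade:

    GET /socket.io/1/websocket/abc123 HTTP/1.1
    Host: example.com
    Upgrade: WebSocket
    Connection: Upgrade
    Origin: http://example.com
    Sec-WebSocket-Key1: 4 @1  46546xW%0l 1 5
    Sec-WebSocket-Key2: 12998 5 Y3 1  .P00

    ^n:ds[4U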

    There is a workaround, but I don’t see the benefit: use a TCP proxy (there is a custom module for this by Weibin Yao, see here; a rough sketch of such a config is shown below). However, you cannot run another service on the same port (e.g. your main app and Socket.io on port 80), as the TCP proxy does not support routing based on the URL (e.g. /socket.io/ to Socket.io and the rest to the main app), only simple load balancing.
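    For reference, a minimal configuration with that TCP proxy module would look roughly like the sketch below. The block structure (tcp / upstream / server) follows the module’s documented style, but the names and ports here are my own assumptions, not taken from the module’s docs verbatim:

    tcp {
        upstream socket_io_cluster {
            server 127.0.0.1:8000;
            server 127.0.0.1:8001;
        }
        server {
            listen 8443;
            proxy_pass socket_io_cluster;
        }
    }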

    So the benefit gained from doing this is quite marginal: sure, you can use Nginx for load balancing, but you will still be working with alternative ports for your main app and Socket.io.

    Alternatives to Nginx

    Since you can’t use Nginx to proxy Websockets, you’ll need to deal with two separate problems:

    1. How to terminate SSL connections and
    2. How to route HTTP traffic to the right backend based on the URL / load balance

    If you want to run two services on the same port, then you will have to terminate SSL connections before doing anything else. There are several alternatives for SSL termination:

    • Stunnel. Supports multiple SSL certificates per process, does simple SSL termination to another port.
    • Stud. Only supports one SSL certificate per invocation, does simple SSL termination to another port.
    • Pound. An SSL-termination-capable reverse proxy and load balancer.
    • Node’s https. Can be made to do anything, but you’ll have to write it yourself (a rough sketch follows this list).
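    To illustrate that last option, here is a minimal sketch of stunnel-style termination written with Node’s tls and net modules. The certificate paths and ports are assumptions, and there is no error handling beyond tearing down the socket pair:

    var tls = require('tls'),
        net = require('net'),
        fs  = require('fs');

    var options = {
      key:  fs.readFileSync('/path/to/key.pem'),   // assumed paths
      cert: fs.readFileSync('/path/to/cert.pem')
    };

    // Accept HTTPS on 443, decrypt, and pipe the cleartext stream to
    // whatever listens on 8443 (e.g. HAProxy), much like stunnel does.
    tls.createServer(options, function (clear) {
      var backend = net.connect(8443, 'localhost');
      clear.pipe(backend);
      backend.pipe(clear);
      clear.on('error', function () { backend.destroy(); });
      backend.on('error', function () { clear.destroy(); });
    }).listen(443);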

    If you choose Stunnel or Stud, then you need a load balancer as well if you plan on having more than one Node instance in the backend.

    HAProxy is not generally compatible with Websockets, but Socket.IO contains code which works around this issue and allows you to use HAProxy. This means that the alternatives are:

    • Stunnel for SSL termination + HAProxy for routing/load balancing
    • Stud for SSL termination + HAProxy for routing/load balancing
    • Pound (SSL and routing/load balancing)

    I haven’t looked into Pound more – mainly because I could not find info on its TCP reverse proxying capabilities (see the section on Flash sockets below) – but it seems to work for these guys.

    Setting up Stunnel

    The Stunnel part is quite simple:

    cert = /path/to/certfile.pem
    ; Service-level configuration
    [https]
    accept  = 443
    connect = 8443

    If you only have one Node instance, you can skip setting up HAProxy, since you don’t need load balancing.

    Setting up HAProxy

    Would you like Flash Sockets with that?

    Note that we need TCP mode in order to support Flash sockets, which do not speak HTTP.

    Flash sockets are just plain and simple TCP sockets, which will start by sending the following payload: ‘<policy-file-request/>\0’. They expect to receive a Flash cross domain policy as a response.
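    The policy they expect back is a small XML document; a typical wide-open socket policy looks roughly like this (for illustration only – you would normally restrict the domains and ports):

    <?xml version="1.0"?>
    <cross-domain-policy>
      <allow-access-from domain="*" to-ports="*" />
    </cross-domain-policy>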

    Since Flash sockets don’t use HTTP, we need a load balancer which is capable of detecting the protocol of the request, and of forwarding non-HTTP requests to Socket.io.

    HAProxy can do that, as it has two different modes of operation:

    • HTTP mode – which allows you to specify the backend based on the URI
    • TCP mode – which can be used to load balance non-HTTP transports.

    Main frontend

    We accept connections on two ports: 80 (HTTP) and 8443 (Stunnel-terminated HTTPS connections).

    By default, everything goes to the backend app at port 3000. Some HTTP paths are selectively routed to Socket.io.

    TCP mode is needed so that Flash socket connections can be passed through, and all non HTTP connections are sent to the TCP mode socket.io backend.

    # Main frontend
    frontend app
      bind 0.0.0.0:80
      bind 0.0.0.0:8443
      # Mode is TCP
      mode tcp
      # allow for many connections, with long timeout
      maxconn 200000
      timeout client 86400000

      # default to webapp backend
      default_backend webapp

      # two URLs need to go to the node pubsub backend
      acl is_socket_io path_beg /node
      acl is_socket_io path_beg /socket.io
      use_backend socket_io if is_socket_io

      tcp-request inspect-delay 500ms
      tcp-request content accept if HTTP
      use_backend sio_tcp if !HTTP

    Port 843: Flash policy

    The Flash policy should be served on port 843.

    # Flash policy frontend
    frontend flashpolicy 0.0.0.0:843
      mode tcp
      default_backend sio_tcp

    Default backend

    This is just for your main application.

    backend webapp
      mode http
      option httplog
      option httpclose
      server nginx1s localhost:3000 check

    Socket.io backend

    Here, we have a bunch of settings in order to allow Websockets connections through HAProxy.

    backend socket_io
      mode http
      option httplog
      # long timeout
      timeout server 86400000
      # check frequently to allow restarting
      # the node backend
      timeout check 1s
      # add X-Forwarded-For
      option forwardfor
      # Do not use httpclose (= client and server
      # connections get closed), since it will close
      # Websockets connections
      no option httpclose
      # Use "option http-server-close" to preserve
      # client persistent connections while handling
      # every incoming request individually, dispatching
      # them one after another to servers, in HTTP close mode
      option http-server-close
      option forceclose
      # just one node server at :8000
      server node1 localhost:8000 maxconn 2000 check

    Socket.io backend in TCP mode

    This is the same server as above, but accessed in TCP mode.

    backend sio_tcp
      mode tcp
      server node2 localhost:8000 maxconn 2000 check

    Conclusion

    The configs above allow you to serve Websockets, Flash sockets and polling from a single port.

    However, I am dissatisfied with the complexity of this configuration. In particular, Flash sockets’ TCP requirements are rather painful, since they require protocol detection in order to work from a single port.

    The alternative is of course to run Socket.io on a different port than your main app. This would mean that you configure HAProxy to just do TCP mode load balancing at that port, with SSL termination in front of HAProxy.

    If you do that, you might want to configure a fallback from Nginx at port 80 to Socket.io for those clients who are behind draconian corporate firewalls which disallow ports other than 80 and 443. The fallback will only support long polling and I don’t think Socket.io itself supports automatically switching ports during transport negotiation, but you can detect a failure in Socket.io and re-initialize manually with a different port and polling-only transport.
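    A rough sketch of that manual fallback with the Socket.io 0.x client of the time might look like the following. The ports, the ‘connect_failed’ event and the option names are assumptions about that client API rather than something taken from this setup:

    // Try the dedicated Socket.io port first...
    var socket = io.connect('https://example.com:8443');

    // ...and if the connection never comes up (e.g. the port is blocked),
    // retry against port 443 with a polling-only transport.
    socket.on('connect_failed', function () {
      socket = io.connect('https://example.com:443', {
        'transports': ['xhr-polling'],
        'force new connection': true
      });
    });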

    Do you have a better way? How do you deploy Socket.io? Let me know in the comments below.
