What do typical production server requirements look like?

  • Gamers

    (my work machines are up all the time during the week)

    If NGINX is a problem, then ZeroMQ may be better. And it could be run on the same instance as the WebRTC signaling service.

    The enterprise version of CloudFlare would not be acceptable. Peer5 is already serving as a P2P CDN, and CloudFlare is meant to lessen the hits to the servers to reduce bandwidth usage. The use case is relatively low profit per visitor, with bursty peaks of usage.


  • @bitspook The problem with CloudFlare is that they don't support WebSockets on anything lower than their Enterprise package. So if you're still planning on having CloudFlare, it might be worth looking into something that will handle WebSockets. Maybe a subdomain?


  • @bitspook Ah, sorry. I probably understood you wrong. I thought you wanted to use the Elastic Load Balancers from Amazon, which provide poor WebSocket support. But if you're doing your own auto-scaling solution with NGINX, that will be perfectly fine. NGINX has no problems with WebSockets - just ELB.

  • Gamers

    @lenovouser

    There are auxiliary services unrelated to the operation of NodeBB; I left out the extraneous details for the sake of brevity.

    For load balancing it would be fine to use Amazon autoscaling, but if NGINX is non-ideal for the additional task of message passing...

    @baris said:

    The nodejs procs can grow up to 1.5gb in size as that is the default for v8. You can limit it with a flag.

    Interesting, that is quite large. My estimates were in the low hundreds of MB.

    Thanks for all the helpful replies all. 🙂


  • @bitspook To clarify: Amazon autoscaling with NGINX will not be a problem. Using Amazon ELB (their own load balancer) will be a problem - hope that's easier to understand.

    I am currently working with @yariplus on setting up a "customised" version of NodeBB to fit our needs. We expect upwards of 2-3k users after about half a year of being public. What we'll set up for our production environment once we go live will roughly look like this:

    • 2 database servers in Canada and France running clustered Redis and MongoDB
    • 2 app servers in Canada and France with an anycast IP address, running NGINX which proxies to a clustered NodeBB backend (this is already covered in the NodeBB core docs: scaling#utilise-clustering). Each server instance will be configured to access the database of the country it is in: the app server in Canada will access ca1.mongo.db.domain.tld / ca1.redis.db.domain.tld, while the server in France will access fr1.mongo.db.domain.tld / fr1.redis.db.domain.tld, for low latencies.
    • Our own private plugin, which amongst other things rewrites all static assets (CSS / JS) to our cdn.domain.tld subdomain, which is cached and proxied by CloudFlare. The same goes for images (img.domain.tld, or embedded images, which go through a camo proxy that is also proxied by CloudFlare).

    That is what I think will fit our needs for now. If we grow even more we'll probably change some stuff, but we'll see.
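
    A minimal sketch of what the NGINX-to-clustered-NodeBB proxying could look like (the ports, names and paths here are hypothetical examples, not our actual config):

```nginx
# Hypothetical upstream of NodeBB cluster workers; each worker
# listens on its own port (the clustering itself is set up in config.json).
upstream nodebb {
    ip_hash;                    # pin a client to one worker (helps socket.io polling)
    server 127.0.0.1:4567;
    server 127.0.0.1:4568;
}

server {
    listen 443 ssl http2;
    server_name domain.tld;

    location / {
        proxy_pass http://nodebb;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```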

  • Gamers

    @lenovouser

    Multiple availability zones for target audiences, replicated datastores for improved read times... good, good.

    I had not read anything about Camo until just now. Is this just for the additional perceived security of SSL without warnings, or does it provide other desirable functions as well?


  • @bitspook Nope, that is just for security. Well, it could technically improve speed as well, because everything comes from camo.domain.tld, which is cached by CloudFlare - but the intention was security. What we're thinking about, or actually already using in our development environment, is disabling WebSockets for domain.tld/community/ and letting them run over live.domain.tld/community for design and compatibility reasons.

    (We're using HTTP/2 on domain.tld/*, which makes some browsers break WebSockets, because WebSockets aren't specified for HTTP/2 yet. Normally they should just downgrade the request to HTTP/1.1 when using WebSockets, but some versions of Chrome, Opera and Firefox don't do that for some reason. At least that is what I experienced.)

    But what I could imagine in the future is using a different proxy for the WebSockets which, for example, handles WebSocket DDoS attacks much better than NGINX. (Just my hope - that proxy doesn't exist yet 😄)

  • Gamers

    "All other customers -- Business, Pro, and Free -- should create a subdomain for Websockets in their CloudFlare DNS and disable the CloudFlare proxy ("grey cloud" the record in CloudFlare DNS Settings)."

    How would this work with NodeBB? I don't think I understand how a subdomain will assist with WebSocket support... and how that would interface with NodeBB.


  • You create a subdomain like this:

    • A live.domain.tld 000.000.000.000 (Grey Cloud, which means you disable CF proxying)
    • AAAA live.domain.tld 0000:0000:0000:0000:0000:0000:0000:0000 (Grey Cloud, which means you disable CF proxying)

    And put this in your NodeBB configuration:

    "socket.io": {
        "transports": ["websocket", "polling"],
        "address": "live.domain.tld"
    }
    

    This way you can use WebSockets while still letting CF proxy your main community forum.
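
    For completeness, the NGINX side on the live.domain.tld host could look roughly like this - a sketch with example ports, not a tested config:

```nginx
# Hypothetical server block for the un-proxied ("grey cloud") WebSocket subdomain.
server {
    listen 443 ssl;
    server_name live.domain.tld;

    location / {
        proxy_pass http://127.0.0.1:4567;        # example NodeBB port
        proxy_http_version 1.1;                  # required for WebSockets
        proxy_set_header Upgrade $http_upgrade;  # pass the upgrade handshake through
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```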

  • Gamers

    You... clearly know more about servers than I. 😄


  • I don't know about that 😃 But it took me a while to figure this stuff out too... 😄


  • Another important thing I would consider is a replacement for Redis. I am currently working on a new backend and am therefore also looking for major improvements there.

    I replaced Redis with SSDB and actually have to say I am quite satisfied with it. It even offers an option to migrate from Redis to SSDB.

    Not only is it faster, it is also more efficient, as it stores data on your HDD and uses your RAM only as a cache.

    But basically, to get back to your main question: I would suggest a decentralised database system right from the start (though I guess that should be clear to you anyway).

    An example taken from my new system:
    Database server (MariaDB + SSDB):

    • 4GB RAM (SSDB is very efficient; before, I needed 8GB)
    • NVMe SSD (CephFS)
    • Intel Xeon E5-2687W v3 (2vCores)

    Website itself:

    • NGINX 1.9.6 with HTTP/2 and PageSpeed
    • HHVM 3.10.1 (contains Redis PHP extension, which is required to migrate to SSDB)
    • NodeJS 0.10.38 (will be replaced)
    • RAID10 SATA SSD
    • Intel Xeon E5-4650 (2vCores - will be replaced)
    • 2GB RAM

    While you cannot tweak NodeJS much, you can tweak NGINX quite a bit - for example by setting worker processes and maximum connections, GZIP, caching, etc.
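
    A sketch of the kind of NGINX tuning meant here (example values only; the right numbers depend on your cores and traffic):

```nginx
worker_processes auto;          # one worker per CPU core

events {
    worker_connections 4096;    # max simultaneous connections per worker
}

http {
    gzip on;
    gzip_types text/css application/javascript application/json;

    server {
        listen 80;

        # Cache static assets aggressively in the browser.
        location ~* \.(css|js|png|jpe?g)$ {
            expires 30d;
        }
    }
}
```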

  • GNU/Linux Admin

    Is SSDB a drop-in replacement for Redis?

  • Global Moderator Plugin & Theme Dev

    @julian it says on the GitHub page that redis clients are supported, so yes, I guess.

  • GNU/Linux Admin

    Neat, now NodeBB supports SSDB 😆
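
    Since SSDB speaks the Redis protocol, pointing NodeBB's existing Redis configuration at an SSDB instance should in theory be all it takes. A sketch of the relevant config.json fragment (8888 is SSDB's default port; this is an untested assumption, not something from the NodeBB docs):

```json
{
    "database": "redis",
    "redis": {
        "host": "127.0.0.1",
        "port": 8888,
        "database": 0
    }
}
```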

  • Community Rep

    @Kowlin said:

    @lenovouser CloudFlare can do WebSockets with one of their enterprise plans, if I remember correctly. Searching the forums with just the keyword CloudFlare should give enough results.

    Yes, you need the enterprise plan.

  • Community Rep

    10K concurrent users is absolutely enormous. That would put you pretty high on the list of the biggest websites in the world. I've worked for someone who was doing 30K concurrent, and they are one of the biggest websites anywhere. 10K concurrent likely means millions of views per hour. Are you sure that concurrent is what you mean? What will be driving that kind of traffic? It would suggest that you will hit hundreds of thousands, maybe millions, of users every hour - the total populations of your target countries each day. Those are localised, Google-scale numbers.

  • Community Rep

    You will surely want dedicated Redis and MongoDB clusters and many front end app nodes. Look at something like Chef to handle auto-scaling for you.

  • Admin

    10K concurrent means millions of views per hour, likely. Are you sure that concurrent is what you mean? What will be driving that kind of traffic? That would suggest that you will hit hundreds of thousands, maybe millions of users every hour.

    That's a lot. The NodeBB team hasn't seen that kind of traffic since our previous lives working in the videogame industry... FB games that had real-time interactions on live national television and such.

    For our forum software, as @julian mentioned earlier we haven't had a chance to manage a forum with that much load yet. We'd love the opportunity to do so - I think we have a ton of experience here... if you'd like to offload the server management to us give us a shout 🙂


  • Let's say you serve 10,000 clients a second. This means:
    10,000 × 60 = 600,000 per minute
    600,000 × 60 = 36,000,000 per hour
    36,000,000 × 24 = 864,000,000 per day
    864,000,000 × 30 = 25,920,000,000 per month

    To put that in relation: this means you would serve roughly 3.7 times the world population each month.

    If you want, I can calculate a system for this size. Could be fun, I guess 😄
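
    The arithmetic above is easy to double-check with a few lines of Python:

```python
# Back-of-the-envelope traffic maths for 10,000 requests per second.
per_second = 10_000
per_minute = per_second * 60        # 600,000
per_hour = per_minute * 60          # 36,000,000
per_day = per_hour * 24             # 864,000,000
per_month = per_day * 30            # 25,920,000,000

world_population = 7_000_000_000    # rough figure
print(per_month / world_population) # about 3.7 "world populations" per month
```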

