What do typical production server requirements look like?


  • Gamers

    For a production server that needs to handle 10,000 concurrent users, what size Amazon servers and other services (like a separate database) are needed to reliably serve them? A c3.xlarge (4 cores, 7.5 GB) plus a database per 10k?

    The primary content is images and embedded links to other content providers like image hosts and YouTube. Cloudflare will be utilized; is an nginx load balancer still necessary?

    At this level, does Redis need its own server? It certainly seems like it would in production, with the opposite being true in development. How big does Redis get in production? Not a lot is said about MongoDB in the forums, as far as I can find. How do you persist data from Redis to MongoDB programmatically?

    What additional third-party software would help optimize the server to handle a large load? And would the workers be # of CPUs + 1 on the application server? Are NodeBB workers stateless, i.e. can I spin up spot micro-instances when demand is highest for a hefty spot discount? What size are individual NodeBB workers? (to better estimate application server memory needs)

    The internet is awash in conflicting data on server best practices. I need specific information on server needs with NodeBB as the core component. How does one go from dev to prod with NodeBB?

    Thanks for making a great realtime forum software! 🙂


  • GNU/Linux Admin

    @scottalanmiller could lend some advice about his deployment, although as far as I know, nobody has really hit the 10k concurrent users mark 😄

    Would recommend Mongo and Redis on a separate server, and to start a larger number of small application servers (2 cores each) just for running NodeBB.

    We don't persist data back and forth between Redis and Mongo; we just store different data in each. Disposable/temporary data goes in Redis if present (sessions and the like), otherwise the "meat" is all in Mongo.


  • Gamers

    thanks @julian, so then:

    1 Mongo datastore server
    autoscaling Load balancers
    autoscaling redis servers (asynchronously replicated)
    autoscaling dual core app servers

    What is the exact size of each NodeBB worker process? (to know exact memory requirements for app servers)

    What about spot fleet for the autoscaling? Can NodeBB save state within the 2 minute warning that AWS gives before reclaiming the servers?
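    NodeBB doesn't ship anything for this, but a worker host could watch for the reclaim notice itself. A minimal sketch, assuming the standard EC2 spot metadata endpoint and NodeBB's stock `./nodebb stop` command; the endpoint returns 404 until a termination is actually scheduled:

```shell
#!/bin/sh
# should_drain: true when the spot metadata endpoint reports a
# scheduled termination (HTTP 200); it returns 404 until then.
should_drain() {
  [ "$1" = "200" ]
}

# -m 1 keeps curl from hanging when run off-EC2; on any failure the
# captured code is not "200", so we simply don't drain.
code=$(curl -s -m 1 -o /dev/null -w '%{http_code}' \
  http://169.254.169.254/latest/meta-data/spot/termination-time || echo 000)
if should_drain "$code"; then
  ./nodebb stop   # ~2 minutes left: shut down gracefully
fi
```

    In practice you would run this in a loop (e.g. every 5 seconds) via cron or a supervisor on each spot instance.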



  • @bitspook As far as I remember you can't really use the Amazon ELB, or actually any load balancer because they don't proxy WebSockets which is kinda essential for NodeBB. There is a similar problem with e.g. CloudFlare.



  • @bitspook I am sorry. I just re-checked and it seems possible if you use the TCP option in ELB.
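    For the record, a classic ELB with plain TCP listeners can be created from the AWS CLI roughly like this (a sketch only; the load balancer name, availability zone, and NodeBB's default port 4567 are placeholders):

```shell
# Classic ELB forwarding raw TCP, so WebSocket upgrades pass through
# untouched (no HTTP parsing at the balancer). Names are illustrative.
aws elb create-load-balancer \
  --load-balancer-name nodebb-lb \
  --listeners "Protocol=TCP,LoadBalancerPort=80,InstanceProtocol=TCP,InstancePort=4567" \
  --availability-zones us-east-1a
```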



  • @bitspook said:

    What is the exact size of each NodeBB worker process?

    The Node.js processes can grow up to 1.5 GB in size, as that is the default heap limit for V8. You can limit it with a flag. https://github.com/nodejs/node/wiki/Frequently-Asked-Questions
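    For example, to cap each worker's old-space heap at 512 MB (the flag is standard Node.js; `app.js` here stands in for whatever entry file starts your worker):

```shell
# Cap V8's old-generation heap at 512 MB per worker (the default on
# 64-bit builds is roughly 1.5 GB), e.g. when starting a worker:
#   node --max-old-space-size=512 app.js
# Check the effective heap limit (in MB) from inside Node:
node --max-old-space-size=512 -p \
  "Math.round(require('v8').getHeapStatistics().heap_size_limit / 1048576)"
```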



  • @lenovouser CloudFlare can do WebSockets on one of their enterprise plans, if I remember correctly. Searching the forums for the keyword CloudFlare should give enough results.



  • @Kowlin Yep. But it is really expensive 😞 Back to the ELB thing: I researched some more, and it definitely is possible using the TCP option. But there will be no HTTP headers, which means no real client IPs, and I also don't know whether things like the x-csrf-token work, as they seem to be sent over headers in NodeBB. There is the alternative PROXY protocol, which also seems to work with WebSockets and passes the real client address along, but I don't know how to make it work with NodeBB.
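    For what it's worth, NGINX (1.5.12+) can terminate the PROXY protocol in front of NodeBB and recover the client address via its realip module; you would pair this with ELB's proxy-protocol policy on the TCP listener. A sketch, with placeholder addresses and ports:

```nginx
# Accept PROXY protocol from the ELB and restore the real client IP
# (ngx_http_realip_module). Addresses and ports are illustrative.
server {
    listen 80 proxy_protocol;

    # Trust the load balancer's address range:
    set_real_ip_from 10.0.0.0/8;
    real_ip_header proxy_protocol;

    location / {
        proxy_pass http://127.0.0.1:4567;
        proxy_http_version 1.1;
        # Forward WebSocket upgrade headers to NodeBB:
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $proxy_protocol_addr;
    }
}
```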



  • Well, depending on OP's needs he can always look into it if it suits him... like 9K a month xD



  • Yeah, true. 😃 We'll see what he thinks when he comes back. But he seems to be online 24/7. He was online yesterday all day and now that I woke up ( It is 8:49 here at the moment ) he is still online 😄


  • Gamers

    (my work machines are up all the time during the week)

    If nginx is a problem, then ZeroMQ may be better. And it could be run on the same instance as the WebRTC signaling service.

    The enterprise version of CloudFlare would not be acceptable. Peer5 is already serving as a P2P CDN, and CloudFlare is meant to lessen the hits to the servers to reduce bandwidth usage. The use case is relatively low profit per visitor, with bursty peaks of usage.



  • @bitspook The problem with CloudFlare is that they don't support WebSockets on anything lower than their enterprise package. So if you're still planning on having CloudFlare, it might be worth looking into something that will handle WebSockets. Maybe a subdomain?



  • @bitspook Ah, sorry. I probably understood you wrong. I thought you wanted to use the Elastic Load Balancers from Amazon, which provide poor WebSocket support. But if you're doing your own auto-scaling solution with NGINX, that will be perfectly fine. NGINX has no problems with WebSockets - just ELB.


  • Gamers

    @lenovouser

    There are auxiliary services unrelated to the operation of NodeBB; I left out the extraneous details for the sake of brevity.

    For load balancing it would be fine to use Amazon autoscaling, but if nginx is non-ideal for the additional situation of message passing...

    @baris said:

    The nodejs procs can grow up to 1.5gb in size as that is the default for v8. You can limit it with a flag.

    Interesting, that is quite large. My estimates were in the low hundreds of MB.

    Thanks for all the helpful replies all. 🙂



  • @bitspook To clarify. Amazon autoscaling with NGINX will not be a problem. Using Amazon ELB (Their own load balancer software) will be a problem - hope that's easier to understand.

    I am currently working with @yariplus on setting up a "customised" version of NodeBB to fit our needs. We expect >2-3k users after about 1/2 year of being public. What we'll set up for our production environment once we go live will roughly be like this:

    • 2 database servers in Canada and France running clustered Redis and MongoDB
    • 2 app servers in Canada and France with an anycast IP address, running NGINX proxying to a clustered NodeBB on the backend (this is already covered in the NodeBB core docs: scaling#utilise-clustering). Each server instance will be configured to access the database of the country the server is in: the app server in Canada will access ca1.mongo.db.domain.tld / ca1.redis.db.domain.tld, while the server in France will access fr1.mongo.db.domain.tld / fr1.redis.db.domain.tld, for low latencies.
    • Our own private plugin which amongst other things rewrites all static stuff ( CSS / JS ) to our cdn.domain.tld subdomain which is being cached and proxied by CloudFlare. Same goes for images ( img.domain.tld or embedded images which will go through a camo proxy which is also being proxied by CloudFlare )

    That is what I think will fit our needs for now. If we grow even more we'll probably change some stuff, but we'll see.
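    For reference, NodeBB's clustering is driven from its config.json: giving `port` an array of ports spawns one worker per port behind your proxy. A sketch with illustrative values, reusing the hostnames from the setup above:

```json
{
  "url": "https://domain.tld",
  "port": ["4567", "4568", "4569"],
  "database": "mongo",
  "mongo": {
    "host": "ca1.mongo.db.domain.tld",
    "port": 27017,
    "database": "nodebb"
  },
  "redis": {
    "host": "ca1.redis.db.domain.tld",
    "port": 6379,
    "database": 0
  }
}
```

    NGINX would then balance across 127.0.0.1:4567-4569 in an `upstream` block on each app server.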


  • Gamers

    @lenovouser

    Multiple availability zones for target audiences, replicated datastores for improved read times... good, good.

    I had not read anything about Camo until just now. Is this just for the added perceived security of SSL without warnings, or does it provide other desirable functions as well?



  • @bitspook Nope, that is just for security. Well, it technically could improve speed, because everything is coming from camo.domain.tld, which is cached by CloudFlare, but the intention was security. What we're thinking about, and actually using at the moment in our development environment, is disabling WebSockets for domain.tld/community/ and letting them run over live.domain.tld/community for design and compatibility reasons.

    ( We're using HTTP/2 on domain.tld/*, which makes some browsers break WebSockets because they aren't specified in HTTP/2 yet - normally they should just downgrade the request to HTTP/1.1 when using WebSockets - but some versions of Chrome, Opera and Firefox don't do that for some reason. At least that is what I experienced )

    but what I could imagine in the future is using a different Proxy for the WebSockets which e.g. handles WebSocket DDOS attacks way better than NGINX. ( Just my hope, that proxy doesn't exist yet 😄 )


  • Gamers

    "All other customers -- Business, Pro, and Free -- should create a subdomain for Websockets in their CloudFlare DNS and disable the CloudFlare proxy ("grey cloud" the record in CloudFlare DNS Settings)."

    How would this work with NodeBB? I don't think I understand how a subdomain will assist with WebSocket support, or how it would interface with NodeBB.



  • You create a subdomain like this:

    • A live.domain.tld 000.000.000.000 (Grey Cloud, which means you disable CF proxying)
    • AAAA live.domain.tld 0000:0000:0000:0000:0000:0000:0000:0000 (Grey Cloud, which means you disable CF proxying)

    And put this in your NodeBB configuration:

    "socket.io": {
        "transports": ["websocket", "polling"],
        "address": "live.domain.tld"
    }
    

    This way you can use WebSockets while still letting CF proxy your main community forum.


  • Gamers

    You... clearly know more about servers than I. 😄

