What do typical production server requirements look like?
-
thanks @julian, so then:
1 Mongo datastore server
autoscaling Load balancers
autoscaling redis servers (asynchronously replicated)
autoscaling dual core app servers
What is the exact size of each NodeBB worker process? (to know exact memory requirements for app servers)
What about spot fleet for the autoscaling? Can NodeBB save state within the 2 minute warning that AWS gives before reclaiming the servers?
-
@bitspook As far as I remember you can't really use the Amazon ELB, or actually any load balancer because they don't proxy WebSockets which is kinda essential for NodeBB. There is a similar problem with e.g. CloudFlare.
-
@bitspook I am sorry. I just re-checked and it seems possible if you use the TCP option in ELB.
-
@bitspook said:
What is the exact size of each NodeBB worker process?
The Node.js processes can grow up to 1.5 GB in size, as that is the default for V8. You can limit it with a flag. https://github.com/nodejs/node/wiki/Frequently-Asked-Questions
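For example, assuming each worker is launched directly with node ( app.js here is just a placeholder for whatever script starts the worker ), something like this would cap the heap at 512 MB:
node --max-old-space-size=512 app.js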
-
@lenovouser CloudFlare can do WebSockets with one of their enterprise plans if I remember correctly. Searching the forums with just the keyword CloudFlare should give enough results.
-
@Kowlin Yep, but it is really expensive. Back to the ELB thing: I researched some more, and it definitely is possible using the TCP option. But there will be no proxy headers, which means no real IPs, and I also don't know whether stuff like the x-csrf-token works, as it seems to be sent over headers in NodeBB. There is the alternative PROXY protocol, which also seems to work with WebSockets and also passes along the original client IPs, but I don't know how to make it work with NodeBB.
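On the NGINX side I would guess it looks roughly like this, assuming NGINX sits between the ELB TCP listener and NodeBB ( addresses and the port are placeholders ):
server {
    # accept the PROXY protocol header the load balancer prepends to each connection
    listen 80 proxy_protocol;
    # recover the real client IP from that header
    real_ip_header proxy_protocol;
    set_real_ip_from 10.0.0.0/8;
    location / {
        proxy_pass http://127.0.0.1:4567;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
-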
Well, depending on OP's needs he can always look into it if it suits him... like 9K a month xD
-
Yeah, true. We'll see what he thinks when he comes back. But he seems to be online 24/7. He was online yesterday all day, and now that I woke up ( it is 8:49 here at the moment ) he is still online.
-
(my work machines are up all the time during the week)
If NGINX is a problem, then ZeroMQ may be better. And it could be run on the same instance as the WebRTC signaling service.
The enterprise version of CloudFlare would not be acceptable. Peer5 is already serving as a P2P CDN, and CloudFlare is meant to lessen the hits to the servers to reduce bandwidth usage. The use case is relatively low profit per visitor, with bursty peaks of usage.
-
@bitspook The problem with CloudFlare is that they don't support WebSockets on anything lower than their enterprise package. So if you're still planning on having CloudFlare it might be worth looking into something that will handle WebSockets. Maybe a subdomain?
-
@bitspook Ah, sorry. I probably understood you wrong. I thought you wanted to use the Elastic Load Balancers from Amazon, which provide poor WebSocket support. But if you're doing your own auto-scaling solution with NGINX, that will be perfectly fine. NGINX has no problems with WebSockets - just ELB.
-
There are auxiliary services unrelated to the operation of NodeBB; I left out the extraneous details for the sake of brevity.
For load balancing it would be fine to use Amazon autoscaling, but if NGINX is not ideal for the additional situation of message passing...
@baris said:
The Node.js processes can grow up to 1.5 GB in size, as that is the default for V8. You can limit it with a flag.
Interesting, that is quite large. My estimates were in the low hundreds of MBs.
Thanks for all the helpful replies, everyone.
-
@bitspook To clarify: Amazon autoscaling with NGINX will not be a problem. Using Amazon ELB ( their own load balancer service ) will be a problem - hope that's easier to understand.
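For reference, the bit of an NGINX site config that lets the WebSocket upgrade pass through to NodeBB looks roughly like this ( 4567 is NodeBB's default port; adjust to your setup ):
location / {
    proxy_pass http://127.0.0.1:4567;
    # WebSockets need HTTP/1.1 and the Upgrade/Connection headers forwarded
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # pass the real client address and host along
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
}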
I am currently working with @yariplus on setting up a "customised" version of NodeBB to fit our needs. We expect >2-3k users after about half a year of being public. What we'll set up for our production environment once we go live will roughly look like this:
- 2 database servers in Canada and France running clustered Redis and MongoDB
- 2 app servers in Canada and France with an anycast IP address, running NGINX which proxies to a clustered NodeBB on the backend ( this is already covered in the NodeBB core docs: scaling#utilise-clustering; see the config sketch below for the clustering part ). Each app server is configured to access the databases of the country it is in, for low latencies: the app server in Canada will access ca1.mongo.db.domain.tld / ca1.redis.db.domain.tld, while the server in France will access fr1.mongo.db.domain.tld / fr1.redis.db.domain.tld.
- Our own private plugin which, amongst other things, rewrites all static assets ( CSS / JS ) to our cdn.domain.tld subdomain, which is cached and proxied by CloudFlare. The same goes for images ( img.domain.tld, or embedded images which go through a camo proxy that is also proxied by CloudFlare ).
That is what I think will fit our needs for now. If we grow even more we'll probably change some stuff, but we'll see.
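For the clustering part mentioned above, the relevant bit is the port entry in NodeBB's config.json ( the url and ports below are only placeholders matching the example domains; NGINX's upstream block then balances across the ports ):
{
    "url": "https://domain.tld/community",
    "port": [4567, 4568, 4569]
}
Each port becomes its own NodeBB worker process, and NGINX load-balances across all of them.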
-
Multiple availability zones for target audiences, replicated datastores for improved read times... good, good.
I had not read anything about Camo until just now. Is this just for the additional perceived security of SSL without warnings, or does it provide other desirable functions as well?
-
@bitspook Nope, that is just for security. Well, it technically could improve speed because everything is coming from camo.domain.tld, which is cached by CloudFlare, but the intention was security. What we're thinking about, or actually already using in our development environment, is disabling WebSockets for domain.tld/community/ and letting them run over live.domain.tld/community for design and compatibility reasons. ( We're using HTTP/2 on domain.tld/*, which makes some browsers break WebSockets because they aren't specified in HTTP/2 yet - normally they should just downgrade the request to HTTP/1.1 when using WebSockets, but some versions of Chrome, Opera and Firefox don't do that for some reason. At least that is what I experienced. ) What I could imagine in the future is using a different proxy for the WebSockets which e.g. handles WebSocket DDoS attacks way better than NGINX. ( Just my hope - that proxy doesn't exist yet. )
-
"All other customers -- Business, Pro, and Free -- should create a subdomain for Websockets in their CloudFlare DNS and disable the CloudFlare proxy ("grey cloud" the record in CloudFlare DNS Settings)."
How would this work with NodeBB? I don't think I am understanding how a subdomain will assist with WebSocket support... and how that would interface with NodeBB.
-
You create a subdomain like this:
A live.domain.tld 000.000.000.000 (Grey Cloud, which means you disable CF proxying)
AAAA live.domain.tld 0000:0000:0000:0000:0000:0000:0000:0000 (Grey Cloud, which means you disable CF proxying)
And put this in your NodeBB configuration:
"socket.io": { "transports": ["websocket", "polling"], "address": "live.domain.tld" }
This way you can use WebSockets while still letting CF proxy your main community forum.
-
Don't know. But it took me a while to figure this stuff out too...
-
Another important thing I would consider is a replacement for Redis. I am currently working on a new backend and therefore am also looking for major improvements there.
I replaced Redis with SSDB and actually have to say I am quite satisfied with it. It even offers an option to migrate data from Redis to SSDB.
Not only is it faster, it is also more efficient, as it stores data on your HDD and uses your RAM as a cache only.
But basically, to get back to your main question: I would suggest a decentralized database system straight away (but I guess that should be clear to you anyway).
An example taken from my new system:
Database server (MariaDB + SSDB):
- 4GB RAM (SSDB is very efficient! Before, I needed 8GB)
- NVMe SSD (CephFS)
- Intel Xeon E5-2687W v3 (2vCores)
Website itself:
- NGINX 1.9.6 with HTTP/2 and PageSpeed
- HHVM 3.10.1 (contains Redis PHP extension, which is required to migrate to SSDB)
- NodeJS 0.10.38 (will be replaced)
- RAID10 SATA SSD
- Intel Xeon E5-4650 (2vCores - will be replaced)
- 2GB RAM
While you cannot tweak too much in Node.js, you can in NGINX - for example by setting worker processes and maximum connections, GZIP, caching, etc.
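A rough sketch of the kind of NGINX settings I mean ( the values are just starting points, not recommendations ):
worker_processes auto;            # one worker per CPU core
events {
    worker_connections 4096;      # max simultaneous connections per worker
}
http {
    gzip on;                      # compress text responses
    gzip_types text/css application/javascript application/json;
}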