What do typical production server requirements look like?
-
@bitspook Nope, that is just for security. Well, it could technically improve speed too, because everything comes from camo.domain.tld, which is cached by CloudFlare, but the intention was security.
What we're thinking about (and actually using in our development environment at the moment) is disabling WebSockets for domain.tld/community/ and letting them run over live.domain.tld/community for design and compatibility reasons. (We're using HTTP/2 on domain.tld/*, which makes some browsers break WebSockets because they aren't specified for HTTP/2 yet. Normally they should just downgrade the request to HTTP/1.1 when using WebSockets, but some versions of Chrome, Opera and Firefox don't do that for some reason - at least that is what I experienced.) What I could imagine in the future is using a different proxy for the WebSockets, one that e.g. handles WebSocket DDoS attacks far better than NGINX does. (Just my hope - that proxy doesn't exist yet.)
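In case it helps to picture the split, here's a minimal sketch of how such a setup could look in NGINX, assuming NodeBB listens on 127.0.0.1:4567 and using the hostnames from above - the IPs, paths and port are placeholders, so adjust them to your own environment:

```nginx
# vhost for the forum itself, served over HTTP/2
# (ssl_certificate / ssl_certificate_key lines omitted for brevity)
server {
    listen 192.0.2.10:443 ssl http2;       # placeholder IP
    server_name domain.tld;

    location /community/ {
        proxy_pass http://127.0.0.1:4567;  # NodeBB's default port, adjust as needed
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

# separate vhost that only carries the WebSocket traffic over HTTP/1.1;
# the http2 flag applies per listen socket, hence the second IP (or use another port)
server {
    listen 192.0.2.11:443 ssl;             # placeholder IP, no http2 here
    server_name live.domain.tld;

    location / {
        proxy_pass http://127.0.0.1:4567;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```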
-
"All other customers -- Business, Pro, and Free -- should create a subdomain for Websockets in their CloudFlare DNS and disable the CloudFlare proxy ("grey cloud" the record in CloudFlare DNS Settings)."
How would this work with NodeBB? I don't think I understand how a subdomain will assist with WebSocket support, or how that would interface with NodeBB.
-
You create a subdomain like this:
A    live.domain.tld    000.000.000.000    (grey cloud, which means you disable CF proxying)
AAAA live.domain.tld    0000:0000:0000:0000:0000:0000:0000:0000    (grey cloud, which means you disable CF proxying)
And put this in your NodeBB configuration:
"socket.io": { "transports": ["websocket", "polling"], "address": "live.domain.tld" }
This way you can use WebSockets while still letting CF proxy your main community forum.
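In context, config.json would then look roughly like this - just a sketch with placeholder values for url, secret and port, and with the database settings omitted; the point is that url stays on the CF-proxied domain while socket.io sends the clients to the grey-clouded subdomain:

```json
{
    "url": "https://domain.tld/community",
    "secret": "replace-me",
    "port": 4567,
    "socket.io": {
        "transports": ["websocket", "polling"],
        "address": "live.domain.tld"
    }
}
```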
-
Don't know, but it took me a while to figure this stuff out too...
-
Another important thing I would consider is a replacement for Redis. I am currently working on a new backend and am therefore also looking for major improvements on my side.
I replaced Redis with SSDB and have to say I am quite satisfied with it. It even offers an option to migrate your data from Redis to SSDB.
Not only is it faster, it is also more efficient, as it keeps the data on your HDD and uses your RAM only as a cache.
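For NodeBB itself I can't say for sure, but since SSDB speaks the Redis wire protocol (that's the "Redis clients are supported" note on its GitHub page), switching should in theory just mean pointing the existing redis block in config.json at the SSDB server instead of Redis - a rough sketch, assuming SSDB's default port 8888 and placeholder values:

```json
{
    "database": "redis",
    "redis": {
        "host": "127.0.0.1",
        "port": "8888",
        "password": "",
        "database": "0"
    }
}
```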
But to get back to your main question: I would suggest a decentralized database setup right from the start (but I guess that should be clear to you anyway).
An example taken from my new system:
Database server (MariaDB + SSDB):
- 4GB RAM (SSDB is very efficient!!! Before, I needed 8GB)
- NVMe SSD (CephFS)
- Intel Xeon E5-2687W v3 (2vCores)
Website itself:
- NGINX 1.9.6 with HTTP/2 and PageSpeed
- HHVM 3.10.1 (contains Redis PHP extension, which is required to migrate to SSDB)
- NodeJS 0.10.38 (will be replaced)
- RAID10 SATA SSD
- Intel Xeon E5-4650 (2vCores - will be replaced)
- 2GB RAM
While you cannot tweak NodeJS too much, you can tweak NGINX quite a bit - for example by setting worker processes and maximum connections, GZIP, caching, etc.
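A minimal sketch of the kind of tuning I mean - the numbers depend entirely on your hardware and traffic, so treat them as placeholders:

```nginx
# nginx.conf - illustrative values only
worker_processes auto;            # one worker per CPU core

events {
    worker_connections 4096;      # max simultaneous connections per worker
}

http {
    # compress text responses before they leave the box
    gzip on;
    gzip_types text/css application/javascript application/json image/svg+xml;

    # micro-cache for responses coming back from NodeBB;
    # a location block would then reference it with "proxy_cache nodebb;"
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=nodebb:10m max_size=1g inactive=60m;
}
```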
-
@julian it says on the GitHub page that redis clients are supported, so yes, I guess.
-
@Kowlin said:
@lenovouser CloudFlare can do WebSockets with one of their enterprise plans, if I remember correctly. Searching the forums with just the keyword CloudFlare should give enough results.
Yes, you need the enterprise plan.
-
10K concurrent users is absolutely enormous. That would put you pretty high on the list of biggest websites in the world. I've worked for someone who was doing 30K concurrent and they are by far one of the biggest websites anywhere. 10K concurrent means millions of views per hour, likely. Are you sure that concurrent is what you mean? What will be driving that kind of traffic? That would suggest that you will hit hundreds of thousands, maybe millions of users every hour. That would be the total populations of your target countries each day. Those are localized Google scale numbers.
-
You will surely want dedicated Redis and MongoDB clusters and many front end app nodes. Look at something like Chef to handle auto-scaling for you.
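For the "many front end app nodes" part, a rough sketch of what the load-balancing layer in front of several NodeBB instances could look like in NGINX - hostnames and ports are placeholders, the Redis/MongoDB clustering is a separate exercise, and Chef would be the tool provisioning each of these nodes:

```nginx
# spread traffic across several NodeBB app nodes
upstream nodebb_cluster {
    ip_hash;                          # keep each client on the same node (helps socket.io)
    server app1.internal:4567;        # placeholder hostnames
    server app2.internal:4567;
    server app3.internal:4567;
}

server {
    listen 443 ssl;                   # certificate directives omitted for brevity
    server_name forum.example.com;    # placeholder

    location / {
        proxy_pass http://nodebb_cluster;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```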
-
10K concurrent means millions of views per hour, likely. Are you sure that concurrent is what you mean? What will be driving that kind of traffic? That would suggest that you will hit hundreds of thousands, maybe millions of users every hour.
That's a lot. The NodeBB team hasn't seen that kind of traffic since our previous lives working in the videogame industry... FB games that had real-time interactions on live national television and such.
For our forum software, as @julian mentioned earlier, we haven't had a chance to manage a forum with that much load yet. We'd love the opportunity to do so - I think we have a ton of experience here... if you'd like to offload the server management to us, give us a shout.
-
Let's say you serve 10,000 clients a second. This means:
10,000 × 60 = 600,000 per minute
600,000 × 60 = 36,000,000 per hour
36,000,000 × 24 = 864,000,000 per day
864,000,000 × 30 = 25,920,000,000 per month
So just to put that in relation: that means you would serve 3.7 times the world population every month.
If you want, I can spec out a system for this size. Could be fun, I guess.
-
@AOKP that number is not quite that unreasonable, because 10K would be his peak, not his sustained load, and real-world websites do much larger numbers than that, because single users stay on for longer than a second and, we assume, return. But the number is still enormous.
-
@scottalanmiller yeah, I am aware of that.
But like you, I was just curious whether he doesn't actually mean 10,000 visitors a day, or whether this is simply a general question about how NodeBB deals with that kind of load.
-
10K a day is easy. We do that constantly. 10K concurrent is easily more than 10K every second, though!!
-
The question is about spikes in traffic. I need generalized data for scaling projections, cost estimates and hardware decisions. A correctly sized server for each process will act as a base for scaling to the required demand. It is not uncommon to reach tens of thousands of uniques when Facebook is involved.
If 10K concurrent were constant, I'd hire someone from outside the team.
-
Concurrent users will be more of a traffic issue than anything on their own. If that is 10K concurrent readers, that's one thing; if you are looking at 10K concurrent posters, then MongoDB is going to struggle on any platform.
-
Do you have an existing site to compare this data against? How did you come up with 10K concurrent? To give perspective, http://mangolassi.it/ is an extremely busy forum - we are told we are the busiest in our industry (IT) - and concurrent users rarely top 80. Not 80K... 80. That's with thousands of daily users, nearly a thousand daily posts, etc. You are looking at more than 100 times our peak, let alone our load. Doable, but huge.