Server slowdown imminent!
-
Hello all,
As of roughly 5 minutes ago, I'm throwing tons of traffic at this server to see how performant NodeBB is.
I'm going to keep throwing traffic at it until the server falls over, so for the next little while (an hour or so, perhaps), your experience on this site will be diminished (long loads, timeouts, etc).
Wish me luck!
-
@julian said in Server slowdown imminent!:
By the end of all this, I'll likely switch some things around to see how much more traffic we can push to a NodeBB under various configurations. Hoping to publish a blog post about it soon.
I'd love to see some of these metrics! Maybe we can get a better idea of hardware requirements.
If you have an automated way of doing this, I'd love to help out with my host to give an idea of different hardware scaling.
-
I did end up confirming what our practical experience taught us, that a 2-core VPS can handle roughly 200 active connections at the same time before falling over. However, at that point you're looking at 10s+ response times, so that's definitely not ideal.
This forum is hosted on a 2-core VPS that handles everything: the proxying (via nginx), the database, and the application server. Splitting these tasks out to separate machines does increase raw throughput by a significant margin.
That said, if I moved the db out to a separate droplet, the results wouldn't be usefully comparable with those above, as we'd effectively be doubling the CPU count from 2 to 4.
For the testing itself, I used loader.io, which provides quite a nice interface for testing (albeit throttled, as I was using the free plan).
-
@julian said in Server slowdown imminent!:
I did end up confirming what our practical experience taught us, that a 2-core VPS can handle roughly 200 active connections at the same time before falling over. However, at that point you're looking at 10s+ response times, so that's definitely not ideal.
This forum is hosted on a 2-core VPS that handles everything: the proxying (via nginx), the database, and the application server. Splitting these tasks out to separate machines does increase raw throughput by a significant margin.
That said, if I moved the db out to a separate droplet, the results wouldn't be usefully comparable with those above, as we'd effectively be doubling the CPU count from 2 to 4.
For the testing itself, I used loader.io, which provides quite a nice interface for testing (albeit throttled, as I was using the free plan).
redis or mongo?
-
Yes, that's correct; this forum is on Mongo.
Right now, for our big hosted clients (10M page views per month), we have a setup with a single load balancer, several small app servers, and a db server.
We're (or well, Julian is) experimenting with a setup with a floating IP pointing to multiple load balancers... looking forward to his blog post.
EDIT: and like Julian said, this community is basically hosted on a single potato server... doing okay with about half a million pageviews per month.
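For readers picturing the split-out setup, here's a hedged sketch of what one load-balancer tier might look like as an nginx config. The upstream addresses and hostname are hypothetical; this isn't the exact config used for hosted clients, just the general shape of nginx fronting several app servers.

```nginx
# Hypothetical sketch: one load balancer fronting two NodeBB app servers.
upstream nodebb {
    ip_hash;  # pin each client to one app server, helpful for socket.io
    server 10.0.0.11:4567;  # placeholder app server addresses
    server 10.0.0.12:4567;
}

server {
    listen 80;
    server_name community.example.org;  # placeholder hostname

    location / {
        proxy_pass http://nodebb;
        proxy_http_version 1.1;
        # WebSocket upgrade headers, needed for NodeBB's real-time traffic
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The floating-IP experiment Julian is running would sit in front of two or more boxes like this, so the IP can fail over between load balancers.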