Server slowdown imminent!

General Discussion
  • Hello all,

    As of roughly 5 minutes ago, I'm throwing tons of traffic at this server to see how performant NodeBB is.

    I'm going to keep throwing traffic at it until the server falls over, so for the next little while (an hour or so, perhaps), your experience on this site will be diminished 😵 (long loads, timeouts, etc.).

    Wish me luck 😆

  • By the end of all this, I'll likely switch some things around to see how much more traffic we can push to a NodeBB under various configurations. Hoping to publish a blog post about it soon 🙂

  • @julian said in Server slowdown imminent!:

    By the end of all this, I'll likely switch some things around to see how much more traffic we can push to a NodeBB under various configurations. Hoping to publish a blog post about it soon 🙂

    I'd love to see some of these metrics! Maybe we can get a better idea of hardware requirements.

    If you have an automated way of doing this, I'd love to help out with my host to give an idea of different hardware scaling.

  • [attached image: Screen Shot 2016-09-02 at 11.58.23 AM.png]

    I did end up confirming what our practical experience taught us: a 2-core VPS can handle roughly 200 active connections at the same time before falling over. However, at that point you're looking at 10s+ response times, so that's definitely not ideal.

    This forum is hosted on a 2-core VPS that handles everything: the proxying (via nginx), the database, and the application server. Splitting these tasks out to separate machines does increase raw throughput by a non-insignificant margin.

    That said, if I moved the db out to a separate droplet, that wouldn't actually give results you could usefully compare with those above, as we'd effectively be doubling the CPU count from 2 to 4 😵

    For the testing itself, I used loader.io, which provides quite a nice interface for testing (if throttled, as I was using the free plan).
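
  • A minimal sketch of the kind of concurrency probe described above — not the actual loader.io test plan. It fires a batch of concurrent GET requests at a URL and reports timing stats; the URL, concurrency, and request counts are illustrative placeholders:

    ```python
    # Hypothetical concurrency probe: issue `total` GET requests with
    # `concurrency` parallel workers and report how response times look.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    def probe(url, concurrency=20, total=100):
        """Hit `url` with concurrent GETs; return simple timing stats."""
        def hit(_):
            start = time.monotonic()
            with urllib.request.urlopen(url, timeout=30) as resp:
                resp.read()
            return time.monotonic() - start

        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            times = list(pool.map(hit, range(total)))
        return {
            "requests": total,
            "mean_s": sum(times) / len(times),
            "max_s": max(times),
        }
    ```

    A dedicated tool like loader.io (or `ab`/`hey`) is far better for real numbers — this only illustrates the shape of the test: ramp concurrency until mean and max latencies blow past an acceptable threshold.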

  • @julian said in Server slowdown imminent!:

    This forum is hosted on a 2-core VPS that handles everything: the proxying (via nginx), the database, and the application server.

    redis or mongo?

  • @exodo I believe the main db is mongo with redis used for caching.
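
  • For reference, NodeBB reads its database settings from `config.json`. A setup like the one described — Mongo as the primary store with Redis alongside — might look roughly like this (hosts, ports, and names below are illustrative placeholders, not this forum's actual config):

    ```json
    {
      "url": "https://forum.example.com",
      "database": "mongo",
      "mongo": {
        "host": "127.0.0.1",
        "port": 27017,
        "database": "nodebb"
      },
      "redis": {
        "host": "127.0.0.1",
        "port": 6379,
        "database": 0
      }
    }
    ```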

  • Yes, that's correct: this is on Mongo.

    Right now, for our big hosted clients (10m page views per month) we have a setup with a single load balancer, several small app servers, and a db server.

    We're (or well, Julian is) experimenting with a setup with a floating IP pointing to multiple load balancers... looking forward to his blog post 😉

    EDIT: and like Julian said, this community is basically hosted on one single potato server 🍠 ... doing okay with about half a million pageviews per month.
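
  • The load-balancer-plus-app-servers layout described above would typically be fronted by an nginx `upstream` block along these lines (IPs and names are placeholders, and `ip_hash` is just one pinning strategy — NodeBB's websockets need the upgrade headers either way):

    ```nginx
    # Hypothetical: one nginx load balancer fronting two NodeBB app servers.
    upstream nodebb_app {
        ip_hash;                  # keep a client pinned to one app server
        server 10.0.0.11:4567;
        server 10.0.0.12:4567;
    }

    server {
        listen 80;
        server_name forum.example.com;

        location / {
            proxy_pass http://nodebb_app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;    # websocket upgrade
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```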

