General idea about scaling out
-
I've been thinking for a few days about how to scale my site.
I'd like to scale out with new VPSes, instead of scaling up a single machine.
What I'd like to have in the end is something like: 1 "Nginx + Ghost" VPS, 2 "NodeBB" VPSes, 1 "redis" VPS. My only real concern is: how do I handle the uploads?
I already use Imgur for images, but I don't like the idea of an S3 bucket. Is there a way to "share" some space between the two NodeBB VPSes (and the nginx one), perhaps using another VPS as "disk only"? I'm currently using (and I really like) OVH; do you know if there is a feature that allows sharing this "space", and not in read-only mode?
I'm also trying to figure this out in order to "implement" an auto-scaling solution on Google Cloud. (If I manage to get something working, I'll write a guide on how to do it.)
-
Yes, you could do this with NFS, for example.
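A minimal sketch of what that could look like, assuming Debian/Ubuntu VPSes; the hostnames, subnet, and paths below are placeholders, not from the thread:

```shell
# On the "disk only" VPS (NFS server):
sudo apt-get install nfs-kernel-server
sudo mkdir -p /srv/nodebb-uploads
# Export the directory read-write to the other VPSes (example private subnet):
echo "/srv/nodebb-uploads 10.0.0.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra

# On each NodeBB VPS (NFS client):
sudo apt-get install nfs-common
sudo mkdir -p /path/to/nodebb/public/uploads
sudo mount -t nfs storage-vps:/srv/nodebb-uploads /path/to/nodebb/public/uploads
# To make the mount persistent, add a line like this to /etc/fstab:
# storage-vps:/srv/nodebb-uploads  /path/to/nodebb/public/uploads  nfs  defaults  0  0
```

Mounting the same export on the nginx VPS as well would let it serve the uploads directly.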
-
FYI: scaling out with NodeBB is currently not very practical. You'd have to be insanely busy for it to make sense. I'm not aware of any site taking more traffic than we are (CloudFlare puts our requests for one month at 190 million), and our NodeBB doesn't break a sweat on a single VPS. There is a high network-overhead cost to scaling out; most likely it would never speed you up, only add latency and cost. With each release NodeBB gets faster, as does the database, and individual VPSes get faster too. I can't imagine how large a site would need to be to warrant scaling out.
It's a good thought experiment, but I would not make it a goal.
-
Why do you not like the idea of an S3 bucket?
-
@scottalanmiller Do you use CloudFlare for WebSocket requests?
I tried to route WebSocket requests on a subdomain, but it did not work:
https://github.com/NodeBB/NodeBB/issues/5430
See also:
@hek said in Using CloudFlare with NodeBB:
Recommendation
Do NOT use CloudFlare (at least not the free plan) on NodeBB when you have moderate traffic to your forum. CloudFlare seems to silently throttle the traffic, resulting in very strange NodeBB behaviour (for some clients) where it simply cuts websockets.
The throttled clients will see a lot of "Looks like your connection to XXX Forum was lost, please wait while we try to reconnect." popups.
In the nginx error log you will also see lots of:
2017/01/25 09:56:15 [error] 13909#13909: *799654 upstream timed out (110: Connection timed out) while reading response header from upstream, client: xxxxxxxxxxxx.123, server: forum.mysensors.org, request: "GET /socket.io/?EIO=3&transport=polling&t=xxxxx&sid=xxxxxxxxx HTTP/1.1", upstream: "http://127.0.0.1:4568/socket.io/?EIO=3&transport=polling&t=xxxx&sid=xxxxxxx", host: "forum.mysensors.org", referrer: "https://forum.mysensors.org/topic/702/openhab-mqtt-example/2"
It has been kind of hellish to find the root cause.
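For reference, upstream timeouts like the one in that log usually come down to the socket.io proxy configuration; a common nginx sketch looks like the following. The port 4568 matches the upstream in the log above, but the location path and timeout value are assumptions, not this forum's actual config:

```
location /socket.io/ {
    proxy_pass http://127.0.0.1:4568;
    # WebSocket upgrade requires HTTP/1.1 and the Upgrade/Connection headers:
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    # Long-lived websocket connections: raise the read timeout well above
    # nginx's 60s default, or idle sockets get cut and clients see reconnects.
    proxy_read_timeout 86400s;
}
```

Note that even with a correct proxy config, an intermediary such as CloudFlare can still drop the websocket before it reaches nginx, which is what the quote above describes.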
-
@scottalanmiller I would like to scale out for two reasons:
- two single-vCore VPSes are much cheaper than a single two-core VPS.
- I'd like to have auto-scaling at some point.
But I'm not even close to your numbers. At the moment I cannot handle more than 30 truly simultaneous connections without serving a lot of 503s
(users are 50 at maximum, 25 hits/s). If it's not a problem, can you tell me what your configuration is?
Using a shared hard drive is cheaper than S3 ^^'. Unfortunately I cannot afford performant/nice hardware all year round, since there are months when my site gets 50k views/day and others when it gets 2k. AdSense doesn't earn me enough ^^'
And do you use the free CloudFlare plan?
-
@vstoykov said in General idea about scaling out:
@scottalanmiller Do you use CloudFlare for WebSocket requests?
I tried to route WebSocket requests on a subdomain, but it did not work:
https://github.com/NodeBB/NodeBB/issues/5430
See also:
@hek said in Using CloudFlare with NodeBB:
Recommendation
Do NOT use CloudFlare (at least not the free plan) on NodeBB when you have moderate traffic to your forum. CloudFlare seems to silently throttle the traffic, resulting in very strange NodeBB behaviour (for some clients) where it simply cuts websockets.
The throttled clients will see a lot of "Looks like your connection to XXX Forum was lost, please wait while we try to reconnect." popups.
In the nginx error log you will also see lots of:
2017/01/25 09:56:15 [error] 13909#13909: *799654 upstream timed out (110: Connection timed out) while reading response header from upstream, client: xxxxxxxxxxxx.123, server: forum.mysensors.org, request: "GET /socket.io/?EIO=3&transport=polling&t=xxxxx&sid=xxxxxxxxx HTTP/1.1", upstream: "http://127.0.0.1:4568/socket.io/?EIO=3&transport=polling&t=xxxx&sid=xxxxxxx", host: "forum.mysensors.org", referrer: "https://forum.mysensors.org/topic/702/openhab-mqtt-example/2"
It has been kind of hellish to find the root cause.
Yes, we use it. We are not on the free plan, though. But we have other sites on the free plan and have not really seen any issues.
-
@Giggiux said in General idea about scaling out:
@scottalanmiller I would like to scale out for two reasons:
- two single-vCore VPSes are much cheaper than a single two-core VPS.
- I'd like to have auto-scaling at some point.
But I'm not even close to your numbers. At the moment I cannot handle more than 30 truly simultaneous connections without serving a lot of 503s
(users are 50 at maximum, 25 hits/s). If it's not a problem, can you tell me what your configuration is?
Using a shared hard drive is cheaper than S3 ^^'. Unfortunately I cannot afford performant/nice hardware all year round, since there are months when my site gets 50k views/day and others when it gets 2k. AdSense doesn't earn me enough ^^'
And do you use the free CloudFlare plan?
We use Linode with an 8 GB RAM plan. There's no way scaling out would be as fast or as cost-effective.