NodeJS Cluster
-
@Florian-Müller said in NodeJS Cluster:
I don't see any problem with disabling long-polling requests (they even messed up our statistics), because every modern browser supports websockets, and they are working very well. Maybe I've overlooked something here?
No, you're not really wrong. These days you don't strictly need long polling to give your users the best support for your service, but you do need it for environments that don't support websockets yet. That includes many DDoS protection services and any other reverse proxy setup that isn't up to date. A perfect example is Cloudflare: it supports websockets now, but you have to be on a paid plan to actually use them. Unlike long polling, websockets are not really usable on the free plan; the support there is so limited that you can only test around with it. So this is a point where it makes sense for software like this to support long polling: not for the end users, but for the forum owners.
Actually, the problem with socket.io here is the handshake, and I wonder how you resolved it. For the socket.io handshake you need to stick to the same container for two requests, so how did you solve that? I haven't looked into socket.io for a long time, since these days I only use modules built for performance, like the `ws` or `uws` module. Do they finally have shared state support?
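For comparison, a minimal sketch of a bare `ws` server (port 8080 is just a placeholder): the whole connection is a single HTTP upgrade request, so there is no multi-step handshake that has to hit the same process twice.

```js
// Plain `ws` server: the upgrade completes within one HTTP request,
// so there is no extra handshake state to keep sticky between requests.
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
  socket.on('message', (msg) => {
    socket.send(`echo: ${msg}`); // echo back whatever the client sent
  });
});
```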
-
You're right, websockets don't work through Akamai (we're using Akamai) yet. So we're using a separate subdomain for the websocket connections, pointing to the datacenter directly. The page itself is protected, and the websockets are accepted by an nginx where we can use things like rate limiting.
I'm not sure what the difference in the handshake between long polling and websockets is, but websockets simply work with one single request, as long as the session is available on all instances (via redis in our case). I guess the handshake (auth) and the connection upgrade happen in the same request.
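For reference, roughly what a shared redis-backed session store looks like in an Express app. This is just a sketch: the connect-redis API differs between versions (this follows the older callback-style redis client), and the secret and port are placeholders.

```js
// Every app instance points at the same Redis session store, so any
// process can look up the session of an incoming request.
const express = require('express');
const session = require('express-session');
const RedisStore = require('connect-redis')(session);
const redis = require('redis');

const app = express();
const client = redis.createClient({ host: '127.0.0.1', port: 6379 });

app.use(session({
  store: new RedisStore({ client }), // shared across all instances
  secret: 'replace-me',              // placeholder
  resave: false,
  saveUninitialized: false,
}));

app.listen(process.env.PORT || 4567);
```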
-
@Florian-Müller said in NodeJS Cluster:
You're right, websockets don't work through Akamai (we're using Akamai) yet. So we're using a separate subdomain for the websocket connections, pointing to the datacenter directly. The page itself is protected, and the websockets are accepted by an nginx where we can use things like rate limiting.
I hope you're pointing to an IP that is nowhere near the IP range of your services, or even the same one, which would be worse. Otherwise, if you're using Akamai as a DDoS solution, you've just built an information disclosure vulnerability by design. Unless you're hosting at Level 3 or OVH or another provider with similar capabilities, it's pretty unlikely that your hoster has protection for this on its own.
I'm not sure what the difference in the handshake between long polling and websockets is, but websockets simply work with one single request, as long as the session is available on all instances (via redis in our case). I guess the handshake (auth) and the connection upgrade happen in the same request.
The handshake has nothing to do with long polling; you can do long polling without a handshake. It's socket.io that performs the handshake, not any of the transport protocols socket.io can use.
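As an illustration, here is a rough sketch of long polling with no handshake at all (Express, with a placeholder 25-second timeout). Every request is independent, so any process behind the load balancer can answer it:

```js
const express = require('express');
const app = express();

const waiting = []; // responses we are currently holding open

app.get('/poll', (req, res) => {
  waiting.push(res);
  // Give up after a while so proxies don't kill the idle connection.
  setTimeout(() => {
    const i = waiting.indexOf(res);
    if (i !== -1) {
      waiting.splice(i, 1);
      res.json({ events: [] });
    }
  }, 25000);
});

// Something happened: flush it to every client currently waiting.
function broadcast(event) {
  while (waiting.length) {
    waiting.pop().json({ events: [event] });
  }
}

app.listen(3000);
```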
-
And about Akamai: yes, they're pretty outdated... I had to work with them in the past (just a few months ago, though) and I was really surprised by how slow they are in development. They are far behind their competitors, especially Cloudflare; they don't even have HTTP/2 push yet...
-
We're using Google Cloud as our hoster, so we should be safe here.
Even the routing nginx for the websockets is separate from the other routing systems.
I'm not a developer, so I don't have a real clue how it works. When we used long-polling and websockets in the beginning, NodeBB tried long-polling first, ending up in endless loops because the handshakes failed. The problem disappeared immediately once we removed long-polling from the transports. In my web console I can see a single websocket request in a pending state, with frames being sent over it.
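In case someone wants to do the same: if I remember correctly, the allowed transports can be set in NodeBB's config.json with something like the snippet below (this is from memory, so please double-check the exact keys against the current docs).

```json
{
  "socket.io": {
    "transports": ["websocket"]
  }
}
```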
-
@Florian-Müller said in NodeJS Cluster:
We're using Google Cloud as our hoster, so we should be safe here.
Even the routing nginx for the websockets is separate from the other routing systems.
Ok, I can't tell whether they include DDoS protection automatically; maybe you should ask them about that, or wait for the first attack to happen.
I'm not a developer, so I don't have a real clue how it works. When we used long-polling and websockets in the beginning, NodeBB tried long-polling first, ending up in endless loops because the handshakes failed. The problem disappeared immediately once we removed long-polling from the transports. In my web console I can see a single websocket request in a pending state, with frames being sent over it.
Good to know
-
@Florian-Müller Yes, typically we recommend using sticky cookies or `ip_hash` in nginx to segment users into different buckets, one per NodeBB process.
If you go with websockets only, then of course, no problems... except for those users running IE8 :shipit:
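For example, a minimal nginx sketch of the `ip_hash` approach (the upstream ports 4567/4568 are just placeholders for however many NodeBB processes you run):

```nginx
# Each client IP hashes to one upstream, so both requests of the
# socket.io handshake from the same visitor land on the same process.
upstream nodebb {
    ip_hash;
    server 127.0.0.1:4567;
    server 127.0.0.1:4568;
}

server {
    listen 80;

    location / {
        proxy_pass http://nodebb;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # let websocket upgrades through
        proxy_set_header Connection "upgrade";
    }
}
```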
-
@julian The company decided that we are not going to support IE8 anymore, hurray!
So, websocket connections are not bound to a specific process, as long as the session data is available everywhere? One single request, as I assumed above?
-
afaik, and please correct me if I'm wrong, socket.io does rely on a longer handshake process (vs., for example, handling the handshake yourself using `ws`). But does the socket.io-redis module take care of centralizing the handshake metadata? (Besides centralizing socket.io's pub/sub feature, which it does for sure.)
-
@zoharm said in NodeJS Cluster:
afaik, and please correct me if I'm wrong, socket.io does rely on a longer handshake process (vs., for example, handling the handshake yourself using `ws`). But does the socket.io-redis module take care of centralizing the handshake metadata? (Besides centralizing socket.io's pub/sub feature, which it does for sure.)
No, it is just an adapter that enables you to send messages between different processes running socket.io, all using this same adapter. It does not centralize the handshake.
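To make that concrete, wiring the adapter up is just a couple of lines; this follows the classic socket.io-redis module (the API has changed in newer socket.io versions, so treat it as a sketch, and the port is a placeholder):

```js
// Each process attaches the same Redis-backed adapter. Emitted packets are
// published over Redis pub/sub so the other processes can re-emit them to
// their own locally connected clients.
const io = require('socket.io')(4567); // placeholder port
const redisAdapter = require('socket.io-redis');

io.adapter(redisAdapter({ host: '127.0.0.1', port: 6379 }));
```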
-
@wzrdtales said in NodeJS Cluster:
No, it is just an adapter that enables you to send messages between different processes running socket.io, all using this same adapter. It does not centralize the handshake.
Just one more question, if possible: wouldn't the adapter also take care of telling all other processes subscribed to the same adapter to emit a message (and thus reach all clients connected to those processes)? À la:
io.to('room').emit('event', data);
Thank you!
-
@zoharm said in NodeJS Cluster:
Just one more question, if possible: wouldn't the adapter also take care of telling all other processes subscribed to the same adapter to emit a message (and thus reach all clients connected to those processes)? À la:
To quote: "enables you to send messages between different processes running socket.io, all using this same adapter." So yes, that is basically the whole functionality of the adapter. The processes fetch messages from the central store and send them to the target, if the target is connected to them. Or, in the case of groups ("rooms"), they send it to everyone in that group who is connected to them.
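As a small sketch of that (same adapter setup as in the snippet above; start two copies on different ports against the same Redis, and clients on either process receive the room broadcast):

```js
const io = require('socket.io')(Number(process.env.PORT) || 4567);
const redisAdapter = require('socket.io-redis');
io.adapter(redisAdapter({ host: '127.0.0.1', port: 6379 }));

io.on('connection', (socket) => {
  socket.join('room');

  socket.on('chat', (msg) => {
    // Goes through the adapter: members of 'room' connected to the other
    // process receive this as well, not just the locally connected ones.
    io.to('room').emit('chat', msg);
  });
});
```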