hundreds of polling requests
-
The site works with xhr-polling, to my knowledge. We had a client using xhr-polling for a while, with no loss of functionality...
There are many AJAX requests, yes, but that is the nature of long polling, no? Then again, there shouldn't be multiple calls per second... just one or two every 10s or so...
-
Some more thoughts on the issue:
- When you don't have websockets working, we drop down to xhr polling (as established prior).
- To establish a secure connection, socket.io goes through a multi-stage handshake to exchange information.
- If you are utilising multiple NodeBB processes, separate parts of the handshake can end up going to different processes, and the handshake is lost.
@markkus @TheBronx Can you try reducing the number of ports used by NodeBB to just 1, and see if the issue persists? I have a feeling it will not, though of course, this is not a solution...
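For anyone unsure where that is set: a sketch of the relevant line in `config.json`, assuming a single test instance (the port number is only an example):

```
"port": "4567"
```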
-
@julian you are right: when setting only one instance, long polling works as expected. It only uses one request at a time, which gets an answer sooner or later, but it no longer uses 4 or 5 requests at the same time. There are also no 400 errors.
We have to go back to multiple instances, of course, but it seems you have found something. Is there anything we can do to make nginx redirect all requests from the same IP (or user) to the same NodeBB instance? This should help, am I wrong?
By the way, we have tried with a secondary server (we still have to fix some issues), and we have found that the wiki is wrong about the socket.io configuration:
It says you have to create a block in config.json like:
`"socketio": { ... }`
when it should say:
`"socket.io": { ... }`
Notice the dot xD
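A minimal sketch of the corrected block, borrowing the `transports` option that shows up later in this thread (the value is just an example):

```
"socket.io": {
    "transports": ["polling"]
}
```
-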
The solution for this problem (in addition to using `ip_hash`, that is) as suggested online seems to be to use the `sticky-sessions` module, although NodeBB actually doesn't use the cluster module, so this is not an applicable solution unless we rewrite the load balancing code in `loader.js` to use Node.js' `cluster` module again.
-
That won't work, as CF doesn't support websockets on lower plans, no matter if you disable caching, etc.
@julian just tried websockets with a second server with another NodeBB instance and the socket.io from the main server, with CloudFlare referring to that second IP.
It works, but it doesn't let me log in or post.
How do you share "cookies" or logged-in status between those 2 Node instances?
-
After discussion with @baris, I was mistaken about NodeBB's role with cluster management.
We rely purely on nginx (or apache) to forward the incoming requests to the correct NodeBB instance. If NodeBB is set up to listen to two ports, it will start two instances, but will not do any routing.
So, @TheBronx @KingCat @zack, you'll have to set up your environment as follows:
Domain Name -> CloudFlare -> Nginx -> NodeBB x2 (assuming ports 4567 and 4568).
Nginx will need to proxy to the upstream IP (even if it is the same machine), so you'll need to use an `upstream` block like so:

```
upstream io_nodes {
    ip_hash;
    server 127.0.0.1:4567;
    server 127.0.0.1:4568;
}
```

The `ip_hash;` part is important, so incoming IP addresses are sent to the same server during handshaking. Long polling should work fine then.
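To tie this back to `config.json`: the two instances come from listing two ports there (a sketch, using the same example ports as above):

```
"port": ["4567", "4568"]
```

Each port gets its own NodeBB process, and the `upstream` block above is what puts both of them behind a single domain.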
-
> @julian said:
>
> After discussion with @baris, I was mistaken about NodeBB's role with cluster management.
> We rely purely on nginx (or apache) to forward the incoming requests to the correct NodeBB instance. If NodeBB is set up to listen to two ports, it will start two instances, but will not do any routing.
> So, @TheBronx @KingCat @zack, you'll have to set up your environment as follows:
> Domain Name -> CloudFlare -> Nginx -> NodeBB x2 (assuming ports 4567 and 4568).
> Nginx will need to proxy to the upstream IP (even if it is the same machine), so you'll need to use an `upstream` block like so:
>
> ```
> upstream io_nodes {
>     ip_hash;
>     server 127.0.0.1:4567;
>     server 127.0.0.1:4568;
> }
> ```
>
> The `ip_hash;` part is important, so incoming IP addresses are sent to the same server during handshaking. Long polling should work fine then.

Not sure how to do this. How do you run 2 instances of Node with the same domain?
-
> @julian said:
>
> After discussion with @baris, I was mistaken about NodeBB's role with cluster management.
> We rely purely on nginx (or apache) to forward the incoming requests to the correct NodeBB instance. If NodeBB is set up to listen to two ports, it will start two instances, but will not do any routing.
> So, @TheBronx @KingCat @zack, you'll have to set up your environment as follows:
> Domain Name -> CloudFlare -> Nginx -> NodeBB x2 (assuming ports 4567 and 4568).
> Nginx will need to proxy to the upstream IP (even if it is the same machine), so you'll need to use an `upstream` block like so:
>
> ```
> upstream io_nodes {
>     ip_hash;
>     server 127.0.0.1:4567;
>     server 127.0.0.1:4568;
> }
> ```
>
> The `ip_hash;` part is important, so incoming IP addresses are sent to the same server during handshaking. Long polling should work fine then.

I can confirm the hundreds of polling requests: it's not a WS issue.
Tried it with this in the config, so it only uses polling:

```
"socket.io": {
    "transports": ["polling"]
}
```

I put only one port so it will run one instance and it will be OK, no errors on the console. But if you put several ports,

```
"port": ["4567", "4568", "4569", "4570"],
```

with this proxy on nginx:

```
server {
    listen 80;
    server_name yournodebb.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://io_nodes;
        proxy_redirect off;

        # Socket.IO Support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

upstream io_nodes {
    ip_hash;
    server 127.0.0.1:4567;
    server 127.0.0.1:4568;
    server 127.0.0.1:4569;
    server 127.0.0.1:4570;
}
```

then you will get many requests per second. Notice the domain isn't ws.exo.do and it's only exo.do, because it's not running with websockets.
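One way to sanity-check where each polling request actually lands (not something from the thread, just a debugging sketch using standard nginx variables): expose the chosen backend in a response header and watch it in the browser's network tab.

```
# Debugging only: add inside the location / block above.
# $upstream_addr is the backend this request was proxied to,
# e.g. 127.0.0.1:4567 or 127.0.0.1:4569.
add_header X-Upstream-Addr $upstream_addr;
```

If successive socket.io polling requests from the same browser show different backends, `ip_hash` isn't keeping the session on one instance, which would match the errors described below.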
-
@zack Thanks for giving it a try. Odd that the `ip_hash` directive isn't doing what it's supposed to. What nginx version are you running? Are you using CloudFlare as well? You'll need to translate those IPs properly, as when going through CF, the requesting IP is actually one from CF's network, not the end user's.
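For reference, the usual way to do that translation in nginx is the realip module (if your nginx build includes it): it restores the visitor's address from the `CF-Connecting-IP` header that CloudFlare adds to each request. A sketch (the `set_real_ip_from` ranges are only examples; use CloudFlare's full published list), which should also let `ip_hash` hash on the real visitor IP rather than on CloudFlare's proxy addresses:

```
# http { } level: trust CloudFlare's proxies and recover the real
# client IP from the header they send with every proxied request.
set_real_ip_from 103.21.244.0/22;    # example CloudFlare range
set_real_ip_from 173.245.48.0/20;    # example CloudFlare range
real_ip_header CF-Connecting-IP;
```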
-
> @julian said:
>
> @zack Thanks for giving it a try. Odd that the `ip_hash` directive isn't doing what it's supposed to. What nginx version are you running? Are you using CloudFlare as well? You'll need to translate those IPs properly, as when going through CF, the requesting IP is actually one from CF's network, not the end user's.

nginx/1.1.19 + CloudFlare
-
@zack NodeBB documentation states the minimum nginx version as v1.3.13. Maybe upgrading nginx will solve the issue?
-
> @pichalite said:
>
> @zack NodeBB documentation states the minimum nginx version as v1.3.13. Maybe upgrading nginx will solve the issue?

Worth a try.
-
Updated to nginx 1.8 and the problem persists.
Notice this is not a CloudFlare problem; browsing directly with the server IP ends with the same error if you put more than 1 port in the node config. Even with 1 port I'm getting one 400 error each minute.
If you click on one of them you get this:

```
{"code":1,"message":"Session ID unknown"}
```