How to Load Balance for all CPU Cores
-
@baris said in How to Load Balance for all CPU Cores:
@pummelchen There is an issue in one of our dependencies that affects mongodb, https://github.com/scttnlsn/mubsub/issues/61. In the meantime you can install redis and add a redis block in your config.json so pubsub uses redis instead of mongodb. That should get rid of the error you are seeing in the logs.
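For reference, a minimal sketch of what that redis block in config.json might look like. The field names follow NodeBB's redis configuration; the host, port, and URL values here are placeholders you would replace with your own:

```json
{
  "url": "https://example.org/forum",
  "database": "mongo",
  "redis": {
    "host": "127.0.0.1",
    "port": 6379,
    "password": "",
    "database": 0
  }
}
```

Note the `database` field stays `"mongo"`, so forum data remains in mongodb; redis is only picked up for pubsub and sessions.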
I had the same error. Adding redis as an additional db did the trick. A question: once it's fixed, can I disable redis without a problem, or should I flush changes back to mongo somehow? (Memory is scarce, so I'd prefer to disable redis if possible.)
-
@jaspernl When the issue is fixed you can remove the redis block. Users will probably have to log in again, since sessions will have been stored in redis, but that's about it. Forum data stays in mongodb because you are not changing the
database: "mongo"
section in config.json.
-
Only happens on 1.9.3 AFAIK. If you can't wait for the PR to be merged you can apply the changes yourself from https://github.com/scttnlsn/mubsub/pull/62/files. I am not sure if the author of the package is active, so it might take a while.
-
@jaspernl If you have a redis block and set mongodb in the
database
field, then redis is only used for the session store and pubsub (which is required if you are running more than one NodeBB process). Posts are cached in each NodeBB process regardless of which database you use.
-
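To tie this together: running more than one NodeBB process is typically done by listing multiple ports in config.json, which is what the nginx upstream later in this thread balances across. A sketch, assuming NodeBB's port-array syntax and placeholder values throughout:

```json
{
  "url": "https://example.org/forum",
  "port": ["4567", "4568"],
  "database": "mongo",
  "redis": {
    "host": "127.0.0.1",
    "port": 6379
  }
}
```

With two ports, NodeBB spawns two processes, and the redis block provides the shared pubsub channel and session store they need to coordinate.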
@pummelchen said in How to Load Balance for all CPU Cores:
worker_processes 2;
I used the fork by @baris and replaced the whole mubsub package with it. That finally spawned 2 stable node processes, and the forum runs fine.
However, I had to change worker_processes from 2 back to 1 to get it working.
With 2 worker processes I got this error: bind() to 0.0.0.0:80 failed (98: Address already in use)
That is mostly an issue in my nginx.conf, since it 1) runs as a webserver on 443 but also 2) routes traffic, and there is a conflict somewhere when two worker processes are active.
The main issue is solved. I'll post my nginx.conf just in case. Thanks!
-
```nginx
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 60;
    gzip on;

    upstream io_nodes {
        ip_hash;
        server 127.0.0.1:4567;
        server 127.0.0.1:4568;
    }

    server {
        listen 80;
        server_name www.test.com test.com;
        return 301 https://test.com$request_uri;
        access_log off;
    }

    server {
        listen 443 ssl;
        server_name www.test.com;
        ssl_certificate X:/nginx/ssl/test.crt;
        ssl_certificate_key X:/nginx/ssl/test.key;
        return 301 https://test.com$request_uri;
        access_log off;
    }

    server {
        listen 443 ssl;
        server_name test.com;
        ssl_certificate X:/nginx/ssl/test.crt;
        ssl_certificate_key X:/nginx/ssl/test.key;
        access_log off;
        server_tokens off;
        root X:/Web;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;

        # error_page 404 /404.html;
        # location = /404.html {
        #     root X:/Web;
        #     internal;
        # }

        # error_page 500 502 503 504 /50x.html;
        # location = /50x.html {
        #     root X:/Web;
        # }

        location / {
            try_files $uri /index.html;
            index index.html;
        }

        location /forum {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header X-NginX-Proxy true;
            # proxy_pass http://127.0.0.1:4567;
            proxy_pass http://io_nodes;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
```
-
Self-support: so it seems the fix is to add this to config.json:
"bind_address": "127.0.0.1",
and in nginx.conf bind ports 80 and 443 to the external IP:
listen 195.201.96.256:80;
Now change
worker_processes 4;
and we have load balancing for nginx and also for node. Still missing is load balancing for mongo, but I'll take a rain check on that.
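Putting the fix together in one fragment: NodeBB binds only to loopback, while nginx binds explicitly to the public interface, so multiple worker processes no longer collide on the wildcard address. A sketch, reusing the placeholder IP and ports from this post (adjust to your own interface):

```nginx
worker_processes 4;           # one worker per CPU core

http {
    upstream io_nodes {
        ip_hash;              # keep websocket clients pinned to one backend
        server 127.0.0.1:4567;   # NodeBB processes, reachable only on loopback
        server 127.0.0.1:4568;   # via "bind_address": "127.0.0.1" in config.json
    }

    server {
        listen 195.201.96.256:80;   # bind explicitly to the external IP
        location /forum {
            proxy_pass http://io_nodes;
        }
    }
}
```

Binding nginx to a specific address and NodeBB to 127.0.0.1 means each listener owns a distinct (address, port) pair, which is what resolves the "Address already in use" error above.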
-
@pummelchen said in How to Load Balance for all CPU Cores:
worker_processes
Thanks, that's interesting. They say "It is common practice to run 1 worker process per core." (How To Optimize Nginx Configuration | DigitalOcean).
-
Yes: since nginx workers and node.js are single-threaded, you need to spawn multiple copies of them, matching the number of CPU cores, in order to fully use the power of your server.
One nginx master process acts as a controller/watchdog for the workers.
I'm no MongoDB expert, but I've read that it automatically uses multiple threads for read requests while using a single thread for write operations.
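To size the worker counts discussed above, a quick way to check how many cores the server actually has (Linux; `nproc` is from GNU coreutils):

```shell
# Print the number of CPU cores available to this process
nproc
```

nginx can also do this automatically with `worker_processes auto;` in nginx.conf, which sizes the worker pool to the detected core count.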