How to Load Balance for all CPU Cores



  • Hello,

    I'm trying to do the following on a VPS with 2 CPU cores, so that both cores are used for delivering web content:

    • Get 2 instances of nginx running
    • Get 2 instances of Node.js running
    • Keep MongoDB as a single instance, since it is supposed to support multiple cores via threading

    Steps Done

    • Edit nginx.conf
    worker_processes 2;
    
    and
    
    upstream io_nodes {
        ip_hash;
        server 127.0.0.1:4567;
        server 127.0.0.1:4568;
    }
    
    and
    
     proxy_pass http://io_nodes;
    
    
    • Edit config.json
     "port": ["4567", "4568"],
    

    Save everything, stop everything, start everything... it's not working.

    What happens is that Node.js starts twice, as it should, but shortly after both processes close. This repeats twice more and I end up with no Node.js processes at all.

    Only nginx works fine, running 3 processes of itself.

    The NodeBB output log mentions something about MongoDB-related errors, although I did not change anything about MongoDB at all.

    What am I missing ?

    Please note that I am on Windows, running Node.js 10.1.0.



  • Last part of the logs:

    2018-05-19T12:27:35.061Z [2316] - info: [app] Shutdown (SIGTERM/SIGINT) Initialised.
    2018-05-19T12:27:35.362Z [2316] - error: Error [ERR_SERVER_NOT_RUNNING]: Server is not running.
    at Server.close (net.js:1596:12)
    at Object.onceWrapper (events.js:273:13)
    at Server.emit (events.js:182:13)
    at emitCloseNT (net.js:1649:8)
    at process._tickCallback (internal/process/next_tick.js:63:19)
    [cluster] Child Process (2316) has exited (code: 1, signal: null)
    [cluster] Spinning up another process...
    2018-05-19T12:27:35.797Z [5080] - error: TypeError: collection.find(...).sort(...).limit(...).nextObject is not a function
    at Channel.onCollection (X:\NodeBB\node_modules\mubsub\lib\channel.js:204:14)
    at Object.onceWrapper (events.js:273:13)
    at Channel.emit (events.js:182:13)
    at X:\NodeBB\node_modules\mubsub\lib\channel.js:119:22
    at result (X:\NodeBB\node_modules\mongodb\lib\utils.js:414:17)
    at session.endSession (X:\NodeBB\node_modules\mongodb\lib\utils.js:401:11)
    at ClientSession.endSession (X:\NodeBB\node_modules\mongodb\node_modules\mongodb-core\lib\sessions.js:72:41)
    at executeCallback (X:\NodeBB\node_modules\mongodb\lib\utils.js:397:17)
    at handleCallback (X:\NodeBB\node_modules\mongodb\lib\utils.js:128:55)
    at X:\NodeBB\node_modules\mongodb\lib\db.js:504:18
    2018-05-19T12:27:35.798Z [5080] - info: [app] Shutdown (SIGTERM/SIGINT) Initialised.
    2018-05-19T12:27:36.115Z [5080] - error: Error [ERR_SERVER_NOT_RUNNING]: Server is not running.
    at Server.close (net.js:1596:12)
    at Object.onceWrapper (events.js:273:13)
    at Server.emit (events.js:182:13)
    at emitCloseNT (net.js:1649:8)
    at process._tickCallback (internal/process/next_tick.js:63:19)
    [cluster] Child Process (5080) has exited (code: 1, signal: null)
    [cluster] Spinning up another process...
    the autoIndexId option is deprecated and will be removed in a future release
    2018-05-19T12:27:37.164Z [3836] - info: Initializing NodeBB v1.9.2 https://mysecretweb.com/forum
    the autoIndexId option is deprecated and will be removed in a future release
    2018-05-19T12:27:38.485Z [8332] - error: TypeError: collection.find(...).sort(...).limit(...).nextObject is not a function
    at Channel.onCollection (X:\NodeBB\node_modules\mubsub\lib\channel.js:204:14)
    at Object.onceWrapper (events.js:273:13)
    at Channel.emit (events.js:182:13)
    at X:\NodeBB\node_modules\mubsub\lib\channel.js:119:22
    at result (X:\NodeBB\node_modules\mongodb\lib\utils.js:414:17)
    at session.endSession (X:\NodeBB\node_modules\mongodb\lib\utils.js:401:11)
    at ClientSession.endSession (X:\NodeBB\node_modules\mongodb\node_modules\mongodb-core\lib\sessions.js:72:41)
    at executeCallback (X:\NodeBB\node_modules\mongodb\lib\utils.js:397:17)
    at handleCallback (X:\NodeBB\node_modules\mongodb\lib\utils.js:128:55)
    at X:\NodeBB\node_modules\mongodb\lib\db.js:504:18
    2018-05-19T12:27:38.487Z [8332] - info: [app] Shutdown (SIGTERM/SIGINT) Initialised.
    2018-05-19T12:27:38.790Z [8332] - error: Error [ERR_SERVER_NOT_RUNNING]: Server is not running.
    at Server.close (net.js:1596:12)
    at Object.onceWrapper (events.js:273:13)
    at Server.emit (events.js:182:13)
    at emitCloseNT (net.js:1649:8)
    at process._tickCallback (internal/process/next_tick.js:63:19)



  • @pummelchen There is an issue in one of our dependencies that affects mongodb, https://github.com/scttnlsn/mubsub/issues/61. In the meantime you can install redis and add a redis block in your config.json so pubsub uses redis instead of mongodb. That should get rid of the errors you are seeing in the logs.
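
    A sketch of what that redis block might look like in config.json (host, port, and database names are placeholder values; adjust to your setup):

    ```json
    {
        "url": "https://example.com/forum",
        "database": "mongo",
        "mongo": {
            "host": "127.0.0.1",
            "port": 27017,
            "database": "nodebb"
        },
        "redis": {
            "host": "127.0.0.1",
            "port": 6379,
            "database": 0
        }
    }
    ```

    With "database" still set to "mongo", forum data stays in MongoDB; the redis block is only picked up for pubsub (and sessions).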



  • See https://github.com/scttnlsn/mubsub/pull/62 for the PR I have opened to fix the issue.


  • @baris said in How to Load Balance for all CPU Cores:

    @pummelchen There is an issue in one of our dependencies that affects mongodb, https://github.com/scttnlsn/mubsub/issues/61. In the meantime you can install redis and add a redis block in your config.json so pubsub uses redis instead of mongodb. That should get rid of the errors you are seeing in the logs.

    I had the same error. Adding redis as an additional db did the trick. A question: once it's fixed, can I disable redis without a problem, or should I flush changes to mongo somehow? (Memory is scarce, so I'd prefer to disable redis if possible.)



  • @jaspernl When the issue is fixed, you can remove the redis block. Users will probably have to log in again, since sessions will have been stored in redis, but that's about it. Forum data is still in mongodb because you are not changing the database ("mongo") section in config.json.



  • @baris

    Thanks for that! From experience, is there a rough ETA until MongoDB will contain this fix? Redis and Windows do not match very well, so I'm happy to wait.



  • Does this issue only impact the 1.9.x versions?



  • It only happens on 1.9.3 AFAIK. If you can't wait for the PR to be merged, you can apply the changes yourself from https://github.com/scttnlsn/mubsub/pull/62/files. I am not sure whether the author of the package is active, so it might take a while.


  • If I add both redis and mongo and set mongo as the default DB driver, is redis only used to store sessions, with no cached posts or quick-served pages? In that case, I'll keep Redis around.



  • @jaspernl If you have a redis block and set mongodb in the database field, then redis is only used for the session store and pubsub (which is required if you are running more than one NodeBB process). Posts are cached in each NodeBB process regardless of which database you use.



  • @pummelchen said in How to Load Balance for all CPU Cores:

    worker_processes 2;

    I used @baris's fork and replaced the whole mubsub module with it. That finally spawned 2 stable Node processes and the forum runs fine.

    However, I had to change worker_processes from 2 back to 1 to get it working.
    With 2 worker processes I got this error:

    bind() to 0.0.0.0:80 failed (98: Address already in use)

    This is most likely an issue in my nginx.conf, as it 1) runs as a web server on :443 but also 2) routes traffic, and there is a conflict somewhere when two worker processes are active.

    The main issue is solved; I'm posting my nginx.conf below just in case. Thanks!



  • worker_processes 1;
    
    events {
        worker_connections 1024;
    }
    
    http {
        include mime.types;
        default_type application/octet-stream;
    
        sendfile on;
        keepalive_timeout 60;
        gzip on;
    
        upstream io_nodes {
            ip_hash;
            server 127.0.0.1:4567;
            server 127.0.0.1:4568;
        }
    
        server {
            listen 80;
            server_name www.test.com test.com;
            return 301 https://test.com$request_uri;
            access_log off;
        }
    
        server {
            listen 443 ssl;
            server_name www.test.com;
    
            ssl_certificate X:/nginx/ssl/test.crt;
            ssl_certificate_key X:/nginx/ssl/test.key;
    
            return 301 https://test.com$request_uri;
    
            access_log off;
        }
    
        server {
            listen 443 ssl;
            server_name test.com;
    
            ssl_certificate X:/nginx/ssl/test.crt;
            ssl_certificate_key X:/nginx/ssl/test.key;
    
            access_log off;
            server_tokens off;
    
            root X:/Web;
    
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
    
            # error_page 404 /404.html;
            # location = /404.html {
            #     root X:/Web;
            #     internal;
            # }
    
            # error_page 500 502 503 504 /50x.html;
            # location = /50x.html {
            #     root X:/Web;
            # }
    
            location / {
                try_files $uri /index.html;
                index index.html;
            }
    
            location /forum {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_set_header X-NginX-Proxy true;
    
                # proxy_pass http://127.0.0.1:4567;
                proxy_pass http://io_nodes;
    
                proxy_redirect off;
    
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
            }
        }
    }
    


  • Self-support... 😇

    So it seems the fix is to add this to config.json:

     "bind_address": "127.0.0.1",
    

    and in nginx.conf bind all port 80 and 443 listeners to the external IP:

    listen 195.201.96.256:80;
    

    Now change

    worker_processes 4;
    

    And we have load balancing for nginx and also for Node. Still missing is load balancing for Mongo, but I'll leave that for a rain check.
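
    In other words (a sketch with placeholder values): NodeBB binds only the loopback interface, while nginx binds only the public one, so the two never contend for the same socket. The relevant config.json excerpt would look like:

    ```json
    {
        "bind_address": "127.0.0.1",
        "port": ["4567", "4568"]
    }
    ```

    while every listen directive in nginx.conf names the public address explicitly, e.g. listen 203.0.113.10:443 ssl; (203.0.113.10 stands in for the server's real public IP).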



  • @pummelchen said in How to Load Balance for all CPU Cores:

    worker_processes

    Thanks, that's interesting. They say "It is common practice to run 1 worker process per core." (How To Optimize Nginx Configuration | DigitalOcean).
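
    For what it's worth, newer nginx versions (1.2.5 and later) can size this automatically instead of hardcoding the core count:

    ```nginx
    # Let nginx spawn one worker per detected CPU core
    worker_processes auto;
    ```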



  • Yes. As nginx and Node.js are single-threaded applications, you need to spawn as many copies of them as there are CPU cores in order to fully use the power of your server.

    (Screenshot: Server.png)

    One nginx process acts as a controller/watchdog.

    I'm no MongoDB expert, but I read that it automatically uses multiple threads for read requests while using a single thread for write operations.
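
    The one-worker-per-core pattern with a watchdog that respawns crashed children, as described above, can be sketched with Node's built-in cluster module (this is an illustration of the general technique, not NodeBB's actual code):

    ```javascript
    // Fork one worker per CPU core; the master acts as a watchdog.
    const cluster = require('cluster');
    const os = require('os');

    const numWorkers = os.cpus().length; // e.g. 2 on a 2-core VPS

    if (cluster.isMaster) {
      for (let i = 0; i < numWorkers; i += 1) {
        cluster.fork();
      }
      cluster.on('exit', (worker, code) => {
        console.log(`worker ${worker.process.pid} exited (code ${code})`);
        // A watchdog would re-fork here, which is what NodeBB's
        // "[cluster] Spinning up another process..." log line reports.
      });
    } else {
      // A real worker would call server.listen(port) here and keep running.
      console.log(`worker ${process.pid} started`);
      process.exit(0);
    }
    ```

    Each worker is a separate process with its own V8 event loop, which is why NodeBB needs one port per process (the "port" array) and nginx needs an upstream block to spread requests across them.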

