NodeBB Dies

General Discussion
  • In my original post, I set up Varnish Cache behind Apache in the hope that connection reuse would keep the site from dying. I got Varnish Cache working, but NodeBB still dies randomly. On further investigation, I think Apache is killing the NodeBB instance. Could someone with more Apache knowledge take a look at my error.log?

    ## Apache Error.log

    ## Apache Access.log
    I think this jerk is essentially DDoSing the site and causing either the Apache Proxy or the NodeBB to die. But how?

    ## NodeBB Output.log

    ## NodeBB Error.log
    It only happened from last night on, so I believe only the last entry is relevant.

    {"level":"error","message":"EACCES, open '/var/www/nodebb/socket.log'","timestamp":"2014-05-18T23:02:59.642Z"}
    {"level":"error","message":"[[error:too-many-posts, 10]]","timestamp":"2014-05-19T00:00:00.746Z"}
    {"level":"error","message":"EACCES, open '/var/www/nodebb/socket.log'","timestamp":"2014-05-19T03:53:33.291Z"}
    {"level":"error","message":"[[error:too-many-posts, 10]]","timestamp":"2014-05-19T11:00:00.564Z"}
  • I think I may have solved it. The one-man DDoS was filling my access_log up to its size limit, triggering logrotate. The logrotate.d config for httpd specified service httpd graceful, which kills all child processes. I changed graceful to restart, since there's no reason for NodeBB to go down along with Apache's children (and apparently forever doesn't start it back up again anyway... Useless.)
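    For reference, the change described above lives in the logrotate config. A sketch of what a modified /etc/logrotate.d/httpd might look like on a CentOS-style system (the size and rotate values are illustrative assumptions, not distro defaults):

    ```
    # /etc/logrotate.d/httpd -- illustrative values, adjust to your setup
    /var/log/httpd/*log {
        missingok
        notifempty
        compress
        size 100M
        rotate 4
        sharedscripts
        postrotate
            # was: /sbin/service httpd graceful
            /sbin/service httpd restart > /dev/null 2>/dev/null || true
        endscript
    }
    ```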

  • Scratch that. It still breaks. Any thoughts on my logs? I don't know why it's breaking at this point.

  • NodeBB runs independently from Apache, doesn't it? I can't imagine how Apache would be killing off a NodeBB process...

  • @julian Interesting. Any more verbose logging I should look at? NodeBB is started using forever app.js start &, which I thought should have kept it running. My setup is:

    Apache Reverse Proxy -> Varnish Cache -> NodeBB

    But I should point out that it died when it was just Apache Reverse Proxy -> NodeBB and that varnishstat does seem to be reusing connections.
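    For anyone following along, the Apache side of that chain is a plain mod_proxy vhost. A minimal sketch, assuming Varnish listens on 127.0.0.1:6081 (the server name and ports are hypothetical):

    ```
    # /etc/httpd/conf.d/nodebb.conf -- hypothetical ports and hostname
    <VirtualHost *:80>
        ServerName forum.example.com
        ProxyPreserveHost On
        # Forward everything to Varnish, which fronts NodeBB
        ProxyPass / http://127.0.0.1:6081/
        ProxyPassReverse / http://127.0.0.1:6081/
    </VirtualHost>
    ```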

  • You should really use ./nodebb start to keep NodeBB running instead. The built-in loader removes the need for forever or supervisor 🙂

  • @julian That's fine, I made the changes to crontab. Still, the app dies at random intervals, and I don't see anything in the logs about it.
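    The crontab change mentioned above is presumably an @reboot entry so NodeBB comes back after a restart. A minimal sketch, using the /var/www/nodebb path from the error log earlier in the thread:

    ```
    # crontab -e, as the user that owns the NodeBB install
    @reboot cd /var/www/nodebb && ./nodebb start
    ```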

  • @Guiri A lot of programs semi-randomly die due to running out of resources, and it's no big secret Apache is big on the ol' server load. Is your resource usage (especially RAM) consistently maxed out or contain massive spikes around the times NodeBB crashed?

  • @Xiph Thanks for the tip. I switched to Nginx with a Varnish frontend and my RAM usage decreased from 1.2G to 600M. Thanks! Still, NodeBB randomly died a few hours in. Could the Node process need RAM or swap configuration? I'm really curious why it's dying. I'm also starting it via ./nodebb start as @julian advised.
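    For anyone replicating the Nginx side of that setup, a minimal proxy sketch, assuming NodeBB listens on its default port 4567 (the server name is hypothetical). The Upgrade/Connection headers matter because NodeBB uses WebSockets:

    ```
    # /etc/nginx/conf.d/nodebb.conf -- hypothetical server name
    server {
        listen 80;
        server_name forum.example.com;

        location / {
            proxy_pass http://127.0.0.1:4567;
            proxy_http_version 1.1;
            # Pass WebSocket upgrades through so socket.io keeps working
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
    ```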

  • I can't really tell if I'm experiencing the same situation, but every now and then, after about 24-36h, NodeBB randomly "dies" and displays a 502 error, which I can only resolve by rebooting the VPS. I'm also running it behind an nginx proxy, and NodeBB is started on every reboot using ./nodebb start in an init.d service.

  • @markkus Can you paste your initscript?

  • @julian
    #! /bin/bash
    # /etc/init.d/nodebb

    case "$1" in
        start)
            echo "Starting NodeBB..."
            cd /home/nodebb/foorum
            ./nodebb start
            ;;
        stop)
            echo "Stopping NodeBB..."
            cd /home/nodebb/foorum
            ./nodebb stop
            ;;
        *)
            echo "Usage: /etc/init.d/nodebb {start|stop}"
            exit 1
            ;;
    esac
    exit 0
  • Can you try changing ./nodebb start to ./nodebb start --no-daemon?

  • @julian
    Alright, I'll give it a try.

    Edit: Tried it & it gives a 502 error instantly.

  • Hm, weird... what's the error shown in the output.log file?

  • Sorry for bringing up this old post, but I've been struggling with this problem for almost a month now. I've tried many things (rewriting the start script, checking the logs for any error output, and so on), but nothing has helped so far. I started thinking that maybe the small amount of RAM was the reason for those unusual "crashes", so I upgraded to a 1024MB DigitalOcean VPS. That did improve the situation: NodeBB still crashes every now and then, but no longer on a daily basis.
    So I'm wondering whether there is a memory allocation problem in NodeBB, or in Node.js itself? I'm on NodeBB 0.4.3, not the latest build, of course.

    I'll do some tests and write back with some RAM allocation results.

  • Do you have swap enabled on the VPS?

  • @julian Is this something you would recommend doing? My RAM sits at about 73% when NodeBB is running. 🙂

  • @a_5mith It's harmless, especially if you're not using all that extra disk space in your droplet.

    When we were running this forum on a $5 droplet, we also ran php-fastcgi and mysql. MySQL would run out of memory every couple days because we didn't have swap enabled.

    Keep in mind that when Redis holds all of the db information in-memory, it also needs twice that amount when it tries to persist to disk. It makes a copy of the in-memory database, in memory (heh), and then pushes that to the .rdb file.
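    Enabling swap on a droplet, as suggested above, is only a few commands. A sketch assuming a 1G swap file at /swapfile (the size is an assumption; all of this must run as root):

    ```
    # Create and enable a 1G swap file
    dd if=/dev/zero of=/swapfile bs=1M count=1024
    chmod 600 /swapfile    # swap files must not be world-readable
    mkswap /swapfile       # write the swap signature
    swapon /swapfile       # enable it immediately
    # Persist across reboots
    echo '/swapfile none swap sw 0 0' >> /etc/fstab
    ```

    With swap in place, Redis's fork-and-persist memory spike described above can spill to disk instead of triggering the kernel OOM killer.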
