xidui
Posts
-
socket.io log flooding
-
mongodb always down
-
mongodb always down
@pichalite
Thanks for the information. Do you have any idea how those teams deal with MongoDB going down every few days? A MongoDB cluster, or something else?
As I see it, a MongoDB cluster is too heavy for a deployment of this size with NodeBB.
-
@Jop-V
Thank you
-
socket.io log flooding
@xidui said in socket.io log flooding:

I added this config:
"socket.io": { "transports": ["websocket"] }
The flooding errors seem fewer than before, but I still need to monitor over several days.

The performance seems better after I applied this change in the config.

Behaviors before the change:
- Most of the logs are io emit timeout or io emit transport close. Only a very small portion (less than 10 percent) of requests were processed successfully.
- Every time I restart the site it cannot respond for about 5 minutes or longer, even though I can see in the log that the process is already listening on 4567.
- Some of the socket.io/EIO?xxxx requests failed on the client side.

Behaviors after the change:
- No io emit timeout was found at all. io emit transport close still exists: about 50% or less at peak hours and about 10% at off-peak hours.
- The site is fast enough at peak hours and it recovers quickly even if I restart.
- None of the requests failed on the client side.

@administrators
I think my issue has been accidentally and surprisingly solved, at least temporarily, by this small change. I wonder why, and whether this config is recommended, or whether ["polling", "websocket"] is better? And do you have any idea about the root cause of the issue when I use ["polling", "websocket"]?
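For context, a transports override like the one quoted above lives under the "socket.io" key of NodeBB's config.json. A minimal sketch, where the url, port and database values are placeholders rather than anything taken from this thread:

{
    "url": "https://example.org",
    "port": 4567,
    "database": "mongo",
    "socket.io": {
        "transports": ["websocket"]
    }
}

Dropping "polling" from the list should force clients straight onto websockets, which would explain why the failing socket.io/EIO polling requests disappeared after the change.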
-
how can nodebb print log according to each request
This is the peak today (about 75% of the historical peak).

Total: 13965 (kernel 0)
TCP:   46823 (estab 13808, closed 32768, orphaned 231, synrecv 0, timewait 32768/0), ports 0

Transport Total IP    IPv6
*         0     -     -
RAW       0     0     0
UDP       0     0     0
TCP       14055 10241 3814
INET      14055 10241 3814
FRAG      0     0     0

Actually, the performance today is better and faster after I applied a change in the config:
"socket.io": { "transports": ["websocket"] }
That issue was at this link.
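The socket summary above matches the format printed by the iproute2 ss statistics command; presumably it was collected with something like:

ss -s    # prints the totals plus the per-transport (RAW/UDP/TCP/INET/FRAG) table shown above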
-
how can nodebb print log according to each request
@julian
I am not sure which file limit actually counts, so I'm providing them all... hahaha
-
how can nodebb print log according to each request
Thanks for the instructions!
This is the data for normal hours; I will provide the peak data several hours later.

Total: 7472 (kernel 0)
TCP:   39360 (estab 7309, closed 31841, orphaned 193, synrecv 0, timewait 31841/0), ports 0

Transport Total IP    IPv6
*         0     -     -
RAW       0     0     0
UDP       0     0     0
TCP       7519  5865  1654
INET      7519  5865  1654
FRAG      0     0     0

Followed by the ulimit output:

admin@discuss:~/NodeBB$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63711
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 10000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63711
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

The file limits in /etc/sysctl.conf:

# Digital Ocean Recommended Settings:
net.core.wmem_max=12582912
net.core.rmem_max=12582912
net.ipv4.tcp_rmem= 10240 87380 12582912
net.ipv4.tcp_wmem= 10240 87380 12582912
vm.swappiness = 10
fs.file-max = 70000

The file limits for the node process:

admin@discuss:~/NodeBB$ cat /proc/15565/limits
Limit                     Soft Limit   Hard Limit   Units
Max cpu time              unlimited    unlimited    seconds
Max file size             unlimited    unlimited    bytes
Max data size             unlimited    unlimited    bytes
Max stack size            8388608      unlimited    bytes
Max core file size        0            unlimited    bytes
Max resident set          unlimited    unlimited    bytes
Max processes             63711        63711        processes
Max open files            30000        30000        files
Max locked memory         65536        65536        bytes
Max address space         unlimited    unlimited    bytes
Max file locks            unlimited    unlimited    locks
Max pending signals       63711        63711        signals
Max msgqueue size         819200       819200       bytes
Max nice priority         0            0
Max realtime priority     0            0
Max realtime timeout      unlimited    unlimited    us

The file limits in /etc/security/limits.conf:

*    soft nofile 10000
*    hard nofile 30000
root soft nofile 10000
root hard nofile 30000

The file limit in /proc/sys/fs/file-max:

admin@discuss:~/NodeBB$ cat /proc/sys/fs/file-max
70000
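As a side note, a sketch of how these limits are typically reloaded and double-checked after editing the files above; the pgrep pattern and the <pid> placeholder are assumptions, not details from the post:

sudo sysctl -p                   # re-read /etc/sysctl.conf (fs.file-max and the tcp settings)
ulimit -n                        # soft open-files limit for the current shell
pgrep -f "node ./loader.js"      # find a NodeBB process id (the pattern is a guess)
cat /proc/<pid>/limits           # per-process limits, as quoted above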
-
mongodb always down
@Adam-Poniatowski
Try systemctl enable mongod if you use CentOS 7.
It will link the service into the system's startup configuration.
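A minimal sketch of the systemd commands implied here, assuming the service is named mongod as in MongoDB's official CentOS 7 packages:

sudo systemctl enable mongod     # creates the startup symlink so mongod starts at boot
sudo systemctl start mongod      # start it immediately
systemctl status mongod          # confirm it is running and see recent log lines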
-
socket.io log flooding
That's a single machine with 4 processes, and the 1000 figure is active users, not online users.
-
socket.io log flooding
I added this config:
"socket.io": { "transports": ["websocket"] }
The flooding errors seem fewer than before, but I still need to monitor over several days.
-
socket.io log flooding
I notice that when the site is at peak hours, most of the logs are io logs such as disconnecting or disconnect.
Outside peak hours such errors become less frequent but still exist. The nginx config seems right; otherwise sockets could not work even during off-peak hours.
@administrators
I wonder, is there any limit on users in NodeBB? During peak hours, active users exceed 1000 on the admin page monitor. Is that too large for NodeBB to handle on a single machine?
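For reference, a minimal sketch of the websocket-capable proxy block that NodeBB's nginx documentation describes; port 4567 comes from the posts above, the rest of the values are generic:

location / {
    proxy_http_version 1.1;                  # required for websocket upgrades
    proxy_set_header Upgrade $http_upgrade;  # pass the upgrade handshake through
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:4567;
}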
-
socket.io log flooding
Can we disable sockets?
-
mongodb always down
@PitaJ
Yes, I just followed this instruction, and I run 4 processes with nginx in front of them.
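A sketch of what "4 processes with nginx in front of them" usually looks like for NodeBB: a port array in config.json plus an nginx upstream. The exact ports and the ip_hash choice are assumptions, not details from the post:

# config.json excerpt - one NodeBB process per listed port (ports are assumed)
#   "port": [4567, 4568, 4569, 4570]

# nginx upstream - ip_hash keeps each client on the same process so
# socket.io sessions stay sticky across polling/websocket requests
upstream nodebb {
    ip_hash;
    server 127.0.0.1:4567;
    server 127.0.0.1:4568;
    server 127.0.0.1:4569;
    server 127.0.0.1:4570;
}

The location block from the earlier sketch would then proxy_pass to http://nodebb instead of a single port.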
-
socket.io log flooding
@PitaJ
We run it on a Linode machine, not behind Cloudflare. By the way, we previously ran it on a DigitalOcean machine, and it behaved like this there too.
-
socket.io log flooding
@PitaJ
Thanks for the hint.
The client-side log is described here:
https://community.nodebb.org/topic/9752/some-polling-requests-fail
-
socket.io log flooding
This is part of my log:

io: 0 emit [ 'disconnecting', 'ping timeout' ]
io: 0 emit [ 'disconnect', 'ping timeout' ]
io: 0 emit [ 'disconnecting', 'ping timeout' ]
io: 0 emit [ 'disconnect', 'ping timeout' ]
io: 0 emit [ 'disconnecting', 'transport close' ]
io: 0 emit [ 'disconnect', 'transport close' ]
io: 0 emit [ 'disconnecting', 'ping timeout' ]
io: 0 emit [ 'disconnect', 'ping timeout' ]
io: 0 emit [ 'disconnecting', 'transport close' ]
io: 0 emit [ 'disconnect', 'transport close' ]
io: 0 emit [ 'disconnecting', 'transport close' ]
io: 0 emit [ 'disconnect', 'transport close' ]
io: 0 emit [ 'disconnecting', 'ping timeout' ]
io: 0 emit [ 'disconnect', 'ping timeout' ]
io: 0 emit [ 'disconnecting', 'transport close' ]

Only a few logs are normal requests, and the node process eats up most of the CPU resources.

UPDATE 2016/11/3:
"socket.io": { "transports": ["websocket"] }
The performance seems better after I applied this change in the config.

Behaviors before the change:
- Most of the logs are io emit timeout or io emit transport close. Only a very small portion (less than 10 percent) of requests were processed successfully.
- Every time I restart the site it cannot respond for about 5 minutes or longer, even though I can see in the log that the process is already listening on 4567.
- Some of the socket.io/EIO?xxxx requests failed on the client side.

Behaviors after the change:
- No io emit timeout was found at all.
- io emit transport close still exists: about 50% or less at peak hours and about 10% at off-peak hours.
- The site is fast enough at peak hours and it recovers quickly even if I restart.
- None of the requests failed on the client side.
-
how can nodebb print log according to each request

io: 0 emit [ 'disconnecting', 'ping timeout' ]
io: 0 emit [ 'disconnect', 'ping timeout' ]
io: 0 emit [ 'disconnecting', 'ping timeout' ]
io: 0 emit [ 'disconnect', 'ping timeout' ]
io: 0 emit [ 'disconnecting', 'transport close' ]
io: 0 emit [ 'disconnect', 'transport close' ]
io: 0 emit [ 'disconnecting', 'ping timeout' ]
io: 0 emit [ 'disconnect', 'ping timeout' ]
io: 0 emit [ 'disconnecting', 'transport close' ]
io: 0 emit [ 'disconnect', 'transport close' ]
io: 0 emit [ 'disconnecting', 'transport close' ]
io: 0 emit [ 'disconnect', 'transport close' ]
io: 0 emit [ 'disconnecting', 'ping timeout' ]
io: 0 emit [ 'disconnect', 'ping timeout' ]
io: 0 emit [ 'disconnecting', 'transport close' ]

Plenty of such errors... do you have any idea? The website is very slow when the log is flooding like this.
-
mongodb always down
@Jop-V.
OK, I will give it a try next time. But as I recall, there was no useful information.
-
mongodb always down
@Jop-V.
It's Ubuntu 14.04.
By the way, what's your average CPU usage at peak time or at the moment you restart your server? Every time I restart my server it's terrible: the server fails to respond for several minutes, and during that time NodeBB, nginx and mongo eat up the CPU.
-
mongodb always down
@baris
It has 8GB RAM and 2GB swap, which is large enough, I think?