Performance on T2.Micro
-
Nice, thanks!
-
@julian I did notice when issuing
./nodebb restart
that it seems to only be listening on one port. Is there something wrong with my url syntax above?
27/5 20:12 [993] - info: NodeBB Ready
27/5 20:12 [993] - info: Enabling 'trust proxy'
27/5 20:12 [993] - info: NodeBB is now listening on: 0.0.0.0:4567
Additionally:
cat < /dev/tcp/127.0.0.1/4568
-bash: connect: Connection refused
-bash: /dev/tcp/127.0.0.1/4568: Connection refused
-
Present config:
{
  "url": "https://domain.com",
  "port": ["4567", "4568"],
  ...
}
After ./nodebb restart:
27/5 20:16 [1034] - info: NodeBB Ready
27/5 20:16 [1034] - info: Enabling 'trust proxy'
27/5 20:16 [1034] - info: NodeBB is now listening on: 0.0.0.0:4567
This is curious because nginx doesn't generate an error. Maybe it's smart enough to route to the only available port?
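For context, the nginx side of a setup like this usually looks something like the sketch below; treat the upstream name and headers as assumptions rather than the actual config from this thread (the ports come from the config.json above, and ip_hash from the note later in the thread). nginx marks an unreachable backend as failed and retries the next one, which would explain why no error surfaced while only 4567 was listening:

```nginx
# Hypothetical upstream matching the two ports in config.json above.
# If one backend refuses connections, nginx marks it failed and sends
# traffic to the remaining server, so the outage is silent to clients.
upstream nodebb {
    ip_hash;
    server 127.0.0.1:4567;
    server 127.0.0.1:4568;
}

server {
    listen 443 ssl;
    server_name domain.com;

    location / {
        proxy_pass http://nodebb;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # Websocket support for NodeBB's socket.io traffic
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```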
EDIT: Nevermind. ./nodebb restart doesn't load the new config, as you point out in the documentation. Issuing ./nodebb start solves the problem:
27/5 20:19 [1151] - info: NodeBB Ready
27/5 20:19 [1151] - info: Enabling 'trust proxy'
27/5 20:19 [1151] - info: NodeBB is now listening on: 0.0.0.0:4567
27/5 20:19 [1152] - info: NodeBB Ready
27/5 20:19 [1152] - info: Enabling 'trust proxy'
27/5 20:19 [1152] - info: NodeBB is now listening on: 0.0.0.0:4568
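A quick way to confirm both processes are up, using the same bash /dev/tcp trick as earlier in the thread (ports assumed from the config above):

```shell
#!/usr/bin/env bash
# Probe each configured NodeBB port; bash's /dev/tcp needs no extra tools.
for port in 4567 4568; do
  if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    echo "port ${port}: open"
  else
    echo "port ${port}: closed"
  fi
done
```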
-
Results using Siege with Static Caching and Proxy Clustering
A few of you messaged me asking about Siege vs. Locust. The simulations differ, since with Siege I'm not logged in first the way I am with Locust, but nonetheless the tiny t2.micro seems to hold up well in terms of transactions/s:
siege -q -t10s https://domain.com -c 50
Lifting the server siege... done.
Transactions: 281 hits
Availability: 100.00 %
Elapsed time: 9.25 secs
Data transferred: 1.10 MB
Response time: 1.00 secs
Transaction rate: 30.38 trans/sec
Throughput: 0.12 MB/sec
Concurrency: 30.48
Successful transactions: 281
Failed transactions: 0
Longest transaction: 1.72
Shortest transaction: 0.38
At 500 users I essentially DDoS the site; it stays usable, but pages take 10s or more to load:
siege -q https://domain.com -c 500
^C
Lifting the server siege... done.
Transactions: 1129 hits
Availability: 100.00 %
Elapsed time: 42.16 secs
Data transferred: 4.41 MB
Response time: 14.26 secs
Transaction rate: 26.78 trans/sec
Throughput: 0.10 MB/sec
Concurrency: 381.87
Successful transactions: 1129
Failed transactions: 0
Longest transaction: 20.54
Shortest transaction: 1.70
Under a load of 100 users, however, the site remains responsive:
siege -q https://domain.com -c 100
^C
Lifting the server siege... done.
Transactions: 3547 hits
Availability: 100.00 %
Elapsed time: 108.09 secs
Data transferred: 13.89 MB
Response time: 2.51 secs
Transaction rate: 32.82 trans/sec
Throughput: 0.13 MB/sec
Concurrency: 82.21
Successful transactions: 3547
Failed transactions: 0
Longest transaction: 3.97
Shortest transaction: 0.80
Bearing in mind that there are just four data points on this graph, so take it with a grain of salt, they give a surprisingly linear model for the maximum response time under load:
Worst Response Time = 0.0416 x (# of Users) - 0.349
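The fit itself is easy to reproduce. The sketch below regresses the "Longest transaction" values from the three Siege runs shown above against concurrency; since the original fit reportedly used four points (one of which doesn't appear in this post), the intercept comes out slightly different:

```shell
# Ordinary least-squares fit of worst response time vs. concurrent users,
# using the three "Longest transaction" values from the siege runs above.
awk 'BEGIN {
  split("50 100 500", u, " ");        # concurrency levels
  split("1.72 3.97 20.54", w, " ");   # longest transaction (secs)
  n = 3;
  for (i = 1; i <= n; i++) {
    sx += u[i]; sy += w[i]; sxy += u[i] * w[i]; sxx += u[i] * u[i];
  }
  slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  intercept = (sy - slope * sx) / n;
  printf "worst ~= %.4f * users %+.3f\n", slope, intercept;
}'
# -> worst ~= 0.0417 * users -0.285
```

The slope matches the 0.0416 in the post almost exactly; the intercept differs because of the missing fourth point.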
-
@julian I did that on my previous box, but I found using nginx for SSL termination in combination with Varnish to be a bit CPU heavy. What other variables affect things like response time? What kinds of response times do you see with community.nodebb.org and how big of a box does it run on? Do you cluster NodeBB as well as cluster Redis?
-
@Luciano-Miranda, the database is a single, locally hosted Redis instance. I'd like to compare Mongo vs. Redis performance; maybe I'll spin up a fresh r3.large or m3.large and test the differences between those DBs.
What results do you get with your box when hit with siege, and what type of box is it?
The real question is: what is the bottleneck on the t2.micro? Why do I never get above 8 requests/sec? The CPU never spikes above 15% in these tests, so I don't think it's NodeBB or the underlying Node.js performance. Perhaps it's disk I/O fetching the static content, or Redis performance?
@julian How do we specify a unix socket instead of localhost? Is there any advantage? Is pipelining used here?
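Regarding the unix-socket question: on the Redis side this is just two directives in redis.conf, and sockets skip the TCP stack for same-host traffic, which can shave a bit of latency. The path below is illustrative, and whether NodeBB's config accepts a socket path in place of host/port is an assumption worth checking against its docs:

```conf
# /etc/redis/redis.conf -- enable a unix socket alongside (or instead of) TCP
unixsocket /var/run/redis/redis.sock
unixsocketperm 770

# Quick sanity check from the shell (redis-cli supports -s for sockets):
#   redis-cli -s /var/run/redis/redis.sock ping
```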
-
One possible problem with this testing: since you use ip_hash in the nginx config and run locust/siege/ab etc. from a single machine, all requests will be directed to one NodeBB instance. Having two NodeBB processes doesn't help, because all the requests come from the same IP. You can confirm this by checking CPU usage in top during the benchmark.
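If the goal is just to exercise both processes from one load-generating machine, a sketch like the following (hypothetical, based on the ip_hash detail above) removes the per-IP pinning. Note the trade-off: without ip_hash, NodeBB's websocket sessions may need sticky sessions or a shared session store to keep working for real users:

```nginx
upstream nodebb {
    # least_conn (or the default round-robin) spreads one client's
    # requests across both workers; ip_hash would pin them to a
    # single backend, defeating the cluster during a benchmark.
    least_conn;
    server 127.0.0.1:4567;
    server 127.0.0.1:4568;
}
```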