Performance on T2.Micro
-
Thought I'd share my tests on how well NodeBB 0.7.x performs on a T2.Micro.
Information
- Ubuntu 14.04.2 LTS running 3.13.0-52-generic
- nginx 1.8.0, configured with 1 worker process and 2048 worker connections.
- Tested with Locust 0.7.2
- Simulating 50 concurrent users
Locust Test File
```python
from locust import HttpLocust, TaskSet, task


class UserBehavior(TaskSet):
    def on_start(self):
        """on_start is called when a Locust starts, before any task is scheduled"""
        self.login()

    def login(self):
        self.client.post("/login", {"username": "login", "password": "password"})

    @task(2)
    def index(self):
        self.client.get("/")

    @task(1)
    def category(self):
        self.client.get("/category/2/name")


class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 5000
    max_wait = 9000
```
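For reference, a headless run of this file with Locust 0.7.x would look something like the command below (the filename and hatch rate are placeholders, not values from my actual run):

```
$ locust -f locustfile.py --host=https://domain.com --no-web -c 50 -r 10
```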
Results w/o nginx caching or Varnish
If I understand Locust correctly, the table below shows each request in the left column and its response-time distribution in milliseconds in the remaining columns. For example, 50% of index requests completed in under 160 ms, and 95% in under 690 ms.
The maximum responses/sec for `GET /` was 4.69, and 2.15 for `GET /category/...`. I found it curious that my CPU usage for NodeBB never ran above 13%, averaging 10%. What other ways can I increase the speed here?

| Name | # of Requests | 50% (ms) | 95% (ms) | 100% (ms) |
| --- | --- | --- | --- | --- |
| GET / | 797 | 160 | 690 | 3398 |
| GET /category/2/cat_name | 365 | 370 | 1800 | 6409 |

Results with nginx static caching, no clustering
See here for more info on static caching. With just 50 users there was no statistically significant difference. Avg. req/s was 4.10 for `GET /` and 2.70 for `GET /category/2/name`.
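For anyone who skips the link: the static-caching change amounts to serving the asset paths straight from disk with an `expires` header, roughly like this (the 7-day lifetime is just an example value; the paths mirror the full config further down):

```nginx
location ~ ^/(images|sounds|uploads|vendor) {
    root /home/ubuntu/NodeBB/public/;
    # let browsers cache static assets instead of re-requesting them
    expires 7d;
    add_header Cache-Control public;
    try_files $uri $uri/ @nodebb;
}
```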
Results with static caching and clustering
Question: should the `url` value change to include both ports, or should it stay as my base URL, `domain.com`?

config.json
{ "url": "https://domain.com", "port": ["4567", "4568"], ... }
nginx.conf
```nginx
server {
    listen 80;
    server_name www.domain.com domain.com;
    return 301 https://domain.com$request_uri;
}

server {
    listen 443 ssl spdy;
    server_name www.domain.com;
    return 301 https://domain.com$request_uri;
    ssl_certificate /etc/nginx/conf/domain-unified.crt;
    ssl_certificate_key /etc/nginx/conf/domain.com.key;
}

upstream io_nodes {
    ip_hash;
    server 127.0.0.1:4567;
    server 127.0.0.1:4568;
}

server {
    listen 443 ssl spdy;
    ssl on;
    ssl_certificate /etc/nginx/conf/domain-unified.crt;
    ssl_certificate_key /etc/nginx/conf/domain.com.key;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:50m;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    ssl_stapling on;        # Requires nginx >= 1.3.7
    ssl_stapling_verify on; # Requires nginx >= 1.3.7
    ssl_session_timeout 1d;
    ssl_trusted_certificate /etc/nginx/conf/startssl.root.pem;
    resolver 8.8.4.4 8.8.8.8 valid=300s;
    resolver_timeout 5s;
    ssl_dhparam /etc/nginx/conf/dhparam.pem;
    server_name domain.com;

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_redirect off;

    # Socket.IO Support
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    gzip on;
    gzip_min_length 1000;
    gzip_proxied off;
    gzip_types text/plain application/xml application/x-javascript text/css application/json;

    location @nodebb {
        proxy_pass http://io_nodes;
    }

    location ~ ^/(images|language|sounds|templates|uploads|vendor|src\/modules|nodebb\.min\.js|stylesheet\.css|admin\.css) {
        root /home/ubuntu/NodeBB/public/;
        try_files $uri $uri/ @nodebb;
    }

    location / {
        proxy_pass http://io_nodes;
    }
}
```
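For what it's worth, after editing this file the config can be validated and applied without a full restart:

```
$ sudo nginx -t
$ sudo service nginx reload
```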
Req/s increased to 5.80 for `GET /` and held steady at 2.80 for `GET /category/2/name`. This was true for up to 500 users, with the CPU holding constant throughout all tests, occasionally spiking to 14% usage.
-
Nice, thanks!
-
@julian I did notice when issuing `./nodebb restart` that it seems to only be listening on one port. Is there something wrong with my url syntax above?

```
27/5 20:12 [993] - info: NodeBB Ready
27/5 20:12 [993] - info: Enabling 'trust proxy'
27/5 20:12 [993] - info: NodeBB is now listening on: 0.0.0.0:4567
```
Additionally:
```
$ cat < /dev/tcp/127.0.0.1/4568
-bash: connect: Connection refused
-bash: /dev/tcp/127.0.0.1/4568: Connection refused
```
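An equivalent check with netstat, which also shows which process (if any) owns each port:

```
$ sudo netstat -tlnp | grep -E ':456[78]'
```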
-
Present config:

```
{
    "url": "https://domain.com",
    "port": ["4567", "4568"],
    ...
}
```
After `./nodebb restart`:

```
27/5 20:16 [1034] - info: NodeBB Ready
27/5 20:16 [1034] - info: Enabling 'trust proxy'
27/5 20:16 [1034] - info: NodeBB is now listening on: 0.0.0.0:4567
```
This is curious because nginx doesn't generate an error. Maybe it's smart enough to route to the only available port?
EDIT: Nevermind. `./nodebb restart` doesn't load the new config, as you point out in the documentation. Issuing `./nodebb start` solves the problem:

```
27/5 20:19 [1151] - info: NodeBB Ready
27/5 20:19 [1151] - info: Enabling 'trust proxy'
27/5 20:19 [1151] - info: NodeBB is now listening on: 0.0.0.0:4567
27/5 20:19 [1152] - info: NodeBB Ready
27/5 20:19 [1152] - info: Enabling 'trust proxy'
27/5 20:19 [1152] - info: NodeBB is now listening on: 0.0.0.0:4568
```
-
Results using Siege with Static Caching and Proxy Clustering
A few of you messaged me and asked about Siege vs. Locust. The simulation is different since with Siege I'm not logged in first like I am with Locust, but nonetheless the tiny T2.micro seems to hold up well in terms of transactions/s:
```
$ siege -q -t10s https://domain.com -c 50
Lifting the server siege... done.

Transactions:              281 hits
Availability:           100.00 %
Elapsed time:             9.25 secs
Data transferred:         1.10 MB
Response time:            1.00 secs
Transaction rate:        30.38 trans/sec
Throughput:               0.12 MB/sec
Concurrency:             30.48
Successful transactions:   281
Failed transactions:         0
Longest transaction:      1.72
Shortest transaction:     0.38
```
At 500 users, I essentially DDoS the site, and while it remains usable, pages take up to 10s to load:
```
$ siege -q https://domain.com -c 500
^C
Lifting the server siege... done.

Transactions:             1129 hits
Availability:           100.00 %
Elapsed time:            42.16 secs
Data transferred:         4.41 MB
Response time:           14.26 secs
Transaction rate:        26.78 trans/sec
Throughput:               0.10 MB/sec
Concurrency:            381.87
Successful transactions:  1129
Failed transactions:         0
Longest transaction:     20.54
Shortest transaction:     1.70
```
But under a load of 100 users, the site remains responsive:
```
$ siege -q https://domain.com -c 100
^C
Lifting the server siege... done.

Transactions:             3547 hits
Availability:           100.00 %
Elapsed time:           108.09 secs
Data transferred:        13.89 MB
Response time:            2.51 secs
Transaction rate:        32.82 trans/sec
Throughput:               0.13 MB/sec
Concurrency:             82.21
Successful transactions:  3547
Failed transactions:         0
Longest transaction:      3.97
Shortest transaction:     0.80
```
Keep in mind that there are just four data points behind this, so take it with a grain of salt, but the fit gives a surprisingly linear model for the maximum response time under load:

Worst Response Time = 0.0416 × (# of Users) − 0.349
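As a quick sanity check, plugging the tested user counts back into that fit lines up reasonably well with the longest transactions Siege reported above (1.72 s, 3.97 s, and 20.54 s):

```python
def worst_response_time(users):
    """Linear fit from the load tests above; returns seconds."""
    return 0.0416 * users - 0.349

for n in (50, 100, 500):
    print("%3d users -> %.2f s" % (n, worst_response_time(n)))
# 50 users -> 1.73 s, 100 users -> 3.81 s, 500 users -> 20.45 s
```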
-
@julian I did that on my previous box, but I found using nginx for SSL termination in combination with Varnish to be a bit CPU heavy. What other variables affect things like response time? What kinds of response times do you see with community.nodebb.org and how big of a box does it run on? Do you cluster NodeBB as well as cluster Redis?
-
@Luciano-Miranda, the database is a single, locally hosted Redis instance. I'd like to compare Mongo vs. Redis performance. Maybe I'll spin up a fresh R3 or M3.large and test the differences between those DBs.
What results do you get with your box when hit with `siege`, and what type of box is it?

The real question is: what is the bottleneck on the T2.micro? Why do I never get above 8 requests/sec? The CPU never spikes above 15% in these tests, so I don't think it's NodeBB or the underlying Node.js performance. Perhaps it's disk I/O fetching the static content, or Redis performance?
@julian How do we specify a Unix socket instead of localhost? Is there any advantage there? Is pipelining used here?
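For context on the socket half of that question: on the nginx side, an upstream can point at Unix sockets instead of loopback TCP ports, which skips the TCP stack entirely. The socket paths below are hypothetical, and NodeBB would need to be configured to listen on them for this to work:

```nginx
upstream io_nodes {
    # hypothetical socket paths; NodeBB must be set up to listen here
    server unix:/var/run/nodebb/nodebb-1.sock;
    server unix:/var/run/nodebb/nodebb-2.sock;
}
```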
-
One possible problem with this testing: since you use `ip_hash` in the nginx config and run locust/siege/ab from a single machine, all the requests will be directed to a single NodeBB instance. So having two NodeBB processes doesn't help, since all requests come from the same IP. You can confirm this by checking CPU usage in `top` during the benchmark.
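If that is what's happening, one quick way to test it would be to temporarily switch the upstream to nginx's default round-robin so a single-IP benchmark exercises both processes (keeping in mind that socket.io generally wants sticky sessions like `ip_hash` in production):

```nginx
upstream io_nodes {
    # default round-robin: alternates requests across backends
    # regardless of client IP (for benchmarking only)
    server 127.0.0.1:4567;
    server 127.0.0.1:4568;
}
```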