Thought I'd share my tests on how well NodeBB 0.7.x performs on a T2.Micro.
Information
- Ubuntu 14.04.2 LTS, kernel 3.13.0-52-generic
- nginx 1.8.0, configured with 1 worker process and 2048 worker connections
- Tested with Locust 0.7.2
- Simulating 50 concurrent users
Locust Test File
from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    def on_start(self):
        """ on_start is called when a Locust starts, before any task is scheduled """
        self.login()

    def login(self):
        self.client.post("/login", {"username": "login", "password": "password"})

    @task(2)
    def index(self):
        self.client.get("/")

    @task(1)
    def category(self):
        self.client.get("/category/2/name")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 5000
    max_wait = 9000
Results w/o nginx caching or Varnish
If I understand Locust correctly, the table below shows the request in the left column and the response-time distribution in ms in the remaining columns. For example, 50% of index requests completed in under 160 ms, and 95% in under 690 ms.
The maximum responses/sec for GET / was 4.69, and 2.15 for GET /category/.... I found it curious that my CPU usage for NodeBB never ran above 13%, averaging 10%. What other ways can I increase the speed here?
Name | # of Requests | 50% (ms) | 95% (ms) | 100% (ms)
GET / | 797 | 160 | 690 | 3398
GET /category/2/cat_name | 365 | 370 | 1800 | 6409
Results with nginx static caching, no clustering
See here for more info on static caching. With just 50 users there was no statistical difference. Avg. req/s was 4.10 for GET / and 2.70 for GET /category/2/name.
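For anyone who wants a concrete example: static caching here just means letting nginx serve NodeBB's public assets straight from disk with an expires header, roughly like the location block below. This is only a sketch based on my setup; the asset paths, root directory, and 7-day expiry are assumptions you may need to adjust, and it relies on the @nodebb named location defined in the full config further down.

# Sketch only: serve NodeBB's static assets from disk and let browsers cache them.
# Paths and the 7d expiry are assumptions; missing files fall back to the @nodebb upstream.
location ~ ^/(images|sounds|uploads|vendor) {
    root /home/ubuntu/NodeBB/public/;
    expires 7d;
    add_header Cache-Control "public";
    try_files $uri $uri/ @nodebb;
}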
Results with static caching and clustering
Question: should the url change to include both ports, or should it just be my base URL, domain.com?
config.json
{
    "url": "https://domain.com",
    "port": ["4567", "4568"],
    ...
}
nginx.conf
server {
    # Redirect all HTTP traffic to HTTPS on the bare domain
    listen 80;
    server_name www.domain.com domain.com;
    return 301 https://domain.com$request_uri;
}

server {
    # Redirect https://www.domain.com to the bare domain
    listen 443 ssl spdy;
    server_name www.domain.com;
    return 301 https://domain.com$request_uri;
    ssl_certificate /etc/nginx/conf/domain-unified.crt;
    ssl_certificate_key /etc/nginx/conf/domain.com.key;
}

upstream io_nodes {
    # ip_hash keeps each client on the same NodeBB process, which socket.io needs
    ip_hash;
    server 127.0.0.1:4567;
    server 127.0.0.1:4568;
}

server {
    listen 443 ssl spdy;
    ssl on;
    ssl_certificate /etc/nginx/conf/domain-unified.crt;
    ssl_certificate_key /etc/nginx/conf/domain.com.key;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:50m;

    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;

    ssl_stapling on; # Requires nginx >= 1.3.7
    ssl_stapling_verify on; # Requires nginx >= 1.3.7
    ssl_session_timeout 1d;
    ssl_trusted_certificate /etc/nginx/conf/startssl.root.pem;
    resolver 8.8.4.4 8.8.8.8 valid=300s;
    resolver_timeout 5s;
    ssl_dhparam /etc/nginx/conf/dhparam.pem;

    server_name domain.com;

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_redirect off;

    # Socket.IO Support
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    gzip on;
    gzip_min_length 1000;
    gzip_proxied off;
    gzip_types text/plain application/xml application/x-javascript text/css application/json;

    location @nodebb {
        proxy_pass http://io_nodes;
    }

    # Serve static assets directly from the NodeBB public directory
    location ~ ^/(images|language|sounds|templates|uploads|vendor|src\/modules|nodebb\.min\.js|stylesheet\.css|admin\.css) {
        root /home/ubuntu/NodeBB/public/;
        try_files $uri $uri/ @nodebb;
    }

    location / {
        proxy_pass http://io_nodes;
    }
}
Req/s increased to 5.80 for GET / and held steady at 2.80 for GET /category/2/name. This was true for up to 500 users, with CPU usage holding constant throughout all tests and occasionally spiking to 14%.