Performance on T2.Micro

General Discussion
  • #1

    Thought I'd share my tests on how well NodeBB 0.7.x performs on a T2.Micro.


    • Ubuntu 14.04.2 LTS running 3.13.0-52-generic
    • nginx 1.8.0, configured with 1 worker process and 2048 possible connections.
    • Tested with Locust 0.7.2
    • Simulating 50 concurrent users

    Locust Test File

    from locust import HttpLocust, TaskSet, task

    class UserBehavior(TaskSet):
        def on_start(self):
            """ on_start is called when a Locust starts, before any task is scheduled """
            self.login()

        def login(self):
            self.client.post("/login", {"username": "login", "password": "password"})

        @task
        def index(self):
            self.client.get("/")

        @task
        def category(self):
            self.client.get("/category/2/cat_name")

    class WebsiteUser(HttpLocust):
        task_set = UserBehavior

    Results w/o nginx caching or Varnish

    If I understand Locust correctly, the table below shows the request in the left column and the response-time distribution in milliseconds in the following columns. For example, 50% of index requests completed in under 160 ms, and 95% in under 690 ms.

    The maximum responses/sec for GET / was 4.69 and
    2.15 for GET /category/.... I found it curious that my CPU usage for NodeBB never ran above 13%, averaging 10%. What other ways can I increase the speed here?

    Name                       # of Requests   50%   95%    100%
    GET /                      797             160   690    3398
    GET /category/2/cat_name   365             370   1800   6409
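For anyone reproducing this, the percentile columns can be computed from raw response times like so (a sketch of the nearest-rank method, not Locust's own code):

```python
# Compute the 50th/95th/100th percentile of a list of response times (ms),
# the same summary Locust prints in its distribution table.
def percentile(times, pct):
    """Nearest-rank percentile of a sample: the ceil(pct/100 * n)-th smallest value."""
    ordered = sorted(times)
    # ceil division via negation, 1-indexed rank, clamped to at least 1
    rank = max(1, -(-pct * len(ordered) // 100))
    return ordered[rank - 1]

# Toy sample of 8 response times in ms:
times = [120, 140, 150, 160, 170, 200, 690, 3398]
print(percentile(times, 50), percentile(times, 95), percentile(times, 100))
```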

    Results with nginx static caching, no clustering

    See here for more info on static caching. With just 50 users there was no statistically significant difference. Avg. req/s for GET / was 4.10, and 2.70 for GET /category/2/name.
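For reference, the static-caching approach boils down to a location block along these lines (paths and the expiry value here are illustrative, not my exact config):

```nginx
# Illustrative sketch: serve NodeBB's static assets directly from nginx
# with a caching header, falling back to NodeBB for anything else.
# The asset list and 7-day expiry are assumptions for illustration.
location ~ ^/(images|sounds|uploads|vendor) {
    root /home/ubuntu/NodeBB/public/;
    expires 7d;
    add_header Cache-Control "public";
    try_files $uri $uri/ @nodebb;
}
```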

    Results with static caching and clustering

    Question: Should the url change to include both ports, or should it stay as my base URL?


        "url": "",
        "port": ["4567", "4568"],


    server {
        listen 80;
        return 301$request_uri;
    }

    server {
        listen 443 ssl spdy;
        ssl_certificate /etc/nginx/conf/domain-unified.crt;
        ssl_certificate_key /etc/nginx/conf/;
        return 301$request_uri;
    }

    upstream io_nodes {
        ip_hash;
    }

    server {
        listen 443 ssl spdy;
        ssl on;
        ssl_certificate /etc/nginx/conf/domain-unified.crt;
        ssl_certificate_key /etc/nginx/conf/;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:50m;
        ssl_session_timeout 1d;
        ssl_stapling on; # Requires nginx >= 1.3.7
        ssl_stapling_verify on; # Requires nginx >= 1.3.7
        ssl_trusted_certificate /etc/nginx/conf/startssl.root.pem;
        ssl_dhparam /etc/nginx/conf/dhparam.pem;
        resolver valid=300s;
        resolver_timeout 5s;

        add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_redirect off;

        # Socket.IO Support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        gzip            on;
        gzip_min_length 1000;
        gzip_proxied    off;
        gzip_types      text/plain application/xml application/x-javascript text/css application/json;

        location @nodebb {
            proxy_pass http://io_nodes;
        }

        location ~ ^/(images|language|sounds|templates|uploads|vendor|src\/modules|nodebb\.min\.js|stylesheet\.css|admin\.css) {
            root /home/ubuntu/NodeBB/public/;
            try_files $uri $uri/ @nodebb;
        }

        location / {
            proxy_pass http://io_nodes;
        }
    }

    Req/s increased to 5.80 for GET / and held steady at 2.80 for GET /category/2/name. This was true for up to 500 users, with CPU usage holding constant throughout all tests and spiking occasionally to 14%.

  • Community Rep

    Nice, thanks!

  • #3

    @julian I did notice when issuing ./nodebb restart that it seems to only be listening on one port. Is there something wrong with my url syntax above?

    27/5 20:12 [993] - info: NodeBB Ready
    27/5 20:12 [993] - info: Enabling 'trust proxy'
    27/5 20:12 [993] - info: NodeBB is now listening on:


    cat < /dev/tcp/
    -bash: connect: Connection refused
    -bash: /dev/tcp/ Connection refused
  • GNU/Linux

    Remove the port number and colon from url. When you start it, you should see "NodeBB is listening..." written to stdout twice.
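    In other words, config.json should look something like this (the domain here is a placeholder, not the actual site):

```json
{
    "url": "https://example.org",
    "port": ["4567", "4568"]
}
```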

  • #5

    Present config:

        "url": "",
        "port": ["4567", "4568"],

    After ./nodebb restart:

    27/5 20:16 [1034] - info: NodeBB Ready
    27/5 20:16 [1034] - info: Enabling 'trust proxy'
    27/5 20:16 [1034] - info: NodeBB is now listening on:

    This is curious because nginx doesn't generate an error. Maybe it's smart enough to route to the only available port?

    EDIT: Nevermind. ./nodebb restart doesn't load the new config, as you point out in the documentation. Issuing ./nodebb start solves the problem:

    27/5 20:19 [1151] - info: NodeBB Ready
    27/5 20:19 [1151] - info: Enabling 'trust proxy'
    27/5 20:19 [1151] - info: NodeBB is now listening on:
    27/5 20:19 [1152] - info: NodeBB Ready
    27/5 20:19 [1152] - info: Enabling 'trust proxy'
    27/5 20:19 [1152] - info: NodeBB is now listening on:
  • #6

    If anybody has experience with proxy caching in nginx beyond what I have in the config above, let me know what I can add/improve.

  • GNU/Linux

    @Guiri You can also utilise nginx to serve the static assets, which I do see you doing. Other than that, that's about it 😄

    Varnish, possibly, although I have never configured that before.

  • #8

    Results using Siege with Static Caching and Proxy Clustering

    A few of you messaged me asking about Siege vs. Locust. The simulation is different, since Siege doesn't log in first the way my Locust script does, but the tiny T2.Micro nonetheless seems to hold up well in terms of transactions/s:

    siege -q -t10s -c 50
    Lifting the server siege...      done.
    Transactions:		         281 hits
    Availability:		      100.00 %
    Elapsed time:		        9.25 secs
    Data transferred:	        1.10 MB
    Response time:		        1.00 secs
    Transaction rate:	       30.38 trans/sec
    Throughput:		        0.12 MB/sec
    Concurrency:		       30.48
    Successful transactions:         281
    Failed transactions:	           0
    Longest transaction:	        1.72
    Shortest transaction:	        0.38

    At 500 users, I essentially DDoS the site; while it remains usable, pages take 10s or more to load:

    siege -q -c 500
    Lifting the server siege...      done.
    Transactions:		        1129 hits
    Availability:		      100.00 %
    Elapsed time:		       42.16 secs
    Data transferred:	        4.41 MB
    Response time:		       14.26 secs
    Transaction rate:	       26.78 trans/sec
    Throughput:		        0.10 MB/sec
    Concurrency:		      381.87
    Successful transactions:        1129
    Failed transactions:	           0
    Longest transaction:	       20.54
    Shortest transaction:	        1.70

    But under a load of 100 users, the site still remains responsive:

    siege -q -c 100
    Lifting the server siege...      done.
    Transactions:		        3547 hits
    Availability:		      100.00 %
    Elapsed time:		      108.09 secs
    Data transferred:	       13.89 MB
    Response time:		        2.51 secs
    Transaction rate:	       32.82 trans/sec
    Throughput:		        0.13 MB/sec
    Concurrency:		       82.21
    Successful transactions:        3547
    Failed transactions:	           0
    Longest transaction:	        3.97
    Shortest transaction:	        0.80

    Bear in mind that this graph rests on just four data points, so take it with a grain of salt, but it yields a surprisingly linear model for the worst-case response time under load:

    [Graph: max response.png]

    Worst Response Time (s) = 0.0416 x (# of Users) - 0.349
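As a sanity check, the fitted line can be evaluated against the measured worst-case times in a few lines of Python (coefficients taken from the regression above; this is a sketch, not part of the benchmark itself):

```python
# Linear model fitted to the four siege data points above:
# worst-case response time (s) as a function of concurrent users.
def worst_response_time(users):
    """Predicted longest transaction time in seconds."""
    return 0.0416 * users - 0.349

# Compare predictions against the measured longest transactions:
for users, measured in [(50, 1.72), (100, 3.97), (500, 20.54)]:
    predicted = worst_response_time(users)
    print(f"{users:4d} users: predicted {predicted:5.2f}s, measured {measured:5.2f}s")
```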

  • #9

    @julian I did that on my previous box, but I found using nginx for SSL termination in combination with Varnish to be a bit CPU heavy. What other variables affect things like response time? What kinds of response times do you see, and how big of a box does it run on? Do you cluster NodeBB as well as cluster Redis?

  • GNU/Linux

    Nice @Guiri!

    I have a question. 😃
    Where are you hosting your database, and which database is it (Mongo/Redis)?

  • #11

    @Luciano-Miranda, the database is a single, locally hosted Redis instance. I'd like to compare Mongo vs. Redis performance. Maybe I'll spin up a fresh R3 or M3.large and test the differences between those DBs.

    What results do you get with your box when hit with siege and what type of box is it?

    The real question is what is the bottleneck on the T2 micro. Why do I never get above 8 requests/sec? The CPU never spikes above 15% in these tests, so I don't think it's NodeBB or the underlying NodeJS performance. Perhaps it's Disk I/O fetching the static content, or Redis performance?

    @julian How do we specify a unix socket instead of the localhost? Any advantage there? Is pipelining used here?
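For context, what I'm imagining is an upstream block along these lines (the socket paths are hypothetical, and NodeBB would also need to be configured to bind to them):

```nginx
# Hypothetical: proxy to the NodeBB processes over unix domain sockets
# instead of TCP loopback, skipping the TCP stack entirely.
# Socket paths here are made up for illustration.
upstream io_nodes {
    ip_hash;
    server unix:/tmp/nodebb.1.sock;
    server unix:/tmp/nodebb.2.sock;
}
```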

  • NodeBB

    One possible problem with this testing: since you use ip_hash in the nginx config and run locust/siege/ab etc. from a single machine, all the requests will be directed to a single NodeBB instance. So having 2 NodeBB processes doesn't help, since all requests are coming from the same IP. You can confirm that by checking CPU usage in top during the bench.
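One way around this when benchmarking from a single IP is to temporarily switch the upstream to a different balancing method (a sketch; the loopback addresses are assumed, and note this breaks the socket.io session stickiness that ip_hash provides for real traffic):

```nginx
# For benchmarking only: least_conn spreads requests from a single client IP
# across both NodeBB processes. Real deployments want ip_hash so socket.io
# long-polling requests keep hitting the same process.
upstream io_nodes {
    least_conn;
    server 127.0.0.1:4567;
    server 127.0.0.1:4568;
}
```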
