High performance stack
-
Same day, second guide.
Because I am always on the edge with the latest software and always looking to achieve the highest performance possible, I also run a slightly different server stack.

What are we going to do?

Well, basically we will set up a complete stack featuring NGINX, HHVM, NodeJS, Redis and MariaDB. So, the stuff most webmasters need.

Choosing a server
Before we can start, we first need a server, and this is where most of the mistakes already happen.
How many resources am I going to need? How much do I need to spend? Uptime? CDN? Bandwidth?!

Straight ahead: with an efficient stack it doesn't matter a lot. Of course you shouldn't expect to serve millions of requests from a server with 512MB RAM and hope it will go fine; instead you should try to find a healthy "price/value" balance.
For people on a budget who are interested in a high SLA and no bandwidth limits, I can recommend OVH's VPS line. Starting at 3,49€ a month with 2GB of RAM, a 10GB SSD and a 2,4GHz vCore, it is quite alright for most beginner projects. I personally use a VPS SSD 2 server for Redis & MySQL and a VPS Cloud 2 server for the website itself. So all in all, I serve ca. 1.5 million visitors on a 26,10€ setup.
The only thing I dislike about OVH is the 100MBit/s connection, which is too slow in my opinion. However, I already have an eye on Scaleway, a project of Online.net, a competitor of OVH.
In the end you have to decide which provider you want to use, but in my opinion DO is just some hipster shit. Hosting the same systems on DO would cost me almost 4 times more, not to mention the traffic costs. But okay, it is up to you.
The setup
Time for magic. Because everyone can install NGINX from the repos, we will make it a bit more special and compile NGINX with Google PageSpeed, which provides some extra features to optimize your site's performance, like Memcached support or automatic compression.

All steps shown below were run on Ubuntu 14.04 and assume a clean image, without anything preinstalled, like Apache or MySQL.
NGINX
So let's go and install NGINX with PageSpeed. As there might be newer versions of NGINX & PageSpeed from time to time, I suggest checking http://nginx.org and developers.google.com/speed/pagespeed/module/build_ngx_pagespeed_from_source every now and then.

```
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential zlib1g-dev libpcre3 libpcre3-dev unzip libssl-dev
cd
NPS_VERSION=1.10.33.6
wget https://github.com/pagespeed/ngx_pagespeed/archive/release-${NPS_VERSION}-beta.zip -O release-${NPS_VERSION}-beta.zip
unzip release-${NPS_VERSION}-beta.zip
cd ngx_pagespeed-release-${NPS_VERSION}-beta/
wget https://dl.google.com/dl/page-speed/psol/${NPS_VERSION}.tar.gz
tar -xzvf ${NPS_VERSION}.tar.gz  # extracts to psol/
cd
# We are going for HTTP/2!
NGINX_VERSION=1.9.12
wget http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz
tar -xvzf nginx-${NGINX_VERSION}.tar.gz
cd nginx-${NGINX_VERSION}/
./configure --add-module=$HOME/ngx_pagespeed-release-${NPS_VERSION}-beta ${PS_NGX_EXTRA_FLAGS} \
  --prefix=/usr/local/nginx \
  --sbin-path=/usr/local/sbin/nginx \
  --conf-path=/etc/nginx/nginx.conf \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --pid-path=/run/nginx.pid \
  --lock-path=/run/lock/subsys/nginx \
  --with-http_ssl_module \
  --with-ipv6 \
  --with-http_v2_module \
  --with-http_stub_status_module \
  --with-http_gzip_static_module \
  --without-mail_pop3_module \
  --without-mail_imap_module \
  --without-mail_smtp_module
make
sudo make install
```
We have successfully installed NGINX with PageSpeed! However, we now need to create a script to control it:
- Go to /etc/init and create a file named nginx.conf with the following contents:
```
# nginx
description "nginx http daemon"
author "George Shammas <[email protected]>"

start on (filesystem and net-device-up IFACE=lo)
stop on runlevel [!2345]

env DAEMON=/usr/local/sbin/nginx
env PID=/var/run/nginx.pid

expect fork
respawn
respawn limit 10 5
#oom never

pre-start script
    $DAEMON -t
    if [ $? -ne 0 ]
    then
        exit $?
    fi
end script

exec $DAEMON
```
- Afterwards run `initctl reload-configuration`. Verify that the config was successfully loaded by using `initctl list | grep nginx`.
- You can now use `service nginx start` to start NGINX.
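To double-check that the freshly compiled binary actually contains PageSpeed, you can inspect its configure arguments (a quick sanity check; the binary path is the one from the build above):

```shell
# Show the compile-time options and look for the PageSpeed module.
/usr/local/sbin/nginx -V 2>&1 | grep -o ngx_pagespeed
```

If the command prints `ngx_pagespeed`, the module is compiled in.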
First part is done!
Memcached
As PageSpeed supports Memcached, we will gladly take advantage of it and install it straight away.
sudo apt-get install memcached
You may now edit `/etc/memcached.conf` and set your own memory limits.
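For example, the two settings most worth adjusting in `/etc/memcached.conf` are the memory cap and the listen address (the values below are illustrative, not recommendations):

```
# /etc/memcached.conf (excerpt)
# -m sets the cache memory limit in MB; raise it if PageSpeed entries get evicted too quickly.
-m 256
# Keep memcached bound to localhost unless you have firewalled the port properly.
-l 127.0.0.1
```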
Afterwards restart memcached by running `service memcached restart`.

PageSpeed
PageSpeed itself also needs a config, otherwise it won't be used by NGINX. Therefore open `/etc/nginx/nginx.conf` and add these two lines before the http block closes:

```
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
```
If they do not already exist, create folders named conf.d and sites-enabled. Go into conf.d and create a file named `pagespeed.conf` with the following content:

```
# ON
pagespeed on;
pagespeed FetchHttps enable;
pagespeed XHeaderValue "PageSpeed";

# Memcached
pagespeed MemcachedThreads 4;
pagespeed MemcachedServers "127.0.0.1:11211";
pagespeed FileCachePath /var/ngx_pagespeed_cache;
pagespeed EnableFilters extend_cache;

# PageSpeed Admin
pagespeed StatisticsPath /ngx_pagespeed_statistics;
pagespeed GlobalStatisticsPath /ngx_pagespeed_global_statistics;
pagespeed MessagesPath /ngx_pagespeed_message;
pagespeed ConsolePath /pagespeed_console;
pagespeed AdminPath /pagespeed_admin;
pagespeed GlobalAdminPath /pagespeed_global_admin;

# PageSpeed Cache Purge
pagespeed EnableCachePurge on;
pagespeed PurgeMethod PURGE;

# Analytics
pagespeed EnableFilters insert_ga;
pagespeed EnableFilters make_google_analytics_async;
pagespeed AnalyticsID UA-XXXXXXXX-1;

# Images
pagespeed EnableFilters inline_images;
pagespeed EnableFilters resize_images;

# Bandwidth
pagespeed RewriteLevel OptimizeForBandwidth;

# Minify
pagespeed EnableFilters remove_comments;
pagespeed EnableFilters combine_css;
pagespeed EnableFilters flatten_css_imports;
pagespeed EnableFilters combine_javascript;
pagespeed EnableFilters inline_import_to_link;
pagespeed EnableFilters inline_css;
pagespeed EnableFilters inline_google_font_css;
pagespeed EnableFilters collapse_whitespace;

# DNS
pagespeed EnableFilters insert_dns_prefetch;
```
Most of it should be self-explanatory; otherwise check the official docs about the specific filters: https://developers.google.com/speed/pagespeed/module/filters

Basically we are close to the finish. However, before we create our first site config, we need to install the other components.
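Once a site is live behind this config, a quick way to verify that PageSpeed is actually rewriting responses is to look for its response header (mysite.com is a placeholder for your own domain):

```shell
# Thanks to the XHeaderValue directive above, optimized responses carry this header.
curl -sI http://mysite.com/ | grep -i x-page-speed
```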
MariaDB
```
sudo apt-get install software-properties-common
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
sudo add-apt-repository 'deb [arch=amd64,i386] http://ftp.hosteurope.de/mirror/mariadb.org/repo/10.1/ubuntu trusty main'
sudo apt-get update
sudo apt-get install mariadb-server
```
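After the installation it is worth hardening the fresh database server straight away; MariaDB ships the standard secure-installation script for that:

```shell
# Interactive: sets the root password, removes anonymous users,
# disables remote root login and drops the test database.
sudo mysql_secure_installation
```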
HHVM
Some might ask what HHVM is. Well, to put it in a short, technical perspective: HHVM is a PHP drop-in replacement. In a subjective way: it makes PHP burn, because that's how fast it is. Here is a little benchmark of PHP 7 vs HHVM 3.6.1 (which is fairly old):
But now let's not discuss this further and proceed with the installation.
```
sudo apt-get install software-properties-common
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0x5a16e7281be7a449
sudo add-apt-repository "deb http://dl.hhvm.com/ubuntu $(lsb_release -sc) main"
sudo apt-get update
sudo apt-get install hhvm
sudo update-rc.d hhvm defaults
```
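To verify the install and wire HHVM into the webserver as a FastCGI backend, the hhvm package ships a helper script (the script path below is where the package usually installs it; treat it as an assumption if your version differs):

```shell
hhvm --version
# Registers HHVM as the FastCGI handler in the detected webserver config.
sudo /usr/share/hhvm/install_fastcgi.sh
sudo service hhvm restart
```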
Redis
Why should I use Redis?
To be honest, this is a fairly good question, and you should consider using MongoDB instead if you don't have a lot of RAM or don't want to resort to swap. However, as we are striving for maximum performance and assume that we have some RAM left, we can stick with Redis.

```
sudo add-apt-repository ppa:chris-lea/redis-server
sudo apt-get update
sudo apt-get install redis-server
```
Because many users forget to secure their installs, I will also add a little guide on how to secure your Redis installation.
- Go to `/etc/redis/redis.conf` and search for `requirepass`.
- Remove the comment and change "foobared" to the password you want. I personally suggest md5-hashing a word.
- Save and close the config and restart Redis by running `service redis-server restart`.
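A quick way to confirm the password is actually enforced (SOMEPASSWORD stands for whatever you set as requirepass):

```shell
# Without auth this should now fail with a NOAUTH error...
redis-cli ping
# ...while an authenticated ping should answer PONG.
redis-cli -a SOMEPASSWORD ping
```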
NodeJS
Time for the most essential part of the installation: NodeJS!
Without it, we are stuck with PHP, which kinda sucks.

```
curl -sL https://deb.nodesource.com/setup_5.x | sudo -E bash -
sudo apt-get install -y nodejs
```

Wait, did we just install NodeJS 5?!
Yes. NodeJS 5 works just fine with NodeBB master and v1.x. At least I never ran into any issues with it.

Creating our first site config
So now that we have installed everything, it is time to create our first site config and get everything up and running.
Go to `/etc/nginx/sites-enabled/` and create a config named `MYSITE.conf`; below you can find an SSL-ready config from my site:

```
# Pre-Config
server_tokens off;
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload';

# Force SSL
server {
    listen 80;
    server_name mysite.com;
    return 301 https://$server_name$request_uri;
}

# Rewrite www to non-www
server {
    server_name www.mysite.com;
    rewrite ^(.*) https://mysite.com$1 permanent;
}

# Open Server
server {
    listen 443 ssl http2;
    server_name mysite.com;
    access_log off;
    error_log off;
    root /home/web/mysite.com/public_html;

    # Pagespeed
    pagespeed on;

    # Let's Encrypt
    location ~ /.well-known {
        allow all;
    }

    # SSL
    ssl on;
    ssl_certificate /etc/letsencrypt/live/mysite.com-0001/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mysite.com-0001/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/mysite.com-0001/fullchain.pem;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 10m;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    resolver 8.8.8.8 8.8.4.4;

    # Max Upload
    client_max_body_size 100M;

    # Gzip
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_comp_level 6;
    gzip_min_length 1500;
    gzip_proxied any;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    # Stats
    # location /nginx_status {
    #     stub_status on;
    # }

    # WordPress
    location / {
        index index.html index.htm index.php;
        try_files $uri $uri/ /index.php?$args;
    }

    # NodeBB
    location ^~ /forum {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://io_nodes;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Cache
    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires 1y;
    }

    # HHVM
    include /etc/nginx/hhvm.conf;
} # Close Server

# NodeBB Upstream
upstream io_nodes {
    ip_hash;
    server 127.0.0.1:4567;
    keepalive 120;
}
```
Please note that the above config utilizes HSTS, which will keep your site pinned to HTTPS for quite a while. If you don't plan to use SSL for a long period, I suggest using the standard HTTP config:
```
# Pre-Config
server_tokens off;
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";

# Rewrite www to non-www
server {
    server_name www.mysite.com;
    rewrite ^(.*) http://mysite.com$1 permanent;
}

# Open Server
server {
    listen 80;
    server_name mysite.com;
    access_log off;
    error_log off;
    root /home/web/mysite.com/public_html;

    # Pagespeed
    pagespeed on;

    # Max Upload
    client_max_body_size 100M;

    # Gzip
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_comp_level 6;
    gzip_min_length 1500;
    gzip_proxied any;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    # Stats
    # location /nginx_status {
    #     stub_status on;
    # }

    # WordPress
    location / {
        index index.html index.htm index.php;
        try_files $uri $uri/ /index.php?$args;
    }

    # NodeBB
    location ^~ /forum {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://io_nodes;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Cache
    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires 1y;
    }

    # HHVM
    include /etc/nginx/hhvm.conf;
} # Close Server

# NodeBB Upstream
upstream io_nodes {
    ip_hash;
    server 127.0.0.1:4567;
    keepalive 120;
}
```
- Save and close the file.
- Run `service nginx restart`. Your site should be up and running by now.
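Before restarting, it is a good habit to let NGINX validate the new config first, so a typo in MYSITE.conf cannot take the whole site down:

```shell
# -t checks the syntax of nginx.conf and every included file without
# touching the running instance; restart only if the check passes.
sudo nginx -t && sudo service nginx restart
```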
Side notes:
The above config was made to run NodeBB in a subfolder. Its config.json will look something like this:

```
{
    "url": "https://mysite.com/forum",
    "secret": "9f5847ss-95ab-4f08-bb53-39fc3377dsa2",
    "database": "redis",
    "redis": {
        "host": "127.0.0.1",
        "port": "6379",
        "password": "SOMEPASSWORD",
        "database": "0"
    }
}
```
For further help on how to set up NodeBB, I suggest checking the official docs: http://nodebb-francais.readthedocs.org/projects/nodebb/en/latest/installing/os/ubuntu.html

Bonus - Upgrading NGINX
As you might imagine, using the NGINX mainline release means that there will be updates every now and then.
To perform such an upgrade you only need to rerun the NGINX steps shown above. Just be sure to replace the NGINX version with the one you want to upgrade to. It will work just fine and won't cause any downtime. After you have finished compiling, you only need to restart NGINX.

Bonus - Supervisor
Besides maximum performance, we are also targeting maximum uptime. Therefore I recommend using supervisor, which will automatically restart NodeBB whenever it crashes or the server gets rebooted.

```
sudo apt-get install supervisor
```
- Now go to `/etc/supervisor/conf.d` and create a file named nodebb.conf with the following content:

```
[program:nodebb]
command = node /home/web/mysite.com/public_html/forum/app.js
directory = /home/web/mysite.com/public_html/forum/
user = node
autostart = true
autorestart = true
stdout_logfile = /var/log/supervisor/nodebb.log
stderr_logfile = /var/log/supervisor/nodebb_err.log
```
- Save and close the file. Before we restart supervisor so that it loads the new config file, we need to create a user named node. You may change its name to whatever you want; just be sure to edit it in the config file above as well.

```
useradd node
mkdir /home/node
```
- Depending on where your web directory is, you also need to chown all the NodeBB files to the user node. Otherwise you will run into a permission denied issue, causing NodeBB not to start. In our example the main folder is /home/web/mysite.com/public_html/forum.

```
chown -R node:node /home/web/mysite.com/public_html/forum
```
- Be sure that NodeBB isn't running, then run `service supervisor restart`. If everything goes fine, NodeBB will come up.
- You can now control the forum by using `supervisorctl restart/stop/start nodebb`.
Bonus - Let's Encrypt
Ever wanted to use SSL? Do it now, free of any cost, by using Let's Encrypt:

```
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt
./letsencrypt-auto --help
./letsencrypt-auto certonly --webroot -w /home/web/mysite.com/public_html -d mysite.com -d www.mysite.com
```
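Keep in mind that Let's Encrypt certificates expire after 90 days, so renewing is worth scripting right away (the renew subcommand exists in recent letsencrypt clients; treat the exact invocation as an assumption for your version):

```shell
# Renews all certificates that are close to expiry, then restarts NGINX to load them.
cd letsencrypt
./letsencrypt-auto renew
sudo service nginx restart
```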
Bonus - NGINX Performance Tweaking
NGINX might be a reliable and fast webserver, but it can go even faster with a proper setup.
Therefore we want to edit `/etc/nginx/nginx.conf` and make the necessary optimizations. Let's take a look at this for example:

```
user www-data;
worker_processes 2;
pid /run/nginx.pid;

events {
    use epoll;
    worker_connections 16384;
    multi_accept on;
}
```
What you can see here is NGINX running 2 worker processes, each allowing up to 16384 connections.
To get the number of cores of your machine, run `grep processor /proc/cpuinfo | wc -l`. This shows how many `worker_processes` are "possible". Of course you can set a higher number than the actual limit, however that would be quite useless.
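The core count can be turned into a ready-to-paste directive in one line (assuming a Linux box with /proc/cpuinfo available):

```shell
# Count the CPU cores and print a matching worker_processes directive.
cores=$(grep -c ^processor /proc/cpuinfo)
echo "worker_processes $cores;"
```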
For the limit of `worker_connections`, run `ulimit -n`, which is nothing else than the open file limit.

Next we are going to jump to the `http` block and add this:

```
sendfile on;
tcp_nopush on;
tcp_nodelay on;
types_hash_max_size 2048;
```
Because it would take too much time to explain these settings in detail, I recommend checking these two articles: https://t37.net/nginx-optimization-understanding-sendfile-tcp_nodelay-and-tcp_nopush.html & http://nginx.org/en/docs/hash.html

Bonus - Ghost on NodeJS 5.8
As you will maybe notice, Ghost has no NodeJS 5.x support yet. That doesn't mean it won't work, but simply that the version wasn't added to its `package.json`.
Therefore we will jump to the `"engines":` section and edit it as follows:

```
"engines": {
    "node": "~0.10.0 || ~0.12.0 || ^4.2.0 || ^5.8.0",
    "iojs": "~1.2.0"
},
```
You can now proceed installing Ghost on NodeJS 5.8.
-
This is a really fantastic tutorial!
Let me just add a tip for NodeBB.
@AOKP said:
```
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt
./letsencrypt-auto --help
./letsencrypt-auto certonly --webroot -w /home/web/mysite.com/public_html -d mysite.com -d www.mysite.com
```
Here the best place for `-w` will be `nodebb/public`.

```
./letsencrypt-auto certonly --webroot -w /<Your_NodeBB_Directory>/public -d mysite.com -d www.mysite.com
```
NodeBB will then serve the file which letsencrypt puts locally into the `public` directory.
(Of course I assume that one has set up the web server correctly and the forum is accessible from the internet.)
@qgp9 I do not advise doing that. Not that it is wrong, but as we are using this on the whole site, and NodeBB is in the above case installed in a subfolder, I would put it into `public_html`, which is our main directory. But in the end it depends on your overall folder structure anyway. Yet I need to mention that putting the `.well-known` folder somewhere else requires an additional edit in the NGINX config.
@AOKP I understand your points, but like this forum, I believe people usually run NodeBB at the root of a URL, and that is where the letsencrypt server actually checks.
(Even, going by the NodeBB docs (maybe), serving NodeBB from a subfolder is not well tested or considered (again, even though I'm doing it).) Now, if NodeBB is serving the root folder, that means public_html doesn't work.
Of course, one can unlink the proxy to NodeBB, restart the webserver to serve `public_html`, get a certificate, re-link the proxy and restart the webserver, but that's complicated. Nevertheless, I totally understand and mostly agree with your points and approach.
- If one serves NodeBB from a subfolder, what I said is just bullshit. Agree.
- With root, there are pros and cons, and I fully accept the benefit of that complicated but clean way. Mostly agree.
-
The quality of the tutorials created lately in this new category has been outstanding. Good work.
-
@psychobunny it's the best you can get out of markdown. Actually I would have liked to use the first post as an index table instead, and reserve the 2nd, 3rd, 4th and 5th posts for the instructions.
Just as a general note:
The composer is buggy when a post has a URL preview picture. In general I would remove the preview picture, as the favicon preview is already nice enough.
Giving this a bump.
Anyone up for Tengine?
-
This is a great article.
What about `varnish cache`?
? -
News?
Great guide.
Can you update the guide with refreshed commands (for the new software versions) and improvements?
Thanks a lot.
@master-antonio sorry that I am reading this only after such a long time. But why not? Let's see how well Markdown can be used for docs.