Hosting a static website using nginx
Most of my career has been centred around mobile development so I can count on one hand the number of times I've had to set up a Linux server. I have been using Apache for this website for over a decade now and decided to give nginx a try. I wanted to write down the steps I took to spin up the server so I would have something to refer back to whenever I need to do this again. But one thing I have learned about this blog is that these articles somehow manage to get shared far and wide and help people in ways I never could have anticipated.
I am going to write from the perspective of setting up nginx on the most recent long-term support version of Ubuntu (24.04) using a virtual private server host such as Linode, DigitalOcean, Hetzner, AWS, Microsoft Azure, etc. While the commands may be specific to Ubuntu, the general idea should apply to any version of Linux as well as any host, even your own hardware such as a Raspberry Pi.
SSH into the machine and create a new user
Your first step is to determine the IP address of the machine so you can SSH into it. I am assuming that during the setup process you were not asked to create a new user, and were given the opportunity to provide an SSH public key. If this is the case you should easily be able to SSH in by running the command:
ssh root@XXX.XXX.XXX.XXX
Congratulations, you now have access to your machine as the root user, which is one of the biggest security holes you can leave in your server. Generally speaking you should almost never be logged in as the root user directly and instead make use of the sudo command to request elevation as necessary.
So let's start with creating a new user by running the command:
adduser johnny
and follow the prompts, supplying whatever information you wish. After you've finished creating the user you need to give them the ability to run sudo by executing:
sudo usermod -aG sudo johnny
Now you have a new user named "johnny" who can run commands using sudo. But you still don't have a way to log into the machine as johnny, assuming you used SSH public key authentication for the root user. You need to copy the SSH public key to the new user's home directory to ensure they can also SSH into the machine.
mkdir /home/johnny/.ssh
cp ~/.ssh/authorized_keys /home/johnny/.ssh/
chown -R johnny:johnny /home/johnny/.ssh
chmod -R go-rwx /home/johnny/.ssh
Verify you've set everything up correctly by logging out and SSHing back in as the new user.
ssh johnny@XXX.XXX.XXX.XXX
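While you're at it, you can confirm the new user is able to elevate privileges by running:
sudo whoami
which should print "root" after you enter your password.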
Update all the things!
Unless you've installed a version of Linux that was released in the last 24 hours, I can guarantee you that something critical is out of date. So let's run the following commands:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get clean
and breathe a sigh of relief knowing you're fully patched up, at least for the next 24 hours.
Most versions of Linux have some sort of unattended upgrades package that can automatically install updates for you. On Ubuntu it is called unattended-upgrades. You can run sudo systemctl status unattended-upgrades.service to check its status and sudo apt install unattended-upgrades if it is not installed.
You always want to configure unattended-upgrades because it is guaranteed that you will not remember to update your server as often as you should. Also, for something as basic as a static web server you should have no fear of it being automatically restarted to install a critical update that could prevent your machine from being compromised. I am not going to give detailed configuration steps here because it is an absolutely insane rabbit hole to go down. If you're interested, Google "configuring unattended-upgrades" and you'll find thousands of articles.
But I will call out what I believe is the most important configuration step and that is allowing your machine to automatically reboot itself. To enable this you need to modify the file /etc/apt/apt.conf.d/50unattended-upgrades and uncomment the line:
//Unattended-Upgrade::Automatic-Reboot "false";
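and change its value from "false" to "true" so the line reads:
Unattended-Upgrade::Automatic-Reboot "true";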
If a critical security update is released that requires your machine to restart you can be confident you'll receive it.
Configure SSH
Depending on your hosting provider and the defaults of your Linux installation, you may have already taken steps towards hardening SSH access. But you should still do your own pass by running sudo vi /etc/ssh/sshd_config and ensuring the following options are set (a consolidated example follows the list).
- Change Port 22 to Port XXXXX so you are no longer using the default SSH port. Security through obscurity should not be relied upon, but it can be one of many layers designed to protect your system. Serious attackers will use port scanning, but the vast majority of automated attacks will attempt to hit 22 and then move on.
- Ensure PermitRootLogin no is set. There is absolutely no reason to allow someone to log into your server as root. They can log in as another user and then request escalation.
- Ensure PubkeyAuthentication yes is set. You only want users to be able to access your server via public key authentication, never passwords.
- Ensure PasswordAuthentication no is set. Same reasons as above.
- Ensure UsePAM no is set. This one is more nuanced: the Pluggable Authentication Module is powerful but most likely overkill for a simple static web server. I disable it by default until I find a reason to use it.
- Add AllowUsers johnny to the bottom of the file. You always want to have a whitelist so any new user you add is not automatically granted permission to SSH in.
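Putting it all together, the relevant portion of your sshd_config should look something like this (with XXXXX replaced by whatever port you chose):
Port XXXXX
PermitRootLogin no
PubkeyAuthentication yes
PasswordAuthentication no
UsePAM no
AllowUsers johnny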
These changes don't take effect automatically. You need to either restart your server or run sudo systemctl restart ssh to restart your SSH service.
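One caveat I should flag: recent versions of Ubuntu (22.10 and later, including 24.04) run SSH with systemd socket activation, so if your port change does not seem to take effect you may also need to run:
sudo systemctl daemon-reload
sudo systemctl restart ssh.socket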
It is at this point you can modify the ~/.ssh/config on whatever machine you are using to SSH into your server and create an explicit SSH hostname for it. For this blog I would use:
Host reidmain.com
    HostName XXX.XXX.XXX.XXX
    Port XXXXX
    User johnny
where the HostName is the IP address of the server, Port is the new SSH port you just switched to, and User is the name of whatever user was created. Now whenever you type ssh reidmain.com you will always hit the same IP address on the same port with the same user.
Enabling a firewall
To continue hardening your server you should enable a firewall. Uncomplicated Firewall (ufw) is the quick and dirty way to do this:
sudo apt install ufw
sudo ufw allow XXXXX/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
These commands install ufw, allow port XXXXX for SSH connections, allow ports 80 and 443 for HTTP and HTTPS connections, and then enable the firewall.
Running sudo ufw status verbose will allow you to confirm that the firewall is on and the correct rules have been enabled:
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To                         Action      From
--                         ------      ----
80/tcp                     ALLOW IN    Anywhere
443/tcp                    ALLOW IN    Anywhere
XXXXX/tcp                  ALLOW IN    Anywhere
80/tcp (v6)                ALLOW IN    Anywhere (v6)
443/tcp (v6)               ALLOW IN    Anywhere (v6)
XXXXX/tcp (v6)             ALLOW IN    Anywhere (v6)
Install nginx
One of the great things about nginx is how easy it is to install and get running. Simply type
sudo apt-get install nginx
then open a browser, enter the IP address of your server, and you should see the nginx landing page. It really is that easy and the out-of-the-box configuration is solid.
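You can also confirm the service is up by running:
sudo systemctl status nginx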
Upload static files
Now we need to upload our static files to the server so nginx has something to display.
The first step is to decide where this content will live. nginx defaults to /var/www/html but you shouldn't use this folder; instead opt to create your own in case you choose to host multiple websites. For this blog I would run:
sudo mkdir /var/www/reidmain.com
You want this directory to be owned by the root user but readable to all users because nginx uses the "www-data" user by default. If you created this folder in a user's home directory you would have to worry about more complex group permissions to ensure nginx could access the content correctly. This also has the benefit of ensuring that only the root user can write to this folder which means www-data or your SSH user isn't going to accidentally overwrite anything.
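You can double-check the ownership and permissions with:
ls -ld /var/www/reidmain.com
Assuming the directory was created with sudo and the default umask, you should see it owned by root and readable/executable by everyone else (drwxr-xr-x).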
The second step is to decide how your static files will be uploaded to the server. I use git for this for two reasons. It allows me to easily push changes to the server using SSH. It also allows me to roll back to a previous change if I notice something is incorrect.
I don't believe git is a default package on most Linux versions so you'll have to run sudo apt-get install git. Then you can create a bare git repository with:
sudo git init --bare /var/www/reidmain.com.git
Go to whatever git repository you are planning on uploading to your server and run:
git remote add prod ssh://reidmain.com/var/www/reidmain.com.git
Assuming you're working with a "master" branch you should now be able to run git push prod master and your bare git repository will be populated with the latest changes.
The first time you try this you're going to receive an error because the /var/www/reidmain.com.git directory is not writable by anyone. The best practice here would be to create some sort of group that you could add various SSH users to, and give that group the ability to write to /var/www/reidmain.com.git. But since this is a basic web server for a single blog I am going to just transfer ownership of the folder to our SSH user.
sudo chown -R johnny /var/www/reidmain.com.git
Try running git push prod master again and you should have no issues.
The final step is to check out a copy of your bare git repository into the /var/www/reidmain.com folder you created earlier.
sudo git clone /var/www/reidmain.com.git/ /var/www/reidmain.com/
And that is it. You have uploaded your static files to your web server.
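One thing to keep in mind is that future pushes only update the bare repository, not this checkout. Whenever you push new changes you will need to refresh the checkout by running:
sudo git -C /var/www/reidmain.com pull
If you would rather automate this, a post-receive hook in the bare repository can perform the same update on every push.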
Configure nginx
All that is left is to configure nginx to read from /var/www/reidmain.com/. To do this you need to add a new configuration file to the /etc/nginx/sites-available/ directory.
Run sudo vi /etc/nginx/sites-available/reidmain.com and copy in the following:
server {
    listen 80;
    listen [::]:80;

    charset utf-8;

    root /var/www/reidmain.com/;

    server_name reidmain.com www.reidmain.com XXX.XXX.XXX.XXX;

    location / {
        try_files $uri $uri/ =404;
    }
}
where XXX.XXX.XXX.XXX is the IP address of your server. This is done just in case your domain ever fails to resolve. You can then fall back to inputting your IP address into the browser directly and nginx will still realize it should serve up your static content.
Next you need to symbolically link this file into the /etc/nginx/sites-enabled folder so nginx will know that it should read it upon start.
sudo ln -s /etc/nginx/sites-available/reidmain.com /etc/nginx/sites-enabled/
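Before restarting anything, you can verify your configuration is valid by running:
sudo nginx -t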
Restart nginx by running sudo systemctl restart nginx and refresh your browser. The default nginx landing page should be gone and you should be looking at your website.
Supporting https
By default your web server won't support https because you need an SSL certificate for that. Thankfully Let's Encrypt and Certbot have automated the process. For a simple web server like ours you only need to run the certbot command with its nginx plugin and you are off to the races. I will list below the commands that I needed to run to get it working but definitely make sure you cross-reference them with the Certbot website because it is quite possible things have changed.
Certbot prefers you use Snap to install itself so you first need to ensure that Ubuntu has not already installed Certbot by running sudo apt remove certbot.
Next you need to make sure Snap is installed. This should be done by default for Ubuntu server but you can look at the Snap website for installation instructions if necessary.
Now you can install Certbot using Snap by running:
sudo snap install --classic certbot
If you run which certbot you will probably see /snap/bin/certbot, which indicates that everything is working as expected. If you don't get a response then you'll need to add a symbolic link using sudo ln -s /snap/bin/certbot /usr/bin/certbot to ensure certbot is in your $PATH.
You can now run Certbot where it will generate SSL certificates and automatically update your nginx configuration.
sudo certbot --nginx
You will be prompted for an email address that will be associated with the certificates generated. In a couple of seconds you should have confirmation that nginx has been updated to work with the certificates and you're good to go. Check out /etc/nginx/sites-available/reidmain.com and you should see comments indicating the lines that Certbot added. It really is that simple.
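For reference, the additions should look something like this (the exact paths will depend on your domain):
listen [::]:443 ssl ipv6only=on; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/reidmain.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/reidmain.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot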
Let's Encrypt's certificates are only good for 90 days so you need to renew them regularly. Thankfully Certbot automatically installs something that will automate this for us. You can see it is enabled by running sudo systemctl status snap.certbot.renew.service. If you would like to do a dry run of renewing your certificates you can use sudo certbot renew --dry-run.