In this post, I will explain how I deployed this website to a DigitalOcean droplet. I had this blog running locally using Docker for a while, but it had been offline for a few months for various reasons. When I decided to make it live again, I took a new 2GB droplet from DigitalOcean.
I am not going into the details of how I set up the local environment in this post; I have already covered that in another post.
So, before beginning, I had two things:
- The site running locally in three Docker containers, one each for Nginx, PHP, and MariaDB
- The site's code pushed to a GitHub repository
Next, here are the steps I followed:
Initialized a new Linux instance, running Ubuntu 24.04 LTS
This is straightforward. Log in to the DigitalOcean dashboard, then create a new server. For the size, I chose a regular 2GB instance.
SSH setup
DigitalOcean allows uploading public SSH keys from the dashboard, and you can select them as the authentication method while creating a new server. This is handy because you don't need to manually copy the key using ssh-copy-id.
Created an SSH key pair and uploaded the public key
The first step is generating a pair of keys locally using this command, with the default options.
ssh-keygen
This will generate an RSA key pair under the ~/.ssh/ folder. While generating it, I named the file do-1. You could also tell ssh-keygen to use other algorithms, such as ECDSA, instead of RSA.
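For instance, a variant that picks the key type and file name directly (do-1 is the file name used above; the comment is just a label) would be:
ssh-keygen -t ecdsa -f ~/.ssh/do-1 -C "digitalocean droplet"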
Then upload the public key to the server provider while keeping the private key safely on the local machine.
Create the server. Once it has started, try logging in over SSH as the root user using the key (password authentication should be disabled automatically). Don't forget to add the Host details to ~/.ssh/config.
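A minimal ~/.ssh/config entry could look like the following (the host alias coralnodes-do is arbitrary; the IP is the droplet's public address used later in this post):
Host coralnodes-do
    HostName 143.110.180.66
    User root
    IdentityFile ~/.ssh/do-1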
Set up a new sudo user, and enable ssh keys
While logged in as the root user, create a new non-root sudo user:
adduser abhinav
usermod -aG sudo abhinav
mkdir -p /home/abhinav/.ssh
cp ~/.ssh/authorized_keys /home/abhinav/.ssh/authorized_keys
chown -R abhinav:abhinav /home/abhinav/.ssh
Also, make sure the ~/.ssh permission is 700, and authorized_keys file permission is 600.
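For reference, those permissions can be set like this:
chmod 700 /home/abhinav/.ssh
chmod 600 /home/abhinav/.ssh/authorized_keys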
Set the PHP version correctly in the Dockerfile, then updated the GitHub repository
If the project has been sitting idle for a while, chances are that its packages, or PHP itself, are outdated.
For instance, php:fpm in the Dockerfile might have pulled whatever was the latest version at the time, and that may have changed since. To avoid such issues, either pin the versions or update everything, including PHP and Composer, to the latest version locally as well.
I changed php:fpm to php:8.3-fpm, then pushed to the remote master branch. That was the only change required.
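In the Dockerfile, that is just a change to the base image tag:
# before: floating tag that resolves to whatever is latest
FROM php:fpm
# after: pinned minor version
FROM php:8.3-fpm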
Cloned the GitHub repository
On the remote terminal:
sudo apt update
sudo apt install git
cd ~
git clone https://github.com/iabhinavr/coralnodes-site-php.git
Installed PHP and Composer, then installed the PHP app
sudo apt install php8.3 php8.3-cli php8.3-common php8.3-fpm php8.3-mysql php8.3-xml php8.3-curl php8.3-zip php8.3-mbstring php8.3-gd php8.3-intl php8.3-opcache
php -v
Then I followed the instructions on getcomposer.org (note that the installer hash below changes with every installer release, so copy the current one from the download page):
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
php -r "if (hash_file('sha384', 'composer-setup.php') === 'c8b085408188070d5f52bcfe4ecfbee5f727afa458b2573b8eaaf77b3419b0bf2768dc67c86944da1544f06fa544fd47') { echo 'Installer verified'.PHP_EOL; } else { echo 'Installer corrupt'.PHP_EOL; unlink('composer-setup.php'); exit(1); }"
php composer-setup.php
php -r "unlink('composer-setup.php');"
sudo mv composer.phar /usr/local/bin/composer
composer install
Installed Docker Container Engine
For this, I followed the instructions given on Docker documentation page, installing the latest version on Ubuntu.
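For reference, the apt-repository route from the Docker docs looked roughly like this at the time of writing (check the docs for the current steps):
sudo apt install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin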
Built and started the containers
cd /home/abhinav/coralnodes-site-php
docker compose build
docker compose up -d
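To confirm that all three containers came up:
docker compose ps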
Dumped the database from the local machine
Next, I used the DBeaver GUI to dump the local database.
Connected to the remote MariaDB server via SSH tunnel using DBeaver
Set up a new connection via an SSH tunnel, using the keys generated earlier.
Restored the database to the remote
This is also done using the GUI, once the remote connection is established.
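If you prefer the command line over DBeaver, the same dump and restore can be done with the MySQL client tools inside the containers. A rough sketch, assuming hypothetical service, database, user, and password names (db, coralnodes, dbuser, secret); depending on the MariaDB version, the client binaries may be named mariadb-dump and mariadb instead:
# on the local machine: dump the database from the local db container
docker compose exec -T db mysqldump -u dbuser -p'secret' coralnodes > coralnodes.sql
# copy the dump up to the droplet
scp coralnodes.sql abhinav@143.110.180.66:~
# on the server: load the dump into the remote db container
docker compose exec -T db mysql -u dbuser -p'secret' coralnodes < ~/coralnodes.sql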
Set the local /etc/hosts file to point the domain to the remote IP
If everything went well, the application should now be running, but it is not yet accessible publicly through the domain name. So for now, edit the local /etc/hosts file to temporarily point the domain to the droplet's IP.
sudo micro /etc/hosts
Then added these lines at the top:
143.110.180.66 coralnodes.com
143.110.180.66 www.coralnodes.com
Saved the file and exited.
Installed an upstream Nginx server and created a server block
It's always good to place an Nginx server on the host in front of the containers, for SSL termination and for forwarding ports from the host machine to the Docker containers.
sudo apt install nginx
Then I created a server block for the new site:
sudo micro /etc/nginx/sites-available/coralnodes.com
The contents in the file looked like this:
server {
    listen 80;
    listen [::]:80;
    server_name coralnodes.com www.coralnodes.com;

    if ($host = coralnodes.com) {
        return 301 http://www.coralnodes.com$request_uri;
    }

    location / {
        proxy_pass http://localhost:8086;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 600s;
    }
}
Later, I will edit this once SSL is set up.
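Don't forget to enable the new server block and reload Nginx for it to take effect:
sudo ln -s /etc/nginx/sites-available/coralnodes.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx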
Installed Certbot
Followed the instructions given on Certbot's site to set it up:
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
Set up the Certbot Cloudflare DNS plugin, and issued a certificate
I prefer the DNS challenge for verifying domain ownership. My domain's DNS is hosted on Cloudflare, which lets you create restricted API tokens that allow Certbot to add TXT records for the domain. So I generated an API token.
Then I stored it in a file on the server at /home/abhinav/coralnodes-site-php/cloudflare_coralnodes.ini. Inside the file:
dns_cloudflare_api_token = 0123456789abcdefghijklmnopqrstuvwxyzabcd
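Since that file contains a live API token, it's worth tightening its permissions; Certbot also warns if the credentials file is world-readable:
chmod 600 /home/abhinav/coralnodes-site-php/cloudflare_coralnodes.ini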
Installed the DNS plugin:
sudo snap install certbot-dns-cloudflare
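Depending on the snap setup, Certbot's instructions also include a step to let the plugin connect to the certbot snap:
sudo snap set certbot trust-plugin-with-root=ok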
Then requested a new certificate:
cd /home/abhinav/coralnodes-site-php
sudo certbot certonly --dns-cloudflare --dns-cloudflare-credentials ./cloudflare_coralnodes.ini -d coralnodes.com -d www.coralnodes.com --dns-cloudflare-propagation-seconds 20
I had to set the wait time to 20 seconds for it to work; the default is 10 seconds.
Now, you can view the details of the installed certificate:
sudo certbot certificates
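The snap also sets up automatic renewals; a dry run is an easy way to confirm the Cloudflare credentials will keep working when the certificate renews:
sudo certbot renew --dry-run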
Updated the Nginx server block for SSL and redirections
The final /etc/nginx/sites-available/coralnodes.com file looked like this:
# Redirect all HTTP traffic to https://www
server {
    listen 80;
    listen [::]:80;
    server_name coralnodes.com www.coralnodes.com;
    return 301 https://www.coralnodes.com$request_uri;
}

# Main www server block
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name coralnodes.com www.coralnodes.com;

    ssl_certificate /etc/letsencrypt/live/coralnodes.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/coralnodes.com/privkey.pem;

    # Redirect https://coralnodes.com to https://www.coralnodes.com
    if ($host = coralnodes.com) {
        return 301 https://www.coralnodes.com$request_uri;
    }

    location / {
        proxy_pass http://localhost:8086;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 600s;
    }
}
Updated DNS records, and cleaned up the local /etc/hosts file
Try accessing the site over https, and it should work.
Finally, in the Cloudflare dashboard, I updated the domain's A record to actually point the domain to the droplet's IP, and removed the temporary entries from /etc/hosts on the local machine.