Introduction
Caddy is a modern web server that is designed to be simple to set up and use while also being powerful and efficient. It aims to provide developers with a hassle-free experience when deploying websites and web applications. It’s particularly well-suited for those who may not be familiar with the complexities of traditional web server configurations.
In simple terms: imagine you’ve built a fantastic website or web application that you want to share with the world. Before others can access it on the internet, you need a special kind of software called a web server. A web server is like a traffic cop for the internet – it directs incoming requests from users’ browsers to the appropriate parts of your website so visitors can see your content.
Benefits:
- Automatic HTTPS:
One of the standout features of Caddy is its automatic HTTPS setup. When you use Caddy to host your website, it automatically generates and configures SSL certificates to enable secure connections. This is crucial for protecting user data and improving your website’s search engine ranking.
- Automatic Certificate Renewal:
Caddy also keeps those certificates renewed before they expire, so you never have to track expiry dates yourself.
- Graceful Config Reloads:
When you make changes to your Caddy configuration, you don’t have to restart the server. Caddy reloads its configuration gracefully, making it easy to iterate and test changes without disrupting your website’s availability.
Prerequisites
We recommend that you have the following basic requirements:
- Server with public IP address
- Root-level access to the server that needs to be configured
- A basic understanding of CI/CD pipelines and what a job is
Caddy Installation on the Server
- We need to follow the instructions below step by step.
- Access the server:
To access the server, run the command below, where ip_address is the IP address of the server (for example, 49.123.4.79):
ssh root@ip_address
- Install Caddy on the server:
Please follow the series of commands provided below to install Caddy on your server. We’ll also break down the purpose of each command to enhance your understanding.
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy
- Installing Required Packages:
To ensure secure package management, we install necessary components using this command:
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
This equips the system with tools to manage packages safely and transfer them using HTTPS.
- Adding Caddy’s Verification Key:
We fetch a special key to confirm the authenticity of Caddy packages by running:
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
With this command, we download a digital key that acts like a seal for Caddy’s packages, and we convert it into a format our system can use to confirm the legitimacy of Caddy’s software.
- Configuring Caddy’s Repository:
By executing:
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
We’re telling the system where to find Caddy’s collection of software packages. It’s like adding a new store to your list of trusted shopping places.
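If you are curious, you can peek at the file that was written. Its contents should look roughly like the commented line below, though the exact entries come from Cloudsmith and may differ:
# inspect the repository definition that was just added
cat /etc/apt/sources.list.d/caddy-stable.list
# deb [signed-by=/usr/share/keyrings/caddy-stable-archive-keyring.gpg] https://dl.cloudsmith.io/public/caddy/stable/deb/debian any-version main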
- Refreshing Package Information:
We keep our package information up to date with:
sudo apt update
- Installing Caddy:
Finally, this command installs the Caddy web server itself from the repository we just added.
sudo apt install caddy
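To confirm the installation worked, you can check the binary and the systemd service that the package registers:
# print the installed caddy version
caddy version
# verify the caddy service is running
systemctl status caddy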
- Server Configuration:
After successfully installing Caddy, we will have a caddy directory inside the /etc directory. We can navigate to it:
cd /etc/caddy
In the Caddy directory, we have a Caddyfile where we need to add our server configuration.
In our case, we need to configure Caddy for two different environments:
- Production
- Non-production (develop, qa, staging)
So, let’s create two different directories, one for production and one for non-production.
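A minimal way to create both directories (the names must match the import paths we add to the Caddyfile below):
mkdir -p /etc/caddy/production /etc/caddy/non_production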
root@ubuntu-2gb-fsn1-1:/etc/caddy# ls
Caddyfile  non_production  production
Now, let’s update the Caddyfile to pull in the server configuration from both directories, i.e. production and non_production.
We can open the file with either of these commands:
nano Caddyfile
Or:
vim Caddyfile
As a next step, we update the content of the Caddyfile:
import /etc/caddy/non_production/*.caddy
import /etc/caddy/production/*.caddy
So, if we add any .caddy file within these two directories, its configuration will be picked up by the Caddyfile automatically. You can reload Caddy with
caddy reload --config /etc/caddy/Caddyfile
to see your new config in action.
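Before reloading, it is also worth validating the configuration; Caddy ships a validate subcommand for exactly this:
# check the Caddyfile for syntax/adapter errors without applying it
caddy validate --config /etc/caddy/Caddyfile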
Now, we have successfully installed Caddy on our server. 🎉
Note: Caddy is extensible with lots of plugins (https://github.com/caddy-plugins). If you plan to use any of these plugins, you should install Caddy via xcaddy (see our tutorial: https://thedevpost.com/blog/wildcard-cloudflare-token-xcaddy/).
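For example, a minimal sketch of building a custom binary that includes the Cloudflare DNS plugin (which we rely on later for the tls dns cloudflare directive) looks like:
# build a caddy binary with the cloudflare DNS module compiled in
xcaddy build --with github.com/caddy-dns/cloudflare
The resulting ./caddy binary then replaces the packaged one.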
Non-Production Caddy Server Configuration:
Now, let’s move on by adding a Caddy configuration file for non-production. For instance, we will create a file called contractTemplate.caddy.
The code block below sets up how Caddy should handle incoming requests.
*.contract-template.truemark.com.np {
    # Set this path to your site's directory.
    root * /var/www/html/non_production/contract_template/{labels.4}
    encode gzip
    try_files {path} /index.html
    # Enable the static file server.
    file_server
    # logger configuration
    log {
        output file /var/log/caddy/access.log
        format json
    }
    tls {
        dns cloudflare w**************s
    }
}
Let’s break down how the configuration works.
root * /var/www/html/non_production/contract_template/{labels.4}
In the domain *.contract-template.truemark.com.np, when counting labels from right to left:
- labels.0 refers to the rightmost label, which is np.
- labels.1 refers to the label immediately to the left, which is com.
- labels.2 is truemark.
- labels.3 is contract-template.
- labels.4 is *.
So labels.0 is the rightmost label (the TLD), and labels.4 is the leftmost label, which is the wildcard *.
So, if you have a request to https://feature-ci-cd.contract-template.truemark.com.np, where feature-ci-cd is a dynamically generated value, {labels.4} will represent that dynamically generated feature-ci-cd, and the path will be constructed accordingly based on the value of the subdomain.
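To make this concrete, here is a trace of how one such request maps to a path under the directives above (the subdomain is just an example):
# Request:    https://feature-ci-cd.contract-template.truemark.com.np/about
# {labels.4}: feature-ci-cd
# root:       /var/www/html/non_production/contract_template/feature-ci-cd
# try_files:  serves /about if such a file exists, otherwise falls back to index.html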
encode gzip: This directive tells Caddy to enable gzip compression to optimize content delivery.
try_files {path} /index.html: Tells Caddy to try serving the requested file and, if it is not found, to serve the index.html file instead.
file_server: This directive enables Caddy’s static file server to serve files from the specified root directory.
log { ... }: This block configures logging. It specifies that access logs should be written to /var/log/caddy/access.log in JSON format.
tls {
    dns cloudflare w**************s
}
Instead of using tls example@gmail.com directly in our configuration, we are using the Cloudflare DNS challenge. Caddy communicates with Let’s Encrypt’s production server to obtain SSL/TLS certificates, and Let’s Encrypt enforces rate limits on the number of certificates that can be issued in a given time frame; these limits are designed to prevent abuse and ensure fair usage.
If we used the tls directive with just an email address and hit those rate limits, we could be unable to obtain new certificates until the limits reset, which can impact the site’s availability when we frequently need new certificates. Moreover, a wildcard certificate such as *.contract-template.truemark.com.np can only be issued via the DNS-01 challenge, which is exactly what a DNS provider integration like Cloudflare gives us.
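Once a preview site is deployed, a quick spot check (the subdomain below is a placeholder) confirms the wildcard certificate was issued:
# print the certificate subject and issuer negotiated during the TLS handshake
curl -vI https://feature-ci-cd.contract-template.truemark.com.np 2>&1 | grep -i 'subject\|issuer'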
Note: To create a DNS token, you can follow along with the documentation.
Production Caddy Server Configuration:
Now, let’s create a similar file in the production directory, contractTemplate.caddy. The server configuration for production will be similar to non-production, but there are a few differences, which we discuss below.
contract-template.truemark.com.np {
    redir https://www.contract-template.truemark.com.np{uri}
}
www.contract-template.truemark.com.np {
    # Set this path to your site's directory.
    root * /var/www/html/production/contract_template/main
    encode gzip
    try_files {path} /index.html
    # Enable the static file server.
    file_server
    tls example@gmail.com
}
The block of the configuration below deals with the non-www version of the domain, contract-template.truemark.com.np, and redirects any requests to the www subdomain:
contract-template.truemark.com.np {
    redir https://www.contract-template.truemark.com.np{uri}
}
redir https://www.contract-template.truemark.com.np{uri}: This line redirects any incoming request for contract-template.truemark.com.np to the www subdomain while keeping the requested URI intact.
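You can verify the redirect from any machine with curl (domains as configured above):
# expect a 3xx status and a Location header pointing at the www subdomain
curl -sI https://contract-template.truemark.com.np/some/path | grep -i 'HTTP\|location'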
tls example@gmail.com
Enables TLS (SSL) encryption for the connection and specifies the email address used to request a certificate. This means the certificate for www.contract-template.truemark.com.np will be obtained and managed automatically by Caddy.
This way, we have successfully configured Caddy. If you want to understand the structure of the configuration in more detail, you can go through the official documentation.
Now, let’s create the corresponding directories for production and non-production builds.
cd /var/www/html/
The directory structure will be as below:
var/
|-- www/
| |-- html/
| |-- production
| |-- non_production
It’s advisable to utilize a mounted volume for storing all your project builds. To streamline this further, create symbolic links (symlinks) that point to the mounted volume:
ln -s /mnt/non_production /var/www/html/non_production
ln -s /mnt/non_production /home/deploy/non_production
This is necessary because we upload the build files as the deploy user through GitLab CI, while Caddy serves them from /var/www/html/non_production.
The approach is similar for production:
ln -s /mnt/production /var/www/html/production
ln -s /mnt/production /home/deploy/production
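A quick sanity check that the links resolve to the mounted volume:
# each listing should show non_production and production pointing at /mnt/...
ls -l /var/www/html /home/deploy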
Alternatively, we could skip the mounted volume and place the build files directly in /home/deploy/non_production and /home/deploy/production, then link those into the web root:
ln -s /home/deploy/non_production /var/www/html/non_production
ln -s /home/deploy/production /var/www/html/production
GitLab CI Configuration
We need to follow the instructions below step by step.
- Create .gitlab-ci.yml in the root directory of your project.
- We will create three different stages and introduce variables (both are shown in the full configuration at the end of this section).
- Branch-based Configuration: In the branch_deploy stage, we upload the build files to the server; before we can upload them, we need to establish an SSH connection between the server and the GitLab runner container.
build_branch_deploy:
  extends: .branch_based_job
  image: $BASE_DOCKER_IMAGE
  stage: branch_deploy
  environment:
    name: preview/$CI_COMMIT_REF_NAME
    url: https://$CI_COMMIT_REF_SLUG.$BRANCH_BASE_URL
  before_script:
    - apt update && apt upgrade -y
    # check whether the ssh client is available and install it if not
    - 'command -v ssh-agent >/dev/null || ( apt install -y openssh-client )'
    # start the ssh agent for this CI session
    - eval $(ssh-agent -s)
    # add the ssh private key to the agent during CI
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    # create the .ssh directory in the CI container
    - mkdir -p ~/.ssh
    # restrict .ssh permissions to the owner
    - chmod 700 ~/.ssh
    # fetch the public host key of the server into known_hosts
    - ssh-keyscan $SERVER_IP >> ~/.ssh/known_hosts
    # make known_hosts readable
    - chmod 644 ~/.ssh/known_hosts
    # install rsync for file transfer
    - apt install -y rsync
    - npm install --legacy-peer-deps
As the image for the branch_deploy job, we have used $BASE_DOCKER_IMAGE, which we introduced in the variables.
The before_script contains a series of commands that establish the SSH connection between the container and the server.
- It checks whether the SSH client is available and installs it if not:
'command -v ssh-agent >/dev/null || ( apt install -y openssh-client )'
- It adds the SSH private key, provided as an environment variable ($SSH_PRIVATE_KEY), to the SSH agent:
eval $(ssh-agent -s)
echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
- It creates the .ssh directory for SSH-related files and restricts its permissions to the owner:
- mkdir -p ~/.ssh
- chmod 700 ~/.ssh
- We add the command below to fetch the public host key of the server into known_hosts. The $SERVER_IP environment variable is set in the GitLab CI variables section.
ssh-keyscan $SERVER_IP >> ~/.ssh/known_hosts
- We give known_hosts the usual permissions (owner read/write, everyone else read-only):
chmod 644 ~/.ssh/known_hosts
- We upgrade packages, install the rsync package, install the node modules, and build the React application:
- apt update && apt upgrade -y && apt install -y rsync
- npm install --legacy-peer-deps
- npm run build
- With the configuration above, the SSH connection between the container and the server is established. In the block below, we upload the build files to the server.
script:
  # create directory by using branch name
  - ssh $SSH_USER@$SERVER_IP mkdir -p /home/deploy/non_production/$APP_NAME/$CI_COMMIT_REF_SLUG
  - cat $envfile_non_production > $PRODUCTION_ENV_FILE_NAME
  - npm run build
  # publish static assets to the remote server; only changed files are transferred
  - rsync -atv --delete --progress ./$BUILD_DIRECTORY/ $SSH_USER@$SERVER_IP:/home/deploy/non_production/$APP_NAME/$CI_COMMIT_REF_SLUG
- We access the host server and create a directory using the slug name (for example: feature-ci-cd). Environment variables such as $SSH_USER and $SERVER_IP are added in the GitLab CI settings of the remote repository, while $CI_COMMIT_REF_SLUG is a pre-defined variable.
- ssh $SSH_USER@$SERVER_IP mkdir -p /home/deploy/non_production/$APP_NAME/$CI_COMMIT_REF_SLUG
- It publishes static assets to the remote server and ensures only changed files are transferred (-a preserves file attributes, -t preserves timestamps, -v is verbose, and --delete removes remote files that no longer exist locally).
- rsync -atv --delete --progress ./build/ $SSH_USER@$SERVER_IP:/home/deploy/non_production/$APP_NAME/$CI_COMMIT_REF_SLUG
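If you are unsure what a sync will change, rsync's -n (--dry-run) flag previews the transfer; the host and paths below are placeholders:
# list what would be copied or deleted, without transferring anything
rsync -atvn --delete ./build/ deploy@49.123.4.79:/home/deploy/non_production/contract_template/feature-ci-cd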
- After publishing the build files and completing the job, it saves the build folder in the $BUILD_DIRECTORY (i.e. build) directory as an artifact on the CI server, so that we can inspect it later.
artifacts:
  paths:
    - $BUILD_DIRECTORY
- It creates an environment for the deployed branch.
environment:
  name: preview/$CI_COMMIT_REF_NAME
  url: https://$CI_COMMIT_REF_SLUG.$BRANCH_BASE_URL
- We need to add a rule so that the job only runs when a merge request is created. Since we will need the same rule for the after_deploy job too, we create a base rule:
.branch_based_job:
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: never
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: on_success
- GitLab CI/CD Configuration for Production
The GitLab CI configuration will be similar to the branch-based deployment, with a few slight adjustments.
- Since we will only be hosting from a single directory called main (note: we added main in the production Caddy configuration):
- ssh $SSH_USER@$SERVER_IP mkdir -p /home/deploy/production/$APP_NAME/main
- Since the job needs to run on the main branch only, we add the rule accordingly:
rules:
  - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
    when: on_success
- Now we sync the new changes from the main branch to the server, again using rsync:
- rsync -atv --delete --progress ./build/ $SSH_USER@$SERVER_IP:/home/deploy/production/$APP_NAME/main
- We also add an environment for production:
environment:
  name: production
  url: $PRODUCTION_URL
If we follow the steps above, we end up with the job below.
production_deploy:
  image: $BASE_DOCKER_IMAGE
  stage: production_deploy
  environment:
    name: production
    url: $PRODUCTION_URL
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: on_success
  before_script:
    - apt update && apt upgrade -y
    # check whether the ssh client is available and install it if not
    - 'command -v ssh-agent >/dev/null || ( apt install -y openssh-client )'
    # start the ssh agent for this CI session
    - eval $(ssh-agent -s)
    # add the ssh private key to the agent during CI
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    # create the .ssh directory in the CI container
    - mkdir -p ~/.ssh
    # restrict .ssh permissions to the owner
    - chmod 700 ~/.ssh
    # fetch the public host key of the server into known_hosts
    - ssh-keyscan $SERVER_IP >> ~/.ssh/known_hosts
    # make known_hosts readable
    - chmod 644 ~/.ssh/known_hosts
    # install rsync for file transfer
    - apt install -y rsync
    - npm install --legacy-peer-deps
  script:
    - ssh $SSH_USER@$SERVER_IP mkdir -p /home/deploy/production/$APP_NAME/main
    - cat $envfile_production > $PRODUCTION_ENV_FILE_NAME
    - npm run build
    # publish static assets to the remote server; only changed files are transferred
    - rsync -atv --delete --progress ./$BUILD_DIRECTORY/ $SSH_USER@$SERVER_IP:/home/deploy/production/$APP_NAME/main
  artifacts:
    paths:
      - $BUILD_DIRECTORY
- Post-Deployment Notifications
Let’s create another job which will comment on the merge request with a link to the deployed URL.
post_message_to_mr:
  extends: .branch_based_job
  stage: after_deploy
  image: $BASE_DOCKER_IMAGE
  script:
    - export GITLAB_TOKEN=$TRUEMARK_GITLAB_KEY
    - apt update
    # install package curl
    - apt install -y curl
    - 'curl --location --request POST "https://gitlab.com/api/v4/projects/$CI_MERGE_REQUEST_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/notes" --header "PRIVATE-TOKEN: $GITLAB_TOKEN" --header "Content-Type: application/json" --data-raw "{ \"body\": \"## :tada: Latest changes from this :deciduous_tree: branch are now :package: deployed \n * :bulb: Please prefer to post snapshots of UI issues from this latest deployment while you review this PR (br tag) \n * :small_red_triangle: Preview URL might not work/exist once this Merge/Pull request is merged (hr tag) :tv: [Preview Deployed Application/website](https://$CI_COMMIT_REF_SLUG.$BRANCH_BASE_URL).\" }"'
- This job runs in the same image as the branch-based and production deployment jobs above.
image: $BASE_DOCKER_IMAGE
- We also extend the same base rule used in the branch-based deployment, so the job only runs when a merge request is created.
- We assign $TRUEMARK_GITLAB_KEY (added in the remote repository’s CI/CD variables) to $GITLAB_TOKEN:
export GITLAB_TOKEN=$TRUEMARK_GITLAB_KEY
- It updates the package index in the CI container and installs the curl package:
- apt update
# install package curl
- apt install -y curl
- It sends the POST request and adds a comment on the merge request with the deployed URL:
- 'curl --location --request POST "https://gitlab.com/api/v4/projects/$CI_MERGE_REQUEST_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/notes" --header "PRIVATE-TOKEN: $GITLAB_TOKEN" --header "Content-Type: application/json" --data-raw "{ \"body\": \"Hi Look at this awesome message with a link to a [deployed environment](https://$CI_COMMIT_REF_SLUG.contract-template.truemark.com.np).\" }"'
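If you want to try the API call outside CI first, here is a hypothetical local test; the project ID, merge request IID, and token are placeholders:
# post a test note on merge request 42 of project 12345
curl --request POST "https://gitlab.com/api/v4/projects/12345/merge_requests/42/notes" \
  --header "PRIVATE-TOKEN: <your_personal_access_token>" \
  --header "Content-Type: application/json" \
  --data-raw '{ "body": "Deployed preview: https://feature-ci-cd.contract-template.truemark.com.np" }'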
Finally, here are the variables and stages shared by all three jobs:
variables:
  APP_NAME: contract_template
  BASE_DOCKER_IMAGE: node:18.18.0
  BRANCH_BASE_URL: contract-template.truemark.com.np
  PRODUCTION_URL: https://www.contract-template.truemark.com.np
  PRODUCTION_ENV_FILE_NAME: .env
  BUILD_DIRECTORY: build
stages:
  - branch_deploy
  - production_deploy
  - after_deploy
Throughout the tutorial, we built two different environments (preview and production), with jobs for branch-based deployment, production deployment, and posting the deployed URL on the merge request.
Note: You can learn how to configure custom CI/CD variables in this article.
After following the instructions above, we end up with the following configuration.
# GITLAB CI/CD Variables PREREQUISITES:
# ........................
# SSH_USER, SERVER_IP, SSH_PRIVATE_KEY
# CI_DEFAULT_BRANCH (optional, in case you want the production deploy to run from a different branch)
# envfile_production, envfile_non_production (type: file)
# SKIP_TEST i.e. true or false
# TRUEMARK_GITLAB_KEY (prefer to inherit from the project group, set by the lead developer)
# ..........................
# UPDATE VARIABLES IN CONFIGURATION
# ..................
# APP_NAME: prefer snake case for the project name
# BASE_DOCKER_IMAGE: prefer a Node image matching the Node version used during development
# BRANCH_BASE_URL: preview domain for testing
# PRODUCTION_URL: production domain
# PRODUCTION_ENV_FILE_NAME: environment file name (.env/.env.production), depending on the web application
# BUILD_DIRECTORY: directory containing the build output
variables:
  APP_NAME: contract_template
  BASE_DOCKER_IMAGE: node:18.18.0
  BRANCH_BASE_URL: contract-template.truemark.com.np
  PRODUCTION_URL: https://www.contract-template.truemark.com.np
  PRODUCTION_ENV_FILE_NAME: .env
  BUILD_DIRECTORY: build

stages:
  - branch_deploy
  - production_deploy
  - after_deploy

.branch_based_job:
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: never
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: on_success

build_branch_deploy:
  extends: .branch_based_job
  image: $BASE_DOCKER_IMAGE
  stage: branch_deploy
  environment:
    name: preview/$CI_COMMIT_REF_NAME
    url: https://$CI_COMMIT_REF_SLUG.$BRANCH_BASE_URL
  before_script:
    - apt update && apt upgrade -y
    # check whether the ssh client is available and install it if not
    - 'command -v ssh-agent >/dev/null || ( apt install -y openssh-client )'
    # start the ssh agent for this CI session
    - eval $(ssh-agent -s)
    # add the ssh private key to the agent during CI
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    # create the .ssh directory in the CI container
    - mkdir -p ~/.ssh
    # restrict .ssh permissions to the owner
    - chmod 700 ~/.ssh
    # fetch the public host key of the server into known_hosts
    - ssh-keyscan $SERVER_IP >> ~/.ssh/known_hosts
    # make known_hosts readable
    - chmod 644 ~/.ssh/known_hosts
    # install rsync for file transfer
    - apt install -y rsync
    - npm install --legacy-peer-deps
  script:
    # create directory by using branch name
    - ssh $SSH_USER@$SERVER_IP mkdir -p /home/deploy/non_production/$APP_NAME/$CI_COMMIT_REF_SLUG
    - cat $envfile_non_production > $PRODUCTION_ENV_FILE_NAME
    - npm run build
    # publish static assets to the remote server; only changed files are transferred
    - rsync -atv --delete --progress ./$BUILD_DIRECTORY/ $SSH_USER@$SERVER_IP:/home/deploy/non_production/$APP_NAME/$CI_COMMIT_REF_SLUG
  artifacts:
    paths:
      - $BUILD_DIRECTORY

production_deploy:
  image: $BASE_DOCKER_IMAGE
  stage: production_deploy
  environment:
    name: production
    url: $PRODUCTION_URL
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: on_success
  before_script:
    - apt update && apt upgrade -y
    # check whether the ssh client is available and install it if not
    - 'command -v ssh-agent >/dev/null || ( apt install -y openssh-client )'
    # start the ssh agent for this CI session
    - eval $(ssh-agent -s)
    # add the ssh private key to the agent during CI
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    # create the .ssh directory in the CI container
    - mkdir -p ~/.ssh
    # restrict .ssh permissions to the owner
    - chmod 700 ~/.ssh
    # fetch the public host key of the server into known_hosts
    - ssh-keyscan $SERVER_IP >> ~/.ssh/known_hosts
    # make known_hosts readable
    - chmod 644 ~/.ssh/known_hosts
    # install rsync for file transfer
    - apt install -y rsync
    - npm install --legacy-peer-deps
  script:
    - ssh $SSH_USER@$SERVER_IP mkdir -p /home/deploy/production/$APP_NAME/main
    - cat $envfile_production > $PRODUCTION_ENV_FILE_NAME
    - npm run build
    # publish static assets to the remote server; only changed files are transferred
    - rsync -atv --delete --progress ./$BUILD_DIRECTORY/ $SSH_USER@$SERVER_IP:/home/deploy/production/$APP_NAME/main
  artifacts:
    paths:
      - $BUILD_DIRECTORY

post_message_to_mr:
  extends: .branch_based_job
  stage: after_deploy
  image: $BASE_DOCKER_IMAGE
  script:
    - export GITLAB_TOKEN=$TRUEMARK_GITLAB_KEY
    - apt update
    # install package curl
    - apt install -y curl
    - 'curl --location --request POST "https://gitlab.com/api/v4/projects/$CI_MERGE_REQUEST_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/notes" --header "PRIVATE-TOKEN: $GITLAB_TOKEN" --header "Content-Type: application/json" --data-raw "{ \"body\": \"## :tada: Latest changes from this :deciduous_tree: branch are now :package: deployed \n * :bulb: Please prefer to post snapshots of UI issues from this latest deployment while you review this PR (br tag) \n * :small_red_triangle: Preview URL might not work/exist once this Merge/Pull request is merged (hr tag) :tv: [Preview Deployed Application/website](https://$CI_COMMIT_REF_SLUG.$BRANCH_BASE_URL).\" }"'
Conclusion
In summary, our journey covered installing the Caddy web server and configuring it for both production and non-production deployments using Caddyfiles.
Additionally, we set up three distinct GitLab jobs for branch-based deployments, production releases, and post-deployment notifications.
Thank you for joining us on this learning adventure!