Introduction
In modern development, deploying web applications has become a crucial part of the process. Docker and Docker Compose have emerged as powerful tools for containerization, making it easier than ever to package, ship, and manage your web applications. Caddy, on the other hand, is a lightweight and efficient web server that integrates seamlessly with Docker-based applications.
This blog will guide you through configuring Caddy as a reverse proxy for Docker-based web applications using Docker Compose. We'll also dive into how to set up a GitLab CI/CD configuration to enhance the deployment workflow. By the end of this tutorial, you'll have a solid understanding of how to deploy your web applications with the power of Caddy, Docker, and GitLab CI. Let's dive in and explore how to do it.
Prerequisites
We recommend that you have the following basic requirements:
- Server IP address
- User-level access to the server where the configuration is needed.
- Configured Docker Compose files for a Rails application
Docker Installation on the Server
Our first step is to add Docker and Docker Compose to our server setup. With Docker, we can containerize applications, making deployment and management easier, while Docker Compose allows us to orchestrate multi-container applications seamlessly.
Before we install Docker Engine for the first time on a new host machine, we need to set up the Docker repository. Afterward, we can install and update Docker from the repository.
- Set up Docker’s Apt repository.
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the repository to Apt sources:
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
- Install Docker Packages
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-compose
- Verify that the Docker Engine installation is successful by running the hello-world image
sudo docker run hello-world
This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits.
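Since we also installed the Compose plugin and the standalone docker-compose binary above, it's worth confirming that both respond, assuming the packages installed cleanly:

# Confirm that Docker Compose is available as well
docker compose version     # Compose plugin (docker-compose-plugin)
docker-compose --version   # standalone binary (docker-compose package)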
Caddy Installation on the Server
- We need to follow the instructions below step by step
- Access the server:
To access the server, run ssh root@ip_address on the command line, where ip_address is the IP address of the server. Example: ssh root@49.123.4.79
- Install Caddy on the server:
Please follow the series of commands provided below to install Caddy on your server. We'll also break down the purpose of each command to enhance your understanding.
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy
- Installing Required Packages:
To ensure secure package management, we install the necessary components using this command:
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
This equips the system with tools to manage packages safely and transfer them over HTTPS.
- Adding Caddy’s Verification Key:
We fetch a special key to confirm the authenticity of Caddy packages by running:
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
With this command, we download a digital key that acts like a seal for Caddy's packages and convert it into a format our system can use to confirm the legitimacy of Caddy's software.
- Configuring Caddy’s Repository:
By executing,
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
We’re telling the system where to find Caddy’s collection of software packages. It’s like adding a new store to your list of trusted shopping places.
- Refreshing Package Information:
We keep our package information up to date with:
sudo apt update
This command refreshes the package information from the repositories, ensuring the latest available package versions are known.
- Installing Caddy:
Finally, we install Caddy from the newly added repository:
sudo apt install caddy
Now we have successfully installed Caddy on our server. 🎉
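Before moving on, a quick sanity check is worthwhile. The apt package also registers Caddy as a systemd service, so we can verify both the binary and the service:

# Print the installed Caddy version
caddy version

# The apt package runs Caddy as a systemd service; confirm it is active
sudo systemctl status caddy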
Docker Configuration
For our current Rails application, we will be using the docker-compose.yml configuration below on our local machine.
docker-compose.yml
version: '3.9'
services:
  build: &image
    build:
      context: .
      dockerfile: Dockerfile
    image: ${CI}
  web:
    <<: *image
    command: /code/script/start
    tty: true
    volumes:
      - .:/code
      - bundler:/usr/local/bundle
      - /code/node_modules
    ports:
      - 3000:8000
    environment:
      - RAILS_MAX_THREADS=**
      - RAILS_ENV=***
      - RACK_ENV=***
      - DATABASE_HOST=***
      - DATABASE_USER=***
      - DATABASE_PASSWORD=***
      - POSTGRES_HOST_AUTH_METHOD=trust
    depends_on:
      - db
      - redis
  db:
    image: postgres:14.2
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
      - POSTGRES_USER=***
      - POSTGRES_PASSWORD=***
    volumes:
      - db-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
  redis:
    image: redis:latest
    volumes:
      - redis-data:/var/lib/redis/data
    ports:
      - 6379:6379
volumes:
  db-data:
  bundler:
  redis-data:
Note: Configuring the Dockerfile falls outside the scope of this post.
The Docker Compose configuration above is made up of three services: web, db, and redis.
The web service uses an image built by the build service, with the image name determined by the ${CI} environment variable. If ${CI} is not explicitly defined, the image for the service is built from the Dockerfile referenced in the build section. It runs the application with the specified command and exposes it on port 3000.
The db service uses the PostgreSQL image and sets environment variables for PostgreSQL, enabling trust-based authentication. It maps port 5432 for database connections.
The redis service uses the official Redis image, providing Redis functionality and exposing port 6379.
The use of named volumes like db-data ensures data persistence.
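If you want to smoke-test this file locally before wiring up CI, a minimal sketch might look like the following; the tag leave_balance:dev is just a hypothetical stand-in for the ${CI} variable:

# Build the image locally; the compose file reads the tag from the CI variable
CI=leave_balance:dev docker-compose build

# Start the full stack (web, db, redis) in the background
CI=leave_balance:dev docker-compose up -d

# Tail the Rails logs to confirm the app booted
CI=leave_balance:dev docker-compose logs -f web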
With the configuration above, we will build a Docker image for our web service and store it in the GitLab Container Registry.
To make our server setup more flexible, we're making some minor adjustments to the configuration described above. We'll store it as a file in GitLab and create a reusable template for all our server Docker configurations. This approach allows us to integrate it seamlessly into our CI/CD setup: the CI/CD process runs smoothly, and the server can effortlessly fetch the Docker image from the GitLab Container Registry. These adjustments simplify the deployment and maintenance of our web service.
Template for docker-compose.yml
version: '3.9'
services:
  web:
    image: image_created
    command: /code/script/start
    tty: true
    ports:
      - 8000
    env_file:
      - ../environment_path
    platform: linux/amd64
  db:
    image: postgres:14.2
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=unbreakable
    volumes:
      - db-data:/var/lib/postgresql/data
    ports:
      - 5432
    platform: linux/amd64
  redis:
    image: redis:latest
    volumes:
      - redis-data:/var/lib/redis/data
    ports:
      - 6379
volumes:
  db-data:
  bundler:
  redis-data:
In the template above, we will replace image_created with the web image created in GitLab CI and adjust environment_path according to the environment we are setting up, i.e. .env.production for the production environment and .env.dev for the development environment.
We've also eliminated the need to explicitly bind services to fixed host ports. Now, when a container is created, the host ports required by the services are allocated automatically, which lets multiple deployments of the web service run side by side on different ports, as the sketch below shows.
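To make the ephemeral-port behaviour concrete, here is a minimal sketch; the container name assumes Compose v1's <project>_web_1 naming convention, which the pipeline below also relies on:

# Start the stack; Docker picks a free host port for the container's port 8000
sudo docker-compose up -d

# Ask Docker which host port was assigned, e.g. "8000/tcp -> 0.0.0.0:32615"
sudo docker port nprod_leave_balance_feature-test_web_1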
Caddy Configuration
Moving on, we will need to create a similar template for Caddy, which can be used to point different domains to the exposed ports above.
Template for Caddy
url {
    reverse_proxy :port
}
In the CI configuration step, we'll substitute url with the specific domain we want to direct traffic to, and port with the exposed port obtained when creating the web container. This customization allows us to seamlessly connect our domain to the web service running in the container.
Example:
feature-test.review.leavebalance.com {
    reverse_proxy :32615
}
We will store this Caddy template as a file in GitLab too.
Next, we need to make some adjustments on the server so that it can hold multiple Caddy configurations.
Let's edit the main Caddy configuration at /etc/caddy/Caddyfile so it contains the following imports:
import /etc/caddy/non_production/*.caddy
import /etc/caddy/production/*.caddy
Now, we need to make sure we have created both the non_production and production directories; a sketch of the commands follows.
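A minimal sketch of preparing those directories on the server, using the paths from the imports above and the leave_balance app directory shown in the folder structure below:

# Create the directories referenced by the Caddyfile imports
sudo mkdir -p /etc/caddy/non_production/leave_balance
sudo mkdir -p /etc/caddy/production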
Our folder structure will look like this:
caddy
├── Caddyfile
├── non_production
│   ├── leave_balance
│   │   ├── example1.caddy
│   │   └── example2.caddy
│   └── leavebalance.caddy
└── production
    └── prod_leave_balance.caddy
In leavebalance.caddy, we will add the following configuration:
import /etc/caddy/non_production/leave_balance/*.caddy
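After editing, we can ask Caddy to check the whole configuration before reloading; caddy validate is the standard way to catch syntax errors early:

# Check that the Caddyfile and everything it imports parses correctly
caddy validate --config /etc/caddy/Caddyfile

# Apply the configuration without downtime
sudo caddy reload --config /etc/caddy/Caddyfile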
Note: You can learn how to configure custom CI/CD variables in this article.
GitLab CI/CD Configuration
Now, our focus shifts to creating the CI/CD configuration, encompassing branch-based deployments, production deployments, and cleanup jobs triggered when a branch is merged or an environment is removed.
We need to follow the configuration steps below one by one.
- Create .gitlab-ci.yml in the root directory of the project.
- We will create four different stages and introduce variables.
- Create Web Application Image: Since we need to create Docker images of the web service for both production and branch-based environments, we will create a common job and extend it accordingly.
.build_image_common_setup: &build_image_common_setup
  stage: build_docker_image
  image: docker:24.0.6
  services:
    - docker:24.0.6-dind
  before_script:
    - apk add docker-compose
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - CI="$CI_REGISTRY/truemark/$APP_NAME/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHORT_SHA"
    - cat $envfile > .env
    - cp config/database.yml.ci config/database.yml
    - docker-compose -f docker-compose.yml build
    - docker-compose -f docker-compose.yml push
Here, we've introduced the .build_image_common_setup job to streamline Docker image management. This job uses Docker-in-Docker (the docker:24.0.6 image and docker:24.0.6-dind service) and ensures that docker-compose is available for multi-container application management. Additionally, it handles container registry authentication using the stored credentials CI_REGISTRY_USER and CI_REGISTRY_PASSWORD.
By setting the CI variable, it tags the web image in the format $CI_REGISTRY/truemark/$APP_NAME/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHORT_SHA.
To ensure compatibility with our server's database configuration, we replace the default database.yml with a customized database.yml.ci.
Overall, the main objective of this job is to build the Docker image and push it to the container registry, preparing our Docker-based web applications for deployment. By taking care of image tagging, authentication, and other essential tasks, this job significantly streamlines our CI/CD workflow.
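As a concrete (hypothetical) example of the tag format, a merge request branch named feature-test at commit a1b2c3d4 would be tagged along these lines:

# $CI_REGISTRY/truemark/$APP_NAME/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHORT_SHA expands to e.g.
registry.gitlab.com/truemark/leave_balance/feature-test:a1b2c3d4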
- Branch-based Configuration: Before we add the configuration for the branch-based job, we need to create an image for the branch and store it in the container registry. This is possible using the common job we created earlier.
build_docker_image_development:
  <<: *build_image_common_setup
Since we need to create the image only when a merge request is created, we can do so by creating a common rule and extending the configuration above with it.
.branch_based_job:
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: never
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: on_success
Now, let's extend build_docker_image_development accordingly:
build_docker_image_development:
  extends: .branch_based_job
  <<: *build_image_common_setup
Now, let's create the build_deploy_development job. We will build up its configuration as below.
- Basic configuration: By adding,
build_deploy_development:
  extends: .branch_based_job
  stage: development_branch_based_deploy
  image: $BASE_DOCKER_IMAGE
It inherits the common configuration from .branch_based_job, ensuring it runs when a merge request is created. It runs in the development_branch_based_deploy stage and uses the specified BASE_DOCKER_IMAGE as its container.
- Setting Environment: By adding,
environment:
  name: preview/$CI_COMMIT_REF_NAME
  url: https://$CI_COMMIT_REF_SLUG.$BRANCH_BASE_URL
  on_stop: cleanup_branch_based_deployed_environment
It creates a GitLab environment once we have successfully deployed the application. When that environment is stopped, the cleanup_branch_based_deployed_environment job will run.
- Setting up SSH connection: By adding,
before_script:
  - 'command -v ssh-agent >/dev/null || ( apt update && apt install -y openssh-client )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - ssh-keyscan $SERVER_IP >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
  - apt update && apt upgrade -y && apt install -y rsync
It establishes an SSH connection between the job container and the server by loading the SSH_PRIVATE_KEY and trusting SERVER_IP within the container environment. Additionally, it installs the rsync package in the container to facilitate efficient data synchronization.
Note: Both SSH_PRIVATE_KEY and SERVER_IP are stored as GitLab CI/CD variables.
- Setting Docker Configuration: Now we need to focus on running Docker on the server with our configuration. Our first step is to update the template we added to GitLab and transfer the configuration to the server.
We can do it by adding,
script:
  - export PROJECT_SLUG="nprod_${APP_NAME}_${CI_COMMIT_REF_SLUG}"
  - ssh $SSH_USER@$SERVER_IP mkdir -p non_production/$APP_NAME/$PROJECT_SLUG
  - echo $PROJECT_SLUG
  - export DOCKER_WEB_IMAGE="$CI_REGISTRY/truemark/$APP_NAME/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHORT_SHA"
  # copy the template and replace the placeholders with the actual image and env file
  - cat $dockerCompose | sed -e "s#image_created#$DOCKER_WEB_IMAGE#g" -e "s#environment_path#.env.dev#g" > docker-compose-updated.yml
  # upload the docker-compose file to the server
  - echo docker-compose-updated.yml
  - rsync -atv --delete --progress docker-compose-updated.yml $SSH_USER@$SERVER_IP:/home/deploy/non_production/$APP_NAME/$PROJECT_SLUG/docker-compose.yml
In this setup, the job adapts to the branch it's working on by exporting PROJECT_SLUG. This dynamic slug is used to create a corresponding directory on the server. We also utilize the DOCKER_WEB_IMAGE variable, which contains the image details from the earlier build_docker_image_development job. To ensure accurate configuration, we replace the image_created placeholder with the actual DOCKER_WEB_IMAGE. Additionally, we set environment_path to .env.dev to match our development environment requirements.
- Creating Containers on the Server: By adding,
# remove the last docker containers and replace them with newly created containers
- ssh $SSH_USER@$SERVER_IP "cd non_production/$APP_NAME/$PROJECT_SLUG && sudo docker-compose down"
- ssh $SSH_USER@$SERVER_IP "cd non_production/$APP_NAME/$PROJECT_SLUG && sudo docker-compose up -d"
The script above moves to the project location and stops the containers if they are running for that branch. It then brings the stack back up using the configuration created in the previous step.
- Setting Caddy Configuration: By adding,
- export WEB_PORT=$(ssh $SSH_USER@$SERVER_IP sudo docker port "$PROJECT_SLUG"_web_1 | awk -F ':' '{print $2}')
- echo $WEB_PORT
- cat $caddyTemplate | sed -e "s#url#$CI_COMMIT_REF_SLUG.$BRANCH_BASE_URL#g" -e "s#port#$WEB_PORT#g" > $PROJECT_SLUG.caddy
- rsync -atv --delete --progress $PROJECT_SLUG.caddy $SSH_USER@$SERVER_IP:/etc/caddy/non_production/$APP_NAME/
# reload the caddy configuration
- ssh $SSH_USER@$SERVER_IP sudo caddy reload --config /etc/caddy/Caddyfile
In this segment, the job exports the port where the web container is running. It then updates the Caddy template by replacing the url placeholder with $CI_COMMIT_REF_SLUG.$BRANCH_BASE_URL and the port placeholder with the dynamic WEB_PORT. This ensures the Caddy configuration aligns with the running web service. Finally, the job uploads the configuration to the server and triggers a Caddy reload to apply the changes seamlessly. The sample output below shows where WEB_PORT comes from.
With that, we have transferred all the generated configuration to our server.
Our final configuration for branch-based deployment will be as below.
# branch based image will be created here.
build_docker_image_development:
  extends: .branch_based_job
  <<: *build_image_common_setup

build_deploy_development:
  extends: .branch_based_job
  stage: development_branch_based_deploy
  image: $BASE_DOCKER_IMAGE
  environment:
    name: preview/$CI_COMMIT_REF_NAME
    url: https://$CI_COMMIT_REF_SLUG.$BRANCH_BASE_URL
    on_stop: cleanup_branch_based_deployed_environment
  before_script:
    - 'command -v ssh-agent >/dev/null || ( apt update && apt install -y openssh-client )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan $SERVER_IP >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - apt update && apt upgrade -y && apt install -y rsync
  script:
    - export PROJECT_SLUG="nprod_${APP_NAME}_${CI_COMMIT_REF_SLUG}"
    - ssh $SSH_USER@$SERVER_IP mkdir -p non_production/$APP_NAME/$PROJECT_SLUG
    - echo $PROJECT_SLUG
    - export DOCKER_WEB_IMAGE="$CI_REGISTRY/truemark/$APP_NAME/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHORT_SHA"
    # copy the template and replace the placeholders with the actual image and env file
    - cat $dockerCompose | sed -e "s#image_created#$DOCKER_WEB_IMAGE#g" -e "s#environment_path#.env.dev#g" > docker-compose-updated.yml
    # upload the docker-compose file to the server
    - echo docker-compose-updated.yml
    - rsync -atv --delete --progress docker-compose-updated.yml $SSH_USER@$SERVER_IP:/home/deploy/non_production/$APP_NAME/$PROJECT_SLUG/docker-compose.yml
    # remove the last docker containers and replace them with newly created containers
    - ssh $SSH_USER@$SERVER_IP "cd non_production/$APP_NAME/$PROJECT_SLUG && sudo docker-compose down"
    - ssh $SSH_USER@$SERVER_IP "cd non_production/$APP_NAME/$PROJECT_SLUG && sudo docker-compose up -d"
    - export WEB_PORT=$(ssh $SSH_USER@$SERVER_IP sudo docker port "$PROJECT_SLUG"_web_1 | awk -F ':' '{print $2}')
    - echo $WEB_PORT
    - cat $caddyTemplate | sed -e "s#url#$CI_COMMIT_REF_SLUG.$BRANCH_BASE_URL#g" -e "s#port#$WEB_PORT#g" > $PROJECT_SLUG.caddy
    - rsync -atv --delete --progress $PROJECT_SLUG.caddy $SSH_USER@$SERVER_IP:/etc/caddy/non_production/$APP_NAME/
    # reload the caddy configuration
    - ssh $SSH_USER@$SERVER_IP sudo caddy reload --config /etc/caddy/Caddyfile
- Production Configuration:
The configuration for production is similar to the branch-based deployment, although there are a few adjustments we have to make here.
We will add two common rules, one that runs automatically and one that runs manually.
.production_auto_job:
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: on_success

.production_manual_job:
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: manual
Since $SSH_USER doesn't have access to the production paths on the server, we will use root to update them.
build_docker_image_production:
  extends: .production_auto_job
  <<: *build_image_common_setup

build_deploy_production:
  extends: .production_manual_job
  stage: production_deploy
  image: $BASE_DOCKER_IMAGE
  environment:
    name: preview/$CI_COMMIT_REF_NAME
    url: https://$BRANCH_BASE_URL
  before_script:
    - 'command -v ssh-agent >/dev/null || ( apt update && apt install -y openssh-client )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan $SERVER_IP >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - apt update && apt upgrade -y && apt install -y rsync
  script:
    - export PROJECT_SLUG="prod_${APP_NAME}"
    - ssh $SSH_USER@$SERVER_IP sudo mkdir -p production/$APP_NAME/$PROJECT_SLUG
    - export DOCKER_WEB_IMAGE="$CI_REGISTRY/truemark/$APP_NAME/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHORT_SHA"
    # copy the template and replace the placeholders with the actual image and env file
    - cat $dockerCompose | sed -e "s#image_created#$DOCKER_WEB_IMAGE#g" -e "s#environment_path#.env.production#g" > docker-compose-updated.yml
    # remove the last docker containers and replace them with newly created containers
    - ssh $SSH_USER@$SERVER_IP "cd production/$APP_NAME/$PROJECT_SLUG && sudo docker-compose down"
    # update the docker-compose file on the server
    - rsync -atv --delete --progress docker-compose-updated.yml root@$SERVER_IP:/home/deploy/production/$APP_NAME/$PROJECT_SLUG/docker-compose.yml
    - ssh $SSH_USER@$SERVER_IP "cd production/$APP_NAME/$PROJECT_SLUG && sudo docker-compose up -d"
    - export WEB_PORT=$(ssh $SSH_USER@$SERVER_IP sudo docker port "$PROJECT_SLUG"_web_1 | awk -F ':' '{print $2}')
    - echo $WEB_PORT
    - cat $caddyTemplate | sed -e "s#url#$BRANCH_BASE_URL#g" -e "s#port#$WEB_PORT#g" > $PROJECT_SLUG.caddy
    # only the root user has access to the production caddy directory
    - rsync -atv --delete --progress $PROJECT_SLUG.caddy root@$SERVER_IP:/etc/caddy/production/
    # reload the caddy configuration
    - ssh $SSH_USER@$SERVER_IP sudo caddy reload --config /etc/caddy/Caddyfile
- Post-Deployment Notifications: Moving on, we will create a job that posts a comment on the merge request.
By adding,
post_message_to_mr:
  extends: .branch_based_job
  stage: after_deploy
  image: $BASE_DOCKER_IMAGE
  script:
    - export GITLAB_TOKEN=$TRUEMARK_GITLAB_KEY
    - apt-get update
    # install package curl
    - apt-get install -y curl
    - 'curl --location --request POST "https://gitlab.com/api/v4/projects/$CI_MERGE_REQUEST_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/notes" --header "PRIVATE-TOKEN: $GITLAB_TOKEN" --header "Content-Type: application/json" --data-raw "{ \"body\": \"## :tada: Latest changes from this :deciduous_tree: branch are now :package: deployed \n * :bulb: Please prefer to post snapshots of UI issues from this latest deployment while you review this PR {replace_with_br_tag} \n * :small_red_triangle: Preview URL might not work/exist once this Merge/Pull request is merged {replace_with_hr_tag} :tv: [Preview Deployed Application/website](https://$CI_COMMIT_REF_SLUG.$BRANCH_BASE_URL).\" }"'
This job extends the common base rule .branch_based_job, so it runs only on merge requests.
Note: TRUEMARK_GITLAB_KEY is stored as a GitLab CI/CD variable.
- Clean up Branch-Based Environment:
Moving on, we will create a cleanup job for branch-based environments. We will make sure it runs when the environment is closed or when it is triggered manually.
- Basic configuration: By adding,
cleanup_branch_based_deployed_environment:
  stage: development_branch_based_deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: never
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: manual
  when: manual
This segment of the job is added when a merge request is created and can be triggered manually. It runs in the development_branch_based_deploy stage.
- Setting Environment: By adding,
environment:
  name: preview/$CI_COMMIT_REF_NAME
  action: stop
In this section, we define the environment for our job. The name parameter specifies the name of the environment, which is set to "preview/$CI_COMMIT_REF_NAME." This dynamic name allows each environment to be associated with the respective branch or reference in the Git repository. Additionally, we use the action parameter to specify that the environment should be stopped. This means that when this job is finished, the associated environment will be halted, freeing up any resources and ensuring a clean state for the next deployment. This is particularly useful for branch-based deployments in a CI/CD pipeline.
- Setting up SSH connection: By adding,
before_script:
  - 'command -v ssh-agent >/dev/null || ( apt update && apt install -y openssh-client )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - ssh-keyscan $SERVER_IP >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
  - apt update && apt upgrade -y && apt install -y rsync
It establishes an SSH connection between the job container and the server by loading SSH_PRIVATE_KEY and trusting SERVER_IP within the container environment. Additionally, it installs the rsync package to facilitate efficient data synchronization.
Note: Both SSH_PRIVATE_KEY and SERVER_IP are stored as GitLab CI/CD variables.
- Removing Docker Containers: By adding,
script:
  - export PROJECT_SLUG="nprod_${APP_NAME}_${CI_COMMIT_REF_SLUG}"
  - export DOCKER_WEB_REPOSITORY="$CI_REGISTRY/truemark/$APP_NAME/$CI_COMMIT_REF_SLUG"
  - ssh $SSH_USER@$SERVER_IP "cd non_production/$APP_NAME/$PROJECT_SLUG && sudo docker-compose down"
In this segment of the job, we focus on removing containers running on the host server. This step is essential to ensure that resources are freed up and that there are no conflicts with existing containers.
- Removing Pulled Docker Image: By adding,
- export IMAGE_IDS=$(ssh $SSH_USER@$SERVER_IP sudo docker image list | grep -E $DOCKER_WEB_REPOSITORY | awk '{print $3}')
- echo $IMAGE_IDS
- |
  if [ -n "$IMAGE_IDS" ]; then
    for IMAGE_ID in $IMAGE_IDS; do
      ssh $SSH_USER@$SERVER_IP sudo docker rmi $IMAGE_ID
    done
  else
    echo "No '$DOCKER_WEB_REPOSITORY' images found to remove."
  fi
In the previous step, we removed the running containers for this specific branch. Now we can also remove the images pulled from the container registry, which helps free up space on the host, as the sample output below illustrates.
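For reference, docker image list prints REPOSITORY, TAG, and IMAGE ID as its first three columns, which is why awk '{print $3}' extracts the image IDs (the sample output is hypothetical):

$ sudo docker image list | grep -E registry.gitlab.com/truemark/leave_balance/feature-test
registry.gitlab.com/truemark/leave_balance/feature-test   a1b2c3d4   f2d4e6a8b0c1   2 days ago   1.2GB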
- Removing Caddy Configuration: By adding,
- ssh $SSH_USER@$SERVER_IP sudo rm /etc/caddy/non_production/$APP_NAME/$PROJECT_SLUG.caddy
- ssh $SSH_USER@$SERVER_IP sudo rm -rf /home/deploy/non_production/$APP_NAME/$PROJECT_SLUG
- ssh $SSH_USER@$SERVER_IP sudo caddy reload --config /etc/caddy/Caddyfile
This segment of the job removes the Caddy configuration and the deployment directory from the host, then reloads Caddy.
The final configuration of the cleanup job will look as below.
cleanup_branch_based_deployed_environment:
  stage: development_branch_based_deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: never
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: manual
  when: manual
  environment:
    name: preview/$CI_COMMIT_REF_NAME
    action: stop
  before_script:
    - apt update && apt upgrade -y
    - 'command -v ssh-agent >/dev/null || ( apt install -y openssh-client )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan $SERVER_IP >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - export PROJECT_SLUG="nprod_${APP_NAME}_${CI_COMMIT_REF_SLUG}"
    - export DOCKER_WEB_REPOSITORY="$CI_REGISTRY/truemark/$APP_NAME/$CI_COMMIT_REF_SLUG"
    - ssh $SSH_USER@$SERVER_IP "cd non_production/$APP_NAME/$PROJECT_SLUG && sudo docker-compose down"
    - export IMAGE_IDS=$(ssh $SSH_USER@$SERVER_IP sudo docker image list | grep -E $DOCKER_WEB_REPOSITORY | awk '{print $3}')
    - echo $IMAGE_IDS
    - |
      if [ -n "$IMAGE_IDS" ]; then
        for IMAGE_ID in $IMAGE_IDS; do
          ssh $SSH_USER@$SERVER_IP sudo docker rmi $IMAGE_ID
        done
      else
        echo "No '$DOCKER_WEB_REPOSITORY' images found to remove."
      fi
    - ssh $SSH_USER@$SERVER_IP sudo rm /etc/caddy/non_production/$APP_NAME/$PROJECT_SLUG.caddy
    - ssh $SSH_USER@$SERVER_IP sudo rm -rf /home/deploy/non_production/$APP_NAME/$PROJECT_SLUG
    - ssh $SSH_USER@$SERVER_IP sudo caddy reload --config /etc/caddy/Caddyfile
- Basic configuration: Finally, we introduce the pipeline-wide variables and stages by adding,
variables:
  APP_NAME: leave_balance
  APP_DEPLOYMENT: $CI_COMMIT_BRANCH
  BASE_DOCKER_IMAGE: ruby:3.2.2
  BRANCH_BASE_URL: review.leavebalance.com
  PRODUCTION_URL: https://www.leavebalance.com

stages:
  - build_docker_image
  - development_branch_based_deploy
  - production_deploy
  - after_deploy
Throughout the tutorial, we build the web service image, push it to the GitLab container registry, create two different environments for branch-based and production deployment, and post the deployed URL on the merge request.
Overall, the GitLab CI/CD configuration looks as below.
variables:
  APP_NAME: leave_balance
  APP_DEPLOYMENT: $CI_COMMIT_BRANCH
  BASE_DOCKER_IMAGE: ruby:3.2.2
  BRANCH_BASE_URL: review.leavebalance.com
  PRODUCTION_URL: https://www.leavebalance.com

stages:
  - build_docker_image
  - development_branch_based_deploy
  - production_deploy
  - after_deploy

.branch_based_job:
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: never
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: on_success

.production_auto_job:
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: on_success

.production_manual_job:
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: manual

.build_image_common_setup: &build_image_common_setup
  stage: build_docker_image
  image: docker:24.0.6
  services:
    - docker:24.0.6-dind
  before_script:
    - apk add docker-compose
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - CI="$CI_REGISTRY/truemark/$APP_NAME/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHORT_SHA"
    - cat $envfile > .env
    - cp config/database.yml.ci config/database.yml
    - docker-compose -f docker-compose.yml build
    - docker-compose -f docker-compose.yml push

build_docker_image_development:
  extends: .branch_based_job
  <<: *build_image_common_setup

build_docker_image_production:
  extends: .production_auto_job
  <<: *build_image_common_setup

build_deploy_development:
  extends: .branch_based_job
  stage: development_branch_based_deploy
  image: $BASE_DOCKER_IMAGE
  environment:
    name: preview/$CI_COMMIT_REF_NAME
    url: https://$CI_COMMIT_REF_SLUG.$BRANCH_BASE_URL
    on_stop: cleanup_branch_based_deployed_environment
  before_script:
    - 'command -v ssh-agent >/dev/null || ( apt update && apt install -y openssh-client )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan $SERVER_IP >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - apt update && apt upgrade -y && apt install -y rsync
  script:
    - export PROJECT_SLUG="nprod_${APP_NAME}_${CI_COMMIT_REF_SLUG}"
    - ssh $SSH_USER@$SERVER_IP mkdir -p non_production/$APP_NAME/$PROJECT_SLUG
    - echo $PROJECT_SLUG
    - export DOCKER_WEB_IMAGE="$CI_REGISTRY/truemark/$APP_NAME/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHORT_SHA"
    # copy the template and replace the placeholders with the actual image and env file
    - cat $dockerCompose | sed -e "s#image_created#$DOCKER_WEB_IMAGE#g" -e "s#environment_path#.env.dev#g" > docker-compose-updated.yml
    # update docker-compose file over server
    - echo docker-compose-updated.yml
    - rsync -atv --delete --progress docker-compose-updated.yml $SSH_USER@$SERVER_IP:/home/deploy/non_production/$APP_NAME/$PROJECT_SLUG/docker-compose.yml
    # remove last docker containers and update with newly created containers
    - ssh $SSH_USER@$SERVER_IP "cd non_production/$APP_NAME/$PROJECT_SLUG && sudo docker-compose down"
    - ssh $SSH_USER@$SERVER_IP "cd non_production/$APP_NAME/$PROJECT_SLUG && sudo docker-compose up -d"
    - export WEB_PORT=$(ssh $SSH_USER@$SERVER_IP sudo docker port "$PROJECT_SLUG"_web_1 | awk -F ':' '{print $2}')
    - echo $WEB_PORT
    - cat $caddyTemplate | sed -e "s#url#$CI_COMMIT_REF_SLUG.$BRANCH_BASE_URL#g" -e "s#port#$WEB_PORT#g" > $PROJECT_SLUG.caddy
    - rsync -atv --delete --progress $PROJECT_SLUG.caddy $SSH_USER@$SERVER_IP:/etc/caddy/non_production/$APP_NAME/
    # reload the caddy configuration
    - ssh $SSH_USER@$SERVER_IP sudo caddy reload --config /etc/caddy/Caddyfile

build_deploy_production:
  extends: .production_manual_job
  stage: production_deploy
  image: $BASE_DOCKER_IMAGE
  environment:
    name: preview/$CI_COMMIT_REF_NAME
    url: https://$BRANCH_BASE_URL
  before_script:
    - 'command -v ssh-agent >/dev/null || ( apt update && apt install -y openssh-client )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan $SERVER_IP >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - apt update && apt upgrade -y && apt install -y rsync
  script:
    - export PROJECT_SLUG="prod_${APP_NAME}"
    - ssh $SSH_USER@$SERVER_IP sudo mkdir -p production/$APP_NAME/$PROJECT_SLUG
    - export DOCKER_WEB_IMAGE="$CI_REGISTRY/truemark/$APP_NAME/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHORT_SHA"
    # copy the template and replace the placeholders with the actual image and env file
    - cat $dockerCompose | sed -e "s#image_created#$DOCKER_WEB_IMAGE#g" -e "s#environment_path#.env.production#g" > docker-compose-updated.yml
    # remove last docker containers and update with newly created containers
    - ssh $SSH_USER@$SERVER_IP "cd production/$APP_NAME/$PROJECT_SLUG && sudo docker-compose down"
    # update docker-compose file over server
    - rsync -atv --delete --progress docker-compose-updated.yml root@$SERVER_IP:/home/deploy/production/$APP_NAME/$PROJECT_SLUG/docker-compose.yml
    - ssh $SSH_USER@$SERVER_IP "cd production/$APP_NAME/$PROJECT_SLUG && sudo docker-compose up -d"
    - export WEB_PORT=$(ssh $SSH_USER@$SERVER_IP sudo docker port "$PROJECT_SLUG"_web_1 | awk -F ':' '{print $2}')
    - echo $WEB_PORT
    - cat $caddyTemplate | sed -e "s#url#$BRANCH_BASE_URL#g" -e "s#port#$WEB_PORT#g" > $PROJECT_SLUG.caddy
    # only root user has access to production
    - rsync -atv --delete --progress $PROJECT_SLUG.caddy root@$SERVER_IP:/etc/caddy/production/
    # reload the caddy configuration
    - ssh $SSH_USER@$SERVER_IP sudo caddy reload --config /etc/caddy/Caddyfile

post_message_to_mr:
  extends: .branch_based_job
  stage: after_deploy
  image: $BASE_DOCKER_IMAGE
  script:
    - export GITLAB_TOKEN=$TRUEMARK_GITLAB_KEY
    - apt-get update
    # install package curl
    - apt-get install -y curl
    - 'curl --location --request POST "https://gitlab.com/api/v4/projects/$CI_MERGE_REQUEST_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/notes" --header "PRIVATE-TOKEN: $GITLAB_TOKEN" --header "Content-Type: application/json" --data-raw "{ \"body\": \"## :tada: Latest changes from this :deciduous_tree: branch are now :package: deployed \n * :bulb: Please prefer to post snapshots of UI issues from this latest deployment while you review this PR {br_tag_name} \n * :small_red_triangle: Preview URL might not work/exist once this Merge/Pull request is merged {hr_tag_name} :tv: [Preview Deployed Application/website](https://$CI_COMMIT_REF_SLUG.$BRANCH_BASE_URL).\" }"'

cleanup_branch_based_deployed_environment:
  stage: development_branch_based_deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: never
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: manual
  when: manual
  environment:
    name: preview/$CI_COMMIT_REF_NAME
    action: stop
  before_script:
    - apt update && apt upgrade -y
    - 'command -v ssh-agent >/dev/null || ( apt install -y openssh-client )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan $SERVER_IP >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - export PROJECT_SLUG="nprod_${APP_NAME}_${CI_COMMIT_REF_SLUG}"
    - export DOCKER_WEB_REPOSITORY="$CI_REGISTRY/truemark/$APP_NAME/$CI_COMMIT_REF_SLUG"
    - ssh $SSH_USER@$SERVER_IP "cd non_production/$APP_NAME/$PROJECT_SLUG && sudo docker-compose down"
    - export IMAGE_IDS=$(ssh $SSH_USER@$SERVER_IP sudo docker image list | grep -E $DOCKER_WEB_REPOSITORY | awk '{print $3}')
    - echo $IMAGE_IDS
    - |
      if [ -n "$IMAGE_IDS" ]; then
        for IMAGE_ID in $IMAGE_IDS; do
          ssh $SSH_USER@$SERVER_IP sudo docker rmi $IMAGE_ID
        done
      else
        echo "No '$DOCKER_WEB_REPOSITORY' images found to remove."
      fi
    - ssh $SSH_USER@$SERVER_IP sudo rm /etc/caddy/non_production/$APP_NAME/$PROJECT_SLUG.caddy
    - ssh $SSH_USER@$SERVER_IP sudo rm -rf /home/deploy/non_production/$APP_NAME/$PROJECT_SLUG
    - ssh $SSH_USER@$SERVER_IP sudo caddy reload --config /etc/caddy/Caddyfile
Authentication of Docker on the Server
Since we haven't yet given Docker credentials to log in to the container registry, we need to add the configuration below inside the .docker directory.
> mkdir -p /home/deploy/.docker
> cd /home/deploy/.docker
> touch config.json && nano config.json
{
  "auths": {
    "https://registry.gitlab.com": {
      "auth": "base64-encoded-{user_name}-and-{password}"
    }
  }
}
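The auth value is simply the base64 encoding of username:password (a deploy token with registry read access works well here); a sketch of generating it, with placeholder credentials:

# Produce the base64 string expected in config.json's "auth" field
# ('deploy-bot' and the token are hypothetical placeholders)
echo -n 'deploy-bot:your-access-token' | base64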
Add Environment Variables
Let's make sure we have added the environment files .env.dev and .env.production on the server.
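As a reference point, a minimal .env.dev might carry the variables our compose file expects; all values below are placeholders:

# .env.dev — placeholder values only
RAILS_ENV=development
RACK_ENV=development
RAILS_MAX_THREADS=5
DATABASE_HOST=db
DATABASE_USER=postgres
DATABASE_PASSWORD=unbreakable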
Conclusion
In summary, our journey covered the installation of the Caddy web server on our server and its configuration for both production and non-production deployments using Caddyfiles.
Additionally, we successfully set up distinct GitLab jobs for branch-based deployments, production releases of our Docker-based application, post-deployment notifications, and cleanup of branch-based environments.
Thank you for joining us on this learning adventure!