# How I Publish This Site
I've previously written about how I run Docker at home. That infrastructure is a core component of running this site. In addition to Docker, Gitlab is the orchestrator for many of my homelab services. In this post we will take a look at how I take Markdown and, with the power of Material for MkDocs, deploy the site you are currently reading.
## Material for MkDocs
MkDocs is one of many static site generators out there, similar to Jekyll, Hugo, etc. Material for MkDocs is technically a theme for MkDocs, but it introduces a number of modern design principles. There is no backing database: you write your content in the Markdown "language" and MkDocs generates the HTML that you serve with your web server. Serving a statically generated site can be as simple as taking the rendered HTML, getting it to your web server, done. My MkDocs content lives in a Gitlab.com code repo, allowing me to benefit from different branches (like a dev or prod environment) and then orchestrate deployments via CI/CD.
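For readers new to MkDocs: the entire site is driven by one config file plus a folder of Markdown. A minimal, purely illustrative `mkdocs.yml` for a Material site (not this site's actual config) might look like:

```yaml
# Minimal illustrative mkdocs.yml for a Material for MkDocs site
site_name: My Homelab Site
theme:
  name: material
nav:
  - Home: index.md
  - About: about.md
```

With this in place, `mkdocs build` renders everything under `docs/` into static HTML in the `site/` directory, ready to be copied to any web server.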
## Gitlab and Gitlab Runners
Gitlab is a Git/DevOps platform that is similar to Github. Gitlab comes in a SaaS or self-managed version, with a variety of licenses available. For a general homelab environment, either the free SaaS version or the self-managed version should work. One of Gitlab's core features is its DevOps capability, similar to Github Actions. Using compute called Gitlab Runners, we can run code pipelines: jobs that are fully defined by us to accomplish whatever it is we need. For this project, that means building the HTML content, building the Docker image to serve this content, and deploying that Docker image. By default, jobs on Gitlab.com use their Runners, but we can also host our own Runners so that any job we run is actually run inside our homelab!
Instructions for our jobs are contained within a `.gitlab-ci.yml` file that we add to our Git repo, in our case hosted on Gitlab.com. Like Github, repos on Gitlab can be public or private. I use private repositories on Gitlab.com so only I am able to view and interact with my code. The limitations of the "Free" SaaS plan don't really impact how I work with Gitlab. If I do run into license limits, I would just move to the self-managed plan and host Gitlab at home, using the same deployment methods we will talk about in this article.
At home I run 3 Gitlab Runner virtual machines, all based on Debian 12. These runners are all configured the same way and are joined to my Gitlab.com tenant via a registration token that can be found in your Gitlab.com account.
Below is my documentation for configuring my Gitlab Runners:

**Gitlab Runners**

Most of the homelab is automated off Gitlab.com pipelines running internally via home-hosted Gitlab Runners.
**Runner Specs**

- Debian 12
- 4096 MB RAM
- 1 socket, 4 CPU cores
- 60 GB disk
**Installing a Debian 12 Runner**

1. Create the runner in the Internal Server network (<my internal server network>)
2. Install Debian 12
    - Use a static IP
    - Do not install a desktop environment
3. Finish Debian configuration
    - SSH to the runner with your <personal> account
    - Become root via `su -`
    - Update the system via `apt update && apt upgrade -y`
    - Install the required packages as root via `apt install sudo curl vim python3-pip python3-virtualenv jq pngquant`
    - Add <your personal user> to the sudoers group
        - `usermod -aG sudo <your user>`
    - Edit the sudoers file via `visudo`
        - `<your user> ALL=(ALL:ALL) ALL`
    - Reboot
4. Install Docker
    - Instructions can be found on the [docker site](https://docs.docker.com/engine/install/debian/)
    - Configure the Docker repo

      ```bash
      # Add Docker's official GPG key:
      sudo apt-get update
      sudo apt-get install ca-certificates curl gnupg
      sudo install -m 0755 -d /etc/apt/keyrings
      curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
      sudo chmod a+r /etc/apt/keyrings/docker.gpg

      # Add the repository to Apt sources:
      echo \
        "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
        "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
        sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
      sudo apt-get update
      ```

    - Install Docker
        - `sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin`
    - Enable Docker
        - `sudo systemctl start docker && sudo systemctl enable docker`
5. Configure the Runner
    - Install the [Gitlab Runner repo](https://docs.gitlab.com/runner/install/linux-repository.html)
        - `curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash`
    - Install the Gitlab Runner package
        - `sudo apt-get install gitlab-runner`
    - Configure the Shell Runner
        - `sudo gitlab-runner register`
        - You will need a registration token from Gitlab to complete this
    - Configure the Docker Runner
        - `sudo gitlab-runner register`
        - You will need a registration token from Gitlab to complete this
    - Allow the Gitlab Runner to run Docker
        - `sudo usermod -aG docker gitlab-runner`
    - Reboot
6. Configure the Backups Network Mount
    - Create the local backups folder
        - `sudo mkdir /var/nas_backups`
    - Add the following to `/etc/fstab`
        - `//<my nas backups location> /var/nas_backups cifs username=<your nas user>,password=<nas password>,file_mode=0777,dir_mode=0777`
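After registering both executors, the runner's `/etc/gitlab-runner/config.toml` ends up with one `[[runners]]` entry per registration. A sketch of what that file might look like, with all names and values illustrative rather than my actual configuration:

```toml
# Illustrative /etc/gitlab-runner/config.toml after registering both executors.
concurrent = 4

[[runners]]
  name = "runner-01-shell"
  url = "https://gitlab.com"
  token = "<runner token>"
  executor = "shell"

[[runners]]
  name = "runner-01-docker"
  url = "https://gitlab.com"
  token = "<runner token>"
  executor = "docker"
  [runners.docker]
    # Default image used when a job does not specify its own
    image = "debian:12"
```

Each entry carries its own token and executor, which is why re-running `gitlab-runner register` for each executor works on a single VM.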
As mentioned, you can also use the Runners that Gitlab.com provides. There are, however, some downsides to doing this.

- On the "Free" plan you are limited to a set number of runner compute minutes per month.
    - While Gitlab.com is generous with these, depending on how you use your runner, you may quickly run out of them.
- Untrusted, shared compute resources are likely not a great idea...
- Gitlab.com runners are "somewhere in the cloud" and will not be able to interact with the rest of your homelab without some potentially risky, and certainly complicated, configuration.
## Gitlab Runner Executors
When configuring a Gitlab Runner you have the option of installing multiple executors, or environments in which to run your jobs. For example, if you need to run a Python script, you might use a Docker executor and specify a Docker image with Python installed, which will be able to natively run your Python code. If you need to SCP files to a system in your homelab, you may be looking for a Shell executor. The Gitlab.com documentation on executors details what options you have and what each executor does.
**Info**

You can install multiple executors on one Gitlab Runner! Just re-run the installation and choose a different executor each time.
I tend to run a Docker and a Shell executor on my Gitlab Runners. The Docker executor allows a large amount of flexibility, as just about any Docker image can be used to run your jobs, while the Shell executor is used for any virtual machine level operations I may need. In our `.gitlab-ci.yml` file, we use `tags` to instruct our runners which executor they should use for a specific stage.
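As an illustration, a `.gitlab-ci.yml` fragment might route jobs to executors via tags like this. The job names, tag names, and the scp target here are hypothetical, not my actual configuration:

```yaml
# Hypothetical fragment: tags route each job to a matching runner/executor.
run-python-script:
  image: python:3.11       # image used by the Docker executor
  tags:
    - docker               # picked up by a runner registered with the Docker executor
  script:
    - python script.py

copy-files:
  tags:
    - shell                # picked up by a runner registered with the Shell executor
  script:
    - scp site.tar.gz user@<internal host>:/var/www/
```

A job only runs on runners whose tags match, so tagging is how you pick Docker vs Shell per stage.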
## Executing Jobs from Gitlab.com
To execute jobs from Gitlab you need two things: a registered Gitlab Runner and a Gitlab repo with a valid `.gitlab-ci.yml` file. `.gitlab-ci.yml` files can be fairly simple or incredibly complicated; Gitlab.com has a full listing of all available syntax.

For this site, this is my current `.gitlab-ci.yml` file:
```yaml
stages:
  - build-site-dev
  - build-container-dev
  - deploy-dev
  - build-site-prod
  - build-container-prod

before_script:
  - virtualenv venv
  - source venv/bin/activate

build-dev-site:
  stage: build-site-dev
  image: python:3.11.4
  except:
    - main
  tags:
    - mktbs
    - shell
  script:
    - pip install git+https://${gh_key}@github.com/squidfunk/mkdocs-material-insiders.git mkdocs-rss-plugin pillow material-plausible-plugin
    - cd wobsite
    - mkdocs build
  artifacts:
    paths:
      - wobsite/site/

build-dev-container:
  stage: build-container-dev
  image: docker:23.0.0
  except:
    - main
  tags:
    - mktbs
    - shell
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY/mktbsio/mktbsnet:dev .
    - docker push $CI_REGISTRY/mktbsio/mktbsnet:dev

trigger-deploy-dev:
  stage: deploy-dev
  except:
    - main
  tags:
    - mktbs
    - shell
  script:
    - 'curl -k --request POST "https://docker.mktbs.io:9443/api/stacks/webhooks/<my webhook>"'

build-prod-site:
  stage: build-site-prod
  image: python:3.11.4
  only:
    - main
  tags:
    - mktbs
    - shell
  script:
    - pip install git+https://${gh_key}@github.com/squidfunk/[email protected] mkdocs-rss-plugin pillow material-plausible-plugin
    - cd wobsite
    - mkdocs build
  artifacts:
    paths:
      - wobsite/site/

build-prod-container:
  stage: build-container-prod
  image: docker:23.0.0
  only:
    - main
  tags:
    - mktbs
    - shell
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY/mktbsio/mktbsnet .
    - docker push $CI_REGISTRY/mktbsio/mktbsnet:latest
```
## Deploying the Site
When we deploy the site, we build off two different branches (a dev and a prod branch), so having some knowledge of Git really helps. While we only have one Git repo, by using our `.gitlab-ci.yml` file we can build and deploy these branches in different ways.
While both our dev and prod HTML files are built similarly, the deployment of the dev and prod containers differs. For the build, we have a Git repo with our Markdown files. We use the MkDocs code, via the Material for MkDocs project, to convert those files to rendered HTML. After building the site with MkDocs, we pass those HTML files to a job that builds a Docker container with a simple Nginx web server configuration. When we run this container, our Nginx configuration simply serves the HTML files.
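The post doesn't show the Dockerfile itself, but given the pipeline above, a minimal sketch of a Dockerfile that serves the built site with Nginx could look like this. The `nginx:alpine` base image and the destination path are assumptions, not my actual file:

```dockerfile
# Hypothetical sketch - the real Dockerfile isn't shown in this post.
# Assumes the build-site job exported the rendered HTML under wobsite/site/.
FROM nginx:alpine
COPY wobsite/site/ /usr/share/nginx/html/
```

Because the HTML is an artifact from the earlier build stage, the container build stays tiny: base image plus static files.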
For our dev site, we automatically deploy a new version of the site when changes are made to the Git repo, but only on the dev branch. Our prod site only supports manual deployment, which allows us to review any changes on the dev site before promoting them.
## Building the Docker Container
One design decision I made with this project was to build a custom Docker container as part of the deployment process, using Gitlab.com's private container registry. This allows me to build containers and be the only person able to use them, like a private Docker Hub. I could have simply built the site, SCP'd the files to a web server somewhere, or committed them to another (or the same) Git repo and just used Portainer to bring the files into an existing Nginx web server container. It felt like building a Docker container and deploying it via Portainer was the cleanest way to go about this.
## Stages

This project uses several stages to actually build everything. Since I'm using Portainer to run container workloads, I found that getting the site into a Docker container was the most effective way to deploy for my needs at home. Let's talk about `.gitlab-ci.yml` stages. Stages are logical sections of your build pipeline. Looking at my stages below, you can see that I've divided the pipeline into building my dev and prod sites (since I use different release versions of the Material for MkDocs project on dev and prod), deploying my dev site, and building the containers for my dev and prod sites.
```yaml
stages:
  - build-site-dev
  - build-container-dev
  - deploy-dev
  - build-site-prod
  - build-container-prod
```
## Build Site Dev and Prod
In my `build-site-dev` and `build-site-prod` stages we actually build the site, converting the Markdown files into the rendered HTML that we will add to our web server.

Let's take a look at the dev build stage.
```yaml
build-dev-site:
  stage: build-site-dev
  image: python:3.11.4
  except:
    - main
  tags:
    - mktbs
    - shell
  script:
    - pip install git+https://${gh_key}@github.com/squidfunk/mkdocs-material-insiders.git mkdocs-rss-plugin pillow material-plausible-plugin
    - cd wobsite
    - mkdocs build
  artifacts:
    paths:
      - wobsite/site/
```
- We define the name of the stage.
- We specify the image (in this case the Docker image) that we would like to use for this stage.
    - Since MkDocs is a Python project, we use a Python image to retrieve the Material for MkDocs project and run the build process.
- The `except` directive will be discussed below.
- Tags are how we specify which Gitlab Runner we want to run our stage.
    - We may wish to run specific jobs on specific runners. A perfect example is using both an x86_64 CPU based runner and an Arm CPU based runner to build native versions of a Docker image.
- The script section allows us to run specific commands and is the core of our stage/pipeline. This is where we are doing the actual work in each stage.
    - In this example, since we are running in a Docker image that has Python installed already, we can install the Python packages that we need via `pip`, the Python package manager. We then navigate to the directory where our Markdown files are located and finally we build our site!
- Artifacts are files that are passed between jobs. Since we are going to be copying our HTML files into a web server later, we specify the file path where the `mkdocs build` output files (our HTML files) are dropped.
## Except and Only
So far I have talked about dev and prod branches in my `.gitlab-ci.yml` file, and discussed the differences between those branches. Technically I do not have a dev and a prod branch; I have a prod branch and a "not prod" branch. One of the ways we can be selective in our `.gitlab-ci.yml` files is by using the `except` and `only` sections of our stages. These directives are fairly literal: the `only` directive will only apply when the pipeline runs on a branch with the name specified in the directive, while the `except` directive will only apply when the pipeline runs on a branch other than the name specified in the directive.

In Git, it is common to see master or main being the "prod" branch, with other branches being named as descriptively as the project maintainers would like. Using these directives we can have dramatically different stages in our `.gitlab-ci.yml` file based on the Git branch we are running. By committing new changes to a Git branch named "dev" (or anything but main) we build our dev site and our dev container.
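Worth noting: Gitlab's documentation now steers new pipelines toward the `rules` keyword rather than `only`/`except`. A hypothetical `rules`-based equivalent of this branch gating, using the job names from my pipeline, might look like:

```yaml
# Hypothetical rules-based equivalent of the only/except branch gating.
build-prod-site:
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'   # run only on main (prod)

build-dev-site:
  rules:
    - if: '$CI_COMMIT_BRANCH != "main"'   # run on any branch except main
```

The behavior is the same; `rules` just makes the condition explicit and composable.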
## Deploying the Dev Site
For the dev site, which is hosted on a Docker host only available on my home network, I don't want to take the manual effort to go and update the Portainer stack every time I make a change. Since the Gitlab Runners that run all of this code live on my home network, I can utilize Portainer's webhook capabilities to trigger automatic updates to the stack. I simply create a final stage in `.gitlab-ci.yml` that only runs after the site and the container are built, triggering Portainer to pull the new container into the stack and deploy it.
```yaml
trigger-deploy-dev:
  stage: deploy-dev
  except:
    - main
  tags:
    - mktbs
    - shell
  script:
    - 'curl -k --request POST "https://docker.mktbs.io:9443/api/stacks/webhooks/<my webhook>"'
```
## Deploying the Prod Site

Deploying the prod site is less interesting than the dev site. Since this is just deploying a new container in Portainer, I navigate to the stack in Portainer and click the button to pull a new version of the Docker container!
## The Full Process
To build my website, I start by writing my content in Markdown (specific to the Material for MkDocs syntax). Once I have written new content, I commit those changes to a new branch in my Gitlab repo for my website; I tend to call that branch "dev". After I commit these changes, the dev branch pipeline runs on my Gitlab Runners, which builds the site, creates a new version of the dev Docker container, pushes that container to the Gitlab.com container registry, and triggers a new deployment of my dev site.

After I merge my dev branch into my main (or prod) branch, the prod site is built, and the prod container is built and pushed to the Gitlab.com container registry. Once this happens, I trigger a new deployment of my prod site.
```mermaid
flowchart TB
    subgraph Dev
    a1["Commit changes to dev branch"]-->a2
    a2["Build dev site"]-->a3
    a3["Build dev container"]-->a4
    a4["Hit Portainer webhook to deploy dev site"]
    end
    subgraph Prod
    b1["Commit changes to prod branch"]-->b2
    b2["Build prod site"]-->b3
    b3["Build prod container"]-->b4
    b4["Deploy prod site manually"]
    end
    Dev -- Upon Git merge to main --> Prod
```
Material for MkDocs makes writing documentation or a website an enjoyable experience. Tying this into a CI/CD pipeline process allows for new content and edits in a dev environment with easy deployments to the production site when everything looks good.