Table of Contents
- Architecture and decisions
- Preparing a Dockerfile
- Defining a tagging strategy
- Creating the full deploy.yml workflow
- Automatic deployment over SSH or manual pull?
- Authenticating the server to the private registry
- Preparing the server and the .env file
- Creating docker-compose.prod.yml
- Rollout with Docker Compose
- Configuring Nginx
- Deployment observability
- Rollback and image management
- Backup, volumes, and persistence
- Example deployment on the server
- Security and real-world production
- When to move to Kubernetes
- Conclusions
- References
Not every application needs Kubernetes, Helm, an ingress controller, a secret operator, and the operational overhead that comes with them. In many cases, especially for small teams, internal products, single-tenant SaaS deployments, self-hosted environments, or architectures built around one or a few well-controlled hosts, a much simpler pattern works extremely well: build with GitHub Actions, publish images to GHCR, and deploy to a Linux server through Docker Compose behind Nginx.
This approach works well when the main requirement is a pipeline that is clear, repeatable, affordable to operate, and solid enough for real production use, without introducing an orchestrator the team does not have the time, need, or skills to maintain. It becomes even more attractive when aggressive autoscaling is not required, there are not dozens of independent microservices, and the number of daily deployments remains reasonable.
Of course, there are also cases where this pattern starts to feel limiting: multiple active hosts at the same time, strict zero-downtime requirements, advanced rollout strategies, distributed self-healing, multi-node scheduling, richer service discovery, native secret management, and workloads that scale up and down quickly. We will come back to that at the end, because it is important to understand when a sober approach is enough and when it is time to move up a level.
In this guide we will walk through the full end-to-end path: architecture, initial rationale, a multi-stage Dockerfile, the GitHub Actions workflow, tagging strategy, deployment with Docker Compose, rollback, hardening, observability, and realistic security measures. The goal is not just to show a few YAML files, but to explain why certain choices work well in production and what their limits are.
Architecture and decisions
Why GHCR
GHCR, or GitHub Container Registry, is often a very sensible choice when the code already lives on GitHub and the pipeline runs on GitHub Actions. The main advantage is ecosystem consistency: repositories, workflows, packages, permissions, and tokens all stay within the same boundary. For a small team, or for a codebase managed entirely on GitHub, that reduces operational friction, removes an integration point, and makes the build-push-deploy flow very straightforward.
Compared to Docker Hub, GHCR is usually more convenient when you want to tie images and repositories closely together, avoid some of the public/private model limitations of Docker Hub, and use the workflow’s GITHUB_TOKEN directly for pushes. Docker Hub remains perfectly valid, but GHCR tends to feel more natural in GitHub-centric projects.
Compared to ACR or ECR, the picture changes: cloud-native registries become very appealing when the infrastructure already runs in Azure or AWS, when you want deep IAM integration, private networking, in-cloud pulls, or organizational policies that are already standardized. If deployment happens on a traditional Linux VM, possibly self-hosted, and the team already uses GitHub for everything else, GHCR often represents the best compromise between simplicity and reliability.
In short, this pattern makes a lot of sense for:
- small and medium-sized teams;
- single servers or a small number of controlled hosts;
- self-hosted environments or dedicated VPS instances;
- .NET, Node.js, or other containerized workloads without an immediate need for Kubernetes;
- organizations that already use GitHub as the operational center of their release cycle.
It makes less sense, or should at least be reconsidered, when the registry needs to live tightly inside a cloud provider, when the network is heavily segmented, when enterprise policies are already standardized elsewhere, or when the number of environments, teams, and services makes a more structured platform preferable.
Reference topology
The topology we will use as reference is intentionally simple:
- a GitHub repository containing application code and a Dockerfile;
- GitHub Actions building and publishing the image to GHCR;
- an Ubuntu server with Docker Engine, Docker Compose, and Nginx;
- one or more containerized applications behind a TLS reverse proxy;
- optional supporting services, either local or external, such as a database, Redis, a broker, or object storage.
This is the classic architecture that, when set up properly, allows repeatable deployments without bringing in the full complexity of an orchestrator too early.
Minimal components of the flow
The full flow looks like this: the developer pushes to main, GitHub Actions builds and pushes the image to GHCR, the server pulls the new tag, and Docker Compose updates the service. Around that core, you add the pieces that make a real difference in production: immutable tags, health checks, rollback, secret management, notifications, logging, and security rules around SSH access.
Preparing a Dockerfile
A good deployment almost always starts with a good Dockerfile. A multi-stage build matters for two reasons: it keeps the build phase separate from the runtime image, and it dramatically reduces the final surface area, both in terms of image size and unnecessary dependencies. This applies to .NET, but also to Node.js, Go, Java, and practically any modern stack.
Here is a neutral example for a .NET web app:
```dockerfile
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . ./
RUN dotnet restore ./MyApp/MyApp.csproj
RUN dotnet publish ./MyApp/MyApp.csproj -c Release -o /app/publish /p:UseAppHost=false

FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS runtime
WORKDIR /app
# wget is used by the HEALTHCHECK below and is not included in the slim base image
RUN apt-get update && apt-get install -y --no-install-recommends wget \
    && rm -rf /var/lib/apt/lists/*
RUN adduser --disabled-password --home /app --gecos "" appuser
USER appuser
COPY --from=build /app/publish ./
EXPOSE 8080
ENV ASPNETCORE_URLS=http://+:8080
HEALTHCHECK --interval=30s --timeout=5s --start-period=20s --retries=3 \
    CMD wget --no-verbose --tries=1 --spider http://127.0.0.1:8080/health || exit 1
ENTRYPOINT ["dotnet", "MyApp.dll"]
```
There are four important details worth highlighting:
- the build stage uses the SDK, while the runtime stage uses only the runtime that is actually needed;
- the final container runs as a non-root user;
- the exposed port is explicit and consistent with the application’s configuration;
- the HEALTHCHECK makes the container observable both to Compose and to deployment scripts.
If you are not deploying a .NET web app but, for example, a Node.js application, the logic stays the same: full build stage, minimal runtime stage, non-privileged user, and a credible health check.
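As a comparison, here is what the same pattern might look like for a hypothetical Node.js application; the app layout, `npm run build` script, `dist/server.js` entry point, and `/health` endpoint are all assumptions:

```dockerfile
# Build stage: full toolchain, dev dependencies included
FROM node:20 AS build
WORKDIR /src
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: slim image, production dependencies only
FROM node:20-slim AS runtime
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /src/dist ./dist
USER node
EXPOSE 8080
HEALTHCHECK --interval=30s --timeout=5s --start-period=20s --retries=3 \
    CMD node -e "fetch('http://127.0.0.1:8080/health').then(r => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))"
CMD ["node", "dist/server.js"]
```

The structure mirrors the .NET version: the `node:20` image does the build, the slim image carries only what is needed at runtime, and the `node` user shipped with the official image replaces a root process.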
Defining a tagging strategy
One of the most common mistakes in containerized deployments is relying too heavily on the latest tag. Convenient in development, risky in production. The problem is not that latest is inherently “wrong”; the problem is that it is not immutable. Today it points to one image, tomorrow to another. When something goes wrong, reconstructing what was actually running becomes much more fragile.
In production, it is always a good idea to publish at least one immutable tag, typically based on the commit SHA. If you also have versioned releases, you can add semver tags as well. A reasonable set looks like this:
- latest for convenience, if you really want to keep it;
- sha-abc1234 as the actual immutable deployment reference;
- v1.4.2 when a semantic release exists.
The tag used for the real deployment should be the immutable one. It is the safest choice, the easiest to audit, and much easier to use for rollbacks.
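To make this concrete, the immutable reference can be derived mechanically from the commit. A minimal sketch, with placeholder values for the image name and SHA; in CI the same result comes from `git rev-parse --short HEAD` or from the metadata action:

```shell
# Derive the immutable tag from a short commit SHA (values are illustrative)
IMAGE="ghcr.io/myorg/myapp"
GIT_SHA="abc1234"
IMMUTABLE_TAG="sha-${GIT_SHA}"
echo "${IMAGE}:${IMMUTABLE_TAG}"
# → ghcr.io/myorg/myapp:sha-abc1234
```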
Creating the full deploy.yml workflow
This is where we get to the heart of the pipeline. A serious workflow should do at least the following: check out the code, log in to the registry, generate metadata, build and push the image with caching, optionally generate attestations or an SBOM, and finally deploy or notify. Below is a complete example, intentionally neutral, suitable for a single containerized application.
```yaml
name: Build and Deploy

on:
  push:
    branches: [ "main" ]
  workflow_dispatch:

permissions:
  contents: read
  packages: write
  id-token: write

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository_owner }}/myapp

jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      image_tag: ${{ steps.meta.outputs.version }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract Docker metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=raw,value=latest
            type=sha,prefix=sha-
            type=ref,event=tag
          flavor: |
            latest=true

      - name: Build and push image
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
          provenance: true
          sbom: true

  deploy:
    runs-on: ubuntu-latest
    needs: build
    if: github.ref == 'refs/heads/main'
    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_KEY }}
          port: ${{ secrets.SSH_PORT }}
          script: |
            cd ${{ secrets.DEPLOY_PATH }}
            export IMAGE_TAG=${{ needs.build.outputs.image_tag }}
            docker compose pull app
            docker compose up -d --no-deps app
            docker image prune -f

      - name: Post-deploy health check
        run: |
          for i in 1 2 3 4 5; do
            curl -fsS https://example.com/health && exit 0
            sleep 5
          done
          exit 1
```
Let us take a closer look at the most important parameters in this file; permissions deserves particular attention. contents: read is enough to read the repository, packages: write is required to publish the image to GHCR, while id-token: write becomes useful if the pipeline needs to issue OIDC tokens toward external systems. It is always better to start from the minimum required permissions, not the maximum.
docker/login-action is the cleanest way to authenticate the runner to the registry. With GHCR, for pushes from the workflow, GITHUB_TOKEN is almost always the right choice: it avoids introducing unnecessary PATs into the pipeline.
docker/metadata-action is extremely useful because it avoids managing tags by hand. It can generate tags based on SHA, branch, semver, or other repository events, while keeping the image and the source aligned.
docker/build-push-action is the block that does the real work. The type=gha cache uses GitHub Actions internal storage and greatly speeds up repeated builds, especially for multi-stage images. The provenance: true and sbom: true options add attestations and a Software Bill of Materials, two features that are becoming very useful even outside the strictest enterprise environments.
In the deploy job, the use of --no-deps matters: it avoids touching dependent services that do not need to be restarted. It is a small choice, but in real environments it saves a surprising number of unnecessary restarts.
Automatic deployment over SSH or manual pull?
This is where pragmatism matters. Automatic deployment over SSH from GitHub Actions is convenient and fast, but it is not always the best choice. If the server is protected by a strict IP allowlist, if you do not want to expose SSH to GitHub-hosted runners, or if the team prefers a human checkpoint before rollout, it may make more sense to stop at the GHCR push and complete the deployment manually on the server.
The automatic pattern works well when:
- the server can accept connections from the runners in use;
- the team wants to reduce manual intervention;
- the release flow is already standardized enough.
The manual or semi-manual pattern works better when:
- SSH access is heavily restricted;
- the deployment context needs to be validated before rollout;
- you want to publish the image first and decide later when to activate it.
There is no universally right answer. For many small or self-hosted environments, the most solid solution is: automatic build and push, then a controlled manual deployment with a few explicit commands.
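If you go the semi-manual route, a small guard on the server can refuse non-immutable tags before anything is pulled. A sketch; the accepted tag formats are assumptions matching the strategy above:

```shell
# Reject mutable tags such as "latest" before running docker compose pull
is_immutable_tag() {
  case "$1" in
    "sha-"[0-9a-f]*) return 0 ;;       # commit-based tag
    v[0-9]*.[0-9]*.[0-9]*) return 0 ;; # semver release tag
    *) return 1 ;;
  esac
}

is_immutable_tag "sha-abc1234" && echo "sha tag accepted"
is_immutable_tag "v1.4.2" && echo "semver tag accepted"
is_immutable_tag "latest" || echo "latest rejected"
```

Calling this at the top of a deploy script, with `exit 1` on rejection, turns the immutable-tag convention into an enforced rule rather than a habit.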
Authenticating the server to the private registry
If GHCR images are private, the server must authenticate before it can pull them. The most common path is a docker login ghcr.io performed with a PAT that has at least the read:packages scope. That login creates the ~/.docker/config.json file, which Docker will use for subsequent pulls.
```shell
docker login ghcr.io -u YOUR_GITHUB_USERNAME
```
If you want to automate this part, you can run it once using a dedicated deployment user. In stricter environments, tools such as crane or short-lived token approaches can be used, but for many self-hosted servers a PAT limited to read:packages, managed carefully and assigned to a dedicated technical account, remains a pragmatic solution.
Caution matters here: avoid logging in with a personal user, avoid overly broad token scopes, and avoid scattering the config.json file across shared home directories or poorly controlled backups.
Preparing the server and the .env file
Once the server is ready, it is a good idea to create a stable deployment directory, for example /var/www/myapp or /opt/myapp, containing the docker-compose.prod.yml file, the optional .env, and any configuration assets the deployment needs.
The .env file is still today a common and perfectly reasonable solution, but it must be treated properly: 600 permissions, a dedicated owner, never committed to the repository, never copied into folders accessible to other users, and never included in public or carelessly synchronized backups.
```shell
chmod 600 .env
chown deploy:deploy .env
```
If the infrastructure maturity level allows it, it is worth considering better solutions for the most sensitive secrets: Docker secrets, SOPS, Vault, or equivalent services. It is not always necessary to introduce them immediately, but at least for critical credentials and more exposed environments it helps to remember that .env is not the only possible model.
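A small pre-deploy check can make the permission rule enforceable rather than aspirational. A sketch, demonstrated here on a throwaway file; `stat -c` assumes GNU coreutils, as found on Ubuntu:

```shell
# Refuse to deploy if the env file is readable by group or others
check_env_perms() {
  mode=$(stat -c '%a' "$1")
  [ "$mode" = "600" ]
}

touch demo.env
chmod 600 demo.env
check_env_perms demo.env && echo "demo.env permissions ok"
rm demo.env
```

Wired into the deploy script as `check_env_perms .env || exit 1`, this catches the classic mistake of an `.env` left world-readable after a hurried edit.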
Creating docker-compose.prod.yml
The production Compose file should not be a mechanical copy of the development one. In production, you need at least registry-backed images, sensible restart policies, coherent health checks, and explicitly declared volumes.
```yaml
services:
  app:
    image: ghcr.io/myorg/myapp:${IMAGE_TAG:-latest}
    container_name: myapp
    env_file:
      - .env
    restart: unless-stopped
    ports:
      - "127.0.0.1:8080:8080"
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://127.0.0.1:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 20s
    volumes:
      - app-data:/app/data

volumes:
  app-data:
```
restart: unless-stopped is a simple but useful choice: the container comes back up after a reboot or a crash, without forcing you to rely on improvised scripts. Binding the service to local loopback, rather than exposing it on every interface, is also a good habit when Nginx sits in front of it.
Rollout with Docker Compose
Docker Compose can handle very respectable deployments, but it is not Kubernetes. That should be stated clearly. The classic:
```shell
docker compose pull app
docker compose up -d --no-deps app
```
works well to update one service without touching its dependencies, but it does not by itself guarantee a sophisticated rolling update or absolute zero downtime. In many cases the interruption window is tiny, but it exists: the old container is stopped and the new one is started.
For non-critical applications, or for systems that can tolerate small interruptions, this is often enough. If you want to reduce the risk further, you need at least health checks, sensible warm-up time, and some kind of switching strategy in front of the service, usually through Nginx.
A minimal blue/green pattern with Nginx
A simple form of blue/green on a single host consists of keeping two services, for example app_blue and app_green, and having Nginx point to one or the other through an upstream block. You update the inactive container, verify the health check, then switch the upstream and reload Nginx. It is not as elegant as a real orchestrator, but in some environments it is surprisingly effective.
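As a sketch, the switch can live in a single upstream block. The service names and port mapping below are assumptions (app_blue on 8081, app_green on 8082):

```nginx
# To switch, comment/uncomment the servers, then: nginx -t && nginx -s reload
upstream myapp_active {
    server 127.0.0.1:8081;      # app_blue: currently live
    # server 127.0.0.1:8082;   # app_green: standby
}

server {
    listen 443 ssl http2;
    server_name example.com;

    location / {
        proxy_pass http://myapp_active;
    }
}
```

The deploy flow then becomes: update the standby container, hit its health endpoint directly on its port, flip the upstream, reload Nginx, and keep the old container around as the instant rollback target.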
It is more complex than a simple compose up -d, certainly, but much less complex than a Kubernetes cluster. Once again, context matters: if the business does not require strict zero-downtime, it is often not worth complicating the flow too much.
Configuring Nginx
The reverse proxy should not stop at simply forwarding traffic. In production, it makes sense to use Nginx for TLS, correct forwarding headers, security headers, basic rate limiting, compression, and sensible request size limits.
```nginx
server {
    listen 443 ssl http2;
    server_name example.com;

    # ssl_certificate / ssl_certificate_key directives omitted for brevity

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Frame-Options "DENY" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    client_max_body_size 50m;

    gzip on;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
client_max_body_size should be motivated by the application: if the system accepts uploads, it should be set deliberately rather than left implicit. The same goes for rate limiting and compression: useful measures, but ones that should be tuned to real traffic rather than copied blindly.
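For the rate limiting mentioned above, a minimal sketch might look like this; the zone size, rate, and burst values are assumptions to tune against real traffic:

```nginx
# In the http block: one shared zone keyed on the client address
limit_req_zone $binary_remote_addr zone=app_limit:10m rate=10r/s;

# In the server block: apply the limit with a small burst allowance
server {
    location / {
        limit_req zone=app_limit burst=20 nodelay;
        proxy_pass http://127.0.0.1:8080;
    }
}
```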
Deployment observability
A deployment that leaves no traces is a deployment that is hard to diagnose. Even in small environments, it is worth planning for at least three things: log collection, failure notification, and post-deployment validation.
For logs, a very reasonable baseline is to use journald or Docker logs, then forward them, when needed, to Loki or another central system. There is no need to start immediately with a massive platform, but you do need to know where to look when something stops working.
In the GitHub Actions workflow, a Slack or Teams notification in case of failure is extremely helpful. It does not need to be sophisticated: it is enough to know that the deployment failed and which commit was involved.
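A minimal failure step could be appended to the deploy job; this sketch assumes a hypothetical SLACK_WEBHOOK_URL repository secret pointing at an incoming webhook:

```yaml
# Runs only if a previous step in the job failed
- name: Notify on failure
  if: failure()
  run: |
    curl -fsS -X POST -H 'Content-Type: application/json' \
      --data "{\"text\": \"Deploy of ${GITHUB_REPOSITORY}@${GITHUB_SHA} failed\"}" \
      "${{ secrets.SLACK_WEBHOOK_URL }}"
```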
The post-deployment validation should be explicit, for example a curl with retries against a /health endpoint. It is a simple step and much more useful than many “deployment succeeded” messages declared only because the SSH command ended without errors.
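The retry logic itself is tiny. A sketch with a generic probe command standing in for the real curl call:

```shell
# Retry a probe command a fixed number of times with a pause between attempts
probe_with_retries() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    sleep 1
    i=$((i + 1))
  done
  return 1
}

# In production: probe_with_retries 5 curl -fsS https://example.com/health
probe_with_retries 3 true && echo "healthy"
```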
Rollback and image management
Rollback is one of those topics everybody assumes they have covered until the day they actually need it. If the deployment uses immutable tags, rolling back is trivial: just point the service back to a previous tag and bring it up again.
```shell
export IMAGE_TAG=sha-abc1234
docker compose pull app
docker compose up -d --no-deps app
```
If everything revolves around latest, on the other hand, rollback becomes uncertain: you need to know which image the previous “latest” actually referred to, and that information is often far less obvious than it should be. That is why an immutable tag is not an aesthetic detail, but a very concrete operational safeguard.
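One cheap habit that keeps rollback targets obvious is appending every deployed tag to a small log on the server. A sketch; the log file name and location are assumptions:

```shell
# Append the tag being deployed, with a UTC timestamp, to a local history file
record_deploy() {
  printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" >> deploy-history.log
}

record_deploy "sha-abc1234"
record_deploy "sha-def5678"
tail -n 1 deploy-history.log   # the most recent deployment
```

When something breaks, `tail deploy-history.log` answers "what was running before?" without archaeology through registry timestamps.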
Backup, volumes, and persistence
Images can always be rebuilt. Data cannot. For that reason, when discussing deployments, you always need to clarify where persistent data lives and how it is protected. Docker volumes are often a clean choice for application data and containerized databases; bind mounts can be fine when you need explicit control over the host path, but they require more discipline.
If the deployment includes database migrations, it is wise to create a snapshot or backup before rollout, especially when the schema changes in non-trivial ways. It is a precaution that costs very little compared to what a deployment that only “half succeeds” can cost.
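For volume-backed data, the pre-rollout snapshot can be as simple as a timestamped archive. A sketch using a throwaway directory; the paths are assumptions, and for databases you should prefer the engine's own dump tool (pg_dump, mysqldump, and so on):

```shell
# Archive a data directory before a migration touches it
DATA_DIR="./demo-app-data"
mkdir -p "$DATA_DIR"
echo "important state" > "$DATA_DIR/state.txt"

STAMP=$(date -u +%Y%m%d-%H%M%S)
BACKUP="backup-${STAMP}.tar.gz"
tar -czf "$BACKUP" -C "$DATA_DIR" .

# Verify the archive actually contains what we expect before proceeding
tar -tzf "$BACKUP" | grep -q 'state.txt' && echo "backup verified"
```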
Example deployment on the server
Once everything else is in place, the actual procedure on the server remains surprisingly simple:
```shell
cd /var/www/myapp
export IMAGE_TAG=sha-abc1234
docker compose pull app
docker compose up -d --no-deps app
docker image prune -f
```
For an initial bootstrap or a full stack refresh, you can use a general docker compose pull followed by up -d without --no-deps. For regular deployments, though, it is worth touching only what actually needs to be updated.
Security and real-world production
Pipeline security
A CI/CD pipeline is not just automation: it is also an attack surface. That is why it makes sense to start with three sober but important rules: minimum permissions in the workflow, no long-lived credentials when better alternatives exist, and traceability of the produced artifact.
Whenever possible, OIDC is preferable to static keys or long-lived secrets. If the workflow must talk to compatible cloud services, a temporary federated token model is cleaner and safer. In GHCR, image pushes can often work perfectly well with GITHUB_TOKEN alone, avoiding additional PATs.
To raise the bar further, it is worth considering image signing with cosign and the attestations generated by the build. The SBOM produced by docker/build-push-action does not solve everything, but it greatly improves visibility and auditability of the distributed artifact.
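As a sketch of what signing could look like in the workflow above, assuming the build-push step is given an id (here push) so that its digest output can be referenced:

```yaml
- name: Install cosign
  uses: sigstore/cosign-installer@v3

# Keyless signing via the workflow's OIDC identity (requires id-token: write)
- name: Sign image
  run: |
    cosign sign --yes \
      "${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}@${{ steps.push.outputs.digest }}"
```

Signing the digest rather than a tag is deliberate: the digest is the one identifier that cannot be repointed after the fact.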
SSH access security for deployments
If you choose SSH-based deployment, do not treat that key as a harmless shortcut. The remote user should be dedicated, without sudo, or with very limited privileges. Better still, the key should be constrained to a specific command through command= in authorized_keys, or managed through ForceCommand or a wrapper script that only accepts the expected deployment operations.
This is one of the differences between a setup that merely “works” and one that is genuinely designed for production. An SSH key that opens a full shell on a production server is convenient, but it is not the best choice.
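A sketch of such a wrapper, intended to be referenced via command= in authorized_keys or via ForceCommand; the accepted command shape is an assumption matching the deploy flow in this guide:

```shell
# Only allow "deploy sha-<hex>" through this key; everything else is rejected.
# sshd places the client's requested command in SSH_ORIGINAL_COMMAND.
allow_deploy_command() {
  case "$1" in
    "deploy sha-"[0-9a-f]*) return 0 ;;
    *) return 1 ;;
  esac
}

# Simulated invocation for illustration:
SSH_ORIGINAL_COMMAND="deploy sha-abc1234"
if allow_deploy_command "$SSH_ORIGINAL_COMMAND"; then
  echo "allowed: $SSH_ORIGINAL_COMMAND"
else
  echo "rejected" >&2
  exit 1
fi
```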
If GitHub-hosted runners do not fit well with your network policies, it may make sense to evaluate self-hosted runners or tighter IP allowlists. Once again, the right choice depends on context: it is not always worth complicating the infrastructure, but it is important to understand the trade-offs.
Secret management beyond the .env file
The .env file is perfectly fine as a baseline, but it is not an excuse for sloppy secret handling. Beyond strict permissions and dedicated ownership, it is worth making sure it never ends up in backups readable by third parties, repositories, pipeline artifacts, or shared home directories.
As requirements grow, Docker secrets, SOPS, or Vault allow you to raise the level. There is no need to adopt all of them immediately, but it is worth knowing when the transition starts making sense: more environments, more operators, more compliance, and more critical credentials.
Hardening the reverse proxy and the container host
Nginx and the host server deserve a minimum amount of hardening. Beyond the headers already mentioned, it is worth taking care of system updates, limiting exposed ports, using explicit firewalls, monitoring disk usage, keeping Docker and Compose updated, and minimizing the number of containers that run with unnecessary privileges. Even simply running the application as a non-root user inside the container is a small but concrete security measure.
Notifications, operational audit, and final checks
In production, it is not enough for a deployment to “start”. It also needs to be visible. A Slack or Teams notification on success and failure, a final health check, a readable log trail, and the ability to know which tag is currently running are the minimum baseline if you want to operate without flying blind.
When to move to Kubernetes
It is only fair to close with an honest point. GitHub Actions + GHCR + Compose is an excellent pattern for many real-world cases, but it is not universal. It starts to show its limits when the number of hosts grows, when autoscaling becomes necessary, when rolling updates need to be truly transparent, when you need advanced scheduling strategies, distributed self-healing, richer service discovery, or native secret handling for highly dynamic workloads.
A typical threshold is this: as long as you have one or a few hosts, small teams, a relatively linear architecture, and availability requirements compatible with controlled restarts or minimal blue/green patterns, this approach is often the best choice. When, on the other hand, you find yourself designing increasingly sophisticated workarounds to get what Kubernetes offers natively, it is probably time to change platform.
The value of this solution, however, remains intact: it allows you to go very far with tools that are understandable, affordable, and easy to maintain. And in a surprisingly large number of projects, that is exactly what is needed.
Conclusions
Deploying .NET applications, Node.js services, or any containerized workload with GitHub Actions and GHCR is much less complicated than people often assume, provided it is not reduced to a couple of commands copied from a tutorial. The core of the system is simple: multi-stage builds, pushes to a registry aligned with your workflow, deployments with Compose, a reverse proxy in front, and immutable tags so you can roll back without improvising.
What separates a how-to from a setup that is actually usable in production lies in the details: minimum permissions in the pipeline, realistic secret handling, secure registry login, health checks, failure notifications, reverse proxy hardening, a clear rollback path, and an honest evaluation of Compose’s limits compared to a real orchestrator. When those elements are handled well, the result is a very solid release process, without introducing premature complexity.
References
- GitHub Actions documentation - Official documentation for workflows, jobs, permissions, and CI/CD automation on GitHub.
- GitHub Container Registry - Official guide to GHCR, authentication, package visibility, and image management.
- Docker GitHub Actions - Official documentation for Docker actions covering login, metadata, build, and image push.
- Docker Build cache backend gha - Explanation of the type=gha cache backend for speeding up builds on GitHub Actions.
- Docker Compose documentation - Official reference for defining and updating containerized stacks with Compose.
- Multi-stage builds - Official best practices for creating smaller, cleaner, production-ready images.
- Cosign - Introduction to container image signing and software supply chain security.
- NGINX Reverse Proxy - Guide to configuring Nginx as a reverse proxy for HTTP containerized applications.
- Containerize a .NET app - Microsoft guide to containerizing .NET applications.
