
self-hosted

5 posts with the tag “self-hosted”

BuildJet Is Shutting Down. What Should GitHub Actions Teams Do Next?

BuildJet just announced it is shutting down in a post titled “We are shutting down”.

Their core reason is straightforward:

“The gap we set out to fill has largely closed, and we’ve decided to focus our efforts elsewhere.”

First, credit where it is due: BuildJet helped validate and grow the market for third-party GitHub Actions runner services.

If you are affected by the shutdown, this post covers:

  • what this likely says about the market,
  • how to evaluate your replacement options,
  • why RunsOn is a strong BuildJet alternative.

Is the GitHub Actions ecosystem consolidating?


Short answer: probably yes.

GitHub has shipped steady improvements in hosted runners, larger runner options, and overall platform capabilities. That naturally compresses the space for thin wrappers around default runner infrastructure.

What remains valuable now is not just “hosted runners, but cheaper.” Teams increasingly need:

  • stronger workload-specific performance,
  • deeper control over hardware profiles,
  • predictable costs at scale,
  • better caching and Docker build performance,
  • security and isolation boundaries that match enterprise requirements.

This is exactly where infrastructure-native solutions can still create meaningful value.

What to look for in a BuildJet alternative


If your team is migrating, avoid doing a like-for-like swap only on price. Check these five criteria:

  1. Runner flexibility: Can you choose exact CPU, RAM, architecture, disk, and GPU for each job?
  2. Real cache performance: Not just cache support, but measurable restore/save throughput.
  3. Docker build speed: Native Docker layer caching support for container-heavy pipelines.
  4. Architecture performance: Fast x64 and arm64 options with published benchmark data.
  5. Unit economics at scale: Meaningful savings at your real monthly minute volume, not just toy examples.

Why RunsOn is a strong alternative to BuildJet


For teams that want high performance without giving up control, RunsOn is built for this exact use case.

1. Flexible, per-job runner sizing

You can choose runner shapes per job with granular control over CPU, RAM, architecture, disk, and GPU availability. This avoids overpaying for one-size-fits-all runners and lets each workflow use the right hardware profile.
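As a sketch of what per-job sizing can look like in practice (the runner label values below are illustrative, following the label style used elsewhere in this post; consult the RunsOn documentation for the exact options):

```yaml
jobs:
  unit-tests:
    # Small, cheap runner for quick feedback (label value is illustrative).
    runs-on: runs-on=${{ github.run_id }},runner=2cpu-linux-x64
    steps:
      - uses: actions/checkout@v4
      - run: make test
  release-build:
    # Bigger arm64 runner for the heavy compile step (label value is illustrative).
    runs-on: runs-on=${{ github.run_id }},runner=16cpu-linux-arm64
    steps:
      - uses: actions/checkout@v4
      - run: make release
```

Each job gets only the hardware it needs, so a quick lint job no longer pays for the CPU profile your slowest build requires.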

2. Faster caching (including large caches)


RunsOn is built to accelerate CI workloads that depend heavily on dependency and build caches. In cache-heavy pipelines, this often becomes one of the biggest contributors to end-to-end runtime.

If your pipeline builds Docker images frequently, Docker layer caching support is essential. RunsOn provides dedicated options for this, reducing repeated image build time and registry churn.
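As a generic illustration of why layer caching matters, here is a minimal workflow using BuildKit’s `type=gha` cache backend (RunsOn’s own registry-based caching is configured differently; this sketch only shows the caching pattern itself):

```yaml
jobs:
  docker-build:
    # Runner label is illustrative.
    runs-on: runs-on=${{ github.run_id }},runner=4cpu-linux-x64
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: false
          tags: my-app:ci
          # Reuse unchanged layers across runs instead of rebuilding from scratch.
          cache-from: type=gha
          cache-to: type=gha,mode=max
```

With `mode=max`, intermediate layers are cached too, which is where most of the time savings come from in multi-stage builds.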

RunsOn is optimized around machine families and configurations that improve real build performance for common CI workloads on both x64 and arm64.

For many workloads, especially when using EC2 spot instances correctly, RunsOn can reduce compute costs significantly versus GitHub-hosted runner pricing tiers.

Most teams can migrate incrementally without redesigning their entire CI system.

  1. Install RunsOn in your AWS account.
  2. Configure the GitHub App for target repositories.
  3. Replace BuildJet runner labels with RunsOn runner labels.
  4. Start with one busy workflow and compare queue time, runtime, and cost.
  5. Roll out by workload type (build, test, release) and tune runner sizing.

Example migration pattern:

# Example only: replace your BuildJet label with a RunsOn label.
runs-on: runs-on=${{ github.run_id }},runner=4cpu-linux-x64

Why is BuildJet shutting down?

BuildJet announced its shutdown in its post “We are shutting down”, citing improvements in GitHub Actions that reduced the gap it originally targeted.

What is a good BuildJet alternative for GitHub Actions?


If you need stronger performance, flexible hardware configuration, and lower CI cost at scale, RunsOn is a strong option. It supports dynamic runner sizing, fast caching, Docker layer caching, arm64/x64 performance tuning, and can be significantly cheaper depending on workload.

Do I need to rewrite all workflows to migrate from BuildJet?


Usually no. Most teams can migrate incrementally by replacing runner labels and validating each pipeline class (build/test/release) one by one.
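As a sketch, the swap is usually a single line per job (the BuildJet label below is a typical example of their naming scheme; the RunsOn label follows the style used elsewhere in this post):

```yaml
jobs:
  build:
    # Before (BuildJet):
    # runs-on: buildjet-4vcpu-ubuntu-2204
    # After (RunsOn, illustrative label value):
    runs-on: runs-on=${{ github.run_id }},runner=4cpu-linux-x64
    steps:
      - uses: actions/checkout@v4
      - run: make build
```

Running the old and new labels side by side on one busy workflow is a low-risk way to compare queue time and runtime before a full cutover.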

BuildJet shutting down does look like a sign of market consolidation around GitHub Actions runner infrastructure.

But consolidation does not mean “only GitHub-hosted runners remain.” It means the bar is higher:

  • better performance per dollar,
  • better workload fit,
  • better operational control.

If your team needs those three outcomes, RunsOn is a practical next step.

Start here:

GitHub to charge $0.002/min for self-hosted runners starting March 2026

Update Dec 17th 2025: GitHub has suspended the fee for now.

GitHub recently announced that starting March 2026, it will begin charging $0.002 per minute for jobs running on self-hosted runners, including those managed by tools such as Actions Runner Controller (ARC) or RunsOn.

Until now, self-hosted runners have been free to use on GitHub Actions - you only paid for your own infrastructure costs. Starting March 2026, GitHub will add a per-minute fee on top of your infrastructure costs for any job running on a self-hosted runner.

For context:

  • $0.002/min = $0.12/hour = $2.88/day for a runner running 24 hours
  • For 40,000 minutes/month: additional $80/month in GitHub fees
  • For 100,000 minutes/month: additional $200/month in GitHub fees

RunsOn will continue to provide significant cost savings compared to GitHub-hosted runners, even with this additional fee. However, the savings margin will be reduced for some runner configurations.

To help you understand the impact, we’ve updated our pricing tools:

Our pricing page now includes a toggle to show prices with or without the GitHub self-hosted runner fee. This lets you compare:

  • Current pricing (without the fee)
  • Post-March 2026 pricing (with the $0.002/min fee included)

The pricing calculator has also been updated with the same toggle. You can now see exactly how much you’ll save with RunsOn both before and after the fee takes effect.

Even with the additional GitHub fee, RunsOn remains significantly cheaper than GitHub-hosted runners for most configurations:

  • Spot instances: Still deliver 60-90% savings depending on runner size
  • On-demand instances: Still deliver 30-60% savings for most configurations
  • Larger runners: The bigger the runner, the more you save (GitHub’s hosted runner pricing scales up faster than AWS EC2 pricing)

The fee has a larger impact on smaller runner sizes where the base cost is lower. For 2-CPU runners, the $0.002/min fee represents a larger percentage of the total cost.

  1. Check your current usage: Review your GitHub Actions minutes to understand your monthly consumption
  2. Use our calculator: Try the updated calculator with the fee toggle enabled to see your projected costs
  3. Consider runner sizes: Larger runners provide better value as the fee is fixed per minute regardless of runner size
  4. Use spot instances: AWS spot instances remain the most cost-effective option
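As an illustrative sketch, assuming a `spot` label option is available (verify against the current RunsOn label reference before relying on it), teams often keep spot as the default and opt out only for interruption-sensitive jobs:

```yaml
jobs:
  tests:
    # Spot pricing for routine jobs (label values are illustrative assumptions).
    runs-on: runs-on=${{ github.run_id }},runner=4cpu-linux-x64,spot=true
    steps:
      - run: make test
  release:
    # Assumed option: disable spot so releases aren't interrupted mid-run.
    runs-on: runs-on=${{ github.run_id }},runner=4cpu-linux-x64,spot=false
    steps:
      - run: make release
```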

The true cost of self-hosted GitHub Actions - Separating fact from fiction

In recent discussions about GitHub Actions runners, there’s been some debate around the true cost and complexity of self-hosted solutions. With blog posts like “Self-hosted GitHub Actions runners aren’t free” and various companies raising millions to build high-performance CI clouds, it’s important to separate fact from fiction.

It’s true that traditional self-hosted GitHub Actions runner approaches come with challenges:

  • Operational overhead: Maintaining AMIs, monitoring infrastructure, and debugging API issues
  • Hidden costs: Infrastructure expenses, egress charges, and wasted capacity
  • Human costs: Engineering time spent on maintenance rather than product development

However, these challenges aren’t inherent to self-hosted runners themselves. They’re symptoms of inadequate tooling for deploying and managing them.

At RunsOn, we’ve specifically designed our solution to deliver the benefits of self-hosted GitHub Actions runners without the traditional downsides:

While some providers claim to eliminate maintenance, they’re actually just moving your workloads to their infrastructure—creating new dependencies and security concerns. RunsOn takes a fundamentally different approach:

  • Battle-tested CloudFormation stack: Deploy in 10 minutes with a simple template URL.
  • Zero Kubernetes complexity: Unlike Actions Runner Controller (ARC), no complex cluster management.
  • Scales to zero: No jobs in queue? No cost. When a job comes up, RunsOn spins up a new runner and starts the job in less than 30s.
  • Automatic updates: Easy, non-disruptive upgrade process.
  • No manual AMI maintenance: Regularly updated runner images.

When third-party providers advertise “2x cheaper” services, they’re comparing themselves to GitHub-hosted runners—not to true self-hosted solutions. With RunsOn:

  • Up to 90% cost reduction compared to GitHub-hosted runners.
  • AWS Spot instances provide maximum savings (up to 75% cheaper than on-demand).
  • Use your existing AWS credits and committed spend.
  • No middleman markup on compute resources.
  • Transparent licensing model, with a low fee irrespective of the number of runners or job minutes you use.

Many third-party solutions gloss over a critical fact: your code and secrets are processing on their infrastructure. RunsOn:

  • 100% self-hosted in your AWS account—no code or secrets leave your infrastructure.
  • Ephemeral VM isolation with one clean runner per job.
  • Full audit capabilities through your AWS account.
  • No attack vectors from persistent runners.

High-performance CI doesn’t require VC-funded cloud platforms:

  • 30% faster builds than GitHub-hosted runners.
  • Flexible instance selection with x64, ARM64, GPUs, and Windows support.
  • Unlimited concurrency (only limited by your AWS quotas).
  • Supercharged caching with VPC-local S3 cache backend (5x faster transfers).
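As a sketch of how that caching is consumed, assuming the `extras=s3-cache` label used elsewhere on this page, existing `actions/cache` steps are expected to work unchanged while the transfers stay inside your VPC:

```yaml
jobs:
  cached-build:
    # extras=s3-cache routes cache traffic to the VPC-local S3 backend (illustrative).
    runs-on: runs-on=${{ github.run_id }},runner=4cpu-linux-x64,extras=s3-cache
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: ~/.npm
          key: npm-${{ hashFiles('package-lock.json') }}
      - run: npm ci
```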

The often-cited “human cost” of self-hosted runners assumes significant ongoing maintenance. With RunsOn:

  • 10-minute setup with close to zero AWS knowledge required.
  • No ongoing maintenance burden for your DevOps team. Upgrades are one click away, and can be performed at your own pace.
  • No infrastructure to babysit or weekend emergency calls.
  • No complex debugging of runner API issues.

Let’s address some specific claims from recent competitor blog posts:

Claim: “Maintaining AMIs is time-consuming and error-prone”


Reality: RunsOn handles all AMI maintenance for you, with regularly updated images that are 100% compatible with GitHub’s official runners. If you want full control, we also provide templates for building custom images.

Claim: “Self-hosting means babysitting infrastructure”


Reality: RunsOn uses fully managed AWS services and ephemeral runners that are automatically recycled after each job. There’s no infrastructure to babysit.

Claim: “You’ll need to become an expert in GitHub Actions”


Reality: With RunsOn, you only need to change one line in your workflow files—replacing runs-on: ubuntu-latest with your custom labels. No GitHub Actions expertise required.
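A sketch of that one-line change (the RunsOn label value is illustrative):

```yaml
jobs:
  build:
    # Before: runs-on: ubuntu-latest
    runs-on: runs-on=${{ github.run_id }},runner=2cpu-linux-x64
    steps:
      - uses: actions/checkout@v4
      - run: make build
```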

Claim: “High-performance CI requires third-party infrastructure”


Reality: RunsOn provides high-performance CI within your own AWS account, with benchmarks showing 30% faster builds for x64 workloads than GitHub-hosted runners and full compatibility with the latest instance types and architectures.

For arm64 workloads, AWS is currently the leader in CPU performance.

Self-hosted GitHub Actions runners can be complex and costly if you’re using the wrong approach. But with RunsOn, you get all the benefits of self-hosting (cost savings, performance, security) without the traditional drawbacks.

Before making assumptions about the “true cost” of self-hosted runners, evaluate solutions like RunsOn that have specifically solved these challenges. Your developers, security team, and finance department will all thank you.

Get started with RunsOn today!

🚀 v2.8.2 is out, with EFS, Ephemeral Registry support, and YOLO mode (tmpfs)!

Check out the new documentation pages for:

Now for the full release notes:


Summary

Support for EFS, tmpfs, and an ephemeral ECR registry for fast Docker builds, plus some bug fixes.

What's changed

EFS
  • Embedded networking stack can now create an Elastic File System (EFS), and runners will auto-mount it at /mnt/efs if the extras label includes efs. Useful for sharing artifacts across job runs with classic filesystem primitives.
jobs:
  with-efs:
    runs-on: runs-on=${{ github.run_id }},runner=2cpu-linux-x64,extras=efs
    steps:
      - run: df -ah /mnt/efs
      # 127.0.0.1:/      8.0E   35G  8.0E   1% /mnt/efs
📝 Example use case: maintaining mirrors. For instance, this can be used to maintain local mirrors of very large GitHub repositories and avoid long checkout times on every job:
env:
  MIRRORS: "https://github.com/PostHog/posthog.git"
  # can be ${{ github.ref }} if same repo as the workflow
  REF: main

jobs:
  with-efs:
    runs-on: runs-on=${{ github.run_id }},runner=2cpu-linux-x64,extras=efs
    steps:
      - name: Setup / Refresh mirrors
        run: |
          for MIRROR in ${{ env.MIRRORS }}; do
            full_repo_name=$(echo $MIRROR | cut -d/ -f4-)
            MIRROR_DIR=/mnt/efs/mirrors/$full_repo_name
            mkdir -p "$(dirname $MIRROR_DIR)"
            test -d "${MIRROR_DIR}" || git clone --mirror ${MIRROR/https:\/\//https:\/\/x-access-token:${{ secrets.GITHUB_TOKEN }}@} "${MIRROR_DIR}"
            ( cd "$MIRROR_DIR" && \
              git remote set-url origin ${MIRROR/https:\/\//https:\/\/x-access-token:${{ secrets.GITHUB_TOKEN }}@} && \
              git fetch origin ${{ env.REF }} )
          done
      - name: Checkout from mirror
        run: |
          git clone file:///mnt/efs/mirrors/PostHog/posthog.git --branch ${{ env.REF }} --single-branch --depth 1 upstream
Ephemeral registry
  • Support for an ephemeral ECR registry: RunsOn can now automatically create an ECR repository that acts as an ephemeral registry for pulling/pushing images and cache layers from your runners. Especially useful with the type=registry BuildKit cache instruction. If the extras label includes ecr-cache, the runners will automatically set up Docker credentials for that registry at the start of the job.
jobs:
  ecr-cache:
    runs-on: runs-on=${{ github.run_id }},runner=2cpu-linux-x64,extras=ecr-cache
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v4
        env:
          TAG: ${{ env.RUNS_ON_ECR_CACHE }}:my-app-latest
        with:
          context: .
          push: true
          tags: ${{ env.TAG }}
          cache-from: type=registry,ref=${{ env.TAG }}
          cache-to: type=registry,ref=${{ env.TAG }},mode=max,compression=zstd,compression-level=22
Tmpfs

Support for setting up a tmpfs volume (sized at 100% of available RAM, so only to be used on high-memory instances), and binding the /tmp, /home/runner, and /var/lib/docker folders to it. /tmp and /home/runner are mounted as overlays, preserving their existing content.

Can speed up some IO-intensive workflows. Note that when tmpfs is active, instances with ephemeral disks won’t have those mounted, since they would conflict with the tmpfs volume.

jobs:
  with-tmpfs:
    runs-on: runs-on=${{ github.run_id }},family=r7,ram=16,extras=tmpfs
    steps:
      - run: df -ah /mnt/tmpfs
      # tmpfs            16G  724K   16G   1% /mnt/tmpfs
      - run: df -ah /home/runner
      # overlay          16G  724K   16G   1% /home/runner
      - run: df -ah /tmp
      # overlay          16G  724K   16G   1% /tmp
      - run: df -ah /var/lib/docker
      # tmpfs            16G  724K   16G   1% /var/lib/docker

You can of course combine options, e.g. extras=efs+tmpfs+ecr-cache+s3-cache is a valid label 😄

Instance-storage mounting changes

Until now, when an instance had locally attached NVMe SSDs available, they would be automatically formatted and mounted so that the /var/lib/docker and /home/runner/_work directories ended up on the local disks. Since a lot of content (caches, etc.) ends up within the /home/runner folder itself, the agent now uses the same strategy as for the new tmpfs mounts above: the whole /home/runner folder is mounted as an overlay on the local disk volume, as is the /tmp folder, while /var/lib/docker remains mounted as a normal filesystem on the local disk volume. Fixes #284.

Misc
  • Move all RunsOn-specific config files into the /runs-on folder on Linux. More consistent with Windows (C:\runs-on), and avoids polluting the /opt folder.
  • Fix app_version in logs (was previously empty string due to incorrect env variable being used in v2.8.1).
  • Fix "Require any Amazon EC2 launch template not to auto-assign public IP addresses to network interfaces" from AWS Control Tower. When the Private mode is set to only, no longer enable public ip auto-assignment in the launch templates. Thanks @temap!

v2.6.5 - Optimized GPU images, VpcEndpoint stack parameter, tags for custom runners

👋 v2.6.4 and v2.6.5 have been released over the last few weeks, with the following changes.

Note: v2.6.6 has been released to fix an issue with the VpcEndpoints stack parameter.


Summary

Optimized GPU images, new VpcEndpoints stack parameter, ability to specify custom instance tags for custom runners.

Note: there appear to be some issues with the new VPC endpoints. I’m on it! If you need that feature, please hold on to your current version of RunsOn.

What's Changed

  • New GPU images ubuntu22-gpu-x64 and ubuntu24-gpu-x64: 1:1 compatibility with GitHub base images, plus NVIDIA GPU drivers, CUDA toolkit, and container toolkit.
  • Add new VpcEndpoints stack parameter (fixes #213), and reorganize template params. Note that the EC2 VPC endpoint was previously automatically created when Private mode was enabled. This is no longer the case, so make sure you select the VPC endpoints that you need when you update your CloudFormation stack.
  • Suspend versioning for cache bucket (fixes #191).
  • Allow specifying instance tags for runners (fixes #205). Tag keys can’t start with the runs-on- prefix, and keys and values will be sanitized according to AWS rules.


Summary

CLI 0.0.1 released, fix for Magic Cache, fleet objects deletion.

What's changed

  • CLI released: https://github.com/runs-on/cli. Lets you easily view logs (both server logs and cloud-init logs) for a workflow job by pasting its GitHub URL or ID. Also allows easy connection to a runner through SSM.
  • Fix race-condition in Magic Cache (fixes #209).
  • Delete the fleet instead of just the instance (fixes #217).