
1. Introduction

Continuous Integration workflows often require external tools that aren’t available by default in the execution environment. GitHub Actions is no exception: its hosted runners ship with a broad but finite toolset for building, testing, and deploying code. To install anything beyond that on Ubuntu runners, we typically turn to the apt-get package manager.

In this tutorial, we’ll discuss using a package manager in GitHub Actions. First, we’ll start by explaining its importance in workflows, and then examine practical usage within GitHub Actions. Lastly, we’ll discuss common errors that can occur when using apt-get.

2. Package Installation in Workflows

When we run jobs on GitHub-hosted Ubuntu runners, each job begins in a clean virtual environment. To elaborate, the packages required for the task aren’t installed and need to be set up during the workflow. Installing packages ensures that the build environment meets the project requirements.

For example, a Python project may require graphviz to render documentation, a C++ project may need lcov for coverage analysis, and a Java project might require system-level libraries to compile native extensions. Without these, jobs would fail. Therefore, package installation becomes a critical step in CI.

While scripting, the question arises whether to use apt or apt-get. apt-get is generally preferred for scripting. apt is intended as a user-facing command with nicer output, while apt-get provides more stable, script-friendly behavior. For workflows where reproducibility matters, apt-get is the safer choice.
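We can see the difference directly: when its output is piped or redirected, apt itself warns against scripted use, while apt-get stays silent:

```shell
$ apt list --installed 2>&1 >/dev/null
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
```

Here, redirecting stdout to /dev/null leaves only the stderr warning visible, which is exactly the kind of unstable behavior we want to avoid in automated workflows.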

3. Basic Example Setup

To begin with, let’s consider a simple GitHub Actions workflow that installs the lcov package, a tool for generating code coverage reports.

Let’s look at the install.yaml:

$ cat install.yaml
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install lcov
        run: |
          sudo apt-get update
          sudo apt-get install -y lcov
      - name: Run tests with coverage
        run: |
          ./scripts/run-tests.sh
          lcov --capture --directory . --output-file coverage.info

The above workflow checks out the repository, so the runner has the project code. After that, it installs lcov using sudo apt-get update followed by sudo apt-get install -y lcov. The sudo command is important because apt-get requires administrative privileges, while the -y flag ensures the installation proceeds without waiting for confirmation. Lastly, install.yaml runs the tests and collects coverage data with the newly installed lcov.

In this basic example, the workflow runs smoothly. The tool is installed and is available immediately. In addition, later steps can use the installed package.

4. Permission Error in GitHub Actions

At first glance, installing packages in GitHub Actions seems straightforward. However, the first attempt to run apt-get install often fails, and leads to a specific error:

E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?

The issue arises because GitHub Actions jobs run as a non-root user by default. System package management with apt-get requires elevated privileges to modify /var/lib/dpkg. Since we aren’t root, the process cannot obtain the necessary lock.

To fix the above error, we can use the sudo command. Fortunately, GitHub’s Ubuntu runners enable passwordless sudo. To elaborate, we can run administrative commands without being prompted for a password. Therefore, the fix is often as simple as adding sudo to the install step.
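With that, the failing install step from earlier works as expected once both commands are prefixed with sudo:

```yaml
      - name: Install lcov
        run: |
          # Passwordless sudo is enabled on GitHub-hosted runners,
          # so no password prompt interrupts the workflow
          sudo apt-get update
          sudo apt-get install -y lcov
```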

5. Performance Considerations

While the above method works, it can slow down workflows. Each run installs packages from scratch, which adds overhead, especially if the list of dependencies is long. To improve performance, we can apply several strategies discussed in this section.

5.1. Grouping Packages

It’s faster to install multiple packages in one command rather than invoking apt-get multiple times.

For example, we can install lcov, graphviz, and doxygen together:

$ cat install.yaml
name: CI
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install packages
        run: |
          sudo apt-get update
          sudo apt-get install -y lcov graphviz doxygen
      - name: Run tests with coverage
        run: |
          ./scripts/run-tests.sh
          lcov --capture --directory . --output-file coverage.info

Thus, we avoid repeating dependency resolution and reduce overhead.

5.2. Caching Packages

Another option is to use caching actions. The cache-apt-pkgs-action makes package installation faster by caching downloaded .deb files between runs.

Thus, we can configure the caching action in the .yaml file:

$ cat install.yaml
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: awalsh128/cache-apt-pkgs-action@latest
        with:
          packages: dia doxygen doxygen-doc doxygen-gui doxygen-latex graphviz mscgen
          version: 1.0
      - name: Install lcov
        run: |
          sudo apt-get update
          sudo apt-get install -y lcov
...

Here, the packages field lists the desired packages, and the version field serves as a manual cache key. Incrementing the version refreshes the cache when the package list changes. This reduces build times significantly for projects with heavy dependencies.

5.3. Custom Containers

For projects requiring extensive system dependencies, prebuilt containers are more efficient. By defining a container image with all tools preinstalled, we avoid repeated installs in workflows.

The workflow then references the container image via the container key in the job definition:

$ cat install.yaml
name: CI
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    container: ghcr.io/my-org/custom-image:1.0
    steps:
      - uses: actions/checkout@v4
...

This shifts installation overhead from every run to a one-time container build, improving both speed and reproducibility. The sacrifice is that the configuration is now in two separate places, and rebuilding container images can be tedious if they change often.
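As a sketch, such an image can be produced from a short Dockerfile (the image name ghcr.io/my-org/custom-image from the workflow above and the package list here are illustrative):

```dockerfile
# Base image matching the tools our workflow expects
FROM ubuntu:22.04

# Install everything once at image build time, then clean the apt cache
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        lcov graphviz doxygen && \
    rm -rf /var/lib/apt/lists/*
```

After building and pushing this image, for example to GitHub Container Registry, workflow runs skip the apt-get steps entirely.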

6. Best Practices

Across the different approaches to install packages in GitHub Actions, there are several best practices:

  • It’s always better to install only what’s necessary, since every extra package adds to build time and increases the risk of conflicts.
  • We should keep the installations non-interactive, and that’s why flags like -y and the DEBIAN_FRONTEND=noninteractive setting are so valuable, since they prevent the workflow from hanging at unexpected prompts.
  • When stability is critical, it’s wise to pin package versions rather than relying on the latest available option, because reproducibility matters more than novelty in CI pipelines.
  • It’s important to stay mindful of the runner image updates, since ubuntu-latest shifts over time, for example, from 20.04 to 22.04, and pinning a specific version can offer more predictable results.

At the same time, we shouldn’t overlook the fact that dedicated setup actions for languages such as Python, Node, or Java (actions/setup-python, actions/setup-node, actions/setup-java) are often faster and more reliable than system-level installs.
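For instance, replacing a system-level Python install with the dedicated setup action takes a single step:

```yaml
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
```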

7. Conclusion

In this article, we covered how to use a package manager in GitHub Actions and the pitfalls involved. First, we discussed why we install packages in workflows. After that, we walked through a basic example. Lastly, we elaborated on the common errors and performance issues that arise when using apt-get in GitHub Actions and ways to fix or avoid them.