1. Introduction

Concourse is a fully open-source system for organizing and automating continuous processes. In particular, this makes it great at Continuous Integration and Continuous Deployment (CI/CD). However, Concourse has a more universal approach. In fact, one of the meanings of the word concourse is a passage connecting the main terminal of an airport to its gates. The name also complements the main implementation language: Go.

In this tutorial, we explore Concourse and briefly demonstrate its functionality in practice. First, we do a general overview of the tool. After that, we go through the installation steps for Concourse and its client utility. Next, we deploy both in a much faster manner via Docker. Finally, we perform a step-by-step demonstration of the system by creating a basic pipeline and running a job from it.

We tested the code in this tutorial on Debian 12 (Bookworm) with GNU Bash 5.2.15. It should work in most POSIX-compliant environments unless otherwise specified.

2. Concourse Overview

As a way to build pipelines between different processes, Concourse works with many object types. To standardize their handling and representation, the system uses so-called resources.

Resources are the fundamental way through which Concourse receives input and produces output. What we do with a resource depends on the current task. Tasks are atomic units of work that receive inputs and produce outputs. Notably, a task can succeed or fail.

In addition to task steps, there are also other types:

  • task runs a task
  • get fetches a resource
  • put updates a resource
  • set_pipeline configures a pipeline
  • load_var reads a value into a local variable
  • in_parallel runs steps in parallel
  • do runs steps one by one
  • across runs a step multiple times, once per combination of variable values
  • try runs a step and succeeds even if that step fails

Each step runs in its own container.

Finally, tasks are organized as steps within jobs. Jobs can be thought of as a high-level view of a Concourse process.

To summarize, Concourse builds a pipeline from one or more jobs, each comprising a number of steps (tasks) in a specific order that operate on resources as they move through the job. Importantly, pipelines are run by workers, i.e., interchangeable, dispensable nodes that run the Concourse daemon in worker mode. All worker nodes within a given cluster report to a single web-mode daemon. Further, web nodes provide a Web user interface (UI) for convenience.
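To illustrate the structure, here’s a minimal hypothetical pipeline definition in the YAML schema Concourse uses. All names here, as well as the repository URL, are placeholders of our own choosing:

```yaml
# A hypothetical pipeline: one job whose plan is a get step followed by a task step.
resources:
- name: repo                  # an input resource, e.g., a Git repository
  type: git
  source:
    uri: https://github.com/concourse/examples.git

jobs:
- name: build
  plan:
  - get: repo                 # fetch the resource
  - task: list-files          # run a task on it
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox
      inputs:
      - name: repo            # the fetched resource becomes a task input
      run:
        path: ls
        args: ["repo"]
```

Each step in the plan runs in its own container, and the job as a whole succeeds only if its steps do.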

Notably, the main way we build, configure, and control Concourse processes is Fly, the Concourse command-line client. In short, we can use Fly to log into a target web node and manage its configuration and pipelines.

Finally, a PostgreSQL node is required for storing all related data.

3. Install Concourse Manually

For simplicity, Concourse is built as a single executable binary: concourse. It’s available on multiple platforms.

3.1. Get Concourse Package

Let’s get Concourse version $VERSION via wget:

$ wget https://github.com/concourse/concourse/releases/download/v$VERSION/concourse-$VERSION-linux-amd64.tgz

Now, we just e[x]tract the downloaded G[z]ip TAR archive [f]ile:

$ tar -xzf concourse-*-linux-amd64.tgz

After that, we enter the extracted directory and check the resulting hierarchy:

$ tree concourse/
concourse/
├── bin
│   ├── bandwidth
│   ├── bridge
│   ├── concourse
│   ├── containerd
│   ├── containerd-shim
│   ├── containerd-shim-runc-v1
│   ├── containerd-shim-runc-v2
│   ├── containerd-stress
│   ├── ctr
│   ├── dhcp
│   ├── dummy
│   ├── firewall
│   ├── gdn
│   ├── host-device
│   ├── host-local
│   ├── init
│   ├── ipvlan
│   ├── loopback
│   ├── macvlan
│   ├── portmap
│   ├── ptp
│   ├── runc
│   ├── sbr
│   ├── static
│   ├── tap
│   ├── tuning
│   ├── vlan
│   └── vrf
├── fly-assets
│   ├── fly-darwin-amd64.tgz
│   ├── fly-linux-amd64.tgz
│   └── fly-windows-amd64.zip
└── resource-types
[...]
17 directories, 70 files

As we can see, the concourse/bin/ subdirectory includes the concourse binary along with many helper utilities. Next, concourse/fly-assets/ comprises the Fly packages for different platforms. Notably, the resource-types subdirectory contains the relevant resource types for different external integrations such as git and s3.

3.2. Deploy concourse and fly

At this point, we have the concourse binary.

However, as the main configuration driver, the Fly binary fly is also part of the Concourse package. So, let’s deploy it as well:

$ tar -xzf concourse/fly-assets/fly-linux-amd64.tgz
$ chmod +x fly

Here, we extract and make fly executable.

Although we can begin using both concourse and fly in place, it’s usually more convenient to deploy them in one of three ways:

  • merge the concourse/bin/ subdirectory and fly from the Concourse package with a binary path such as /usr/local/bin/
  • create symbolic links to the concourse/bin/ executables and fly within a binary path such as /usr/local/bin/
  • add the current concourse/bin/ and fly paths to the $PATH variable

In any case, we should be able to directly call both commands in the shell.
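For instance, the third option is a one-liner, assuming the package was extracted into the current directory:

```shell
# Prepend the extracted concourse/bin/ directory and the fly location
# (here, the current directory) to $PATH for this shell session:
export PATH="$PWD/concourse/bin:$PWD:$PATH"
```

To make this permanent, we can append the same line to a shell startup file such as ~/.bashrc.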

3.3. Key Generation

Concourse uses keys to communicate and sign requests. To generate these keys, we can use the generate-key subcommand of concourse:

$ concourse generate-key --type rsa --filename session_signing_key
wrote private key to ./session_signing_key
$ concourse generate-key --type ssh --filename tsa_host_key
wrote private key to ./tsa_host_key
$ concourse generate-key --type ssh --filename worker_key
wrote private key to ./worker_key

Thus, we now have the three main keys.
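One detail worth noting: the web node later expects an authorized keys file holding the public half of each worker key. With the filenames above, we can build it up front:

```shell
# Authorize our single worker by collecting its public key; additional
# workers would have their own .pub keys appended to this file as well.
cat worker_key.pub > authorized_worker_keys
```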

3.4. PostgreSQL Node

For storage, Concourse employs a PostgreSQL database:

$ apt-get install postgresql

To configure the installation for Concourse, we should create a specific PostgreSQL user and database:

$ su postgres --command "createuser -P dbaeldung"
Enter password for new role:
Enter it again:
$ su postgres --command "createdb --owner=dbaeldung concoursedb"

In this case, we employ su to run two [--command]s as the postgres user. In particular, we use createuser with the name dbaeldung for database access. After that, we createdb the concoursedb database with the respective owner.

3.5. Configuration

Since Concourse has at least one web node and at least one worker node that connects to it, we can choose whether to run both on the same system or on different ones. Still, each node type requires its own parameters, which we can set in the form of global shell variables or command-line parameters.

Although we use relative paths in our settings, it’s usually better to employ absolute ones.

Most options that we might need can also be obtained from the Docker Compose configuration file at https://concourse-ci.org/docker-compose.yml. In addition, we can check both concourse web --help and concourse worker --help for a full list.

3.6. Configure and Start web Node

Let’s start with the web node configuration:

$ export CONCOURSE_POSTGRES_HOST=127.0.0.1
$ export CONCOURSE_POSTGRES_USER=dbaeldung
$ export CONCOURSE_POSTGRES_PASSWORD=PASSWORD
$ export CONCOURSE_POSTGRES_DATABASE=concoursedb
$ export CONCOURSE_EXTERNAL_URL=http://192.168.6.66:8080
$ export CONCOURSE_ADD_LOCAL_USER=ccbaeldung:PASSWORD
$ export CONCOURSE_MAIN_TEAM_LOCAL_USER=ccbaeldung
$ export CONCOURSE_SESSION_SIGNING_KEY=session_signing_key
$ export CONCOURSE_TSA_HOST_KEY=tsa_host_key
$ export CONCOURSE_TSA_AUTHORIZED_KEYS=authorized_worker_keys

Here, we set the preconfigured PostgreSQL credentials and database name, along with the expected connection URL. In addition, we add a local administrative user ccbaeldung, and set it as the primary one for the default team main.

Finally, we set the session and TSA host keys, as well as the authorized key file. This way, the web node has a way to identify itself and set up sessions. Further, it can decide which worker should be able to connect based on the authorized keys.

So, let’s run the web node:

$ concourse web

We should see some JSON output without errors, which indicates the node is ready for workers. Naturally, the brain needs brawn.

3.7. Configure and Start worker Node

Now, we turn to the worker node configuration:

$ export CONCOURSE_TSA_HOST=192.168.6.66:2222
$ export CONCOURSE_EXTERNAL_URL=http://192.168.6.66:8080
$ export CONCOURSE_TSA_PUBLIC_KEY=tsa_host_key.pub
$ export CONCOURSE_TSA_WORKER_PRIVATE_KEY=worker_key
$ export CONCOURSE_WORK_DIR=<WORKER_PATH>

Naturally, each worker just needs to know the web node’s TSA socket (port 2222 by default) and URL, the web node’s public TSA host key, as well as its own private worker key. In addition, a worker needs a local staging path to work with. This path stores new containers for each step and must have the correct permissions.
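As a minimal sketch of preparing such a path, assuming /opt/concourse/worker as our own choice of location:

```shell
# Create the staging directory the worker uses for step containers;
# it must be writable by the user running the worker daemon.
mkdir -p /opt/concourse/worker
export CONCOURSE_WORK_DIR=/opt/concourse/worker
```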

Thus, we should be able to run the worker:

$ concourse worker

In this case, we run the web and worker nodes separately. This is especially important when the daemons are on different nodes. However, there is another option, mainly for testing.

3.8. Use quickstart

For convenience, the quickstart option of concourse runs both the web and worker nodes at the same time:

$ export CONCOURSE_POSTGRES_HOST=127.0.0.1
$ export CONCOURSE_POSTGRES_USER=dbaeldung
$ export CONCOURSE_POSTGRES_PASSWORD=PASSWORD
$ export CONCOURSE_POSTGRES_DATABASE=concoursedb
$ export CONCOURSE_TSA_HOST=192.168.6.66:2222
$ export CONCOURSE_EXTERNAL_URL=http://192.168.6.66:8080
$ export CONCOURSE_ADD_LOCAL_USER=ccbaeldung:PASSWORD
$ export CONCOURSE_MAIN_TEAM_LOCAL_USER=ccbaeldung
$ export CONCOURSE_WORKER_WORK_DIR=<WORKER_PATH>
$ concourse quickstart

Notably, we need to also set the worker path as CONCOURSE_WORKER_WORK_DIR due to the difference in parameter names for quickstart (--worker-work-dir). However, the key configuration isn’t necessary in this case due to the automatic setup.

Still, it’s discouraged to use quickstart in production environments.

Either way, we should now have an active Web interface and worker.

4. Deploy Concourse via Docker

Alternatively, we can deploy Concourse via its Docker image:

$ curl --remote-name https://concourse-ci.org/docker-compose.yml
$ docker compose up --detach

Here, we first use curl to fetch the official Docker Compose file from the Concourse website, keeping the same name locally. After that, we use the compose subcommand of docker along with the downloaded docker-compose.yml file to initialize and start the new [--detach]ed container instance.

If we want to allow access over the network and not only through localhost, we can change the CONCOURSE_EXTERNAL_URL variable in the original docker-compose.yml file to point to a configured IP address or hostname of our choice.
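For example, a quick in-place edit with sed, assuming the file still contains the default http://localhost:8080 value and using our 192.168.6.66 address:

```shell
# Replace the default localhost external URL with a network-reachable one:
sed -i 's|CONCOURSE_EXTERNAL_URL:.*|CONCOURSE_EXTERNAL_URL: http://192.168.6.66:8080|' docker-compose.yml
```

Afterwards, rerunning docker compose up --detach applies the new value.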

At this point, two containers should be running:

  • concourse/concourse at 0.0.0.0:8080->8080/tcp
  • postgres at 5432/tcp

Further, we should be able to access the new container deployment at http://<HOSTNAME_OR_IP_ADDRESS>:8080 from a Web browser. In fact, we use this access to get the relevant Fly package for our [arch]itecture and platform:

$ curl 'http://xost:8080/api/v1/cli?arch=amd64&platform=linux' --output fly

Once we have the fly executable, we can deploy it as before.

Regardless of our setup mechanics, we should now be able to also visit 192.168.6.66:8080 or xost:8080 respectively, log in as our CONCOURSE_MAIN_TEAM_LOCAL_USER, and check the status of Concourse.

5. Demonstration

To understand how Concourse can help us, let’s go through a brief example. Naturally, all steps involve either fly or the Concourse Web UI.

5.1. Login

There are several ways to log in and register a target web node via the fly client.

For example, we can issue a login subcommand to the --concourse-url with a --target name of our choice:

$ fly --target=baeltarget login --concourse-url=http://192.168.6.66:8080
logging in to team 'main'

navigate to the following URL in your browser:

  http://192.168.6.66:8080/login?fly_port=10666

or enter token manually (input hidden):
target saved

After we visit the URL and log in, we get a token that we can copy back to save the target under the given name.

Alternatively, we can supply the credentials on the command line:

$ fly --target=baeltarget login --concourse-url=http://192.168.6.66:8080 --username=ccbaeldung --password=PASSWORD
logging in to team 'main'


target saved

Other options involve sending client certificates instead.

In any case, we should have a target ready with one worker:

$ fly --target=baeltarget workers
name  containers  platform  tags  team  state    version  age
xost  0           linux     none  none  running  2.5      16m56s

Once we save the target, requests should be straightforward.

5.2. Create Pipeline

Similar to Docker Compose, pipelines in Concourse are just YAML text files with a custom schema:

$ fly --target=baeltarget set-pipeline --pipeline=baelpipe --config=baelpipe.yaml
jobs:
  job job1 has been added:
+ name: job1
+ plan:
+ - config:
+     image_resource:
+       name: ""
+       source:
+         repository: perl
+       type: registry-image
+     platform: linux
+     run:
+       args:
+       - -e
+       - printf "Perl script running...";
+       path: /usr/local/bin/perl
+   task: task1

pipeline name: baelpipe

apply configuration? [yN]: y
pipeline created!
you can view your pipeline here: http://192.168.6.66:8080/teams/main/pipelines/baelpipe

the pipeline is currently paused. to unpause, either:
  - run the unpause-pipeline command:
    fly -t baeltarget unpause-pipeline -p baelpipe
  - click play next to the pipeline in the web ui

Here, we set-pipeline a new --pipeline with the name baelpipe via its [--config]uration file baelpipe.yaml.

The pipeline itself just has one job job1 with one task task1. The latter sets up a perl Docker image and runs a specific Perl command.
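For reference, reconstructed from the set-pipeline diff above, the baelpipe.yaml file itself looks roughly like this:

```yaml
# baelpipe.yaml: one job with a single task step running a Perl one-liner.
jobs:
- name: job1
  plan:
  - task: task1
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: perl
      run:
        path: /usr/local/bin/perl
        args:
        - -e
        - printf "Perl script running...";
```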

As we can see, a new pipeline is paused by default. So, let’s unpause baelpipe:

$ fly --target=baeltarget unpause-pipeline --pipeline=baelpipe
unpause 'baelpipe'

After preparing the pipeline, we can run it.

5.3. Run Pipeline Job

To run a pipeline, we can use the fly CLI or the Web UI.

Since the UI is usually easier and both show the same output, let’s do it via the command line:

$ fly --target=baeltarget trigger-job --job baelpipe/job1 --watch
started baelpipe/job1 #1

initializing
initializing check: image
selected worker: xost
selected worker: xost
fetching perl@sha256:2fee8a8abdceb3666f59249fd10674ddeadbeefd1b70c04e519bc089d7c21447
1b13d4e1a46e [========================================] 47.3MiB/47.3MiB
1c74667957fc [========================================] 22.9MiB/22.9MiB
30d855666954 [========================================] 61.2MiB/61.2MiB
ad6669181616 [======================================] 201.3MiB/201.3MiB
0ee2b666f83c [==============================================] 136b/136b
69d0a6652e22 [========================================] 15.1MiB/15.1MiB
1d9deafd59b5 [==============================================] 132b/132b
selected worker: xost
running /usr/local/bin/perl -e printf "Perl script running...";
Perl script running...succeeded

Thus, we trigger a build. Each build is a separate run of the job.

Alternatively, we can visit http://192.168.6.66:8080/teams/main/pipelines/baelpipe/jobs/job1 in the Web UI and check the job along with its status there.

5.4. fly Management

Naturally, fly can aid with the management of many other objects:

  • targets
  • workers
  • teams
  • volumes
  • pipelines
  • jobs
  • builds
  • cache
  • containers

Due to its comprehensive documentation, the tool is fairly easy to employ despite its versatility.

6. Summary

In this article, we delved into Concourse, a pipeline management system.

In conclusion, among the many tools for continuous process automation, Concourse stands out with its fairly easy deployment options and simple yet complete feature set.
