We use docker exec to execute a command in an already running container. However, docker exec doesn’t support chained commands. So, when running multiple commands, we use alternatives such as docker exec options or a shell process.
In this tutorial, we’ll discuss how to execute multiple commands with docker exec.
We can run a shell process with docker exec and then execute our multiple commands through the shell process. Let’s illustrate this with bash -c:
$ docker exec [container_name] bash -c "[command1] && [command2] && [command3]"
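For instance, assuming a running container named my-container that has bash available (the container name and the commands are purely illustrative), we could create a directory and list it in a single call:
$ docker exec my-container bash -c "mkdir -p /tmp/reports && ls -ld /tmp/reports"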
Since we used the logical AND operator (&&), each command runs only if the preceding command succeeds. However, to execute the commands regardless of the results of the previous ones, we can use the semicolon (;) operator instead:
$ docker exec [container_name] bash -c "[command1]; [command2]; [command3]"
On the other hand, if we want the commands to run only if their preceding commands fail, we’ll use a logical OR operator (||):
$ docker exec [container_name] bash -c "[command1] || [command2] || [command3]"
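As an illustration, again assuming a container named my-container, the fallback on the right only runs because the command on the left fails:
$ docker exec my-container bash -c "ls /nonexistent-dir || echo 'first command failed, running fallback'"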
Finally, if our commands are long, we can split them into multiple lines using a backslash (\):
$ docker exec [container_name] bash -c "[command1] \
&& [command2] \
&& [command3]"
Splitting verbose commands into multiple lines improves readability: if there's an issue somewhere, it's easier to spot.
If we have a script file with multiple commands, we can pass it to a shell process created with docker exec. This method comes in handy when we have verbose commands that would otherwise clutter the terminal.
To pass a file to a shell process created with docker exec, we use input redirection (<):
$ docker exec -i [container_name] sh < [script_file]
Without the -i flag in the command above, docker exec won't attach our standard input to the shell process in the container. As a result, the redirected script never reaches the shell, and none of its commands execute.
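To make this concrete, suppose we have a short script named container_info.sh (the file name, its commands, and the container name my-container are all just for illustration):
$ cat container_info.sh
date
uname -a
df -h /
We can then run every command in the script inside the container with a single invocation:
$ docker exec -i my-container sh < container_info.sh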
Similar to the previous method, we can execute multiple commands by redirecting a heredoc input to a docker exec shell process:
$ docker exec -i [container_name] bash << EOF
[command1]
[command2]
[command3]
EOF
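For instance, assuming a running container named my-container with bash installed, we could gather some basic information in one shot (the commands are only examples):
$ docker exec -i my-container bash << EOF
date
df -h /
ls /var/log
EOF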
Instead of a direct heredoc redirection, we can pipe a heredoc input to docker exec using cat:
$ cat << EOF | docker exec -i [container_name] bash
[command1]
[command2]
[command3]
EOF
A direct heredoc is typically more efficient since it doesn’t involve an extra process. Its syntax is also more concise.
Launching an interactive shell session with a pseudo-TTY opens up a terminal where we can run as many commands as we want:
$ docker exec -it [container_name] bash
In the command above, the -t option allocates a pseudo-terminal for the bash session, while the -i option keeps standard input attached so we can type commands into it.
Without -i, the session can't receive our input, and without -t, the shell won't have a terminal of its own to display a prompt.
One upside of this method and the methods in the previous sections is that shell state, such as the working directory and environment variables, persists while the session is active. So, if a command depends on the outcome of a previous command, we'll get the desired result.
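As a quick illustration, assuming a container named my-container (the prompt shown is also illustrative), a variable exported early in the session remains visible to later commands:
$ docker exec -it my-container bash
root@my-container:/# export GREETING=hello
root@my-container:/# echo "$GREETING"
hello
root@my-container:/# exit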
We can loop through multiple commands using a for loop or a while loop, passing each command to docker exec with each iteration of the loop.
To achieve this with a while loop, we’ll read each command from standard input and pass it to docker exec in a script named docker_exec.sh:
$ cat docker_exec.sh
#!/bin/bash
while read command
do
docker exec [container_name] $command
done
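To try this sketch, we'd replace [container_name] with a real container name, make the script executable, and feed it commands on standard input, one per line (the commands below are arbitrary examples):
$ chmod +x docker_exec.sh
$ printf 'date\nuptime\n' | ./docker_exec.sh
Each line read from standard input becomes a separate docker exec invocation.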
We can also have read take the commands from a file:
$ cat docker_exec.sh
#!/bin/bash
while read command
do
docker exec [container_name] $command
done < commands.txt
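Here, commands.txt simply lists one command per line. A minimal example file (with placeholder commands we'd adapt to our container) might look like this:
$ cat commands.txt
date
ls /tmp
df -h /
Running ./docker_exec.sh then executes each line in the container, one docker exec call at a time.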
When executing multiple docker exec commands with a for loop, we iterate over each command explicitly:
$ cat docker_exec.sh
#!/bin/bash
for command in "command1" "command2"
do
docker exec [container_name] $command
done
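Filling in concrete values (the container name my-container and the commands are purely illustrative), the loop could look like this; $command is deliberately left unquoted so that a multi-word entry such as "ls /tmp" splits into a command and its arguments:
$ cat docker_exec.sh
#!/bin/bash
for command in "date" "ls /tmp" "df -h /"
do
docker exec my-container $command
done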
If one of our commands depends on the outcome of a preceding command, looping may not produce the desired result, since each iteration runs in a separate docker exec session. Only changes that persist in the container, such as files written to disk, carry over between iterations.
docker exec has options that replace certain commands. For instance, if we're changing the directory and listing its contents, we can use docker exec's -w or --workdir option instead of chaining cd with ls in a shell process:
$ docker exec -w [target_directory] [container_name] [command_to_execute]
The -w option specifies the working directory in which docker exec should run the command. Therefore, we can use it in place of the cd command.
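For example, assuming a container named my-container, listing /var/log from within that directory needs no cd at all (the container name and directory are illustrative):
$ docker exec -w /var/log my-container ls -l
This is equivalent to running cd /var/log && ls -l in a shell process inside the container.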
Similarly, instead of chaining the export command and another command such as env in a shell process, we can use the -e or --env option to create environment variables:
$ docker exec -e [variable=value] -e [variable=value] [container_name] env
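As an illustration, assuming a container named my-container, we could inject two made-up variables and confirm they appear in the environment:
$ docker exec -e APP_MODE=debug -e APP_PORT=8080 my-container env
The env output will include APP_MODE=debug and APP_PORT=8080 alongside the container's existing variables.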
Lastly, we can change the user with docker exec's -u or --user option. For example, if we're combining commands that switch the user and execute whoami, we could run:
$ docker exec -u [username] [container_name] whoami
When using the -u option, we may specify a user ID in place of the username, or combine a user ID and group ID in the uid:gid format.
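For instance, assuming a container named my-container, we can run a command as user ID 1000 and group ID 1000 (arbitrary IDs chosen for illustration) and check the resulting identity with id, which reports the numeric IDs even when the user has no entry in /etc/passwd:
$ docker exec -u 1000:1000 my-container id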
In this article, we explored various methods for executing multiple docker exec commands, including using command-chaining operators in a shell process, launching an interactive shell session, and redirecting input from a script file or a here-document to the shell process. All these methods preserve shell state while their sessions are active, ensuring commands dependent on preceding commands produce the desired result.
We also looked at looping the commands and passing them to docker exec. Finally, we touched on the use of docker exec options as substitutes for certain commands.