1. Introduction

Like Kubernetes, Minikube can work with different container runtimes. In many cases, the choice is Docker. However, Docker itself can also run the whole Kubernetes deployment within a container. Because of this layering, we might sometimes need to select which Docker deployment to work with.

In this tutorial, we explore Minikube deployments and the docker-env subcommand of minikube. First, we talk about the Minikube container runtime. Next, we go over stand-alone versus Kubernetes Docker deployments. Finally, we explore how docker-env changes the Docker context.

We tested the code in this tutorial on Debian 12 (Bookworm) with GNU Bash 5.2.15. Unless otherwise specified, it should work in most POSIX-compliant environments.

2. Minikube Container Runtime

The Minikube deployment system was conceived mainly as a way to run a single-node Kubernetes environment without configuring it step by step.

Instead, the minikube tool automatically creates a minimal Kubernetes context with a single command. We can deploy it directly on the host or through a driver like kvm2, qemu, or similar.

2.1. Direct Installation

In many cases, especially when on a virtual machine (VM), we use --driver=none with minikube to directly run Kubernetes on the current host:

$ minikube start --driver=none
😄  minikube v1.32.0 on Debian 12.6
✨  Using the none driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🤹  Running on localhost (CPUs=4, Memory=4666MB, Disk=66610MB) ...
ℹ️  OS release is Debian GNU/Linux 12 (bookworm)
🐳  Preparing Kubernetes v1.28.3 on Docker 26.0.0 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🤹  Configuring local host environment ...

❗  The 'none' driver is designed for experts who need to integrate with an existing VM
💡  Most users should use the newer 'docker' driver instead, which does not require root!
📘  For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/

[...]
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

As we can see, the installation proceeds directly on the local host.

By default, minikube detects the available container runtime and configures it. Support exists for three runtimes:

  • containerd
  • cri-o
  • docker

If there are several available in a given environment, we might have to specify which one to use. In this case, Docker is the only one already present, so minikube uses it.

Notably, Docker is not only a common Kubernetes container runtime, but also available as a minikube driver.
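Before starting minikube, we can check which of these runtimes a host actually provides by probing for their binaries. This is an illustrative sketch, not part of minikube itself:

```shell
# Report which container runtime binaries exist on the host PATH
detect_runtimes() {
  for runtime in dockerd containerd crio; do
    if command -v "$runtime" >/dev/null 2>&1; then
      echo "$runtime: found"
    else
      echo "$runtime: missing"
    fi
  done
}

detect_runtimes
```

If more than one runtime turns up, we can pin the choice explicitly via the --container-runtime option of minikube start.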

2.2. Docker (Container) Installation

To use the docker driver, we specify it on the command line when calling minikube:

$ minikube start --driver=docker
😄  minikube v1.32.0 on Debian 12.6
✨  Using the docker driver based on user configuration
🛑  The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
💡  If you are running minikube within a VM, consider using --driver=none:
📘    https://minikube.sigs.k8s.io/docs/reference/drivers/none/
📌  Using Docker driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
    > gcr.io/k8s-minikube/kicbase...:  453.90 MiB / 453.90 MiB  100.00% 28.34 M
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🐳  Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

In this case, the installation pulls a container base image and runs it as the Kubernetes deployment directly under Docker.

Now, we verify the CONTAINER-RUNTIME of the Minikube node and its version via the get subcommand of kubectl:

$ kubectl get nodes --output=wide
NAME       STATUS   ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE               KERNEL-VERSION   CONTAINER-RUNTIME
minikube   Ready    control-plane   3m35s   v1.28.3   192.168.39.235   <none>        Buildroot 2021.02.12   5.10.57          docker://24.0.7

Next, let’s check the Docker version on the Minikube host itself:

$ docker --version
Docker version 26.0.0, build 2ae903e

Notably, there is a discrepancy between the two container runtimes, meaning they are separate installations.
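To compare the two versions in a script, we can extract the bare version numbers from each output. This is a sketch with hypothetical helper names; the parsing assumes the usual output formats shown above:

```shell
# Extract "26.0.0" from a line like "Docker version 26.0.0, build 2ae903e"
host_version() {
  printf '%s\n' "$1" | sed 's/^Docker version \([0-9.]*\),.*/\1/'
}

# Strip the scheme from the runtime string kubectl reports, e.g. "docker://24.0.7"
node_runtime_version() {
  printf '%s\n' "$1" | sed 's|^docker://||'
}

host_version 'Docker version 26.0.0, build 2ae903e'
node_runtime_version 'docker://24.0.7'
```

In practice, we’d feed host_version the output of docker --version and node_runtime_version the result of kubectl get nodes --output=jsonpath='{.items[0].status.nodeInfo.containerRuntimeVersion}'.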

2.3. Virtual Machine (VM) Installation

To get a new Minikube virtual machine (VM) on the current host, we use one of the VM drivers, such as kvm2:

$ minikube start --driver=kvm2
😄  minikube v1.32.0 on Debian 12.6
✨  Using the kvm2 driver based on user configuration
🛑  The "kvm2" driver should not be used with root privileges. If you wish to continue as root, use --force.
💡  If you are running minikube within a VM, consider using --driver=none:
📘    https://minikube.sigs.k8s.io/docs/reference/drivers/none/
💾  Downloading driver docker-machine-driver-kvm2:
    > docker-machine-driver-kvm2-...:  65 B / 65 B [---------] 100.00% ? p/s 0s
    > docker-machine-driver-kvm2-...:  13.01 MiB / 13.01 MiB  100.00% ? p/s 200
💿  Downloading VM boot image ...
    > minikube-v1.32.1-amd64.iso....:  65 B / 65 B [---------] 100.00% ? p/s 0s
    > minikube-v1.32.1-amd64.iso:  292.96 MiB / 292.96 MiB  100.00% 28.73 MiB p
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.28.3 preload ...
    > preloaded-images-k8s-v18-v1...:  403.35 MiB / 403.35 MiB  100.00% 29.89 M
🔥  Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

This driver sets up a VM and performs a Kubernetes deployment on it.

Again, the process uses a new Docker 24 environment instead of the version on the host (26).

Because of this difference, we might have problems dealing with Docker images and containers within the Kubernetes environment due to both version and locality discrepancies.

3. Stand-Alone Docker Versus Kubernetes Docker

Docker is a container management platform, while Kubernetes offers container orchestration. Direct interaction between the stand-alone Docker installation and Kubernetes pods is rarely recommended.

Thus, kubectl and docker often present different views when the former is linked to a container or VM installation of Kubernetes. Let’s see an example.

First, we list the containers available to the host docker installation:

$ docker ps
CONTAINER ID   IMAGE                                 COMMAND                  CREATED          STATUS          PORTS                                                                                                                             NAMES
f132edec68ad   gcr.io/k8s-minikube/kicbase:v0.0.42   "/usr/local/bin/entr…"   58 seconds ago   Up 57 seconds   127.0.0.1:9009->22/tcp, 127.0.0.1:9008->2376/tcp, 127.0.0.1:9007->5000/tcp, 127.0.0.1:9006->8443/tcp, 127.0.0.1:9005->32443/tcp   minikube

As expected, we only see the minikube Kubernetes container.

Next, let’s check the containers in the Docker deployment within this container, which Kubernetes actually uses for its pods:

$ docker exec minikube docker ps
CONTAINER ID   IMAGE                       COMMAND    CREATED    STATUS     PORTS      NAMES
9b2e662ebd08   6e38f40d628d                [...]      [...]      [...]      [...]      k8sstorage-provisioner_storage-provisioner_kube-system_[...]
177c90140963   ead0a4a53df8                [...]      [...]      [...]      [...]      k8scoredns_coredns-5dd5756b68-cbtcp_kube-system_[...]
946e3d845de4   registry.k8s.io/pause:3.9   [...]      [...]      [...]      [...]      k8sPOD_coredns-5dd5756b68-cbtcp_kube-system_[...]
b1821712a4bd   bfc896cf80fb                [...]      [...]      [...]      [...]      k8skube-proxy_kube-proxy-j79hb_kube-system_[...]
701863929f5d   registry.k8s.io/pause:3.9   [...]      [...]      [...]      [...]      k8sPOD_kube-proxy-j79hb_kube-system_[...]
a66be56fe00c   registry.k8s.io/pause:3.9   [...]      [...]      [...]      [...]      k8sPOD_storage-provisioner_kube-system_[...]
c774868b2eed   6d1b4fd1b182                [...]      [...]      [...]      [...]      k8skube-scheduler_kube-scheduler-minikube_kube-system_[...]
727387caca49   73deb9a3f702                [...]      [...]      [...]      [...]      k8setcd_etcd-minikube_kube-system_[...]
f15c109f9100   10baa1ca1706                [...]      [...]      [...]      [...]      k8skube-controller-manager_kube-controller-manager-minikube_kube-system_[...]
547b45776f28   537434729123                [...]      [...]      [...]      [...]      k8skube-apiserver_kube-apiserver-minikube_kube-system_[...]
a6fb79afd237   registry.k8s.io/pause:3.9   [...]      [...]      [...]      [...]      k8sPOD_kube-scheduler-minikube_kube-system_[...]
4b8ad89fa357   registry.k8s.io/pause:3.9   [...]      [...]      [...]      [...]      k8sPOD_kube-controller-manager-minikube_kube-system_[...]
b5016796ccbc   registry.k8s.io/pause:3.9   [...]      [...]      [...]      [...]      k8sPOD_kube-apiserver-minikube_kube-system_[...]
f755b83e3b67   registry.k8s.io/pause:3.9   [...]      [...]      [...]      [...]      k8sPOD_etcd-minikube_kube-system_[...]

This time, the output includes all containers that are part of the Kubernetes deployment in the minikube container.

If Kubernetes runs in a VM on the host, this can become even harder, as the usual way to access the Docker instances there involves direct access to the VM.

In any case, the docker (or other container runtime) command on the host doesn’t, by default, manage the containers within Kubernetes. Due to the potential problems and difficulties that this arrangement may cause, minikube provides a special way to redirect the docker command to the Docker deployment within the Minikube environment.

4. Docker Context and docker-env

In short, to link the docker command in a given shell context to a specific Docker deployment, we can use the DOCKER_* environment variables.
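For instance, the docker CLI consults DOCKER_HOST before falling back to the local Unix socket, so exporting it is enough to retarget every subsequent docker call in the current shell (the address below is just a sample):

```shell
# Retarget the docker CLI for this shell (sample minikube address)
export DOCKER_HOST="tcp://192.168.49.2:2376"

# Alternatively, scope the override to a single command by prefixing it:
#   DOCKER_HOST="tcp://192.168.49.2:2376" docker ps
echo "$DOCKER_HOST"
```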

In particular, minikube provides the docker-env command that exports several variables:

$ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.49.2:2376"
export DOCKER_CERT_PATH="/root/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="minikube"

# To point your shell to minikube's docker-daemon, run:
# eval $(minikube -p minikube docker-env)

First, we use DOCKER_TLS_VERIFY to enable TLS verification. After that, we set the host access protocol, IP address, and port via the DOCKER_HOST variable. Next, we configure the certificate path in DOCKER_CERT_PATH. Finally, we set MINIKUBE_ACTIVE_DOCKERD to record which minikube profile the shell’s Docker environment points to.

Further, we get a hint on how to apply these options:

$ eval $(minikube -p minikube docker-env)

This way, we use command substitution to eval the whole script that docker-env returns. Further, the [-p]rofile option specifies the minikube instance in question, here named minikube.

In fact, we already saw the default behavior when running docker ps on the Minikube host earlier.

However, if we execute the same listing after the eval, we see a different result:

$ docker ps
CONTAINER ID   IMAGE                       COMMAND    CREATED    STATUS     PORTS      NAMES
9b2e662ebd08   6e38f40d628d                [...]      [...]      [...]      [...]      k8sstorage-provisioner_storage-provisioner_kube-system_[...]
177c90140963   ead0a4a53df8                [...]      [...]      [...]      [...]      k8scoredns_coredns-5dd5756b68-cbtcp_kube-system_[...]
946e3d845de4   registry.k8s.io/pause:3.9   [...]      [...]      [...]      [...]      k8sPOD_coredns-5dd5756b68-cbtcp_kube-system_[...]

Thus, the docker command now points to the minikube container and Kubernetes deployment within. This can be very useful when we want to employ local Docker images in Minikube or perform other similar operations.
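When we’re done, we can point the shell back at the host Docker daemon. The docker-env subcommand supports an --unset flag for this, which in essence clears the same variables:

```shell
# Suppose the shell currently points at minikube's Docker daemon:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.49.2:2376"
export DOCKER_CERT_PATH="/root/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="minikube"

# minikube can generate the matching unset commands:
#   eval "$(minikube -p minikube docker-env --unset)"
# which boils down to clearing the variables by hand:
unset DOCKER_TLS_VERIFY DOCKER_HOST DOCKER_CERT_PATH MINIKUBE_ACTIVE_DOCKERD
```

After this, docker ps once again lists the containers of the host installation.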

5. Summary

In this article, we talked about ways to deploy Kubernetes with Minikube and how they relate to the Docker installation outside the deployment.

In conclusion, depending on the way Minikube creates a Kubernetes installation, we can use the minikube command to access the Docker containers within that Kubernetes environment.
