1. Introduction

Among all Kubernetes components, the kubelet is the primary node agent running on each node, responsible for managing containers orchestrated by Kubernetes. Given its critical role, we need to access and understand kubelet logs for debugging, monitoring, and ensuring the health of the Kubernetes ecosystem.

However, locating kubelet logs can be somewhat perplexing, especially considering the variety of environments and configurations in which we can deploy Kubernetes. In this tutorial, we’ll discuss different mechanisms for finding kubelet logs across various setups and configurations.

Whether we’re DevOps enthusiasts troubleshooting a node issue or system administrators trying to understand pods’ behavior, finding the kubelet logs is the first step toward gaining insights into the inner workings of our Kubernetes cluster. Let’s get started!

2. Understanding kubelet

The kubelet acts as the Kubernetes node agent, an essential component that communicates with the control plane to manage the containers running on its node. It ensures that the containers described by PodSpecs are started, running, and healthy based on the desired state defined in our Kubernetes manifests.

Furthermore, the kubelet takes commands from the control plane, manages container lifecycles, and handles the operational aspects of container management, such as volume mounting, networking, and resource allocation.

Given its central role, the kubelet generates logs rich with information about the node’s operational state, the pods’ status, and the containers’ execution. These logs are invaluable for diagnosing issues, from container startup failures and resource allocation problems to network connectivity issues and beyond.

Therefore, whether a misconfigured security policy is preventing a container from starting or we’re hitting a resource limit, kubelet logs often hold the key to unlocking these mysteries.

3. Accessing kubelet Logs in Different Environments

The location of kubelet logs and the method of accessing them vary significantly depending on how we install and configure Kubernetes.

From cloud-based managed services to local development environments, each setup may store logs in a different location or require specific commands to access them.

Let’s look into some common environments and the respective ways to access these logs.

3.1. Using systemd

Most modern Linux distributions, including CentOS, Fedora, and Ubuntu (version 15.04 and above), use systemd as their init system. This system and service manager is responsible for bootstrapping the user space and managing user processes.

If our Kubernetes nodes are running on an operating system that uses systemd, then kubelet is likely managed as a systemd service.
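
Before relying on this, we can confirm that the kubelet actually runs as a systemd unit by checking its service status. The exact unit description depends on how the kubelet was installed; with a typical kubeadm-based setup, the output looks similar to this:

$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Active: active (running) since Wed 2024-03-13 11:58:02 UTC; 2h ago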

Therefore, to access kubelet logs in such environments, we can use the journalctl command to query the systemd journal:

$ journalctl -u kubelet
Mar 13 12:00:45 my-k8s-node kubelet[9876]: I0313 12:00:45.789101 9876 server.go:155] Successfully started kubelet
Mar 13 12:01:10 my-k8s-node kubelet[9876]: W0313 12:01:10.456789 9876 container_manager_linux.go:912] Running with swap on is not supported, please disable swap

Here, our sample log from systemd’s journal shows the kubelet starting successfully and then issuing a warning about running with swap enabled, which Kubernetes doesn’t support by default. The timestamps and process identifiers reveal the sequence and nature of events in the kubelet logs.

Notably, we can also use additional flags with journalctl to filter, paginate, or follow the log output in real-time.
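
For instance, we might show only the most recent entries without the pager, or restrict the output to a recent window and keep following new entries as they arrive (the time window here is arbitrary):

$ journalctl -u kubelet -n 100 --no-pager
$ journalctl -u kubelet --since "1 hour ago" -f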

3.2. Checking /var/log/syslog

On some Linux installations, especially those not using systemd, Kubernetes can direct the kubelet logs to the /var/log/syslog file. This file aggregates system logs and is a common place to find logging output for various services and applications, including kubelet.

However, this file can grow large and mixes entries from many services. To extract only the kubelet-related lines, we can use the grep command:

$ grep kubelet /var/log/syslog
Mar 13 14:20:30 my-k8s-node kubelet: I0313 14:20:30.123456 reconciliation.go:104] "Reconciler sync states" pod="my-namespace/my-pod"
Mar 13 14:21:00 my-k8s-node kubelet: E0313 14:21:00.654321 eviction_manager.go:255] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"my-k8s-node\" not found"

Here, the sample kubelet entries in /var/log/syslog show normal pod state reconciliation followed by an eviction manager error caused by the node not being found. This mix of routine operations and error diagnostics illustrates how the kubelet interacts with other cluster components and manages resources.
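
If we want to watch new kubelet entries as they’re appended, we can combine tail -f with grep; the --line-buffered flag keeps the filtered output flowing in real time:

$ tail -f /var/log/syslog | grep --line-buffered kubelet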

3.3. With Docker-MultiNode

If we install Kubernetes using Docker-MultiNode, a rarely used legacy setup, the kubelet runs within a Docker container.

In this configuration, accessing the logs involves first identifying the container in which the kubelet is running and then using Docker commands to fetch its logs.

Thus, we first have to find the kubelet container ID:

$ docker ps | grep kubelet
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
abcdef123456        kubelet:latest      "/usr/local/bin/kube…"   3 minutes ago       Up 3 minutes                            k8s_kubelet

Then, we can view the logs with the specific container ID:

$ docker logs abcdef123456
I0313 16:34:20.789101 1 kuberuntime_manager.go:207] "Container runtime status check" runtimeStatus=ok
E0313 16:35:25.654321 1 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"my-container\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=my-container pod=my-pod(my-namespace)\""

Here, our sample output from docker logs shows a successful container runtime status check followed by an error for a container stuck in a CrashLoopBackOff state. It highlights both the routine runtime checks the kubelet performs and the pod lifecycle problems it reports when containers fail to start properly.
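
As with journalctl, the Docker CLI can restrict the output to the most recent lines and follow new ones, reusing the container ID we found above:

$ docker logs -f --tail 50 abcdef123456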

3.4. Using Upstart

Older versions of Ubuntu (prior to 15.04) use Upstart instead of systemd as their init system. Upstart is an event-based replacement for the traditional sysvinit system that manages system startup, services, and tasks.

On Kubernetes nodes running such Ubuntu versions, kubelet logs aren’t available through journalctl; instead, they’re located in the /var/log/upstart directory. This directory contains log files for the services Upstart manages, including the kubelet if Upstart was configured to start it at boot.

Therefore, to access kubelet logs on a system using Upstart, we would look for kubelet’s log file within this directory:

$ less /var/log/upstart/kubelet.log
I0313 10:34:20.123456 1 server.go:408] "Starting to listen" address="0.0.0.0" port=10250
E0313 10:34:25.654321 1 kubelet.go:2187] "Node has no valid hostname and/or IP address" err="node IP not set"

Our sample output here shows the kubelet’s initial startup followed by an error encountered early on because the node IP isn’t configured. Such entries look much the same regardless of how we access the logs, but they illustrate that even early initialization events end up in Upstart’s log files.
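
Since this is a plain text file, the usual text tools apply; for example, we can follow it live or pull out only the error lines:

$ tail -f /var/log/upstart/kubelet.log
$ grep -i error /var/log/upstart/kubelet.log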

4. Accessing Logs on Kubernetes in Docker

If we’re working with Kubernetes in a local development environment, Kubernetes in Docker (kind) is an invaluable tool. It allows us to run Kubernetes clusters within Docker containers, thus simulating a cluster on our local machine. This setup is particularly useful for testing, development, and learning purposes.

To access kubelet logs within a kind environment, we first need to identify and enter the Docker container that represents our kind node. Each node in our kind cluster runs within its own Docker container.

4.1. Listing Running Containers

To get started, we can list all running Docker containers to find our kind node:

$ docker container ls
CONTAINER ID   IMAGE                  COMMAND                  CREATED       STATUS       PORTS                       NAMES
...
4a1b2c3d4e5f   kindest/node:v1.20.2   "/usr/local/bin/entr…"   10 minutes ago   Up 10 minutes   127.0.0.1:32768->6443/tcp   kind-control-plane
...

In our list, we look for containers with the “kind” prefix.
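
Alternatively, we can filter the listing directly by name instead of scanning it by eye:

$ docker container ls --filter "name=kind"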

After identifying the container for the node we’re interested in, we can now enter it with docker exec:

$ docker container exec -it kind-control-plane bash

With this, we get a running Bash shell inside the container, from which we can access the file system and logs as if we had SSH’d into a regular Kubernetes node.
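
Because the kindest/node image runs systemd inside the container, the kubelet is once again a systemd unit there, so the journalctl approach from earlier also works from this shell:

$ journalctl -u kubelet

Alternatively, back on the host, kind itself can export all node logs, which typically include each node’s kubelet.log, into a local directory of our choice:

$ kind export logs ./kind-logs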

4.2. Finding Logs in /var/log/containers/ and /var/log/pods/

Within the kind node container, we can also find logs for the workloads the kubelet manages in two primary locations: /var/log/containers/ and /var/log/pods/. These directories contain the log files of the containers and pods running on this node.

The /var/log/containers/ directory contains symbolic links to the log file of each container the kubelet manages. The logs themselves are stored under /var/log/pods/ but are linked here for easier access.

Similarly, /var/log/pods/ organizes logs by pod: each pod gets its own directory, whose name includes the pod’s UID, with a subdirectory per container in which that container’s log files are stored.
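
Before picking a file, it often helps to simply list both directories; the exact file and directory names depend on the pods running on our node:

$ ls /var/log/containers/
$ ls /var/log/pods/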

To view the logs for a specific container or pod, we can use the cat command or tail them to watch in real-time.

For example, we could view the last few lines of a log file for a specific container:

$ tail -n 100 /var/log/containers/myapp-container_<pod-uuid>.log
2024-03-13T12:34:56.789Z INFO myapp-container "Starting application..."
2024-03-13T12:35:01.234Z INFO myapp-container "Application configuration loaded"
2024-03-13T12:35:05.678Z WARN myapp-container "Deprecated API usage detected"
2024-03-13T12:35:10.123Z ERROR myapp-container "Failed to connect to database"

Alternatively, we can continuously watch the log:

$ tail -f /var/log/pods/<pod-uuid>/myapp-container/0.log
2024-03-13T12:36:20.345Z INFO myapp-container "Listening on port 8080"
2024-03-13T12:40:15.678Z INFO myapp-container "Received new connection"
2024-03-13T12:41:10.123Z INFO myapp-container "Connection closed"

This continuous watch on a container’s log file within a pod shows real-time logging of the application’s operational events. The log captures the application starting its server and listening on port 8080, handling a new connection, and then closing that connection.

5. Identifying Common Issues in Logs

kubelet logs are a treasure trove of information when it comes to diagnosing issues within our Kubernetes cluster.

Let’s briefly see some common issues we might encounter and the log entries that can help us identify them.

5.1. Container Start-Up Failures

One of the more common issues we might face is a container failing to start. This could be due to a variety of reasons, such as configuration errors, resource constraints, or image issues.

Fortunately, kubelet logs will typically contain error messages from the container runtime indicating the failure reason. We can look for entries containing “Failed to start container” or “Error response from daemon” to pinpoint these issues.

5.2. Resource Allocation Problems

Resource limits set too low can lead to terminated containers or degraded performance.

Conversely, setting resource requests too high can result in inefficient utilization of cluster resources.

In both cases, kubelet logs provide insights with messages such as OOMKilled (Out Of Memory Killed) for memory allocation issues or warnings about CPU limitations. Such log entries are critical for tuning and optimizing resource allocations for our applications.

5.3. Network Connectivity Issues

Networking issues can manifest as errors in service discovery, pod communication, or external access.

kubelet logs can reveal these problems through error messages related to network plugins, DNS resolution failures, or connection timeouts.

We can look for entries containing “NetworkPlugin not ready” or “Failed to setup network for pod,” which indicate potential networking issues.
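
As a quick first pass across all three categories, we can grep the journal for the patterns above; the exact strings vary between Kubernetes versions and container runtimes, so we should treat them only as starting points:

$ journalctl -u kubelet --since "1 hour ago" \
    | grep -iE "failed to start container|error response from daemon|oomkilled|networkplugin not ready|failed to setup network"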

6. Automating Log Monitoring

While manually checking kubelet logs is essential for troubleshooting specific issues, automating log monitoring can greatly enhance our ability to detect and respond to problems proactively.

We can set up various tools and scripts to monitor certain log patterns and alert us when they detect potential issues.

6.1. Using Monitoring Tools

We can configure tools like Prometheus in conjunction with Grafana to scrape metrics from our Kubernetes cluster, including those related to the kubelet.

While Prometheus primarily focuses on metrics, logs can be ingested and monitored using additional exporters or by integrating with log aggregation systems like Loki.

We can also set up alerts in Grafana based on specific log patterns or anomalies detected in the log data.

Furthermore, for more advanced analysis, we can consider using log aggregation and analysis tools like Elasticsearch, Fluentd, and Kibana (EFK). These tools can help us visualize log data, set up alerts, and perform complex queries across multiple log sources.

6.2. Scripting and Automation

For more customized monitoring solutions, we can also devise scripts to parse kubelet logs and trigger alerts or actions based on specific criteria.

For instance, let’s see a straightforward Bash script that checks the kubelet service logs since yesterday, categorizes the entries into critical, warning, and info levels, counts their occurrences, and sends an email report:

#!/bin/bash

# Define the email address for reports (placeholder, replace with a real recipient)
recipient_email="admin@example.com"

# Define the subject of the email
subject="Daily Kubelet Log Report"

# Temporary file to store the report
report_file=$(mktemp)

# Get the date for the report
echo "Kubelet Log Report for $(date)" >> "$report_file"
echo "----------------------------------------" >> "$report_file"

# Function to count and categorize logs
function count_logs {
    log_level=$1
    message_tag=$2
    count=$(journalctl -u kubelet --since "yesterday" | grep -ic "$log_level")
    echo "$message_tag: $count" >> "$report_file"
}

# Count critical, warning, and informational messages
count_logs "error" "Critical Errors"
count_logs "warn" "Warnings"
count_logs "info" "Informational Messages"

# Check for specific common issues
# Example: Image Pull Errors
image_pull_errors=$(journalctl -u kubelet --since "yesterday" | grep -ic "Failed to pull image")
echo "Image Pull Errors: $image_pull_errors" >> "$report_file"

# Send the report via email
mail -s "$subject" "$recipient_email" < "$report_file"

# Cleanup
rm "$report_file"

For this script to work, we need the mail command available and configured on our system. We can also adjust the patterns and counts based on our specific environment.
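
If we’re unsure whether mail delivery works, we can send ourselves a short test message first (the recipient here is a placeholder, and we assume a mail command from a package such as mailutils or bsd-mailx plus a working MTA):

$ echo "kubelet log report test" | mail -s "Test mail" admin@example.com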

Afterward, we save the script, e.g., check_all_logs.sh, and then make it executable with chmod:

$ chmod +x check_all_logs.sh

Finally, we can now run the script:

$ ./check_all_logs.sh

The script prints nothing to the terminal; instead, the emailed report contains a summary similar to this:

Kubelet Log Report for Wed Mar 13 10:00:00 UTC 2024
----------------------------------------
Critical Errors: 5
Warnings: 12
Informational Messages: 57
Image Pull Errors: 2

Furthermore, if we like, we can automate the execution of the script by adding it to our cron jobs.

To do this, we open our crontab with crontab -e and schedule the script at a suitable interval, e.g., daily at 1 AM:

...
0 1 * * * /path/to/check_all_logs.sh

Finally, we should save and close the editor.

With this, cron will now automatically run this script at our scheduled time.
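
To double-check that the job is registered, we can list the current crontab:

$ crontab -l
...
0 1 * * * /path/to/check_all_logs.sh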

7. Conclusion

In this article, we learned how to access kubelet logs in a variety of common locations. kubelet logs are an indispensable resource for managing or operating Kubernetes clusters. They provide deep insights into the cluster’s functioning, helping us diagnose issues, optimize configurations, and ensure smooth operations.

Moreover, we should remember that the key to effective log management is not just in accessing and reading the logs but also in leveraging tools and practices that help us monitor and react to log data proactively. Whether through manual analysis or automated monitoring, staying on top of our kubelet logs will help us maintain a healthy and efficient Kubernetes environment.
