1. Overview

When working with files in Linux, we often need to know how much content they hold. Counting the number of lines in a file gives us a quick way to gauge its size, especially for text files. This becomes even more important when dealing with large files or automating tasks.

Fortunately, Linux offers a variety of efficient ways to count the lines of a file in Bash.

In this tutorial, we’ll look at the most common ways of counting the number of lines of a specified file in Bash.

2. Example File Setup

To illustrate the different methods for counting lines in a file, we’ll use a sample file named countries.txt. This file contains a list of countries, each on a separate line:

$ cat countries.txt
Brazil
Canada
China
France
Germany
India
Japan
Mexico
Nigeria
Russia
South Africa
Spain
United Kingdom
United States
Vietnam

Our text file contains 15 lines. While we could manually count these lines, this becomes impractical for larger files.

That’s where commands from the GNU Coreutils package, as well as sed and awk, come in handy. In the following sections, let’s explore various commands and techniques to count the lines in the above file and, by extension, any text file we encounter.

3. wc

The wc command in Linux is a useful tool that helps us count the number of lines, words, and characters in a file. To specifically count the lines, we use the -l option. This makes the command one of the quickest ways to determine how many lines a file has.

For example, let’s count the lines in our countries.txt file:

$ wc -l countries.txt
15 countries.txt

The above command gives us the number of lines along with the filename.

But sometimes, we want the line count without the file name. In that case, we can redirect the file to the command’s standard input:

$ wc -l < countries.txt
15

This outputs only the number of lines.
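Since the redirected form prints just the number, it’s also convenient to capture the result in a variable for scripting. Here’s a minimal sketch; the sample file (sample.txt) is a stand-in we create on the spot, not part of the article’s setup:

```shell
# Create a small hypothetical sample file (3 lines)
printf 'Brazil\nCanada\nChina\n' > sample.txt

# Redirecting the file into wc -l suppresses the filename,
# so the output is easy to capture in a variable
lines=$(wc -l < sample.txt)
echo "sample.txt has $lines lines"
```

This pattern works well in conditionals too, for example: if [ "$lines" -gt 100 ]; then echo "big file"; fi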

Now, let’s say we want to count the lines in each file of the current directory. We can use the * wildcard to expand our scope:

$ wc -l *
 15 countries.txt
 10 programming.txt
 25 total

This command lists the number of lines for each file and even gives us a total at the end.

Additionally, we can combine grep with wc -l if we want to count lines that match a particular pattern. For instance, let’s count lines containing the letter “a” in countries.txt:

$ grep "a" countries.txt | wc -l
13

This command counts only the lines that match the specified pattern.
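As a side note, grep can count matching lines on its own with the -c option, which avoids the extra pipe to wc. A small sketch, again with a hypothetical sample file:

```shell
# Hypothetical sample file; all four lines contain the letter "a"
printf 'Brazil\nCanada\nChina\nSpain\n' > sample.txt

# -c makes grep print the number of matching lines directly
count=$(grep -c 'a' sample.txt)
echo "$count"   # → 4
```

Note that -c counts matching lines, not total matches: a line containing several occurrences of “a” still counts only once.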

Finally, if we’re working with files that have a particular extension, such as .txt, we can easily count the lines in all of those files:

$ wc -l *.txt
 15 countries.txt
 10 programming.txt
 25 total

This approach is helpful when focusing on a specific group of files.

4. sed

sed is a stream editor that’s used to perform basic text transformations on an input file. This command is mostly used for find-and-replace functionality. We can also use it to find the number of lines of a specified file.

sed accepts different expressions that print line numbers.

4.1. sed -n ‘=’

We can use the combination of sed, the -n option, and an equal sign (=) to print the line numbers without the content of the file:

$ sed -n '=' countries.txt 
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15

From the result, we can see that the command prints only the line numbers, without the file contents. However, this approach isn’t efficient for large files, since it prints a number for every single line.

4.2. sed -n ‘$=’

Most of the time, we prefer to get just the overall number of lines. sed comes in handy here with the -n option and the ‘$=’ expression, which prints the number of the last line of a file:

$ sed -n '$=' countries.txt
15

This approach is more concise and efficient, especially for larger files, because it avoids printing all the individual line numbers.
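As with wc -l, the result is a single number we can capture in a script. One caveat worth knowing: on a completely empty file, sed -n '$=' prints nothing at all (there is no last line to number), whereas wc -l prints 0. A quick sketch with hypothetical sample files:

```shell
# Hypothetical 3-line sample file
printf 'one\ntwo\nthree\n' > sample.txt

# The "$=" expression prints the number of the last line only
count=$(sed -n '$=' sample.txt)
echo "$count"   # → 3

# On an empty file, sed produces no output at all
: > empty.txt
sed -n '$=' empty.txt
```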

5. awk

The awk command treats every line as a record. The number of lines can then be printed in the END section using awk’s built-in NR variable:

$ awk 'END { print NR }' countries.txt
15

Like the previous example, this method is also suitable for large files.
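Because awk sees each line as a record, it’s also easy to refine the count, for example to skip blank lines. Here’s a sketch using a hypothetical sample file that contains one blank line:

```shell
# Hypothetical sample with a blank middle line
printf 'alpha\n\nbeta\n' > sample.txt

# NR counts every record; the NF > 0 pattern skips empty lines
total=$(awk 'END { print NR }' sample.txt)
nonempty=$(awk 'NF > 0 { n++ } END { print n }' sample.txt)
echo "$total $nonempty"   # → 3 2
```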

6. cat

The cat command concatenates the files passed to it as arguments and prints the result to standard output. This is one of the most used commands in Linux. Using the cat command with the -n option prints the file contents with their line numbers:

$ cat -n countries.txt 
     1	Brazil
     2	Canada
     3	China
     4	France
     5	Germany
     6	India
     7	Japan
     8	Mexico
     9	Nigeria
    10	Russia
    11	South Africa
    12	Spain
    13	United Kingdom
    14	United States
    15	Vietnam

We can see that the command has printed both the line numbers and the content. Notably, this approach is impractical for large files, since it prints the entire file just to reveal the final line number.
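That said, if we only want the total, we can combine cat -n with tail and awk to extract the number from the last line. This is more of a curiosity than a recommendation, since wc -l does the same job directly; the sample file below is hypothetical:

```shell
# Hypothetical 3-line sample file
printf 'a\nb\nc\n' > sample.txt

# The number on the last line of cat -n output is the total line count
count=$(cat -n sample.txt | tail -n 1 | awk '{ print $1 }')
echo "$count"   # → 3
```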

7. Conclusion

In this article, we learned several ways of counting the number of lines of a file in Bash. The wc -l command is the most used and the easiest way to find the number of lines of a given file.

However, the wc -l command has some limitations. Since it must read the entire file to count newline characters, it can become slow when dealing with very large files or files that contain extremely long lines.