
1. Overview

When we work in Java, manipulating strings is one of the fundamental skills. So, understanding string-related methods is crucial for writing efficient and error-free code.

Two commonly used methods, String.length() and String.getBytes().length, may seem similar at first glance, but they serve distinct purposes.

In this tutorial, let’s understand these two methods and explore their differences. In addition, we’ll talk about when to use each one.

2. First Glance at String.length() and String.getBytes().length

As the method name implies, the String.length() method returns the length of a string, that is, the number of characters it contains. On the other hand, String.getBytes() encodes the string into a byte array using the default charset, and String.getBytes().length reports that array's length.

If we write a test, we may see they return the same value:

String s = "beautiful";
assertEquals(9, s.length());
assertEquals(9, s.getBytes().length);

When dealing with a string in Java, is it guaranteed that String.length() and String.getBytes().length always yield the same value?

Next, let’s figure it out.

3. String.length() and String.getBytes().length Can Return Different Values

The default character encoding or charset of the current JVM plays an important role in deciding the result of String.getBytes().length. If we don’t pass any argument to String.getBytes(), it uses the default encoding scheme to encode.

We can check the default encoding of a Java environment using the Charset.defaultCharset().displayName() method. For example, the current JVM’s default encoding is UTF-8:

System.out.println(Charset.defaultCharset().displayName());
//output: UTF-8

So, next, let’s test two more strings to see if String.length() and String.getBytes().length still return the same value:

String de = "schöne";
assertEquals(6, de.length());
assertEquals(7, de.getBytes().length);

String cn = "美丽";
assertEquals(2, cn.length());
assertEquals(6, cn.getBytes().length);

In the test above, we first used the German word for "beautiful" ("schöne"), and then its Chinese translation ("美丽"). In both cases, String.length() and String.getBytes().length yielded different values.
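The divergence depends entirely on the charset used for encoding. A minimal sketch (assuming a standard JDK) that makes the charset explicit, rather than relying on the JVM default, shows how the byte count changes while the character count stays fixed:

```java
import java.nio.charset.StandardCharsets;

public class ByteLengthDemo {
    public static void main(String[] args) {
        String de = "schöne";
        // the character count never changes
        System.out.println(de.length());                                     // 6
        // the byte count depends on the charset we encode with
        System.out.println(de.getBytes(StandardCharsets.UTF_8).length);      // 7: 'ö' takes 2 bytes
        System.out.println(de.getBytes(StandardCharsets.UTF_16BE).length);   // 12: 2 bytes per char
        System.out.println(de.getBytes(StandardCharsets.ISO_8859_1).length); // 6: 'ö' fits in 1 byte
    }
}
```

So the question "do the two methods agree?" has no charset-independent answer, which is what the next section explains.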

Next, let’s find out why this happened.

4. Character Encoding

Before learning why String.length() and String.getBytes().length gave different values on the strings “schöne” and “美丽”, let’s quickly understand how character encoding works.

There are many character encoding schemes, such as UTF-8 and UTF-16. We can split these encoding schemes into two categories:

  • Variable-length encoding
  • Fixed-length encoding

We won’t dive too deep into character encodings. However, a general understanding of these two encoding techniques will be pretty helpful in understanding why String.getBytes().length can have different values from String.length().

So, next, let’s take a quick look at these two kinds of encoding through examples.

4.1. Fixed-Length Encoding

Fixed-length encoding uses the same number of bytes to encode every character. A typical example of fixed-length encoding is UTF-32, which always uses four bytes to encode a character. So, this is how “beautiful” is encoded with UTF-32:

char    byte1 byte2 byte3 byte4
 b        0     0     0     98
 e        0     0     0     101
 a        0     0     0     97
 u        0     0     0     117
 ...
 l        0     0     0     108

Therefore, when invoking String.getBytes() with the UTF-32 charset, the length of the resulting byte array will consistently be four times the number of characters in the string:

Charset UTF_32 = Charset.forName("UTF-32");

String en = "beautiful";
assertEquals(9, en.length());
assertEquals(9 * 4, en.getBytes(UTF_32).length);

String de = "schöne";
assertEquals(6, de.length());
assertEquals(6 * 4, de.getBytes(UTF_32).length);

String cn = "美丽";
assertEquals(2, cn.length());
assertEquals(2 * 4, cn.getBytes(UTF_32).length);

That is to say, if UTF-32 were the JVM’s default encoding, String.length() and String.getBytes().length would always return different values.

Some of us might notice that when storing UTF-32 encoded text, even characters that need only a single byte, such as ASCII characters, still occupy four bytes, three of which are filled with zeros. This is inefficient.
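We can see this zero padding directly by dumping the encoded byte array (a small sketch; in the JDK, encoding with the "UTF-32" charset produces the big-endian form without a byte-order mark):

```java
import java.nio.charset.Charset;
import java.util.Arrays;

public class Utf32Padding {
    public static void main(String[] args) {
        Charset utf32 = Charset.forName("UTF-32");
        // a single ASCII character still occupies four bytes, three of them zero
        byte[] bytes = "b".getBytes(utf32);
        System.out.println(Arrays.toString(bytes)); // [0, 0, 0, 98]
    }
}
```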

So, variable-length character encoding was introduced.

4.2. Variable-Length Encoding

Variable-length encoding uses varying numbers of bytes to encode different characters. UTF-8, our JVM’s default encoding in this tutorial, is one example of a variable-length encoding scheme. So, let’s look at how UTF-8 encodes characters.

UTF-8 uses from one to four bytes to encode a character, depending on the character’s code point. The code point is an integer representation of a character. For example, ‘b’ has the code point 98 in decimal, or U+0062 in hexadecimal, which is the same as its ASCII code.
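We can inspect code points directly with String.codePointAt(), which returns this integer for the character at a given index:

```java
public class CodePoints {
    public static void main(String[] args) {
        // the code point is the integer identity of a character
        System.out.println("beautiful".codePointAt(0));                    // 98, i.e. U+0062 for 'b'
        System.out.println(Integer.toHexString("schöne".codePointAt(3)));  // f6   -> U+00F6 for 'ö'
        System.out.println(Integer.toHexString("美丽".codePointAt(0)));     // 7f8e -> U+7F8E for '美'
    }
}
```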

Next, let’s see how UTF-8 determines how many bytes are used for encoding a character:

Code point range       Number of bytes
U+0000 to U+007F       1
U+0080 to U+07FF       2
U+0800 to U+FFFF       3
U+10000 to U+10FFFF    4

We know the character ‘b’ has the code point U+0062, which falls in the first row of the table above, so UTF-8 uses only one byte to encode it. Since U+0000 to U+007F corresponds to 0 to 127 in decimal, UTF-8 encodes every standard ASCII character in a single byte. That’s why String.length() and String.getBytes().length both returned 9 for the string “beautiful”.

However, if we check the code points of ‘ö’, ‘美’, and ‘丽’, we’ll see UTF-8 uses different numbers of bytes to encode them:

assertEquals("f6", Integer.toHexString('ö'));   // U+00F6 -> 2 bytes
assertEquals("7f8e", Integer.toHexString('美')); // U+7F8E -> 3 bytes
assertEquals("4e3d", Integer.toHexString('丽')); // U+4E3D -> 3 bytes

Therefore, “schöne”.getBytes().length returns 7 (5 + 2) and “美丽”.getBytes().length yields 6 (3 + 3).
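We can verify these per-character byte counts by encoding each character on its own (a small sketch that names UTF-8 explicitly instead of relying on the default charset):

```java
import java.nio.charset.StandardCharsets;

public class PerCharBytes {
    public static void main(String[] args) {
        for (char c : "schöne".toCharArray()) {
            // encode each character individually and count its bytes
            int n = String.valueOf(c).getBytes(StandardCharsets.UTF_8).length;
            System.out.println(c + " -> " + n + " byte(s)");
        }
        // prints 1 for s, c, h, n, e and 2 for ö: 7 bytes in total
    }
}
```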

5. How to Choose Between String.length() and String.getBytes().length

Now we understand when String.length() and String.getBytes().length return identical values and when they diverge. A question may come up: when should we opt for each method?

When deciding between these methods, we should consider the context of our task:

  • String.length() – when we work with the logical, character-level content of the string and want the number of characters it contains, such as validating the maximum length of user input or shifting characters in a string
  • String.getBytes().length – when we deal with byte-oriented operations and need the size of the string in bytes, such as reading from or writing to files or network streams

It’s worth noting that when we work with String.getBytes(), character encoding plays a significant role. Without an argument, String.getBytes() uses the default charset to encode the string. Alternatively, we can pass the desired charset to the method, for example, String.getBytes(Charset.forName("UTF-32")) or String.getBytes(StandardCharsets.UTF_16).
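Passing the charset explicitly keeps byte-size calculations reproducible across environments, since the result no longer depends on whatever default the JVM happens to use. A quick illustration (note we use UTF_16BE here, as StandardCharsets.UTF_16 prepends a two-byte byte-order mark when encoding):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class ExplicitCharset {
    public static void main(String[] args) {
        String cn = "美丽";
        // the same two-character string, three different byte sizes
        System.out.println(cn.getBytes(StandardCharsets.UTF_8).length);    // 6: 3 bytes per char
        System.out.println(cn.getBytes(StandardCharsets.UTF_16BE).length); // 4: 2 bytes per char
        System.out.println(cn.getBytes(Charset.forName("UTF-32")).length); // 8: 4 bytes per char
    }
}
```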

6. Conclusion

In this article, we understood in general how character encoding works and explored why String.length() and String.getBytes().length can produce different results. In addition, we discussed how to choose between String.length() and String.getBytes().length.

As always, the complete source code for the examples is available over on GitHub.
