1. Overview
Limiting download speed gives us more control over our network bandwidth. For example, we can use throttling to prioritize specific downloads.
In this tutorial, we’ll look at different ways to limit our download speed in Linux. The commands presented here are written with the Bash shell in mind, so they might not work with other shells.
2. Using wget
wget is a command-line utility for downloading files from the web. It can also mirror web pages by copying the files needed to recreate the structure of the website.
2.1. In a New Connection
When we’re starting a new transfer, we can use the –limit-rate flag for throttling. It caps the download speed at a specific value.
We use k to represent the speed in kilobytes, or m to represent it in megabytes. If we don’t add a suffix, the value is interpreted in bytes.
Let’s configure wget to limit the bandwidth in a download:
$ wget --limit-rate=20k https://example.website.com/image.png
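To make the suffix handling concrete, here’s a small Python sketch (an illustration, not wget’s actual code) of how a rate value like 20k could be converted to bytes per second, assuming the binary multipliers wget uses:

```python
def parse_rate(value: str) -> int:
    """Convert a wget-style rate string ("20k", "1m", "500") to bytes per second."""
    multipliers = {"k": 1024, "m": 1024 ** 2}
    suffix = value[-1].lower()
    if suffix in multipliers:
        # Strip the suffix and scale by the binary multiplier
        return int(float(value[:-1]) * multipliers[suffix])
    # No suffix: the value is already in bytes
    return int(value)

print(parse_rate("20k"))  # 20480
```

So --limit-rate=20k corresponds to a cap of 20480 bytes per second.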
wget throttles the network speed by periodically pausing the transfer.
When packets arrive faster than the desired rate, the application sleeps for a certain amount of time. This way, we achieve an average speed that matches the limit.
However, for smaller files, this balance may not be achievable, as the transfer speed takes some time to stabilize.
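The pause-and-resume mechanism described above can be sketched in Python (a simplified illustration of the idea, not wget’s implementation): after writing each chunk, we compare the actual elapsed time with the time the transfer should have taken at the target rate, and sleep for the difference:

```python
import io
import time

def throttled_copy(src, dst, limit_bps, chunk_size=8192):
    """Copy src to dst, sleeping so the average rate stays at or below limit_bps."""
    start = time.monotonic()
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
        total += len(chunk)
        # How long the transfer *should* have taken so far at the target rate
        expected = total / limit_bps
        elapsed = time.monotonic() - start
        if expected > elapsed:
            # Running ahead of the cap: pause to bring the average back down
            time.sleep(expected - elapsed)
    return total

# Copying 64 KB at a 256 KB/s cap takes roughly a quarter of a second
throttled_copy(io.BytesIO(b"x" * 65536), io.BytesIO(), 256 * 1024)
```

Because the sleep only catches up with the average, short transfers can finish before the pauses have any real effect, which is why the limit is less accurate for small files.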
2.2. In an Ongoing Connection
In wget, we can’t change the rate limit during a transfer. Nonetheless, it’s possible to stop our transfer and continue from the same point with a throttled speed.
To do that, we need to halt the ongoing download. This leaves us with a partial file.
Now, if we use the previous command, wget will start the file transfer from the beginning.
Instead, let’s see how we can continue an incomplete download:
$ wget -c --limit-rate=1m https://example.website.com/image.png
By using the -c (--continue) flag, we resume the download where we stopped. Thus, we can set a bandwidth limit and continue where we left off.
2.3. In Between Retrievals
If we’re downloading multiple files, another approach we can take is delaying successive retrievals. To do this, we can specify an interval to wait between different downloads.
The delay can help with website-specific restrictions for the number of transfers. It can also be useful for complying with crawling directives and preventing server overload.
Let’s apply this delay with wget:
$ wget -m https://example.website.com/ -w 30
In the example above, we’re using the -w (--wait) flag to apply a 30-second delay between retrievals. To use other units of time, we can append m (minutes), h (hours), or d (days) to the value.
The -m flag mirrors a website: wget creates a local copy of the files needed to recreate the page.
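As an illustration of the time suffixes (a hypothetical helper, not part of wget), converting a wait value to seconds could look like this:

```python
def parse_wait(value: str) -> int:
    """Convert a wget-style wait value ("30", "2m", "1h", "1d") to seconds."""
    units = {"m": 60, "h": 3600, "d": 86400}
    suffix = value[-1].lower()
    if suffix in units:
        # Strip the suffix and scale to seconds
        return int(float(value[:-1]) * units[suffix])
    # No suffix: the value is already in seconds
    return int(value)

print(parse_wait("2m"))  # 120
```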
3. Using curl
Another tool for transferring data over network protocols is curl (Client URL). In particular, we use it for downloading files and web pages. It supports user authentication and secure connections.
3.1. In a New Connection
Let’s look at how we can limit our bandwidth when performing a curl request:
$ curl https://example.website.com/image.png --limit-rate 10K --output image.png
The --limit-rate flag sets the maximum bandwidth used. If no suffix is given, the value is interpreted in bytes per second.
We can also define the value in other units by using K for kilobytes, M for megabytes, and G for gigabytes. Their lowercase counterparts also work.
As with wget, the throttle applies to the average bandwidth used. While the speed might exceed the limit at the beginning, it evens out throughout the transfer.
3.2. In an Ongoing Connection
With curl, we don’t have a way of applying a throttle during a connection.
However, we can stop an ongoing transfer and create a new one with a transfer speed limit, restoring our progress.
We can do this with the -C (--continue-at) flag and the same output file:
$ curl https://example.website.com/image.png --limit-rate 10K -C - --output image.png
Using this flag, we can define a specific offset to skip. These are bytes from the beginning of the file that won’t be downloaded again.
If we use - as the value, curl looks at the output file and only downloads the remainder.
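Under the hood, resuming relies on HTTP range requests: the client asks the server for the bytes starting at the size of the partial file. Here’s a minimal Python sketch of computing that offset and the corresponding Range header (a hypothetical helper, not curl’s code):

```python
import os

def resume_range_header(path: str) -> str:
    """Build the HTTP Range header value for resuming a download into path."""
    # Bytes already on disk don't need to be downloaded again
    offset = os.path.getsize(path) if os.path.exists(path) else 0
    return f"bytes={offset}-"
```

For a 1000-byte partial file, this yields bytes=1000-, and a server that supports range requests sends only the remainder, which is appended to the file.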
4. Conclusion
In this article, we looked at methods for throttling our download speed in Bash. In particular, we focused on curl and wget. These are popular tools for data transfer and have built-in mechanisms for limiting the transfer speed.
We also explored how the throttling works: the tools pause a transfer whenever the average speed surpasses the threshold. For smaller files, this limit is harder to achieve, since the transfer doesn’t last long enough for the speed to average out.
Finally, we saw how we could achieve this throttle in multiple downloads by applying a delay between retrievals.