1. Introduction

FFmpeg is a powerful command-line tool for processing audio and video files. It’s fast and versatile, covering a wide range of use cases, but as with many command-line programs, it can be a little bit harder to use than its GUI counterparts.

In this article, we’ll explore various parameters of the FFmpeg render process.

2. Input and Output

First, we need to define our input (or inputs) and output. Inputs are preceded by the “-i” parameter, and the output is simply the last parameter provided to the command. Even these two simple parameters can define a useful process. Let’s say we want to convert a WAV file to mp3. We run:

$ ffmpeg -i file.wav file.mp3

Because we didn’t define any other parameters, FFmpeg will pick sensible defaults based on the extension of the output file. We can use the ffprobe command to check the parameters of the newly created file:

$ ffprobe file.mp3
...
Input #0, mp3, from 'file.mp3':
  Metadata:
    encoder         : Lavf58.29.100
  Duration: 00:03:04.27, start: 0.025057, bitrate: 128 kb/s
  Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 128 kb/s
    Metadata:
      encoder         : Lavc58.54

3. Codecs, Bitrate, Size, and Frame Rate

If we don’t want to rely on the defaults, we can specify some encoding options. Some can be specific to the codec that we’re using, but the most basic ones should work similarly in most situations.

3.1. Codecs

First, we must decide what codecs we want to use for audio and video. For now, we’ll assume that we want to apply the same codec to all video streams and the same codec to all audio streams if we have more than one stream in the input file.

Let’s encode our video with the “h264” codec and our audio with the “aac” codec. We’ll use the “-c:v” parameter to set the codec for all video streams and the “-c:a” parameter to set the codec for all audio streams:

$ ffmpeg -i input.mp4 -c:v h264 -c:a aac output.mp4
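
If we don’t want to re-encode at all, for example, when we only change the container, we can pass the special “copy” value instead of a codec name. As an illustration, assuming the streams in the MKV file are compatible with the MP4 container:

$ ffmpeg -i input.mkv -c:v copy -c:a copy output.mp4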

3.2. Bitrate

We can specify bitrate both for video and audio. For video, we’ll use the “-b:v” parameter, and for audio, the “-b:a” parameter:

$ ffmpeg -i input.mp4 -b:v 2M -b:a 128k output.mp4

Mind that different codecs require different bitrate values to maintain transparency, that is, output indistinguishable from the source. For example, the mp3 codec needs a higher bitrate than the aac codec to achieve the same quality. Also, the bitrate value means slightly different things for constant bitrate (CBR) and variable bitrate (VBR) encoding.
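
To illustrate the difference, here are two example commands: the first asks the libmp3lame encoder for quality-based VBR instead of a fixed bitrate, and the second constrains the video bitrate to approximate CBR (the bitrate and buffer size are placeholder values):

$ ffmpeg -i input.wav -c:a libmp3lame -q:a 2 output.mp3
$ ffmpeg -i input.mp4 -b:v 2M -minrate 2M -maxrate 2M -bufsize 4M output.mp4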

3.3. Frame Rate

By default, FFmpeg will encode the output with the same frame rate as its input, but if needed, we can customize it. We have a couple of ways of doing this. The first is to use the “-r” parameter. Used as an output option, it makes FFmpeg duplicate or drop frames so that the output has the requested constant frame rate:

$ ffmpeg -i input.mp4 -r 30 output.mp4

It can also be used for rendering images from video. If we’d like to extract one still image for every second of the video, we can do it easily:

$ ffmpeg -i input.mp4 -r 1 -f image2 snap-%03d.jpeg

We need to force the output to “image2” format using the “-f” parameter. The second way to set the frame rate is to use the “fps” filter:

$ ffmpeg -i input.mp4 -filter:v fps=30 output.mp4

In some situations, we don’t know the frame rate of the input, or the notion of an input frame rate doesn’t make sense, for example, when we build a video from still images. In that case, we need to specify the frame rate as an input option, placing the “-framerate” parameter before the input:

$ ffmpeg -f image2 -framerate 30 -i snap-%03d.jpeg output.mp4
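
As a side note, many players expect 4:2:0 chroma subsampling, so when building a video from still images, it’s common (though not required) to add the “-pix_fmt yuv420p” parameter:

$ ffmpeg -f image2 -framerate 30 -i snap-%03d.jpeg -pix_fmt yuv420p output.mp4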

3.4. Size and Scaling

If we want to scale the video to a specific size, we can use the scale filter:

$ ffmpeg -i input.mp4 -vf scale=320:240 output.mp4

We may also want to specify only one dimension and keep the aspect ratio:

$ ffmpeg -i input.mp4 -vf scale=320:-1 output.mp4
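
Some codecs, including h264, require even dimensions. In that case, we can use -2 instead of -1, which keeps the aspect ratio while rounding the calculated dimension to an even value:

$ ffmpeg -i input.mp4 -vf scale=320:-2 output.mp4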

In some situations, we don’t want to scale to a specific size but, for example, to twice the size of the original. To achieve this, we can use variables instead of numbers — “ih” for input height and “iw” for input width:

$ ffmpeg -i input.mp4 -vf scale=iw*2:ih*2 output.mp4
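
Video filters can also be chained with commas. For example, the following command doubles the size and sets the frame rate to 30 in a single pass:

$ ffmpeg -i input.mp4 -vf "scale=iw*2:ih*2,fps=30" output.mp4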

4. Audio Processing

4.1. Volume

During rendering, we can also use filters to manipulate audio streams. Let’s start with volume and set it to half of the input value:

$ ffmpeg -i input.mp4 -filter:a "volume=0.5" output.mp4

The parameter value in the command above is relative to the input volume. We can also express the change in decibels. Let’s increase the volume by 10 dB:

$ ffmpeg -i input.mp4 -filter:a "volume=10dB" output.mp4

4.2. Normalization

If we want to normalize audio to a requested maximum volume peak, we can first check the current peak (measured in dBFS) using the “volumedetect” filter:

$ ffmpeg -i input.wav -filter:a volumedetect -f null /dev/null
...
[Parsed_volumedetect_0 @ 0x7fa48bf09cc0] mean_volume: -7.6 dB
[Parsed_volumedetect_0 @ 0x7fa48bf09cc0] max_volume: -2.0 dB
...

Then, we can adjust the volume accordingly. Let’s say we want to set the peak to -1 dBFS instead of the -2 dBFS measured above. We need to add 1 dB to the output:

$ ffmpeg -i input.wav -filter:a "volume=1dB" output.wav

If we want to normalize not only the peak volume but also the perceived loudness, measured in LUFS, we can use the loudnorm filter:

$ ffmpeg -i input.mp4 -filter:a loudnorm output.mp4

By default, it will normalize audio to match the EBU R 128 standard.
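
If we need specific targets, the filter accepts, among others, the integrated loudness (“I”), true peak (“TP”), and loudness range (“LRA”) options. For example, to aim for -16 LUFS with a -1.5 dBTP ceiling (the values here are only an illustration):

$ ffmpeg -i input.mp4 -filter:a loudnorm=I=-16:TP=-1.5:LRA=11 output.mp4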

4.3. Sample Rate

We can also change the sample rate of the audio using the “-ar” parameter:

$ ffmpeg -i input.wav -ar 48000 output.wav

Mind that changing the sample rate can result in losing some audio quality, even if the new sample rate is higher than the original one.
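
If resampling quality is a concern and our FFmpeg build includes libsoxr, we can perform the conversion with the “aresample” filter and ask it to use the SoX resampler instead of the default one:

$ ffmpeg -i input.wav -af "aresample=48000:resampler=soxr" output.wav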

5. Stream Mappings

Up to this point, we’ve operated on at most one video and one audio stream. However, sometimes we want to merge streams from a few files into one. To achieve that, we can use the “-map” parameter followed by a stream specifier. A specifier consists of the index of an input file and the index of a stream inside that file, delimited by a colon.

Let’s say we want to take a video from the first file and the audio from the second one and merge them into the output file:

$ ffmpeg -i first.mkv -i second.mkv -c copy -map 0:0 -map 1:1 output.mkv

Instead of specifying the index of the stream inside the file, we can use the “v” (video) or “a” (audio) stream type specifier:

$ ffmpeg -i first.mkv -i second.mkv -c copy -map 0:v -map 1:a output.mkv
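
We can also mix the two approaches. For example, to keep all the streams from the first file and add only the audio from the second one:

$ ffmpeg -i first.mkv -i second.mkv -c copy -map 0 -map 1:a output.mkv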

6. Conclusion

In this article, we looked into various ways of parametrizing the FFmpeg render process. We learned how to set the input and output, then applied different options and filters to manipulate the output. Finally, we used the mapping feature to pick and choose streams and combine them into a single file.
