What is video bitrate?
If you have ever worked with video, you will almost certainly have come across the term ‘bitrate’.
And even if you have only ever watched videos, it’s likely you’ll have seen the effect that different bitrates can have, whether that’s buffering as you stream, or quality so low you can barely follow the action.
The name makes it seem a simple concept: the rate at which bits are transmitted. However, there are several factors that can affect a bitrate, meaning that bitrates are not all created equal.
You could watch two videos with identical bitrates and have entirely different experiences. Ultimately, this means it’s impossible to say what a ‘good’ bitrate is, but understanding what is behind a bitrate will go a long way to helping you decide what’s right for your video.
At its simplest, bitrate is a measure of data transmission. A bit is a computer's most basic unit of information: a binary on or off, a 1 or a 0. Every computer, no matter how sophisticated or rudimentary, uses them, and it needs a lot of them. A typical Unicode character stored in 16-bit form uses 16 bits, which means this paragraph alone uses over 5,000 bits.
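To make that arithmetic concrete, here is a tiny Python sketch that counts bits for a piece of text, assuming the article's figure of 16 bits per character (real-world encodings such as UTF-8 vary, so treat this as illustrative only):

```python
def text_bits(text: str, bits_per_char: int = 16) -> int:
    """Bits needed to store text at a fixed width per character."""
    return len(text) * bits_per_char

# 7 characters at 16 bits each:
print(text_bits("bitrate"))  # 112
```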
Bitrate can be applied in two ways. The first is capacity: an internet connection's capacity, for example, is measured by its bitrate, and here higher is better, since a higher bitrate means the connection can transfer more data, more quickly.
A similar meaning applies to video: the higher the bitrate, the more data or information each second of the video carries. In the case of video, however, higher is not necessarily better.
A high bitrate might be too much to stream over a home internet connection, for example, or even too much for some computers to handle, and it creates a larger file size.
In short, video bitrate is the amount of information transmitted per second, and it is affected by factors such as resolution, frame rate, color depth and more.
A simple example of bitrate
To give an example, and to keep the numbers relatively small, let’s imagine making a video to demonstrate some retro computing from thirty years ago.
We want to demonstrate using a computer with a VGA monitor, so the image will be just 640 by 480 pixels, some way from high definition. That means each image has 307,200 pixels. VGA could display up to 16 colors, which in binary needs four bits (four bits cover the values 0000 to 1111, or 0 to 15, giving sixteen values in total).
That means we need 1,228,800 bits for a single picture.
But then we need multiple frames to make it a video. Assume we go for 20 frames per second — it won't be smooth, but it will work — and we now need 24,576,000 bits per second. It might be a simple retro video, but it has a bitrate of roughly 25 megabits per second (Mbps).
At its most basic, the bitrate is a function of the image’s resolution, the color depth and frame rate. As each increases, so too does the amount of data required for every second of video.
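That relationship can be written as a one-line formula. The sketch below applies only to uncompressed video, and the function name is illustrative rather than any standard API:

```python
def video_bitrate_bps(width: int, height: int, bits_per_pixel: int, fps: int) -> int:
    """Uncompressed bitrate: resolution x color depth x frame rate."""
    return width * height * bits_per_pixel * fps

# The retro VGA example from the text: 640x480, 16 colors (4 bits), 20 fps
bps = video_bitrate_bps(640, 480, 4, 20)
print(bps)              # 24576000 bits per second
print(bps / 1_000_000)  # ~24.6, i.e. roughly 25 Mbps
```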
Why bitrate gets complicated
That bitrate for the simple VGA video might seem high, and that's because it is: a typical Blu-ray video may have a bitrate of only around 20 Mbps, while a DVD will be just 6 Mbps.
The reason is that, in real-life use, video will almost always be compressed. Compression rates can vary enormously, depending on things like the video's content and the compression techniques used.
So, what are the factors that affect bitrates?
Resolution plays a big part in bitrate because, obviously, the higher the resolution, the more data is needed. Often, though, this is something that cannot be changed: a high-definition video needs a high-resolution picture. If you are encoding for streaming, however, it might be possible to reduce the bitrate by using a lower resolution.
YouTube will do this automatically, serving a lower resolution video on slower internet connections.
The more colors in the image, the more data it needs. However, this is an area where compression can be highly effective.
Compression can be lossless or lossy. As the word suggests, lossless compression makes the file smaller with no loss of data.
Lossy compression takes advantage of the fact that, up to a point, the human eye is very effective at compensating for loss of detail.
Modern displays are capable of millions of colors, but the human eye cannot distinguish each one. Compression can take advantage of this by reducing the number of colors used.
Done well, it can significantly reduce the file size, but it can have drawbacks. One of the most common video compression artefacts is ‘banding’. This is where a gentle gradient of color is replaced by just a handful of shades that may be close, but are different enough that visible bands appear.
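A toy sketch of how cutting color levels produces banding. The `quantize` helper here is hypothetical, not a step from any real codec, but it shows how a smooth 8-bit gradient collapses into a few visible bands:

```python
def quantize(value: int, levels: int) -> int:
    """Snap an 8-bit channel value (0-255) onto a handful of evenly
    spaced levels. Aggressive quantization like this causes banding."""
    step = 255 / (levels - 1)
    return round(round(value / step) * step)

# A smooth gradient sampled every 32 steps collapses into 4 shades:
gradient = list(range(0, 256, 32))
print([quantize(v, 4) for v in gradient])  # [0, 0, 85, 85, 170, 170, 170, 255]
```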
Like resolution, the frame rate plays a big part in the overall bitrate: the more frames per second, the higher the bitrate. It might be assumed that higher is better, but there is some scope to adjust the frame rate.
Commonly used frame rates have developed through convention. Humans typically stop perceiving individual frames at around 10-12 fps, so anything faster reads as continuous motion.
The 24-fps standard was only chosen because it was about the average frame rate when sound movies were introduced in the 1920s, and it has stuck ever since. More frames can make a smoother video, but people are used to 24 fps.
Indeed, when The Hobbit was released at 48 fps, some movie-goers felt the smoother picture prevented the suspension of disbelief, while others complained it induced nausea.
The type of video
Video content will play a big role in bitrate. This is because of the way video compression works.
Returning to our retro video example, it's likely large parts of the screen will be single blocks of color; the computer desktop might be plain, windows may be largely blank space with only a few controls, or a document will have lots of white space in the margins and between paragraphs.
Compression can exploit this: instead of containing data for every pixel, it can carry data for a whole area at once.
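One simple way to carry data for a whole area is run-length encoding. Real video codecs are far more sophisticated, but this sketch shows the principle of collapsing flat blocks of color:

```python
from itertools import groupby

def run_length_encode(pixels):
    """Collapse runs of identical pixels into (value, count) pairs,
    a simple stand-in for how compression exploits flat areas."""
    return [(value, len(list(run))) for value, run in groupby(pixels)]

# A scanline that is mostly one block of color compresses dramatically:
scanline = ["white"] * 600 + ["black"] * 40
print(run_length_encode(scanline))  # [('white', 600), ('black', 40)]
```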
And even if the image is complicated, if it’s not changing, there’s no need to repeat it every frame. A video of someone talking in front of a static backdrop, like a newsreader, could have a relatively low bitrate because only a small area — the individual and the immediately surrounding area — will change with each frame.
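The same idea across frames can be sketched by storing only the pixels that changed. Real inter-frame compression (motion compensation) is far more involved, but the principle looks like this:

```python
def changed_pixels(prev_frame, next_frame):
    """Indices of pixels that differ between two frames. Only these
    need to be stored, the idea behind inter-frame compression."""
    return [i for i, (a, b) in enumerate(zip(prev_frame, next_frame)) if a != b]

# A mostly static scene: only 2 of 10 pixels change between frames.
prev = [0] * 10
nxt = [0] * 10
nxt[4], nxt[5] = 1, 1
print(changed_pixels(prev, nxt))  # [4, 5]
```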
Of course, this works the other way too. Videos with lots of movement or action will require a high bitrate because every part of the image might change in each frame.
Finally, the type of compression will make a huge difference. The most common codec in use is H.264. A more recent standard, however, is H.265, and while it is still being adopted, meaning some people may not be able to play H.265 videos, it is much more efficient.
The BBC, researching codecs for its own use, found that H.265 files were around 50% smaller than H.264 while maintaining the same quality level.
Another practical consideration is whether compression uses a constant or variable bitrate. Constant bitrates are useful for things like streaming video, since the bitrate does not vary dramatically and won't risk overwhelming the available bandwidth.
However, a constant bitrate does not work as well for videos where the amount of action changes. A variable bitrate can change dramatically to match the video, keeping quality high but increasing the bitrate, and the file size, for those action scenes.
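A toy illustration of the trade-off: a variable-bitrate encoder can spend the same total budget as a constant-bitrate one, but weight it toward complex scenes. The complexity scores below are made up purely for illustration:

```python
def vbr_allocation(complexity, average_bps):
    """Toy variable-bitrate allocator: spend the same total budget a
    constant bitrate would, but in proportion to scene complexity."""
    total_budget = average_bps * len(complexity)
    total_complexity = sum(complexity)
    return [total_budget * c / total_complexity for c in complexity]

# Four quiet seconds and one action-heavy second (complexity scores):
allocation = vbr_allocation([1, 1, 4, 1, 1], average_bps=4_000_000)
print(allocation)  # the action second gets 10 Mbps, the quiet ones 2.5 Mbps
```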
What is the ideal bitrate?
Every video will have its own ideal bitrate. However, it’s important not to think that higher is always better. A higher bitrate may create problems with streaming, or create unwieldy file sizes that quickly eat up storage space.
And a high bitrate might not even provide an advantage: there is a point at which, although the video is technically better, people cannot perceive the difference.
Generally, the only time higher is always better is during video production. The higher the bitrate you work with, the higher the quality, and you can always lower the bitrate of the final product for streaming or sharing. However, you can never recover quality lost to a low bitrate at the beginning.
When choosing that final bitrate, consider the platform you are using. Sites like YouTube have published guidelines, suggesting as high as 85 Mbps for a 4K high-frame-rate video, down to 1 Mbps for a 360p low-frame-rate video.
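Bitrate translates directly into file size, which is worth checking before committing: bits per second times duration, divided by eight for bytes. A quick sketch using the suggested figures above (the function name is illustrative):

```python
def file_size_gb(bitrate_mbps: float, duration_minutes: float) -> float:
    """Approximate file size for a video at a given average bitrate."""
    bits = bitrate_mbps * 1_000_000 * duration_minutes * 60
    return bits / 8 / 1_000_000_000  # bits -> bytes -> gigabytes (decimal)

# A 10-minute video at 85 Mbps (suggested for high-frame-rate 4K):
print(file_size_gb(85, 10))  # 6.375 GB
# The same video at 1 Mbps (suggested for low-frame-rate 360p):
print(file_size_gb(1, 10))   # 0.075 GB
```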
Then, within that, think about what the bitrate needs to carry. Do you need to have lots of color information retained? Is there lots of movement, does the video switch between shots? Are there scenes that might cause compression artefacts, like simple color gradients, or small, intricate, details?
Ultimately, the final bitrate will be a result of all these, sometimes hard to predict, factors. Getting the right one for your platform and audience might take some experimentation, but when you know what goes into a bitrate, you’ll soon start getting a sense of what is, and isn’t, right for your needs.