Wednesday, March 1, 2017

CBR vs VBR Rendering in Adobe Premiere

I do a lot of video editing, and as I was sitting waiting for a project to render last week, I was thinking about the various encoding settings in Premiere.  They're pretty confusing for some home video producers, so I thought I'd write a post to simplify your understanding of the main settings.  In particular, it's the different types of VBR (variable bit rate) and CBR (constant bit rate) compression that confuse people.  Let me apologize in advance that "simplify" and "simple" are not synonymous.  While writing this post, I had to go into more detail than one might expect.  If you want to go straight to CBR vs VBR, skim through until you see that section header.

But first, here's a screenshot of the project that I was working on when I decided to write this post:


I also have a video link to a YouTube version of this information, which you can find at the bottom of this post.


First, I'll assume everyone understands what the bit rate is (but I'll explain it in more detail below).  It's the bandwidth: the amount of data that a piece of video or other media consumes per unit of time.  If one piece of video or audio streams at 20 kbps (kilobits per second) and another streams at 40 kbps, and the two items are the same duration, then the second file will take up twice as much space on your hard drive or other storage as the first.


Bits & Bytes

Let's talk about bits & bytes comparisons for a minute or two (you can skip this section if you want).  I didn't want to confuse you, but I guess this is important.  Generally, you should assume that there are eight bits in a byte (alright, computer scientists might argue about the exceptions).  Small "b" refers to bit, and capital "B" refers to byte.  Kilo is generally a prefix that means a thousand, mega is generally a prefix that means a million, giga is a prefix that means a billion, and tera is a prefix that means a trillion.  Keep adding three decimal places.  But ignore the fact that a billion can mean different things in different parts of the Commonwealth, etc.  Now, alter the previous statement, because computer stuff is expressed in binary at its heart, and kilo is usually just shorthand for 2^10, or 2 to the tenth power.  And 2^10 is actually 1,024, not 1,000, which throws off the numbers for mega and giga and tera and so on (peta is next), because they're all adding additional powers of two, not decimal multipliers.  Except wait: some manufacturers (especially of hard drives) use the SI system (decimal) to skimp a bit, which means that a "2 terabyte" hard drive actually only holds about 1.82 TB in binary terms.  Hm.  I think I'm going to abandon this part of the discussion for now, and maybe make a separate blog post about it.
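If you want to see exactly where that hard-drive discrepancy comes from, here's a quick Python sketch (the "2 TB" figure is the manufacturer's decimal number; the operating system divides by 2^40):

```python
# A "2 terabyte" drive as marketed uses decimal (SI) units.
marketed_bytes = 2 * 10**12

# The operating system reports capacity in binary units (2^40 bytes per TB).
reported_tb = marketed_bytes / 2**40

print(round(reported_tb, 2))  # -> 1.82 (often displayed truncated as 1.81)
```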

1 KB = 1 kilobyte = 2^10 bytes = 1,024 bytes = 8,192 bits
1 MB = 1 megabyte = 2^20 bytes = 1,024 KB = 1,048,576 bytes = 8,388,608 bits
1 GB = 1 gigabyte = 2^30 bytes = 1,024 MB = 1,073,741,824 bytes = 8,589,934,592 bits
1 TB = 1 terabyte = 2^40 bytes = 1,024 GB = 1.0995 x 10^12 bytes
1 PB = 1 petabyte = 2^50 bytes = 1,024 TB = 1.1259 x 10^15 bytes
1 EB = 1 exabyte = 2^60 bytes = 1,024 PB = 1.1529 x 10^18 bytes
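As a sanity check on the chart above, a few lines of Python reproduce those values:

```python
# Each step up the chart multiplies by another 2^10 = 1,024.
prefixes = ["KB", "MB", "GB", "TB", "PB", "EB"]
for i, name in enumerate(prefixes, start=1):
    b = 2 ** (10 * i)
    print(f"1 {name} = {b:,} bytes = {b * 8:,} bits")
```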

If you really need to go higher on that chart, zettabytes (ZB) and yottabytes (YB) are next, and the next order of magnitude after YB hadn't been named as of this writing.  Of course, at the moment (early 2017), petabytes are the stuff of dreams.  However, Seagate did announce a 60 TB SSD in mid-2016 (which would probably cost about $50,000), so the concept of a drive that holds 1 PB probably isn't that far out.  Maybe by 2021-2022?  Then a 1 exabyte drive by 2030, and a 1 zettabyte drive by 2040?  I'm not sure what you'd store on that.  Perhaps a 3D holographic 240 fps full Dolby surround video diary of your entire life.  And the cable connecting that drive would probably need to be a fiber optic bundle with the thickness of the Mississippi River, transmitting data in full spectrum.  Thankfully, by the time people are worrying about that technological hurdle, my time will have come.




Bit Rates

The Bit Rate refers to the amount of information being processed per unit of time.  Imagine this: let's say that each pixel on your screen can have a color depth represented in 16 bits.  Be aware that 8-bit and 24-bit are also common color depths (more so in digital photography & graphics editing).  Also, there's a HUGE difference in range between those types.  The number of bits is an exponent of two, since it's a binary system, so 8-bit is equivalent to 2^8 (or 256 different choices to represent every type of colour known), 2^16 is 65,536 choices, and 2^24 is 16,777,216 choices, a lot larger than 2^16 (which is a number that I memorized as a small child math geek, because it was the addressing range of the memory available in the Commodore 64).  Let's also assume that we're working with a hypothetical screen size of 100 pixels by 200 pixels.  This is 20,000 pixels.  If each pixel is expressed in 16-bit resolution, then you need 20,000 x 16 bits to show each frame of the video.  That works out to 320,000 bits per frame.  But 100x200 pixels is not a common screen size, so it's a poor example.  Let's look at some common sizes:

SD (old):              640 x  480 pixels =   307,200 pixels
SD (other):            720 x  480 pixels =   345,600 pixels
HD (high-def):        1920 x 1080 pixels = 2,073,600 pixels
UHD 4K (consumer):    3840 x 2160 pixels = 8,294,400 pixels
DCI 4K (cinema):      4096 x 2160 pixels = 8,847,360 pixels
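The per-frame numbers above fall out of simple multiplication.  Here's a sketch using the 16-bit-per-pixel assumption from the hypothetical example:

```python
# Uncompressed bits per frame = width x height x bits per pixel.
BITS_PER_PIXEL = 16  # the hypothetical depth used in the example above

sizes = {
    "SD (old)": (640, 480),
    "HD":       (1920, 1080),
    "UHD 4K":   (3840, 2160),
}
for name, (w, h) in sizes.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels, {pixels * BITS_PER_PIXEL:,} bits per frame")
```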

I'll bite my tongue and refrain from going on a rant about standardization for now.  I'm sure that 8K video will have problems too.  Incidentally, on a positive note, 8K video is the approximate point where there are very limited returns in going with a higher pixel density (based on standard viewing distance) thanks to limitations in the average human eye.  This is good because it will be an eventual limit on the nuclear arms race that is video resolution.  I mean, sure, maybe someday someone will build a 2500K video camera to be able to broadcast video on a 200 foot wide by 20 foot high screen where they want the resolution (pixel density) to be "equivalent to real-life" for anyone standing four feet away from the screen, but thankfully, 8K should be a reasonable limit for our basic needs on desktop monitors and regular televisions, I think.


Color Depth (Bit Depth)

A few moments ago, I talked about 16-bit color depth.  Well, Premiere works with 16-bit (i.e. it allows you to import video files that were recorded at 16-bit) but it only outputs 10-bit.  What!?  Yeah, it's complicated.  There's a link at the bottom of this post that goes into this issue in more depth.  Even though programs like Photoshop & After Effects & SpeedGrade can output up to 32-bit, Premiere is 10-bit.  But that's fine for broadcast-quality final renders.


Frame Rates

Ok, let's talk frame rates.  Are there different standards?  Of course there are.  24p, 25p, 29.97i, 30p, and 60p are some common ones.  For broadcast in North America, most of South America, Japan, South Korea, and the Philippines, 30fps (NTSC, technically 29.97fps) is usually the standard.  For broadcast almost everywhere else, 25fps (PAL) is usually the standard, while 24fps is the traditional film rate.  I won't get into the difference between the "p" and the "i" (progressive vs. interlaced scan), so if you don't know this, look it up.  From my point of view, 30p is a decent frame rate, and 60p is a great frame rate, which means that there are 60 full frames of "photos" per second in the video.  Unfortunately, at the moment, the technology in the cameras that I own isn't sufficient to handle 4K footage at 60p.  But that's another story.  I'm sure that 4K 60p cameras (probably even in mobile phones) will be common within 12 months.


Bandwidth & Compression

Alright, so currently I'm editing at 3840x2160, 29.97fps.  Let's call it 30 for simplicity.  So we need 3840x2160 pixels per frame, or 8,294,400 pixels per frame, and 10 bits per pixel, which means 82,944,000 bits per frame, multiplied by 30 frames per second, which is about 2.49 billion bits per second, or 2,488.32 Mbps.  That is a lot of bandwidth.  Let's convert that from megabits to megabytes, which makes more sense when thinking about disk storage, so we divide by 8 and we get 311.04 MB/sec.  Yup, a lot of bandwidth.  I'd use a gigabyte in just over three seconds, or roughly 18 GB per minute.  I don't have the storage capacity for that kind of data.  That's where compression can come in.
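That back-of-the-envelope calculation can be written out in a few lines of Python (note this uses the simplified figure of 10 bits per whole pixel, as above, not 10 bits per colour channel):

```python
# Uncompressed bandwidth for 3840x2160 at 30 fps, 10 bits per pixel.
width, height, bits_per_pixel, fps = 3840, 2160, 10, 30

bits_per_frame = width * height * bits_per_pixel   # 82,944,000 bits
bits_per_second = bits_per_frame * fps             # ~2.49 billion bits/sec
mb_per_second = bits_per_second / 8 / 1_000_000    # megabytes per second

print(round(mb_per_second, 2))  # -> 311.04
```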

Compression means that an algorithm or CODEC (compression/decompression module) trims down the size of the video stream by throwing away some unnecessary data.  I have links to a couple articles about compression at the bottom of this post.

For the sake of experimentation, I tried rendering some 4K projects at the very highest, most space-consuming h.264 output possible in Premiere: 300 Mbps.  That's excessive.  Be aware that YouTube itself recommends a maximum bitrate of about 50 Mbps for this type of footage, so I'm definitely using more bandwidth than necessary, although this was for the sake of education and experimentation, and for maximizing quality when I had no storage space restrictions.  Incidentally, YouTube also currently has a cap of 128 GB per video uploaded, although I'm sure that will increase within a year or two.


CBR vs VBR

This was the original point of this post, before it got wildly out-of-control.  CBR stands for constant bit rate.  VBR stands for variable bit rate.  How do you know what settings to use for compression?

Constant Bit Rate means that EVERY part of your video will get compressed at the same bit rate.  If your video is similar throughout, this is fine.  If parts of your video are very static and unchanging, while other parts are very "busy" with a lot of movement, then this is not necessarily fine.  You're probably "wasting" bandwidth in the static parts that are being compressed, and you're probably not getting enough detail in the busy parts, because you're limited in how much bandwidth you are using.  Busy video with a lot of movement requires more information in a compressed codec.

Note that if you're using uncompressed video, where you store/save/transmit every bit of information associated with every pixel of every frame in the video, then CBR is just great.  But if you're editing anything larger than HD at the current time, you probably aren't working with uncompressed video (unless you're in a professional cinema/broadcast situation).

Variable Bit Rate means that different parts of your video are compressed more or less than other parts.  This can be useful.  As implied above, it might be helpful to use a lot of compression in a sequence without much movement (perhaps an eight-second title screen) and then to use less compression (retaining more detail) in a sequence with a car chase.  Rendering your project with a Variable Bit Rate setting can help with that.  With VBR, Premiere would allot more bandwidth to that busy sequence, and less to the static sequence.  To understand this, we need to know the two key bitrate settings.

With CBR, it's simple: there is one bit rate to set.  With VBR, there are two.  There is a "target" bit rate and a "maximum" bit rate.  These are pretty easy to understand.  The maximum is the highest instantaneous bit rate that Premiere will allow; the bit rate of the rendered project will never exceed this amount.  The target bit rate is an intended average bit rate.  So if the VBR target was 24 Mbps, and your video project ended up being 10 seconds long, with 5 seconds rendered at 20 Mbps and 5 seconds rendered at 28 Mbps, your final average would be the target of 24 Mbps.

In that example, if your maximum bit rate was set at 30 Mbps, you'd have no problems.  Your video would have topped out at 28 Mbps, and thus never would have exceeded the maximum.  In a different situation, if your maximum bit rate was set at 26 Mbps, the sections that needed to be encoded at 28 Mbps wouldn't be ... they'd be capped at 26 Mbps.  So maybe your busy section of the video wouldn't quite end up at the quality you want, but as a consolation, at least that would have left more bandwidth for other parts of your render.

In theory, if you have a CBR set at 24 Mbps, and a VBR set at a target of 24 Mbps (the maximum shouldn't matter), then both renders will probably come out to be the same file size.  The only difference is that the CBR file will be 24 Mbps the whole way through, and the VBR will have different rates in different parts but will average out to 24 Mbps overall.  The advantage of the VBR is that when your scenes are busy, you'll get more detail than you would with CBR, because your peak bit rates during those busy sections will be allowed to exceed the bandwidth for the CBR file.

Can you use this system to produce smaller renders with the same approximate quality?  Yes.  For example, if you have a CBR set at 24 Mbps versus a VBR set at an 18 Mbps target with a 24 Mbps maximum, then your VBR file should only be three quarters the size of the CBR file (18 divided by 24), yet since your peak bit rate of 24 Mbps will be the same in both files, the quality should look the same.  There is a caveat though: this works best if your video has a lot of sections of varying complexity, i.e. some sections that are really busy, and some that are mostly static.  The static scenes will give you the cushion you need for extra bandwidth in the busy sections.  If the video is busy overall, there might not be enough bandwidth even with VBR, so you might have to set a higher target rate.
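Since file size scales with the average bit rate, the expected savings is simply the ratio of the VBR target to the CBR rate:

```python
# Expected relative size of an 18 Mbps-target VBR file vs a 24 Mbps CBR file.
cbr_rate = 24    # Mbps, constant throughout
vbr_target = 18  # Mbps, the intended average

ratio = vbr_target / cbr_rate
print(ratio)  # -> 0.75, i.e. about three quarters the size
```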

Now what about the difference between 1-pass and 2-pass?  Well, they're sort of similar.  Both are VBR as explained above.  However, 1-pass rendering is roughly the same speed as CBR, because in both cases, the amount of time required to go through the file is the same.  2-pass rendering is often a lot slower (think twice as long, although the relationship isn't always that simple) because it goes through the file twice.  In 2-pass, the first pass is simply to analyze the file and decide how much bandwidth will be needed in each part.  This improves the quality in the end.  With 1-pass, the encoder is unable to look ahead and estimate the complexity of the rest of the project, so it has to make estimates about bandwidth as it renders; therefore, it's a bit less efficient.

So, what's the take-away lesson here?  I'd say that you should make sure you understand and remember these points:

1.  If you have all the time in the world, and you're not in a rush to render the project, and you also have unlimited storage space, and you're not worried about how long it will take for a file to upload to a content distribution site such as YouTube, then you can render as CBR on the highest possible (logical) setting.  It doesn't matter that the end file will be huge, and will take forever to upload, because in this situation, you're not in a rush.

2.  If you have lots of time, but are constrained on space, 2-pass VBR with the highest possible maximum bit rate and a target rate that is lower than a CBR rate will still give you the best possible quality, but will reduce the file size somewhat.  This is important if you have limited storage space, OR if you're worried about the amount of time that it will take to upload.

3.  If you're in a rush to finish the render, but you're not worried about size or upload time, CBR is still the way to go, because it's a fast render method.  And you can constrain the size and upload time with a lower bit rate.

4.  If you're in a rush to finish the render, AND you're worried about size or upload time, 1-pass VBR will probably be your best option.


Some Examples

I did a short test render on exactly 5 seconds of "busy" 4K video at 30fps, to test both the speed of the render and the output file size.  The test file was from one of my tree planter training videos, which can be found at www.replant.ca/training

For the VBR test, I did a 2-pass test only, simply because I never use 1-pass VBR.  I always make sure that I have enough resources (time and/or storage space) while working on projects to be able to choose CBR or 2-pass VBR.  I picked 300 Mbps as a maximum bit rate, and 240 Mbps as a target bit rate.  That file turned out to be 149,366 KB in size, which extrapolates to a requirement of approximately 1.71 GB per full minute of footage.  The render took 7 minutes and 55 seconds (475 seconds).

For the CBR test, I rendered at 300 Mbps.  That file turned out to be 140,453 KB in size, which extrapolates to a requirement of approximately 1.61 GB per full minute of footage.  The render took 3 minutes and 54 seconds (234 seconds).
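For anyone who wants to check my extrapolation, here's the arithmetic in Python (sizes in KB from the tests above; "GB" here means binary units, 2^20 KB per GB, which matches the figures quoted):

```python
# Extrapolate a 5-second test render to GB per minute of footage.
TEST_SECONDS = 5

for name, size_kb in [("2-pass VBR", 149_366), ("CBR", 140_453)]:
    gb_per_minute = size_kb * (60 / TEST_SECONDS) / 2**20
    print(f"{name}: ~{gb_per_minute:.2f} GB per minute")
```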

CBR was the clear winner in this case.  It took only half the time required for a 2-pass VBR (actually, 49%).  But what's interesting is that the file size did not correspond at all to expectations.  This file should have been about 25% larger than the VBR file (300 Mbps vs a 240 Mbps target).  Instead, it was actually 5.97% smaller.  I DO NOT KNOW WHY!  I noticed with subsequent testing that whenever I was setting up renders in CBR, the "Estimated File Size" reported by Premiere always turned out to be a significant overestimate, often in the range of 25% to 35% higher than the actual rendered output.  I won't complain, although this confuses me.

Most notably, you are probably asking if the quality was good.  It was.  I was just as pleased with CBR renders at 300 Mbps as I was with 2-pass VBR at 240/300.  Of course, your results will vary based upon lots of factors:  the type of footage, the footage dimensions (pixels), the frame rate, the bit rates that you use for comparisons, the source of your footage (some cameras automatically compress the video as it is recorded), and a host of other minor factors.


The Take-Away Lesson

Having read all this, you've hopefully learned a few things, but you're probably also screaming, "Why can't this be a straightforward topic!?"  Why can't there be a cut & dried, black and white answer?

Video editors will have hundreds of different circumstances that they're dealing with, so the answer can't be consistent for everyone.  Therefore, the best way for you to determine the best way to encode your footage is to do comparison tests on your own.  Do the tests on small sections of your video, to speed up the process.  Do test renders with lots of different combinations of settings, until you decide what approach is going to be best for you.  Make sure you test both busy and non-busy examples of your footage, preferably in the same rendered snippet.

Don't get hung up on technical specifications.  If you're reading this post to clarify technical questions, let me reassure you that your best tool is your eyes.  If you like the look of the footage that is produced by a particular set of rendering options (and the file size isn't too large for your storage capacity or bandwidth capacity), then go with it.  Trust your eyes.


Good luck with your video editing, and thanks for being patient with my "home enthusiast" explanations.

- Jonathan Clark (DJ Bolivia)
www.djbolivia.ca

PS:  For more tutorials about audio and related work, visit djbolivia.ca/videos, but please don't judge.  I'm really unhappy with the quality of a lot of my work before 2017, and wish I had time to re-do everything in 4K with nice lighting, and a current understanding of my rendering toolkit!



Some Extra Links

This page has quite a bit of discussion about bitrates to use:

http://www.ezs3.com/public/What_bitrate_should_I_use_when_encoding_my_video_How_do_I_optimize_my_video_for_the_web.cfm


Understanding bit depth.  This page talks in much more professional terms about rendering at maximum quality/depth, different bit rates for different workflows, and the use of AE (Adobe After Effects) and Adobe SpeedGrade:

http://wolfcrow.com/blog/how-to-handle-bit-depth-in-adobe-premiere-pro-after-effects-and-speedgrade/


Understanding basic video compression:

http://nofilmschool.com/2014/08/heres-what-you-need-to-know-video-compression


Video codecs, containers, and compression:

http://www.makeuseof.com/tag/all-you-need-to-know-about-video-codecs-containers-and-compression/


Best video export settings for YouTube:

http://www.4kshooters.net/2015/07/28/best-video-export-settings-for-youtube-in-premiere-pro-cc/


Video for CBR vs VBR




Semi-Related Video

Here's a video I did a couple years ago that seems to have been pretty well-received.  It's about sample rate, sample size, binary, and a few other topics.  Although it focuses on these topics in the context of audio recording, it's useful for videographers to understand the content.



Here's a link to a blog post with more information about that video:



Thanks for your interest, and if you want to check out other video tutorials that I have online, visit this page:



- Jonathan Clark (DJ Bolivia)
www.djbolivia.ca



Follow Jonathan Clark on other sites:
        Twitter: twitter.com/djbolivia
        SoundCloud: soundcloud.com/djbolivia
        YouTube: youtube.com/djbolivia
        Facebook: facebook.com/djbolivia
        Main Site: www.djbolivia.ca
        About.Me: about.me/djbolivia
        Music Blog: djbolivia.blogspot.ca
        MixCloud: mixcloud.com/djbolivia
        DropBox: djbolivia.ca/dropbox