These days I mostly see the placebo audio arguments coming from streaming-service and FLAC/lossless-encode fanboys.
The clamour for lossless/high-res streaming is the audiophile community in a nutshell. Literally paying more money so your brain can trick you into thinking it sounds better.
Like many hobbies, it’s mainly a way to rationalize spending ever-increasing amounts on new equipment and source content. I was into the whole scene for a while, but once I discovered which components in the audio chain actually improve sound quality and which don’t, I called it quits.
The push for lossless seems more like pushback on low bit rate and reduced dynamic range by avoiding compression altogether. Not really a snob thing as much as trying to avoid a common issue.
The video version of this is buying the Blu-ray, which is significantly better than streaming in specific scenes. For example, every scene I have seen with confetti on any streaming service is an eldritch horror of artifacts, but fine on physical media, because streaming compression just can’t handle that kind of fast-changing detail.
It does depend on the music or video though, the vast majority are fine with compression.
My roommate always corrects me when I make this same point, so I’ll pass it along: Blu-rays are compressed using H.264/H.265 too, just less heavily than streaming services.
Higher bitrate though, innit
Significantly so: streaming is 8-16 Mbps for 4K, whereas 4K discs can be >100 Mbps.
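For scale, here’s a quick back-of-the-envelope sketch of what those bitrates mean for a two-hour film; the 15 Mbps and 100 Mbps figures are just round numbers from this thread, not any particular service’s or disc’s exact settings.

```python
# Rough file sizes for a 2-hour film at the round-number bitrates above.

def size_gb(bitrate_mbps: float, hours: float = 2.0) -> float:
    """Convert an average video bitrate in Mbit/s into a file size in GB."""
    bits = bitrate_mbps * 1_000_000 * hours * 3600
    return bits / 8 / 1_000_000_000  # bits -> bytes -> gigabytes

print(f"Streaming @ 15 Mbps:  ~{size_gb(15):.1f} GB")   # ~13.5 GB
print(f"UHD disc  @ 100 Mbps: ~{size_gb(100):.1f} GB")  # ~90.0 GB
```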
🤓☝️ Many older Blu-rays also used VC-1
Or worse. I think it was the original Ninja Turtles movie that I owned on DVD, and the quality of it kind of sucked. Years later I got it on Blu-ray and I swear they just ripped one of the DVD copies to make the Blu-ray disc.
Sadly, that basically feels like what happened with The Fellowship of the Ring’s theatrical-cut Blu-ray, too. It just doesn’t look that great.
Then the extended edition has decent fidelity but some bizarro green-blue color grading.
People don’t like hearing this, but streaming services tune their codecs to properly calibrated TVs. Very few people have properly calibrated TVs. In particular, people really like to up the brightness and contrast.
A lot of scenes that look like mud are that way because you really aren’t supposed to be able to distinguish between those levels of blackness.
That said, streaming services should have seen the 1000 comments like the ones here and adjusted already. You don’t need Blu-ray levels of bits to make those dark scenes look better; you need to tune your encoder to allow it to throw more bits into the void.
Lmao, I promise streaming services and CDNs employ world-class experts in encoding, both in tuning and development. They have already pored over how to maximize quality vs. cost. Tuning your encoder to allow more bits in some scenes by definition raises the average bitrate of the file, unless you’re also taking bits away from other scenes. Streaming services have already found a balance of video quality vs. storage/bandwidth costs that they are willing to accept, which tends to be around 15 Mbps for 4K. That will unarguably provide a drastically worse experience on a high-enough-quality TV than a 40 Mbps+ Blu-ray. Like, day and night in most scenes and even more in others.
Calibrating your TV, while a great idea, can only do so much against low-bitrate encodes and the fake HDR that services bake in solely to trigger the HDR popup on your TV and trick it into upping the brightness, rather than to actually improve color accuracy/vibrancy.
They don’t really care about the quality, they care that subscribers will keep their subscriptions. They go as low quality as possible to cut costs while retaining subs.
Blu-rays don’t have this same issue because there are no storage or bandwidth costs to the provider, and people buying Blu-rays are typically more informed, have higher-quality equipment, and care more about image quality than your typical streaming subscriber.
“I promise streaming services and CDNs employ world-class experts in encoding”
“They don’t really care about the quality”
It’s funny that you are trying to make both these points at the same time.
You don’t hire world class experts if you don’t care about quality.
I have a hobby of re-encoding Blu-rays to lower bitrates. And one thing that’s pretty obvious is that the world-class experts who wrote the encoders in the first place have them overly tuned to omit data from dark areas of a scene to avoid wasting bits there. This is true of H.265, VP9, and AV1. You have to specifically tune those encoders to spend more of their bits on the dark areas, or you have to up the bitrate to absurd levels.
Where these encoders spend the bitrate in dark scenes is on any areas of light within the scene. That works great if you are looking at something like a tree with a lot of dark patches, but it really falls apart with a single lit figure surrounded by darkness. It just so happens that it’s really easy to dump 2 Mbps on a torch in a hall and leave just 0.1 Mbps for the rest of the scene.
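For anyone curious what that tuning looks like in practice, here is a hedged sketch using ffmpeg and x265 (file names and values are placeholders, not a recipe from anyone in this thread): x265’s aq-mode 3 is adaptive quantization with a bias toward dark scenes, and a higher aq-strength pushes more bits at those regions.

```python
import subprocess

# Re-encode a clip with x265, nudging the encoder to spend more bits on
# dark areas instead of starving them. Tune crf/aq-strength to taste.
subprocess.run([
    "ffmpeg", "-i", "input.mkv",
    "-c:v", "libx265", "-preset", "slow", "-crf", "20",
    "-x265-params", "aq-mode=3:aq-strength=1.1",
    "-c:a", "copy",
    "output.mkv",
], check=True)
```

The VP9 and AV1 encoders expose their own adaptive-quantization options; the idea is the same either way, which is to tell the rate control that dark regions deserve bits.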
“That will unarguably provide a drastically worse experience on a high-enough-quality TV than a 40 Mbps+ Blu-ray. Like, day and night in most scenes and even more in others.”
I can tell you that this is simply false. And it’s the same pseudo-scientific logic that someone trying to sell gold-plated cables and FLAC encodings pushes.
Look, beyond the darkness-tuning problem that streaming services have, the other problem they have is QoS. The way content is encoded for streaming just isn’t ideal. When you say “they have to hit 14 Mbps”, the fact is that they are forcing themselves to do 14 Mbps throughout the entire video. The reason they do this is that they want to limit buffering as much as possible: it’s a much better experience to drop your resolution than to be constantly buffering. But that choice makes it really hard to do good optimizations in the encoder. Every second of the video they are burning 14 Mb whether they need those 14 Mb or not. The way they’d deliver less data would be to only average 14 Mbps rather than forcing it throughout: allowing 40 Mbps bursts when needed but pushing everything else out at 1 Mbps saves on bandwidth. However, the end user doesn’t know that the reason they just started buffering is that a high-motion action scene is coming up (and Netflix doesn’t want to buffer for more than a few minutes).
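To make the constant-rate vs. average-rate distinction concrete, here is a hedged sketch of the two configurations being contrasted (placeholder file names and numbers, not any service’s actual pipeline):

```python
import subprocess

SRC = "input.mkv"  # placeholder source file

# Roughly constant rate: hover around 14 Mbps everywhere. Delivery is
# predictable, but easy scenes waste bits and hard scenes can't burst.
subprocess.run([
    "ffmpeg", "-i", SRC, "-c:v", "libx265",
    "-b:v", "14M", "-maxrate", "14M", "-bufsize", "14M",
    "-an", "constant-ish.mkv",
], check=True)

# Capped VBR: average 14 Mbps, but let hard scenes burst toward 40 Mbps
# within a large buffer while easy scenes drop much lower.
subprocess.run([
    "ffmpeg", "-i", SRC, "-c:v", "libx265",
    "-b:v", "14M", "-maxrate", "40M", "-bufsize", "80M",
    "-an", "capped-vbr.mkv",
], check=True)
```

The second file usually looks better at a similar average size, but its bursts are exactly what makes buffering harder to predict for a streaming player.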
The other point I’d make is that streaming companies simply have one pipeline that they shove all video through. And, because it’s so generalized, these sorts of tradeoffs, which make stuff look like a blocky mess, happen. Sometimes that blocky mess is present in the source material (the streaming services aren’t ripping the Blu-rays themselves; they get the video from the content providers, who aren’t necessarily sending in raws).
I say all this because you can absolutely get 4K and 1080p looking good at sub-Blu-ray bitrates. I have a library full of these re-encodes that look great because of my experience here. A decent amount of HD media can be encoded at 1 or 2 Mbps and look great. But you have to make tradeoffs that streaming companies won’t make.
For the record, the way I do my encoding is a scene-by-scene encode, using VMAF to adjust the quality setting, with some custom software I built to do just that. I target a VMAF score of 95, which ends up looking just fantastic across media.
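Here is a minimal sketch of that approach, not the commenter’s actual tool: for a given scene, try a few CRF values, score each encode against the source with ffmpeg’s libvmaf filter, and keep the cheapest one that clears VMAF 95. A real pipeline would split on scene changes first and search more cleverly.

```python
import re
import subprocess

REF = "scene.mkv"          # placeholder: one scene cut from the source
TARGET_VMAF = 95.0

def encode(crf: int) -> str:
    """Encode the scene at a given CRF and return the output file name."""
    out = f"scene_crf{crf}.mkv"
    subprocess.run(
        ["ffmpeg", "-y", "-i", REF, "-c:v", "libx265",
         "-crf", str(crf), "-preset", "slow", "-an", out],
        check=True,
    )
    return out

def vmaf(distorted: str) -> float:
    """Score a candidate against the reference; libvmaf logs the score to stderr."""
    proc = subprocess.run(
        ["ffmpeg", "-i", distorted, "-i", REF,
         "-lavfi", "libvmaf", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    match = re.search(r"VMAF score: ([\d.]+)", proc.stderr)
    return float(match.group(1)) if match else 0.0

# Walk from cheap (high CRF) to expensive and stop at the first encode
# that clears the quality bar.
for crf in (28, 26, 24, 22, 20, 18):
    candidate = encode(crf)
    score = vmaf(candidate)
    print(f"crf={crf}: VMAF {score:.1f}")
    if score >= TARGET_VMAF:
        print(f"keeping {candidate}")
        break
```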
I fail to see where TV calibration comes in here, tbh. If I can see blocky artifacts from a low bitrate, they will show up on any screen unless you turn the brightness down so far that nothing is visible.
Blocky artifacts typically appear in low-light situations. There will be cases where it’s just blocky due to not having enough bits (high-motion scenes), but there are plenty of cases where low-light tuning is why you end up noticing the blockiness.
Blocky artifacts are the result of poor bitrates. In streaming services it’s due to over compressing the stream, which is why you see it when part of a scene is still or during dark scenes. It’s due to the service cheaping out and sending UHD video at 720p bitrates.
Look, this is just an incorrect oversimplification of the problem. It’s popular on the internet but it’s just factually incorrect.
Here’s a thread discussing the exact problem I’m describing:
https://www.reddit.com/r/AV1/comments/1co9sgx/av1_in_dark_scenes/
The issue at play for streaming services is that they have a general pipeline for encoding. I mean, it could be described as cheaping out, because they don’t have enough QA spot-checking and special-casing of encodes to make sure the quality isn’t trash. But it’s really not strictly a “not enough bits” problem.
The thing is, dynamic range compression and audio file compression are two entirely separate things. People often conflate the two by thinking that going from WAV or FLAC to a lossy format like MP3 or M4A means the track becomes more dynamically compressed, but that’s not the case at all. Essentially, an MP3 and a FLAC version of the same track will have the same dynamic range.
And yes, while audible artifacts can be a thing with very low-bitrate lossy compression, once you get to 128 kbps with a modern lossy codec it becomes pretty much impossible to hear in a blind test. Hell, even 96 kbps Opus is pretty much audibly transparent for the vast majority of listeners.
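One hedged way to sanity-check the dynamic-range point is to decode both versions of the same master and compare the loudness range that ffmpeg’s ebur128 filter reports (file names below are placeholders). If both files really come from the same master, the two LRA figures should be essentially identical.

```python
import re
import subprocess

def loudness_range(path: str) -> float:
    """Return the EBU R128 loudness range (LRA, in LU) ffmpeg reports for a file."""
    proc = subprocess.run(
        ["ffmpeg", "-i", path, "-af", "ebur128", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    # The summary at the end of stderr contains a line like "LRA:   6.4 LU".
    matches = re.findall(r"LRA:\s+([\d.]+) LU", proc.stderr)
    return float(matches[-1]) if matches else float("nan")

print("FLAC LRA:", loudness_range("track.flac"), "LU")
print("MP3  LRA:", loudness_range("track.mp3"), "LU")
```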
In the distant past I liked to compare hi-res tracks with the normal ones. It turned out that they often used a different master with more dynamic range for the hi-res release, tricking the listener into thinking it sounded different because of the higher bit depth and sampling frequency. The second step was to convert the high-resolution track to standard 16-bit 44.1 kHz and do A/B testing to prove my point to friends.
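That conversion step is easy to reproduce; here is a minimal sketch, assuming an ffmpeg build with the soxr resampler available (file names are placeholders):

```python
import subprocess

# Convert a hi-res FLAC (e.g. 24-bit/96 kHz) to plain 16-bit/44.1 kHz FLAC.
# soxr is a high-quality resampler; triangular dither masks truncation error.
subprocess.run([
    "ffmpeg", "-i", "track_hires.flac",
    "-af", "aresample=resampler=soxr:osr=44100:dither_method=triangular",
    "-sample_fmt", "s16",
    "track_cd_quality.flac",
], check=True)
```

ABX the two files in your player of choice afterwards.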
I think it depends on your source.
If we are talking about a downloaded good high bit rate MP3 and a FLAC, then yeah, I can’t hear a difference.
For streaming, I CAN hear a difference between the default Spotify stream and my locally stored lossless files. That difference might come down to how they are mastered or to whatever Spotify does to the files, but whatever it is, the difference is pretty perceptible to me, and I don’t have especially sensitive ears.
If we’re talking free-tier Spotify, then it could actually be due to the bitrate (96 kbps Ogg Vorbis, IIRC). However, if you’re a premium subscriber then the standard bitrate is 160 kbps, at which the difference is definitely not audible to 99.99% of people.
In fact, after much ABX testing, I found that a noticeable audible difference between a local file and the same song on a streaming service is almost always due to either a loudness differential or the two tracks coming from different masters.
I really noticed when I switched from Spotify to Tidal that there is something different about Spotify’s sound quality that makes it worse even at the highest streaming quality. I was surprised since I fully admit that in 99% of cases I can’t tell the difference between a 128kbps MP3 and a FLAC of the same file.
Usually when I hear someone swear by the lossless audio one service provides compared to another, I suspect the reality is either placebo or that one service simply has the better mix and mastering of that album. They could serve it as a 192 kbps MP3 and it would still sound better than a lossless encode of the less ideal mix and master.
Oh, 100%. I actually tested this by recording bit-perfect copies from different streaming services and comparing them in Audacity.
I found that the only way to hear a difference between the same song played on two different platforms was 1) if there was a notable difference in gain, or 2) if they were using two different masters for the same song. If two platforms were using the same master, they were impossible to tell apart in an ABX test.
All of this is to say that, as far as audible quality goes, the quality of the mastering is orders of magnitude more important than whether a track is lossy or lossless.
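For anyone who wants to repeat that comparison outside Audacity, here is a rough sketch of the same null-test idea, assuming two captures that are already sample-aligned and saved as WAV (numpy and soundfile are assumed to be installed): gain-match them, subtract, and check what’s left.

```python
import numpy as np
import soundfile as sf

# Two bit-perfect captures of the same song from different services,
# already trimmed so they start at the same sample (placeholder names).
a, rate_a = sf.read("service_a.wav")
b, rate_b = sf.read("service_b.wav")
assert rate_a == rate_b, "sample rates must match before comparing"

def rms(x):
    """Root-mean-square level of a signal (all channels pooled)."""
    return float(np.sqrt(np.mean(np.square(x))))

# Trim to the common length and gain-match so a simple loudness
# difference doesn't masquerade as a quality difference.
n = min(len(a), len(b))
a, b = a[:n], b[:n]
b = b * (rms(a) / rms(b))

# The null (difference) signal: if both platforms use the same master,
# this should sit far below audibility.
diff = a - b
residual_db = 20 * np.log10(rms(diff) / rms(a))
print(f"residual level relative to the music: {residual_db:.1f} dB")
sf.write("difference.wav", diff, rate_a)  # listen to this to double-check
```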
Not here to argue that I can hear the difference, because I can’t. But in audio collecting, where the size and burden of even large lossless files isn’t much different from lossy files, why care? I download the FLAC files and compress on delivery to the client, where space might be a bigger concern.
I do the same, as it happens, so I won’t argue with you.
As for “why care?”, I’d say it’s about making informed decisions and not spending money unnecessarily in the pursuit of genuinely better sound quality.
Yeah, I don’t get too deep into that game. I do have some higher-ish quality headphones and speakers though. I also find that subs are largely underrated by audio snobs.