I cannot live with an HD screen without the trusty Lanczos, because point sampling is too jaggy and bilinear is too blurry.
So the idea is that upsampling is good: interpolating the data so that it looks more like a smooth line than a flight of stairs.
But in certain cases it can be bad. Take a GBA emulator: bilinear just blurs things too much, because there are simply too few pixels to start with. Seeing pixelated Nintendo characters is way better. Using point sampling (a.k.a. nearest neighbour), of course.
What if this logic is applied to audio as well?
Let's talk about bilinear first. It is well known for excessive blurring when the scaling factor is too large (e.g. above 200%, although anything above 50% is bad enough). In general, resampling blurs a picture, and it blurs more at higher scale factors; better algorithms can reduce the amount, but higher scaling still means more blurring.
With audio, you're resampling to 200% or 400% of the original frequency, depending on whether you target 96 or 192kHz.
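To make the analogy concrete, here is a toy sketch of my own (not from the post): upsampling a short 44.1kHz signal to 200% of its rate, once with linear interpolation (the audio analogue of bilinear) and once with sample-and-hold (the analogue of nearest neighbour). All names and numbers here are illustrative assumptions.

```python
import numpy as np

sr = 44100
t = np.arange(64) / sr                       # 64 samples of a 1 kHz sine
signal = np.sin(2 * np.pi * 1000 * t)

factor = 2                                   # 200% of the original rate
t_up = np.arange(64 * factor) / (sr * factor)

# "Bilinear" for audio: straight lines between samples.
linear = np.interp(t_up, t, signal)

# "Nearest neighbour" for audio: hold each sample for two output samples.
hold = signal[np.arange(64 * factor) // factor]

print(len(linear), len(hold))                # both are 128 samples long
```

Both outputs pass through the original samples; they differ only in what they invent between them, which is exactly the bilinear-versus-point-sampling argument from the image world.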
Take a low-res image, scale it up 4x, and compare bilinear to nearest neighbour. I won't blame you for choosing either one over the other.
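The 4x experiment above can be sketched in plain numpy, with no imaging library assumed; the function names are mine, purely for illustration. Nearest neighbour just repeats pixels, while bilinear averages the four surrounding pixels:

```python
import numpy as np

def nearest_4x(img):
    # Repeat every pixel 4 times along each axis: blocky but sharp.
    return np.repeat(np.repeat(img, 4, axis=0), 4, axis=1)

def bilinear_4x(img):
    h, w = img.shape
    # Sample positions in the source grid for a 4x output.
    ys = np.linspace(0, h - 1, 4 * h)
    xs = np.linspace(0, w - 1, 4 * w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Weighted average of the four surrounding pixels: smooth but blurry.
    return ((1 - wy) * (1 - wx) * img[np.ix_(y0, x0)]
            + (1 - wy) * wx * img[np.ix_(y0, x1)]
            + wy * (1 - wx) * img[np.ix_(y1, x0)]
            + wy * wx * img[np.ix_(y1, x1)])

sprite = np.array([[0., 1.], [1., 0.]])      # a 2x2 checkerboard "sprite"
print(nearest_4x(sprite).shape, bilinear_4x(sprite).shape)
```

On this tiny checkerboard, the nearest-neighbour result contains only pure 0s and 1s, while the bilinear result is full of in-between grey values: that grey is the blur.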
Of course, audio uses different (more complex and better) algorithms than video, but as long as the underlying principle is the same, there is a chance the analogy is valid.
And just as one can use some trigo-dunno-what transformation to resize to 1600% while still looking like something from Earth, throw enough DSP power at it and you can get anything. Recording studios already did one resampling down to 44.1kHz anyway.
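For the curious, the "throw DSP power at it" family includes the Lanczos filter named at the top of the post: a windowed-sinc interpolator. Below is a hedged 1-D sketch of my own, not a production resampler; all function names and parameters are assumptions for illustration.

```python
import numpy as np

def lanczos_kernel(x, a=3):
    # Lanczos window: sinc(x) * sinc(x/a) inside |x| < a, zero outside.
    out = np.sinc(x) * np.sinc(x / a)
    out[np.abs(x) >= a] = 0.0
    return out

def lanczos_resample(signal, factor, a=3):
    n = len(signal)
    positions = np.arange(n * factor) / factor   # fractional source positions
    result = np.zeros(n * factor)
    for i, p in enumerate(positions):
        # Only the 2a nearest source samples contribute (window support).
        lo = max(int(np.floor(p)) - a + 1, 0)
        hi = min(int(np.floor(p)) + a, n - 1)
        idx = np.arange(lo, hi + 1)
        w = lanczos_kernel(p - idx, a)
        result[i] = np.sum(w * signal[idx])
    return result

sig = np.sin(np.linspace(0, 2 * np.pi, 32, endpoint=False))
up = lanczos_resample(sig, 4)                    # 400%: 32 -> 128 samples
print(len(up))
```

Because sinc is 1 at zero and 0 at every other integer, the output still passes exactly through the original samples; the extra DSP cost buys a much better guess in between than either straight lines or sample-and-hold.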