oversampling?

ColinP
Posts: 953
Joined: Mon Aug 03, 2020 7:46 pm

Re: oversampling?

Post by ColinP »

Grant, isn't that just a very complicated way of looking at L?
ChR_is
Posts: 107
Joined: Wed Sep 22, 2021 5:48 pm

Re: oversampling?

Post by ChR_is »

ColinP wrote: Wed Dec 20, 2023 6:29 pm Thanks for the link Chris. Rather than a research paper, it's a dense 63-page student essay from twenty-odd years ago that contains no references, but that doesn't mean it's not valuable, so I'll check it out when I have a little more time.
i don't really get your point here. why wouldn't it be a paper just because it's from a student? it's not peer reviewed and wasn't published in a scientific journal, sure. but it is well researched and the information is still valid. it's old, but that doesn't make it any less correct. the math didn't magically change, and the experiments done in the paper won't have different results if you replicate them today.
ColinP wrote: Wed Dec 20, 2023 6:29 pm I'm struggling with some of your comments. I'm reasonably confident that fractional buffer access using lerping can produce non-bandlimited signals, as I think I've shown this to be the case from both a theoretical and experimental POV. I've also addressed why it's pretty daft to view this through the lens of low-pass filtering, given that when f = 0 or f = 1 it certainly isn't, and in most other circumstances f jumps about all over the place so the frequency response alters sample by sample. OK, one could argue that it's filtering, Jim, but not as we know it...
so that means if i turn a lowpass filter all the way down or all the way up it's not a filter anymore? what is it then?
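to make that concrete, here's a tiny sketch (plain java, just my own toy code, not from anyone's module) that prints the magnitude response of the two-tap lerp FIR y[n] = (1-f)*x[n] + f*x[n+1] for a few values of f:

    // magnitude of H(w) = (1-f) + f*e^{-jw}, the two-tap lerp FIR
    public class LerpResponse {
        public static void main(String[] args) {
            double[] fracs = {0.0, 0.25, 0.5, 0.75, 1.0};
            for (double f : fracs) {
                System.out.printf("f = %.2f%n", f);
                // sweep normalised frequency from DC (0.0) to nyquist (0.5)
                for (double norm = 0.0; norm <= 0.5; norm += 0.125) {
                    double w = 2.0 * Math.PI * norm;          // radians per sample
                    double re = (1.0 - f) + f * Math.cos(w);  // real part of H(w)
                    double im = -f * Math.sin(w);             // imaginary part of H(w)
                    System.out.printf("  %.3f fs -> gain %.3f%n", norm, Math.hypot(re, im));
                }
            }
        }
    }

at f = 0 and f = 1 the gain is 1.0 at every frequency, i.e. a pure delay, and at f = 0.5 the gain at nyquist is exactly zero. it's a filter at every setting, its cutoff just depends on f.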
utdgrant
Posts: 541
Joined: Wed Apr 07, 2021 8:58 am
Location: Scotland
Contact:

Re: oversampling?

Post by utdgrant »

ColinP wrote: Wed Dec 20, 2023 7:51 pm Grant, isn't that just a very complicated way of looking at L?
It's a way of calculating (and visualising) a long series of 'L's without resorting to spreadsheets or anything. It took me longer to screenshot and describe it than it took to execute in Audacity. :)

I found it very intuitive and easy to grasp, like Chris' explanations. It's sort of like a virus or a drug bonding with a receptor - sometimes I can internalise a concept only when I find the 'right key' to allow it into my brain. That key may or may not be the one that clicks for anyone else.
______________________
Dome Music Technologies
ColinP
Posts: 953
Joined: Mon Aug 03, 2020 7:46 pm

Re: oversampling?

Post by ColinP »

Chris, sorry if I've overstepped in any way here. I left formal schooling at 16 (earlier in reality, as I spent as much time in a friendly public library as I could get away with, since I considered much of the teaching at school seriously inadequate). I didn't attend university. I am effectively self-educated.

I've done lots of professional consultancy work for various universities, so I am aware of the snobbery that some academics have.

I'm just pointing out that calling a student essay a paper is stretching things. But as I said, this doesn't mean the essay isn't valuable. It could indeed contain insight well beyond that of peer-reviewed papers, which are often extremely mundane. I've talked with many PhDs and professors who haven't really got a clue about anything, not even the subjects of their supposed expertise.

So I will read it as I respect your opinion. But my time is limited. I've already spent more time on this thread than is sensible.

On filters that vary their coefficients sample by sample, I accept they are still filters. I don't know if you read earlier in the thread where I argued that lerping was still lerping when f = 0 and f = 1, that x * y is still multiplication when either x or y is 1, and that x + y is still addition when either is 0. It's just that I think low-pass filtering is a slightly confusing way of looking at what is just cross-fading.
ColinP
Posts: 953
Joined: Mon Aug 03, 2020 7:46 pm

Re: oversampling?

Post by ColinP »

utdgrant wrote: Wed Dec 20, 2023 8:53 pm It's sort of like a virus or a drug bonding with a receptor - sometimes I can internalise a concept only when I find the 'right key' to allow it into my brain. That key may or may not be the one that clicks for anyone else.
Insightful.
ColinP
Posts: 953
Joined: Mon Aug 03, 2020 7:46 pm

Re: oversampling?

Post by ColinP »

Curiosity got the better of me so I spent about 30 minutes looking at the "paper".

http://yehar.com/blog/wp-content/upload ... 8/deip.pdf

I'm not sure what to make of it.

It begins with ...
Welcome to read the paper that took three entire weeks (24/7) of my life, approximately 1/1000 of the whole deal.
Right, so I understand that English isn't this guy's first language, but I think he's claiming to have not slept for three weeks. Now in my misspent youth I confess that I sometimes did solid three-day coding sessions (72 hours of work with no sleep) with chemical assistance, but three weeks is pushing it.

I'm no mathematician, but I know that 3 weeks times 1,000 is 3,000 weeks, and according to my calculator that would mean he is about 57 years old.

There's a photo of him on his website dated 2021.

http://yehar.com/blog/

I wish I looked that good when I was 55.

Then the conclusion ...
The presented optimal interpolators make it possible to do transparent-quality resampling for even the most demanding applications with only 2x or 4x oversampling before the interpolation. However, in most cases simple linear interpolation combined with a very high-ratio oversampling (perhaps 512x) is the optimal tradeoff. The computational costs depend on the platform and the oversampling implementation.

Therefore, which interpolator is the best is not concluded here. You must first decide what quality you need (for example around 90dB modified SNR for a transparency of 16 bits) and then see what alternatives the table given in the summary has to suggest for the oversampling ratios you can afford.
This isn't exactly precise analysis, shall we say.

In between there's a lot of technical-looking data with equations, graphs and tables, and as far as I can tell from a brief glance it looks correct, but it appears to be mostly basic polynomial interpolation.

Call me cynical but it looks suspiciously like it's just been lifted from another source, especially as he says he also managed to learn PostScript and other things while doing all this amazing work in three weeks.

Maybe I'm completely wrong and he's actually a very good looking 57 year old genius.

Sorry to take such a jaundiced view, but since discovering that one academic was pretending that my IBM-funded research in the 1990s on persistent storage management and object-oriented graphical programming was his own work, I have sadly developed a certain scepticism.
ChR_is
Posts: 107
Joined: Wed Sep 22, 2021 5:48 pm

Re: oversampling?

Post by ChR_is »

i really don't understand what you are arguing for here.
ColinP wrote: Fri Dec 22, 2023 5:45 pm
It begins with ...
Welcome to read the paper that took three entire weeks (24/7) of my life, approximately 1/1000 of the whole deal.
Right, so I understand that English isn't this guy's first language, but I think he's claiming to have not slept for three weeks. Now in my misspent youth I confess that I sometimes did solid three-day coding sessions (72 hours of work with no sleep) with chemical assistance, but three weeks is pushing it.

I'm no mathematician, but I know that 3 weeks times 1,000 is 3,000 weeks, and according to my calculator that would mean he is about 57 years old.

There's a photo of him on his website dated 2021.

http://yehar.com/blog/

I wish I looked that good when I was 55.
what's your point here? are you mocking their english? or that they had a little fun on the first page of whatever this text is? english is not my native language either. maybe that's why i am not so anal about it. moreover, that's just a short, fun introduction giving acknowledgements and a little insight into how this text came to be. why does that matter? how is it related to the provided information? does this first-page text really disqualify the whole body of work and all of the research that follows for you?
ColinP wrote: Fri Dec 22, 2023 5:45 pm Then the conclusion ...
The presented optimal interpolators make it possible to do transparent-quality resampling for even the most demanding applications with only 2x or 4x oversampling before the interpolation. However, in most cases simple linear interpolation combined with a very high-ratio oversampling (perhaps 512x) is the optimal tradeoff. The computational costs depend on the platform and the oversampling implementation.

Therefore, which interpolator is the best is not concluded here. You must first decide what quality you need (for example around 90dB modified SNR for a transparency of 16 bits) and then see what alternatives the table given in the summary has to suggest for the oversampling ratios you can afford.
This isn't exactly precise analysis, shall we say.
why does it need to be precise? that's a valid conclusion: "here's a bunch of measurements and results. there is no single best method. decide on one in the context of your use-case with the help of the data given in this text." that sounds very reasonable to me. also, we have completely left the original point anyway. this person has done research and compiled data that shows how interpolation (including linear interpolation) affects the frequency spectrum of signals, mainly gentle lowpassing. this sole image was the essential part:
[Image: DEIP linear interpolation frequency response (linearInterpolationMesaurementsDEIP.png)]
look, i am sorry that you had bad experiences in the scientific world. and yes, i may have mislabeled the text as a "paper" because it was called a paper in the text itself. but neither of those things makes the research, experiments and measurements any less valid. if you don't trust this person, go ahead and repeat the test setup. after all, that's what scientific work is all about, isn't it?

i mean you no harm, and i am not here to fight. i was just trying to help in this thread. all the information i have given is from my own personal experience, research and experiments. i have given reasoning and easy-to-conduct test setups that support my claims. i rest my case. feel free to disagree with me.
ColinP
Posts: 953
Joined: Mon Aug 03, 2020 7:46 pm

Re: oversampling?

Post by ColinP »

Yeah, that's all fair enough Chris.

I just came back online intending to replace my post with the line "EDIT: Deleted on reflection". But I'll leave it unchanged given you've already responded.

Basically the info is useful and I can't prove that the main body of it was lifted even though that's my strong suspicion.

I certainly wasn't mocking him because English wasn't his first language. But I'm pretty sure he didn't stay awake for 504 hours, and I suspect he's about 25 years younger than his opening comments imply. If he can't do even basic arithmetic in the introduction, then it seems very improbable that the main body of the text is his own work.

I'm not here for a fight either but you've been pretty aggressive and I think you've used a strawman argument rather than reading my words with care. At least that's how it seems to me.

People like us are perhaps very argumentative and forceful by nature. I'm certainly an arrogant old tw*t with a planet-sized ego. Sorry if I've caused offence. I have poor social skills and have encountered so much stupidity and dishonesty in a long lifetime that I struggle to trust people.

EDIT: Thinking about this more, I must be completely wrong about this guy's age. To me he looks like he's in his early 30s rather than a grandad approaching 80. But I must be wrong, as that would mean he was about 10 years old when he wrote this essay, if it was written 22 years ago.

Another edit: My conclusion is that the photo shows a man aged 30 at most, not someone in their late 70s. So I don't think the guy was 57 in 2001. Neither do I think the essay was written by someone 10 years old or younger. Therefore the essay wasn't written in 2001 but much more recently.

Anyway, I'd like to apologise for some of my comments as they were ill-judged. That essay and its inconsistencies triggered bad memories, but that's no excuse for accusing the guy of plagiarism without solid evidence. Unfortunately it's distracted from what I thought was a fairly useful discussion. I'm still reasonably confident that using lerping to implement fractional buffer access can lead to aliasing (as I don't see how the opposite can be true), and I would like to discuss the subject further to identify where my reasoning and my reading of the experimental evidence could be wrong, but as Chris has decided to rest his case it looks like there is nobody left to discuss this with.
ColinP
Posts: 953
Joined: Mon Aug 03, 2020 7:46 pm

Re: oversampling?

Post by ColinP »

I feel this thread has come to an unsatisfactory conclusion. That's partly my fault so I'd like to make one last attempt to explain my position.

I think Chris is more expert than me in this field, and I respect his opinions and his generous input to the community, but I think some people, at least on a cursory read, will take away from this discussion the notion that using lerping to perform fractional buffer access can't create aliasing.

On re-reading I don't think he is actually saying that. He is being more nuanced, but that's the impression I believe some people will form from his statements that lerping simply can't create aliasing because it is a linear process, when in certain circumstances lerping can cause aliasing precisely because it is a linear process.

The argument that mixing two fixed bandlimited signals can't produce a signal that is not bandlimited is a strawman. Nobody is claiming this. The most that can happen is interference between the two signals.

Similarly, nobody is arguing that lerping between adjacent samples in a buffer doesn't constitute a crude FIR filter. I'm just saying that this isn't a very helpful way of looking at it.

I think Chris argued somewhere that this thread was about fixed delay, but I think Borkman's OP was asking about the potential problems of using lerping for fractional buffer access (and the use of oversampling in this context). So, as GS uses lerping to perform fractional buffer access, I joined the debate. I've tried to explain that this was in the context of simulating a model of flying playback heads.

And similar applications will presumably be in the minds of people curious about developing modules that perform fractional buffer access using lerping. If all one wants to do is code fixed delay lines there's no real point in fractional access.

Then I presented a visual thought experiment that looked at lerped fractional buffer access in the time domain, as I think this makes it easily understandable. My scribbled diagram shows a sine wave very close to Nyquist and the effect of slowing a flying playback head down to half speed. This pitch-shifts the signal down an octave by upsampling using linear interpolation.

The result is a sequence of three samples A, L, B that form a straight line (they must be in a straight line, because that's what lerping does). Now although L looked at in isolation would be bandlimited, the result isn't just L; it's a sequence of samples, and in this context that sequence isn't bandlimited.

One could argue that this isn't fixed delay, but that's not what I'm claiming it to be.

What's going on here is nothing more complex than lerped fractional buffer access at a fixed rate and it produces aliasing. So it's completely on topic and I believe easy to understand.

I then did a little experiment with a spectrum analyser to show that this does indeed produce aliasing in GS with a very high frequency sinewave as a test signal.
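For anyone who wants to reproduce this outside GS, here's a rough standalone sketch of the same experiment (plain Java, my own throwaway code, nothing to do with GS internals; the 21.6 kHz test tone is just an arbitrary choice near Nyquist):

    // Half-speed lerped playback of a 21.6 kHz sine at 48 kHz.
    // Expected output: the pitched-down partial at 10.8 kHz plus an
    // inharmonic image at (48000 - 21600) / 2 = 13200 Hz.
    public class LerpAliasDemo {
        // crude single-bin DFT, returns the amplitude at the given frequency
        static double dftMag(double[] x, double hz, double sr) {
            double re = 0.0, im = 0.0;
            for (int n = 0; n < x.length; n++) {
                double w = 2.0 * Math.PI * hz * n / sr;
                re += x[n] * Math.cos(w);
                im -= x[n] * Math.sin(w);
            }
            return 2.0 * Math.hypot(re, im) / x.length;
        }

        public static void main(String[] args) {
            double sr = 48000.0, f0 = 21600.0;
            int outLen = 48000;                        // one second of output
            double[] buf = new double[outLen / 2 + 2];
            for (int n = 0; n < buf.length; n++)
                buf[n] = Math.sin(2.0 * Math.PI * f0 * n / sr);
            double[] out = new double[outLen];
            for (int n = 0; n < outLen; n++) {
                double pos = 0.5 * n;                  // flying head at half speed
                int i = (int) pos;
                double f = pos - i;                    // fractional part
                out[n] = (1.0 - f) * buf[i] + f * buf[i + 1];  // lerp
            }
            System.out.printf("10.8 kHz (wanted): %.3f%n", dftMag(out, 10800.0, sr));
            System.out.printf("13.2 kHz (image) : %.3f%n", dftMag(out, 13200.0, sr));
        }
    }

By my arithmetic the two amplitudes should come out around 0.58 and 0.42, because the triangular lerp kernel only slightly attenuates an image this close to Nyquist. That image at 13.2 kHz is inharmonic and sits firmly inside the audio band, which is the aliasing in question.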

In conclusion, linear interpolation for fractional buffer access is fine at low frequencies, but as we get closer and closer to Nyquist it breaks down and can produce non-bandlimited signals and therefore aliasing.

I'm still open to arguments that this isn't the case.
utdgrant
Posts: 541
Joined: Wed Apr 07, 2021 8:58 am
Location: Scotland
Contact:

Re: oversampling?

Post by utdgrant »

I still think we're all pretty much singing from the same hymn sheet, as it were. How seasonal! ;)
ColinP wrote: Sun Dec 24, 2023 11:03 am If all one wants to do is code fixed delay lines there's no real point in fractional access.
I can think of at least one case where this is appropriate: implementing 'waveguides' for physical modelling techniques. In the Zeit Bundle, there is a 1V / Oct Pitch to Delay Time Converter module. The delay time for a particular resonant frequency rarely falls on an exact, integer number of sample periods. This becomes even more apparent when you get up to higher pitches. If you didn't have the ability to specify fractional delays, your pitches would be quantised to integer divisors of the 48 kHz sample rate, and the quantisation effect gets much more pronounced as the pitch increases.
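For the curious, the basic idea boils down to something like this (a hypothetical sketch in plain Java, emphatically not the actual Zeit source):

    // Hypothetical sketch: resonant pitch -> fractional delay length, read with lerp.
    public class WaveguideDelay {
        static final double SAMPLE_RATE = 48000.0;
        final double[] buffer = new double[1 << 16];   // power of two for cheap wrap
        int writeIndex = 0;

        void write(double sample) {
            buffer[writeIndex] = sample;
            writeIndex = (writeIndex + 1) & (buffer.length - 1);
        }

        double readAtPitch(double freqHz) {
            // one period of the resonant frequency, in fractional samples
            double delaySamples = SAMPLE_RATE / freqHz;   // e.g. 440 Hz -> 109.09...
            double pos = writeIndex - delaySamples;
            int i = (int) Math.floor(pos);
            double f = pos - i;                           // fractional part
            int i0 = i & (buffer.length - 1);
            int i1 = (i + 1) & (buffer.length - 1);
            return (1.0 - f) * buffer[i0] + f * buffer[i1];  // lerp between neighbours
        }
    }

By my arithmetic, with integer-only delays at 48 kHz the nearest available periods around 440 Hz are 109 and 110 samples (about 440.4 Hz vs 436.4 Hz, roughly 16 cents apart), while around 4.4 kHz the neighbouring delays of 10 and 11 samples are roughly 165 cents apart. Hence the quantisation getting much worse as the pitch rises.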

To slightly deviate from the 'fixed delay' scenario, I present the situation of 'gently modulated delay effects', such as flanging and chorus. To all intents and purposes, the playback rate remains almost exactly 1:1 with the recording rate, so the value of f remains effectively constant over the period of tens or hundreds of replayed samples. This is a very significant difference when compared to pitch-shifting, where even a semitone up or down will produce values of f which differ dramatically, even over consecutive replayed samples.

When you are shortening delay times dynamically (e.g. when flanging towards zero delay), the value of f will indeed vary, but it is a gradual change in the value of f from one replayed sample to the next. Any artifacts from this (e.g. slight low-pass filtering in the upper frequencies) will be swamped by the more dramatic (and desirable) comb-filtering going on in the lower frequencies.

Once again, in the case of modulation delay effects, there are problems if you vary the delay length in integer steps only. When I first tried to implement the Solaris Ensemble, I experimented with varying the delay length in integer steps. There were significant artifacts which made the solution completely unacceptable. However, once I introduced fractional delays (via simple lerping), these artifacts disappeared. Or at least, they were rendered practically inaudible. I don't think I have any recordings of those experiments, but the resultant output was simply unusable. I might edit the Solaris source code and make a demo video at some point if I'm feeling saucy over the festive period. :lol:
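The modulated read itself is nothing exotic, something along these lines (again a hypothetical sketch in plain Java, not the Solaris source; the 15 ms / 5 ms figures are made up for illustration):

    // Hypothetical chorus-style read from a circular buffer at 48 kHz:
    // the delay time is swept slowly by an LFO and read with lerp.
    static double chorusRead(double[] buffer, int writeIndex, double lfoPhase) {
        double centreMs = 15.0, depthMs = 5.0, sr = 48000.0;
        double delayMs = centreMs + depthMs * Math.sin(2.0 * Math.PI * lfoPhase);
        double delaySamples = delayMs * sr / 1000.0;   // roughly 480 to 960 samples
        double pos = writeIndex - delaySamples;
        int i = (int) Math.floor(pos);
        double f = pos - i;                            // f drifts very slowly here
        int i0 = Math.floorMod(i, buffer.length);
        int i1 = Math.floorMod(i + 1, buffer.length);
        return (1.0 - f) * buffer[i0] + f * buffer[i1];   // lerp
    }

With a slow LFO, f changes by only a tiny amount from one output sample to the next, so the 'filter coefficients' are effectively frozen over any short stretch of audio. As I understand it, that's why the lerp artifacts stay benign here but not when pitch-shifting.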
What's going on here is nothing more complex than lerped fractional buffer access at a fixed rate and it produces aliasing.
[Emphasis mine, above] I think this is the crux of the matter:

A fixed replay rate which differs significantly from the record rate will indeed produce aliasing. No arguments at all. In applications such as real-time pitch-shifting (Zeit + Time Stream Integrator) or sample replay (GS), the aliasing might be acceptable as a trade-off against reduced CPU load and thus higher polyphony / grain density. However, for simple 1:1 record:replay rates (even when the replay rate is gently modulated by an LFO) aliasing is simply NOT an issue. It only becomes an issue when you go into audio-rate LFO frequencies or extreme modulation depths.

Merry Christmas to one and all when it arrives in your part of the world! :)
______________________
Dome Music Technologies