oversampling?

borkman
Posts: 51
Joined: Tue May 09, 2023 7:26 pm

oversampling?

Post by borkman »

I've added oversampling to a new effect module I'm finishing up using the library that was generously provided by R_Ware. I only oversample the fractional delays, as I don't believe there is any other place in the signal path that can produce aliasing. I've noticed that the sound becomes noticeably duller when I switch on oversampling. It's mostly (only?) noticeable in the transients.

For anyone who has experience with this library or oversampling in general, does this sound normal? My fractional delays use linear interpolation, so I wanted to include the option to oversample them, but, at least to my ears, it never positively affects the overall sound. I know that audio is subjective, but I was hoping to find someone who had implemented oversampling, with this library or otherwise, to see if there are any gotchas or things I might not be thinking about. Thanks!
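For reference, here's roughly what my delays do (an illustrative Python sketch, not my actual module code or the VM API):

```python
# Fractional delay line with linear interpolation between the two
# nearest samples. A fractional offset crossfades adjacent samples,
# which is itself a mild lowpass.
class FractionalDelay:
    def __init__(self, max_samples):
        self.buf = [0.0] * max_samples
        self.write = 0

    def process(self, x, delay):
        """delay is in samples and may be fractional."""
        self.buf[self.write] = x
        i = int(delay)            # integer part of the delay
        frac = delay - i          # fractional part
        n = len(self.buf)
        a = self.buf[(self.write - i) % n]
        b = self.buf[(self.write - i - 1) % n]
        self.write = (self.write + 1) % n
        # linear interpolation: crossfade between adjacent samples
        return a * (1.0 - frac) + b * frac
```

With a delay of 1.5 samples, an impulse comes out smeared across two samples at half height, which is exactly the high-frequency loss I assumed oversampling would help with.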
ColinP
Posts: 952
Joined: Mon Aug 03, 2020 7:46 pm

Re: oversampling?

Post by ColinP »

I'm very interested in hearing how other devs might respond to this thread as I'm unsure about the cost-benefit ratio of oversampling.

While developing the Adroit Granular Synth I spent a while researching polyphase FIR interpolation and was very impressed by how clever the algorithm is, but I never actually implemented it, as my intuition was that it would only deliver a small improvement in fidelity while considerably reducing the maximum grain density.

A/B testing is perhaps the only way to pin this down but given the inherent expense of oversampling, I suspect many users would happily sacrifice a little aliasing for increased performance in other areas.
UrbanCyborg
Posts: 599
Joined: Mon Nov 15, 2021 9:23 pm

Re: oversampling?

Post by UrbanCyborg »

It would depend on the amount and nastiness of the aliased frequencies. Do it with a fed-back ring modulator and I think you'd want anti-aliasing pretty quickly. It doesn't surprise me that you lose some top end; that oversampling library from R_Ware is, after all, a filter. It's meant to be very steep and low-ripple in the passband, but to do that really well you tend to want an optimally derived linear-phase FIR, which, as Chris Neuberg pointed out with the original release of his oversampling code, he was keeping for his own use for now, and I see his logic. To do one reasonably, you likely need the polyphase FIR Colin mentioned, and you definitely need filter coefficient tables of the proper sort, computed with the right number of taps for the oversampling factor you want.
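For what it's worth, coefficient tables of that general sort can be computed with a windowed sinc. This is a generic textbook design in Python (nothing to do with Chris's actual filters):

```python
import math

def windowed_sinc_lowpass(num_taps, cutoff):
    """Linear-phase FIR lowpass via a Hamming-windowed sinc.
    cutoff is normalized to the sampling rate (0 .. 0.5)."""
    m = num_taps - 1
    h = []
    for n in range(num_taps):
        k = n - m / 2.0
        # ideal (infinite) lowpass impulse response, i.e. a sinc
        if k == 0:
            ideal = 2.0 * cutoff
        else:
            ideal = math.sin(2.0 * math.pi * cutoff * k) / (math.pi * k)
        # Hamming window tapers the truncated sinc to reduce ripple
        w = 0.54 - 0.46 * math.cos(2.0 * math.pi * n / m)
        h.append(ideal * w)
    s = sum(h)                      # normalize for unity gain at DC
    return [c / s for c in h]
```

More taps buys a steeper transition band; a Hamming window at 31 taps already gets you roughly 50 dB of stopband attenuation, which shows why the tap count has to match the oversampling factor you're targeting.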

Reid
Cyberwerks Heavy Industries -- viewforum.php?f=76
borkman
Posts: 51
Joined: Tue May 09, 2023 7:26 pm

Re: oversampling?

Post by borkman »

Thanks for the detailed explanation. What's happening makes sense now, and I verified it in the debugger. I was confused because logically identical paths (or so I thought) through my module produced different outcomes when oversampled. All delays were set to 0ms delay time, and when oversampling was off, output through the different paths was identical. When I switched oversampling on, different paths had different outcomes. Why? Because the number of oversampled delays was different in the different paths! When not oversampling, input in = input out for a delay of 0 ms. When oversampled, that is not the case because filtering is applied. One oversampling step was not so noticeable, but when two oversampled delays happened in series, the high end really dropped.
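To convince myself, I mocked up the series effect outside the module. The one-pole lowpass here is just a crude stand-in for the oversampling filter, but it shows why two filtered stages in series lose roughly twice the dB at the top end:

```python
import math

def one_pole_lowpass(xs, a):
    """Simple one-pole lowpass: y[n] = a*x[n] + (1-a)*y[n-1]."""
    y, out = 0.0, []
    for x in xs:
        y = a * x + (1.0 - a) * y
        out.append(y)
    return out

fs, n = 48000.0, 4800
# high-frequency test tone at 11 kHz
sine = [math.sin(2.0 * math.pi * 11000.0 * t / fs) for t in range(n)]

once = one_pole_lowpass(sine, 0.3)
twice = one_pole_lowpass(once, 0.3)   # second identical stage in series

def peak(xs):
    # measure steady-state amplitude, skipping the filter transient
    return max(abs(v) for v in xs[len(xs) // 2:])

g1 = peak(once)    # gain through one stage
g2 = peak(twice)   # gain through two stages: roughly g1 squared
```

Gains of filters in series multiply, so the dB loss adds up, which matches what I heard with two 0 ms oversampled delays back to back.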

What to do now?
1. Implement the polyphase FIR mentioned.
2. Use better interpolation than linear.
3. Ignore it altogether, since I didn't really hear aliasing anyway and only added oversampling because I thought it would be a good idea.

I'm leaning towards #3. I could add the others as options at a later date if necessary. If someone wouldn't mind taking a listen once I wrap this up, I'd really appreciate it! A different set of ears would be really helpful.
utdgrant
Posts: 541
Joined: Wed Apr 07, 2021 8:58 am
Location: Scotland

Re: oversampling?

Post by utdgrant »

I used the R_OpenLib oversampling library in my Blue Velvet Soft-Clip Mixer module. I was holding off creating any kind of non-linear distortion module until I could eliminate (or greatly attenuate) alias frequencies, and Chris came to the rescue at just the right time.

Yes, there is definitely a high shelf of about -6dB, starting at around 10kHz when you apply the R_Ware oversampling algorithm. This frequency response profile seems to apply at all oversampling multipliers (x2 up to x16).

However, in the case of Blue Velvet, I think this is a beneficial side-effect, leading to a smoother, more-rounded, 'analogue' sound when enabled. You also have the option to bypass oversampling completely, leading to a brighter sound, whilst also allowing through the alias frequencies. This works well if you're after a more aggressive, grittier, 'digital' sound.
______________________
Dome Music Technologies
utdgrant
Posts: 541
Joined: Wed Apr 07, 2021 8:58 am
Location: Scotland

Re: oversampling?

Post by utdgrant »

Oh, and I used simple linear interpolation to implement fractional delays in the following modules:

Solaris Ensemble
Zeit Voltage-Controlled Delay
Aqua-Marine String Ensemble
Micro Stringer

It sounded 'good enough' to my ears during development and it was a breeze to implement without having to understand advanced DSP and mathematics concepts.

If you're at all interested, I've made all my source code available for download, and it can be re-used in any way without restriction:

At the bottom of This Page.
______________________
Dome Music Technologies
borkman
Posts: 51
Joined: Tue May 09, 2023 7:26 pm

Re: oversampling?

Post by borkman »

@utdgrant, I have read through parts of your code for reference a few times. In fact, it served as a good example of how to integrate R-OpenLib. Thanks for providing this to the community!

It's reassuring that you used linear interpolation in a few modules with good results. As I mentioned, I was applying oversampling more to fix a theoretical issue (and to see how it's done) than to fix any perceived problem. In my case, I think it's better to leave it out altogether, for now anyway. I can always reassess if necessary. That's the blessing (and curse) of software.

"The software isn’t finished until the last user is dead." - Attributions vary
ColinP
Posts: 952
Joined: Mon Aug 03, 2020 7:46 pm

Re: oversampling?

Post by ColinP »

Yes, Grant has been an excellent contributor to this little community. Thanks Grant and keep up the good work. :D

Borkman, I think if you've already implemented oversampling then you might as well leave it in your release as a user-selectable option, even if it has no apparent benefit.

I looked at these issues in the early days of GS when it was still capable of real-time input signal manipulation, so I was thinking about aliasing problems with interpolated access to a live buffer. But the cost of doing anything other than linear interpolation was scary, as my target was 500 simultaneous grains - in effect 500 sample-based oscillators, therefore requiring 500 interpolations per sample. I also wanted full stereo support, so when the buffer is stereo the code has to perform 1000 interpolations per sample (along with running 500 envelope generators and many thousands of VCAs).

Because of both technical and UI issues I changed tack to concentrate on pure synthesis rather than trying to incorporate effects processing so things became easier to manage and I hit the target on a mid-range machine. I did plan to then add "pre-oversampling" - where the buffer was upsampled on loading or recording so that the buffer was effectively running at 384 kHz or 768 kHz by using up 8 or 16 times as much memory. This would have been superficially efficient because at runtime the code would have still been doing linear interpolation but a typical buffer would then be so many tens of megabytes in size that I figured L2 and L3 cache misses would have become a major overhead. And nobody has complained about aliasing in GS so it's been low on my list of things to tinker with.

However if anyone is looking at doing one or two interpolations per sample rather than a thousand then live polyphase FIR interpolation is a realistic prospect.

So what is polyphase FIR interpolation? Basically it's about exploiting zero-stuffing when upsampling. I spent what felt like a long time reading poorly written student papers before I came across this excellent article...

https://www.allaboutcircuits.com/techni ... ilter-dsp/

It doesn't contain any code but it explains the concepts with about as much clarity as one could hope for.
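To give a flavour of the trick: filtering a zero-stuffed signal wastes half the multiplies on zeros, and splitting the filter into even/odd phases produces identical output without ever touching them. Here's a toy 2x version (an illustrative sketch in Python, nothing like production code):

```python
def upsample2_naive(x, h):
    """2x upsampling the textbook way: zero-stuff, then FIR filter."""
    stuffed = []
    for v in x:
        stuffed += [v, 0.0]          # insert a zero after every sample
    out = []
    for n in range(len(stuffed)):
        acc = 0.0
        for k, c in enumerate(h):    # half these products hit zeros
            if n - k >= 0:
                acc += c * stuffed[n - k]
        out.append(acc)
    return out

def upsample2_polyphase(x, h):
    """Same result: run the even and odd phases of h on the original
    samples, so the zeros are never multiplied at all."""
    e0, e1 = h[0::2], h[1::2]        # even/odd coefficient phases
    out = []
    for n in range(len(x)):
        for e in (e0, e1):           # each phase yields one output sample
            acc = 0.0
            for k, c in enumerate(e):
                if n - k >= 0:
                    acc += c * x[n - k]
            out.append(acc)
    return out
```

For an oversampling factor of L, the polyphase form does 1/L of the naive version's multiplies, which is exactly the saving the article describes.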
borkman
Posts: 51
Joined: Tue May 09, 2023 7:26 pm

Re: oversampling?

Post by borkman »

I've already removed the oversampling. The way I originally implemented it, with each delay being oversampled, proved to be problematic. With all delays set to 0 delay time, you'd expect in = out, and that's true for no oversampling. With oversampling on, different signal paths would yield different output depending on how many delays the signal passed through. It seemed like a bug to me and took me a while to get my head around. I assumed I screwed up somewhere and spent quite a lot of time in the debugger! Had I just oversampled the final output, I likely would have been fine. I was trying to optimize CPU usage and not oversample things that didn't need oversampling. Ah well, I also wouldn't have learned as much!

Thanks for sharing the link. This is an area I definitely need to study more, especially before I attempt the distortion module I've been thinking about. But first, I have some modulated delay lines that need attention. :)
ChR_is
Posts: 107
Joined: Wed Sep 22, 2021 5:48 pm

Re: oversampling?

Post by ChR_is »

Hi! R_OpenLib dev here. maybe i can clear some things up :)

first of all, oversampling is awesome, but it's no magic bullet you can fire at everything and expect it to make things better. oversampling extends the frequency range a processor can operate within. VM is fixed at a 48kHz internal sampling rate, which means that frequency content above 24kHz will reflect back into the spectrum as aliasing. by oversampling two-fold you increase the sampling rate to 96kHz and the nyquist frequency to 48kHz. this gives you the opportunity to remove the frequencies that would reflect back at your regular sampling rate before downsampling. of course this only helps if your processing adds frequency content that can extend above the nyquist frequency. a prime example here would be anything non-linear. a delay, however, is a perfectly linear process: it cannot create frequency content that wasn't already in the input. so oversampling wouldn't really be needed in that case.
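here's a tiny illustration of that point (plain python, not R_OpenLib): a cubic waveshaper on a 9 kHz sine at 48 kHz creates a 27 kHz harmonic, which is above nyquist and lands reflected at 48 - 27 = 21 kHz:

```python
import math

fs, n = 48000, 4800
f0 = 9000.0                        # input sine frequency
x = [math.sin(2.0 * math.pi * f0 * t / fs) for t in range(n)]
y = [v ** 3 for v in x]            # cubic nonlinearity adds a 3*f0 term

def level(sig, f):
    """Amplitude of the component at frequency f (DFT projection)."""
    re = sum(v * math.cos(2.0 * math.pi * f * t / fs) for t, v in enumerate(sig))
    im = sum(v * math.sin(2.0 * math.pi * f * t / fs) for t, v in enumerate(sig))
    return 2.0 * math.hypot(re, im) / len(sig)

# sin^3(w t) = (3 sin(w t) - sin(3 w t)) / 4, and 3*9 = 27 kHz is above
# nyquist, so it shows up mirrored at 21 kHz with amplitude 1/4
alias = level(y, 21000.0)
wanted = level(y, 9000.0)
```

run the same shaper at 96kHz, filter, and downsample, and that 21 kHz component is gone. run a plain delay through the same measurement and there's nothing above nyquist to remove in the first place.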

interpolated delay lines have a built-in filter. linear interpolation will introduce some high-frequency roll-off due to the interpolation. in theory you could use oversampling to combat this, but that'd be a bit over the top imo. if you have a saturating delay, oversampling makes sense again. but in this case you'd want to oversample the saturation only to save cpu.
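the roll-off of linear interpolation is easy to quantify: the fractional part of the delay gives a two-tap filter H(w) = (1 - frac) + frac * e^{-jw}, worst at a half-sample offset (a sketch, not library code):

```python
import math

def lin_interp_gain(frac, freq, fs=48000.0):
    """Magnitude response of a linear-interp fractional delay:
    H(w) = (1 - frac) + frac * e^{-jw}."""
    w = 2.0 * math.pi * freq / fs
    re = (1.0 - frac) + frac * math.cos(w)
    im = -frac * math.sin(w)
    return math.hypot(re, im)

# worst case is frac = 0.5: about -3 dB at 12 kHz and a complete
# null at nyquist (24 kHz); at frac = 0 the response is flat
g12k = lin_interp_gain(0.5, 12000.0)
g24k = lin_interp_gain(0.5, 24000.0)
```

so a modulated delay sweeping through half-sample offsets dulls the top end all by itself, no oversampling involved.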

i don't know what you have done or how you have implemented the oversampled delay, so i can't really say why you experience such a strong coloration. i have designed the OS filters to be steep so that you can push them closer towards nyquist and keep more of the relevant audible spectrum. off the top of my head, here are some common pitfalls that might lead to unwanted filtering:
1) samplerate is not set correctly for the oversampling processor
the oversampler needs to be initialized to the correct samplerate so that it can set up the filters accordingly. if you e.g. set it to half the actual samplerate, it will cut the spectrum at a quarter of the target samplerate instead of at nyquist, leading to a severe drop in high frequency content.

2) processor that is oversampled is not adjusted to the changed samplerate
processors that are influenced by the samplerate need to adjust when it changes. e.g. if you oversample a filter, you need to calculate its coefficients based on the oversampled samplerate instead of the base samplerate. a delay has its time directly linked to the samplerate, so you need to account for that so that the timing is correct and enough samples are added to and read from the delay buffer. to set up an oversampled delay you would upsample the input, then process the delay x times for the given oversampling factor (e.g. 2 times for 2x oversampling), and then downsample the result. the delay now processes more values per sample and must account for that when calculating its timing.
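as a trivial sketch of the timing point: the sample count for a given delay time has to be computed from the oversampled rate, not the base rate:

```python
def delay_in_samples(time_ms, base_rate, os_factor):
    """Delay length in buffer samples at the oversampled rate."""
    return time_ms * 0.001 * base_rate * os_factor

# 10 ms at 48 kHz is 480 samples; the same 10 ms at 2x
# oversampling (96 kHz) is 960 buffer samples
```

forget the factor and your delay plays back early by exactly that factor.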

3) samples are not processed multiple times according to the oversampling factor.
this one is an easy miss. when oversampling, your processing needs to be done multiple times per input sample, according to the oversampling factor. you need to process all upsampled samples and downsample all processed samples. e.g. with 2x oversampling the processor must process 2 samples and must therefore be invoked 2 times.
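as a skeleton of that structure (the upsample/process/downsample helpers here are hypothetical stand-ins, not the R_OpenLib API):

```python
def process_block_2x(block, upsample, process, downsample):
    """For 2x oversampling the processor runs once per *oversampled*
    sample, i.e. twice per input sample."""
    out = []
    for x in block:
        up = upsample(x)                        # 2 samples at 96 kHz
        processed = [process(s) for s in up]    # invoked 2x per input
        out.append(downsample(processed))       # back to one 48 kHz sample
    return out
```

if you only invoke the processor once per input sample here, you effectively drop every other oversampled sample and the whole scheme falls apart.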

4) IIR filter frequency set too low
while a lower cutoff frequency for the IIR oversampling filter will get rid of more aliased frequencies, it will also cut away more of the high-end. you can try setting the cutoff frequency closer to nyquist (e.g. 22kHz) to get less high-end attenuation at the cost of less efficient aliasing suppression.


there's also been some questionable information in this thread that i want to clarify:

IIR vs FIR.. that's a debate by itself tbh. but what's not debatable is that both filter types can produce very steep filters whose cutoff frequencies can be pushed close to nyquist. while you can design an incredibly steep FIR of course, that will add lots of latency and processing cost, which is not really do-able in VM. it mostly boils down to the phase response. IIR filters will always alter the phase. FIR filters can be designed to be linear phase; the tradeoff is added latency. so.. FIR oversampling is NOT better than IIR oversampling. it just has different tradeoffs. you could implement a steeper IIR filter and get better results, but at the cost of increased cpu usage. i have designed R_OpenLib to be a good tradeoff between quality and performance.

zero-stuffing is not exclusive to FIR oversampling. you need to do that for IIR oversampling as well. basically zero stuffing is needed to "create" the missing information between the samples. i won't go into detail here, just wanted to clear that up.

Oversampling IS filtering. always has been. oversampling only works if you filter out the frequency content above the original nyquist in the oversampled state before downsampling. since oversampling without filtering doesn't make sense you can use the filters for the interpolation as well. an upsampler for instance works by adding the input value alongside some zeros to a filter that's tuned roughly below nyquist. the filter will interpolate the values and remove the offending high frequency content all at once producing oversampled samples at the new samplerate. since oversampling is filtering, there'll always be a slight loss in high frequencies. since VM runs at 48kHz oversampling filters are forced into the "audible" spectrum. setting the cutoff to 20kHz should be sufficient for most tasks and shouldn't really be audible by most humans.

all of that being said, i've added FIR oversampling to R_OpenLib, so you can try everything i've said out for yourself ;)
initially i wanted to keep FIR oversampling as a standout feature for my brand. but.. if you follow my releases you might have noticed that i don't even include it myself anymore. imo linear phase oversampling doesn't make sense in a virtual modular. the latency is just too much of a tradeoff. i'd rather take the slight phase changes at the highest frequencies.

i don't frequent this forum very often, so i miss discussion like this easily. feel free to reach out with further questions. i'll also try to check this thread from time to time.