C++ Reverb Algorithm

Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must likewise follow CC BY-SA and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/5318989/

Date: 2020-08-28 17:53:37 · Source: igfitidea

Reverb Algorithm

c++ signal-processing

Asked by Reu

I'm looking for a simple or commented reverb algorithm, even in pseudocode would help a lot.


I've found a couple, but the code tends to be rather esoteric and hard to follow.


Answered by MusiGenesis

Here is a very simple implementation of a "delay line" which will produce a reverb effect in an existing array (C#, buffer is short[]):


int delayMilliseconds = 500; // half a second
int delaySamples = 
    (int)((float)delayMilliseconds * 44.1f); // assumes 44100 Hz sample rate
float decay = 0.5f;
for (int i = 0; i < buffer.Length - delaySamples; i++)
{
    // WARNING: overflow potential
    buffer[i + delaySamples] += (short)((float)buffer[i] * decay);
}

Basically, you take the value of each sample, multiply it by the decay parameter and add the result to the value in the buffer delaySamples away.


This will produce a true "reverb" effect, as each sound will be heard multiple times with declining amplitude. To get a simpler echo effect (where each sound is repeated only once) you use basically the same code, only run the for loop in reverse.
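The two loops above can be sketched in C++ like this (a float buffer instead of the answer's C# short[]; function names and the vector type are my own choices, not from the answer):

```cpp
#include <cstddef>
#include <vector>

// Forward loop: each delayed copy is itself fed forward again,
// so every sound repeats with declining amplitude ("reverb").
void reverb(std::vector<float>& buf, std::size_t delaySamples, float decay) {
    for (std::size_t i = 0; i + delaySamples < buf.size(); ++i)
        buf[i + delaySamples] += buf[i] * decay;
}

// The same loop run in reverse reads only unmodified samples,
// so each sound is repeated exactly once ("echo").
void echo(std::vector<float>& buf, std::size_t delaySamples, float decay) {
    if (delaySamples >= buf.size()) return;
    for (std::size_t i = buf.size() - delaySamples; i-- > 0; )
        buf[i + delaySamples] += buf[i] * decay;
}
```

Running both on a single impulse shows the difference: the forward version produces a decaying series of repeats, the reverse version a single repeat.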


Update: the word "reverb" in this context has two common usages. My code sample above produces a classic reverb effect common in cartoons, whereas in a musical application the term is used to mean reverberation, or more generally the creation of artificial spatial effects.


A big reason the literature on reverberation is so difficult to understand is that creating a good spatial effect requires much more complicated algorithms than my sample method here. However, most electronic spatial effects are built up using multiple delay lines, so this sample hopefully illustrates the basics of what's going on. To produce a really good effect, you can (or should) also muddy the reverb's output using FFT or even simple blurring.

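One cheap form of the "blurring" mentioned above is a short moving average run over the processed buffer, which smears distinct echoes into a more diffuse wash. A minimal sketch (the window size is an illustrative choice of mine):

```cpp
#include <cstddef>
#include <vector>

// Replace each sample with the average of itself and the preceding
// `window` samples -- a crude low-pass "blur" for a reverb tail.
std::vector<float> blur(const std::vector<float>& in, std::size_t window) {
    std::vector<float> out(in.size(), 0.0f);
    for (std::size_t i = 0; i < in.size(); ++i) {
        float sum = 0.0f;
        std::size_t n = 0;
        for (std::size_t j = (i < window ? 0 : i - window); j <= i; ++j, ++n)
            sum += in[j];
        out[i] = sum / n;  // n >= 1, since j == i is always included
    }
    return out;
}
```

FFT-based filtering achieves a similar (and more controllable) smoothing, at the cost of more machinery.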

Update 2: Here are a few tips for multiple-delay-line reverb design:


  • Choose delay values that won't positively interfere with each other (in the wave sense). For example, if you have one delay at 500ms and a second at 250ms, there will be many spots that have echoes from both lines, producing an unrealistic effect. It's common to multiply a base delay by different prime numbers in order to help ensure that this overlap doesn't happen.

  • In a large room (in the real world), when you make a noise you will tend to hear a few immediate (a few milliseconds) sharp echoes that are relatively undistorted, followed by a larger, fainter "cloud" of echoes. You can achieve this effect cheaply by using a few backwards-running delay lines to create the initial echoes and a few full reverb lines plus some blurring to create the "cloud".

  • The absolute best trick (and I almost feel like I don't want to give this one up, but what the hell) only works if your audio is stereo. If you slightly vary the parameters of your delay lines between the left and right channels (e.g. 490ms for the left channel and 513ms for the right, or .273 decay for the left and .2631 for the right), you'll produce a much more realistic-sounding reverb.

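The prime-multiple and stereo-detune tips can be combined in a small sketch (the prime list, gain rule, and detune parameter are my own illustrative choices, not a tuned design):

```cpp
#include <cstddef>
#include <vector>

// Several delay lines whose lengths are a base delay times different
// primes, so their echoes rarely land on the same sample. Longer lines
// are given a smaller gain. `detune` lets the left and right channels
// use slightly different delays for a more realistic stereo image.
void multiTapReverb(std::vector<float>& ch, std::size_t baseDelay,
                    float decay, float detune) {
    const int primes[] = {2, 3, 5, 7};
    for (int p : primes) {
        std::size_t d = static_cast<std::size_t>(baseDelay * p * detune);
        float g = decay / p;  // longer delay line -> fainter echoes
        for (std::size_t i = 0; i + d < ch.size(); ++i)
            ch[i + d] += ch[i] * g;
    }
}

// Usage sketch: vary parameters slightly between channels.
// multiTapReverb(left,  2205, 0.5f, 1.00f);   // ~50 ms base at 44.1 kHz
// multiTapReverb(right, 2205, 0.5f, 1.047f);  // slightly longer on the right
```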

Answered by Shannon Matthews

Digital reverbs generally come in two flavors.


  • Convolution Reverbs convolve an impulse response and an input signal. The impulse response is often a recording of a real room or other reverberation source. The character of the reverb is defined by the impulse response. As such, convolution reverbs usually provide limited means of adjusting the reverb character.

  • Algorithmic Reverbs mimic reverb with a network of delays, filters and feedback. Different schemes will combine these basic building blocks in different ways. Much of the art is in knowing how to tune the network. Algorithmic reverbs usually expose several parameters to the end user so the reverb character can be adjusted to suit.

  • 卷积混响脉冲响应和输入信号进行卷积。脉冲响应通常是真实房间或其他混响源的录音。混响的特性由脉冲响应定义。因此,卷积混响通常提供的调整混响特性的方法有限。

  • 算法混响通过延迟、滤波器和反馈网络模拟混响。不同的方案将以不同的方式组合这些基本构建块。很多艺术在于知道如何调整网络。算法混响通常向最终用户公开几个参数,因此可以调整混响特性以适应。
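Two classic building blocks of the algorithmic flavor are the feedback comb filter and the allpass filter (the Schroeder design chains several parallel combs into a few series allpasses). A per-sample sketch of each, assuming a preallocated delay line and a running write position:

```cpp
#include <cstddef>
#include <vector>

// Feedback comb filter: y[n] = x[n] + g * y[n - d],
// where d is the delay line length. Produces a decaying echo train.
float combTick(std::vector<float>& line, std::size_t& pos, float x, float g) {
    float y = x + g * line[pos];  // line[pos] holds y[n - d]
    line[pos] = y;
    pos = (pos + 1) % line.size();
    return y;
}

// Allpass filter: y[n] = -g*x[n] + x[n-d] + g*y[n-d].
// Flat frequency response, but smears echoes in time -- useful for
// increasing echo density without coloring the sound.
float allpassTick(std::vector<float>& line, std::size_t& pos, float x, float g) {
    float delayed = line[pos];        // x[n-d] + g*y[n-d]
    float y = -g * x + delayed;
    line[pos] = x + g * y;
    pos = (pos + 1) % line.size();
    return y;
}
```

Feeding an impulse into a comb of length d yields nonzero outputs every d samples, each g times the last, which matches the decaying-repeat behavior described above.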

The A Bit About Reverb post at EarLevel is a great introduction to the subject. It explains the differences between convolution and algorithmic reverbs and shows some details on how each might be implemented.


Physical Audio Signal Processing by Julius O. Smith has a chapter on reverb algorithms, including a section dedicated to the Freeverb algorithm. Skimming over that might help when searching for some source code examples.


Sean Costello's Valhalla blog is full of interesting reverb tidbits.


Answered by hotpaw2

What you need is the impulse response of the room or reverb chamber which you want to model or simulate. The full impulse response will include all the multiple and multi-path echoes. The length of the impulse response will be roughly equal to the length of time (in samples) it takes for an impulse sound to completely decay below the audible threshold or a given noise floor.


Given an impulse vector of length N, you could produce an audio output sample by vector multiplication of the input vector (made up of the current audio input sample concatenated with the previous N-1 input samples) by the impulse vector, with appropriate scaling.

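That per-sample dot product can be sketched as follows (a naive O(N)-per-sample direct convolution; real convolution reverbs use FFT-based fast convolution because N can be several seconds of samples):

```cpp
#include <cstddef>
#include <vector>

// One output sample of a convolution reverb: the dot product of the
// impulse response with the most recent N input samples.
// history[0] is the current input sample, history[k] is k samples back.
float convolveSample(const std::vector<float>& impulse,
                     const std::vector<float>& history) {
    float out = 0.0f;
    for (std::size_t k = 0; k < impulse.size() && k < history.size(); ++k)
        out += impulse[k] * history[k];
    return out;
}
```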

Some people simplify this by assuming most taps (down to all but 1) in the impulse response are zero, and just using a few scaled delay lines for the remaining echoes, which are then added into the output.


For even more realistic reverb, you might want to use different impulse responses for each ear, and have the response vary a bit with head position. A head movement of as little as a quarter inch might vary the position of peaks in the impulse response by 1 sample (at 44.1k rates).


Answered by Joe Qian

You can use GVerb. Get the code from here. GVerb is a LADSPA plug-in; you can go here if you want to learn about LADSPA.


Here is the wiki for GVerb, including an explanation of the parameters and some ready-made reverb settings.

是 GVerb 的 wiki,包括对参数和一些即时混响设置的解释。

You can also call it directly from Objective-C:


ty_gverb *_verb;
_verb = gverb_new(16000.f, 41.f, 40.0f, 7.0f, 0.5f, 0.5f, 0.5f, 0.5f, 0.5f);
// audio data from an AudioUnit render callback or ExtAudioFileReader
AudioSampleType *samples = (AudioSampleType *)dataBuffer.mBuffers[0].mData;
float lval, rval;
for (int i = 0; i < fileLengthFrames; i++) {
    float value = (float)samples[i] / 32768.f;  // SInt16 to float
    gverb_do(_verb, value, &lval, &rval);
    samples[i] = (SInt16)(lval * 32767.f);      // float back to SInt16 (left output only)
}

GVerb is a mono effect, but if you want a stereo effect you can run each channel through the effect separately, then pan and mix the processed signals with the dry signals as required.
