“Since ancient times, clapping has always been deeply associated with music: From the rhythmic clapping at tribal gatherings or concerts, to the iconic clap sound in drum machines. But the ‘high five’ clap signifying shared achievement, closely related to the handshake signifying agreement or mutual appreciation, is another positive association. And that metaphor transcends the duality between host application and plug-in, which meet as equals in CLAP.”

and let's round it off with the logo:

The logo is basically an infinity symbol inside a triangle. The symbol is broken so it looks like a twisted band. Due to its position and tilt, the left half forms the letter “C” while the right half forms a “P” – the first and last letters of CLAP. Also, the two blobs in the middle resemble musical note symbols.

Then there’s the “high five” association of clapping, which I also see in the symmetry of c and p coming together – which is perhaps wishful thinking on my part: It may have worked better if human hands more closely resembled those of Lego minifigures.

Audio associations are also welcome: The sound of clapping easily associates with a plug-in snapping into the host, as if a virtual hardware cartridge was inserted into some kind of virtual console.

Lastly, clapping is also the foundation of applause, which I'll have for the whole team who worked more than a year on this.

So then I turn to the analogue synth and set the knobs to similar settings. With extreme modulations, though, just touching a pot ever so slightly may alter the sound quite a bit. So it takes a minute or two until everything lines up, and sure enough, the result may be closer than expected. Which my colleagues in the next room notice by my giggles, or because I call them over to demonstrate the effect.

It happens regularly that on the next day, or maybe just a coffee break later, that match is gone. Suddenly the analogue synth doesn't do the same thing anymore, and sometimes it isn't even possible to dial the same tone back in. It may be subtle, but it's an audible difference.

So there it is. No two analogue synths of the same make sound the same, and actually not even a single analogue synth sounds like itself at two different points in time.

What it means is, it doesn't really make much sense to do those AB comparisons between analogue hardware and softsynths that are supposed to model said hardware. They're not only controversial because of bias on both the maker's and listener's side, because of sound design skills, choice of settings or genre and numerous other possible reasons - they're most of all controversial because it's inherently unlikely, if not impossible, to create the definitive match. This alone leaves the door wide open for people to hear what they want to hear. I'll get back to that later.

So where do these differences stem from?

First of all, the most prominent candidate is temperature. Everybody knows that analogue synths need to warm up until the oscillators stay in tune. But it's not just the oscillators: the filters, envelopes, amplifiers, glide circuit, modulation depth and whatever else are also subject to change with temperature. Only, we're not as perceptive to these effects as we are to out-of-tune oscillators. So it's not just the pitch that drifts over time, it's also the sound.

In the same vein, climate also plays a role albeit a tiny one. Humidity and air pressure may have very little effect, but they do have some effect. Hey, a cent off is a cent off. I was told that the developers of Monark had a Winter model and a Summer one. I don't know which one was chosen in the end, I'd be curious to know which one they thought sounded better.

In any case, when you start matching a digital model to an analogue synth in the morning in an office that - no offense - gets more humid over the day, it'll simply sound a tad different in the evening. Likewise if the afternoon sun comes round and you turn the aircon on...

With this in mind, it may make sense to record the synth in the parameter ranges you're trying to match, and verify how the beginning of your work matches up with the end. Keep track of your drift, and if it's bad, maybe model to a snapshot (samples...) entirely.

Secondly, there are part tolerances that may create some difference between two specimens of the same synth. Most people have heard of matched transistors. It's often noted in schematics whether or not transistors must be matched. Which in turn means there's a whole lot of them which are not matched. If they aren't matched on the board, chances are great they aren't matched among machines. Thus they're all a tad different, as are diodes, resistors, capacitors, knobs, faders and all the complex parts glued into chips as well.

Those aren't earthshattering differences, not much at all. But they add another dimension of subtlety to the aforementioned temperature drift. For instance, with our Pro One, when I dial resonance from start of self oscillation to maximum, the pitch of the self oscillating filter goes up by about a quarter tone. When I asked people to send audio samples from their Pro Ones, the pitch would usually go down towards maximum resonance, some as much as a third or fourth - as it happens in pretty much the majority of synths out there.

One can get datasheets for most parts that are used in vintage synthesizers. Not only do they contain formulae that are useful for modelling, but component tolerances are often quoted i.e. how much their properties are allowed to vary. If some resistance or so can be off by 5% it's sometimes quite surprising how some synths actually sound similar. Which brings up the next bit:

What's baked into part tolerances is usually compensated by little trimpots that let your synth repair man dial in correct frequencies and voltages in critical sections. While this is supposed to level out the differences between units it may also bear potential for sonic differences. These trimpots can also be used to change the sound in many ways, and like their counterparts on the front panel, a tad to the left or right may change the sound ever so slightly, yet audibly.

Furthermore when synths get serviced, parts may be exchanged. Some people have collected filter or envelope chips that match each other in order to equip polysynths with a set of chips that sound as similar as possible. Some people also modify some of the resistors and capacitors for smoother or more aggressive operation. That's not necessarily circuit bending, it should merely be seen as improving the hardware surgically.

So, analogue synth aficionados - including myself - will often point out that they made this or that sound on a freshly serviced device. This doesn't mean that all freshly serviced units will sound exactly the same. Not only because of everything else, but also because the people who calibrate synths take their own pride in how they make that synth sound better than when it came fresh from the factory!

But even a good service won't always bring back what's lost through wear and tear. It's not only the pots that become scratchy and all that, some parts simply break gradually over the years. Most classically it's electrolytic capacitors that run dry. But of course there's also dust, things getting loose or just old and squishy, who knows. A synth that has been locked in the basement for a decade will need service, there's no question about that.

Talking about age in a synth is a tad ambiguous. The more vintage the higher the price. But how likely is it that the sound of a synthesizer ages as well as a great port in a barrel? I secretly hope that the sound doesn't really change that much, not least if parts can be replaced. However Hans Zimmer told me that his newly built Moog Modular sounds a great deal better than the vintage wall at the back of his studio - same modules, 40+ years apart.

Lo and behold: climate, parts, service and age are the main reasons for sonic differences between two specimens of the same synth, heck, even within a single synth. No-one expects their synth to sound the same before and after service, but why not also before and after lunch?

So, how can we ever do those popular AB tests again when we know all this?

Some clarification up front: I haven't invented any of this. It's all pretty standard stuff, but it might not all be very obvious to many people. I know it took me a while to grasp all of this, and it's a bit frightening to see it laid out so simply in front of me now. Hehehe, but wait for the next chapters. This one is still on the easy gear!

Here's your standard active lowpass filter, all naked and without the bloat:

This time I threw KCL right into the graph. Hence we get:

0 = Ivccs - Icap

0 = g * ( tanh(Vin) - tanh(Vout) ) - Vout + s

which leads to following "unsolvable" equation:

**Vout = g * ( tanh(Vin) - tanh(Vout) ) + s**

To sum this up, the general problem we face is, if we want to compute Vout, we already need to know Vout. It's computed by itself. This article is about methods to do exactly that in the non-linear case, all forgoing psychic abilities. I'll be showing three kinds of basic techniques that are more or less fit to serve this purpose:

- Numerical Methods using iterative solvers
- Analytical Methods using "one step" solutions
- Lookup tables (yep, that works too)
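To make the chicken-and-egg nature of the equation tangible before we get to the real solvers, here's a little aside of my own (not one of the three methods above): for small g, even naive fixed-point iteration works - just plug the current guess for Vout back into the right-hand side until it stops moving. The function name is mine, for illustration only.

```cpp
#include <cmath>

// Naive fixed-point iteration on Vout = g * ( tanh(Vin) - tanh(Vout) ) + s.
// Illustration only: converges for g < 1, where each substitution shrinks
// the error. The solvers discussed in the article are the practical options.
float fixedPointVout( float g, float Vin, float s )
{
    float Vout = 0.f; // crude initial guess
    for( int i = 0; i < 100; ++i )
    {
        float next = g * ( tanhf(Vin) - tanhf(Vout) ) + s;
        if( fabsf( next - Vout ) < 1e-6f )
            return next; // converged
        Vout = next;
    }
    return Vout;
}
```

For larger g this simple substitution diverges, which is exactly why the heavier machinery below exists.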

As a helpful addendum, if there weren't any tanh() terms, we'd have a linear equation and we could solve for Vout, like this (which we'll use for various purposes later on):

**VoutLin = ( g * Vin + s ) / ( g + 1 )**

But first things first... something really helpful to get started with:

Every method to solve this kind of implicit non-linear filter works best with a proper estimate. That means, one way or another we "guess" the value of Vout - or tanh(Vout).

The most well known - and probably the worst - estimate is using Vout from its previous computation. While you'll find this recommended often, particularly for iterative methods, it's not necessarily a good starting point. For a one pole filter it's probably better to use the linear estimate. That is, we simply compute VoutLin using the linear equation and start our iterative solver or our one step method from there.

If you look at the graphics in my Numerical Integration article, you'll notice that s (or iceq) is sort of in the middle between two Vouts. It's proven quite useful to use s[n+1] as an estimate for Vout[n+1]. But how about a linear estimate for s[n+2]? (Hint: It ain't good, but it may encourage experimenting, for instance you could blend it with something else)

Lastly, I mentioned lookup tables. If precision is key, calculating the initial estimate from a lookup table and running, say, a bisection solver will work wonders.

Here's our first piece of code:

```
// member variables:
float Vout; // output
float s;    // state

enum EstimateSource
{
    State,               // use current state
    PreviousVout,        // use z-1 of Vout
    LinearStateEstimate, // use linear estimate of future state
    LinearVoutEstimate,  // use linear estimate of Vout
    LookUpTable          // use lookup table
};

float getInitialEstimate( float tanh_input, EstimateSource inSource )
{
    if( inSource == LinearStateEstimate
     || inSource == LinearVoutEstimate )
    {
        // run linear, update Vout
        Vout = runOneStep( tanh_input );
    }
    switch( inSource )
    {
        default:
        case LinearVoutEstimate:
        case PreviousVout:        return Vout;
        case State:               return s;
        case LinearStateEstimate: return 2 * Vout - s;
        case LookUpTable:         return runLookUp( tanh_input );
    }
}
```

Note that, in order to avoid complaints about excessive use of tanh(), I have written most functions so that whenever there's an input sample, it's already passed with tanh() applied.

Another convention right here: methods that start with "run..." don't change s, i.e. they don't update the filter state. They may, however, change Vout. Each run... method also has one or more accompanying "tick..." methods which also update the state. The reason is that some methods are called multiple times while the same state is required.

All cases I described are covered. Two cases use a linear estimate, for which they run the lowpass filter without any implicit tanh() term. The linear method is baked into the code for the OneStep solution. The OneStep solution actually defaults to linear if nothing else is specified.

(if you're really picky, you could say that running a linear estimate or a table lookup first turns a one step method into a multi step one... you're right if you will, I won't be splitting hairs over this ;-)

Here comes your CPU smoker right there. Newton's Method, aka Newton-Raphson, is an iterative method, a so-called root finding algorithm. What these have in common is that they take a function f(x) and find one or more zero crossings. They start at some point (the estimate!), calculate the result - which is often far from 0 - and use that result to compute a new estimate that hopefully brings us closer to a zero. I really really really wish to not go any deeper into the maths, but if you click the link above it'll take you to Wikipedia.

*Source Code! Yay!*

```
#define MAX_NEWTON_ERROR 0.000001

template<class Filter>
inline float NewtonRaphson_Solver( float& input, float& estimate, Filter* object )
{
    float residue = object->runFilter( input, estimate ) - estimate;
    int numIterations = 1;
    while( fabs(residue) > MAX_NEWTON_ERROR )
    {
        // new_estimate = old_estimate - residue/derivative
        float derivative = object->getDerivative();
        estimate = estimate - residue/derivative;

        // get next residue
        residue = object->runFilter( input, estimate ) - estimate;

        numIterations++;
        if( numIterations > 50 )
        {
            // Debug_log("BUG?!? - iterations > 50, returning 0 for safety reasons");
            // this can also happen if MAX_NEWTON_ERROR is too small,
            // e.g. due to truncation errors
            return 0;
        }
    }
    return estimate;
}
```

This is, in a nutshell, a Newton solver that can be reused for any filter that implements two important methods, runFilter() and getDerivative(). It'll call the former as often as needed to bring the estimate closer to Vout. When Vout and the estimate are the same, we speak of "convergence". In a typical scenario, convergence is reached within 2-5 iterations. In the worst case (diode ladder filter..?) it should be about 3-8 iterations (but this also requires a *slightly* enhanced solver).

In-between each call to runFilter() it picks up the derivative of the filter equation at the last estimate. This sounds like a terrible idea, math and performance wise. However, if you refresh your memory of the derivative rules (sum rule, chain rule, product rule) and if you keep in mind that the derivative of tanh(x) is just 1 - tanh^2(x), it's actually not too bad. Check this out:

*More Source Code! Yay!*

```
// ------------------------------ tanh'(x) = 1 - tanh^2(x) ------------------------------------
inline float sech2_with_tanh( float tanh_value )
{
    return 1.f - tanh_value * tanh_value;
}

inline float tanh_derivative( float x ) // not used, for edu purpose only
{
    return sech2_with_tanh( tanh(x) );
}
```

Those are two helper functions defined outside our filter class. The first one, sech2_with_tanh, will return the derivative of tanh_value = tanh(x) from, well, tanh_value itself - like I said before, many methods and functions expect their input already tanh()'d to avoid redundant calls to tanh()!

```
// ------------------------------ Newton's Method ------------------------------------
// member variable:
float tanh_result; // stores tanh(Vout) to reuse whenever applicable

float runFilter( float tanh_input /* tanh(input) */, float v_estimate )
{
    tanh_result = tanh( v_estimate );
    return g * ( tanh_input - tanh_result ) + s;
}

float getDerivative()
{
    // tanh'(x) = 1 - tanh^2(x)
    return -g * sech2_with_tanh( tanh_result ) - 1.f;
}

float tickNewtonRaphson( float input,
                         EstimateSource inSource = LinearVoutEstimate )
{
    float tanh_input = tanh(input);
    float estimate   = getInitialEstimate( tanh_input, inSource );
    Vout = NewtonRaphson_Solver( tanh_input, estimate, this );
    updateState();
    return Vout;
}
```

There you go. That's all there is. We tanhify our input, we pick up our initial estimate and then we let our filter visit the Newton-Raphson solver from the listing above. That's all it takes.

This is an utterly common and reliable way to solve implicit non-linear equations. Similar methods are bisection method, regula falsi, secant method and Brent's method. All have different advantages and drawbacks. Some are "bracketed" which means that the actual value remains between two estimates. This guarantees convergence and it guarantees a measurable accuracy. Other methods are ideal for vectorization or computationally inexpensive.
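To make the "bracketed" idea concrete, here's a minimal bisection sketch for our one pole equation - my own illustration with a made-up function name, not code from the article's filter class. Since the residual f(v) = g * ( tanh_in - tanh(v) ) + s - v is strictly decreasing in v, any bracket with f(lo) > 0 > f(hi) is guaranteed to contain the solution, and every step halves it.

```cpp
#include <cmath>

// Bisection on f(v) = g*(tanh_in - tanh(v)) + s - v. Because f is strictly
// decreasing, the root stays bracketed between lo and hi at all times.
// Hypothetical standalone helper, not part of the article's filter class.
float bisectVout( float g, float tanh_in, float s )
{
    auto f = [&]( float v ){ return g * ( tanh_in - tanhf(v) ) + s - v; };

    // |Vout| <= 2*g + |s|, so this bracket is always safe
    float hi =  2.f * g + fabsf(s) + 1.f;
    float lo = -hi;

    for( int i = 0; i < 60; ++i ) // 60 halvings exhaust float precision
    {
        float mid = 0.5f * ( lo + hi );
        if( f(mid) > 0.f ) lo = mid; // root is above mid
        else               hi = mid; // root is at or below mid
    }
    return 0.5f * ( lo + hi );
}
```

Slow compared to Newton, but it cannot diverge - which is why it pairs so well with a lookup-table estimate, as mentioned earlier.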

Newton's Method is certainly not the cheapest per iteration and it's also not always guaranteed to converge. In fact, it works best with a good initial estimate. But it's also the method that converges fastest out of the lot. If it converges. For our filter (or any classic filter topology using tanh() distortion) it's guaranteed to converge. The only reason it may not is when you drive the filter hard enough to lose the numerical floating point precision needed to get the residue below the maximum error. If you want to drive your filter hard, maybe adjust the maximum error dynamically to the filter gain.

Another reason to promote Newton's Method: it can be put in vectorial form. That is, if we calculate a higher order filter with 2, 3...11 tanh() terms, we need to converge as many estimates with their respective results. And this, in other words, is why some filters can kill the CPU. But before we get there, let's have a look at the alternatives.

Now, I guess pretty much every developer reading this has gazed over Teemu Voipio aka Mystran's "Cheap non-linear zero-delay filters" in this thread on KVR. What's described there is probably one of the most influential solutions to avoid the unit delay in both the outer loop and the Euler integration used in the infamous transistor ladder model published by Antti Huovilainen in 2004 (which btw. is a must read, as it has been a great catalyst for Analogue Modelling).

The gist of this method is that it belongs to a category of methods that approximate the implicit non-linear equation with a single computation. I've sometimes seen these solutions called "one step methods", hence I'm calling them that. They are a subset of so-called "analytical" solutions because they use maths up front to get to the result, i.e. the system's properties were analysed and a fitting solution/workaround was found. In contrast, iterative solvers are called "numerical" solutions because they don't minimise the problem, they just math it out. As computers often do.

A word of caution: none of these methods can be as accurate as the numerical ones without additional computation. However, the audible difference might be negligible if one considers that the CPU usage is not much higher than that of a filter calculated with Euler and unit delays. Up to your preference, I'd say ;-)

Addendum: There was a discussion whether or not these methods are analytical. My stance is that while they approximate to the original, implicit system numerically, we could also look at these equations as a system on their own right - which was then solved analytically. Thoughts on that are welcome in the comments section below or on KVR.

Now, the actual solution I'm presenting here is based on the concept that tanh(x) is replaced by a linear term, and quite literally so: What we do is, we replace tanh(x) - which is an s-shaped curve - by a straight line a * x + b. The latter is *the* general formula for all straight lines that map x to y. Obviously, for each sample we'd try to choose a and b so that our result is as close as possible to the s-shaped tanh() curve. Sounds logical? - Well, there are various ways to go about it.

The interesting bit is that we can use the very same code for different ways to compute a and b. In fact we can set a = 1 and b = 0 to calculate our filter linearly, i.e. without the tanh() in the negative coupling. We can furthermore set a to tanh(s)/s and b to 0 to compute Mystran's method - which I call "pivotal" because our straight line that approximates tanh() pivots around the origin (x = 0 / y = 0).

The other method I'll show is one which I think has been used in many products... but I'm not so sure of that... It's essentially based on drawing a tangent to our estimated tanh(x), and then move our function result along that line. This is in many ways identical to Newton's Method, but wrapped into a single step method.

As you can see, the pivotal method is a good "allrounder". However, a fast rise in absolute value can go way out of bounds because the slope is always steeper as values run high.

The tangential method on the other hand is very good if the estimate is good, but then it might deviate even quicker than the pivotal method, particularly after a change of sign on a large jump.

I would love to provide for a full analysis of the two and I'd also love to show more variations, e.g. such that take the statistical error of the estimate into account. But that really would make my head explode right now. So let's just cut it short and post some code:

```
// ------------------------- linear / analytical methods --------------------------
enum AnalyticalMethod
{
    Linear,    // linear solution
    Pivotal,   // Mystran's "cheap" method, using x=0 as pivot
    Tangential // Newton-like linearisation
};

float runOneStep( float tanh_input /* tanh(input) */,
                  AnalyticalMethod inMethod = Linear,
                  EstimateSource inSource = State )
{
    float b = 0.f;
    float a = 1.f;
    if( inMethod != Linear )
    {
        // base variable for tangent/angle
        float base  = getInitialEstimate( tanh_input, inSource );
        float tBase = tanh(base);
        switch( inMethod )
        {
            case Pivotal:
            {
                a = base == 0 ? 1.f : tBase/base;
                break;
            }
            case Tangential:
            {
                // tanh'(x) = 1 - tanh^2(x)
                a = sech2_with_tanh( tBase );
                b = tBase - base * a;
                break;
            }
            // surplus, but compilers may complain if we don't keep it
            case Linear: ;
        }
    }
    return ( g * ( tanh_input - b ) + s ) / ( a * g + 1 );
}
```

Yet again, this is it. That's all there is to it. Or is there? :-)

And again, like in the iterative method, we first pick up an estimate. Then we prepare a and b for whatever method out of Pivotal, Tangential and Linear we wish for. Then we compute Vout directly from that. Here's the path to get to the final equation:

Take original non-linear equation

**Vout = g * ( tanh(input) - tanh(Vout) ) + s**

Replace

**tanh(Vout) -> a * Vout + b**

Such that

**Vin = tanh(input)**

**Vout = g * ( Vin - (a * Vout + b) ) + s**

Now Vout can be isolated

**Vout = (Vin*g-b*g+s)/(a*g+1)**

Note: If a = 1 and b = 0 we get the linear form Vout = ( Vin * g + s)/( g + 1 )

For the sake of completeness, let's post the tick functions for Pivotal, Tangential and truly Linear, as previously mentioned and as announced elsewhere on KVR:

```
// method originally described by Mystran on KVR
float tickPivotal( float input )
{
    Vout = runOneStep( tanh(input), Pivotal, State );
    updateState();
    return Vout;
}

// Newton's method in one step with linear estimate
float tickTangential( float input )
{
    Vout = runOneStep( tanh(input), Tangential, LinearVoutEstimate );
    updateState();
    return Vout;
}

// --------------------------- Linear Filter ---------------------------------
// fully linear version, not shaping the input
float tickLinear( float input )
{
    Vout = runOneStep( input );
    updateState();
    return Vout;
}
```

A short note in-between. Both the iterative method as well as the one step method above seem very simple, and indeed they are - as long as we deal with one pole filters. But as soon as we go further, i.e. with higher order filters, things can become quite nasty. While one can keep iterative methods nice and clean for higher orders, doing a true "vectorial Newton solver" involves matrices and linear equation system solvers.

Therefore you'll often find a mix of different approaches right there in one filter algorithm. E.g. one can do an iterative solver for the overall feedback of a filter while keeping the single stages simple with a one step method. Also, you'll often hear of "corrective steps" which improve the accuracy of one step methods by a second or third computation - which basically also makes them iterative methods.
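As a sketch of such a corrective step, using only the equations from this article (the standalone function and its name are my own, not u-he code): take the pivotal one-step result as the estimate, then apply a single Newton update to it.

```cpp
#include <cmath>

// One pivotal step followed by a single Newton correction - a sketch of the
// "corrective step" idea. Standalone: g, s are passed in rather than being
// member variables, and no state update happens here.
float pivotalPlusNewtonStep( float g, float s, float input )
{
    float tanh_in = tanhf( input );

    // pivotal one-step: tanh(v) ~ (tanh(s)/s) * v, pivoting around the origin
    float a = ( s == 0.f ) ? 1.f : tanhf(s) / s;
    float v = ( g * tanh_in + s ) / ( a * g + 1.f );

    // one Newton correction on f(v) = g*(tanh_in - tanh(v)) + s - v
    float t          = tanhf( v );
    float residue    = g * ( tanh_in - t ) + s - v;
    float derivative = -g * ( 1.f - t * t ) - 1.f; // f'(v), tanh' = 1 - tanh^2
    return v - residue / derivative;
}
```

One such correction typically shrinks the residue by a couple of orders of magnitude, at the cost of one extra tanh().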

Everything is allowed - as long as it sounds good.

Anyhow, the focus of this article is teaching the differences of the basic popular methods. There's always a lot more that can be said and done, and I'll be happy to discuss a few more in-depth things later or elsewhere.

Look Up Tables for filters *...whaaat?* - Being a bit of an outsider, this method only applies to one pole filters. There's a tiny chance of maybe doing it for 2nd order filters, but you'll soon run out of memory. But who knows, maybe this method is great for FPGAs... or someone comes up with great equations that approximate the table solution in 17 dimensions...

The idea is this:

Out of all variables in Vout = g * ( tanh(Vin) - tanh(Vout) ) + s only Vout is unknown, all other values are available beforehand. So how do we get this into a lookup table?

If we rewrite the equation as

Vout = ( -g * tanh(Vout) ) + ( g * tanh(Vin) + s )

We can see that Vout depends on g itself and on g * tanh(Vin) + s. The latter we can fully calculate into a single value:

State_gIn = g * tanh(Vin) + s;

Now we can create a lookup table with g in one dimension and State_gIn in another. We can then use our Newton-Raphson solver to populate that table with the correct values for Vout. Next thing you know, you can read Vout from g and State_gIn without a single tanh() term involved :-) - furthermore, a fairly small table gives us a pretty good result from bilinear interpolation alone (that is, linear interpolation in two dimensions, as used in graphics software).

Hint: a table with 256 * 256 entries and linear interpolation gives us perfect results, but the table is quite big. We could always optimise it by cutting it in half and storing the sign bit of State_gIn, but d'oh... I haven't checked that out yet. Hint 2: Check out 3D plots. You might find ways to minimize the table in one dimension or the other.

Now, for some unexpected fun, here's a small table for a pretty good initial guess:

```
// ------------------------ two dimensional lookup table ----------------------------
#define numEntries 16

float table[ numEntries ][ numEntries ];
float minG, maxG, maxState_gIn;
float coeffMul, maxState_gInMul;

void createTable( float inMinG, float inMaxG, float inMaxState_gIn )
{
    // using numEntries-2 for boundaries so that
    // reading index+1 isn't out of bounds later
    minG = inMinG;
    maxG = inMaxG;
    maxState_gIn = inMaxState_gIn;
    coeffMul = ((float)(numEntries-2)) / (maxG - minG);
    maxState_gInMul = ((float)(numEntries-2)) / (2.f * maxState_gIn);
    clear();
    for( int i = 0; i < numEntries; i++ )
    {
        g = minG + (maxG - minG) * (float)i / float( numEntries - 2 );
        for( int k = 0; k < numEntries; k++ )
        {
            float State_gIn = -maxState_gIn
                            + 2 * maxState_gIn * (float)k
                              / float( numEntries - 2 );
            s = State_gIn;
            table[ i ][ k ] = tickNewtonRaphson( 0.f );
        }
    }
}

float runLookUp( float tanh_input /* tanh(input) */ )
{
    float inG = g;
    if( inG < minG ) return s;
    if( inG > maxG ) inG = maxG;

    float State_gIn = inG * tanh_input + s;
    if( State_gIn >  maxState_gIn ) State_gIn =  maxState_gIn;
    if( State_gIn < -maxState_gIn ) State_gIn = -maxState_gIn;

    State_gIn = (State_gIn + maxState_gIn) * maxState_gInMul;
    inG       = (inG - minG) * coeffMul;

    int State_gIn_index = (int) State_gIn;
    int inG_index       = (int) inG;

    float State_gIn_fract = State_gIn - floorf( State_gIn );
    float inG_fract       = inG - floorf( inG );

    float ingLowSL  = table[ inG_index    ][ State_gIn_index    ];
    float ingLowSH  = table[ inG_index    ][ State_gIn_index +1 ];
    float ingHighSL = table[ inG_index +1 ][ State_gIn_index    ];
    float ingHighSH = table[ inG_index +1 ][ State_gIn_index +1 ];

    float ingLow  = ingLowSL  + State_gIn_fract * ( ingLowSH  - ingLowSL  );
    float ingHigh = ingHighSL + State_gIn_fract * ( ingHighSH - ingHighSL );

    return ingLow + inG_fract * ( ingHigh - ingLow );
}

float tickLookup( float input )
{
    Vout = runLookUp( tanh(input) );
    updateState();
    return Vout;
}
```

Ok, it looks a bit more messy than the other solutions, but maybe you can customize it to your needs. Which brings up the final chapter for today:

I doubt that many people have ever before lost so many words on such a simple lowpass filter. It's for a good cause though, because I hope I could illustrate some of the methods that I and others so conveniently mention in the forums all the time. I hope that people will put this to good use and start exploring the possibilities. For instance, adapting the one step method for the one pole lowpass with an inverting input is a bit of a challenge.

To be honest, I'd much rather post challenges than recipes that one could just copy and paste. There's a lot of challenge underlying these technologies. Making them fast, making them more accurate, adapting them for new filter types. It's endless. And also of course, there are more methods - but you gotta start somewhere...

Anyhow, my vacation is almost over, so I doubt I'll have time to post the next step anytime soon. It might include a bit about a higher order filter and/or the vectorial Newton method, which is then also how we relate the principles laid out so far to actual circuit simulators. Or maybe I'll do another piece about one pole filters that don't have tanh() distortion... We'll see...

Lastly, here's the full filter class and utility functions for you to work with. Compiles here, should compile for you too...

http://www.u-he.com/downloads/UrsBlog/onePoleLowpass.h

Enjoy,

- Urs

Update Feb 27th 2016: The same oversight happened here as in the previous article - I messed up the OTA voltage to current equation. Hence I also changed the schematic and equations in this article to a general concept of a Voltage Controlled Current Source. As promised, I'll try to add a solution for the proper OTA-based one pole filter some time. Also, I added some thoughts on the analytical vs. numerical debate (which slightly confused me as well ;-)
