So then I turn to the analogue synth and set the knobs to similar settings. With extreme modulations, though, just touching a pot ever so slightly may alter the sound quite a bit. So it takes a minute or two until everything lines up, and sure enough, the result may be closer than expected. Which my colleagues in the next room notice by my giggles, or because I call them over to demonstrate the effect.

It happens regularly that on the next day, or maybe just a coffee break later, that match is gone. Suddenly the analogue synth doesn't do the same thing anymore, and sometimes it isn't even possible to dial the same tone back in. It may be subtle, but it's an audible difference.

So there it is. No two analogue synths of the same make sound the same, and actually not even a single analogue synth sounds like itself at two different points in time.

What this means is, it doesn't really make much sense to do those AB comparisons between analogue hardware and softsynths that are supposed to model said hardware. They're not only controversial because of bias on both the maker's and the listener's side, or because of sound design skills, choice of settings, genre and numerous other possible reasons - they're most of all controversial because it's inherently unlikely, if not impossible, to create the definitive match. This alone leaves the door wide open for people to hear what they want to hear. I'll get back to that later.

So where do these differences stem from?

First of all, the most prominent candidate is temperature. Everybody knows that analogue synths need to warm up before the oscillators stay in tune. But it's not just the oscillators: the filters, envelopes, amplifiers, glide circuit, modulation depth and whatever else are also subject to change with temperature. We're simply not as perceptive of these effects as we are of out-of-tune oscillators. So it's not just the pitch that drifts over time, it's also the sound.

In the same vein, climate also plays a role albeit a tiny one. Humidity and air pressure may have very little effect, but they do have some effect. Hey, a cent off is a cent off. I was told that the developers of Monark had a Winter model and a Summer one. I don't know which one was chosen in the end, I'd be curious to know which one they thought sounded better.

In any case, when you start matching a digital model to an analogue synth in the morning in an office that - no offense - gets more humid over the day, it'll simply sound a tad different in the evening. Likewise if the afternoon sun comes round and you turn the aircon on...

With this in mind, it may make sense to record the synth in the parameter ranges you're trying to match, and verify how the beginning of your work matches up with the end. Keep track of your drift, and if it's bad, maybe model to a snapshot (samples...) entirely.

Secondly, there are part tolerances that may create some difference between two specimens of the same synth. Most people have heard of matched transistors. It's often noted in schematics whether or not transistors must be matched. Which in turn means there's a whole lot of them which are not matched. If they aren't matched on the board, chances are they aren't matched among machines. Thus they're all a tad different, as are diodes, resistors, capacitors, knobs, faders and all the complex parts glued into chips as well.

Those aren't earthshattering differences, not much at all. But they add another dimension of subtlety to the aforementioned temperature drift. For instance, with our Pro One, when I dial resonance from the start of self oscillation to maximum, the pitch of the self oscillating filter goes up by about a quarter tone. When I asked people to send audio samples from their Pro Ones, the pitch would usually go down towards maximum resonance, some by as much as a third or a fourth - as it does in the majority of synths out there.

One can get datasheets for most parts that are used in vintage synthesizers. Not only do they contain formulae that are useful for modelling, but component tolerances are often quoted, i.e. how much their properties are allowed to vary. If some resistance can be off by 5%, it's sometimes quite surprising that synths sound similar at all. Which brings up the next bit:

What's baked into part tolerances is usually compensated by little trimpots that let your synth repair man dial in correct frequencies and voltages in critical sections. While this is supposed to level out the differences between units it may also bear potential for sonic differences. These trimpots can also be used to change the sound in many ways, and like their counterparts on the front panel, a tad to the left or right may change the sound ever so slightly, yet audibly.

Furthermore when synths get serviced, parts may be exchanged. Some people have collected filter or envelope chips that match each other in order to equip polysynths with a set of chips that sound as similar as possible. Some people also modify some of the resistors and capacitors for smoother or more aggressive operation. That's not necessarily circuit bending, it should merely be seen as improving the hardware surgically.

So, analogue synth aficionados - including myself - will often point out that they made this or that sound on a freshly serviced device. This doesn't mean that all freshly serviced units will sound exactly the same. Not only because of everything else, but also because the people who calibrate synths take their own pride in making that synth sound better than when it came fresh from the factory!

But even a good service won't always bring back what's lost through wear and tear. It's not only the pots that become scratchy and all that; some parts simply break gradually over the years. Most classically it's electrolytic capacitors that run dry. But of course there's also dust, things getting loose or just old and squishy, who knows. A synth that has been locked in the basement for a decade will need service, there's no question about that.

Talking about age in a synth is a tad ambiguous. The more vintage, the higher the price. But how likely is it that the sound of a synthesizer ages as well as a great port in a barrel? I secretly hope that the sound doesn't really change that much, not least since parts can be replaced. However, Hans Zimmer told me that his newly built Moog Modular sounds a great deal better than the vintage wall at the back of his studio - same modules, 40+ years apart.

Lo and behold: climate, parts, service and age are the main reasons for sonic differences between two specimens of the same synth - heck, even within a single synth. No-one expects their synth to sound the same before and after service, but why not before and after lunch?

So, how can we ever do those popular AB tests again when we know all this?

Some clarification up front: I haven't invented any of this. It's all pretty standard stuff, but it might not all be very obvious to many people. I know it took me a while to grasp all of this, and it's a bit frightening to see it laid out so simply in front of me now. Hehehe, but wait for the next chapters. This one is still on the easy gear!

Here's your standard active lowpass filter, all naked and without the bloat:

This time I threw KCL right into the graph. Hence we get:

0 = Ivccs - Icap

0 = g * ( tanh(Vin) - tanh(Vout) ) - Vout + s

which leads to the following "unsolvable" equation:

**Vout = g * ( tanh(Vin) - tanh(Vout) ) + s**

To sum this up, the general problem we face is: if we want to compute Vout, we already need to know Vout. It's computed by itself. This article is about methods to do exactly that in the non-linear case, all forgoing psychic abilities. I'll be showing three kinds of basic techniques that are more or less fit to serve this purpose:

- Numerical Methods using iterative solvers
- Analytical Methods using "one step" solutions
- Lookup tables (yep, that works too)

As a helpful addendum, if there weren't any tanh() terms, we'd have a linear equation and we could solve for Vout, like this (which we'll use for various purposes later on):

**VoutLin = ( g * Vin + s ) / ( g + 1 )**

But first things first... something really helpful to get started with:

Every method to solve this kind of implicit non-linear filter works best with a proper estimate. That means, one way or another we "guess" the value of Vout - or tanh(Vout).

The most well known - and probably the worst - estimate is using Vout from its previous computation. While you'll find this recommended often, particularly for iterative methods, it's not necessarily a good starting point. For a one pole filter it's probably better to use the linear estimate. That is, we simply compute VoutLin using the linear equation and start our iterative solver or our one step method from there.

If you look at the graphics in my Numerical Integration article, you'll notice that s (or iceq) is sort of in the middle between two Vouts. It's proven quite useful to use s[n+1] as an estimate for Vout[n+1]. But how about a linear estimate for s[n+2]? (Hint: It ain't good, but it may encourage experimenting, for instance you could blend it with something else)

Lastly, I mentioned lookup tables. If precision is key, calculating the initial estimate from a lookup table and running, say, a bisection solver will work wonders.

Here's our first piece of code:

```
// member variables:
float Vout; // output
float s;    // state

enum EstimateSource
{
    State,               // use current state
    PreviousVout,        // use z-1 of Vout
    LinearStateEstimate, // use linear estimate of future state
    LinearVoutEstimate,  // use linear estimate of Vout
    LookUpTable          // use lookup table
};

float getInitialEstimate( float tanh_input, EstimateSource inSource )
{
    if( inSource == LinearStateEstimate
     || inSource == LinearVoutEstimate )
    {
        // run linear, update Vout
        Vout = runOneStep( tanh_input );
    }
    switch( inSource )
    {
        default:
        case LinearVoutEstimate:
        case PreviousVout:        return Vout;
        case State:               return s;
        case LinearStateEstimate: return 2 * Vout - s;
        case LookUpTable:         return runLookUp( tanh_input );
    }
}
```

Note that, in order to avoid complaints about excessive use of tanh(), I have written most functions so that whenever there's an input sample, it's passed with tanh() already applied.

Another convention right here: methods that start with "run..." don't change s, i.e. they don't update the filter state. They may, however, change Vout. Each run... method also has one or more accompanying "tick..." methods which also update the state. The reason is that some methods are called multiple times while the same state is required.

All cases I described are covered. Two cases use a linear estimate, for which they run the lowpass filter without any implicit tanh() term. The linear method is baked into the code for the OneStep solution. The OneStep solution actually defaults to linear if nothing else is specified.

(if you're really picky, you could say that running a linear estimate or a table lookup first turns a one step method into a multi step one... you may well be right, but I won't be splitting hairs over this ;-)

Here comes your CPU smoker right there. Newton's Method, aka Newton-Raphson, is an iterative method, a so-called root finding algorithm. What the latter have in common is that they take a function f(x) and find one or more zero crossings. They start at some point (the estimate!), calculate the result - which is often far from 0 - and use that result to compute a new estimate that hopefully brings us closer to a 0. I really really really wish to not go any deeper into the maths, but if you click the link above it'll take you to Wikipedia.

*Source Code! Yay!*

```
#define MAX_NEWTON_ERROR 0.000001

template<class Filter>
inline float NewtonRaphson_Solver( float& input, float& estimate, Filter* object )
{
    float residue = object->runFilter( input, estimate ) - estimate;
    int numIterations = 1;
    while( fabs(residue) > MAX_NEWTON_ERROR )
    {
        // new_estimate = old_estimate - residue/derivative
        float derivative = object->getDerivative();
        estimate = estimate - residue/derivative;

        // get next residue
        residue = object->runFilter( input, estimate ) - estimate;

        numIterations++;
        if( numIterations > 50 )
        {
            // Debug_log("BUG?!? - iterations > 50, returning 0 for safety reasons");
            // this can also happen if MAX_NEWTON_ERROR is too small,
            // e.g. due to truncation errors
            return 0;
        }
    }
    return estimate;
}
```

This is, in a nutshell, a Newton solver that can be reused for any filter that implements two important methods, runFilter() and getDerivative(). It'll call the former as often as it needs to bring the estimate closer to Vout. If Vout and the estimate are the same, we speak of "convergence". In a typical scenario, convergence is reached within 2-5 iterations. In the worst case (diode ladder filter..?) it should be about 3-8 iterations (but this also requires a *slightly* enhanced solver).

In-between each call to runFilter() it picks up the derivative of the filter equation at the last estimate. This sounds like a terrible idea, maths and performance wise. However, if you refresh your memory on derivative rules (sum rule, chain rule, product rule) and keep in mind that the derivative of tanh(x) is just 1 - tanh(x)^2, it's actually not too bad. Check this out:

*More Source Code! Yay!*

```
// ------------------------------ tanh'(x) = 1 - tanh^2(x) ------------------------------------
inline float sech2_with_tanh( float tanh_value )
{
    return 1.f - tanh_value * tanh_value;
}

inline float tanh_derivative( float x ) // not used, for edu purpose only
{
    return sech2_with_tanh( tanh(x) );
}
```

Those are two helper functions defined outside our filter class. The first one, sech2_with_tanh, will return the derivative of tanh(x) from tanh_value = tanh(x) itself - like I said before, many methods and functions expect their input already tanh()'d to avoid redundant calls to tanh()!

```
// ------------------------------ Newton's Method ------------------------------------
// member variable:
float tanh_result; // stores tanh(Vout) to reuse whenever applicable

float runFilter( float tanh_input /* tanh(input) */, float v_estimate )
{
    tanh_result = tanh( v_estimate );
    return g * ( tanh_input - tanh_result ) + s;
}

float getDerivative()
{
    // tanh'(x) = 1 - tanh^2(x)
    return -g * sech2_with_tanh( tanh_result ) - 1.f;
}

float tickNewtonRaphson( float input,
                         EstimateSource inSource = LinearVoutEstimate )
{
    float tanh_input = tanh(input);
    float estimate = getInitialEstimate( tanh_input, inSource );
    Vout = NewtonRaphson_Solver( tanh_input, estimate, this );
    updateState();
    return Vout;
}
```

There you go. That's all there is. We tanhify our input, we pick up our initial estimate and then we let our filter visit the Newton-Raphson solver from the listing above. That's all it takes.

This is an utterly common and reliable way to solve implicit non-linear equations. Similar methods are the bisection method, regula falsi, the secant method and Brent's method. All have different advantages and drawbacks. Some are "bracketed", which means that the actual value remains between two estimates. This guarantees convergence and it guarantees a measurable accuracy. Other methods are ideal for vectorization or are computationally inexpensive.
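Since the article doesn't list code for a bracketed method, here's a minimal bisection sketch for our one pole equation (my own illustration with made-up names, not the article's code). It exploits the fact that the residual g * ( tanh(Vin) - tanh(Vout) ) + s - Vout is strictly decreasing in Vout, and that |tanh| <= 1 pins the root inside [ s + g*(tanh(Vin) - 1), s + g*(tanh(Vin) + 1) ]:

```cpp
#include <cmath>

// residual of the implicit equation: f(v) = g*(tanh_in - tanh(v)) + s - v
// f is strictly decreasing in v, so there's exactly one root
float bisectResidual( float v, float tanh_in, float g, float s )
{
    return g * ( tanh_in - std::tanh( v ) ) + s - v;
}

// bisection: halve the bracket until it's narrower than tol
float bisection_Solver( float tanh_in, float g, float s, float tol = 1e-6f )
{
    // |tanh(Vout)| <= 1 guarantees this bracket contains the root
    float lo = s + g * ( tanh_in - 1.f ); // residual >= 0 here
    float hi = s + g * ( tanh_in + 1.f ); // residual <= 0 here
    while( hi - lo > tol )
    {
        float mid = 0.5f * ( lo + hi );
        if( bisectResidual( mid, tanh_in, g, s ) > 0.f )
            lo = mid; // root lies above mid
        else
            hi = mid; // root lies at or below mid
    }
    return 0.5f * ( lo + hi );
}
```

Unlike Newton, each step is dirt cheap and convergence is guaranteed, but you only gain one bit of accuracy per iteration - which is why a good initial bracket (e.g. from a lookup table) helps so much.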

Newton's Method is certainly not the cheapest per iteration and it's also not always guaranteed to converge. In fact, it works best with a good initial estimate. But it's also the method that converges the fastest of the lot. If it converges. For our filter (or any classic filter topology using tanh() distortion) it's guaranteed to converge. The only reason it may not is when you drive the filter hard enough to lose the numerical floating point precision needed to bring the residue below the maximum error. If you want to drive your filter hard, maybe adjust the maximum error dynamically to the filter gain.

Another reason to promote Newton's Method: it can be put into vectorial form. That is, if we calculate a higher order filter with 2, 3...11 tanh() terms, we need to converge as many estimates with their respective results. And this, in other words, is why some filters can kill the CPU. But before we get there, let's have a look at the alternatives.

Now, I guess pretty much every developer reading this has gazed at Teemu Voipio aka Mystran's "Cheap non-linear zero-delay filters" thread on KVR. What's described there is probably one of the most influential solutions to avoid the unit delay in both the outer loop and the Euler integration used in the infamous transistor ladder model published by Antti Huovilainen in 2004 (which btw. is a must-read, as it has been a great catalyst for analogue modelling).

The gist of this method is that it belongs to a category of methods that approximate the implicit non-linear equation with a single computation. I've sometimes seen these solutions called "one step methods", hence that's what I'm calling them. They are a subset of so-called "analytical" solutions because they use maths up front to get to the result, i.e. the system's properties were analyzed and a fitting solution/workaround was found. In contrast, iterative solvers are called "numerical" solutions because they don't analyze the problem, they just crunch the numbers. As computers often do.

A word of caution: none of these methods can be as accurate as the numerical ones without additional computation. However, the audible difference might be negligible if one considers that the CPU usage is not much higher than that of a filter calculated with Euler integration and unit delays. Up to your preference, I'd say ;-)

Addendum: There was a discussion about whether or not these methods are analytical. My stance is that while they approximate the original, implicit system numerically, we could also look at these equations as a system in its own right - which was then solved analytically. Thoughts on that are welcome in the comments section below or on KVR.

Now, the actual solution I'm presenting here is based on the concept that tanh(x) is replaced by a linear term, and quite literally so: we replace tanh(x) - which is an s-shaped curve - by a straight line a * x + b. The latter is *the* general formula for all straight lines that map x to y. Obviously, for each sample we'd try to choose a and b so that our result comes as close as possible to the s-shaped tanh() curve. Sounds logical? - Well, there are various ways to go about it.

The interesting bit is that we can use the very same code for different ways to compute a and b. In fact, we can set a = 1 and b = 0 to calculate our filter linearly, i.e. without the tanh() in the negative coupling. We can furthermore set a to tanh(s)/s and b to 0 to compute Mystran's method - which I call "pivotal" because our straight line that approximates tanh() pivots around the origin (x = 0, y = 0).

The other method I'll show is one which I think has been used in many products... but I'm not so sure of that... It's essentially based on drawing a tangent to tanh(x) at our estimate, and then moving our function result along that line. This is in many ways identical to Newton's Method, but wrapped into a single step.

As you can see, the pivotal method is a good "all-rounder". However, a fast rise in absolute value can go way out of bounds, because the slope is always too steep as values run high.

The tangential method on the other hand is very good if the estimate is good, but then it might deviate even quicker than the pivotal method, particularly after a change of sign on a large jump.

I would love to provide a full analysis of the two, and I'd also love to show more variations, e.g. ones that take the statistical error of the estimate into account. But that really would make my head explode right now. So let's just cut it short and post some code:

```
// ------------------------- linear / analytical methods --------------------------
enum AnalyticalMethod
{
    Linear,    // linear solution
    Pivotal,   // Mystran's "cheap" method, using x=0 as pivot
    Tangential // Newton-like linearisation
};

float runOneStep( float tanh_input /* tanh(input) */,
                  AnalyticalMethod inMethod = Linear,
                  EstimateSource inSource = State )
{
    float b = 0.f;
    float a = 1.f;
    if( inMethod != Linear )
    {
        // base variable for tangent/angle
        float base = getInitialEstimate( tanh_input, inSource );
        float tBase = tanh(base);
        switch( inMethod )
        {
            case Pivotal:
            {
                a = base == 0 ? 1.f : tBase/base;
                break;
            }
            case Tangential:
            {
                // tanh'(x) = 1 - tanh^2(x)
                a = sech2_with_tanh( tBase );
                b = tBase - base * a;
                break;
            }
            // surplus, but compilers may complain if we don't keep it
            case Linear: ;
        }
    }
    return ( g * ( tanh_input - b ) + s ) / ( a * g + 1 );
}
```

Yet again, this is it. That's all there is to it. Or is there? :-)

And again, like in the iterative method, we first pick up an estimate. Then we prepare a and b for whatever method out of Pivotal, Tangential and Linear we wish for. Then we compute Vout directly from that. Here's the path to get to the final equation:

Take original non-linear equation

**Vout = g * ( tanh(input) - tanh(Vout) ) + s**

Replace

**tanh(Vout) -> a * Vout + b**

Such that

**Vin = tanh(input)**

**Vout = g * ( Vin - (a * Vout + b) ) + s**

Now Vout can be isolated

**Vout = (Vin*g-b*g+s)/(a*g+1)**

Note: If a = 1 and b = 0 we get the linear form Vout = ( Vin * g + s)/( g + 1 )

For the sake of completeness, let's post the tick functions for Pivotal, Tangential and truly Linear, as previously mentioned and as announced elsewhere on KVR:

```
// method originally described by Mystran on KVR
float tickPivotal( float input )
{
    Vout = runOneStep( tanh(input), Pivotal, State );
    updateState();
    return Vout;
}

// Newton's method in one step with linear estimate
float tickTangential( float input )
{
    Vout = runOneStep( tanh(input), Tangential, LinearVoutEstimate );
    updateState();
    return Vout;
}

// --------------------------- Linear Filter ---------------------------------
// fully linear version, not shaping the input
float tickLinear( float input )
{
    Vout = runOneStep( input );
    updateState();
    return Vout;
}
```

A short note in-between. Both the iterative method as well as the one step method above seem very simple, and indeed they are - as long as we deal with one pole filters. But as soon as we go further, i.e. with higher order filters, things can become quite nasty. While one can keep iterative methods nice and clean for higher orders, doing a true "vectorial Newton solver" involves matrices and linear equation system solvers.

Therefore you'll often find a mix of different approaches within one filter algorithm. For example, one can use an iterative solver for the overall feedback of a filter while keeping the single stages simple with a one step method. You'll also often hear of "corrective steps", which improve the accuracy of one step methods by a second or third computation - which basically makes them iterative methods as well.
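To sketch what such a corrective step might look like (my own illustration with a hypothetical function name, not code from the article): compute the pivotal one-step result, then refine it with a single Newton iteration on the residual.

```cpp
#include <cmath>

// one sample of the one pole: pivotal one-step estimate, then one
// Newton "corrective step" on f(v) = g*(tanh_input - tanh(v)) + s - v
// (g is the cutoff coefficient, s the state, input already tanh()'d)
float pivotalPlusCorrection( float tanh_input, float g, float s )
{
    // pivotal one-step: replace tanh(Vout) by (tanh(s)/s) * Vout
    float a = ( s == 0.f ) ? 1.f : std::tanh( s ) / s;
    float v = ( g * tanh_input + s ) / ( a * g + 1.f );

    // single Newton iteration, reusing tanh(v) for the derivative
    float tv = std::tanh( v );
    float residue = g * ( tanh_input - tv ) + s - v;
    float derivative = -g * ( 1.f - tv * tv ) - 1.f;
    return v - residue / derivative;
}
```

One such step roughly squares the error of the one-step result, at the cost of a single extra tanh() per sample.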

Everything is allowed - as long as it sounds good.

Anyhow, the focus of this article is teaching the differences of the basic popular methods. There's always a lot more that can be said and done, and I'll be happy to discuss a few more in-depth things later or elsewhere.

Look Up Tables for filters *...whaaat?* - This method is a bit of an outsider, and it only applies to one pole filters. There's a tiny chance of maybe doing it for 2nd order filters, but you'll soon run out of memory. But who knows, maybe this method is great for FPGAs... or someone comes up with great equations that approximate the table solution in 17 dimensions...

The idea is this:

Out of all variables in Vout = g * ( tanh(Vin) - tanh(Vout) ) + s only Vout is unknown, all other values are available beforehand. So how do we get this into a lookup table?

If we rewrite the equation as

Vout = ( -g * tanh(Vout) ) + ( g * tanh(Vin) + s )

We can see that Vout depends on g itself and on g * tanh(Vin) + s. The latter we can calculate into a single value up front:

State_gIn = g * tanh(Vin) + s;

Now we can create a lookup table for g in one dimension and State_gIn in another. We can then use our Newton-Raphson solver to populate that table with the correct values for Vout. Next thing you know, you can read Vout from g and State_gIn without a single tanh() term involved :-) - furthermore, a fairly small table gives us a pretty good result from bilinear interpolation alone (that is, linear interpolation in two dimensions, as used in graphics software).

Hint: a table with 256 * 256 entries and linear interpolation gives us perfect results, but the table is quite big. We could always optimise it by cutting it in half and storing the sign bit of State_gIn, but d'oh... I haven't checked that out yet. Hint 2: Check out 3D plots. You might find ways to minimize the table in one dimension or the other.
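On that first hint, the half-table idea relies on a symmetry that's easy to verify: tanh() is odd, so the Vout that solves Vout + g*tanh(Vout) = State_gIn simply flips sign with State_gIn. A sketch (my own, with hypothetical names; a Newton iteration stands in for the table read):

```cpp
#include <cmath>

// solve v + g*tanh(v) = x directly (stands in for a table lookup here)
float solveVout( float g, float x )
{
    float v = x / ( g + 1.f ); // linear estimate
    for( int i = 0; i < 20; i++ ) // Newton iterations
    {
        float tv = std::tanh( v );
        v -= ( v + g * tv - x ) / ( 1.f + g * ( 1.f - tv * tv ) );
    }
    return v;
}

// exploit the odd symmetry: only non-negative x ever hits the "table",
// so half of the State_gIn dimension can be dropped
float solveVoutMirrored( float g, float x )
{
    float sign = ( x < 0.f ) ? -1.f : 1.f;
    return sign * solveVout( g, sign * x );
}
```

In a real implementation the inner call would be the bilinear table read from above, with the sign folded back in afterwards.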

Now, for some unexpected fun, here's a small table for a pretty good initial guess:

```
// ------------------------ two dimensional lookup table ----------------------------
#define numEntries 16

float table[ numEntries ][ numEntries ];
float minG, maxG, maxState_gIn;
float coeffMul, maxState_gInMul;

void createTable( float inMinG, float inMaxG, float inMaxState_gIn )
{
    // using numEntries-2 for boundaries so that
    // reading index+1 isn't out of bounds later
    minG = inMinG;
    maxG = inMaxG;
    maxState_gIn = inMaxState_gIn;
    coeffMul = ((float)(numEntries-2)) / (maxG - minG);
    maxState_gInMul = ((float)(numEntries-2)) / (2.f * maxState_gIn);
    clear();
    for( int i = 0; i < numEntries; i++ )
    {
        g = minG + (maxG - minG) * (float)i / float( numEntries - 2 );
        for( int k = 0; k < numEntries; k++ )
        {
            float State_gIn = -maxState_gIn
                            + 2 * maxState_gIn * (float)k
                            / float( numEntries - 2 );
            s = State_gIn;
            table[ i ][ k ] = tickNewtonRaphson( 0.f );
        }
    }
}

float runLookUp( float tanh_input /* tanh(input) */ )
{
    float inG = g;
    if( inG < minG ) return s;
    if( inG > maxG ) inG = maxG;

    float State_gIn = inG * tanh_input + s;
    if( State_gIn >  maxState_gIn ) State_gIn =  maxState_gIn;
    if( State_gIn < -maxState_gIn ) State_gIn = -maxState_gIn;

    State_gIn = (State_gIn + maxState_gIn) * maxState_gInMul;
    inG = (inG - minG) * coeffMul;

    int State_gIn_index = (int) State_gIn;
    int inG_index = (int) inG;
    float State_gIn_fract = State_gIn - floorf( State_gIn );
    float inG_fract = inG - floorf( inG );

    float ingLowSL  = table[ inG_index    ][ State_gIn_index    ];
    float ingLowSH  = table[ inG_index    ][ State_gIn_index +1 ];
    float ingHighSL = table[ inG_index +1 ][ State_gIn_index    ];
    float ingHighSH = table[ inG_index +1 ][ State_gIn_index +1 ];

    float ingLow  = ingLowSL  + State_gIn_fract * ( ingLowSH  - ingLowSL  );
    float ingHigh = ingHighSL + State_gIn_fract * ( ingHighSH - ingHighSL );
    return ingLow + inG_fract * ( ingHigh - ingLow );
}

float tickLookup( float input )
{
    Vout = runLookUp( tanh(input) );
    updateState();
    return Vout;
}
```

Ok, it looks a bit more messy than the other solutions, but maybe you can customize it to your needs. Which brings up the final chapter for today:

I doubt that many people have ever before lost so many words on such a simple lowpass filter. It's for a good cause though, because I hope I could illustrate some of the methods that I and others so conveniently mention in the forums all the time. I hope that people will put this to good use and start exploring the possibilities. For instance, adapting the one step method for the one pole lowpass with an inverting input is a bit of a challenge.

To be honest, I'd much rather post challenges than recipes that one could just copy and paste. There's a lot of challenge underlying these technologies. Making them fast, making them more accurate, adapting them for new filter types. It's endless. And also of course, there are more methods - but you gotta start somewhere...

Anyhow, my vacation is almost over, so I doubt I'll have time to post the next step anytime soon. That might include a bit about a higher order filter and/or the vectorial Newton method, which is also how the principles laid out so far relate to actual circuit simulators. Or maybe I'll do another piece about one pole filters that don't have tanh() distortion... We'll see...

Lastly, here's the full filter class with utility functions for you to work with. It compiles here, so it should compile for you too...

http://www.u-he.com/downloads/UrsBlog/onePoleLowpass.h

Enjoy,

- Urs

Update Feb 27th 2016: The same oversight happened here as in the previous article - I messed up the OTA voltage to current equation. Hence I also changed the schematic and equations in this article to a general concept of a Voltage Controlled Current Source. As promised, I'll try to add a solution for the proper OTA-based one pole filter some time. Also, I added some thoughts on the analytical vs. numerical debate (which slightly confused me as well ;-)

Did you know?

- one pole filters can have separate inputs for lowpass and highpass outputs
- active one pole filters can essentially have an additional inverting lowpass input
- despite the tanh() terms of the active lowpass, the output is *not* bound to +/- 1

Here's a schematic with a bunch of simplifications and ingredients that help us model a lowpass filter, e.g. as used in a transistor ladder or an OTA cascade configuration (more on the difference later):

I'm sure most people who deal with filter models have already seen and understood the theory behind one pole lowpass filters. You can either believe me or read the Cytomic paper when I say that the linear approximation of an active lowpass leads to the same equations as that of an RC filter. If we assume the capacitor is connected to ground (V=0) and the inverting LP input isn't connected, we get exactly the same equation Vout = ( g * Vin + s ) / ( g + 1 ) as in our RC example.

The first thing to note is that I have chosen a component labelled "VCCS". In the first version of this article this was an OTA, but I got the equation wrong. Fixing this would have made this tutorial either too complex, or the filter model too inflexible. However, by choosing a more general component - a Voltage Controlled Current Source with separately non-linear inputs, which is more a concept than an actual part - we get a building block that's both useful for modelling transistor ladders and OTA cascades *and* explains the technique. To illustrate the trick, here are the formulas for active one pole filters in comparison:

**OTA:** Iota = g * tanh(V+ - V-) *with simple circuit schematic*

**Transistor ladder:** Itl = g * ( tanh(V+) - tanh(V-) ) *with rather complex schematic*

**VCCS:** Ivccs = g * ( tanh(V+) - tanh(V-) ) *with simple schematic*

The second thing you should notice is the inverting lowpass input. The reason this works (even in the case of the simple OTA based circuit) is that there's a virtual buffer between the filter's output and the negative coupling input of the amplifier. That virtual buffer is a set of resistors (typically in the vicinity of 100k) which surround the circuitry in real OTA circuits but have no effect on the actual maths. They do, however, minimize the feed forward from the inverting input to Vout by a considerable factor, so that no audible signal can bleed through. Hence we can treat these resistors as a virtual buffer between our filter output and the negative port of the VCCS (which in this case behaves exactly like an actual OTA). This makes our active lowpass filter unlike an RC filter, even if we assume linear approximations.

So, let's do the math for the linear case!

currents:

Ivccs = g * ( Vlp - (Vout + Vln) )

Icap = (Vout - Vhp) + s

KCL:

0 = Ivccs - Icap

0 = g * ( Vlp - (Vout + Vln) ) - (Vout - Vhp) + s

hence: Vout = (-Vln*g+Vlp*g+Vhp+s)/(g+1)

Where Vlp is the non-inverting lowpass input, Vln is the inverting one and Vhp is the highpass input. Then, for trapezoidal integration we get the following update of s for the next sample step:

s[n+1] = 2*(Vout[n]-Vhp[n]) - s[n]

Let's also not forget that g = tan(PI * cutoff/samplerate), which makes this all that's needed to compute this active input-mixing lowpass filter as a linear approximation.
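Putting those equations together, a minimal sketch of this linear input-mixing one pole might look like this (my own illustration with made-up names, following the formulas above directly):

```cpp
#include <cmath>

// linear input-mixing one pole with trapezoidal integration
// Vlp: lowpass input, Vln: inverting lowpass input, Vhp: highpass input
struct LinearOnePole
{
    float g = 0.f; // g = tan(pi * cutoff / samplerate)
    float s = 0.f; // capacitor state

    void setCutoff( float cutoff, float samplerate )
    {
        g = std::tan( 3.14159265f * cutoff / samplerate );
    }

    float tick( float Vlp, float Vln, float Vhp )
    {
        // Vout = (-Vln*g + Vlp*g + Vhp + s) / (g + 1)
        float Vout = ( g * ( Vlp - Vln ) + Vhp + s ) / ( g + 1.f );
        s = 2.f * ( Vout - Vhp ) - s; // s[n+1] = 2*(Vout[n]-Vhp[n]) - s[n]
        return Vout;
    }
};
```

As a sanity check: feeding a constant into Vlp while grounding the other inputs should settle at that constant, since a lowpass passes DC unchanged.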

I won't explain this now, but there's a reason that analogue filters sound great while digital filters usually don't. If you want an eye opener, read Dave Rossum's (of E-mu fame) paper "Making Digital Filters Sound Analog" - it shows that people have always wanted *that* sound, and that it always had to be achieved by saturation *inside* the filter. It furthermore explains that the best way to achieve it would be doing what circuit simulators do. Which is pretty much what we're presenting here, at least to some nice enough degree.

I guess you've already noticed the tanh() terms in the VCCS formula above:

Ivccs = g * ( tanh(V+) - tanh(V-) )

So there they are. The much-fabled tanh() terms that add non-linear behaviour to our filter, embedded somewhere inside our VCCS component, which acts as our voltage controlled resistor. It would exceed the scope of this article to go through all the citations needed to explain where those tanh() terms come from. For the sake of simplicity, let's just assume that those waveshapers are a reasonably good approximation of the non-linear effects inside the actual parts (differential transistor pairs, OTAs etc., with their respective placements).

In order to add non-linearities to our equations we simply add the non-linear terms. KCL still applies, and since only the "conductance" changes with respect to the input voltage, the structure of the equations derived from the currents stays the same. Let's start with a simple lowpass and add the additional inputs later:

linear lowpass version: Vout = g * (Vin - Vout) + s

non-linear version: Vout = g * ( tanh(Vin) - tanh(Vout) ) + s

Note that I kept the equations in their implicit form. If I added array indices to the variables, all of them would read [n]. The reason for the implicit form: while we can solve the linear equation for Vout, we cannot do so for the non-linear one. It's not possible to isolate Vout on one side because we can't simply divide by a constant to get rid of the enclosing tanh() term.

This can do one's head in. Apart from the obvious strategy of using Euler's method for integration - and thus using tanh( Vout[n-1] ) - people might feel compelled to make the negative coupling feedback linear. If we do so, we arrive at this neatly solvable equation:

Vout = g * ( tanh(Vin) - Vout ) + s // <-- don't do this

However, with this equation Vout is always bound to +/- 1, because the shaped input will never exceed +1 or -1. Even if Vin is constantly set to +5, Vout will only ever reach tanh(5), which stays below 1. But now consider a constant input of +5 with this equation:

Vout = g * ( tanh(Vin) - tanh(Vout) ) + s

s updated for next iteration: s = 2 * Vout - s

In this case Vout (and s) will settle at +5! That is because the damping term in the negative coupling is smaller than the output, so the output can grow beyond the limits of the (waveshaped) input. You don't believe it? A very simple test confirms that this is the fixed point: calculate one sample step. We assume Vin = +5, Vout = +5, s = +5 and 0 < g < 1, and put them into the above formula:

5 = g * ( tanh(5)-tanh(5) ) + 5

s = 2*5 - 5

If you're still not convinced, just do the experiment with the code I'll post later. Feed those equations a constant signal beyond +1 and see how the output isn't bounded!
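If you'd rather not wait for that code, here's a minimal experiment along those lines (a Python sketch of my own; the per-sample fixed-point solve is simply the easiest way to handle the implicit equation here, not necessarily the method I'll present later):

```python
import math

def settle(vin, g=0.5, n=20000, iters=20):
    """Drive both one-pole variants with a DC input and return (bad, good).

    bad  : Vout = g*(tanh(Vin) - Vout) + s        -- linear feedback, closed form
    good : Vout = g*(tanh(Vin) - tanh(Vout)) + s  -- implicit, solved per sample
           by plain fixed-point iteration (contracts since 0 < g < 1)
    """
    s_bad = out_bad = 0.0
    s_good = out_good = 0.0
    t_in = math.tanh(vin)
    for _ in range(n):
        # "don't do this" version: solvable for Vout directly
        out_bad = (g * t_in + s_bad) / (1.0 + g)
        s_bad = 2.0 * out_bad - s_bad
        # correct version: iterate the implicit equation to a fixed point
        v = out_good  # warm start from the previous sample
        for _ in range(iters):
            v = g * (t_in - math.tanh(v)) + s_good
        out_good = v
        s_good = 2.0 * out_good - s_good
    return out_bad, out_good
```

With vin = 5.0 the "bad" output saturates just below tanh(5) ≈ 0.9999, while the "good" output creeps well past 1 toward +5 - slowly, because the charging current g * (tanh(5) - tanh(Vout)) becomes tiny once the feedback shaper saturates.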

The gist being, **an active non-linear lowpass filter is not bounded to +/- 1 even if tanh() waveshapers are modeled inside**. That's an important notion for the design of complex filters such as cascades, Sallen-Key filters and ladders. The generally higher output gain of this kind of filter sounds considerably different due to the gain staging of subsequent filter stages!

Now, let's have a look at our implicit equation for the full filter model:

Vout[n] = g * ( tanh( Vlp[n] ) - tanh( Vout[n] + Vln[n] ) ) + Vhp[n] + s[n]

s[n+1] = 2 * ( Vout[n] - Vhp[n] ) - s[n]
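To make that pair of equations concrete, here's one possible per-sample implementation (a sketch of my own, not the definitive solver: I use plain fixed-point iteration, which converges for g < 1; for higher cutoffs Newton's method would be a safer choice):

```python
import math

def vccs_onepole_tick(vlp, vln, vhp, s, g, iters=30):
    """One sample of the full non-linear one pole.

    Solves  Vout = g*(tanh(Vlp) - tanh(Vout + Vln)) + Vhp + s
    for Vout by fixed-point iteration, then returns (Vout, s_next)
    using the update s[n+1] = 2*(Vout[n] - Vhp[n]) - s[n].
    """
    vout = vhp + s  # initial guess: the solution for g == 0
    for _ in range(iters):
        vout = g * (math.tanh(vlp) - math.tanh(vout + vln)) + vhp + s
    return vout, 2.0 * (vout - vhp) - s
```

Two quick checks: DC fed into Vlp settles the output at that DC value (the lowpass passes DC at unity), while DC fed into Vhp decays to zero, as a highpass should.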

Some trivia about it:

- the non-inverting lowpass gets shaped separately from the feedback, as expected from, say, a transistor ladder
- the inverting lowpass input gets waveshaped with the feedback. This not only results in an inverting lowpass but also in a different sound! With this input you get the distortion behaviour of OTA based filters
- the highpass configuration only gets waveshaped in the feedback, the feed forward is linear

Why do we need this? - A transistor ladder filter can essentially be modeled as a cascade of these one pole stages connected by the non-inverting lowpass inputs. This *might* for instance be the case with the SSM 2044 filter chip, which uses a ladder configuration (I'll measure this out one day). Most all-in-one-chip filter circuits however, such as those using the OTA-based CEM3320 or SSM2040 chips, connect each stage via the inverting lowpass input. The latter can also be configured for highpass stages, as found in the ELKA Synthex. If you want to model those filters, understanding this one pole filter is a good start.
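As a teaser for such models, here's how four of these stages might be chained into a ladder-style cascade via the non-inverting lowpass input. This is a bare-bones sketch of my own with no global resonance feedback yet (that would feed the last stage's output back into the first stage's inverting input), again using a simple fixed-point solve per stage:

```python
import math

def stage_tick(vlp, vln, vhp, s, g, iters=30):
    """One non-linear one-pole stage (fixed-point solve of the implicit equation)."""
    vout = vhp + s
    for _ in range(iters):
        vout = g * (math.tanh(vlp) - math.tanh(vout + vln)) + vhp + s
    return vout, 2.0 * (vout - vhp) - s

def ladder_tick(vin, states, g):
    """Four stages chained through the non-inverting lowpass input,
    transistor-ladder style. states is a list of four capacitor states,
    updated in place; returns the last stage's output."""
    v = vin
    for i in range(4):
        v, states[i] = stage_tick(v, 0.0, 0.0, states[i], g)
    return v
```

Each stage passes DC at unity, so a DC input eventually appears unchanged at the cascade output, while audio-rate content above the cutoff rolls off at 24 dB per octave.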

Solving its implicit equation can be done in many ways... I'll post an extra article on this!

*Update Feb 27th:* I was made aware that the equation used for the differential amplifier was not correct for an OTA output current. In order to keep the tutorial simple and the filter flexible, I have chosen to introduce a conceptual part called VCCS, a Voltage Controlled Current Source. This trick lets us stick to the more flexible formula, but doesn't change the use case for OTA cascades (which I wish to introduce in future tutorials). On the downside this tutorial is a little less authentic, but that's IMHO better than either presenting a lesser solution or a harder to understand structure. I hope to show the difference to actual (authentic) components in a subsequent article. This update also fixed a typo in one of the equations.
