There's always a bit of a misconception about what people mean when they talk about eliminating the unit delay in feedback loops, such as those found in analogue filter models. Here's a more or less comprehensive rundown of what is actually meant by that.
Often enough I see people state that, in order to compute the output of a filter, we would miraculously have to estimate a future sample - and since predicting the future is impossible, the whole thing must be impossible, certainly! But that is not what this is about. We don't need to predict the future, we just need to figure out what happens in the present. Which isn't just a big difference, it's also entirely possible!
On other occasions, people repeat the idea that feedback without delay is impossible - that there is no filter whose output can be computed without a delay. They might be mixing up the feedbacks and delays we're talking about with those we aren't. After all, every electronic filter circuit of 2nd order or higher has multiple feedback paths: one or more for resonance, and typically one for the integration of each filter stage. And of course, we're also not talking about the inevitable group delay, which is the very purpose of the filter.
So let's clear things up!
integration vs. implicit equations
We have to distinguish between two different things here: numerical integration on the one hand, and solving implicit equations (as opposed to explicit ones) on the other. The two are easily confused by someone who's accustomed to forward Euler integration.
In digital filters, memory is required to integrate the "virtual currents" that flow during the time between two sample instants. Calculating these requires delays, and that is an entirely undisputed fact. So yes, it's impossible to calculate a filter without delays. And yep, there is a form of feedback in each integrator that requires some kind of unit delay. Integration requires a history component, which in practice is a delay element.
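In its simplest form, a digital integrator makes this obvious. Here's a minimal sketch in C (the names are mine, purely for illustration):

double integrate( double* sum, double in, double dt )
{
    *sum += in * dt;   // the accumulator is the history, i.e. the delay element
    return *sum;
}

The accumulated sum has to survive from one sample to the next - that's the memory we can't get rid of, and nobody claims we can.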
But then, musical filters are not just made up of integrators. There's commonly also a wire somewhere that taps into the output of the filter (or some other location) and feeds that signal back somewhere near the filter's input. And this is where a unit delay is often imposed - which we seek to avoid. One should really always say "avoid a unit delay" rather than "eliminate the unit delay": we don't need to eliminate it, because we never put it there in the first place.
And sure enough, adding a unit delay to a construct that feeds the output of a filter back into its input not only seems like a logical thing to do - at first sight it might seem like the only way to get there. How else would a computer program be able to compute a function which takes its own output value as an input argument? Only, computing a result from a function of that very same result is indeed possible - it just sometimes requires a bit of out-of-the-box thinking. You can't bake it into a C function per se, you have to math it out.
An equation whose result term on the left hand side also appears on the right hand side is called "implicit" - which is the preferred term nowadays over "zero delay feedback". Here's a simple example of an implicit equation, using array notation to distinguish between past, present and future samples:
y[n] = g * ( x[n] - y[n] ) + s[n]
This is an implicit equation because y[n] needs to be computed from y[n] itself. But because it's a linear equation, it can easily be solved by rearranging the terms:
y[n] = ( g * x[n] + s[n] ) / ( g + 1 ) // cool
There you go. This correctly computes y[n] as a function of itself, and no memory or delay whatsoever was involved. All array indices are n, none is n-1 or n+1. Hence there's also no "future" component involved - everything that's required is available at a single point in time.
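In code, the solved equation might look like this little sketch (function name and signature are mine, purely for illustration):

double solve_onepole( double x, double s, double g )
{
    // the rearranged implicit equation: y[n] = ( g * x[n] + s[n] ) / ( g + 1 )
    return ( g * x + s ) / ( g + 1.0 );
}

Note that s is just an input argument here - how it gets updated is a separate matter, which we'll get to further down.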
To contrast this with the concept of inserting a unit delay, here's the same equation solved with a unit delay and thus made explicit:
y[n] = g * ( x[n] - y[n-1] ) + s[n] // boring
As you can see, here we take the result of the calculation from the previous sample step, as indicated by the n-1 in the array index. Using this kind of unit delay in filter equations is (or was) very common in music DSP, even though it often introduces very unpleasant side effects, such as making filter algorithms unstable. Those very same filter algorithms might well be stable if the unit delay weren't added in the first place.
Note that in many filter implementations this unit delay also doubles as the integration method. It not only makes the equations explicit, it's also used for forward Euler integration, i.e. when s[n] = y[n-1]... I'm sure you've seen it around.
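In code, that delayed variant might look like this sketch, where the stored previous output doubles as the forward Euler state (again, names are mine):

double onepole_forward_euler( double x, double g, double* yPrev )
{
    // the explicit version: y[n] = g * ( x[n] - y[n-1] ) + s[n], with s[n] = y[n-1]
    double y = g * ( x - *yPrev ) + *yPrev;
    *yPrev = y;   // this stored value is the unit delay
    return y;
}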
the non-linear case
A completely different matter is a non-linear equation like the following, which is typical of analogue filter models:
y[n] = g * ( tanh( x[n] ) - tanh( y[n] ) ) + s[n] // impossible!
In this case we cannot simply rearrange the equation so that y appears only on the left hand side. However, there is a plethora of mathematical methods which approximate the correct result to any accuracy of our choice. A popular approach is iterative root finding (bisection, Newton-Raphson - Wikipedia is our friend):
Instead of the above "impossible" equation we write
r = g * ( tanh( x[n] ) - tanh( yEst ) ) + s[n] - yEst // just works!
What we do is "estimate" what y would be and call it yEst. Then we evaluate the equation and get r, the "residue", aka the error of our estimate. Once we have the residue, we can improve our estimate using standard root finding methods as described on Wikipedia - or as the younger audience may remember from high school. We repeat this as often as it takes to shrink the residue down to the accuracy we want, usually below a millionth of our nominal amplitude. Once the residue is (practically) zero, we have found a yEst that equals y, and thus we have solved a non-linear implicit equation without the help of a unit delay embedded in that equation.
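To make this concrete, here's a sketch of Newton-Raphson applied to the tanh equation above. The starting guess, the iteration cap and the tolerance are arbitrary choices of mine:

#include <math.h>

double solve_onepole_tanh( double x, double s, double g )
{
    double yEst = s;   // starting guess; the state is usually close to the solution
    for( int i = 0; i < 50; ++i )
    {
        double t = tanh( yEst );
        double r = g * ( tanh( x ) - t ) + s - yEst;   // the residue
        if( fabs( r ) < 1.0e-6 )
            break;                                     // accurate enough
        double dr = -g * ( 1.0 - t * t ) - 1.0;        // derivative of the residue w.r.t. yEst
        yEst -= r / dr;                                // Newton step: improve the estimate
    }
    return yEst;   // note: x and s were never touched
}

In practice one would tune or guard the iteration count, but the principle is exactly as described: refine yEst until the residue vanishes.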
Note that during those rounds of "zooming in on the value" - which hopefully converge the estimate towards the actual value - we do not change any of the other variables; x[n] and s[n] stay untouched. I've seen people interpret the residue/estimate of iterative methods as a form of history and conclude that a unit delay must be hiding in there, but that's just history inside the solving algorithm - it has no delay-like impact on the solution itself.
This is just one class of methods, but there are numerous others that can be driven to various degrees of accuracy, such as the tangential method, which I might explain elsewhere. What all these methods have in common is that they remove any necessity for an artificially added unit delay when simulating the negative feedback path of an analogue filter (or an OpAmp, for that matter) - "Gegenkopplung", as I'd call it in German.
Therefore, software does not need to predict the future. It just needs to approximate the present to a sufficiently precise result. Again, it's not about miraculously integrating time steps without a memory component. It's about solving implicit equations, i.e. equations that calculate the output as a function of their own output. And this has been a fully understood mathematical discipline since, like, forever 8)
s[n+1]
Now, with the equations posted above, haven't I been cheating somewhere? Surely that s value contains some form of unit delay? - Yes it does!
The fun bit about the s (state, sC, sum) thing is that we can decouple the calculation of the filter output from the actual integration step, i.e. from summing the virtual currents into the virtual capacitors. In fact we can use the very same filter algorithm regardless of the choice of integration method, specifically for single-step integration methods such as backward Euler and bilinear/trapezoidal integration. For the Euler case we would simply set s[n+1] to y[n] after each sample step, which produces that legendary unit delay for the integration step - but not for the computation of the filter output!
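For the one-pole case, a complete per-sample step might then look like this sketch, with backward Euler as the integration method (for trapezoidal integration only the state update line would differ):

double onepole_tick( double x, double g, double* s )
{
    double y = ( g * x + *s ) / ( g + 1.0 );   // filter output - no delay involved here
    *s = y;                                    // integration step: s[n+1] = y[n], the legendary unit delay
    return y;
}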
If you're still confused, take a (linear) ladder filter with 4 stages like this:
input = x[n] - feedback * y4[n]
y1[n] = g * ( input - y1[n] ) + s1[n]
y2[n] = g * ( y1[n] - y2[n] ) + s2[n]
y3[n] = g * ( y2[n] - y3[n] ) + s3[n]
y4[n] = g * ( y3[n] - y4[n] ) + s4[n]
Here, not only do we have to implicitly solve each filter stage, we furthermore have to solve the equation system as a whole. Impossible? - Not quite. We just substitute the equations into what I call one "large sausage equation" (I am German, after all), and we can again compute the surrounding feedback as well as each individual stage without adding any unit delay. Afterwards we can update the s terms in accordance with our integration method. It isn't as difficult as it may sound.
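Here's a sketch of how that works out for the linear ladder above, again with backward Euler state updates (the struct layout and names are mine). Each solved stage has the form y = G * in + d * s with G = g / ( g + 1 ) and d = 1 / ( g + 1 ); substituting all four stages into the feedback equation yields a closed form for y4:

typedef struct { double s1, s2, s3, s4; } LadderState;

double ladder_tick( LadderState* st, double x, double g, double feedback )
{
    double d  = 1.0 / ( g + 1.0 );
    double G  = g * d;
    double G4 = G * G * G * G;
    // the "large sausage equation", all four stages substituted and solved for y4:
    double S  = G * G * G * d * st->s1 + G * G * d * st->s2 + G * d * st->s3 + d * st->s4;
    double y4 = ( G4 * x + S ) / ( 1.0 + feedback * G4 );
    // with y4 known, the feedback path and each stage become explicit:
    double input = x - feedback * y4;
    double y1 = ( g * input + st->s1 ) * d;
    double y2 = ( g * y1    + st->s2 ) * d;
    double y3 = ( g * y2    + st->s3 ) * d;
    // integration step: backward Euler, s[n+1] = y[n]
    st->s1 = y1; st->s2 = y2; st->s3 = y3; st->s4 = y4;
    return y4;
}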
As for the non-linear version... this is a different story for a different set of articles. If you can't wait for a solution, google "Newton's method for vectorial equations". It's a bit more difficult than the linear case, but then, it's also entirely possible!