
04-08-11 - Friday Filters

So in the last post we constructed, for given synthesis filters, the analysis filters that minimize L2 error.

Another common case is when you have a set of discrete data and wish to preserve the original values exactly. That is, you want your synthesis to interpolate the original points.

(obviously some synthesis filters like sinc and box are inherently interpolating, but others like gauss and cubic are not).

So, the construction proceeds as before, but is actually simpler. Rather than using the hat overlaps, we only need the values of the synthesis filter at the lattice points. That is :


Given synthesis/reconstruction filter S(t)
and discrete points P_i
Find points Q_i such that :

f(t) = Sum_i Q_i * S(t - i)

f(j) = P_j for all integers j

We can write this as a matrix equation :

S_ij = S(j - i) = S(i - j)  (since S is symmetric)

S * Q = P

Note that S is band-diagonal. This is exactly the same kind of matrix problem that you get when solving for B-splines.

In general, if you care about the domain boundary effects, you have to construct this matrix and solve it somehow. (matrix inversion is a pretty bad way; there are fast ways to solve sparse band-diagonal systems)
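For example, for the cubic case below, S is tridiagonal with stencil {1/6, 4/6, 1/6}, so S*Q = P can be solved in O(n) with the standard Thomas algorithm. A minimal sketch (the boundary rows are simply truncated here, which is just one of several reasonable end conditions) :

#include <stdlib.h>

/* solve S*Q = P where S is tridiagonal with stencil {1/6, 4/6, 1/6}
   (the cubic B-spline case) ; Thomas algorithm, O(n) */
void solve_cubic_bspline(const float * P, float * Q, int n)
{
    const float lo = 1.f/6, mid = 4.f/6, hi = 1.f/6;
    float * c = (float *) malloc(n * sizeof(float)); /* scratch superdiagonal */

    /* forward elimination */
    float denom = mid;
    c[0] = hi / denom;
    Q[0] = P[0] / denom;
    for (int i = 1; i < n; i++)
    {
        denom = mid - lo * c[i-1];
        c[i] = hi / denom;
        Q[i] = (P[i] - lo * Q[i-1]) / denom;
    }

    /* back substitution */
    for (int i = n-2; i >= 0; i--)
        Q[i] -= c[i] * Q[i+1];

    free(c);
}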

However, if the domain is large and you don't care too much about the boundary, there's a much simpler way. You just find S^-1 and look at a middle row. As the domain size goes to infinity, all the rows of S^-1 become the same. For finite domain, the first and last rows are weird and different, but the middle rows are about the same.

This middle row of S^-1 is the analysis filter.
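(In fact, for the cubic case below you don't even need to invert a matrix: the middle row of S^-1 has a known closed form, A(k) = sqrt(3) * (sqrt(3)-2)^|k| . You can check it against the c_filter table below; the center taps match exactly, and the outermost taps differ slightly only because the table came from a finite 11x11 matrix.) A sketch :

#include <math.h>

/* middle row of S^-1 for the cubic B-spline case, in closed form :
   A(k) = sqrt(3) * (sqrt(3)-2)^|k| ; sqrt(3)-2 ~= -0.268, so the taps
   alternate in sign and decay geometrically */
float cubic_analysis_tap(int k)
{
    if ( k < 0 ) k = -k;
    return (float)( sqrt(3.0) * pow( sqrt(3.0) - 2.0, (double)k ) );
}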


A = middle row of S^-1

Q = A <conv> P

Q_i = Sum_j A(i-j) P_j
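A minimal sketch of that discrete convolution, using an 11-tap A like the tables below (the clamp-at-edge boundary handling is just my choice for illustration) :

/* Q = A <conv> P with an 11-tap analysis filter A (center at A[5]) */
void apply_analysis(const float * A, const float * P, float * Q, int n)
{
    for (int i = 0; i < n; i++)
    {
        float sum = 0.f;
        for (int k = -5; k <= 5; k++)
        {
            int j = i - k;                 /* Q_i = Sum_j A(i-j) * P_j */
            if ( j < 0 ) j = 0;            /* clamp at the edges */
            if ( j > n-1 ) j = n-1;
            sum += A[k+5] * P[j];
        }
        Q[i] = sum;
    }
}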

Note that A is only defined discretely, not continuously. Now our final output is :

f(t) = Sum_i Q_i * S(t - i)

f(t) = Sum_i Sum_j A(i-j) P_j * S(t - i)

f(t) = Sum_j C(t-j) * P_j

C(t) = Sum_i A(i) * S(t-i)

where C is the combined analysis + synthesis filter. The final output is just a simple filter applied to the original points.
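As a sketch, C can be evaluated straight from this definition, given the taps of A and any continuous synthesis function S :

/* C(t) = Sum_i A(i) * S(t - i) ; A has taps at -half..+half */
float combined_filter(float t, const float * A, int half, float (*S)(float))
{
    float sum = 0.f;
    for (int i = -half; i <= half; i++)
        sum += A[i+half] * S( t - (float)i );
    return sum;
}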

For example, for the "canonical cubic" synthesis (box convolved thrice), we get :


cubic analysis :
const float c_filter[11] = { -0.00175, 0.00876, -0.03327, 0.12434, -0.46410, 1.73205, -0.46410, 0.12434, -0.03327, 0.00876, -0.00175 };

(A is 11 taps wide because I used an 11x11 matrix; in general it's infinitely wide, but the taps get smaller as you go out)

The synthesis cubic is piece-wise cubic, defined over four unit intervals on [-2,2].
The combined filter is also piece-wise cubic; on each unit interval it is a linear combo of the 4 pieces of the synthesis filter.
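For reference, a sketch of that synthesis cubic (the standard cubic B-spline basis formula; see the addendum below) :

/* the "canonical cubic" synthesis : the cubic B-spline basis,
   piece-wise cubic on [-2,2] */
float cubic_bspline(float t)
{
    float a = ( t < 0 ) ? -t : t;   /* |t| ; the basis is symmetric */
    if ( a >= 2.f ) return 0.f;
    if ( a >= 1.f )                 /* outer intervals, |t| in [1,2] */
    {
        float b = 2.f - a;
        return b*b*b * (1.f/6.f);
    }
    return (2.f/3.f) - a*a + 0.5f*a*a*a;  /* inner intervals, |t| in [0,1] */
}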

{ ADDENDUM : BTW this *is* the B-spline cubic; see for example Neil Dodgson, "Image Resampling", pages 128-129; the coefficients are exactly the same }

So, you could do all this work, or you could just use a filter that looks like "combined" from the start.


gaussian analysis :
const float c_filter[11] = { -0.00001, 0.00008, -0.00084, 0.00950, -0.10679, 1.19614, -0.10679, 0.00950, -0.00084, 0.00008, -0.00001 };

mitchell1 analysis :
const float c_filter[11] = { -0.00000, 0.00002, -0.00028, 0.00446, -0.07115, 1.13389, -0.07115, 0.00446, -0.00028, 0.00002, -0.00000 };

Now, not surprisingly, all of the "combined" filters look very similar, and they all look rather a lot like windowed sincs, because there simply aren't that many ways to make interpolating filters. They have to be = 1.0 at 0, and = 0.0 at all other integer locations.
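That property is easy to check numerically with the sketches above (combined_filter, cubic_bspline, and the 11-tap cubic c_filter table) :

#include <stdio.h>

/* expect C(0) ~= 1.0 and C(j) ~= 0.0 at other integers ; tiny errors
   remain at the outer j because A was truncated to 11 taps */
void check_interpolating(const float * A)
{
    for (int j = -3; j <= 3; j++)
        printf("C(%d) = %f\n", j, combined_filter((float)j, A, 5, cubic_bspline));
}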

ADDENDUM : well, I finally read the Nehab/Hoppe paper "Generalized Sampling", and guess what, it's this. It comes from a 1999 paper by Blu et al. called "Generalized Interpolation: Higher Quality at no Additional Cost".

The reason they claim it's faster than traditional filtering is that what we have done is to construct a sinc-like interpolating filter with wide support (which I call "combined") that separates into a simple compact "synthesis" filter and a discrete matrix (S^-1). So for the case where you have only a few sets of data and you sample them many times (eg. a texture in games), you can obviously implement this quickly by pre-applying the discrete matrix to the data; your pixels are then no longer samples of the signal, but amplitudes of synthesis basis functions (that is, you store Q in your texture, not P). Now you can effectively sample the "combined" filter by applying only the compact "synthesis" filter. So for example Nehab/Hoppe suggest that you can do this fast on the GPU by using a GPU implementation of cubic basis reconstruction.
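Putting the sketches above together, that pipeline looks like this: prefilter once at load time (solve_cubic_bspline turns P into Q, ie. applies S^-1), then each sample touches only the 4-tap cubic synthesis. The clamped indexing is again just my choice for illustration :

/* once, at load time :  solve_cubic_bspline(P, Q, n);  (store Q, not P) */

/* per sample : only the compact synthesis filter is applied */
float sample_prefiltered(const float * Q, int n, float t)
{
    int base = (int)t;                      /* assumes 0 <= t < n */
    float sum = 0.f;
    for (int i = base-1; i <= base+2; i++)  /* cubic support is 4 taps */
    {
        int j = ( i < 0 ) ? 0 : ( ( i > n-1 ) ? n-1 : i );  /* clamp */
        sum += Q[j] * cubic_bspline( t - (float)i );
    }
    return sum;
}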

Both of the papers are somewhat misleading in their claims of "higher quality than traditional filtering". You can of course compute a traditional linear filter that gives the same result, since their techniques are still just linear. They are higher quality *for the same complexity of filter*, and only if you have pre-done the S^-1 multiplication; the speed advantage is in being able to precompute that multiplication.
