Fourier Transforms Revisited — Part I


A while back I spent some time messing around with Fourier Transforms, and ended up making an experimental system to precompute visibility information for a game level.

Here I’ll only briefly recap the details of the original experiment. For more detailed information on the initial project and Fourier Transforms, see the original post. The prototype is an alternative to using raycasts at runtime to check visibility. In most cases rays will be the simpler, better solution, but there are two situations in which they are not ideal:

  • Casting a lot of rays each frame can impact performance. The cost of each ray scales with the number of colliders in the area through which it travels.
  • Using a single ray to test something like visibility is prone to edge cases: the ray may be blocked by a small collider when the observer really should be able to see the target point, or slip through a small hole when it really should not.

The alternative I’m exploring is to precompute and store visibility data for a level, using a system very similar to the network of light probes sometimes used to light dynamic objects. This approach has two main benefits:

  • If optimized properly, performing a visibility check with this system should be faster than casting a ray.
  • Some fidelity is lost when compressing the precomputed data. However, this ‘blurring’ of the data caused by compression actually alleviates ‘pin-hole’ edge-cases that normally would be solved by casting multiple rays.
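As a rough sketch, reading visibility between probes works like ordinary bilinear interpolation. The 0-to-1 visibility encoding and names below are my own illustrative assumptions, not the project's actual code:

```python
# Bilinear blend of the four probes surrounding a query point.
# v00..v11 are corner visibility samples in [0, 1]; tx, ty are the
# query point's fractional position within the probe square.

def interpolate_visibility(v00, v10, v01, v11, tx, ty):
    """Blend four corner probe samples into one interpolated reading."""
    bottom = v00 * (1.0 - tx) + v10 * tx
    top    = v01 * (1.0 - tx) + v11 * tx
    return bottom * (1.0 - ty) + top * ty

# Halfway between a fully visible probe and a fully occluded one,
# the reading comes out at 0.5 -- the 'blurring' described above.
print(interpolate_visibility(1.0, 0.0, 1.0, 0.0, 0.5, 0.0))  # 0.5
```

That blurring is exactly why the compressed data smooths over pin-hole cases: a borderline point reads as partially visible rather than flipping hard between 0 and 1.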

I found that my prototype system could achieve high levels of fidelity, or substantial compression, but not both at the same time. Its best use might be as a pre-check step to narrow down the number of entities that actually require raycasts. However, since putting the project down I have thought of a few additional optimizations and experiments I would like to try, so look forward to further posts in the future.

Tools for Evaluation

So far I’ve been using a visual debug to verify that the decompressed data coming out of interpolated probes “looks right.” But if I’m going to make some experimental improvements, I want to build some tools to more empirically measure their benefits. For starters, I want to know how frequently the system produces a false negative or positive evaluation, and in what situations.

First, I added a feature to my visual debug that shows false ‘not visible’ (red) and false ‘visible’ (cyan) readings. Nothing unexpected here. The false readings sit along the blurred edges between visible and non-visible areas.

Assuming our use case for this system is to filter out entities that are almost certainly not visible before a definitive ray cast check, we don’t mind a few false ‘visible’ reads, but we don’t want false ‘not visible’ reads that will filter out entities that the observer should actually see. By lowering the threshold for reading a sample point as visible, we can tune the system to minimize the chance of false ‘not visible’ reads.
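In code, that tuning amounts to setting the visibility threshold deliberately low, so borderline (blurred) samples fall through to the raycast stage instead of being wrongly filtered out. The names and the 0-to-1 encoding here are illustrative assumptions:

```python
# Pre-check with a deliberately low threshold: we only return False
# when the interpolated data is confident the target is occluded.

VISIBLE_THRESHOLD = 0.1  # low value biases errors toward false 'visible'

def maybe_visible(interpolated_sample: float) -> bool:
    """Return False only for confidently-occluded samples; anything
    above the threshold still gets a definitive raycast afterwards."""
    return interpolated_sample >= VISIBLE_THRESHOLD

print(maybe_visible(0.05))  # False: confidently occluded, skip the raycast
print(maybe_visible(0.3))   # True: borderline, fall through to a raycast
```

Lowering the threshold trades a few wasted raycasts (false 'visible' reads) for fewer entities being wrongly culled.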

This enhancement to the visual debug is useful, but I also want to measure the quality of the entire probe grid at a glance. To do this, I added an evaluation step when baking a new grid. After the probes have been generated, for each square made by four probes, I generate an interpolated probe at its center and perform ray casts to compare against its visibility map. I can then add up all the sample data from all the interpolated probes, and generate statistics about the grid as a whole.
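The tallying part of that evaluation step might look something like the sketch below. It just compares interpolated-probe readings against raycast ground truth and counts the disagreements; the data shapes are my own assumptions:

```python
# Tally false readings given paired lists of interpolated probe reads
# (True = read as visible) and raycast ground truth for the same
# sample directions. Summed across all interpolated probes, these
# counts give grid-wide statistics.

def evaluate_grid(interpolated_samples, ground_truth):
    stats = {"false_visible": 0, "false_not_visible": 0, "total": 0}
    for predicted, actual in zip(interpolated_samples, ground_truth):
        if predicted and not actual:
            stats["false_visible"] += 1
        elif actual and not predicted:
            stats["false_not_visible"] += 1
        stats["total"] += 1
    return stats

# Toy example: four sample directions from one interpolated probe.
print(evaluate_grid([True, True, False, False],
                    [True, False, False, True]))
# {'false_visible': 1, 'false_not_visible': 1, 'total': 4}
```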

One thing I noticed right away is that the rate of false 'not visible' readings seems abnormally high, and not in keeping with what I was seeing from the visual debug. This suggests to me that there must be an edge case with some of the interpolated probe positions used for the evaluation, which causes most of their 'not visible' readings to be false.

After a bit of exploration with the visual debug, I determined that the issue is at least partially caused by large differences in altitude between adjacent probes. I’m not currently accounting for altitude when interpolating a probe at the avatar’s position, resulting in artifacts where the interpolated probe treats the avatar as though it is behind a wall rather than on top of it.
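One possible direction for a fix (my own assumption, not the solution the project has settled on) is to fold the altitude difference between the query point and each probe into the interpolation weights, so an avatar on a rooftop leans on rooftop-level probes rather than ground-level ones behind the wall:

```python
# Hypothetical altitude-aware weighting: probes whose altitude differs
# from the query point's contribute less to the interpolated result.

def altitude_weights(query_y, probe_ys, falloff=2.0):
    """Normalized weights that down-weight altitude-mismatched probes.
    falloff controls how quickly mismatched probes lose influence."""
    raw = [1.0 / (1.0 + falloff * abs(query_y - y)) for y in probe_ys]
    total = sum(raw)
    return [w / total for w in raw]

# A rooftop query (y=5) mostly ignores ground-level probes (y=0).
print(altitude_weights(5.0, [0.0, 0.0, 5.0, 5.0]))
```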

Now that I know what the problem is, it should not be too hard to fix. I think this is an excellent example of how worthwhile it can be to invest in visual and quantitative debugging tools. Going forward, I plan to improve these alongside the upgrades that I make to the system itself.
