Audio Acoustics

What

Audio acoustics is about the acoustic properties of a room, its materials and the sound source, and about making these sound good in a game.

This is where I will document my findings on other papers and methods, as well as what I do to achieve good audio acoustics.

Reverb

The audio source casts a number of rays in randomised directions, up to a maximum distance. I then get reverb properties from the material each ray collides with and apply them to the sound.

I also approximate the room size using the distances of these raycasts. The approximation below runs once for each raycast's distance; doing this, we slowly approach an estimate of the room size.

room_size += (distance / max_raycast_distance) / float(distance_array.size())
room_size = min(room_size, 1.0)

Wetness defaults to 1.0, and we slowly reduce it whenever a raycast never collides with anything:

wetness -= 1.0 / float(distance_array.size())
wetness = max(wetness, 0.0)

The direction of these raycasts is randomised so that I capture a more accurate picture of the room over time. They are not run every frame, but at least once each second.
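
Putting those pieces together, here is a minimal sketch of the reverb estimation pass in Godot 4 style GDScript. The constants, the estimate_room name and the physics query plumbing are my own assumptions for illustration, not the actual project code; it assumes the script sits on a Node3D at the audio source.

const MAX_RAYCAST_DISTANCE := 30.0
const RAY_COUNT := 16

var room_size := 0.0
var wetness := 1.0

func estimate_room(space_state: PhysicsDirectSpaceState3D) -> void:
	for _i in RAY_COUNT:
		# Randomise the direction so repeated passes sample the whole room.
		var dir := Vector3(randf_range(-1.0, 1.0), randf_range(-1.0, 1.0), randf_range(-1.0, 1.0)).normalized()
		var query := PhysicsRayQueryParameters3D.create(global_position, global_position + dir * MAX_RAYCAST_DISTANCE)
		var hit := space_state.intersect_ray(query)
		if hit.is_empty():
			# A ray that escapes the room dries the sound out slightly.
			wetness = max(wetness - 1.0 / float(RAY_COUNT), 0.0)
		else:
			# Each hit nudges room_size towards the normalised average ray length.
			var dist := global_position.distance_to(hit.position)
			room_size = min(room_size + (dist / MAX_RAYCAST_DISTANCE) / float(RAY_COUNT), 1.0)

Because the estimates only converge over several passes, running the raycasts about once a second is enough.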

Audio Propagation

So far, I do a pretty simple form of propagation: it is plainly distance-based, and the effect of propagation is "faked" by audio occlusion. In practice this sort of works, especially when you lerp the effect, but there is a noticeable transition point between when the audio is occluded and when it is not.
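
To smooth that transition, the occlusion effect can be lerped towards its target each frame. A hedged sketch of that idea, where is_occluded, occlusion_amount and LERP_SPEED are placeholder names of my own rather than the project's code:

const LERP_SPEED := 4.0

var is_occluded := false
var occlusion_amount := 0.0

func _process(delta: float) -> void:
	# Glide towards fully occluded (1.0) or clear (0.0) rather than snapping,
	# which softens the audible switch between the two states.
	var target := 1.0 if is_occluded else 0.0
	occlusion_amount = lerpf(occlusion_amount, target, delta * LERP_SPEED)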

Audio Occlusion

For audio occlusion, I have 5 positions I make raycasts from, towards the audio source: in front of, behind, to the left of, to the right of and at the center of the player. From each of these I cast a ray towards the audio object. This goes hand in hand with audio propagation.
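
A sketch of those five ray origins, assuming the player is a Node3D; the OFFSET spacing is an arbitrary value of my own, not the project's:

const OFFSET := 0.5

func occlusion_origins(player: Node3D) -> Array:
	var b := player.global_transform.basis
	# Center, in front, behind, left and right of the player.
	# In Godot, -Z is the forward direction.
	return [
		player.global_position,
		player.global_position - b.z * OFFSET,
		player.global_position + b.z * OFFSET,
		player.global_position - b.x * OFFSET,
		player.global_position + b.x * OFFSET,
	]

Each origin then casts a ray towards the audio source, as described above.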

When the source is occluded, two things happen. First, a lowpass filter removes high frequencies from the audio. Its cutoff is computed as follows:

lowpass_cutoff = wall_lowpass_cutoff_amount * wall_to_player_ratio

With a default cutoff of 20 kHz. The wall-to-player ratio is rayDistanceTravelled / distanceToPlayer.
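
For example, a ratio of 0.25 with the default cutoff gives 20000 * 0.25 = 5000 Hz, so the smaller the ratio, the more of the high end is filtered away.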

The second thing I do is apply an equaliser. This more accurately captures how some materials are good at blocking high frequencies but poor at blocking low frequencies, and vice versa. It is implemented by first taking an absorption coefficient and feeding it into a physics equation to get the decibel reduction in sound.

=> An absorption coefficient table for different frequencies can be found here.
=> And another.

Absorption coefficient tables are difficult to find, because coefficients are normally averaged out to a single overall value, which is more convenient for acoustic construction but not for simulating acoustics. We can then get the decibel decrease with the following equation:

d = -20 log10(1 - C)

Where d is the decibel drop, and C is the absorption coefficient.
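
A minimal sketch of that equaliser step, assuming a hypothetical per-band absorption table (the coefficients below are placeholders, not measured values) and Godot's AudioEffectEQ; mapping dictionary order straight to EQ band indices is a simplification:

# Hypothetical per-band absorption coefficients for one material;
# real values should come from a measured absorption table.
const ABSORPTION := {125: 0.03, 500: 0.04, 2000: 0.05, 4000: 0.07}

# d = -20 log10(1 - C): the decibel drop for absorption coefficient C.
func decibel_drop(c: float) -> float:
	return -20.0 * log(1.0 - c) / log(10.0)  # GDScript's log() is the natural log

func apply_equaliser(eq: AudioEffectEQ) -> void:
	var bands := ABSORPTION.keys()
	for i in bands.size():
		# Reduce each band's gain by the material's drop at that frequency.
		eq.set_band_gain_db(i, -decibel_drop(ABSORPTION[bands[i]]))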

Future ideas

From the reverb pass, I could buffer all the raycast collision positions and their associated damping. I could then make additional occlusion raycasts towards these buffered positions, to influence the audio occlusion and to simulate some idea of audio propagation. This could go a step further with raycast reflections, so I have a larger number of collision positions to sample for occlusion checks.

As for which collision points to buffer, I think some heuristics could be used to cull out the useless ones.

Something else I'm considering is how these raycast collision points could form a graph. Perhaps I should create a graph of a particular size and add new raycast collisions to it only if they pass those heuristics; from that graph I could then trace paths backwards.

Notes

The main reason we can get away with treating sound as rays, and being rough with parts of the simulation, is that videogames borrow from cartoons' design language, where a visual interaction and a sound are combined together, instead of simulating sound. The other argument for why reflecting rays works well is wave-particle duality: a wave can be modelled pretty well as a particle, and a particle can be modelled pretty well as a wave.

Additionally, Microsoft has a project called Project Acoustics, which seems to be focused on baking and voxelising the space to compute this information. I suspect an approach with signed distance fields and raymarching could also work; possibly a raymarched approach could generate the sparse propagation graph.

Further Research

=> Ambient Environment, Building Acoustics
=> Real-time sound propagation in videogames GDC slides
=> Reverberation time
=> Alternative Godot project featuring Acoustics
=> Realtime Audio Raytracing and Occlusion in Csound and Unity
=> VideoGame Acoustics Doctoral Thesis(Particularly useful for the breadth of content and the literature review)
=> The Division 2 Environmental Acoustics Talk
=> Project Acoustics

I also found notes about some people using A* Pathfinding to handle propagation.