Friday, November 12, 2010

Abstracting the Sound Engine and Collisions

Currently The Render Engine uses SoundManager 2, which was a great solution at the time since the HTML5 audio element wasn't yet formalized and Flash seemed like the only way to play sounds reliably.  Now that the spec has been formalized, and after analyzing SM2, I have decided that abstracting the sound system would be a good idea for v2.0.  This would allow a developer to plug in a different sound system if they have a better alternative.  Instead of loading SM2 and using its HTML5 capabilities (which was the original plan), I'm going to create a layer of abstraction that allows exactly that.
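
To give a rough idea of what that abstraction layer could look like, here's a sketch.  The AbstractSoundSystem name and its methods are just placeholders for this example, not the engine's actual v2.0 API:

```javascript
// Hypothetical sound system abstraction -- the class name and method set
// are illustrative placeholders, not the engine's actual v2.0 API.
var AbstractSoundSystem = function() {
   this.sounds = {};
};

// Load a sound resource and return a handle the engine can play later.
// Concrete systems (SM2, native audio, etc.) override these methods.
AbstractSoundSystem.prototype.loadSound = function(name, url) {
   throw new Error("loadSound() must be implemented by the sound system");
};

AbstractSoundSystem.prototype.playSound = function(name) {
   throw new Error("playSound() must be implemented by the sound system");
};

AbstractSoundSystem.prototype.stopSound = function(name) {
   throw new Error("stopSound() must be implemented by the sound system");
};
```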

SM2 will continue to work, but as a pluggable sound system.  The actual SoundLoader and Sound objects won't change; instead, the loader will interact with the abstraction.  This is all planned for v2.0, but it should be in the repository soon.  I really want a lightweight sound system, and depending on SM2 is pretty heavyweight, though it does have lots of nice options...
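
To show how SM2 could stay pluggable, here's a sketch of an adapter built on the abstraction above.  The createSound()/play()/stop() calls are SoundManager 2's standard API; everything else in the wrapper is just illustrative, not the engine's real plugin code:

```javascript
// Sketch of a SoundManager 2 adapter plugging into the abstraction above.
// Assumes the SM2 script is already loaded on the page.
var SM2SoundSystem = function() {
   AbstractSoundSystem.call(this);
};
SM2SoundSystem.prototype = Object.create(AbstractSoundSystem.prototype);

// Create an SM2 sound object and keep the handle for later playback
SM2SoundSystem.prototype.loadSound = function(name, url) {
   this.sounds[name] = soundManager.createSound({ id: name, url: url });
   return this.sounds[name];
};

SM2SoundSystem.prototype.playSound = function(name) {
   if (this.sounds[name]) {
      this.sounds[name].play();
   }
};

SM2SoundSystem.prototype.stopSound = function(name) {
   if (this.sounds[name]) {
      this.sounds[name].stop();
   }
};
```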

I played around a bit with getting the fundamentals of the Separating Axis Theorem (SAT) in place last night.  While The Render Engine will support any collision system, I've decided to provide both GJK (Gilbert-Johnson-Keerthi) and SAT.  GJK is mostly implemented, and SAT is getting there.  It'll be nice to have some choices when using the collision system.  I still think that the SpatialGrid will continue to be the broad-phase of choice, but I'm also considering BSP, which would open up the engine for ray casting and such.
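
For anyone curious what SAT boils down to, here's a bare-bones version of the narrow-phase test: project both convex polygons onto each edge normal and look for a gap.  If any axis separates the projections, there's no collision.  This is only an illustration of the idea, not the engine's actual SAT component:

```javascript
// Project a polygon (array of {x, y} points) onto an axis and return the
// min/max extents of that projection.
function projectOntoAxis(points, axis) {
   var min = Infinity, max = -Infinity;
   for (var i = 0; i < points.length; i++) {
      var dot = points[i].x * axis.x + points[i].y * axis.y;
      min = Math.min(min, dot);
      max = Math.max(max, dot);
   }
   return { min: min, max: max };
}

// Minimal SAT overlap test for two convex polygons in world coordinates.
function polygonsCollideSAT(polyA, polyB) {
   var polys = [polyA, polyB];
   for (var p = 0; p < polys.length; p++) {
      var points = polys[p];
      for (var i = 0; i < points.length; i++) {
         var p1 = points[i], p2 = points[(i + 1) % points.length];
         // Each potential separating axis is an edge's perpendicular (normal)
         var axis = { x: -(p2.y - p1.y), y: p2.x - p1.x };
         var projA = projectOntoAxis(polyA, axis);
         var projB = projectOntoAxis(polyB, axis);
         // A gap between projections on any axis means no intersection
         if (projA.max < projB.min || projB.max < projA.min) {
            return false;
         }
      }
   }
   // No separating axis found -- the polygons overlap
   return true;
}
```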
