In the previous post in this series we talked about using destruction in games to increase the level of immersion and showed a quick example of the kind of basic destruction that can be achieved by default with Box2D. In this post we’re going to explore dynamically destroying a Box2D object into fragment pieces.
The concept is fairly straightforward. Start with a normal physics object, wait for a projectile to hit the object, and then use the trajectory of the projectile to split the object into two separate objects.
In practice, this takes a fair amount of work, with quite a few complications that need to be taken into account.
The first step is to listen for collisions within the Box2D world. If one of the objects colliding is a projectile and the other is a breakable object, we can proceed and start trying to split it.
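The filtering step can be sketched as a small helper. This is a minimal, hypothetical Python sketch (the project itself uses ActionScript); in Box2D you would subclass b2ContactListener, inspect the two fixtures in BeginContact, and read tags you stored on each body's userData:

```python
def find_split_pair(body_a_data, body_b_data):
    """Return (projectile, breakable) if this contact should trigger a
    split, otherwise None. The dict arguments stand in for the userData
    you would attach to each b2Body; the tag names are illustrative."""
    a_is_proj = body_a_data.get("type") == "projectile"
    b_is_proj = body_b_data.get("type") == "projectile"
    a_breaks = body_a_data.get("breakable", False)
    b_breaks = body_b_data.get("breakable", False)
    if a_is_proj and b_breaks:
        return (body_a_data, body_b_data)
    if b_is_proj and a_breaks:
        return (body_b_data, body_a_data)
    return None
```

One practical note: Box2D forbids creating or destroying bodies during a collision callback, so the actual split work should be queued here and performed after the world step.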
Splitting a Box2D shape requires knowing the vertices that make up its original shape. Circles unfortunately present a problem: there isn't any way in Box2D to represent a partial circle. If you want to chop circles in half, you first need to approximate the circle as a list of vertices, which can be taxing to simulate.
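The circle-to-vertices conversion is just sampling points around the circumference. A quick sketch (function name is ours, not a Box2D API; note that a b2PolygonShape is capped at b2_maxPolygonVertices, 8 by default, so a finely sampled circle has to be built from several fixtures):

```python
import math

def circle_to_polygon(radius, segments=16):
    """Approximate a circle as a regular polygon so it can be sliced.
    More segments look rounder but cost more to simulate."""
    return [(radius * math.cos(2 * math.pi * i / segments),
             radius * math.sin(2 * math.pi * i / segments))
            for i in range(segments)]
```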
Once we have that original shape in the form of vertices, we can cast a ray on the fixture along the projectile's path to find the point where the projectile hit the object. Reversing the ray allows us to get the point on the object the projectile would have exited had it been allowed to continue on through the original object. A word of warning: the Box2D function b2Fixture.RayCast returns points in world coordinates. Since the vertices are in local coordinates, we need to convert the entry and exit points via the b2Body function GetLocalPoint.
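The underlying geometry of those two ray casts can be illustrated directly in local space (so no world-to-local conversion is needed). This is a hedged sketch, not the Box2D implementation: it intersects the projectile's line with each polygon edge and keeps the hits sorted along the ray, so the first is the entry point and the last is the exit point:

```python
def ray_polygon_hits(verts, p, d):
    """Intersect the line p + t*d with each edge of the polygon `verts`
    and return hit points sorted along the ray direction."""
    hits = []
    n = len(verts)
    for i in range(n):
        (x1, y1), (x2, y2) = verts[i], verts[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        denom = d[0] * ey - d[1] * ex     # cross(d, edge)
        if abs(denom) < 1e-12:            # edge parallel to the ray
            continue
        wx, wy = x1 - p[0], y1 - p[1]
        t = (wx * ey - wy * ex) / denom   # distance along the ray
        s = (wx * d[1] - wy * d[0]) / denom  # position along the edge
        if 0.0 <= s <= 1.0:
            hits.append((t, (p[0] + t * d[0], p[1] + t * d[1])))
    hits.sort(key=lambda h: h[0])
    return [pt for _, pt in hits]
```

For a unit-square-style box and a horizontal shot, this returns the left and right crossing points, matching what the forward and reversed b2Fixture.RayCast calls would give after conversion with GetLocalPoint.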
At this point we have the list of vertices comprising the original shape and two new vertices representing the entry and exit points of the projectile. We now need to group the remaining vertices into one of the two resulting shapes created by the split. Using the line determined by the entry and exit points, we can test each vertex for which side of the line it resides on and place it in the vertex list for the appropriate new shape. Keep in mind that the entry and exit points will be used in both shapes.
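The side-of-line test is the sign of a 2D cross product. A minimal sketch (again in Python for illustration):

```python
def split_vertices(verts, entry, exit_pt):
    """Partition polygon vertices by which side of the entry->exit line
    they fall on. The entry and exit points are seeded into both lists,
    since they belong to both resulting shapes."""
    ax, ay = entry
    dx, dy = exit_pt[0] - ax, exit_pt[1] - ay
    left, right = [entry, exit_pt], [entry, exit_pt]
    for (x, y) in verts:
        side = dx * (y - ay) - dy * (x - ax)  # cross product sign
        (left if side > 0 else right).append((x, y))
    return left, right
```

In practice you also need to restore a consistent winding order (e.g. by walking the original vertex order and inserting the entry/exit points where the cut crosses), since Box2D expects counter-clockwise polygon vertices.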
Once we have the two lists of vertices for each shape, we can start to construct new physics objects to represent them. The problem we have now is that the coordinates of those vertices are relative to the original object’s center of mass. We need to re-center the new vertices so that rotations act correctly. This can be done simply by averaging all the values of the vertices to find the center and then subtracting that center point from each vertex.
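The re-centering step described above can be sketched in a few lines (the vertex average is only an approximation of the true area centroid, but it serves for this effect):

```python
def recenter(verts):
    """Average the vertices to estimate a local center, then shift every
    vertex so the new shape's coordinates are relative to that center."""
    n = len(verts)
    cx = sum(x for x, _ in verts) / n
    cy = sum(y for _, y in verts) / n
    return (cx, cy), [(x - cx, y - cy) for x, y in verts]
```

The returned center is also useful: it tells you where to place the new body in world space so the fragment appears exactly where the original material was.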
We can create new physics bodies with the vertex lists for each shape. We’ll want to copy the same physics properties such as friction, density, restitution, linear and angular velocity from the original object.
We can then remove the original object from the physics world and add our two new objects to the physics world to complete the effect.
While this will play out nicely in the physics simulation, we haven't addressed actually rendering the fragments on screen. With the original object (in our case a square), it's fairly simple: add a textured quad in Starling to the stage and ensure its pixel properties for location and rotation match the corresponding physics object.
When we start slicing the object into pieces, we don’t get nice rectangles to work with. And if the original object was textured, we also need to be concerned with UV coordinates.
Unfortunately Starling doesn't support oddly shaped polygons with varying numbers of vertices and UV coordinates, so we need to extend the Starling DisplayObject class and handle the vertices and UV coordinates ourselves. Another complication is that Stage3D, the rendering technology Starling is built on, can only render triangles.
The original object is easy since we can control its shape and UV coordinates. It’s a simple rectangle so we know the vertex points and we can assign UV coordinates to those vertex points to allow the texture to properly map to it.
When we split the original object, we need to know what the UV coordinates of the entry and exit points will be. We can do this by finding the min and max values of the vertices on both the x and y axes. This gives us a bounding box for the dimensions of the original shape. The entry and exit points can now be compared to this bounding box to get a percentage between the min and max values.
We can do the same thing with UV coordinates on each vertex giving us a bounding box with the min and max UV values. By applying the percentage we got before from the coordinate bound box to the UV bounding box we can get accurate UV coordinates for the entry and exit point.
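Both bounding-box steps combine into one small function. A sketch of the mapping (function name ours; assumes an axis-aligned texture mapping on the original shape, as with the quad described above):

```python
def uv_for_point(point, verts, uvs):
    """Map a new point (e.g. the entry or exit point) into UV space:
    express it as a percentage of the vertex bounding box, then apply
    the same percentage inside the UV bounding box."""
    xs, ys = [v[0] for v in verts], [v[1] for v in verts]
    us, vs = [t[0] for t in uvs], [t[1] for t in uvs]
    px = (point[0] - min(xs)) / (max(xs) - min(xs))
    py = (point[1] - min(ys)) / (max(ys) - min(ys))
    return (min(us) + px * (max(us) - min(us)),
            min(vs) + py * (max(vs) - min(vs)))
```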
At this point we have a definition of a polygon with proper vertex and UV coordinates to represent our physics object. We still need to convert it to a list of triangles in order for the GPU to render it. There are many triangulation algorithms out there that will take in a set of points and return a list of triangles, but we've chosen to use an implementation by Nicolas Barradeau of Delaunay Triangulation. It provides nice results with a bias toward wide triangles, whereas other algorithms produce long, skinny triangles.
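This is not the Delaunay implementation the post uses, but it shows the simplest alternative: since slicing a convex shape with a straight line always yields convex pieces, a plain fan triangulation from the first vertex is also valid (it just tends to produce the long, skinny triangles mentioned above):

```python
def fan_triangulate(verts):
    """Triangulate a convex polygon (vertices in winding order) as a fan
    anchored at the first vertex: n vertices yield n - 2 triangles."""
    return [(verts[0], verts[i], verts[i + 1])
            for i in range(1, len(verts) - 1)]
```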
Now that we finally have a list of triangles, we can batch them together in a similar manner to Starling’s QuadBatch class and have them rendered correctly while providing visual representation of their corresponding physics objects.
As you can see there’s a fair amount of work for just a simple slicing of physics objects but now that we have this base we can explore ways to expand the effects in the future.
In the meantime we can play with a demo below on slicing physics objects using the methods described above.