Lessons Learned from Voxel Editing
I have been looking into voxel editing on and off recently, and this is a short (or long) list of practical lessons I have learned. All the factual information presented here is already available online; I'm just gathering and summarizing it here with my two cents.
1. Understand how isosurfaces work first
One of the most practical ways of manipulating voxel data is by assigning an iso value to each voxel. I have tried some other ways (including a very original one I came up with), and most of them fell short. In the end, reconstructing meshes from iso values is the most practical and memory-efficient approach. There are a lot of papers explaining isosurface extraction and so on, but few of them really explain what an isosurface is.
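To make the idea concrete, here is a tiny sketch (in Python, with illustrative values, not from any particular engine): each voxel stores a scalar iso value, here the signed distance to a sphere, and the isosurface is wherever that field crosses the iso level.

```python
# Each voxel stores a scalar "iso" value -- here the signed distance
# to a sphere of radius 2.5 centred at the origin (negative = inside).
# The isosurface is wherever the field crosses the iso level (0.0).

def sphere_iso(x, y, z, radius=2.5):
    return (x * x + y * y + z * z) ** 0.5 - radius

# Walk along the x axis and find where the sign flips between samples.
prev = sphere_iso(0, 0, 0)
for x in range(1, 5):
    cur = sphere_iso(x, 0, 0)
    if prev < 0.0 <= cur:
        # Linear interpolation locates the crossing inside the cell;
        # this is exactly what mesh extractors do along each cube edge.
        t = -prev / (cur - prev)
        crossing = (x - 1) + t
        print(f"surface crosses between x={x-1} and x={x} at x~{crossing:.2f}")
    prev = cur
```

The surface never sits exactly on a voxel; it is interpolated from the values at neighbouring voxels, which is why the mesh can be smoother than the grid resolution suggests.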
I found this tutorial. It's good, and it will help you understand the concept of an isosurface.
Also the developer(s) of 3D Coat kindly shared how they store their voxel data. You will be surprised how much you can learn about voxel editing by looking at their data structure.
If your interest is not so much voxel editing as procedurally generating large terrains with some help from temporary voxel data, read VoxelFarm Inc's blog. He's a genius.
2. You can "only" modify voxel data implicitly
Once you use iso values to represent your volume data, the only way you can modify it is by changing those iso values. The operations are very mathematical, like increasing/decreasing each iso value or averaging across neighbours, unless you stamp a new isosurface shape directly by evaluating a distance function (e.g., the functions you can find in the tutorial linked above).
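A minimal sketch of what "stamping" looks like, assuming the grid stores signed distances (negative = inside); all names and sizes here are illustrative, not from any particular engine. Adding a sphere is just a min against the sphere's distance function:

```python
# Stamp a sphere into an iso grid by taking, at every voxel, the
# smaller of the existing signed distance and the sphere's distance
# (a CSG union). Subtraction would use max(grid, -d) instead.

SIZE = 8
BIG = 1e9  # start with "empty space" (far outside any surface)
grid = [[[BIG] * SIZE for _ in range(SIZE)] for _ in range(SIZE)]

def stamp_sphere(grid, cx, cy, cz, radius):
    """Union a sphere into the grid: keep the smaller signed distance."""
    for x in range(SIZE):
        for y in range(SIZE):
            for z in range(SIZE):
                d = ((x - cx) ** 2 + (y - cy) ** 2
                     + (z - cz) ** 2) ** 0.5 - radius
                grid[x][y][z] = min(grid[x][y][z], d)

stamp_sphere(grid, 4, 4, 4, 2.0)
print(grid[4][4][4])      # centre is well inside: prints -2.0
print(grid[0][0][0] > 0)  # corner is outside: prints True
```

Notice you never touch the mesh itself; you only edit the field, and the mesher rebuilds the surface from it.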
There's no one-to-one relationship between an iso value and the final mesh shape; you can't simply say, "I want this edge to extend by about 0.2 meters, so let me change this voxel value to 0.752." In other words, you don't have fine control over the final shape. This is still true even if you increase your voxel density crazily high (e.g., a voxel per centimeter). That's why I call it implicit editing.
There are some newer mesh construction methods that can give you very sharp edges, but these are not really made for voxel editing, as explained in the next point.
3. Traditional marching cubes is sufficient
As I just said, there has been new research on how to generate meshes out of iso data. In particular, Cubical Marching Squares and Efficient Voxel Octrees were papers I really enjoyed reading and playing with.
CMS can give you sharp edges IF you capture your data from polygons: it encodes where polygon edges intersect the cube faces. But good luck editing that data in real time (e.g., voxel editing like the C4 engine or 3D Coat). I just couldn't find any practical way to edit this data while preserving the edge intersection info correctly.
EVO is not about polygon generation. It's actually a ray-tracing voxel visualizer running on CUDA. It's fast enough to be real-time (very impressive! check out the demo!), and its genius way of storing voxel data as a surface (so, sort of 2D) instead of a volume deserves recognition. But the same problem applies here: I don't see any easy way to edit this data.
So basically, all these new methods are good for capturing polygons as voxel data and visualizing them without any alteration.
Sure, there can be some differences in the final meshes generated by traditional MC and CMS, but if your intention is editing voxel data rather than replicating the polygon data precisely, the difference is negligible. As long as your editing tool is WYSIWYG, either method is fine. And MC was about 30% faster in my tests (both methods unoptimized and implemented in a multi-threaded C# application).
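Part of why classic MC is so fast and easy to edit against is that its per-cube work is trivial. A sketch of its first step (the corner ordering here is illustrative; real implementations pair this index with the standard 256-entry triangle table, not shown):

```python
# Classic marching cubes, step one: build an 8-bit case index from
# the sign of the iso value at each of the cube's 8 corners. The
# index then selects a precomputed triangulation from a lookup table.

ISO_LEVEL = 0.0

def cube_case(corners):
    """corners: iso values at the 8 cube corners, in a fixed order."""
    index = 0
    for bit, value in enumerate(corners):
        if value < ISO_LEVEL:  # corner is inside the surface
            index |= 1 << bit
    return index

print(cube_case([1.0] * 8))           # all outside -> 0 (empty cube)
print(cube_case([-1.0] * 8))          # all inside -> 255 (full cube)
print(cube_case([-1.0] + [1.0] * 7))  # one corner inside -> case 1
```

Because each cube depends only on its own 8 samples, re-meshing after an edit is embarrassingly parallel, which is exactly what you want in an interactive editor.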
4. Optimize your vertex buffer
It's very common to get 1,000~2,000 triangles for a mesh generated from 10 x 10 x 10 voxels. If your voxel density is high and your world is large, that can easily give you 15 million triangles. Sending this much vertex data to the GPU is surely a bottleneck: I tested on two different GPUs, one very powerful and the other okay-ish, and both were choking.
So at least do some basic vertex buffer optimization: share vertices between triangles and average normals across the neighbouring faces. You will see a 30~50% reduction in vertex buffer size.
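Both optimizations fall out of one pass, sketched below under the assumption that the mesher emits raw position triples (as naive MC does); the function name and data layout are illustrative, not from any particular engine.

```python
# Weld duplicate vertices with a dict and accumulate face normals per
# vertex, so shared vertices end up with a smooth averaged normal.

def weld(triangles):
    """triangles: list of (p0, p1, p2), each p an (x, y, z) tuple.
    Returns (vertices, normals, indices) for an indexed vertex buffer."""
    vertices, normals, indices = [], [], []
    lookup = {}  # position -> index into vertices
    for p0, p1, p2 in triangles:
        # Face normal = cross product of two edges (left unnormalized,
        # so larger faces weigh more in the average).
        ux, uy, uz = (p1[i] - p0[i] for i in range(3))
        vx, vy, vz = (p2[i] - p0[i] for i in range(3))
        n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
        for p in (p0, p1, p2):
            if p not in lookup:
                lookup[p] = len(vertices)
                vertices.append(p)
                normals.append([0.0, 0.0, 0.0])
            i = lookup[p]
            for k in range(3):
                normals[i][k] += n[k]
            indices.append(i)
    for nrm in normals:  # normalize the accumulated normals
        length = (nrm[0]**2 + nrm[1]**2 + nrm[2]**2) ** 0.5 or 1.0
        for k in range(3):
            nrm[k] /= length
    return vertices, normals, indices

# Two triangles sharing an edge: 6 raw corners weld down to 4 vertices.
tris = [((0, 0, 0), (1, 0, 0), (0, 1, 0)),
        ((1, 0, 0), (1, 1, 0), (0, 1, 0))]
verts, norms, idx = weld(tris)
print(len(verts), len(idx))  # prints: 4 6
```

In a real pipeline you would weld with a small epsilon rather than exact tuple equality, since MC edge interpolation can produce near-identical floats on chunk borders.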
5. Use levels of detail
Simply sharing vertices won't be enough: a 30~50% saving from 15 million still leaves 8~10 million triangles. Still not good enough! Now you need to generate LOD meshes. If you want to do it in voxel space (and you probably should), you simply sample every n-th voxel for the LOD meshes (e.g., every 2nd voxel for the 1st LOD, every 4th for the 2nd, and so on).
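The every-n-th-voxel sampling is a one-liner per axis. A sketch, assuming a dense 3D list of iso values (names illustrative); the coarse grid is then fed to the same mesher as LOD 0:

```python
# LOD level k samples every (2**k)-th voxel along each axis. Using a
# grid size of the form 2**n + 1 keeps the boundary samples at every
# level, so chunk edges still line up.

def downsample(grid, lod):
    """Take every (2**lod)-th sample along each axis."""
    step = 2 ** lod
    return [[[grid[x][y][z]
              for z in range(0, len(grid[0][0]), step)]
             for y in range(0, len(grid[0]), step)]
            for x in range(0, len(grid), step)]

SIZE = 9  # 2**3 + 1
grid = [[[float(x + y + z) for z in range(SIZE)]
         for y in range(SIZE)] for x in range(SIZE)]
print(len(downsample(grid, 1)))  # prints: 5
print(len(downsample(grid, 2)))  # prints: 3
```

Note this is plain decimation; it keeps whatever value happens to sit at the sampled voxel. Averaging the skipped samples instead trades sharpness for fewer popping artifacts between levels.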
If you use CMS, doing LODs is not that hard. The algorithm keeps intersection point information, so there will be no discontinuity between lower and higher LODs (if you have full octree data, that is). But if you use MC as I suggested, you will see gaps. I haven't tried to fill in the gaps yet, but if I do, I will probably try the Transvoxel algorithm first. Also, Don Williams had great success with this.
You can also generate lower LODs from your higher-LOD polygons instead of using the voxel data directly. That's how Voxel Farm does it. This approach has a nice benefit: you can reproject the higher-LOD meshes and generate normal maps for the lower-LOD mesh. But I'm not a big fan of mesh simplification, for the reasons I explain below.
6. Mesh simplification might not satisfy you
Another way of optimizing your vertex data is mesh simplification. Voxel Farm does it and it looks good enough for them, but in my test cases it did not give me satisfactory results. I just didn't like the fact that I didn't have much control over which triangles would be merged and which would not. It's all based on the error metric you use, and I just was not able to find a one-size-fits-all function for this.
Again, you might find it works fine for your use case, but it was not the case for me.
K, that's all I can think of for now.. so happy voxeling… lol