# Peg Solitaire Solver in just 50 Lines of Code

This blog post describes a simple program I’ve written to solve the English peg solitaire. If you would like to refresh your “binary operation” knowledge, I’d definitely recommend reading the source code – you can find it at the end of this post! The algorithm is highly optimized and you might be able to pick up some tricks when reading through the code.

### The Game

Peg Solitaire works as follows: a peg (or ball) can be removed by orthogonally jumping over it with a second ball. This is only possible if the cell behind the first ball is empty. The goal is to finish the game with one remaining ball in the center. A found solution can be seen in the following animation.

*English Peg Solitaire Solution*

### Implementation

The implementation is highly optimized and uses bit operators to efficiently find a solution. The idea is as follows: since there are 33 slots on the board, it cannot be represented using an integer (32 bits), but we can use a long (64 bits). The first 49 bits (7 x 7) of the long represent the board. However, some bits are not valid and never used, i.e. 0, 1, 5, 6, 7, 8, 12, 13 and so on. Checking possible moves and applying them can then be done with simple bit operations.
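To make the representation concrete, here is a minimal sketch (the class and method names are mine, not part of the solver below; the constants are the ones the solver uses) of how a cell maps to a bit:

```java
// Minimal sketch of the bit-board representation (names are mine,
// the constant is the one used by the solver below).
public class BitBoard {
    // all 33 valid slots set, i.e. GOAL_BOARD | INITIAL_BOARD
    static final long VALID_BOARD_CELLS = 124141734710812L;

    // cell (row, col) on the 7 x 7 grid maps to bit index row * 7 + col
    static int bit(int row, int col) {
        return row * 7 + col;
    }

    // is this cell part of the cross-shaped board?
    static boolean isValid(int row, int col) {
        return ((1L << bit(row, col)) & VALID_BOARD_CELLS) != 0L;
    }

    // number of pegs on a given board
    static int pegCount(long board) {
        return Long.bitCount(board);
    }
}
```

For example, the center cell (3, 3) is bit 24, which is exactly the goal board constant 16777216 = 2^24.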

The really interesting part is that the algorithm is optimized by reverting the search direction. You can find more details on this in the comment header of the program below.

### Memory Usage

There is often a tradeoff with algorithms when it comes to memory usage and run time. With this particular algorithm it is no different. Remembering the boards that we have already seen (and not rechecking them unnecessarily) means that a solution is found in a very reasonable time (usually a few seconds). Without it, my computer has now been working on a solution for 24 hours (while using almost no RAM). In general, a gigabyte of RAM, used to remember the known boards, should be enough to allow a solution to be found.
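As a rough back-of-envelope check (my numbers, not from any measurement): on a 64-bit JVM an entry of a `HashSet<Long>` costs on the order of 50 bytes (boxed `Long`, hash-map node, table slot), so a gigabyte buys room for roughly twenty million boards:

```java
// Back-of-envelope estimate; the per-entry cost is an assumption, not measured
public class MemoryEstimate {
    static final long ASSUMED_BYTES_PER_BOARD = 50;

    // how many seen boards fit into the given number of gigabytes
    static long boardCapacity(long gigabytes) {
        return gigabytes * (1L << 30) / ASSUMED_BYTES_PER_BOARD;
    }
}
```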

### Run Time

Interestingly, the run time fluctuates highly, depending on the ordering of the possible moves. The program always checks the moves in the same order when looking at any given board, and sometimes this (initially determined) order is “unlucky”: the branch that the algorithm follows might not include a solution, but it is still searched in its entirety.

An idea to reduce the fluctuation would be to use heuristics and to rank the moves depending on the board, i.e. which move makes “more sense” for a given board.
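Such a heuristic is not part of the solver. Purely as an illustration, one hypothetical scoring of mine would be to try moves whose cells lie closer to the board center first:

```java
import java.util.*;

// Hypothetical move-ordering heuristic (not from the solver): try moves
// whose three involved cells lie closer to the board center first.
public class MoveOrdering {
    // Manhattan distance of a bit index from the center cell (3, 3)
    static int centerDistance(int bit) {
        return Math.abs(bit / 7 - 3) + Math.abs(bit % 7 - 3);
    }

    // score of a move mask: summed center distance of all set cells
    static int score(long mask) {
        int s = 0;
        for (int i = 0; i < 49; i++) {
            if (((1L << i) & mask) != 0L) s += centerDistance(i);
        }
        return s;
    }

    // sort moves so that the most central ones are tried first
    // (each move is a long[3]; index 2 holds all three involved cells)
    static void order(List<long[]> moves) {
        moves.sort(Comparator.comparingInt((long[] m) -> score(m[2])));
    }
}
```

Whether "central first" actually helps would have to be measured; the point is only that a deterministic ranking replaces the shuffle.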

### The Algorithm

Update: In the following you’ll find the Java version, but the reddit user leonardo_m has also translated the code to C++! http://www.reddit.com/r/programming/comments/24meqs/peg_solitaire_solver_in_just_50_lines/ch8xu9s

The following is 52 lines of code according to IntelliJ’s LoC metric.

```java
import java.util.*;

public class PegSolitaireSolver {
    private static final HashSet<Long> seenBoards = new HashSet<Long>();
    private static final ArrayList<Long> solution = new ArrayList<Long>();

    private static final long GOAL_BOARD = 16777216L;
    private static final long INITIAL_BOARD = 124141717933596L;
    private static final long VALID_BOARD_CELLS = 124141734710812L;

    private static final long[][] moves = new long[76][];

    private static void printBoard(long board) {
        for (int i = 0; i < 49; i++) {
            boolean validCell = ((1L << i) & VALID_BOARD_CELLS) != 0L;
            System.out.print(validCell ? (((1L << i) & board) != 0L ? "X " : "O ") : "  ");
            if (i % 7 == 6) System.out.println();
        }
        System.out.println("-------------");
    }

    private static void createMoves(int bit1, int bit2, int bit3, ArrayList<long[]> moves) {
        moves.add(new long[]{(1L << bit1), (1L << bit2) | (1L << bit3), (1L << bit1) | (1L << bit2) | (1L << bit3)});
        moves.add(new long[]{(1L << bit3), (1L << bit2) | (1L << bit1), (1L << bit1) | (1L << bit2) | (1L << bit3)});
    }

    private static boolean search(long board) {
        for (long[] move : moves) {
            if ((move[1] & board) == 0L && (move[0] & board) != 0L) {
                long newBoard = board ^ move[2];
                if (!seenBoards.contains(newBoard)) {
                    seenBoards.add(newBoard);
                    if (newBoard == INITIAL_BOARD || search(newBoard)) {
                        solution.add(newBoard);
                        return true;
                    }
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        long time = System.currentTimeMillis();

        ArrayList<long[]> moves = new ArrayList<long[]>();
        int[] starts = new int[] {2,9,14,15,16,17,18,21,22,23,24,25,28,29,30,31,32,37,44};
        for (int start : starts) {
            createMoves(start, start + 1, start + 2, moves);
            createMoves((start % 7) * 7 + start / 7, (start % 7 + 1) * 7 + start / 7, (start % 7 + 2) * 7 + start / 7, moves);
        }
        Collections.shuffle(moves);
        moves.toArray(PegSolitaireSolver.moves);

        search(GOAL_BOARD);
        solution.add(GOAL_BOARD);

        for (long step : solution)
            printBoard(step);
        System.out.println("Completed in " + (System.currentTimeMillis() - time) + " ms.");
    }
}
```

And here is the commented version.

```java
import java.util.*;

/**
* Peg Solitaire Solver
* Copyright (C) 2014 blackflux.com <pegsolitaire@blackflux.com>
*
*  This program is free software: you can redistribute it and/or modify
*  it under the terms of the GNU General Public License version 3 as
*  published by the Free Software Foundation.
*
*  This program is distributed in the hope that it will be useful,
*  but WITHOUT ANY WARRANTY; without even the implied warranty of
*  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
*  GNU General Public License for more details.
*
*  You should have received a copy of the GNU General Public License
*  along with this program.  If not, see <http://www.gnu.org/licenses/>.
*/

/**
 * Solver for the English peg solitaire.
 * This program finds a random solution for the peg solitaire game by using brute force.
 *
 * -- Runtime
 * A solution is typically found in less than two seconds, but the time does highly
 * fluctuate (I've seen everything from a few milliseconds to several seconds).
 *
 * -- Implementation
 *
 * The implementation is highly optimized and uses bit operators to efficiently find
 * a solution. The idea is as follows: Since there are 33 slots on the board, it
 * cannot be represented using an integer (32 bits), but we can use a long (64 bits).
 * The first 49 bits (7 x 7) of the long represent the board. However, some bits
 * are not valid and never used, i.e. 0,1,5,6,7,8,12,13 and so on. Checking
 * possible moves and applying them can be done by using simple bit operations.
 *
 * A recursive function is then used to check all possible moves for a given board,
 * applying each valid move and calling itself with the resulting board. The recursion is
 * done "in reverse", starting from the goal board. While this is not conceptually faster [a],
 * it allows for a minimum amount of bit operations in the recursion:
 *
 * To reverse a move we can simply check
 * - (board & twoBalls) == 0 and
 * - (board & oneBall) != 0
 * where "twoBalls" indicates the two balls that would need to be added for this reversed move.
 * If we instead used the intuitive search direction, the same check would require additional
 * binary operations, since a simple inversion of the check would not work [b].
 *
 * The paper [1] shows how the moves can be ordered to almost instantly find a solution.
 * The website [2] gives a nice overview of binary operations and some tricks that
 * can be applied.
 *
 * [a] Playing the game in reverse is simply the inversion of the original game - just remove all
 * balls from the board and place balls where there were none before and you'll understand
 * what I mean.
 * [b] There is no "single" binary operation to check if two specific bits are set, but there
 * is one to check if they are both zero. There is further a binary operation to check if a
 * specific bit is set.
 *
 * [1] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.6.4826 (download at the top)
 * [2] http://graphics.stanford.edu/~seander/bithacks.html
 */
public class Solver {

    // list of seen boards - this is used to prevent rechecking of paths
    private static final HashSet<Long> seenBoards = new HashSet<Long>();

    // list of solution boards in ascending order - filled in once the solution is found
    private static final ArrayList<Long> solution = new ArrayList<Long>();

    // -------

    // goal board (one marble in center)
    private static final long GOAL_BOARD = 16777216L;

    // initial board (one marble free in center)
    private static final long INITIAL_BOARD = 124141717933596L;

    // board that contains a ball in every available slot, i.e. GOAL_BOARD | INITIAL_BOARD
    private static final long VALID_BOARD_CELLS = 124141734710812L;

    // holds all 76 moves that are possible
    // the inner array is structured as follows:
    // - first entry holds the peg that is added by the move
    // - second entry holds the two pegs that are removed by the move
    // - third entry holds all three involved pegs
    private static final long[][] moves = new long[76][];

    // -------

    // print the board
    private static void printBoard(long board) {
        // loop over all cells (the board is 7 x 7)
        for (int i = 0; i < 49; i++) {
            boolean validCell = ((1L << i) & VALID_BOARD_CELLS) != 0L;
            System.out.print(validCell ? (((1L << i) & board) != 0L ? "X " : "O ") : "  ");
            if (i % 7 == 6) System.out.println();
        }
        System.out.println("-------------");
    }

    // create the two possible moves for the three given pegs
    // (this function assumes that the pegs are in one continuous line)
    private static void createMoves(int bit1, int bit2, int bit3, ArrayList<long[]> moves) {
        moves.add(new long[]{(1L << bit1), (1L << bit2) | (1L << bit3),
                (1L << bit1) | (1L << bit2) | (1L << bit3)});
        moves.add(new long[]{(1L << bit3), (1L << bit2) | (1L << bit1),
                (1L << bit1) | (1L << bit2) | (1L << bit3)});
    }

    // do the calculation recursively by starting from
    // the "GOAL_BOARD" and doing moves in reverse
    private static boolean search(long board) {
        // for all possible moves
        for (long[] move : moves) {
            // check if the move is valid
            // Note: we place the "two balls" check first since it is more
            // likely to fail. This saves about 20% in run time (!)
            if ((move[1] & board) == 0L && (move[0] & board) != 0L) {
                // calculate the board after this move was applied
                long newBoard = board ^ move[2];
                // only continue processing if we have not seen this board before
                if (!seenBoards.contains(newBoard)) {
                    seenBoards.add(newBoard);
                    // check if the initial board is reached
                    if (newBoard == INITIAL_BOARD || search(newBoard)) {
                        // record the solution step while the recursion unwinds
                        solution.add(newBoard);
                        return true;
                    }
                }
            }
        }
        return false;
    }

    // the main method
    public static void main(String[] args) {
        // to measure the overall runtime of the program
        long time = System.currentTimeMillis();

        // generate all possible moves
        ArrayList<long[]> moves = new ArrayList<long[]>();
        // holds all starting positions in west-east direction
        int[] startsX = new int[] {2,9,14,15,16,17,18,21,22,23,24,25,28,29,30,31,32,37,44};
        for (int x : startsX) {
            createMoves(x, x + 1, x + 2, moves);
        }
        // holds all starting positions in north-south direction
        int[] startsY = new int[] {2,3,4,9,10,11,14,15,16,17,18,19,20,23,24,25,30,31,32};
        for (int y : startsY) {
            createMoves(y, y + 7, y + 14, moves);
        }
        // randomize the order of the moves (this highly influences the resulting runtime)
        Collections.shuffle(moves);
        // fill in the global moves variable that is used by the solver
        moves.toArray(Solver.moves);

        // start the recursive search for the initial board from the goal (reverse direction!)
        search(GOAL_BOARD);

        // add the starting board (as this board is not added by the recursive function)
        solution.add(GOAL_BOARD);

        // print required time
        System.out.println("Completed in " + (System.currentTimeMillis() - time) + " ms.");

        // print the found solution
        for (long step : solution) {
            printBoard(step);
        }
    }
}
```

If you enjoyed this article, please consider following me on twitter. I love technical stuff and if you do too, chances are you will enjoy what I tweet =)

# Meshing in Voxel Engines – Part 3

This is the third and final part of a blog post on meshing in voxel engines. If you haven’t already, I would strongly suggest that you read the first and second part before continuing here! In this part I will directly reference the previous two parts!

Go back to second part.

We will first see some important, yet often overlooked details and then draw a conclusion.

## World Segmentation

World segmentation refers to the fact that you will typically not mesh all voxels in one plane at a time, but rather segment them and do the triangulation on these segments. The main reason is overhead: you need to remove the old triangles from the world and add the new triangles. When one voxel is removed you don’t want to recreate half of the triangles in your world.

Another reason why you would consider world segmentation is the meshing algorithm itself. Feeding small chunks into it might be much faster than feeding huge areas at once. Furthermore, if the algorithm does create long and thin rectangles, world segmentation also puts a restriction on that, since a triangle can only live within a segment.

So, why is world segmentation problematic? Simple – because it creates T-junction problems. When the world is segmented one segment has “no idea” what is happening in the neighboring segments and triangles might not align.

In this case the T-Junction problem can be prevented by considering the neighboring segments when creating the polygon and adding polygon points in the appropriate places. These points are automatically considered by Poly2Tri when creating the triangulation.
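A sketch of what that point insertion could look like (a hypothetical helper of mine, assuming a vertical border edge running from y0 up to y1):

```java
import java.util.*;

// Hypothetical helper: subdivide one of our border edges at every y coordinate
// where the neighboring segment already has a vertex, so both sides end up
// with identical points along the shared border (no T-junctions).
public class BorderStitch {
    // the edge runs from (x, y0) to (x, y1) with y0 < y1 (assumed);
    // neighborYs holds the neighbor's vertex y coordinates on this border
    static List<Integer> subdivide(int y0, int y1, SortedSet<Integer> neighborYs) {
        List<Integer> ys = new ArrayList<>();
        ys.add(y0);
        // insert every neighbor vertex that lies strictly inside the edge
        ys.addAll(neighborYs.subSet(y0 + 1, y1));
        ys.add(y1);
        return ys;
    }
}
```

Poly2Tri then receives the subdivided edge as part of the polygon outline and keeps those points in the triangulation.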

Finally, when a segment is updated, it needs to check whether its borders have changed and trigger a refresh of the neighboring segments if this is the case.

## Vertex Reduction and Normals

Something we didn’t consider until now is the vertex count. As was noted before, the Greedy Meshing method creates the most vertices of all algorithms, Monotone Meshing is already much better, and Poly2Tri is optimal. So if vertex count is a concern, you already know what to go with.

The next question is now, how should we deal with duplicate vertices? And how does this affect our normals? While it sounds like a great idea to merge vertices, this will mean that it gets very hard to rotate normals correctly. Consider the example of a 2x2x2 space with voxels (0,0,0) and (1,1,1) set. How would you align the normal in between the voxels?

So in general it seems like one should duplicate all vertices and align the normals to the corresponding face. However, it is possible to use vertex reduction and get away with it if you are using flat shading (i.e. the vertices are only used to compute the face orientation). If you plan on doing smooth shading there is no way around vertex duplication.

*Merged Normals (without triangle reduction) with Flat Shading (l) and Smooth Shading (r)*

We notice immediately that the flat shading is correct while the smooth shading looks very distorted. Even though vertices can be reduced significantly (the same vertex can appear up to six times if you are using the Poly2Tri reduction), consider if you really need it. On modern hardware the impact should be small.
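If you do opt for reduction (flat shading only, as discussed), the deduplication itself is just a map from position to index. A small hypothetical helper, assuming non-negative integer voxel coordinates:

```java
import java.util.*;

// Hypothetical helper: reuse an existing vertex index when a position was
// seen before (only safe with flat shading, as discussed above).
public class VertexMerge {
    final Map<Long, Integer> seen = new HashMap<>();
    final List<int[]> vertices = new ArrayList<>();

    // pack the position into one key (assumes 0 <= coordinate < 2^20)
    int indexOf(int x, int y, int z) {
        long key = ((long) x << 40) | ((long) y << 20) | z;
        Integer idx = seen.get(key);
        if (idx != null) return idx;
        vertices.add(new int[]{x, y, z});
        seen.put(key, vertices.size() - 1);
        return vertices.size() - 1;
    }
}
```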

## Alternative: Hiding the T-Junction Problem

A completely different approach to the T-Junction problem was suggested by David Williams. The idea is to use a post processing step that checks the z-buffer of the rendering output and fills in anomalies.

This approach has the flaw that it requires additional computational power, while T-Junction problems typically occur on hardware with limited computational power – but I couldn’t wait to test it anyway.

What is done is simply checking if a pixel has a very different z-buffer value compared to the surrounding pixels. As you can see the result is amazing (all the T-Junctions are gone).

However using a shader and the z-buffer to fix the issue is pointless in our scenario. If you are concerned about triangle count, this means speed is an issue. And a shader is typically slow compared to using the Poly2Tri method and a few more triangles.

Using a shader might be a great option if you have no control over the mesh data.

## VoxelShop – Test it yourself

If you are curious how the different algorithms perform on your voxel models, you can try them out using VoxelShop. Import your model and type /study into the console. If you want to compare run times, make sure you run it a few times to give the JVM a chance to compile the functions to native code.

The program will then compute the stats for the different algorithms. This should give you a good intuition. For the above dragon model I computed the following stats (times are averaged):

• Naive Greedy Meshing: 69474 triangles in ~28ms
• Optimal Greedy Meshing: 67992 triangles in ~240ms
• Monotone Meshing: 72374 triangles in ~31ms
• Modified Monotone Meshing (Save in 2D): 80722 triangles in ~37ms
• Poly2Tri Meshing: 79482 triangles in ~210ms

Without reduction (using two triangles per visible voxel face) the triangle count would have been 156580. This is quite a significant reduction for all algorithms. However the dragon model doesn’t have many plane surfaces and so for other models we can expect a much better reduction.

There is barely any difference between the run times of the Naive Greedy, Monotone and Modified Monotone meshing methods (the overhead is causing most of it). Only the Poly2Tri Meshing and the Optimal Greedy Meshing are noticeably slower, which is expected since the problems they are solving are more complicated than those of the other algorithms. Note that the Optimal Greedy Meshing algorithm in particular is not well optimized w.r.t. the data structures.

The tree model from before results in the following stats (times omitted, because they are so short that other factors cause too much variation):

• Naive Greedy Meshing: 1062 triangles
• Optimal Greedy Meshing: 1046 triangles
• Monotone Meshing: 1083 triangles
• Modified Monotone Meshing (Save in 2D): 1218 triangles
• Poly2Tri Meshing: 1132 triangles

The unreduced model would have had 4700 triangles. I’m showing one final model to emphasize that the Greedy Meshing method is not always better than Monotone Meshing (times omitted again):

• Naive Greedy Meshing: 1586 triangles
• Optimal Greedy Meshing: 1522 triangles
• Monotone Meshing: 1581 triangles
• Modified Monotone Meshing (Save in 2D): 1924 triangles
• Poly2Tri Meshing: 1868 triangles

This is a realistic model and the Monotone Meshing algorithm did beat the Naive Greedy Meshing (not by much though). We notice the zigzag shape that reminds us of the example we have seen in the previous part, in which Monotone Meshing always outperforms the Greedy Meshing. The model would have had 3968 triangles without reduction.

In general we can see that the triangle count for the Optimal Greedy Meshing is not that much better than for the Naive Greedy Meshing (usually just a few percent). The run-time is very comparable to the Poly2Tri meshing and could still be improved further.

## Afterthought

In theory one should be able to experience the T-Junction problem with the following scenario (which also Poly2Tri meshing fails to correct).

However I have yet to see the background shining through for this case. I’m not sure why it is not happening, but if you have a good explanation or any experience I’d be very happy to hear about it!

## Conclusion

Originally I was planning to compare run-times and triangles created by the different algorithms more in depth. However this is a bit pointless, since there are clear “winners” already.

If you do not care for the T-Junction problem for whatever reason (you probably should!) and want a simple and fast algorithm, use the Naive Greedy Meshing approach described above. The number of triangles created is sufficiently low and the speed is blazingly fast.

If you do not care for the T-Junction issue and you absolutely want the minimal amount of triangles (e.g. when exporting static objects and speed is not a concern), you should combine the Optimal Greedy Meshing with the Original Monotone Meshing approach.
Note that you need to test all four rotations for the Monotone Meshing (as we have seen in Part One).

And finally, if you do care about the T-Junction problem, split your voxels into polygons (with holes) and use the poly2tri library. There are other libraries that can do similar things, but I found poly2tri to be the best.

## Edit (2016):

Poly2Tri generates T-Junction issues under certain circumstances as described above. However it is relatively easy to correct these by adding additional points before feeding the polygon to the Poly2Tri algorithm. To determine where to insert points into the polygon you need to consider the adjacent edges in 3D. When exporting from VoxelShop you will never experience any T-Junction issues as the required points are inserted.


# Meshing in Voxel Engines – Part 2

This is the second part of a blog post on meshing in voxel engines. If you haven’t already, I would strongly suggest that you read the first part before continuing here! In this part I will directly reference the previous part!

Go back to first part.

## How to overcome the T-Junction problem

As we have seen before, the problem with the (Naive) Greedy Meshing and Monotone Meshing method is that additional vertices are created. However the underlying problem is that we were never considering the weakly simple polygon, which the voxel faces form as a whole, but only parts of it.

Triangulating a polygon with holes is by no means an easy undertaking. Especially if you’re aiming to avoid long and thin triangles it gets tricky. This paper describes, based on Seidel’s algorithm, how it can be done, but produces many long and thin triangles. These kinds of triangles can cause rendering artifacts, but they can be avoided by flipping them according to the Delaunay condition as described here. In general it would be nice to find the minimum weight triangulation, but unfortunately that problem is known to be NP-hard.

Apparently it is very tricky to create a solid triangulation algorithm, and I had already spent so much time on the other algorithms that I wasn’t too keen on trying. So, with a few requirements in my head, I started looking around for alternatives. This page is slightly outdated, but gives a great overview.

## Poly2Tri Meshing

The reasons why I chose the Poly2Tri library over its competitors were the following:

• Handles holes
• Extremely robust (the few complaints I read were written by people not understanding the input requirements)
• Available in many programming languages
• Long and thin triangles are avoided (judging from my test data)

The algorithm is partly based on the paper “Sweep-line algorithm for constrained Delaunay triangulation” by V. Domiter and B. Zalik. The author came up with other parts of the algorithm himself, so I have no idea how exactly it works [reference]. The only explanation I could find is the image above.

What we do care for is that it is capable of triangulating any weakly simple polygons and not creating additional vertices. There might be libraries that do this faster, but I have yet to see conclusive data. The speed is in my experience more than sufficient.

## Preparing the Input

For input the Poly2Tri algorithm expects the polygon outline and (if present) the contained holes. So the next question is: how do we convert the voxels into a polygon outline? This can be done by scanning our bit array from before for edges and then merging these edges while detecting whether they are polygon outlines or holes. When four edges meet, it’s a bit tricky to traverse the correct edge and do so efficiently.

The algorithm is described in its basic form here and is relatively easy to implement (just use a hash map to do the traversing). Alternatively you could try and find a library that does the polygon extraction for you (the library I linked is really slow and resource hungry though).
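As a rough sketch of the detection phase (my own simplified version, without the outline/hole classification): every filled cell contributes one directed edge per empty neighbor, and the directed edges can then be chained into loops with a hash map keyed by their start point:

```java
import java.util.*;

// Sketch (my own, simplified): emit one directed edge for every voxel face
// that borders an empty cell. The edges are oriented so that consecutive
// edges of a loop share endpoints, which makes chaining them a map lookup
// from start point to edge. Each edge is {x0, y0, x1, y1} in lattice points.
public class OutlineEdges {
    static List<int[]> boundaryEdges(boolean[][] grid) {
        int h = grid.length, w = grid[0].length;
        List<int[]> edges = new ArrayList<>();
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (!grid[y][x]) continue;
                // top edge, left to right, if the cell above is empty
                if (y == 0 || !grid[y - 1][x])
                    edges.add(new int[]{x, y, x + 1, y});
                // bottom edge, right to left
                if (y == h - 1 || !grid[y + 1][x])
                    edges.add(new int[]{x + 1, y + 1, x, y + 1});
                // left edge, bottom to top
                if (x == 0 || !grid[y][x - 1])
                    edges.add(new int[]{x, y + 1, x, y});
                // right edge, top to bottom
                if (x == w - 1 || !grid[y][x + 1])
                    edges.add(new int[]{x + 1, y, x + 1, y + 1});
            }
        }
        return edges;
    }
}
```

The tricky four-edges-meet case mentioned above shows up here as two edges sharing the same start point, which is where the traversal has to pick the correct continuation.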

Taking a look at the documentation of Poly2Tri, there are still more issues to overcome. The major concerns are that there must not be duplicate points and that interior holes must not touch other holes, nor the polygon outline.

Fortunately we can merge such holes together or into the polygon outline. This can even be done in the detection phase.

*Polygon Outlines in Red, Holes in Blue, Before Merging (l), After Merging (r)*

On the right-hand side we see that the two interior holes now form one big hole and that the hole that touched the border has become a continuation of the border.

Now we shift the points that are duplicates away from each other (making sure they are shifted in the correct direction).

In the previous image the triangulation of four different meshing algorithms (variants) can be seen. We notice immediately that the Greedy Meshing looks “cleanest” due to the rectangle structure. If the T-Junction problem does not concern you, it’s a simple but great algorithm.

In the next two images the Monotone Meshing algorithm demonstrates its biggest weakness: many long and thin triangles are created. Especially the “save” variation of the algorithm produces many highly problematic triangles.

And finally the beauty of the Poly2Tri Meshing method is that it never creates new vertices and hence no problem edges appear. The triangle count is larger than for the Greedy Meshing or the original Monotone Meshing approach, but in my experience not by much. You can see that long and thin triangles are avoided.

Note: Poly2Tri allows for adding so-called Steiner points to define interior points of the polygon. I haven’t tried it, but in theory this should help with overlarge or thin triangles, as these can still appear when the vertex positioning doesn’t allow for a good triangulation. There is even an add-on called ofxPoly2Tri that can help with bad triangles.

## Optimal Greedy Meshing

Now that we have seen how to avoid the T-Junction problem, let’s for a moment assume you don’t care about this issue. Deep down I really wanted to know how many triangles could be saved with an Optimal Greedy Meshing method.

After some testing I determined that the recursive approach I suggested before gets extremely slow in certain scenarios. So that was not an option. Luckily someone smart had already looked at the issue some thirty years ago! The algorithm idea that I’m using is described here.

Unfortunately not all steps are clear, and especially the final conversion into rectangles is not described at all. So I had to come up with my own algorithm. To save you some time I’ll try to describe how the Optimal Greedy Meshing works in a bit more detail. If you are not interested in the details, feel free to skip the rest of this section (it’s fun though and even involves a German King!).

*Set of Good Edges (l), Maximum Independent Set (c), Additional Edges drawn from unused Vertices (r)*

Let’s consider the example that is given in the paper linked before. The basic idea is the following (as highlighted in the previous image): We first look at the vertical and horizontal edges that can be drawn between concave vertices. These are so-called “good edges”. Out of those we take a maximal set (i.e. as many edges as we can get such that they do not intersect). Then we use those to “slice” our polygon and use all unused concave vertices to draw additional edges (one per vertex, until we hit another edge, the polygon outline or a hole). The paper proves that this induces an optimal cover with rectangles. So much for the theory – let’s go into a bit more detail.

1) Determining the concave points. Since it is much more efficient to work with a polygon, compared to a bit array, we start by converting our voxel input into a polygon with holes (as described in the previous section). Then we loop over all points in the polygon and determine if they are concave (hint: a point is concave iff exactly three bits are set out of the four bits surrounding the point).
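The hint in step 1 translates almost directly into code (a small sketch of mine, written against the bit array for clarity):

```java
public class Concave {
    // a lattice point (px, py) touches up to four cells:
    // (px-1, py-1), (px, py-1), (px-1, py), (px, py)
    static boolean filled(boolean[][] grid, int x, int y) {
        return y >= 0 && y < grid.length && x >= 0 && x < grid[0].length && grid[y][x];
    }

    // a point is a concave corner iff exactly three of its four cells are filled
    static boolean isConcave(boolean[][] grid, int px, int py) {
        int count = 0;
        if (filled(grid, px - 1, py - 1)) count++;
        if (filled(grid, px, py - 1)) count++;
        if (filled(grid, px - 1, py)) count++;
        if (filled(grid, px, py)) count++;
        return count == 3;
    }
}
```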

2) Finding the good edges. To find the good edges we check if two concave points have the same x or y coordinate. If this is the case we further need to check that “nothing is in between”. If this is also the case we have found a good edge. In the last picture (left) these edges were assigned the numbers one to eight.

3) Finding a maximal independent set. This part is a bit complicated and we need to use a number of different algorithms. The upside is that these algorithms are well documented and we just need to follow instructions for the most part. The initial idea is that the edges can be seen as a bipartite graph where two entries are connected iff they intersect. That this forms a bipartite graph is not hard to see, since two vertical or two horizontal edges cannot intersect. In our example the bipartite graph looks as follows: The two sets are (1, 4, 5, 6) and (2, 3, 7, 8) and the mappings are 1 -> (7), 4 -> (3, 7, 8), 5 -> (7), 6 -> (2, 3, 7), 2 -> (6), 3 -> (4, 6), 7 -> (1, 4, 5, 6) and 8 -> (4). Now that we have our bipartite graph, we can shoot a few well known algorithms at it:

• First we use the Hopcroft-Karp algorithm to find a maximum cardinality matching (a set containing as many edges as possible such that no two edges share an endpoint).
• Then we use König’s theorem to find a minimum vertex cover (a minimal set of vertices – in our case the vertices are the good edges – such that every edge of the graph has at least one endpoint in the set).
• The complement of a minimum vertex cover is a maximum independent set (and exactly what we’re looking for!).

Note: Finding a minimum vertex cover is NP-hard for general graphs. So we were really lucky that the graph was bipartite!
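To give an impression of step 3, here is a compact sketch of mine. It substitutes Kuhn's simpler augmenting-path matching for Hopcroft-Karp (same result, worse asymptotic run time), then applies König's construction; `adj.get(u)` lists the vertical good edges crossing horizontal good edge u:

```java
import java.util.*;

// Sketch: maximum independent set in a bipartite intersection graph via
// maximum matching (Kuhn's algorithm here, Hopcroft-Karp in the real thing)
// and König's theorem.
public class MaxIndependentSet {
    // try to find an augmenting path starting at left vertex u
    static boolean augment(int u, List<List<Integer>> adj, int[] matchR, boolean[] seen) {
        for (int v : adj.get(u)) {
            if (seen[v]) continue;
            seen[v] = true;
            if (matchR[v] == -1 || augment(matchR[v], adj, matchR, seen)) {
                matchR[v] = u;
                return true;
            }
        }
        return false;
    }

    // returns {leftInSet, rightInSet} flags for a maximum independent set
    static boolean[][] independentSet(List<List<Integer>> adj, int nRight) {
        int nLeft = adj.size();
        int[] matchR = new int[nRight];
        Arrays.fill(matchR, -1);
        for (int u = 0; u < nLeft; u++)
            augment(u, adj, matchR, new boolean[nRight]);

        int[] matchL = new int[nLeft];
        Arrays.fill(matchL, -1);
        for (int v = 0; v < nRight; v++)
            if (matchR[v] != -1) matchL[matchR[v]] = v;

        // König: alternating reachability from the unmatched left vertices
        boolean[] zL = new boolean[nLeft], zR = new boolean[nRight];
        Deque<Integer> stack = new ArrayDeque<>();
        for (int u = 0; u < nLeft; u++)
            if (matchL[u] == -1) { zL[u] = true; stack.push(u); }
        while (!stack.isEmpty()) {
            int u = stack.pop();
            for (int v : adj.get(u)) {
                if (zR[v]) continue;
                zR[v] = true;              // reached via a non-matching edge
                int w = matchR[v];         // follow the matching edge back
                if (w != -1 && !zL[w]) { zL[w] = true; stack.push(w); }
            }
        }
        // minimum cover = (L \ Z) ∪ (R ∩ Z); the independent set is its complement
        boolean[] inL = new boolean[nLeft], inR = new boolean[nRight];
        for (int u = 0; u < nLeft; u++) inL[u] = zL[u];
        for (int v = 0; v < nRight; v++) inR[v] = !zR[v];
        return new boolean[][]{inL, inR};
    }
}
```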

4) Finding the rectangles. Now the really difficult part starts. Given a polygon and a maximal independent set of edges we need to find the induced set of rectangles (basically we’re at the center picture at this moment). This step is not mentioned in any of the literature that I looked at – don’t even ask for an algorithm. Fortunately the problem can be solved very efficiently by sorting all polygon points and looping over them once.

The fundamental idea is to use a sweep-line algorithm (left to right). An int array that has the size of the height of our polygon holds the information whether a rectangle was started and if so at which x coordinate (for every y coordinate).

First we need to determine for each point whether it is an “opening” or a “closing” point. A point p is opening iff the y coordinate of the previous point is smaller or the y coordinate of the next point is bigger than the y coordinate of p (if you think about it, this is a very intuitive definition!). Next and previous are defined w.r.t. the ordering of the polygon outline (or hole) that p is found in.

Then we need to sort all points P in the polygon by x coordinate and, where those are equal, by y coordinate. Note that duplicate points can exist, but if this is the case one is opening and one is closing.
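The classification and the sort can be written down directly (a sketch of mine; the point ordering is the one in which the outline was extracted):

```java
import java.util.*;

// Sketch: sweep-line preparation for the rectangle-building step.
public class SweepPrep {
    // points[i] = {x, y} along the polygon outline (or hole), in order;
    // a point is "opening" iff prev.y < p.y or next.y > p.y
    static boolean isOpening(int[][] points, int i) {
        int n = points.length;
        int[] p = points[i];
        int[] prev = points[(i + n - 1) % n], next = points[(i + 1) % n];
        return prev[1] < p[1] || next[1] > p[1];
    }

    // sort by x, ties broken by y - the sweep order described above
    static void sortForSweep(int[][] points) {
        Arrays.sort(points, (a, b) -> a[0] != b[0] ? a[0] - b[0] : a[1] - b[1]);
    }
}
```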

Looping over these sorted points, and initializing the Boolean variables “removing” and “adding” with false and the sweep-line array with nulls, we can build all rectangles according to the following pseudo code:

When a rectangle is added the sweep-line int array is initialized with the current x coordinate (this is the “memory” to know where a rectangle started). And when a rectangle is finished this value is used to build the rectangle (and then updated to null or the x value).

For the picture above, the sweep-line state changes as follows (read top to bottom).

In this image you can see how the closing and opening points change the sweep-line array and you get an idea how the rectangles are formed.

As can be seen in the previous image, the Optimal Greedy Meshing tends to produce a lot of long triangles. This is caused by the last step: if there is an option to go in a vertical or horizontal direction from a vertex, the vertical edge is always chosen. This could be changed, but might require a more complex algorithm than the one above. We note that the Naive Greedy Meshing is also affected by this problem. Fortunately world segmentation (explained later) can help with this.

Implementing the algorithm took a bit longer than initially estimated, but it was a lot of fun! In general I reckon that the extra saving in triangle count is not worth the extra effort (the saving is just a few percent – more details in the next part!). Currently some friends and I are trying to apply the idea to three dimensions (which I’m not sure is possible). If we get it to work, you’ll surely hear about it!

That completes the implementation description of the Optimal Greedy Meshing. Let’s continue with some more general stuff in the next part before we get to the conclusion.

Continue to the third part.

If you enjoyed this article, please consider following me on Twitter. I love technical stuff, and if you do too, chances are you will enjoy what I tweet =)

# Meshing in Voxel Engines – Part 1

Today I would like to talk about meshing in voxel engines. There have been two great blog posts on this topic by Mikola Lysenko, which provide fundamental information and served as my starting point. If you haven’t already, I suggest you head over there and give them a read.

In particular we’re going to compare “Naive Greedy Meshing” and “Monotone Meshing” and analyze when you should or shouldn’t use them in your voxel engine. Furthermore, we will outline a viable alternative that fully overcomes the T-Junction problem. At the end we will also take a quick look at the T-Junction problem w.r.t. world segmentation and discuss whether vertex reduction is a good idea. If you are only interested in the essence of this post, feel free to scroll down.

## What is the T-Junction problem?

The T-Junction problem refers to the fact that in certain scenarios you will get “see through” artifacts due to rounding issues. This means that, even though the geometry should be solid, you can see the background shining through. It only happens if triangle edges that lie on the same line do not share the same two endpoints. This phenomenon frequently occurs at T-Junctions, hence the name.

If we look at the underlying geometry we can see what is causing the issue. The two edges T1 and T2 in question do not share any points.

There has been some debate about whether the T-Junction problem is still relevant today, since many graphics cards now work with higher precision. From what I have seen, the problem does still exist. Modern PC hardware might not be affected, and anti-aliasing can help to obfuscate the artifacts, but mobile hardware and software renderers aim to reduce computational complexity, which can mean relevant rounding errors. If you are not sure how and in which graphics engine your voxel engine will be used, this should concern you.

## Naive Greedy Meshing

This method is a simplified version of the Greedy Meshing method described before. We start with a two-dimensional bit array that has a one set iff a voxel is present at that position. We then loop over the voxels, and whenever a one is found, we expand this position first in the x direction and then in the y direction. We store the rectangle found this way and erase all “one” bits contained in it. Then we continue the loop.
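The whole method fits in a few lines. Here is a hedged Python sketch (the names are my own; the original lives inside a voxel engine):

```python
def greedy_mesh(grid):
    """Naive greedy meshing over a 2D bit grid.

    grid: 2D list of 0/1, where a set bit marks a voxel face to mesh.
    Returns a list of rectangles (x, y, width, height).
    """
    grid = [row[:] for row in grid]  # work on a copy, we erase bits below
    h, w = len(grid), len(grid[0])
    rects = []
    for y in range(h):
        for x in range(w):
            if not grid[y][x]:
                continue
            # expand in the x direction first
            rw = 1
            while x + rw < w and grid[y][x + rw]:
                rw += 1
            # then expand in the y direction while the full row still fits
            rh = 1
            while y + rh < h and all(grid[y + rh][x:x + rw]):
                rh += 1
            # erase the covered bits and store the rectangle
            for yy in range(y, y + rh):
                for xx in range(x, x + rw):
                    grid[yy][xx] = 0
            rects.append((x, y, rw, rh))
    return rects
```

For a full 2x2 block this emits a single `(0, 0, 2, 2)` rectangle, i.e. two triangles instead of eight.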

As you can see in the example, the algorithm does not always minimize the number of triangles (e.g. placing one rectangle from top to bottom in the rightmost column would reduce the triangle count). The advantage, however, is that it is lightning fast and very easy to implement.

If you are really concerned with triangle count, you should consider recursively checking all possible variations and even comparing the final result to the Monotone Meshing result. If you do the recursion in a clever way, and depending on the size of the world segments you choose, this is feasible. However, in my experience the difference in triangle count is not very large.

The most important thing to note is that the greedy approach does produce T-Junction problems. I’m sure you can spot many edges that could turn out problematic.

We further notice the rectangular structure of the output. This can make managing textures easier, depending on how you are planning to implement them. In my opinion this is, next to the run time, the biggest advantage of the greedy method.

## Monotone Meshing

This algorithm is based on the fact that monotone polygons can be converted into triangles very efficiently. Converting the monotone polygons into triangles is fairly easy to implement – but make no mistake, it will still take a moment! The general concept is nicely explained here. The tricky part is splitting an area of voxels into monotone polygons.

This approach, as suggested by Mikola Lysenko, loops column-wise over our two-dimensional bit array and adds a “voxel slice chain” as a monotone polygon for every “one” bit found.

In the following image the found starting bits are highlighted with a black frame and the found monotone polygons are colored differently. The decomposition part is slightly modified from the original to allow for movement between the rows.

One can see that the T-Junction problem also appears with this algorithm. We can distinguish between problem edges within one monotone polygon (e.g. the edge between triangles six and four) and problem edges between monotone polygons (e.g. the edge between two and fifteen).

Side note: Considering the following example, it is clear that there are certain scenarios where Monotone Meshing will surpass the greedy method. In general, however, Greedy Meshing seems to produce fewer triangles (reference).

To find the optimal triangulation for Monotone Meshing you need to run the algorithm four times, once over each rotation of the grid. Otherwise the optimal solution might not be found, as the following example demonstrates.
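The rotation step itself is tiny. A hedged Python sketch (assuming the mesher works on a 2D list-of-lists grid; mapping the resulting triangles back to the original orientation is left out):

```python
def four_rotations(grid):
    """Yield the grid in all four 90-degree rotations.

    Run the mesher on each rotation and keep the variant with the
    lowest triangle count.
    """
    g = grid
    for _ in range(4):
        yield g
        g = [list(row) for row in zip(*g[::-1])]  # rotate 90 degrees clockwise
```

Usage: `best = min((mesh(g) for g in four_rotations(grid)), key=len)`, where `mesh` is whichever meshing routine you use.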

The question is now: can we modify the algorithm to produce a problem-free triangulation? The answer is yes and no. The algorithm that triangulates the monotone polygons can be modified to efficiently produce a “safe” triangulation. This can be integrated very tightly without major performance loss.

The basic idea is to “remember” points that were not used for triangulation because their angle was exactly zero, until a triangle is added that can be split using such a point. In one case (when the new point lies between the previous two points) it is also necessary to backtrack and split a recently added triangle. In most cases only very few triangles need to be checked for splitting.

We notice that the edges three-four and eight-nine-ten are fixed now. However the edges between the monotone polygons are still causing problems.

Now the algorithm that detects the monotone polygon outlines needs to be modified to find “edges” between the polygons. This can also be done very efficiently (it’s a bit tricky though).

The idea is to do a “look ahead” and a “look back” for every slice that is processed, find any edges from other monotone polygons, and add appropriate points. These points automatically force the already modified triangulation algorithm to add triangles. Only the differences between the previous (or next) and current top positions, and between the previous (or next) and current bottom positions, need to be checked for edges.

Finally, no more problem edges are present. This comes at a cost though: instead of eighteen triangles we now have twenty-six.

## Why Monotone Meshing is not Working

So, all seems great now. Write the algorithm, plug it into the engine, and no problem edges should be visible. Unless – you guessed it – there is something we didn’t consider. Notice the innocent-looking triangle no. twenty-six? Well, we’re in three dimensions, not two. Here is what the harsh reality looks like.

Now we could start splitting the face that lies “below” to fix the triangles in it. But that would mean that a change to one surface requires all adjacent faces to be updated as well. And that is really not a situation you want to find yourself in.

The underlying problem is that Monotone Meshing creates vertices that are not necessary for the triangulation. For the same reason, the greedy method cannot be fixed by simply splitting triangles.

At this point I abandoned the Monotone Meshing approach. For two-dimensional surfaces it does work – but let’s face it, that’s not why you are here. So we have failed. At a very high level, but still failed.

So, all is lost? Fortunately no.

Continue to second part.
