Meta-heuristics, or meta-heuristic algorithms, are a family of search algorithms (evolutionary algorithms among them) that fall under "Soft Computing", and soft computing itself is a form of AI. But what are these algorithms?

One example of these algorithms is the Genetic Algorithm, which I've written about previously. Basically these algorithms are designed to search a space, and the goal is usually an optimum answer, typically expressed as a minimum or maximum value of some function.

So what's the point? Assume you have a function of 20 variables, and you're trying to find the point in its 20-dimensional input space where the value of the function is maximized or minimized. By a point I mean a set of 20 input values for the function f such that the value of f there is greater (or smaller) than at every other point. In other words, again, maximizing or minimizing a function. If your search space is really big, and in this case and most cases it is practically infinite, a classical approach like brute-force search is not going to be the answer. So these "partial search" algorithms are going to be the answer to finding solutions much more quickly.
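To get a feel for the numbers, here's a small Python sketch. The sphere function stands in for an arbitrary 20-variable objective (my own illustrative choice): even a coarse grid of just 10 values per dimension already contains 10^20 candidate points, while a random "partial" search finds a reasonable point after only a few thousand evaluations.

```python
import random

# Toy 20-dimensional objective: the sphere function, minimized at the origin.
# (A stand-in for any expensive black-box function of 20 variables.)
def f(x):
    return sum(v * v for v in x)

DIM = 20

# Even a coarse grid of 10 values per dimension gives 10**20 points,
# far too many to enumerate exhaustively.
grid_points = 10 ** DIM
print(grid_points)  # 100000000000000000000

# Random ("partial") search: sample points and keep the best one seen.
random.seed(0)
best_x, best_val = None, float("inf")
for _ in range(10_000):
    x = [random.uniform(-5, 5) for _ in range(DIM)]
    val = f(x)
    if val < best_val:
        best_x, best_val = x, val

# After only 10,000 evaluations we're already far better than a corner point.
print(best_val < f([5.0] * DIM))  # True
```

This is the dumbest possible partial search; meta-heuristics improve on it by learning from the points already evaluated.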

The main thing these searches do is try different random points and learn from them, repeating a simple loop a number of times: generate a new point or points, evaluate it, learn from it, and generate new points based on what was learned. There's a principle each of these algorithms usually has to follow in order to achieve the best possible results: in the first few iterations the algorithm has to Explore the space, choosing widely distributed random points, then gradually shift on every iteration until, towards the end, it Exploits, choosing points closer and closer to the best ones found so far.
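The explore-then-exploit idea can be sketched with a single shrinking-step random search. Everything here (the sphere objective, the starting step size, the 0.995 decay rate) is my own illustrative choice, not taken from any particular algorithm:

```python
import random

def f(x):
    return sum(v * v for v in x)  # toy objective, minimum 0 at the origin

random.seed(1)
DIM = 5
x = [random.uniform(-10, 10) for _ in range(DIM)]
f0 = best = f(x)

# Explore first, exploit later: the step size starts large and shrinks on
# every iteration, so early moves roam the space and late moves fine-tune.
step = 5.0
for _ in range(2000):
    candidate = [v + random.uniform(-step, step) for v in x]
    val = f(candidate)
    if val < best:            # keep only improvements (greedy acceptance)
        x, best = candidate, val
    step *= 0.995             # gradually narrow the search

print(best < f0)  # True: the search improved on its random starting point
```

Real meta-heuristics manage this explore/exploit balance more cleverly (temperature schedules in SA, inertia weights in PSO, mutation rates in GA), but the shrinking step captures the principle.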

I could point to algorithms other than GA: Particle Swarm Optimization, which is inspired by a flock of birds searching for food; Simulated Annealing, which is inspired by a heat-treatment process that alters the physical properties of a material, usually glass or metal; and Ant Colony Optimization, which is inspired by the way ants find short paths.
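As an illustration, here is a minimal PSO in Python, minimizing a toy sphere function. The swarm size, inertia weight W, and coefficients C1/C2 are common textbook defaults I've assumed, not values from any specific implementation:

```python
import random

def f(x):
    return sum(v * v for v in x)  # objective to minimize (sphere stand-in)

random.seed(2)
DIM, SWARM, ITERS = 5, 30, 200
W, C1, C2 = 0.7, 1.5, 1.5  # inertia, cognitive and social coefficients

pos = [[random.uniform(-10, 10) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]           # each particle's own best position
gbest = min(pbest, key=f)[:]          # best position found by the whole swarm

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # Velocity blends momentum, pull toward the particle's own best,
            # and pull toward the swarm's best -- the "flock" behavior.
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
            if f(pbest[i]) < f(gbest):
                gbest = pbest[i][:]

print(f(gbest))  # converges close to 0 on this toy problem
```

Note how the three velocity terms map onto explore/exploit: inertia keeps particles roaming early on, while the pulls toward pbest and gbest concentrate the swarm as good regions are found.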

For specific problems of a certain size, for instance the N-Queens problem with N of 4 and above, most of these algorithms usually find a solution in a reasonable time, but get stuck once the size exceeds some limit. Finding that limit for each problem requires actually implementing the algorithm. Usually, in order to decrease programming time, increase productivity, and help with analyzing an algorithm's performance, MATLAB comes in handy.
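As a sketch of what such an implementation looks like, here is N-Queens solved by hill climbing with random restarts in Python; it's a simpler relative of the meta-heuristics above, and the conflict-counting fitness function and restart limit are my own choices:

```python
import random

def conflicts(board):
    """Count attacking queen pairs; board[c] = row of the queen in column c.
    A board with 0 conflicts is a valid solution."""
    n = len(board)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if board[i] == board[j] or abs(board[i] - board[j]) == j - i)

def solve_n_queens(n, max_restarts=500):
    """Steepest-descent hill climbing with random restarts."""
    for _ in range(max_restarts):
        board = [random.randrange(n) for _ in range(n)]
        while True:
            current = conflicts(board)
            if current == 0:
                return board
            # Find the single-queen move that reduces conflicts the most.
            best_move, best_score = None, current
            for col in range(n):
                original = board[col]
                for row in range(n):
                    if row == original:
                        continue
                    board[col] = row
                    score = conflicts(board)
                    if score < best_score:
                        best_move, best_score = (col, row), score
                board[col] = original
            if best_move is None:   # stuck in a local optimum: restart
                break
            board[best_move[0]] = best_move[1]
    return None

random.seed(4)
solution = solve_n_queens(8)
print(solution is not None and conflicts(solution) == 0)  # True
```

For N=8 this finds a solution almost instantly; pushing N much higher is exactly where you start hitting the size limits mentioned above and where smarter meta-heuristics pay off.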

I'm currently working on solving Rubik's Cube using meta-heuristics, and I'm closing in on results. So far GA and PSO have performed almost alike, with PSO being a bit faster; both solve problem sets with a maximum size of 5. But 5 is a small number, and I hope I can get better results using SA and ACO, or maybe even by combining these methods.

Stay tuned for more!