Hacker News

When I was in college I was mystified by genetic algorithms, without knowing much about them. After taking two courses on the subject and reading some books, I came to the conclusion that apart from being inherently inefficient (which is why you apply them only when you have no alternative), they are actually outperformed by hill climbing (which can be seen as a particular case of a GA with population = 1). Also, the crossover operator seems to do more harm than good, and its usefulness in nature is not fully understood, although there are some theories (this last point is taken from Pedro Domingos' book).
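To make the "population = 1" reading concrete, here's a minimal stochastic hill climber on the toy OneMax problem (the function names and parameters are illustrative, not from the comment): mutation is the only operator, and the single individual is replaced whenever a mutant is at least as fit.

```python
import random

def onemax(bits):
    # Toy fitness function: count the 1-bits (maximum = len(bits)).
    return sum(bits)

def hill_climb(n=50, iters=2000, seed=0):
    # A GA with population = 1: flip one bit, keep the child if it is
    # at least as fit, and never use crossover.
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(iters):
        candidate = current[:]
        candidate[rng.randrange(n)] ^= 1  # flip one random bit
        if onemax(candidate) >= onemax(current):
            current = candidate
    return current

best = hill_climb()
print(onemax(best))  # on this easy unimodal landscape it reaches the optimum, 50
```

On a unimodal landscape like this, the climber wins easily; the interesting question is how each method degrades as the landscape gets more rugged.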


Interesting. I want to ask this in a way that is respectful, because I'm not grandstanding: coming to that conclusion after two undergraduate courses, starting from mystification, did it not occur to you that the gap might be in your understanding, rather than in the whole field?

My personal story: I worked for just under a decade on EAs (PhD and professionally, late 90s and early 00s). I don't entirely disagree with you about pure GAs. But even back then, pure GAs were rarely used. But I do disagree about evolutionary algorithms in general.

The usefulness of EAs is highly problem dependent. And I will definitely concede that the space of problems in which they excel and are tractable is smaller than the space of problems for something like a neural network, in my experience.

As for crossover, when it works (and in engineering contexts the operator needs to be designed with the problem in mind) it does so by allowing individuals at different points on the landscape to share the optimisations they find. At the cost of increased genetic load. This, of course, is only useful if the fitness landscape is both heavily multimodal and self-similar. And it requires some effort to avoid the population converging around a single solution (where you do have a stochastic hill climb).
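A toy illustration of that sharing (the setup is mine, not the parent's): two parents that have each optimised a different half of a bitstring genome, recombined with one-point crossover at the boundary between the halves.

```python
# One-point crossover: each child takes a prefix from one parent and a
# suffix from the other.
p1 = [1] * 5 + [0] * 5   # has found the good left half
p2 = [0] * 5 + [1] * 5   # has found the good right half

cut = 5                  # cut placed at the boundary for illustration
c1 = p1[:cut] + p2[cut:]
c2 = p2[:cut] + p1[cut:]

print(c1)  # all ones: both parents' discoveries combined in one child
print(c2)  # all zeros: the sibling carries the genetic load
```

In a real GA the cut point is random, so this only pays off when the landscape really is modular in roughly this way, which is the multimodal/self-similar caveat above.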

In my experience, EAs come into their own with non-fixed genotypes or complex genotype-to-phenotype expression. Multivariate optimisation with a fixed number of variables is probably best done with other tools.


> And I will definitely concede that the space of problems in which they excel and are tractable is smaller than the space of problems for something like a neural network, in my experience.

The only thing that EAs and neural networks have in common is the buzzword abuse. Neural networks are actually useful as they are at their core an efficient way to approximate multivariate functions, while EAs are computationally expensive heuristics abused by being passed off as optimization algorithms even in domains where deterministic algorithms (and even brute force approaches) actually perform better.


> they are actually outperformed by hill climbing

That depends on the particular problem you are trying to solve. If hill climbing works for your problem, then you are lucky and you should not use GAs.


I've also had the experience that hill climbing (with random restarts) can outperform GAs. In many problems you can fairly easily define a neighborhood concept for moves from the current solution. That tends to be easier than getting a meaningful crossover and mutation working. Then you can just hill climb by making neighborhood moves, with random restarts. Run 1000 random restarts, pick the best one, and you're going to have a really good solution.
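A sketch of that recipe (the objective and constants are invented for illustration): define a neighborhood, climb until no neighbor improves, and keep the best result over many random restarts.

```python
import math
import random

def f(x):
    # Toy multimodal objective over the integers 0..99: a sinusoid
    # plus a tent function, giving several local peaks.
    return math.sin(x / 5.0) * 10 + (50 - abs(x - 70)) * 0.3

def neighbors(x):
    # Neighborhood concept: move one step left or right.
    return [n for n in (x - 1, x + 1) if 0 <= n <= 99]

def hill_climb(start):
    x = start
    while True:
        best = max(neighbors(x), key=f)
        if f(best) <= f(x):
            return x       # local optimum: no neighbor improves
        x = best

def random_restarts(k=1000, seed=0):
    rng = random.Random(seed)
    return max((hill_climb(rng.randrange(100)) for _ in range(k)), key=f)

best = random_restarts()
print(best, f(best))  # with this many restarts over so small a domain,
                      # some restart lands in the global optimum's basin
```

The whole thing is a few lines because only the neighborhood is problem-specific; there is no crossover or mutation operator to design.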


Genetic algorithms aren't useless; evolutionary algorithms are. The crossover method is terrible at searching the space in any kind of dynamic way. But things like PEPG and covariance matrix adaptation (CMA-ES) are also genetic algorithms, and they work significantly better because they have essentially dynamic learning rates: they are to plain genetic algorithms what Adam is to vanilla SGD.
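CMA-ES itself is too involved for a snippet, but a (1+1) evolution strategy with the classic 1/5 success rule shows the dynamic-step-size idea in miniature (the toy objective and constants here are illustrative choices, not anyone's production settings):

```python
import random

def sphere(x):
    # Toy objective to minimise; optimum at the origin.
    return sum(v * v for v in x)

def one_plus_one_es(dim=10, iters=2000, seed=0):
    # (1+1)-ES with a multiplicative 1/5 success rule: grow the mutation
    # step size sigma on success, shrink it on failure, so the "learning
    # rate" adapts to the local landscape. CMA-ES goes further and also
    # adapts a full covariance matrix for the mutation distribution.
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    sigma = 1.0
    for _ in range(iters):
        child = [v + sigma * rng.gauss(0, 1) for v in x]
        if sphere(child) <= sphere(x):
            x = child
            sigma *= 1.1       # success: take bolder steps
        else:
            sigma *= 0.976     # failure: be more cautious (1.1 ** -0.25)
    return x

x = one_plus_one_es()
print(sphere(x))  # a tiny value: sigma shrinks as the optimum is approached
```

The 1.1 / 0.976 pair makes sigma stationary at roughly a 1/5 success rate, which is the adaptive behaviour that fixed-rate mutation in a plain GA lacks.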



