>False. There is no theoretical or comprehensive empirical study I've ever seen to suggest this is the case.
It is absolutely the case. What is not established is whether or not crossover will actually help. My main point was that mutation is not there to escape local optima, although I should have been clearer about the efficacy of crossover for that purpose.
>Also false. I'll especially take note that this is not the case if you're using a less-than-naive EA.
Yes, for specific classes of problems, you can converge on a global optimum. In the general case, there is no way to guarantee that you will reach a global optimum.
>You start with a bunch of random bit strings. IF you have 1s in all indexes somewhere in the population, it's possible (though unlikely) that the algorithm will find the optimal answer.
Yes, that's why I said it basically won't optimize at all. Random guess-and-check might arrive at the optimal answer too, but that is only "optimizing" in a very pedantic sense.
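To make that concrete, here's a toy sketch (my own example, not from the original discussion) of crossover-only evolution on bit strings. Crossover can only recombine alleles that already exist somewhere in the population, so if every individual happens to have a 0 at some index, the all-1s optimum is permanently unreachable without mutation:

```python
import random

random.seed(0)

LENGTH = 8
POP_SIZE = 4

# Force index 3 to be 0 in every individual, simulating the unlucky case
# where the initial random population never sampled a 1 there.
population = [
    [random.randint(0, 1) if i != 3 else 0 for i in range(LENGTH)]
    for _ in range(POP_SIZE)
]

def crossover(a, b):
    """One-point crossover: child takes a prefix of a and a suffix of b."""
    point = random.randint(1, LENGTH - 1)
    return a[:point] + b[point:]

# Many generations of crossover-only reproduction (no mutation).
for _ in range(1000):
    a, b = random.sample(population, 2)
    child = crossover(a, b)
    # Replace a random individual; selection pressure doesn't matter for
    # this point, since no operator can ever create a 1 at index 3.
    population[random.randrange(POP_SIZE)] = child

# Index 3 is still 0 everywhere: the all-1s optimum is unreachable.
print(all(ind[3] == 0 for ind in population))  # True
```

Crossover preserves alleles position-by-position, so the missing 1 can never appear, no matter how long you run it.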
> My main point was that mutation is not there to escape local optima, although I should have been clearer about the efficacy of crossover for that purpose.
Mutation is absolutely there for the primary purpose of escaping local optima. If mutation's job were only to climb the gradient, then you would just use a much more efficient gradient-ascent method. By having random mutation, you are effectively saying you want to stay in the known-good region most of the time, but occasionally explore a new area even if it goes against the perceived gradient.
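A minimal sketch of that idea (my own illustration, with a made-up bimodal fitness function): a (1+1) EA started on a local peak, where pure gradient ascent would stall. Gaussian mutation usually takes small local steps, but occasionally jumps far enough, against the local gradient, to land in the basin of the higher peak:

```python
import math
import random

random.seed(1)

# Bimodal fitness: local peak near x = -1 (height ~1),
# global peak near x = 2 (height ~2).
def fitness(x):
    return math.exp(-(x + 1) ** 2) + 2.0 * math.exp(-(x - 2) ** 2)

x = -1.0            # start on the local peak
best = fitness(x)   # gradient ascent would be stuck here

for _ in range(10000):
    # Gaussian mutation: mostly small local steps, occasionally a large
    # jump into a new region of the search space.
    candidate = x + random.gauss(0, 1.0)
    f = fitness(candidate)
    if f >= best:   # elitist: otherwise stay in the known-good region
        x, best = candidate, f

# The local peak tops out around fitness 1.0, so exceeding that means
# mutation carried the search into the global peak's basin.
print(best > 1.5)  # True
```

The key behavior is that a jump of two-plus standard deviations is rare on any single step, but over many generations it happens with near certainty, which is exactly the exploration-versus-exploitation trade-off described above.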