The purpose of stochastic optimizers is to find a solution that is "good enough", and by design their populations converge over time. If you want to prevent that behavior, you will need to modify the algorithm for subsequent runs. I vaguely recall some work on placing repulsors at the solutions of previous runs so the same answer is not found again.

Also be aware that most of the algorithms you mentioned are sensitive to starting positions, so you may want a less random, more evenly distributed set of starting points. Something like a Sobol sequence generator could be useful for a "random" but even distribution of starting positions.

Lastly, keep in mind that stochastic optimizers are not a one-size-fits-all solution; depending on your solution space, many straight numerical optimization techniques may be a better choice.
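To make the starting-position suggestion concrete, here is a minimal pure-Python sketch using a Halton sequence, a low-discrepancy sequence in the same family as Sobol (the function names and the 1-40 range for six variables are taken from the post below; everything else is illustrative):

```python
def halton(index, base):
    """Radical inverse of `index` in `base` -- one Halton coordinate in [0, 1)."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

PRIMES = [2, 3, 5, 7, 11, 13]  # one coprime base per decision variable

def halton_starts(n, low=1.0, high=40.0):
    """n evenly spread starting points for six variables in [low, high]."""
    return [[low + (high - low) * halton(i, b) for b in PRIMES]
            for i in range(1, n + 1)]

starts = halton_starts(32)  # 32 well-spread initial candidates
```

Unlike uniform random draws, consecutive Halton points deliberately avoid each other, so even a small initial population covers the box fairly evenly.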
-Travis

> Date: Thu, 28 Jun 2012 08:19:31 -0400
> From: [log in to unmask]
> Subject: Brainstorming: Some kind of Anti-Swarm behavior
> To: [log in to unmask]
> 
> Hello,
> 
> This post is not directly related to ECJ, but if there exists a solution in
> ECJ, I would be glad. The background is that I have an optimization problem
> in which six variables (real valued, range: 1-40) must be tuned for
> minimizing a fitness function. From literature I know a good, general
> solution but I want to find a better one which is optimized for my input
> data. I know that I could start modifying this solution to find a better
> one, but that's not my intention. I want to find an algorithm which is able
> to find a good solution without this prior knowledge.
> 
> I have tried different algorithms (with different parameter combinations, of
> course) like genetic algorithm, differential evolution and particle swarm
> optimization. Although the algorithms infer good solutions, they do not come
> close to the one which I already know. I think that the major disadvantage
> of these population based solutions is that their candidates get more and
> more similar to each other. Usually, this is a good idea because it allows
> the algorithm to fine-tune the candidate solutions. However, the whole
> optimization process gets very sensitive to the initial distribution of
> candidate solutions in the first generation because after some runs the
> majority of candidate solutions starts to form clusters in search space like
> a swarm. In this example, I don't want this behavior. I mean, why should
> 1,000 individuals in each generation concentrate on a tiny subspace whereas
> the vast majority of the search-space is still undiscovered? Instead I want
> some kind of anti-swarm behavior where some individuals form only small
> groups but multiple groups are spread over the search-space. Then the
> estimated optimum would be the best solution found across all groups.
> 
> Do you have an idea how this could be accomplished? One of my ideas was to
> use some kind of grid of the search space where a specific number of
> individuals is put in every grid cell and no individual is allowed to move
> across grid cell borders. The disadvantage is that a large number of
> individuals would be necessary and if the grid cells get very small it would
> lead to an exhaustive search.
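
For what it's worth, the grid idea at the end of the quoted post can be sketched as a simple clamp: each individual is assigned a home cell at initialization, and any proposed move is clipped back inside that cell's bounds. This is a hypothetical pure-Python illustration (not ECJ code); the bounds and dimensionality come from the post, the cell count is arbitrary:

```python
import random

LOW, HIGH, DIM = 1.0, 40.0, 6
CELLS_PER_AXIS = 4                     # 4**6 = 4096 cells -- the cost the post warns about
CELL_W = (HIGH - LOW) / CELLS_PER_AXIS

def cell_of(point):
    """Grid cell containing `point`, as one index per axis."""
    return tuple(min(int((x - LOW) / CELL_W), CELLS_PER_AXIS - 1) for x in point)

def clamp_to_cell(point, cell):
    """Clip a proposed move so the individual stays inside its home cell."""
    out = []
    for x, c in zip(point, cell):
        lo = LOW + c * CELL_W
        hi = LOW + (c + 1) * CELL_W
        out.append(min(max(x, lo), hi - 1e-9))  # keep strictly inside the cell
    return out

# Usage: an individual born in one cell never leaves it.
ind = [random.uniform(LOW, HIGH) for _ in range(DIM)]
home = cell_of(ind)
moved = clamp_to_cell([x + random.uniform(-15.0, 15.0) for x in ind], home)
assert cell_of(moved) == home
```

The poster's objection stands: the number of cells grows exponentially with dimension, so a fine grid in six dimensions quickly degenerates into exhaustive search. Niching methods such as fitness sharing or crowding pursue the same "many small groups" goal without a fixed grid.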