Problems. Just for fun. Nothing to win!

# | Open question | Comment
1 | Dynamic functions | What happens if the solution point moves over time? Can PSO "track" it quickly enough? Advance: Carlisle and Dozier's work.
2 | El Farol | Population members must coordinate their problem solutions to find a population-level optimum. See adaptive versions.
3 | Distributed human problem solving | People at computers on a network, given p_i (best previous position) and p_g (best previous position in the neighbourhood) as feedback: how do they approach problem solving? How well do they do at it?
4 | Online neural nets | The population members have to explore the search space, collecting data about it (for instance, where obstacles are), as well as collectively optimizing a set of weights to describe it.
5 | PSO for PSO | Technically, it is possible to use PSO to "optimize" PSO parameters for a given set of problems. But is it worthwhile? Advance: recursive PSO.
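To make the idea concrete, here is a minimal sketch of the "meta" objective such a recursive scheme could optimize. The helper run_pso and the choice of (w, c) as the tuned parameters are illustrative assumptions, not a prescribed method:

```python
import numpy as np

# Assumed helper (not defined here): run_pso(w, c) returns the final
# error of one inner PSO run with inertia weight w and acceleration c.
def meta_fitness(params, run_pso, repeats=10):
    """Objective for the outer "meta" PSO: median final error of
    several inner runs with the candidate (w, c). A sketch only."""
    w, c = params
    return float(np.median([run_pso(w, c) for _ in range(repeats)]))
```

The outer optimizer can itself be a PSO searching the (w, c) plane, hence "recursive PSO"; whether the considerable extra cost pays off is exactly the open question.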
6 | Metamodel | Cultural algorithms are special PSO cases. It may also be possible to describe a meta-model including PSO, GAs (Genetic Algorithms), ACO (Ant Colony Optimisation), etc.
7 | Swarm intelligence | It can be proved that the ACO algorithm is formally equivalent to a sequential one with just one ant at a time. Is the same true for PSO, or is there real "swarm intelligence"? (See also Question 2.)
8 | Generalized PSO (1) | In classical PSO, a particle considers only the best neighbour. Is it also interesting to consider other neighbours, for example with smaller "weights"? Advance: Kennedy and Mendes's work on FIPS (Fully Informed Particle Swarm).
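A minimal sketch of a fully informed velocity update in the spirit of FIPS; the function name and the default chi and phi values are illustrative, not Kennedy and Mendes's exact formulation:

```python
import numpy as np

def fips_velocity(v, x, informants, chi=0.7298, phi=4.1):
    """One FIPS-style velocity update for a single particle (sketch).

    v, x       -- current velocity and position (1-D arrays)
    informants -- previous best positions of all informants
    chi, phi   -- usual constriction values; illustrative defaults
    """
    k = len(informants)
    attraction = np.zeros_like(x)
    for p_k in informants:
        # Every informant pulls the particle, with its own random weight.
        attraction += np.random.uniform(0, phi / k, size=x.shape) * (p_k - x)
    return chi * (v + attraction)
```

With exactly two informants (the particle's own best and its best neighbour), this essentially reduces to the classical constricted update, which is one way to see FIPS as a generalization.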
9 | Generalized PSO (2) | In classical PSO, a particle remembers only its best previous position. Is it interesting to remember (and to use) more than one previous position? Advance: TRIBES.
10 | Mathematical explanation | The current mathematical "explanation" (moves in a 5D space) does not take interactions into account. If possible (see Question 7), a better one would be interesting.
11 | Strategies | There is no special reason for all particles to use the same search strategy. For example, it may be better for a "good" particle to follow its own way more than for a "bad" one. What could be the best set of possible strategies, and the best set of meta-rules to choose among them? Advance: TRIBES.
12 | Proximity areas | When you choose a point at random with a formula like rand(0, phi) * (p_i(d) - x(d)), you implicitly use a hyperparallelepiped with a uniform distribution inside it. Wouldn't another distribution be better? Hyperspherical? Non-uniform? Gaussian? Advance: Gaussian PSO, many hybrid PSOs, TRIBES.
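A sketch contrasting the two samplings; both functions return the displacement term to be added to the velocity, the function names are mine, and phi's value is illustrative:

```python
import numpy as np

def sample_box(x, p, phi=1.5):
    """Classical term rand(0, phi) * (p - x), drawn independently per
    dimension: a uniform point in an axis-aligned hyperparallelepiped."""
    return np.random.uniform(0, phi, size=x.shape) * (p - x)

def sample_ball(x, p, phi=1.5):
    """Alternative: a uniform point in the hypersphere centred on x with
    radius phi * ||p - x|| (rotation invariant, unlike the box)."""
    d = x.size
    direction = np.random.normal(size=d)
    direction /= np.linalg.norm(direction)
    # U ** (1/d) makes the point uniform in volume, not just in radius.
    radius = phi * np.linalg.norm(p - x) * np.random.uniform() ** (1.0 / d)
    return radius * direction
```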
13 | Information and "guided" randomness | The move (and also the particle generation in the adaptive version) uses randomness. Is it possible to take advantage of information theory to "guide" this randomness towards promising areas? Advance: SunnySpell.
14 | Pseudo-randomness | In most languages, the rand() function is quite bad. For example, in ANSI C it just picks an integer between 0 and 32767. What happens if you use a better pseudo-random number generator? Variant: what happens when using chaos? Conversely, what happens when reducing the number of possible pseudo-random values?
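One way to experiment is to plug interchangeable generators into the same PSO code; the sketch below includes a deliberately coarse one (the function names and the granularity are illustrative):

```python
import random

def fine_rng():
    """A good generator: Python's built-in Mersenne Twister."""
    return random.random()

def coarse_rng(levels=16):
    """A deliberately impoverished generator: only `levels` distinct
    values in [0, 1), mimicking a low-resolution rand()."""
    return random.randrange(levels) / levels
```

Running the same PSO with fine_rng and with coarse_rng at various levels gives a direct test of how much random resolution the algorithm actually needs.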
15 | Function classes and performance maps | For a given problem, if you systematically try a lot of different parameter sets (including the swarm size), you obtain performance maps. With some apparently quite different problems, say Sphere and Rosenbrock, the structures of these maps are very similar. It would be interesting to understand why. Advance: PSO-equivalence.
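The experiment can be sketched as a simple grid scan; run_pso is an assumed helper (any PSO implementation returning a final error will do):

```python
import numpy as np

# Assumed helper (not defined here): run_pso(w, c, swarm_size) returns
# the final error of one PSO run with those parameters.
def performance_map(run_pso, ws, cs, swarm_size=20, repeats=10):
    """Median final error over a grid of (w, c) pairs (sketch)."""
    grid = np.empty((len(ws), len(cs)))
    for i, w in enumerate(ws):
        for j, c in enumerate(cs):
            grid[i, j] = np.median([run_pso(w, c, swarm_size)
                                    for _ in range(repeats)])
    return grid  # compare the grids obtained for Sphere and Rosenbrock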
16 | Adaptive weighted information links | Usually each particle trusts all its neighbours equally and chooses the best one. What happens if the confidence coefficients are modified during the process (more or less as in neural networks)?
17 | A straight line? | Standard PSO, Sphere function. During the process, plot ln(error) versus the number of fitness evaluations. The best polynomial regression curve is a linear one. Prove it, and find the coefficients.
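A sketch of the experiment, using a minimal global-best PSO with the usual SPSO 2007 coefficients w = 1/(2 ln 2) and c = 1/2 + ln 2; all other implementation details are illustrative:

```python
import numpy as np

def sphere(x):
    return float(np.dot(x, x))

def pso_log_error(dim=10, swarm=20, iters=300, w=0.721, c=1.193):
    """Minimal global-best PSO on Sphere; returns the ln(error) history
    (one value per iteration). A sketch, not a tuned standard."""
    rng = np.random.default_rng(1)
    x = rng.uniform(-100, 100, (swarm, dim))
    v = np.zeros_like(x)
    p = x.copy()
    pf = np.array([sphere(xi) for xi in x])
    log_err = []
    for _ in range(iters):
        g = p[pf.argmin()]
        v = (w * v + rng.uniform(0, c, x.shape) * (p - x)
                   + rng.uniform(0, c, x.shape) * (g - x))
        x = x + v
        f = np.array([sphere(xi) for xi in x])
        better = f < pf
        p[better], pf[better] = x[better], f[better]
        log_err.append(np.log(pf.min()))
    return np.array(log_err)

errs = pso_log_error()
slope, intercept = np.polyfit(np.arange(errs.size), errs, 1)
print(slope)  # a near-constant slope supports the linearity claim
```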
18 | Bell curve | Standard PSO, dimension 1. For a given particle, keep p and g constant. Plot the histogram of the successive positions x(t). It is a Gaussian-like (but not Gaussian) curve. Prove it.
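The experiment itself is only a few lines; the parameter and attractor values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
w, c = 0.721, 1.193       # usual coefficients; illustrative here
p, g = 0.3, -0.5          # attractors kept constant (dimension 1)
x, v = 0.0, 0.0
positions = []
for _ in range(100_000):
    v = w * v + rng.uniform(0, c) * (p - x) + rng.uniform(0, c) * (g - x)
    x = x + v
    positions.append(x)
hist, edges = np.histogram(positions, bins=100)
# Bell-shaped, yet measurably non-Gaussian (compare e.g. its kurtosis
# with that of a normal distribution).
```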
19 | NFLT? | It seems that PSO is a kind of algorithm for which the No Free Lunch Theorem does not hold (maybe because of the velocity). Is it true or not?
20 | Clamping or not? | When a particle tends to leave the search space, you may either clamp it on the bound (or inside the search space) or do nothing special (and not evaluate the position). In the latter case, it usually comes back. Two questions: 1) Sometimes it apparently never comes back. Why? 2) What is the most effective method on a given benchmark set, say the one of CEC 2005? 2008: several works show that clamping is definitely better.
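Both policies in one sketch; the names "clamp" and "fly" and the zeroed-velocity detail are my illustrative choices (several clamping variants exist):

```python
import numpy as np

def step_with_boundary(x, v, lo, hi, mode="clamp"):
    """Two common boundary policies (sketch).

    'clamp' -- project onto the bound and zero the offending velocity
               component (one of several clamping variants).
    'fly'   -- leave the particle outside and skip the evaluation;
               it usually comes back on its own.
    """
    x_new = x + v
    if mode == "clamp":
        outside = (x_new < lo) | (x_new > hi)
        v = np.where(outside, 0.0, v)
        x_new = np.clip(x_new, lo, hi)
        evaluate = True
    else:
        evaluate = bool(np.all((x_new >= lo) & (x_new <= hi)))
    return x_new, v, evaluate
```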
21 | Discrete/binary (non-combinatorial) | There are a lot of very different binary PSOs (or, more generally, discrete PSOs). Is it possible to define a "canonical" one, based on a mathematical analysis?
22 | Combinatorial PSO | There are a lot of very different combinatorial PSOs. Is it possible to define a "canonical" one, based on a mathematical analysis?
23 | Initialization | It has been known for years that the classical "uniform" initialization is not very good. Better (more regular) ones do exist. What could be the "best" one?
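One example of a "more regular" initialization, sketched below with Latin hypercube sampling (an illustrative choice, not the known best):

```python
import numpy as np

def latin_hypercube(n, d, lo, hi, rng=np.random.default_rng()):
    """A 'more regular' initialization: Latin hypercube sampling of n
    particles in [lo, hi]^d (one illustrative option among many)."""
    strata = np.tile(np.arange(n), (d, 1))   # one stratum per particle
    u = (rng.permuted(strata, axis=1).T + rng.uniform(size=(n, d))) / n
    return lo + u * (hi - lo)
```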
24 | Strategies | In Standard PSO, all particles have the same behaviour. Some attempts seem to show that using at least two kinds of particles improves the algorithm, for example "normal" particles and "scout" particles. But there are a lot of other possible methods.
25 | A "standard" unbiased PSO | In 2010, W. Spears et al. mathematically explained (Biases in Particle Swarm Optimization, International Journal of Swarm Intelligence Research) why the most common PSO versions are rotationally variant, and why they find the solution more easily when it lies on a coordinate axis or on a diagonal. PSO versions without this drawback have existed for years, but they are either quite complicated or very different from the standard one, or both. So the challenge is to design a PSO version that is both "near to" the current Standard PSO 2007 and at least equivalent in terms of performance on classical benchmarks (CEC 2005, for example), in order to suggest a possible new standard. 2011: SPSO 2011, which makes use of hyperspheres, seems to be a possible solution.
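A sketch of the hypersphere-based move as I read the SPSO 2011 description; edge cases (e.g. a particle being its own best informant) are omitted, so consult the official SPSO 2011 code for the exact rules:

```python
import numpy as np

def spso2011_move(x, v, p, l, w=0.721, c=1.193, rng=np.random.default_rng()):
    """Hypersphere-based move in the spirit of SPSO 2011 (sketch).

    x, v -- position, velocity; p -- own best; l -- best informant.
    """
    g_centre = x + c * ((p - x) + (l - x)) / 3.0   # centre of gravity
    radius = np.linalg.norm(g_centre - x)
    direction = rng.normal(size=x.size)
    direction /= np.linalg.norm(direction)
    # Uniform point in the ball H(G, ||G - x||): rotation invariant.
    x_prime = g_centre + radius * rng.uniform() ** (1.0 / x.size) * direction
    v_new = w * v + (x_prime - x)
    return x + v_new, v_new
```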
26 | Random Number Generator | For a given problem and a given algorithm, the results can be significantly different when using different RNGs. Moreover, for a given problem, two algorithms A and B, and two RNGs, RNG1 and RNG2, we may have the following situation: with RNG1, A is better than B; with RNG2, B is better than A. How to compare A and B? Are the published comparisons reliable?
27 | List-based optimisers | Some experiments show that there is no relationship between the "quality" of the RNG and the performance of a given algorithm. Moreover, the RNG can be replaced by a short list of predefined numbers, used cyclically; the result is sometimes even better. Is there a non-empirical way to define such "good" lists?
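A drop-in sketch of such a list-based "generator"; the values are arbitrary placeholders, not a known good list:

```python
import itertools

class CyclicList:
    """Replaces the RNG by a short predefined list, used cyclically."""
    def __init__(self, values):
        self._cycle = itertools.cycle(values)
    def random(self):
        return next(self._cycle)

gen = CyclicList([0.13, 0.87, 0.55, 0.21, 0.68, 0.34, 0.92, 0.41])
r = gen.random()  # drop-in replacement wherever PSO calls rand(0, 1)
```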
28 | Quantum PSO | On the one hand, there are several fake, so-called Quantum PSOs on the market, which do not use qubits, unitary matrices (quantum gates), or entanglement. On the other hand, it seems possible to define a real Quantum PSO, for example based on Grover's algorithm. But how to do it, exactly?