Swarm intelligence
Swarm intelligence is the collective behavior of decentralized, self-organized systems, natural or artificial. The concept is employed in work on artificial intelligence. The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems.
Swarm intelligence systems consist typically of a population of simple agents or boids interacting locally with one another and with their environment. The inspiration often comes from nature, especially biological systems. The agents follow very simple rules, and although there is no centralized control structure dictating how individual agents should behave, local, and to a certain degree random, interactions between such agents lead to the emergence of "intelligent" global behavior, unknown to the individual agents. Examples of swarm intelligence in natural systems include ant colonies, bee colonies, bird flocking, hawks hunting, animal herding, bacterial growth, fish schooling and microbial intelligence.
The application of swarm principles to robots is called swarm robotics while swarm intelligence refers to the more general set of algorithms. Swarm prediction has been used in the context of forecasting problems. Similar approaches to those proposed for swarm robotics are considered for genetically modified organisms in synthetic collective intelligence.
Models of swarm behavior
Boids (Reynolds 1987)
Boids is an artificial life program, developed by Craig Reynolds in 1986, which simulates flocking. It was published in 1987 in the proceedings of the ACM SIGGRAPH conference. The name "boid" is a shortened version of "bird-oid object", i.e. a bird-like object.
As with most artificial life simulations, Boids is an example of emergent behavior; that is, the complexity of Boids arises from the interaction of individual agents adhering to a set of simple rules. The rules applied in the simplest Boids world are as follows (a sketch of one update step follows the list):
- separation: steer to avoid crowding local flockmates
- alignment: steer towards the average heading of local flockmates
- cohesion: steer to move toward the average position of local flockmates
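The following is a minimal sketch of one update step implementing these three rules; the perception radius, weighting factors, and function names are illustrative assumptions rather than Reynolds' original parameters:

```python
import numpy as np

def boids_step(pos, vel, radius=2.0, w_sep=0.05, w_ali=0.05, w_coh=0.01, dt=1.0):
    """One update of a simple Boids flock.

    pos, vel: (N, 2) arrays of positions and velocities.
    The radius and weights are illustrative tuning parameters.
    """
    new_vel = vel.copy()
    for i in range(len(pos)):
        # neighbours within the perception radius (excluding the boid itself)
        dist = np.linalg.norm(pos - pos[i], axis=1)
        mask = (dist > 0) & (dist < radius)
        if not mask.any():
            continue
        # separation: steer away from close flockmates
        separation = np.sum(pos[i] - pos[mask], axis=0)
        # alignment: steer towards the average heading of neighbours
        alignment = vel[mask].mean(axis=0) - vel[i]
        # cohesion: steer towards the average position of neighbours
        cohesion = pos[mask].mean(axis=0) - pos[i]
        new_vel[i] += w_sep * separation + w_ali * alignment + w_coh * cohesion
    return pos + new_vel * dt, new_vel
```

Iterating this step from random initial positions and velocities is enough for flock-like motion to emerge, even though no individual agent is aware of the flock as a whole.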
Self-propelled particles (Vicsek et al. 1995)
The self-propelled particles (SPP) model, also referred to as the Vicsek model, was introduced in 1995 by Vicsek et al. as a special case of the boids model introduced in 1986 by Reynolds. A swarm is modelled in SPP by a collection of particles that move with a constant speed but respond to a random perturbation by adopting at each time increment the average direction of motion of the other particles in their local neighbourhood. SPP models predict that swarming animals share certain properties at the group level, regardless of the type of animals in the swarm. Swarming systems give rise to emergent behaviours which occur at many different scales, some of which are turning out to be both universal and robust. It has become a challenge in theoretical physics to find minimal statistical models that capture these behaviours.
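A minimal sketch of the SPP update rule, assuming a two-dimensional box with periodic boundaries; the parameter values and names are illustrative only:

```python
import numpy as np

def vicsek_step(pos, theta, L=10.0, radius=1.0, speed=0.3, eta=0.2, rng=None):
    """One Vicsek-model update in a 2-D box of side L with periodic boundaries.

    pos: (N, 2) positions, theta: (N,) headings in radians.
    eta is the amplitude of the angular noise.
    """
    rng = rng or np.random.default_rng()
    new_theta = np.empty_like(theta)
    for i in range(len(pos)):
        # neighbours within the interaction radius (periodic distance)
        d = np.abs(pos - pos[i])
        d = np.minimum(d, L - d)
        mask = np.hypot(d[:, 0], d[:, 1]) < radius
        # average direction of motion of the local neighbourhood
        avg = np.arctan2(np.sin(theta[mask]).mean(), np.cos(theta[mask]).mean())
        # plus a random perturbation
        new_theta[i] = avg + eta * (rng.random() - 0.5)
    # every particle moves with the same constant speed
    step = speed * np.column_stack((np.cos(new_theta), np.sin(new_theta)))
    return (pos + step) % L, new_theta
```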
Social potential fields (Reif et al. 1999)
Social potential fields, developed in 1999 by John H. Reif and Hongyan Wang, is one of the earliest models of swarm intelligence, developed for the autonomous control of robot swarm systems that may consist of hundreds to perhaps tens of thousands or more autonomous robots. It was the first paper to apply a potential field model to distributed autonomous multi-robot control. A social potential field defines simple artificial force laws between pairs of robots or robot groups. These force laws are inverse-power force laws, incorporating both attraction and repulsion, similar to but more general than the force laws found in molecular dynamics. As one of the simplest examples, they define a force law where attraction dominates over long distances and repulsion dominates over short distances. The force laws can be distinct for different robots. An individual robot's motion is controlled by the resultant artificial force imposed on it by the other robots and other components of the system. The approach is distributed, since the force calculations and motion control can be done in an asynchronous and distributed manner. Using specially tailored force laws, they demonstrated complex behaviours and what might be viewed as "social relations" among robots; the model was therefore termed "social potential fields". They demonstrated by computer simulations that the method can yield interesting and useful behaviours among robots, including clustering, guarding, escorting and patrolling. The 1999 paper envisioned many industrial and military applications, such as assembly, transport, hazardous inspection, patrolling, and military control of swarm systems.
Their simulations showed that the social potential fields method is robust, in that it can tolerate errors in sensors and actuators. The paper also extended the model to use spring laws as force laws.
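As a rough illustration, the sketch below combines an inverse-power force law of the kind described above (repulsive at short range, attractive at long range) and sums the resultant artificial force on each robot; the constants, exponents, and function names are illustrative assumptions, not the values used by Reif and Wang:

```python
import numpy as np

def pairwise_force(r, c_rep=1.0, c_att=0.1, sigma_rep=3, sigma_att=1):
    """Scalar force at distance r: repulsion dominates at short range
    (larger exponent), attraction dominates at long range."""
    return -c_rep / r**sigma_rep + c_att / r**sigma_att

def resultant_forces(pos):
    """Sum the artificial pairwise forces acting on every robot.

    pos: (N, 2) robot positions; returns an (N, 2) array of force vectors.
    A positive pairwise value pulls a pair together, a negative one pushes it apart.
    """
    forces = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            diff = pos[j] - pos[i]
            r = np.linalg.norm(diff)
            # force on robot i acts along the line joining i and j
            forces[i] += pairwise_force(r) * diff / r
    return forces
```

Each robot would then take a small step along its resultant force; since every robot needs only the positions it can sense, the calculation can be carried out in a distributed, asynchronous manner.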
Metaheuristics
Evolutionary algorithms, particle swarm optimization, differential evolution, ant colony optimization and their variants dominate the field of nature-inspired metaheuristics. This list includes algorithms published up to circa the year 2000; for algorithms published since then, see List of metaphor-based metaheuristics. A large number of more recent metaphor-inspired metaheuristics have started to attract criticism in the research community for hiding their lack of novelty behind an elaborate metaphor.
Metaheuristics provide no confidence in the quality of a solution. With appropriate parameters and a sufficient convergence stage, they often find a solution that is optimal or near-optimal; nevertheless, if the optimal solution is not known in advance, the quality of a found solution is not known either. In spite of this obvious drawback it has been shown that these types of algorithms work well in practice, and they have been extensively researched and developed. The drawback can be avoided by calculating solution quality for a special case where such a calculation is possible; after such a run, it is known that every solution that is at least as good as the solution found for the special case has at least the solution confidence established for that special case. One such instance is an ant-inspired Monte Carlo algorithm for the minimum feedback arc set problem, where this has been achieved probabilistically via hybridization of a Monte Carlo algorithm with the ant colony optimization technique.
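As an illustration of this family of algorithms, the following is a minimal particle swarm optimization sketch for minimising a real-valued function over a box; the inertia weight and acceleration coefficients shown are common textbook defaults rather than prescribed values, and, as noted above, the result comes with no guarantee of optimality:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
                 w=0.7, c1=1.5, c2=1.5, rng=None):
    """Minimise f over a box using a basic particle swarm.

    Each particle is pulled towards its own best-known position and towards
    the swarm's best-known position; no optimality guarantee is given.
    """
    rng = rng or np.random.default_rng()
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros((n_particles, dim))                 # particle velocities
    pbest = x.copy()                                 # personal best positions
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()         # swarm's best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# Example: minimise the sphere function f(p) = sum(p**2) in three dimensions.
best_x, best_val = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=3)
```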