Bluetronix, an R&D company in Cleveland, Ohio, is using ant behaviour to remodel military wireless communications. The company is developing mobile routers (tiny computer chips running mathematical algorithms) that use swarm intelligence for military ad-hoc communications networks, covering everything from tanks to missiles and everything in between. Bluetronix's vision is to connect these many "nodes" so information can be efficiently processed and prioritised, even though such networks may contain hundreds of thousands of nodes.
In the U.S., Southwest Airlines has tested an ant-based model to improve service at Sky Harbor International Airport in Phoenix. With about 200 aircraft a day taking off and landing on two runways and using gates at three concourses, the company wanted to make sure that each plane got in and out as quickly as possible, even if it arrived early or late. The planes are like ants searching for the best gate. But rather than leaving virtual pheromones along the way, each aircraft remembers the faster gates and forgets the slower ones. After many simulations, using real data to vary arrival and departure times, each plane learned how to avoid an intolerable wait on the tarmac. Southwest was so pleased with the outcome that it may use a similar model to study the ticket counter area.
Marco Dorigo's group in Brussels is leading a European effort to create a "swarmanoid," a group of cooperating robots with complementary abilities: "foot-bots" to transport things on the ground, "hand-bots" to climb walls and manipulate objects, and "eye-bots" to fly around, providing information to the other units. The military is eager to acquire similar capabilities.
The biggest changes may be on the Internet. Consider the way Google uses group smarts to find what you're looking for. When you type in a search query, Google surveys billions of Web pages on its index servers to identify the most relevant ones. It then ranks them by the number of pages that link to them, counting links as votes (the most popular sites get weighted votes, since they're more likely to be reliable). The pages that receive the most votes are listed first in the search results. In this way, Google says, it "uses the collective intelligence of the Web to determine a page's importance."
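The link-voting idea can be sketched in a few lines of Python. This is a toy, simplified PageRank-style iteration, not Google's actual system; the four-page "web" and the function name are made up for illustration:

```python
def rank_pages(links, iterations=50, damping=0.85):
    """links: dict mapping each page to the list of pages it links to.
    Each page repeatedly passes a share of its score to the pages it
    links to, so heavily linked pages accumulate the most 'votes'."""
    pages = list(links)
    score = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                new[target] += damping * score[page] / len(outgoing)
        score = new
    return sorted(score, key=score.get, reverse=True)

web = {                      # hypothetical four-page web
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
order = rank_pages(web)      # "C" collects the most votes, "D" the fewest
```

Because links from popular pages carry more weight, the ranking is self-reinforcing, much like a pheromone trail.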
Wikipedia, a free collaborative encyclopedia, has also proved to be a big success, with millions of articles in more than 200 languages about everything under the sun, each of which can be contributed or edited by anyone. It's now possible for huge numbers of people to think together in ways we never imagined a few decades ago.
Sunday, November 22, 2009
APPLICATIONS OF ANT COLONY OPTIMIZATION
1 Antnet
AntNet is an algorithm for adaptive best-effort routing in IP networks. Its design is based on the Ant Colony Optimization (ACO) framework, a metaheuristic for combinatorial optimization.
ACO features a multi-agent organization, stigmergic communication among the agents, distributed operation, a stochastic decision policy for constructing solutions, and stigmergic learning of the parameters of that policy. AntNet was the first ACO algorithm for routing in packet-switched networks.
2 SWARM ROBOTICS
Swarm robotics is a new approach to the coordination of multirobot systems which consist of large numbers of relatively simple physical robots. The goal of this approach is to study the design of robots (both their physical body and their controlling behaviors) such that a desired collective behavior emerges from the inter-robot interactions and the interactions of the robots with the environment, inspired but not limited by the emergent behavior observed in social insects.
Potential applications for swarm robotics include, on the one hand, tasks that demand extreme miniaturization (nanorobotics, microbotics), such as distributed sensing in micromachinery or the human body. On the other hand, swarm robotics is suited to tasks that demand extremely cheap designs, such as mining or agricultural foraging.
The U.S. military is investigating swarm techniques for controlling unmanned vehicles. ESA is considering an orbital swarm for self-assembly and interferometry, and NASA is investigating the use of swarm technology for planetary mapping. Artists use swarm technology to create complex interactive systems or to simulate crowds. Tim Burton's Batman Returns was the first movie to make use of swarm technology for rendering, realistically depicting the movements of a group of penguins using the Boids system. The Lord of the Rings film trilogy used similar technology, known as Massive, during battle scenes. Swarm technology is particularly attractive because it is cheap, robust, and simple.
APPLICATION : TRAVELLING SALESMAN PROBLEM
The travelling salesman problem (TSP) is a problem in discrete or combinatorial optimization. It is a prominent illustration of a class of problems in computational complexity theory which are classified as NP-hard.
1 Problem statement
Given a number of cities and the costs of travelling from any city to any other city, what is the least-cost round-trip route that visits each city exactly once and then returns to the starting city?
2 Observation
The size of the solution space is (n-1)!/2 for n > 2, where n is the number of cities. This is the number of distinct Hamiltonian cycles in a complete graph on n nodes, that is, closed paths that visit all nodes exactly once.
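The combinatorial explosion is easy to see numerically. A quick sketch (the helper name is made up):

```python
from math import factorial

def num_tours(n):
    """Number of distinct Hamiltonian cycles in a complete graph on n nodes:
    fix the start city, order the remaining n-1, and halve for direction."""
    return factorial(n - 1) // 2

print(num_tours(5))    # 12
print(num_tours(10))   # 181440
print(num_tours(20))   # 60822550204416000 -- hopeless for brute force
```

Exhaustive search is fine for a handful of cities but becomes infeasible very quickly, which is why heuristics like ACO are attractive.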
Let us see the implementation of the problem using ant colony optimization.
3 FORMULA
1. At the beginning of the search process, a constant amount of pheromone is assigned to all arcs. When located at node i, an ant k uses the pheromone trail to compute the probability of choosing j as the next node:

p_ij^k(t) = [t_ij(t)]^a [n_ij]^b / SUM over l in N_i^k of [t_il(t)]^a [n_il]^b, for j in N_i^k

where N_i^k is the feasible neighborhood of ant k when in node i (the cities not yet visited), t_ij is the pheromone on arc (i,j), n_ij = 1/d_ij is a heuristic value favoring short arcs, and a and b are parameters weighting their relative influence.
2. When the arc (i,j) is traversed, the pheromone value changes as follows:

t_ij := t_ij + dt^k

where dt^k is the amount of pheromone deposited by ant k, typically taken inversely proportional to the length of the ant's tour. By using this rule, the probability increases that forthcoming ants will use this arc.
3. After each ant k has moved to the next node, the pheromone evaporates on all arcs according to:

t_ij := (1 - p) t_ij

where p in (0, 1] is the evaporation-rate parameter. An iteration is a complete cycle involving the ants' movement, pheromone evaporation, and pheromone deposit.
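The selection rule above can be sketched in Python as a roulette-wheel choice over the unvisited cities. This is a hedged illustration: the alpha/beta weighting follows the rule, while the specific pheromone and heuristic values are made up:

```python
import random

def choose_next(i, visited, tau, eta, alpha=1.0, beta=2.0):
    """Pick the next city j with probability proportional to
    tau[i][j]**alpha * eta[i][j]**beta over the unvisited cities."""
    candidates = [j for j in range(len(tau)) if j not in visited]
    weights = [tau[i][j] ** alpha * eta[i][j] ** beta for j in candidates]
    # Roulette-wheel selection: more pheromone or a shorter arc
    # means a proportionally higher chance of being chosen.
    return random.choices(candidates, weights=weights)[0]

# Made-up 4-city example: uniform pheromone, heuristic biased towards city 2.
tau = [[1.0] * 4 for _ in range(4)]
eta = [[1.0, 1.0, 5.0, 1.0] for _ in range(4)]
j = choose_next(0, {0}, tau, eta)   # j is one of 1, 2, 3; city 2 is most likely
```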
4 ALGORITHM
1. Initialize:
   set t := 0
   set NC := 0   {number of cycles}
   For every edge (i,j) set an initial value Tij(t) for trail intensity and dTij := 0.
   Place m ants on the n nodes.
2. Set s := 1   {tabu list index}
   for k := 1 to m do
      Place the starting town of the kth ant in tabuk(s).
3. Repeat until the tabu lists are full:
   Set s := s + 1
   for k := 1 to m do
      Choose the town j to move to with probability pijk(t).
      Move the kth ant to town j.
      Insert town j in tabuk(s).
4. For k := 1 to m do
      Compute the length Lk of the tour described by tabuk. Update the shortest tour found.
   For every edge (i,j)
      for k := 1 to m do
         dTij := dTij + dTijk
5. For every edge (i,j), compute
      Tij(t+n) := p.Tij(t) + dTij   {p is the trail-persistence parameter}
   Set t := t + n
   Set NC := NC + 1
   For all edges (i,j), set dTij := 0
6. If NC < NCmax then
      empty the tabu lists and go to step 2
   else
      print the shortest tour and stop.
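The six steps above can be sketched end-to-end in Python. This is a hedged illustration of the Ant System variant described here, not a tuned implementation; the parameter values (m, nc_max, alpha, beta, rho, Q) and the unit-square test instance are made up:

```python
import math
import random

def aco_tsp(dist, m=10, nc_max=100, alpha=1.0, beta=2.0, rho=0.5, Q=1.0):
    """Sketch of the Ant System algorithm; dist is an n x n distance matrix."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # step 1: initial trail intensity
    eta = [[1.0 / dist[i][j] if i != j else 0.0
            for j in range(n)] for i in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(nc_max):
        tours = []
        for _ in range(m):                       # step 2: one tabu list per ant
            tour = [random.randrange(n)]
            while len(tour) < n:                 # step 3: build the tour city by city
                i = tour[-1]
                cands = [j for j in range(n) if j not in tour]
                w = [tau[i][j] ** alpha * eta[i][j] ** beta for j in cands]
                tour.append(random.choices(cands, weights=w)[0])
            tours.append(tour)
        delta = [[0.0] * n for _ in range(n)]
        for tour in tours:                       # step 4: tour lengths and deposits
            L = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
            if L < best_len:
                best_len, best_tour = L, tour
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                delta[a][b] += Q / L
                delta[b][a] += Q / L
        for i in range(n):                       # step 5: evaporation + deposit
            for j in range(n):
                tau[i][j] = rho * tau[i][j] + delta[i][j]
    return best_tour, best_len                   # step 6: report the shortest tour

pts = [(0, 0), (1, 0), (1, 1), (0, 1)]           # four cities on a unit square
dist = [[math.dist(p, q) for q in pts] for p in pts]
tour, length = aco_tsp(dist)
# the shortest tour is the square's perimeter, length 4.0
```

On a toy instance like this the algorithm converges almost immediately; the interesting behaviour appears on instances with dozens of cities, where the pheromone matrix gradually concentrates probability on the good edges.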
5 CONCLUSION
The advantages of using ant colony optimization on the traveling salesman problem are:
1. It is relatively efficient for TSPs:
– for a small number of nodes, a TSP can be solved by exhaustive search
– for a large number of nodes, TSPs are computationally very hard to solve (NP-hard), with exponential time to convergence
2. It performs well against other global optimization techniques for the TSP (neural networks, genetic algorithms, simulated annealing)
3. Compared to GAs (Genetic Algorithms):
– less affected by poor initial solutions (due to the combination of random path selection and colony memory)
– retains the memory of the entire colony instead of only the previous generation
Fig. flowchart for implementing the traveling salesman problem using ACO
However, the disadvantages are:
1. Theoretical analysis is difficult:
– Due to sequences of random decisions (not independent)
– Probability distribution changes by iteration
– Research is experimental rather than theoretical
2. Convergence is guaranteed, but time to convergence uncertain
ANT COLONY OPTIMIZATION
In the real world, ants initially wander randomly and, upon finding food, return to their colony while laying down pheromone trails. If other ants find such a path, they are likely to stop travelling at random and instead follow the trail, returning and reinforcing it if they eventually find food.
Over time, however, the pheromone trail starts to evaporate, thus reducing its attractive strength. The more time it takes for an ant to travel down the path and back again, the more time the pheromones have to evaporate. A short path, by comparison, gets marched over faster, and thus the pheromone density remains high as it is laid on the path as fast as it can evaporate. Pheromone evaporation has also the advantage of avoiding the convergence to a locally optimal solution. If there were no evaporation at all, the paths chosen by the first ants would tend to be excessively attractive to the following ones. In that case, the exploration of the solution space would be constrained.
Thus, when one ant finds a good (i.e. short) path from the colony to a food source, other ants are more likely to follow that path, and positive feedback eventually leads all the ants following a single path.
Let us consider an example to demonstrate this behavior in ants.
Consider the following figure in which ants are moving on a straight line which connects a food source to the nest:
It is well-known that the main means used by ants to form and maintain the line is a pheromone trail. Ants deposit a certain amount of pheromone while walking, and each ant probabilistically prefers to follow a direction rich in pheromone rather than a poorer one. This elementary behavior of real ants can be used to explain how they can find the shortest path which reconnects a broken line after the sudden appearance of an unexpected obstacle has interrupted the initial path (see next figure).
In fact, once the obstacle has appeared, those ants which are just in front of the obstacle cannot continue to follow the pheromone trail and therefore they have to choose between turning right or left. In this situation we can expect half the ants to choose to turn right and the other half to turn left. The very same situation can be found on the other side of the obstacle (see next figure).
It is interesting to note that those ants which choose, by chance, the shorter path around the obstacle will more rapidly reconstitute the interrupted pheromone trail compared to those which choose the longer path. Hence, the shorter path will receive a greater amount of pheromone per unit time, and this will in turn cause a higher number of ants to choose the shorter path. Due to this positive feedback (autocatalytic) process, very soon all the ants will choose the shorter path (see next figure).
The most interesting aspect of this autocatalytic process is that finding the shortest path around the obstacle seems to be an emergent property of the interaction between the obstacle's shape and the ants' distributed behavior: although all ants move at approximately the same speed and deposit pheromone at approximately the same rate, it takes longer to go around the obstacle on its longer side than on its shorter side, which makes the pheromone trail accumulate more quickly on the shorter side. It is the ants' preference for higher pheromone levels that makes this accumulation on the shorter path quicker still.
The idea of the ant colony algorithm is to mimic this behavior with "simulated ants" walking around the graph representing the problem to solve.
SWARM INTELLIGENCE TECHNIQUES
As defined earlier, swarm intelligence refers to a more general set of algorithms. The most popular are:
• Ant colony optimization
• Particle swarm optimization
• Stochastic diffusion search
1 Ant colony optimization
Ant colony optimization is a class of optimization algorithms modeled on the actions of an ant colony. Artificial 'ants' - simulation agents - locate optimal solutions by moving through a parameter space representing all possible solutions. Real ants lay down pheromones directing each other to resources while exploring their environment. The simulated 'ants' similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions.
The technique can be used for both static and dynamic combinatorial optimization problems. Convergence is guaranteed, although the speed of convergence is unknown.
2 Particle Swarm Optimization
Particle swarm optimization (PSO) is a global optimization algorithm for problems in which a best solution can be represented as a point or surface in an n-dimensional space. Hypotheses are plotted in this space and seeded with an initial velocity, as well as a communication channel between the particles. Particles then move through the solution space and are evaluated according to some fitness criterion after each timestep. Over time, particles accelerate towards those particles within their communication grouping that have better fitness values. The main advantage of this approach over other global minimization strategies such as simulated annealing is that the large number of members making up the particle swarm makes the technique impressively resilient to the problem of local minima.
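The velocity-update idea can be sketched in a minimal PSO. This is a hedged illustration with made-up parameter values (inertia w, attraction coefficients c1 and c2) and the standard sphere function as a stand-in objective:

```python
import random

def pso(f, dim, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0)):
    """Minimal particle swarm sketch minimising f over a box in R^dim."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best-so-far
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Accelerate towards the particle's own best and the swarm's best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin.
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=2)
```

Each particle blends its own memory with the swarm's memory, which is what gives PSO its resilience against local minima: a particle stuck in a poor basin is steadily pulled towards better regions found by its neighbours.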
3 Stochastic diffusion search
Stochastic Diffusion Search (SDS) is an agent-based probabilistic global search and optimization technique best suited to problems where the objective function can be decomposed into multiple independent partial functions. Each agent maintains a hypothesis, which is iteratively tested by evaluating a randomly selected partial objective function parameterized by the agent's current hypothesis. In the standard version of SDS such partial evaluations are binary, resulting in each agent becoming active or inactive. Information on hypotheses diffuses across the population via inter-agent communication. Unlike the stigmergic communication used in ACO, SDS agents communicate hypotheses via a one-to-one strategy analogous to the tandem running observed in some species of ant. A positive feedback mechanism ensures that, over time, the population of agents stabilizes around the global-best solution.
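A classic toy demonstration of SDS is locating a pattern inside a string: a hypothesis is an offset, and a partial evaluation checks a single character. The sketch below is a hedged illustration of the test/diffusion cycle; the agent count, iteration count, and example text are made up:

```python
import random

def sds_search(text, pattern, n_agents=200, iters=300):
    """Toy Stochastic Diffusion Search locating `pattern` inside `text`."""
    positions = list(range(len(text) - len(pattern) + 1))
    hyp = [random.choice(positions) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(iters):
        # Test phase: each agent checks ONE random character of its hypothesis
        # (a randomly selected partial evaluation, binary outcome).
        for i in range(n_agents):
            j = random.randrange(len(pattern))
            active[i] = text[hyp[i] + j] == pattern[j]
        # Diffusion phase: an inactive agent polls a random agent; if that agent
        # is active it copies the hypothesis, otherwise it picks a fresh one.
        for i in range(n_agents):
            if not active[i]:
                other = random.randrange(n_agents)
                hyp[i] = hyp[other] if active[other] else random.choice(positions)
    # The largest cluster of agents marks the best match.
    return max(set(hyp), key=hyp.count)

text = "the quick brown fox jumps over the lazy dog"
found = sds_search(text, "lazy")   # typically stabilises at 35, the true offset
```

Offsets that only partially match keep failing random character tests, so their agents go inactive and are recruited away; only the true match sustains a growing cluster, which is the positive feedback the paragraph describes.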
Saturday, November 21, 2009
Briefly, the rules are:
1. Separation : Boids try to keep a small distance away from other objects (including other boids). This is done to avoid crowding local flockmates.
Fig 1. separation of boids
2. Cohesion : Boids try to fly towards the centre of mass of neighbouring boids, steering to move toward the average position of local flockmates.
Fig 2. cohesion of boids
3. Alignment : Boids try to match velocity with nearby boids, steering towards the average heading of local flockmates.
Fig 3. alignment of boids
Each boid has direct access to the whole scene's geometric description, but flocking requires that it reacts only to flockmates within a certain small neighborhood around itself. Flockmates outside this local neighborhood are ignored.
In the boids model interaction between simple behaviors of individuals produce complex yet organized group behavior. The component behaviors are inherently nonlinear, so mixing them gives the emergent group dynamics a chaotic aspect. At the same time, the negative feedback provided by the behavioral controllers tends to keep the group dynamics ordered. The result is life-like group behavior.
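The three rules can be sketched as a single update step in Python. This is a hedged, minimal 2-D illustration; the neighbourhood radius and the rule weights are made-up tuning constants:

```python
import math

def step(boids, radius=5.0, sep_dist=1.0):
    """One update of the three steering rules; each boid is [x, y, vx, vy]."""
    new = []
    for b in boids:
        # Only flockmates within the local neighbourhood are considered.
        neighbours = [o for o in boids
                      if o is not b and math.dist(b[:2], o[:2]) < radius]
        ax = ay = 0.0
        if neighbours:
            n = len(neighbours)
            # Cohesion: steer toward the neighbours' centre of mass.
            cx = sum(o[0] for o in neighbours) / n
            cy = sum(o[1] for o in neighbours) / n
            ax += 0.01 * (cx - b[0]); ay += 0.01 * (cy - b[1])
            # Alignment: steer toward the neighbours' average velocity.
            avx = sum(o[2] for o in neighbours) / n
            avy = sum(o[3] for o in neighbours) / n
            ax += 0.05 * (avx - b[2]); ay += 0.05 * (avy - b[3])
            # Separation: push away from neighbours that are too close.
            for o in neighbours:
                if math.dist(b[:2], o[:2]) < sep_dist:
                    ax += 0.1 * (b[0] - o[0]); ay += 0.1 * (b[1] - o[1])
        vx, vy = b[2] + ax, b[3] + ay
        new.append([b[0] + vx, b[1] + vy, vx, vy])
    return new

flock = [[0.0, 0.0, 1.0, 0.0],
         [0.0, 2.0, 0.0, 1.0],
         [2.0, 0.0, 1.0, 1.0]]
flock = step(flock)   # apply repeatedly to watch the headings align
```

Note that each boid reads only its neighbours' state; the flock-level behaviour is never programmed explicitly, which is exactly the emergence the paragraph describes.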
TYPES OF INTERACTIONS
For insects, the local interactions can be of two types:
1 Direct interaction
This may be an exchange of food or chemical substances, visual contact, or pheromones.
E.g., ants use pheromones for interaction, while birds flying in a flock use visual contact to maintain their velocity with respect to other birds.
2 Indirect interaction
Individual behavior modifies the environment, which in turn modifies the behavior of other individuals.
E.g., such interactions can be seen in termites when building a bridge.
There are many real-world examples of swarms: colonies of ants, bees, flocks of birds, and schools of fish.
3 BOIDS
One of the most beautiful findings of this field is a very simple algorithm known as Boids, which models flocking behaviour in nature. The algorithm was invented by computer animator Craig Reynolds.
It models the behaviour of flocking animals (e.g., birds) with simple rules that describe only the behaviour of individuals. The basic flocking model consists of three simple steering behaviors which describe how an individual boid maneuvers based on the positions and velocities of its nearby flockmates: