Intro to swarm intelligence (part 1)

This semester I’ve been taking a very interesting class in artificial intelligence that has covered artificial neural networks, rule-based identification trees, and expert systems, among other things. But today I’m going to talk about a slightly less traditional but still very important artificial intelligence concept known as “swarm intelligence.” I’ll present some very recent research in the area, much of which uses game theory, which is not surprising since swarm intelligence is agent-based.

Just to give a brief background of the concept: the name “swarm intelligence” was coined by Beni and Wang (1989) in their article “Swarm Intelligence in Cellular Robotic Systems.” In the article, Beni and Wang introduced a swarm as a collection of a large number of autonomous (self-thinking) robots, where each robot acts on its own with no central controller and is only able to communicate with the robots next to it. Although the robots may act independently, they may also work together to perform a larger task. This is similar to (but distinct from!) cellular automata, since cellular automata are homogeneous and static, while each robot in a swarm is fully capable of “individual thought” and is thus able to act independently of the other robots. One key feature of an intelligent robot (in my personal opinion, and as presented in the article) is that it not only recognizes patterns (cellular automata do this too) but also forms patterns in a non-deterministic but not completely random manner.

Perhaps the most often used technique in the field is particle swarm optimization (PSO). Briefly, PSO treats a set of candidate solutions as particles, each defined by a location and a velocity in the search space, such that the “swarm” of particles is guided toward the best location in the search space by the particles that have discovered it. (Imagine a flock of birds, at first disorganized, until led toward a certain location by a “leader” that found the optimal route.) For a very comprehensive compendium of applications of PSO, please refer to Poli (2007).

Personally, one of my favorite papers on the topic is Eberhart and Hu (1999). Here, the researchers trained a neural network using PSO to distinguish between normal (essential) tremors and tremors caused by Parkinson’s disease. Data for the PSO analysis was gathered via Tele-Actigraphs strapped to each subject’s non-dominant wrist. In ordinary feedforward neural networks, weights are evolved and fed into a transfer function (usually the sigmoid function). Here, PSO also evolved the slope of the sigmoid function, that is, k in \frac{1}{1+e^{-k \cdot input}}. The researchers discovered that using PSO on a neural network composed of 60 inputs, 12 hidden units, and 2 outputs reached convergence in 38 generations, which is extremely fast. Furthermore, the authors claim that the evolved network was highly accurate.
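To make the location/velocity idea concrete, here is a minimal PSO sketch that minimizes the toy function f(x) = x² in one dimension. This is my own illustrative example, not the neural-network setup from the paper, and all parameter values (inertia w, acceleration coefficients c1 and c2, swarm size) are assumptions I chose for the demo:

```python
import random

def f(x):
    # Toy objective to minimize; the optimum is at x = 0.
    return x * x

random.seed(0)
n_particles, n_iters = 10, 100
w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (illustrative choices)

# Each particle has a position and a velocity in the search space.
pos = [random.uniform(-10, 10) for _ in range(n_particles)]
vel = [0.0] * n_particles
pbest = pos[:]                 # each particle's best-known position
gbest = min(pbest, key=f)      # the swarm's best-known position (the "leader")

for _ in range(n_iters):
    for i in range(n_particles):
        # Velocity is pulled toward both the particle's own best and the swarm's best.
        vel[i] = (w * vel[i]
                  + c1 * random.random() * (pbest[i] - pos[i])
                  + c2 * random.random() * (gbest - pos[i]))
        pos[i] += vel[i]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=f)

print(gbest)  # should end up very close to 0
```

The same loop generalizes to higher dimensions by making each position and velocity a vector, which is exactly how PSO is applied to neural network weights.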

(To see a detailed and mathematically rigorous comparison between backpropagation (the usual method of adjusting neural network weights) and PSO, please refer here. The two algorithms are compared using mean squared error. Most importantly, the weights of the network in PSO are updated using the formula W(t+1) = W(t) + \Delta W(t+1), where \Delta W(t+1) = w \cdot \Delta W(t) + c_1 \cdot rand() \cdot [pBest(t) - W(t)] + c_2 \cdot rand() \cdot [gBest(t) - W(t)]. Please refer to the paper for the variable definitions.)

I will attempt to write more parts to the Swarm Intelligence series I am starting up. But I value my sleep right now, so I will leave this post at that.

This entry was posted in Algorithms and Swarm Intelligence.
