Social cognitive optimization explained

Social cognitive optimization (SCO) is a population-based metaheuristic optimization algorithm developed in 2002. The algorithm is based on social cognitive theory, and its ergodicity arises from the interplay between the individual learning of a set of agents, each with its own memory, and their social learning from the knowledge points held in a social sharing library. It has been used for solving continuous optimization, integer programming, and combinatorial optimization problems, and it has been incorporated into the NLPSolver extension of Calc in Apache OpenOffice.

Algorithm

Let f(x) be a global optimization problem, where x is a state in the problem space S. In SCO, each state is called a knowledge point, and the function f is the goodness function.

In SCO, a population of N_c cognitive agents solves the problem in parallel, together with a social sharing library. Each agent holds a private memory containing one knowledge point, and the social sharing library contains a set of N_L knowledge points. The algorithm runs for T iterative learning cycles. Running as a Markov chain process, the system behavior in the t-th cycle depends only on the system status in the (t − 1)-th cycle. The process flow is as follows:

1. Initialization: Initialize the knowledge point x_i in the memory of each agent i, and all knowledge points in the social sharing library X, normally at random in the problem space S.

2. Vicarious learning: For each agent i (i = 1, ..., N_c) at learning cycle t (t = 1, ..., T):

   2.1 Model selection: Find a high-quality model point x_M in X(t), normally realized using tournament selection, which returns the best knowledge point from τ_B randomly selected points.

   2.2 Quality evaluation: Compare the private knowledge point x_i(t) and the model point x_M, and return the one with higher quality as the base point x_Base and the other as the reference point x_Ref.

   2.3 Learning: Combine x_Base and x_Ref to generate a new knowledge point x_i(t+1). Normally x_i(t+1) should lie around x_Base, at a distance from x_Base related to the distance between x_Ref and x_Base, and a boundary-handling mechanism should be incorporated here to ensure that x_i(t+1) ∈ S.

   2.4 Knowledge sharing: Share a knowledge point, normally x_i(t+1), with the social sharing library X.

   2.5 Individual update: Update the private knowledge of agent i, normally by replacing x_i(t) with x_i(t+1). Some Monte Carlo-type updates might also be considered.

3. Library update: Update the social sharing library X(t) into X(t+1). A simple way is one-by-one tournament selection: for each knowledge point submitted by an agent, replace the worst one among τ_W points randomly selected from X(t). (A Python sketch of one learning cycle is given after this list.)
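The sketch below is a minimal Python rendering of the cycle above for a box-constrained continuous problem, treated as minimization. The goodness function (a sphere function), the combination operator (each coordinate of the new point is sampled around x_Base with a spread proportional to |x_Base − x_Ref|), the clipping-based boundary handling, and all default parameter values are illustrative assumptions, not prescribed by SCO itself; only the overall control flow follows the steps listed above.

```python
import random

def sphere(x):
    # Goodness function (minimization); an illustrative choice, not part of SCO.
    return sum(v * v for v in x)

def tournament_best(library, tau_b, f):
    """Model selection: best of tau_b points drawn at random from the library."""
    return min(random.sample(library, tau_b), key=f)

def combine(base, ref, lower, upper):
    """Learning: sample a new point around base; the per-coordinate spread is
    proportional to |base - ref|, and coordinates are clipped into [lower, upper]."""
    new = []
    for b, r, lo, hi in zip(base, ref, lower, upper):
        v = b + random.uniform(-1.0, 1.0) * (b - r)   # within [b - |b-r|, b + |b-r|]
        new.append(min(max(v, lo), hi))               # simple boundary handling
    return new

def sco(f, lower, upper, n_c=10, n_l=70, t_max=100, tau_b=2, tau_w=4):
    rand_point = lambda: [random.uniform(lo, hi) for lo, hi in zip(lower, upper)]

    # Initialization: private knowledge points and the social sharing library.
    agents = [rand_point() for _ in range(n_c)]
    library = [rand_point() for _ in range(n_l)]

    for _ in range(t_max):
        submitted = []
        for i in range(n_c):
            # Model selection: tournament of size tau_b over the library.
            x_model = tournament_best(library, tau_b, f)
            # Quality evaluation: the better point becomes the base, the other the reference.
            if f(agents[i]) <= f(x_model):
                x_base, x_ref = agents[i], x_model
            else:
                x_base, x_ref = x_model, agents[i]
            # Learning: generate the new knowledge point around x_base.
            x_new = combine(x_base, x_ref, lower, upper)
            # Knowledge sharing: submit the new point for the library update.
            submitted.append(x_new)
            # Individual update: the new point replaces the private knowledge.
            agents[i] = x_new
        # Library update: each submitted point replaces the worst of tau_w random points.
        for x_new in submitted:
            idxs = random.sample(range(n_l), tau_w)
            worst = max(idxs, key=lambda j: f(library[j]))
            library[worst] = x_new

    return min(agents + library, key=f)

if __name__ == "__main__":
    lower, upper = [-5.0] * 5, [5.0] * 5
    best = sco(sphere, lower, upper)
    print(best, sphere(best))
```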

SCO has three main parameters: the number of agents N_c, the size of the social sharing library N_L, and the number of learning cycles T. Including the initialization process, the total number of knowledge points generated is N_L + N_c·(T + 1), so the total depends only weakly on N_L when T is large.
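As an illustration (the parameter values here are arbitrary), a run with N_c = 10 agents, a library of N_L = 70 points, and T = 100 learning cycles generates 70 + 10 × (100 + 1) = 1,080 knowledge points, and hence roughly that many evaluations of the goodness function.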

Compared to traditional swarm algorithms such as particle swarm optimization, SCO can achieve high-quality solutions with a small N_c, even with N_c = 1. Nevertheless, smaller N_c and N_L might lead to premature convergence. Some variants have been proposed to guarantee global convergence. One can also build a hybrid optimization method by combining SCO with other optimizers. For example, SCO was hybridized with differential evolution to obtain better results than either individual algorithm on a common set of benchmark problems.[1]

Notes and References

  1. Xie, Xiao-Feng; Liu, J.; Wang, Zun-Jing (2014). "A cooperative group optimization system". Soft Computing. 18 (3): 469–495. doi:10.1007/s00500-013-1069-8. arXiv:1808.01342. S2CID 5393223.