# simulated_annealing 0.3.0

Simulated annealing framework for Dart. Enables quickly setting up a simulator for finding the global minimum of multi-variate functions.

# Simulated Annealing For Dart

## Introduction

Simulated annealing (SA) is an algorithm aimed at finding the *global* minimum
of a function E(x_{0}, x_{1}, ..., x_{n})
for a given region ω(x_{0}, x_{1}, ..., x_{n}).
The function to be minimized can be interpreted as the
**system energy**. In that case, the global minimum represents
the **ground state** of the system.


The algorithm name was coined by Kirkpatrick et al. and was derived from the process of annealing a metal alloy or glass. The first step of the annealing process consists of heating a solid material above a critical temperature. This allows its atoms to gain sufficient kinetic energy to rearrange themselves. Then the temperature is decreased sufficiently slowly in order to minimize atomic lattice defects as the material solidifies.

Simulated annealing works by randomly selecting a new point in the neighbourhood of the
current solution, evaluating the energy function, and deciding if the new solution is accepted or rejected.
If the energy E at a newly selected point is lower than the previous minimum energy
E_{min}, the new solution is accepted. Crucially, if ΔE > 0, the algorithm still accepts the new solution with probability:
P(ΔE > 0, T) = e^{-ΔE/(k_{B}·T)}, where ΔE = E - E_{min}. Accepting up-hill moves provides a method of escaping from local energy minima.

The Boltzmann constant k_{B} relates the system
temperature with the kinetic energy of particles in a gas. In the context of SA, it is customary to set k_{B} ≡ 1.
With this convention, the probability of accepting a new solution is given by:

P(ΔE > 0, T) = e^{-ΔE/T} and P(ΔE ≤ 0, T) = 1.0,
where ΔE = E - E_{min}.

The expression above ensures that the acceptance probability decreases with decreasing temperature (for ΔE > 0). As such, the temperature is a parameter that controls the probability of up-hill moves.
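The acceptance rule described above can be sketched in plain Dart. This is an illustrative sketch only, independent of the package API, with k_{B} set to 1 as per the convention above:

```dart
import 'dart:math';

/// Metropolis acceptance criterion with k_B = 1: down-hill moves are always
/// accepted; an up-hill move of size deltaE > 0 is accepted with
/// probability exp(-deltaE / t).
bool accept(num deltaE, num t, Random rng) {
  if (deltaE <= 0) return true;
  return rng.nextDouble() < exp(-deltaE / t);
}

void main() {
  final rng = Random();
  print(accept(-0.5, 1.0, rng)); // always true: down-hill move
  // The same up-hill step is far more likely to be accepted while hot:
  print(exp(-0.5 / 10.0).toStringAsFixed(3)); // 0.951
  print(exp(-0.5 / 0.1).toStringAsFixed(3)); // 0.007
}
```

Note how lowering the temperature from 10.0 to 0.1 reduces the acceptance probability of the same up-hill step from roughly 95% to under 1%.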

The process is demonstrated in the animation above. The left figure shows a spherical 3D search space, while the energy value is represented by colour. The figure on the right shows a projection of the energy function onto the x-y plane. Initially, random points are chosen from a large region encompassing the entire spherical search space. In the simulation shown above, intermediate solutions near the local minimum are followed by up-hill moves. As the temperature drops, the search neighbourhood is contracted and the solution converges to the global minimum.

## Usage

To use this package, include `simulated_annealing` as a `dependency` in your `pubspec.yaml` file.

The following steps are required to set up the SA algorithm.

- Specify the search space ω.
- Define the system `EnergyField`, an object encapsulating the energy function (cost function) and its domain (the search space).
- Create an instance of `LoggingSimulator`, or alternatively extend the abstract class `Simulator`.
- Start the simulated annealing process.


```dart
import 'dart:io';
import 'dart:math';

import 'package:list_operators/list_operators.dart';
import 'package:simulated_annealing/simulated_annealing.dart';

void main() async {
  // Defining a spherical search space.
  final radius = 2;
  final x = FixedInterval(-radius, radius);
  num yLimit() => sqrt(pow(radius, 2) - pow(x.next(), 2));
  final y = ParametricInterval(() => -yLimit(), yLimit);
  num zLimit() => sqrt(pow(radius, 2) - pow(y.next(), 2) - pow(x.next(), 2));
  final z = ParametricInterval(() => -zLimit(), zLimit);
  // Parametric intervals must be listed in order of dependence.
  // Example: y depends on x, z depends on x and y => list order: [x, y, z].
  final space = SearchSpace([x, y, z]);

  // Defining an energy function with a global and a local minimum.
  final xGlobalMin = [0.5, 0.7, 0.8];
  final xLocalMin = [-1.0, -1.0, -0.5];
  num energy(List<num> x) {
    return 4.0 -
        4.0 * exp(-4 * xGlobalMin.distance(x)) -
        2.0 * exp(-6 * xLocalMin.distance(x));
  }

  // Constructing an instance of `EnergyField`.
  final energyField = EnergyField(energy, space);

  // Constructing a simulator instance.
  final simulator = LoggingSimulator(energyField);
  print(simulator);
  print(await simulator.info);

  // Running the simulated annealing process.
  final xSol = await simulator.anneal((_) => 1, isRecursive: true, ratio: 0.5);

  // Storing simulator log data.
  await File('example/data/log.dat').writeAsString(simulator.export());
  print('Solution: $xSol');
}
```

## Algorithm Tuning

It can be shown that by selecting a sufficiently high initial temperature the algorithm converges to the global minimum if the temperature decreases on a logarithmic scale (slow cooling schedule) and the number of inner iterations (Markov chain length) is sufficiently high [1].
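The gap between slow and fast cooling can be made concrete with a small sketch. The two schedule formulas below are illustrative textbook examples, not the package's `TemperatureSequence` implementations:

```dart
import 'dart:math';

/// Slow (logarithmic) cooling: convergence to the global minimum can be
/// guaranteed in theory, but the temperature drops very slowly.
double logCooling(double tStart, int k) => tStart / log(k + e);

/// Fast (exponential) cooling with a fixed ratio of 0.5: no convergence
/// guarantee, but far fewer temperature steps are needed.
double expCooling(double tStart, int k) => tStart * pow(0.5, k).toDouble();

void main() {
  const tStart = 100.0;
  // After 10 steps the logarithmic schedule is still hot ...
  print(logCooling(tStart, 9).toStringAsFixed(1)); // 40.6
  // ... while the exponential schedule is nearly frozen.
  print(expCooling(tStart, 9).toStringAsFixed(3)); // 0.195
}
```

After only ten steps the logarithmic schedule has barely cooled, which is why practical implementations reach for fast schedules and accept the loss of the convergence guarantee.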

Practical implementations of the SA algorithm aim to generate
an acceptable solution with *minimal* computational effort.
For such *fast cooling* schedules, algorithm convergence to the global minimum is not
strictly guaranteed. In that sense, SA is a heuristic approach and some
degree of trial and error is required to determine which annealing schedule
works best for a given problem.

The behaviour of the annealing simulator can be tuned using the following **optional** parameters of the `Simulator` constructor:

`gammaStart`: Initial acceptance probability with default value 0.7. Useful values for γ_{start} are in the range [0.7, 0.9]. If γ_{start} is too low, up-hill moves are unlikely, potentially preventing the SA algorithm from escaping a local minimum. If γ_{start} is set close to 1.0, the algorithm will accept too many up-hill moves at high temperatures, wasting computational time and delaying convergence.

`gammaEnd`: Final acceptance probability. Towards the end of the annealing process one assumes that the solution has converged towards the global minimum and up-hill moves should be restricted. For this reason, γ_{end} has default value 0.05.

`iterations`: Determines the number of temperature steps in the annealing schedule.
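As a sketch, the simulator from the usage example could be tuned as below. The parameter names are those listed in this section, but whether `LoggingSimulator` forwards them to `Simulator` in exactly this form is an assumption; check the API reference for the exact constructor signature:

```dart
// Hypothetical tuning of the simulator from the usage example above.
final simulator = LoggingSimulator(
  energyField,
  gammaStart: 0.8, // allow more up-hill moves early on
  gammaEnd: 0.05, // restrict up-hill moves near the end
  iterations: 750, // number of outer temperature steps
);
```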

Additionally, it is possible to set the class variable `temperatureSequence` to a function of type `TemperatureSequence` that is used to determine the temperature at each (outer) iteration step.

It is recommended to start with a higher number of outer iterations (number of entries in the sequence of temperatures) and log quantities like the current system energy, temperature, and the intermediate solutions.

The figure below shows a typical SA log where the x-coordinate of the solution (green dots) converges asymptotically to 0.5. The graph is discussed in more detail here.

The number of inner iterations (performed while the temperature is kept constant)
is also referred to as the Markov chain length and is determined by a function with typedef `MarkovChainLength`; see the method `anneal`.

For fast cooling schedules convergence to an acceptable solution can be improved by increasing the number of inner iterations.
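One way to do this is to make the chain length a function of temperature, spending more inner iterations once the system has cooled. The function below is illustrative only; its name and the threshold values are not taken from the package:

```dart
/// Illustrative only: use a longer Markov chain once the temperature
/// has dropped, refining the solution locally near the end of the run.
int chainLength(num temperature) => temperature > 1.0 ? 50 : 200;

void main() {
  print(chainLength(10.0)); // 50 inner iterations while hot
  print(chainLength(0.5)); // 200 inner iterations near the end
}
```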

## Annealing Schedule

In general, the following information is required to define an annealing schedule:

- T_{start}, the initial temperature,
- T_{end}, the final temperature,
- the number of (outer) iterations,
- a function of type `TemperatureSequence` that is used to determine the temperature at each (outer) iteration step.

The class `EnergyField` provides the methods `tStart` and `tEnd`. These use
an algorithm introduced by Ben-Ameur to calculate the
initial and final annealing temperature [2].

## Examples

Further information can be found in the folder `example`. The following topics are covered:

- search space,
- annealing schedule,
- system energy and logging simulator.

## Features and bugs

Please file feature requests and bugs at the issue tracker.