simulated_annealing 0.0.1-nullsafety

A simulated annealing framework for Dart.

Simulated Annealing For Dart #

Build Status

Introduction #

Simulated annealing (SA) is an algorithm aimed at finding the global minimum of a function E(x0, x1, ..., xn) over a given region ω. The function to be minimized can be interpreted as the system energy. In that case, the global minimum represents the ground state of the system.

The algorithm name was coined by Kirkpatrick et al. [1] and was derived from the process of annealing a metal alloy or glass. The first step of the annealing process consists of heating a solid material above a critical temperature. This allows its atoms to gain sufficient kinetic energy to be able to rearrange themselves. Then the temperature is decreased sufficiently slowly in order to minimize atomic lattice defects as the material solidifies.

Simulated annealing works by randomly selecting points in the search space ω, evaluating the energy function, and deciding if the new solution is accepted or rejected. If, for a newly selected point, the energy E is lower than the previous minimum energy Emin, the new solution is accepted: P(ΔE < 0, T) = 1, where ΔE = E - Emin.

Crucially, if the energy is larger than Emin, the algorithm still accepts the new solution with probability: P(ΔE > 0, T) = e^(-ΔE/(kB·T)). Accepting up-hill moves provides a method of escaping from local energy minima. The probability of accepting a solution with ΔE > 0 decreases with decreasing temperature.
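The acceptance rule above translates directly into code. The sketch below is plain Dart and not part of the package API; it mirrors the expression used to log P(dE) in the example further down.

import 'dart:math';

// Illustrative sketch (not the package API): Metropolis acceptance probability.
// Down-hill moves (dE < 0) are always accepted; up-hill moves are accepted
// with probability exp(-dE / (kB * t)).
num acceptanceProbability(num dE, num kB, num t) =>
    dE < 0 ? 1 : exp(-dE / (kB * t));

void main() {
  // With kB = 1 and t = 1, an up-hill move of dE = 1 is accepted
  // with probability exp(-1) ≈ 0.37.
  print(acceptanceProbability(1, 1, 1));
}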

The Boltzmann constant kB relates the system temperature with the kinetic energy of particles in a gas. In the context of SA, kB relates the system temperature with the probability of accepting a solution where ΔE > 0. As such, the temperature is a parameter that controls the probability of up-hill moves.

Most authors set kB = 1 and scale the temperature to control the solution acceptance probability. I find it more practical to use an independent temperature scale with the highest value T0 and the lowest value Tn (where n is the number of outer SA iterations), and to calculate the system-dependent constant kB (see section Algorithm Tuning).

Usage #

To use this package, include simulated_annealing as a dependency in your pubspec.yaml file.
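For example (the version constraint below matches the version shown on this page; newer releases may be available):

dependencies:
  simulated_annealing: ^0.0.1-nullsafety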

The following steps are required to set up the SA algorithm.

  1. Extend the class Simulator implementing the methods prepareLog() and recordLog().
  2. Specify the search space ω.
  3. Define an annealing schedule and a neighbourhood function.
  4. Define the system energy.
  5. Start the simulated annealing process.
Click to show source code.

import 'dart:io';
import 'dart:math';

import 'package:simulated_annealing/simulated_annealing.dart';

class LoggingSimulator extends Simulator {
  LoggingSimulator(
    Energy system,
    AnnealingSchedule schedule, {
    num gamma = 0.8,
    num? dE0,
    List<num>? xMin0,
  }) : super(
          system,
          schedule,
          gamma: gamma,
          dE0: dE0,
          xMin0: xMin0,
        );

  final rec = DataRecorder();

  @override
  void prepareLog() {
    rec.prepareVector('x', 3);
    rec.prepareScalar('Energy');
    rec.prepareScalar('Energy Min');
    rec.prepareScalar('P(dE)');
    rec.prepareScalar('Temperature');
    rec.prepareVector('dx', 3);
  }

  @override
  void recordLog() {
    rec.addVector('x', x);
    rec.addVector('dx', dx);
    rec.addScalar('Energy', eCurrent);
    rec.addScalar('Energy Min', eMin);
    rec.addScalar('P(dE)',
        (eCurrent - eMin) < 0 ? 1 : exp(-(eCurrent - eMin) / (kB * t)));
    rec.addScalar('Temperature', t);
  }
}

void main() async {
  // Defining a spherical space.
  final radius = 2;
  final x = FixedInterval(-radius, radius);
  final y = ParametricInterval(
    () => -sqrt(pow(radius, 2) - pow(x.next(), 2)),
    () => sqrt(pow(radius, 2) - pow(x.next(), 2)),
  );
  final z = ParametricInterval(
    () => -sqrt(pow(radius, 2) - pow(y.next(), 2) - pow(x.next(), 2)),
    () => sqrt(pow(radius, 2) - pow(y.next(), 2) - pow(x.next(), 2)),
  );
  final space = SearchSpace([x, y, z]);

  // Defining an annealing schedule.
  final schedule = AnnealingSchedule(
    exponentialSequence(100, 1e-8, n: 750),
    space.size,
    [1e-6, 1e-6, 1e-6],
  );

  // Defining an energy function.
  // The energy function has a local minimum at xLocalMin
  // and a global minimum at xGlobalMin.
  final xGlobalMin = [0.5, 0.7, 0.8];
  final xLocalMin = [-1.0, -1.0, -0.5];
  num energyFunction(List<num> x) {
    return 4.0 -
        4.0 * exp(-4 * xGlobalMin.distance(x)) -
        2.0 * exp(-6 * xLocalMin.distance(x));
  }

  // A temperature-dependent Markov chain length: longer inner loops at low
  // temperatures, capped at 25 iterations. (Not used in the call to anneal
  // below, which passes a constant chain length instead.)
  // ignore: unused_element
  int markov(num temperature) {
    return min(1 + 1 ~/ (100 * temperature), 25);
  }

  final energy = Energy(energyFunction, space);

  // Construct a simulator instance.
  final simulator = LoggingSimulator(
    energy,
    schedule,
    gamma: 0.8,
    dE0: energy.stdDev + 0.1,
    xMin0: [-1, -1, -0.5],
  );

  print(simulator);

  // Append each sampled energy value to the corresponding sample point,
  // so that the exported rows contain the columns: x y z energy.
  final sample = simulator.system.samplePoints;
  for (var i = 0; i < simulator.system.sampleSize; i++) {
    sample[i].add(simulator.system.sample[i]);
  }

  // Start the annealing process using a constant Markov chain length of 1.
  final xSol = simulator.anneal((t) => 1);
  await File('../data/log.dat').writeAsString(simulator.rec.export());
  await File('../data/energy_sample.dat')
      .writeAsString(sample.export(label: 'x y z energy'));

  print('Solution: $xSol');
}

Algorithm Tuning #

It can be shown that by selecting a sufficiently high initial factor kB·T the algorithm converges to the global minimum if the annealing schedule decreases on a logarithmic scale (slow cooling) and the number of inner iterations (Markov chain length) is sufficiently high [2].

Practical implementations of the SA algorithm aim to generate an acceptable solution with minimal computational effort. For such fast cooling schedules, algorithm convergence to the global minimum is not strictly guaranteed. In that sense, SA is a heuristic approach and some degree of trial and error is required to determine which annealing schedule works best for a given problem.
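For comparison, a slowly decreasing, logarithmic temperature sequence could be generated along the following lines. This is a plain-Dart sketch, not a function provided by the package; the example above uses the much faster exponentialSequence instead.

import 'dart:math';

// Illustrative sketch (not the package API): a logarithmic cooling sequence
// t_k = t0 / ln(k + e), k = 0, ..., n - 1, starting at t0 and decreasing slowly.
List<num> logarithmicSequence(num t0, {int n = 750}) =>
    List<num>.generate(n, (k) => t0 / log(k + e));

void main() {
  // Print the first and last temperatures for t0 = 100 and n = 750.
  final t = logarithmicSequence(100);
  print('${t.first}  ${t.last}');
}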

Estimating the value of kB #

An estimate for the average scale of the variation of the energy function ΔE0 can be obtained by sampling the energy function E at random points in the search space ω and calculating the sample standard deviation σE [3]. The constant kB is set such that the probability of accepting a solution P(ΔE0 = σE, T0) = γ where T0 is the initial temperature.

Note: When using the standard deviation as a measure of the average variation of E it is possible to underestimate ΔE0 if the function E is plateau-shaped with isolated extrema. For this reason, the constructor of Simulator accepts the optional argument ΔE0 with default value ΔE0 = 0.5·(σE + 0.2·|Emax - Emin|), where Emax and Emin are the maximum and minimum values found while sampling the energy function as mentioned above.
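Under the relation P(ΔE0, T0) = e^(-ΔE0/(kB·T0)) = γ stated above, kB can be solved for explicitly: kB = -ΔE0 / (T0·ln γ). The sketch below only illustrates this calculation and is not necessarily how Simulator computes kB internally.

import 'dart:math';

// Illustrative sketch: solve exp(-dE0 / (kB * t0)) = gamma for kB.
// Since 0 < gamma < 1, log(gamma) is negative and kB comes out positive.
num estimateKB({required num dE0, required num t0, required num gamma}) =>
    -dE0 / (t0 * log(gamma));

void main() {
  // Example: dE0 = 1.0, initial temperature T0 = 100, gamma = 0.8.
  print(estimateKB(dE0: 1.0, t0: 100, gamma: 0.8)); // ≈ 0.0448
}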

ΔE0 is the most critical SA parameter. If ΔE0 is too large, the algorithm will oscillate wildly between random points and will most likely not converge towards an acceptable solution. On the other hand, if ΔE0 is too small, up-hill moves are unlikely and the solution most likely converges towards a local minimum or a point situated in a plateau-shaped region.

Since γ represents a probability, 0 < γ < 1; in practice, useful values for γ lie in the range (0.7, 0.9). If γ is too low, up-hill moves are unlikely, potentially preventing the SA algorithm from escaping a local minimum. If γ is set close to 1.0, the algorithm accepts too many up-hill moves at high temperatures, wasting computational time and delaying convergence.

Selecting an annealing schedule #

It is recommended to start with a higher number of outer iterations (number of entries in the sequence of temperatures) and log quantities like the current system energy, temperature, and the intermediate solutions.

The figure below shows a typical SA log where the x-coordinate of the solution (green dots) converges asymptotically to 0.5 after 750 iterations. A projection of the energy function onto the x-y plane is shown in the inset. Note that the global minimum of the energy function occurs at x = 0.5. The graph is discussed in more detail here.

Convergence Graph

The number of inner iterations (performed while the temperature is kept constant) is determined by a function with typedef MarkovChainLength, see method anneal. For fast cooling schedules, convergence to an acceptable solution can be improved by increasing the number of inner iterations (Markov chain length).
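The example above already defines such a function (markov). As a sketch, it could be passed to anneal in place of the constant chain length (t) => 1 used there; the fragment below assumes the simulator and markov definitions from that example.

  // Fragment (assumes `simulator` and `markov` from the example above):
  // the Markov chain length now grows as the temperature drops.
  final xSol = simulator.anneal(markov);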

Examples #

Further examples on how to define and use a:

Features and bugs #

Please file feature requests and bugs at the issue tracker.