
Simulated Annealing


Problem: Given a cost function f: R^n -> R, find an n-tuple that minimizes the value of f. Note that minimizing the value of a function is algorithmically equivalent to maximization, since maximizing f is the same as minimizing -f.
Many of you with a background in calculus/analysis are likely familiar with simple optimization of single-variable functions. For instance, the function f(x) = x^2 + 2x can be optimized by setting the first derivative equal to zero, obtaining the solution x = -1 and the minimum value f(-1) = -1. This technique suffices for simple functions with few variables. However, it is often the case that researchers are interested in optimizing functions of several variables, in which case a solution can only be obtained computationally.
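Concretely, the steps for this example are:

f(x) = x^2 + 2x
f'(x) = 2x + 2 = 0  =>  x = -1
f(-1) = (-1)^2 + 2(-1) = -1

and f''(-1) = 2 > 0 confirms that x = -1 is indeed a minimum rather than a maximum.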

One excellent example of a difficult optimization task is the chip floor planning problem. Imagine you're working at Intel and you're tasked with designing the layout for an integrated circuit. You have a set of modules of different shapes/sizes and a fixed area on which the modules can be placed. There are a number of objectives you want to achieve: maximizing the ability of wires to connect components, minimizing net area, minimizing chip cost, and so on. With these in mind, you create a cost function that takes a configuration of, say, 1000 variables and returns a single real value representing the 'cost' of that configuration. We call this the objective function, since the goal is to minimize its value.
A naive algorithm would be a complete space search: we examine all possible configurations until we find the minimum. This may suffice for functions of a few variables, but for the problem we have in mind, such a brute-force algorithm would run in O(n!) time.

Due to the computational intractability of problems like these, and other NP-hard problems, many optimization heuristics have been developed in an attempt to yield a good, albeit potentially suboptimal, value. In our case, we don’t necessarily need to find a strictly optimal value — finding a near-optimal value would satisfy our goal. One widely used technique is simulated annealing, by which we introduce a degree of stochasticity, potentially shifting from a better solution to a worse one, in an attempt to escape local minima and converge to a value closer to the global optimum. 

Simulated annealing is based on metallurgical practices by which a material is heated to a high temperature and then slowly cooled. At high temperatures, atoms may shift unpredictably, often eliminating impurities as the material cools into a pure crystal. The simulated annealing optimization algorithm replicates this process, with the material's energy state corresponding to the current solution.
In this algorithm, we define an initial temperature, often set to 1, and a minimum temperature, on the order of 10^-4. The current temperature is repeatedly multiplied by some fraction alpha until it falls below the minimum temperature. For each distinct temperature value, we run the core optimization routine a fixed number of times. The routine consists of finding a neighboring solution and accepting it with probability e^((f(c) - f(n))/T), where c is the current solution, n is the neighboring solution, and T is the current temperature; when the neighbor is at least as good, this quantity is at least 1, so the move is always accepted. A neighboring solution is found by applying a slight perturbation to the current solution. This randomness helps escape the common pitfall of optimization heuristics: getting trapped in local minima. By occasionally accepting a solution worse than the current one, with a probability that shrinks as the increase in cost grows (and as the temperature falls), the algorithm is more likely to converge near the global optimum. Designing a neighbor function is quite tricky and must be done on a case-by-case basis, but below are some ideas for finding neighbors in locational optimization problems, followed by a small code sketch of one of them.

  • Move all points 0 or 1 units in a random direction
  • Shift input elements randomly
  • Swap random elements in input sequence
  • Permute input sequence
  • Partition input sequence into a random number of segments and permute segments
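As an illustration of the third idea, here is a minimal C++ sketch of a neighbor function that swaps two random elements; it assumes a solution is represented as a vector<int>, as in the framework below, and that the random number generator has already been seeded.

#include <cstdlib>
#include <utility>
#include <vector>

// Minimal sketch: produce a neighbor by swapping two random elements
// of the configuration (a slight perturbation of the current solution).
std::vector<int> neighbor(std::vector<int> config) {
    if (config.size() < 2) return config; // nothing to swap
    int i = rand() % config.size();
    int j = rand() % config.size();
    std::swap(config[i], config[j]);
    return config;
}

Because each call changes only one pair of positions, successive candidates stay close together in the search space, and the temperature schedule alone controls how far the search is allowed to wander.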

One caveat is that we need to provide an initial solution so the algorithm knows where to start. This can be done in two ways: (1) using prior knowledge about the problem to pick a good starting point, or (2) generating a random solution. Although a random solution is generally worse and can occasionally inhibit the success of the algorithm, it is the only option for problems where we know nothing about the landscape; a minimal sketch of this approach follows.
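When nothing is known about the landscape, a random starting point can be as simple as a shuffled selection of grid cells. Below is one possible sketch, under the assumption that a solution is a set of k distinct cells on an M x N grid encoded as indices, mirroring the genRandSol stub in the framework further down.

#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Minimal sketch: choose k distinct random cells out of an M*N grid
// as the initial configuration (cells encoded as indices in [0, M*N)).
std::vector<int> genRandSol(int M, int N, int k) {
    std::vector<int> cells(M * N);
    std::iota(cells.begin(), cells.end(), 0); // 0, 1, ..., M*N - 1
    std::mt19937 rng{std::random_device{}()};
    std::shuffle(cells.begin(), cells.end(), rng);
    cells.resize(k); // keep the first k shuffled cells
    return cells;
}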

There are many other optimization techniques, but simulated annealing is a useful, stochastic optimization heuristic for large, discrete search spaces in which optimality is prioritized over running time. Below, I've included a basic framework for location-based simulated annealing (perhaps the flavor of problem to which simulated annealing is most applicable). Of course, the cost function, candidate generation function, and neighbor function must be defined for the specific problem at hand; the core optimization routine, however, has already been implemented.

C++




// C++ code for the above approach
#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
    float CVRMSE;
    vector<int> config;
    Solution(float CVRMSE, vector<int> configuration) {
        this->CVRMSE = CVRMSE;
        config = configuration;
    }
};
 
// Function prototype
Solution genRandSol();
 
// Global variables
float T = 1;              // Current temperature (float, so cooling works)
float Tmin = 0.0001;      // Minimum (final) temperature
float alpha = 0.9;        // Cooling rate
int numIterations = 100;  // Iterations per temperature level
int M = 5;
int N = 5;
vector<vector<char>> sourceArray(M, vector<char>(N, 'X'));
vector<int> temp = {};
Solution mini = Solution((float)INT_MAX, temp);
Solution currentSol = genRandSol();
 
// Placeholder: returns a fixed configuration so the program compiles;
// replace with problem-specific random-solution generation
Solution genRandSol() {
    vector<int> a = {1, 2, 3, 4, 5};
    return Solution(-1.0, a);
}
 
// Placeholder: returns the current solution unchanged;
// replace with a problem-specific perturbation
Solution neighbor(Solution currentSol) {
    return currentSol;
}
 
// Placeholder: replace with the problem-specific cost function
float cost(vector<int> inputConfiguration) {
    return -1.0;
}
 
// Mapping from [0, M*N) --> [0, M) x [0, N)
vector<int> indexToPoints(int index) {
    vector<int> points = {index % M, index / M};
    return points;
}
 
 
// Returns the minimum value found by the optimization
int main() {
 
    // Seed the random number generator once (not inside the loop,
    // where reseeding with time(NULL) would repeat values)
    srand((unsigned)time(NULL));
 
    while (T > Tmin) {
        for (int i = 0; i < numIterations; i++) {
            // Reassigns global minimum accordingly
            if (currentSol.CVRMSE < mini.CVRMSE) {
                mini = currentSol;
            }
            Solution newSol = neighbor(currentSol);
            // Acceptance probability: at least 1 whenever the
            // neighbor is no worse than the current solution
            float ap = exp((currentSol.CVRMSE - newSol.CVRMSE) / T);
            if (ap > (float)rand() / RAND_MAX) {
                currentSol = newSol;
            }
        }
        T *= alpha; // Decreases T, cooling phase
    }
     
    cout << mini.CVRMSE << "\n\n";
     
    // Reset the display grid
    for (int i = 0; i < M; i++) {
        for (int j = 0; j < N; j++) {
            sourceArray[i][j] = 'X';
        }
    }  
     
    // Mark the cells occupied by the best configuration found
    for (int index = 0; index < (int)mini.config.size(); index++) {
        int obj = mini.config[index];
        vector<int> coord = indexToPoints(obj);
        sourceArray[coord[0]][coord[1]] = '-';
    }
 
    // Display the grid
    for (int i = 0; i < M; i++) {
        string row = "";
        for (int j = 0; j < N; j++) {
            row = row + sourceArray[i][j] + " ";
        }
        cout << (row) << endl;
    }
}
 
// The code is contributed by Nidhi goel.


Javascript




// JavaScript code for the above approach
class Solution {
    constructor(CVRMSE, configuration) {
        this.CVRMSE = CVRMSE;
        this.config = configuration;
    }
}
 
let T = 1;
const Tmin = 0.0001;
const alpha = 0.9;
const numIterations = 100;
 
function genRandSol() {
    // Instantiating for the sake of compilation
    const a = [1, 2, 3, 4, 5];
    return new Solution(-1.0, a);
}
 
function neighbor(currentSol) {
    return currentSol;
}
 
function cost(inputConfiguration) {
    return -1.0;
}
 
const M = 5;
const N = 5;
 
// Mapping from [0, M*N) --> [0, M) x [0, N)
function indexToPoints(index) {
    const points = [index % M, Math.floor(index / M)];
    return points;
}
 
const sourceArray = Array.from(Array(M), () => new Array(N).fill('X'));
let min = new Solution(Number.POSITIVE_INFINITY, null);
let currentSol = genRandSol();
 
while (T > Tmin) {
    for (let i = 0; i < numIterations; i++) {
        // Reassigns global minimum accordingly
        if (currentSol.CVRMSE < min.CVRMSE) {
            min = currentSol;
        }
        const newSol = neighbor(currentSol);
        const ap = Math.exp((currentSol.CVRMSE - newSol.CVRMSE) / T);
        if (ap > Math.random()) {
            currentSol = newSol;
        }
    }
    T *= alpha; // Decreases T, cooling phase
}
 
// Prints the minimum value found by the optimization
console.log(min.CVRMSE, "\n\n");
 
for (let i = 0; i < M; i++) {
    for (let j = 0; j < N; j++) {
        sourceArray[i][j] = "X";
    }
}
 
// Mark the cells occupied by the best configuration found
for (const obj of min.config) {
    const coord = indexToPoints(obj);
    sourceArray[coord[0]][coord[1]] = "-";
}
 
// Display the grid
for (let i = 0; i < M; i++) {
    let row = "";
    for (let j = 0; j < N; j++) {
        row += sourceArray[i][j] + " ";
    }
    console.log(row);
}


Output

-1

X - X X X
- X X X X
- X X X X
- X X X X
- X X X X

 


