Simulated Annealing

Problem : Given a cost function f : R^n -> R, find an n-tuple that minimizes the value of f. Note that minimizing the value of a function is algorithmically equivalent to maximization, since maximizing f is the same as minimizing -f.

Many of you with a background in calculus/analysis are likely familiar with simple optimization of single-variable functions. For instance, the function f(x) = x^2 + 2x can be minimized by setting its first derivative equal to zero; as worked out below, this yields x = -1 and the minimum value f(-1) = -1. This technique suffices for simple functions of few variables. However, researchers are often interested in optimizing functions of many variables, in which case a solution typically can only be obtained computationally.
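
Spelled out, the single-variable example goes:

f'(x) = 2x + 2 = 0  =>  x = -1,   f(-1) = (-1)^2 + 2(-1) = -1

and this critical point is a minimum, since f''(x) = 2 > 0.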

One excellent example of a difficult optimization task is the chip floor planning problem. Imagine you're working at Intel and you're tasked with designing the layout for an integrated circuit. You have a set of modules of different shapes and sizes and a fixed area on which the modules can be placed. There are a number of objectives you want to achieve: maximizing the ability of wires to connect components, minimizing net area, minimizing chip cost, and so on. With these in mind, you create a cost function that takes a configuration of, say, 1000 variables and returns a single real value representing the 'cost' of that configuration. We call this the objective function, since the goal is to minimize its value.

A naive algorithm would be a complete search of the solution space: try every possible configuration until we find the minimum. This may suffice for functions of few variables, but for a problem of this size, such a brute-force algorithm would have to run in O(n!) time.
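
To make the blow-up concrete, below is a minimal, illustrative sketch (not part of the framework later in this article) of exhaustive search for the placement problem used below: choosing k of the M*N grid cells. The cost function here is a placeholder assumption; the point is that there are C(M*N, k) configurations to visit, already 53,130 even for a 5x5 grid with k = 5.

// Illustrative sketch: brute-force enumeration of all ways to
// choose k of the M*N cells, keeping the cheapest configuration
public class BruteForce {

    static double bestCost = Double.MAX_VALUE;

    // Placeholder objective; a real problem supplies its own
    static double cost(int[] config) {
        double s = 0;
        for (int c : config) s += c;
        return s;
    }

    // Fills config[depth..] with cells drawn from [nextCell, totalCells)
    static void search(int[] config, int depth, int nextCell, int totalCells) {
        if (depth == config.length) {
            bestCost = Math.min(bestCost, cost(config));
            return;
        }
        for (int cell = nextCell; cell < totalCells; cell++) {
            config[depth] = cell;
            search(config, depth + 1, cell + 1, totalCells);
        }
    }

    public static void main(String[] args) {
        search(new int[5], 0, 0, 25); // k = 5 objects on a 5x5 grid
        System.out.println(bestCost);
    }
}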

Due to the computational intractability of problems like these, and of other NP-hard problems, many optimization heuristics have been developed in an attempt to yield a good, albeit potentially suboptimal, value. In our case, we don't necessarily need to find a strictly optimal value; a near-optimal value would satisfy our goal. One widely used technique is simulated annealing, which introduces a degree of stochasticity, potentially shifting from a better solution to a worse one, in an attempt to escape local minima and converge to a value closer to the global optimum.

Simulated annealing is based on the metallurgical practice of heating a material to a high temperature and then letting it cool. At high temperatures, atoms may shift unpredictably, often eliminating impurities as the material cools into a pure crystal. The simulated annealing algorithm replicates this process, with the energy state of the material corresponding to the current solution.

In this algorithm, we define an initial temperature, often set to 1, and a minimum temperature, on the order of 10^-4. The current temperature is repeatedly multiplied by some fraction alpha, decreasing until it reaches the minimum temperature. For each distinct temperature value, we run the core optimization routine a fixed number of times. The routine consists of finding a neighboring solution and accepting it with probability e^((f(c) - f(n))/T), where c is the current solution, n is the neighboring solution, and T is the current temperature. A neighboring solution is found by applying a slight perturbation to the current solution. This randomness helps escape the common pitfall of optimization heuristics: getting trapped in local minima. By potentially accepting a solution worse than the current one, with a probability that shrinks as the increase in cost grows and as the temperature falls, the algorithm is more likely to converge near the global optimum. Designing a neighbor function is quite tricky and must be done on a case-by-case basis, but below are some ideas for finding neighbors in locational optimization problems, followed by a sketch of a single annealing step.

  • Move all points 0 or 1 units in a random direction
  • Shift input elements randomly
  • Swap random elements in input sequence
  • Permute input sequence
  • Partition input sequence into a random number of segments and permute segments
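
To make the routine concrete, here is a minimal sketch of a single annealing step, pairing the acceptance rule above with the first neighbor idea from the list (move each point 0 or 1 units in a random direction). The names AnnealingStep, step(), perturb(), and cost() are illustrative placeholders, not part of any library, and the quadratic objective merely stands in for a real cost function.

import java.util.Arrays;
import java.util.Random;

// Illustrative sketch of a single annealing step at temperature T
public class AnnealingStep {

    static final Random rand = new Random();

    // Accepts the neighbor with probability e^((f(c) - f(n))/T);
    // the probability exceeds 1 (always accepted) when the neighbor
    // is better, and shrinks as the cost increase grows or T falls
    static double[] step(double[] current, double T) {
        double[] neighbor = perturb(current);
        double ap = Math.exp((cost(current) - cost(neighbor)) / T);
        return (ap > rand.nextDouble()) ? neighbor : current;
    }

    // Neighbor idea: move each coordinate by -1, 0, or +1 unit
    static double[] perturb(double[] current) {
        double[] next = current.clone();
        for (int i = 0; i < next.length; i++)
            next[i] += rand.nextInt(3) - 1;
        return next;
    }

    // Placeholder objective standing in for a real cost function
    static double cost(double[] x) {
        double s = 0;
        for (double v : x) s += v * v;
        return s;
    }

    public static void main(String[] args) {
        double[] x = {3, -4};
        for (int i = 0; i < 100; i++)
            x = step(x, 0.5);
        System.out.println(Arrays.toString(x)); // tends to drift toward the origin
    }
}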

One caveat is that we need to provide an initial solution so the algorithm knows where to start. This can be done in two ways: (1) using prior knowledge about the problem to choose a good starting point, or (2) generating a random solution. Although a random starting solution is generally worse and can occasionally inhibit the success of the algorithm, it is the only option for problems where we know nothing about the landscape; a sketch of this option follows.
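
As a concrete illustration of option (2), here is a minimal sketch that picks k distinct cells out of an M*N grid uniformly at random with a partial Fisher-Yates shuffle, the same idea the framework below intends for its genRandSol() stub. The class and method names here are illustrative placeholders.

import java.util.Arrays;
import java.util.Random;

// Illustrative sketch: pick k distinct cells out of M*N uniformly
// at random via a partial Fisher-Yates shuffle
public class RandomStart {

    static int[] randomConfiguration(int m, int n, int k, Random rand) {
        int[] cells = new int[m * n];
        for (int i = 0; i < cells.length; i++)
            cells[i] = i;

        // Swap a uniformly chosen remaining cell into slot i
        for (int i = 0; i < k; i++) {
            int j = i + rand.nextInt(cells.length - i);
            int tmp = cells[i]; cells[i] = cells[j]; cells[j] = tmp;
        }
        return Arrays.copyOf(cells, k);
    }

    public static void main(String[] args) {
        int[] config = randomConfiguration(5, 5, 5, new Random());
        System.out.println(Arrays.toString(config)); // e.g. [17, 3, 22, 9, 11]
    }
}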

There are many other optimization techniques, but simulated annealing is a useful stochastic heuristic for large, discrete search spaces in which optimality is prioritized over running time. Below, I've included a basic framework for locational simulated annealing (perhaps the kind of problem to which simulated annealing is most applicable). Of course, the cost function, candidate generation function, and neighbor function must be defined for the specific problem at hand, but the core optimization routine is already implemented.

// Java program to implement Simulated Annealing
import java.util.*;

public class SimulatedAnnealing {

    // Initial temperature
    public static double T = 1;

    // Simulated Annealing parameters

    // Temperature at which iteration terminates
    static final double Tmin = .0001;

    // Temperature decay factor (cooling rate)
    static final double alpha = 0.9;

    // Number of iterations of annealing
    // before decreasing temperature
    static final int numIterations = 100;

    // Locational parameters

    // Target plane is discretized as an M*N grid
    static final int M = 5, N = 5;

    // Number of objects desired
    static final int k = 5;


    public static void main(String[] args) {

        // Problem: place k objects in an MxN target
        // plane yielding minimal cost according to
        // defined objective function

        // Set of all possible candidate locations
        String[][] sourceArray = new String[M][N];

        // Global minimum
        Solution min = new Solution(Double.MAX_VALUE, null);

        // Generates random initial candidate solution
        // before annealing process
        Solution currentSol = genRandSol();

        // Continues annealing until reaching minimum
        // temperature
        while (T > Tmin) {
            for (int i=0;i<numIterations;i++){

                // Reassigns global minimum accordingly
                if (currentSol.CVRMSE < min.CVRMSE){
                    min = currentSol;
                }

                Solution newSol = neighbor(currentSol);
                // Probability of accepting the (possibly worse) neighbor
                double ap = Math.exp(
                     (currentSol.CVRMSE - newSol.CVRMSE)/T);
                if (ap > Math.random())
                    currentSol = newSol;
            }

            T *= alpha; // Decreases T, cooling phase
        }

        // Prints minimum value found by the optimization
        System.out.println(min.CVRMSE+"\n\n");

        for(String[] row:sourceArray) Arrays.fill(row, "X");

        // Marks each chosen object location in the grid
        for (int object:min.config) {
            int[] coord = indexToPoints(object);
            sourceArray[coord[0]][coord[1]] = "-";
        }

        // Displays optimal location
        for (String[] row:sourceArray)
            System.out.println(Arrays.toString(row));

    }

    // Given the current configuration, returns a
    // "neighboring" (i.e. very similar) configuration:
    // an array of k integers, each in the range [0, M*N)
    /* Different neighbor selection strategies:
        * Move all points 0 or 1 units in a random direction
        * Shift input elements randomly
        * Swap random elements in input sequence
        * Permute input sequence
        * Partition input sequence into a random number
          of segments and permute segments   */
    public static Solution neighbor(Solution currentSol){

        // Slight perturbation to the current solution
        // to avoid getting stuck in local minima

        // Returning for the sake of compilation
        return currentSol;

    }

    // Generates a random solution via a modified
    // Fisher-Yates shuffle of the first k elements:
    // pseudorandomly selects k integers from the
    // interval [0, M*N - 1]
    public static Solution genRandSol(){

        // Instantiating a fixed configuration for the
        // sake of compilation
        int[] a = {1, 2, 3, 4, 5};

        // Returning for the sake of compilation
        return new Solution(cost(a), a);
    }


    // A full cost function here would run in
    // O(M*N*k), asymptotically tight
    public static double cost(int[] inputConfiguration){

        // Given a specific configuration, returns its
        // assigned cost
        return -1; // Returning for the sake of compilation
    }

    // Mapping from [0, M*N) --> [0, M) x [0, N)
    public static int[] indexToPoints(int index){
        int[] points = {index%M, index/M};
        return points;
    }

    // Solution class, bundling a configuration with its
    // objective value
    static class Solution {

        // Objective value of this solution instance,
        // measured as the coefficient of variation of
        // the root-mean-squared error (CV(RMSE))
        public double CVRMSE;

        public int[] config; // Configuration array
        public Solution(double CVRMSE, int[] configuration) {
            this.CVRMSE = CVRMSE;
            config = configuration;
        }
    }
}

Output:

-1.0


[X, -, X, X, X]
[-, X, X, X, X]
[-, X, X, X, X]
[-, X, X, X, X]
[-, X, X, X, X]

This article is contributed by Joel Abraham.
