Problem: Given a cost function f : R^n -> R, find an n-tuple that minimizes the value of f. Note that minimizing a function is algorithmically equivalent to maximizing it, since maximizing f is the same as minimizing -f.
Many of you with a background in calculus/analysis are likely familiar with simple optimization of single-variable functions. For instance, the function f(x) = x^2 + 2x can be optimized by setting the first derivative equal to zero, obtaining the solution x = -1 and the minimum value f(-1) = -1. This technique suffices for simple functions with few variables. However, researchers are often interested in optimizing functions of many variables, in which case a solution can typically only be obtained computationally.
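As a sanity check, the closed-form answer above can be verified numerically. This is a minimal sketch; the coarse grid search is purely illustrative and not how one would optimize in practice:

```python
# Verify that f(x) = x^2 + 2x attains its minimum at x = -1
def f(x):
    return x * x + 2 * x

# Coarse grid search over [-10, 10] in steps of 0.001
best_x = min((x / 1000 for x in range(-10000, 10001)), key=f)

print(best_x, f(best_x))  # -1.0 -1.0
```

This agrees with the calculus result: f'(x) = 2x + 2 = 0 gives x = -1.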
One excellent example of a difficult optimization task is the chip floor planning problem. Imagine you're working at Intel and you're tasked with designing the layout for an integrated circuit. You have a set of modules of different shapes/sizes and a fixed area on which the modules can be placed. There are a number of objectives you want to achieve: maximizing the ease of routing wires between components, minimizing the net area, minimizing the chip cost, and so on. With these in mind, you create a cost function, taking all, say, 1000 configuration variables and returning a single real value representing the 'cost' of the input configuration. We call this the objective function, since the goal is to minimize its value.
A naive algorithm would be a complete search of the space -- examining every possible configuration until we find the minimum. This may suffice for functions of few variables, but for a problem like ours such a brute-force algorithm would run in O(n!) time.
Due to the computational intractability of problems like these, and other NP-hard problems, many optimization heuristics have been developed in an attempt to yield a good, albeit potentially suboptimal, value. In our case, we don't necessarily need to find a strictly optimal value -- finding a near-optimal value would satisfy our goal. One widely used technique is simulated annealing, by which we introduce a degree of stochasticity, potentially shifting from a better solution to a worse one, in an attempt to escape local minima and converge to a value closer to the global optimum.
Simulated annealing is based on the metallurgical practice of heating a material to a high temperature and then letting it cool slowly. At high temperatures, atoms shift unpredictably, and impurities are often eliminated as the material cools into a pure crystal. The simulated annealing optimization algorithm replicates this process, with the energy state of the material corresponding to the current solution.
In this algorithm, we define an initial temperature, often set to 1, and a minimum temperature, on the order of 10^-4. At each step, the current temperature is multiplied by some fraction alpha, decreasing until it reaches the minimum temperature. For each distinct temperature value, we run the core optimization routine a fixed number of times. The optimization routine consists of finding a neighboring solution and accepting it with probability e^((f(c) - f(n))/T), where c is the current solution, n is the neighboring solution, and T is the current temperature; a better neighbor (f(n) < f(c)) makes this quantity exceed 1 and is therefore always accepted. A neighboring solution is found by applying a slight perturbation to the current solution. This randomness helps escape the common pitfall of optimization heuristics: getting trapped in local minima. By occasionally accepting a solution worse than the current one, with a probability that decays exponentially in the increase in cost, the algorithm is more likely to converge near the global optimum. Designing a neighbor function is tricky and must be done on a case-by-case basis, but below are some ideas for finding neighbors in locational optimization problems.
- Move all points 0 or 1 units in a random direction
- Shift input elements randomly
- Swap random elements in input sequence
- Permute input sequence
- Partition input sequence into a random number of segments and permute segments
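A couple of the strategies above can be sketched as follows. This is illustrative Python under the assumption (matching the framework below) that a solution is a list of grid-cell indices:

```python
import random

def swap_neighbor(config):
    """Swap two random elements of the input sequence."""
    neighbor = config[:]  # copy so the current solution is not mutated
    i, j = random.sample(range(len(neighbor)), 2)
    neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
    return neighbor

def shift_neighbor(config, n_cells):
    """Move one random point 0 or 1 units, clamped to the grid."""
    neighbor = config[:]
    i = random.randrange(len(neighbor))
    step = random.choice([-1, 0, 1])
    neighbor[i] = max(0, min(n_cells - 1, neighbor[i] + step))
    return neighbor
```

Either function (or a combination) could serve as the `neighbor` routine in the code below; which perturbation works best is problem-dependent.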
One caveat is that we need to provide an initial solution so the algorithm knows where to start. This can be done in two ways: (1) using prior knowledge about the problem to choose a good starting point, or (2) generating a random solution. Although a random starting point is typically worse and can occasionally inhibit the success of the algorithm, it is the only option for problems where we know nothing about the landscape.
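For the grid-placement framework below, a random initial solution can be generated by sampling k distinct cell indices from the M*N grid. This is a hypothetical sketch of what `genRandSol` could do; in a real run the cost function would then assign the solution its CVRMSE:

```python
import random

M, N, k = 5, 5, 5  # grid dimensions and object count, as in the code below

def gen_rand_sol():
    """Pseudorandomly select k distinct cell indices from [0, M*N)."""
    return random.sample(range(M * N), k)

print(gen_rand_sol())  # five distinct indices in [0, 25); varies per run
```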
There are many other optimization techniques, but simulated annealing is a useful stochastic heuristic for large, discrete search spaces in which optimality is prioritized over running time. Below, I've included a basic framework for location-based simulated annealing (perhaps the most applicable flavor of the technique). Of course, the cost function, candidate-generation function, and neighbor function must be defined for the specific problem at hand, but the core optimization routine is already implemented.
C++
#include <bits/stdc++.h>
using namespace std;

// C++ code for the above approach
class Solution {
public:
    float CVRMSE;
    vector<int> config;
    Solution(float CVRMSE, vector<int> configuration) {
        this->CVRMSE = CVRMSE;
        config = configuration;
    }
};

// Function prototype
Solution genRandSol();

// Simulated annealing parameters.
// T must be floating point: with int T, T *= alpha
// would truncate to 0 on the first cooling step.
double T = 1;            // initial temperature
double Tmin = 0.0001;    // temperature at which iteration terminates
double alpha = 0.9;      // cooling rate
int numIterations = 100; // iterations per temperature

// Locational parameters: target plane discretized as an M x N grid
int M = 5;
int N = 5;
vector<vector<char>> sourceArray(M, vector<char>(N, 'X'));
vector<int> temp = {};
Solution mini = Solution((float)INT_MAX, temp);
Solution currentSol = genRandSol();

Solution genRandSol() {
    // Instantiating for the sake of compilation
    vector<int> a = {1, 2, 3, 4, 5};
    return Solution(-1.0, a);
}

Solution neighbor(Solution currentSol) {
    return currentSol;
}

float cost(vector<int> inputConfiguration) {
    return -1.0;
}

// Mapping from [0, M*N) --> [0,M)x[0,N)
vector<int> indexToPoints(int index) {
    vector<int> points = {index % M, index / M};
    return points;
}

// Returns minimum value based on optimization
int main() {
    // Seed the RNG once, not inside the loop:
    // reseeding every iteration would repeat values
    srand((unsigned)time(NULL));
    while (T > Tmin) {
        for (int i = 0; i < numIterations; i++) {
            // Reassigns global minimum accordingly
            if (currentSol.CVRMSE < mini.CVRMSE) {
                mini = currentSol;
            }
            Solution newSol = neighbor(currentSol);
            float ap = exp((currentSol.CVRMSE - newSol.CVRMSE) / T);
            if (ap > (float)rand() / RAND_MAX) {
                currentSol = newSol;
            }
        }
        T *= alpha; // Decreases T, cooling phase
    }
    cout << mini.CVRMSE << "\n\n";
    for (int i = 0; i < M; i++) {
        for (int j = 0; j < N; j++) {
            sourceArray[i][j] = 'X';
        }
    }
    // Marks the chosen cells
    for (int index = 0; index < (int)mini.config.size(); index++) {
        int obj = mini.config[index];
        vector<int> coord = indexToPoints(obj);
        sourceArray[coord[0]][coord[1]] = '-';
    }
    // Displays optimal locations
    for (int i = 0; i < M; i++) {
        string row = "";
        for (int j = 0; j < N; j++) {
            row = row + sourceArray[i][j] + " ";
        }
        cout << row << endl;
    }
}
// The code is contributed by Nidhi goel.
Java
// Java program to implement Simulated Annealing
import java.util.*;
public class SimulatedAnnealing {
// Initial and final temperature
public static double T = 1;
// Simulated Annealing parameters
// Temperature at which iteration terminates
static final double Tmin = .0001;
// Decrease in temperature
static final double alpha = 0.9;
// Number of iterations of annealing
// before decreasing temperature
static final int numIterations = 100;
// Locational parameters
// Target array is discretized as M*N grid
static final int M = 5, N = 5;
// Number of objects desired
static final int k = 5;
public static void main(String[] args) {
// Problem: place k objects in an MxN target
// plane yielding minimal cost according to
// defined objective function
// Set of all possible candidate locations
String[][] sourceArray = new String[M][N];
// Global minimum
Solution min = new Solution(Double.MAX_VALUE, null);
// Generates random initial candidate solution
// before annealing process
Solution currentSol = genRandSol();
// Continues annealing until reaching minimum
// temperature
while (T > Tmin) {
for (int i=0;i<numIterations;i++){
// Reassigns global minimum accordingly
if (currentSol.CVRMSE < min.CVRMSE){
min = currentSol;
}
Solution newSol = neighbor(currentSol);
double ap = Math.exp((currentSol.CVRMSE - newSol.CVRMSE) / T);
if (ap > Math.random())
currentSol = newSol;
}
T *= alpha; // Decreases T, cooling phase
}
//Returns minimum value based on optimization
System.out.println(min.CVRMSE+"\n\n");
for(String[] row:sourceArray) Arrays.fill(row, "X");
// Displays
for (int object:min.config) {
int[] coord = indexToPoints(object);
sourceArray[coord[0]][coord[1]] = "-";
}
// Displays optimal location
for (String[] row:sourceArray)
System.out.println(Arrays.toString(row));
}
// Given current configuration, returns "neighboring"
// configuration (i.e. very similar)
// integer of k points each in range [0, n)
/* Different neighbor selection strategies:
* Move all points 0 or 1 units in a random direction
* Shift input elements randomly
* Swap random elements in input sequence
* Permute input sequence
* Partition input sequence into a random number
of segments and permute segments */
public static Solution neighbor(Solution currentSol){
// Slight perturbation to the current solution
// to avoid getting stuck in local minimas
// Returning for the sake of compilation
return currentSol;
}
// Generates random solution via modified Fisher-Yates
// shuffle for first k elements
// Pseudorandomly selects k integers from the interval
// [0, n-1]
public static Solution genRandSol(){
// Instantiating for the sake of compilation
int[] a = {1, 2, 3, 4, 5};
// Returning for the sake of compilation
return new Solution(-1, a);
}
// Complexity is O(M*N*k), asymptotically tight
public static double cost(int[] inputConfiguration){
// Given specific configuration, return object
// solution with assigned cost
return -1; //Returning for the sake of compilation
}
// Mapping from [0, M*N] --> [0,M]x[0,N]
public static int[] indexToPoints(int index){
int[] points = {index%M, index/M};
return points;
}
// Class solution, bundling configuration with error
static class Solution {
// function value of instance of solution;
// using coefficient of variance root mean
// squared error
public double CVRMSE;
public int[] config; // Configuration array
public Solution(double CVRMSE, int[] configuration) {
this.CVRMSE = CVRMSE;
config = configuration;
}
}
}
Python3
# Python code for the above approach
import random
import math

class Solution:
    def __init__(self, CVRMSE, configuration):
        self.CVRMSE = CVRMSE
        self.config = configuration

# Simulated annealing parameters
T = 1
Tmin = 0.0001
alpha = 0.9
numIterations = 100

# Locational parameters: target plane discretized as an M x N grid
M = 5
N = 5

def genRandSol():
    # Instantiating for the sake of compilation
    a = [1, 2, 3, 4, 5]
    return Solution(-1.0, a)

def neighbor(currentSol):
    return currentSol

def cost(inputConfiguration):
    return -1.0

# Mapping from [0, M*N) --> [0,M)x[0,N)
def indexToPoints(index):
    points = [index % M, index // M]
    return points

sourceArray = [['X' for i in range(N)] for j in range(M)]
# Named minSol to avoid shadowing the built-in min()
minSol = Solution(float('inf'), None)
currentSol = genRandSol()

while T > Tmin:
    for i in range(numIterations):
        # Reassigns global minimum accordingly
        if currentSol.CVRMSE < minSol.CVRMSE:
            minSol = currentSol
        newSol = neighbor(currentSol)
        ap = math.exp((currentSol.CVRMSE - newSol.CVRMSE) / T)
        if ap > random.uniform(0, 1):
            currentSol = newSol
    T *= alpha  # Decreases T, cooling phase

# Returns minimum value based on optimization
print(minSol.CVRMSE, "\n\n")
for i in range(M):
    for j in range(N):
        sourceArray[i][j] = "X"
# Marks the chosen cells
for obj in minSol.config:
    coord = indexToPoints(obj)
    sourceArray[coord[0]][coord[1]] = "-"
# Displays optimal locations
for i in range(M):
    row = ""
    for j in range(N):
        row += sourceArray[i][j] + " "
    print(row)
C#
// C# program to implement Simulated Annealing
using System;
using System.Text;
// Class solution, bundling configuration with error
public class Solution {
// function value of instance of solution;
// using coefficient of variance root mean
// squared error
public double CVRMSE;
public int[] config; // Configuration array
public Solution(double CVRMSE, int[] configuration) {
this.CVRMSE = CVRMSE;
config = configuration;
}
}
public class GFG{
// Initial and final temperature
public static double T = 1;
// Simulated Annealing parameters
// Temperature at which iteration terminates
static double Tmin = .0001;
// Decrease in temperature
static double alpha = 0.9;
// Number of iterations of annealing
// before decreasing temperature
static int numIterations = 100;
// Locational parameters
// Target array is discretized as M*N grid
static int M = 5, N = 5;
// Number of objects desired
//static int k = 5;
// Generates random solution via modified Fisher-Yates
// shuffle for first k elements
// Pseudorandomly selects k integers from the interval
// [0, n-1]
public static Solution genRandSol(){
// Instantiating for the sake of compilation
int[] a = {1, 2, 3, 4, 5};
// Returning for the sake of compilation
return new Solution(-1.0, a);
}
// Given current configuration, returns "neighboring"
// configuration (i.e. very similar)
// integer of k points each in range [0, n)
/* Different neighbor selection strategies:
* Move all points 0 or 1 units in a random direction
* Shift input elements randomly
* Swap random elements in input sequence
* Permute input sequence
* Partition input sequence into a random number
of segments and permute segments */
public static Solution neighbor(Solution currentSol){
// Slight perturbation to the current solution
// to avoid getting stuck in local minimas
// Returning for the sake of compilation
return currentSol;
}
// Complexity is O(M*N*k), asymptotically tight
public static double cost(int[] inputConfiguration){
// Given specific configuration, return object
// solution with assigned cost
return -1.0; //Returning for the sake of compilation
}
// Mapping from [0, M*N] --> [0,M]x[0,N]
public static int[] indexToPoints(int index){
int[] points = {index%M, index/M};
return points;
}
static public void Main (){
// Problem: place k objects in an MxN target
// plane yielding minimal cost according to
// defined objective function
// Set of all possible candidate locations
String[,] sourceArray = new String[M,N];
// Global minimum
Solution min = new Solution(Double.MaxValue, null);
// Generates random initial candidate solution
// before annealing process
Solution currentSol = genRandSol();
// Continues annealing until reaching minimum
// temperature
// Construct the RNG once, outside the loop: creating a new
// time-seeded Random each iteration would repeat values
Random rnd = new Random();
while (T > Tmin) {
    for (int i = 0; i < numIterations; i++) {
        // Reassigns global minimum accordingly
        if (currentSol.CVRMSE < min.CVRMSE) {
            min = currentSol;
        }
        Solution newSol = neighbor(currentSol);
        double ap = Math.Exp((currentSol.CVRMSE - newSol.CVRMSE) / T);
        // NextDouble() yields a uniform value in [0, 1);
        // Next(0, 1) would always return 0
        if (ap > rnd.NextDouble()) {
            currentSol = newSol;
        }
    }
    T *= alpha; // Decreases T, cooling phase
}
//Returns minimum value based on optimization
Console.Write(min.CVRMSE+"\n\n");
for(int i=0;i<M;i++){
for(int j=0;j<N;j++){
sourceArray[i,j]="X";
}
}
// Displays
for (int i=0;i<min.config.Length;i++) {
int obj = min.config[i];
int[] coord = indexToPoints(obj);
sourceArray[coord[0],coord[1]] = "-";
}
// Displays optimal location
for (int i=0;i<M;i++){
StringBuilder row = new StringBuilder("");
for(int j=0;j<N;j++){
row.Append(sourceArray[i,j]+" ");
}
Console.Write(row.ToString()+"\n");
}
}
}
//This code is contributed by shruti456rawal
JavaScript
//Javascript code for the above approach
class Solution {
constructor(CVRMSE, configuration) {
this.CVRMSE = CVRMSE;
this.config = configuration;
}
}
let T = 1;
const Tmin = 0.0001;
const alpha = 0.9;
const numIterations = 100;
function genRandSol() {
// Instantiating for the sake of compilation
const a = [1, 2, 3, 4, 5];
return new Solution(-1.0, a);
}
function neighbor(currentSol) {
return currentSol;
}
function cost(inputConfiguration) {
return -1.0;
}
// Mapping from [0, M*N] --> [0,M]x[0,N]
function indexToPoints(index) {
const points = [index % M, Math.floor(index / M)];
return points;
}
const M = 5;
const N = 5;
const sourceArray = Array.from(Array(M), () => new Array(N).fill('X'));
let min = new Solution(Number.POSITIVE_INFINITY, null);
let currentSol = genRandSol();
while (T > Tmin) {
for (let i = 0; i < numIterations; i++) {
// Reassigns global minimum accordingly
if (currentSol.CVRMSE < min.CVRMSE) {
min = currentSol;
}
const newSol = neighbor(currentSol);
const ap = Math.exp((currentSol.CVRMSE - newSol.CVRMSE) / T);
if (ap > Math.random()) {
currentSol = newSol;
}
}
T *= alpha; // Decreases T, cooling phase
}
//Returns minimum value based on optimization
console.log(min.CVRMSE, "\n\n");
for (let i = 0; i < M; i++) {
for (let j = 0; j < N; j++) {
sourceArray[i][j] = "X";
}
}
// Displays
for (const obj of min.config) {
const coord = indexToPoints(obj);
sourceArray[coord[0]][coord[1]] = "-";
}
// Displays optimal location
for (let i = 0; i < M; i++) {
let row = "";
for (let j = 0; j < N; j++) {
row += sourceArray[i][j] + " ";
}
console.log(row);
}
Output
X - X X X
- X X X X
- X X X X
- X X X X
- X X X X
Time Complexity: O(C * numIterations), where C is the number of cooling steps -- about log_alpha(Tmin / T), i.e. roughly 88 steps for T = 1, Tmin = 10^-4, alpha = 0.9 -- and numIterations is the number of inner-loop iterations per temperature.
Auxiliary Space: O(M * N + k), where M and N are the dimensions of sourceArray and k is the length of the configuration array.