@sonaalpradeep ・ Dec 18, 2021 ・ 7 min read ・ 310 views ・ Originally posted on faun.pub

A beginner's guide to genetic algorithms, plus a little bit more…

*“…It will also produce songs from your favourite artists, the speeches you made when you were in school and even this very article.”*

When I first read about genetic algorithms, I was as stunned as the monkey in the cover image. Genetic algorithms are among the most intuitive optimization techniques because they are inspired by Charles Darwin’s theory of evolution. Just as birds inspired the aeroplane and the brain inspired the ever-popular neural network, I am always amazed at how nature and the processes happening around us serve as groundwork for optimization techniques. Thanks to biomimicry, many of the most advanced and the simplest technologies present today have their humble beginnings in nature.

**The Infinite Monkey Theorem**

This theorem talks about a monkey with a typewriter. If this monkey typed randomly for an infinite amount of time, it would almost surely, eventually, type out Shakespeare’s entire works. It would also produce songs from your favourite artists, the speeches you made when you were in school, and even this very article.
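To get a sense of the odds involved, here is a quick back-of-the-envelope calculation (the 27-key typewriter is an assumption for illustration, not part of the theorem):

```python
# Suppose the typewriter has 27 keys (26 letters plus a space) and the
# monkey hits keys uniformly at random. The chance that any particular
# run of 6 keystrokes spells out a given 6-letter word is (1/27)**6.
p = (1 / 27) ** 6
print(f"{p:.2e}")  # roughly 2.6e-09 per attempt
```

Vanishingly small per attempt, yet over infinitely many attempts every finite string appears with probability 1.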

**Solution Spaces — and why bogosort works**

The Bogosort algorithm is a (highly inefficient) sorting algorithm based on trial and error. The step-by-step procedure is as follows:

1. Are the array elements sorted? If yes, return the array.
2. Randomly shuffle the array. Return to step 1.

Here is the Python program for this algorithm:

```python
import random
import time


def generate_initial(num_of_elements):
    '''Generate a shuffled array of a given length'''
    array = list(range(num_of_elements))
    random.shuffle(array)
    return array


def check_sorted(array):
    '''Check if the array is sorted (in ascending order)'''
    for ind in range(len(array) - 1):
        if array[ind] > array[ind + 1]:
            return False
    return True


def bogo_sort(array):
    '''
    Bogosort the array: shuffle it randomly until it is sorted,
    and return the sorted array along with the time taken.
    '''
    start_time = time.time()
    while not check_sorted(array):
        random.shuffle(array)
    return array, time.time() - start_time


if __name__ == "__main__":
    initialised_array = generate_initial(10)
    print("Initial Array : ", initialised_array)
    print("Sorted Array : {0}\nTime to sort the array : {1:.4} seconds"
          .format(*bogo_sort(initialised_array)))
```

The larger the size of the array, the longer this algorithm takes to run, since the number of possible permutations it is sampling from grows factorially. But we can guarantee (with probability 1) that the program will terminate, because we have bound the algorithm to a solution space where we are certain our optimal solution exists. In the case of the bogosort program, our solution space is the set of permutations of the initialised array, and we know that the ordered array is one of those permutations.

The program takes too long because randomly shuffling the array until we get to the correct solution is based on chance. There is no way for this algorithm to figure out how good the solution it guessed was. To make an algorithm that is purely based on randomization reach a solution more efficiently, we need to give it the ability to recognize and grade solutions that are better than the others.
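To see just how quickly chance turns against us: each shuffle of an array of *n* distinct elements lands on the sorted order with probability 1/*n*!, so the expected number of shuffles is *n*!. A quick illustration:

```python
import math

# Expected number of shuffles bogosort needs before it happens to
# produce the sorted order of n distinct elements: n!
for n in (5, 8, 10, 12):
    print(f"n = {n:2d}: ~{math.factorial(n):,} expected shuffles")
```

Already at twelve elements we expect nearly half a billion shuffles.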

**“Survival of the Fittest” — Natural Selection and Evolution**

As much as Epimetheus wouldn’t want us to believe, the traits of all animals were a product of evolution. Animals that needed to fly further adapted their wings to do so. Animals that needed to reach higher leaves grew longer necks.

More importantly, animals with traits that made them more likely to survive in their environment and produce offspring were preferred in nature over those that couldn’t. Favourable attributes produced in a generation of parents are propagated to their descendants, ensuring the survivability of the offspring.

**The Genetic Algorithm**

The genetic algorithm falls under the family of evolutionary computation. In these types of algorithms, an initial batch of solutions is iterated on until we reach an optimized solution. We get there by selecting the solutions from the batch that are preferred over the others and giving them a higher probability of propagating to the next generation. To generate more variation between generations of solutions, operations such as cross-breeding and mutation are used.

Let us build a simple genetic algorithm that can help us optimize a problem. A simple genetic algorithm involves the following steps:

1. Encode the problem
2. Initialize the parent generation
3. Pick the members of this generation that are more likely to propagate to the next
4. Crossbreed the parents to generate children
5. Mutate the children to improve variation
6. Select the parents and children that survive
7. Return to step 3 until the optimal solution is reached

Let's try to learn about this algorithm with an example.

*Optimize the function f(x) = x³ + 9, where x is within [0, 63] and x belongs to the set of integers.*

**Encoding the Problem**

Our algorithm requires a batch of solutions and a metric to determine how good each solution is. The function we need to optimize, *f(x)*, will act as this metric (the fitness function). The initial batch of solutions is chosen randomly.

The most common way of working with integer-valued populations is to convert the values into binary bit strings. The reason might not be clear right now, but it will be when we talk about crossover and mutation.
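As a quick illustration (the helper names `encode` and `decode` are mine, not part of the article’s code), a fixed-width 6-bit string is enough to cover our domain [0, 63]:

```python
def encode(x, bit_length=6):
    # Fixed-width binary: 6 bits cover every integer in [0, 63]
    return format(x, f"0{bit_length}b")

def decode(bits):
    return int(bits, 2)

print(encode(52))        # '110100'
print(decode('110100'))  # 52
```

Fixed width matters: it lets crossover and mutation treat every member of the population as a string of exactly the same length.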

Let's import the necessary modules and define functions that return the value of the objective function and its inverse.

```python
import numpy as np
import math
import random


def f(x):
    return pow(x, 3) + 9


def f_inv(x):
    return round(pow(x - 9, 1/3))
```

The first generation is initialised by picking random numbers within the solution space. To demonstrate the algorithm and give the starting population a fair chance, the values are selected from a substantially smaller subset. If we hadn’t done this, we might end up with an alien (say 52) in a sheepish population ([1, 5, 12, 16…]) and the algorithm would converge very quickly. After selecting the numbers, the values are converted to bit strings.

```python
def create_parents(population_size, max_initial_number, bit_length):
    num_list = [np.random.randint(0, max_initial_number) for _ in range(population_size)]
    return [format(num, "#08b").lstrip('0b').zfill(bit_length) for num in num_list]
```

**Selection**

The population that propagates further is chosen using Fitness Proportionate Selection, also known as Roulette Wheel Selection. In this selection process, the probability of a solution being chosen is directly proportional to its fitness value. Specifically, the probability *p*ᵢ is given by

*p*ᵢ = *f*ᵢ / Σⱼ *f*ⱼ

where *f*ᵢ is the fitness value of the *i*ᵗʰ member.

The Roulette Wheel Selection is simple. Imagine each sector of a roulette wheel is linked to a member of the population. To give an advantageous member a better chance of survival, each sector of the wheel is made larger or smaller in proportion to *p*ᵢ. Programmatically, we don’t have to simulate a roulette wheel: NumPy’s `np.random.choice` accepts a `p` argument that sets the probability of selecting each value from a given array.

```python
def find_chance(parents):
    num_list = [int(b, 2) for b in parents]
    list_sum = sum(f(x) for x in num_list)
    return [f(num) / list_sum for num in num_list]


def select_successors(parents, population):
    selection_chance = find_chance(parents)
    return [np.random.choice(parents, p=selection_chance) for _ in range(population)]
```

The eagle-eyed amongst you will have noticed a limitation of roulette wheel selection already: it cannot handle negative fitness values. We will gloss over this for now by limiting the domain of *x* so that *f(x)* ≥ 0. One way to deal with the problem is to use a different selection algorithm known as tournament selection, where *K* random candidates are drawn from the population and the fittest of them wins a place in the next generation; the tournament is repeated until enough members have been selected.
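As a minimal sketch of that alternative (not part of the original article’s code, and the value of `k` is illustrative), tournament selection can be written in a few lines:

```python
import random

def tournament_select(population, fitness, k=3):
    # Draw k random contenders; the fittest one wins a spot.
    # Works even when fitness values are negative, unlike roulette selection.
    contenders = random.sample(population, k)
    return max(contenders, key=fitness)

pop = [3, 17, 42, 8, 25, 60]
winner = tournament_select(pop, fitness=lambda x: pow(x, 3) + 9)
print(winner)  # the fittest of the three sampled candidates
```

Because only relative order matters inside a tournament, the fitness values never need to be non-negative or normalised into probabilities.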

**Crossover**

Crossover is the step in which the new child is born. Information from the parents is used to create the progeny which can (hopefully) survive through many generations. There are many ways to perform a crossover while working with bit strings, two of them being the **single-point crossover** and the **uniform crossover**.

- **Single-point crossover:** A random index of the bit string is chosen as the crossover point. All bits from the crossover point to the end get swapped between the two parents.
- **Uniform crossover:** Every bit position is considered for swapping; each bit is exchanged between the parents with some fixed probability.

```python
def crossover(parent, cross_over_type, cross_breed_shuffle, bit_cross_over_threshold, bit_length):
    partition_size = len(parent) // 2
    # Split the selected parents into two halves and pair them up by index
    child_set_1, child_set_2 = parent[:partition_size], parent[partition_size:]
    if cross_over_type == "uniform":
        for ind in range(partition_size):
            for b in range(bit_length):
                rand_val = np.random.rand()
                if rand_val < bit_cross_over_threshold:
                    bit_1 = int(child_set_1[ind], 2) & pow(2, b)  # select the bit at index b
                    bit_2 = int(child_set_2[ind], 2) & pow(2, b)
                    if bit_1 != bit_2:
                        # The bits differ, so flipping both is equivalent to swapping them
                        child_set_1[ind] = format(int(child_set_1[ind], 2) ^ pow(2, b), "#08b").lstrip('0b').zfill(bit_length)
                        child_set_2[ind] = format(int(child_set_2[ind], 2) ^ pow(2, b), "#08b").lstrip('0b').zfill(bit_length)
        return np.append(child_set_1, child_set_2)
    elif cross_over_type == "single-point":
        for ind in range(partition_size):
            slice_index = np.random.randint(1, bit_length - 2)
            # Swap the tails of the two bit strings after the crossover point
            r_child_1 = child_set_1[ind][slice_index:]
            r_child_2 = child_set_2[ind][slice_index:]
            child_set_1[ind] = child_set_1[ind][:slice_index] + r_child_2
            child_set_2[ind] = child_set_2[ind][:slice_index] + r_child_1
        return np.append(child_set_1, child_set_2)
```
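To make the single-point case concrete, here is a tiny worked example with hypothetical parent values:

```python
# Single-point crossover of 52 ('110100') and 3 ('000011') at index 3:
# the 3-bit tails are swapped between the two parents.
p1, p2 = '110100', '000011'
cut = 3
child_1 = p1[:cut] + p2[cut:]
child_2 = p2[:cut] + p1[cut:]
print(child_1, child_2)  # 110011 000100, i.e. 51 and 4
```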

**Mutation**

Each child from the new generation is modified (mutated) to ensure more diversity. This mutation can be done to all the bits or can be applied to a few bits at random.

```python
def mutation(children, mutation_type, bit_mutation_threshold, bit_length):
    population_size = len(children)
    if mutation_type == "scan-all":
        for ind in range(population_size):
            for b in range(bit_length):
                rand_val = np.random.rand()
                if rand_val < bit_mutation_threshold:
                    children[ind] = format(int(children[ind], 2) ^ pow(2, b), "#08b").lstrip('0b').zfill(bit_length)
        return children
    elif mutation_type == "random":
        for ind in range(population_size):
            # np.random.randint excludes the upper bound, so pass bit_length
            # to give every bit (including the most significant) a chance to flip
            b_ind = np.random.randint(0, bit_length)
            children[ind] = format(int(children[ind], 2) ^ pow(2, b_ind), "#08b").lstrip('0b').zfill(bit_length)
        return children
```
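The XOR trick used above deserves a closer look: XOR-ing with a power of two flips exactly one bit and leaves the rest untouched. A small illustrative example:

```python
x = int('000100', 2)            # 4
flipped = x ^ pow(2, 5)         # flip the bit at index 5 (the most significant)
print(format(flipped, '06b'))   # '100100', i.e. 36
```

Note how a single-bit mutation moved the value from 4 to 36; this is why mutated children can differ drastically from their parents.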

**Evolution**

The last step, which determines the parents of the next generation, is evolution. It might seem intuitive to just let all the children survive, considering they carry information from all the parents plus a little bit more. But not all children are as good as their parents. This can happen during crossover, where a child receives the worst traits of either or both parents, or during mutation, where a single bit flip can drastically change the value of a child. To counter this, the worst *x%* of the children can be swapped with the best *x%* of the parents. This ensures that the survivors are truly the fittest.

```python
def evolution(parents, children, population_size, survival_death_rate, bit_length):
    parent_partition = int(math.floor(population_size * survival_death_rate))
    children_partition = population_size - parent_partition
    # Rank both generations by fitness, best first
    sorted_parents = sorted((f(int(x, 2)) for x in parents), reverse=True)
    sorted_children = sorted((f(int(x, 2)) for x in children), reverse=True)
    # Map the fitness values back to bit strings through the inverse function
    selected_parents = [format(f_inv(num), "#08b").lstrip('0b').zfill(bit_length) for num in sorted_parents]
    selected_children = [format(f_inv(num), "#08b").lstrip('0b').zfill(bit_length) for num in sorted_children]
    progressors = np.append(selected_parents[:parent_partition], selected_children[:children_partition])
    return progressors
```

If the fitness function you are presented with is strictly increasing or strictly decreasing, you could take advantage of it and directly sort the values by using their bit string.
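To tie the steps together, here is a minimal, self-contained sketch of the full loop. It operates on plain integers rather than the bit strings used above, and every hyperparameter value (population size, generation count, survival rate) is an illustrative assumption, not a prescription:

```python
import numpy as np

def f(x):
    return pow(x, 3) + 9

def run_ga(pop_size=8, bit_length=6, generations=40, seed=0):
    rng = np.random.default_rng(seed)
    # Start from a deliberately weak population, as create_parents does
    pop = [int(x) for x in rng.integers(0, 16, size=pop_size)]
    for _ in range(generations):
        # Selection: roulette wheel over f(x)
        fits = np.array([f(x) for x in pop], dtype=float)
        chosen = [int(x) for x in rng.choice(pop, size=pop_size, p=fits / fits.sum())]
        # Crossover: single-point on the 6-bit integer representations
        children = []
        for a, b in zip(chosen[::2], chosen[1::2]):
            cut = int(rng.integers(1, bit_length))
            mask = (1 << cut) - 1          # the low `cut` bits
            children += [(a & ~mask) | (b & mask), (b & ~mask) | (a & mask)]
        # Mutation: flip one random bit in every child
        children = [c ^ (1 << int(rng.integers(0, bit_length))) for c in children]
        # Evolution: keep the best quarter of parents, fill up with the best children
        keep = pop_size // 4
        pop = (sorted(pop, key=f, reverse=True)[:keep] +
               sorted(children, key=f, reverse=True)[:pop_size - keep])
    return max(pop, key=f)

best = run_ga()
print(best, f(best))  # the best candidate found and its fitness
```

Working directly on integers keeps the sketch short; swapping in the bit-string helpers from the sections above gives the version explored in the Colab notebook.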

**It's your turn to find out!**

How many generations does it take to reach the optimal solution?

What happens after reaching the optimal solution? Does the first man who reached the peak help the other climbers to the top? Or do the crabs in a bucket pull down the ones that are trying to get out?

What happens if mutations happen too rarely? Too often? And while we are at it, is mutation even necessary? What about crossover? Would just one of these genetic operations suffice?

It's your time to find out! Have fun with the tweakable hyperparameters in the Colab notebook.

**Supplementary**

- If you still find Shakespeare’s Monkey to be unreal, you might want to look at the Library of Babel, and this page from the website. Keep your eyes peeled👀.
- There is a great guide online that goes into a lot more detail on the other types of selection, crossover and mutation techniques. I would highly recommend going through it if you are curious to learn more.
- The python program to generate the plot to calculate the time taken by the Bogosort algorithm against the number of elements in an array can be found here.
