1. Introduction

In this tutorial, we’ll explore the Slime Mould Algorithm (SMA) and its implementation in Python.

2. The Slime Mould Algorithm

The Slime Mould Algorithm (SMA) mimics the problem-solving behavior of slime mould, proving effective in tasks such as pathfinding and network optimization. Using agents that represent the mould’s expansion toward food sources, the algorithm combines competition and cooperation to form near-optimal networks. Repeating this process over many iterations yields solutions to a wide range of problems.

SMA has been applied to diverse optimization problems, including network design, facility location, and scheduling, and it has proven both effective and efficient across these domains.

3. Phase-Based Learning

Phase-based learning is a crucial mechanism in the Slime Mould Algorithm, enabling it to balance exploration and exploitation effectively and leading to robust, efficient optimization results. The term refers to the algorithm’s cyclic alternation between two distinct phases:

3.1. Approach Food

This phase represents exploration. The algorithm extends tendrils outwards from its current position, mimicking the slime mould’s exploration of its environment. These tendrils represent candidate solutions, and their movement is guided by a combination of fitness values (attractiveness of food sources) and random exploration.

This randomness helps in avoiding premature convergence and allows the algorithm to discover promising regions in the search space.

3.2. Wrap Food

This phase signifies exploitation. Once a promising candidate solution (food source) is identified, the algorithm focuses on refining its quality. The tendrils surrounding the food source thicken and contract inwards, similar to the slime mould’s veins thickening around a discovered food source.

This contraction improves the solution’s accuracy and leads to further exploitation of the promising region.

4. Mathematical Model

We now construct the algorithm, which consists of two primary phases: the approach (search) food phase and the wrap (exploit) food phase. The algorithm cyclically alternates between these two phases.

Before commencing the search, we must initialize the algorithm. Although the slime mould is, in one sense, a single organism, it can act at many locations simultaneously. Consequently, we treat it as a fixed number of distinct entities or locations.

To begin, we initialize N locations to represent the slime mould. Subsequently, for each of these N locations, we calculate the fitness of the slime mould.

4.1. Approach Food and Wrap Food

The slime mould detects food by perceiving its odor in the air. We express this odor-guided approach to food with the following equation:

(1)    \begin{equation*} X(t+1) = \begin{cases}X_{b}(t)+vb\cdot (W\cdot X_A(t)-X_B(t)), r < p \\ vc\cdot X(t), r\geq p \end{cases} \end{equation*}

In this context, X_A(t) and X_B(t) denote two randomly chosen locations of our mould, while X_b(t) represents the best individual discovered so far. The first case explores for food by blending information from the best-discovered location with two randomly selected locations. The second case directs the slime mould to exploit the food source available in its local vicinity.
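
As a minimal sketch of this rule (not a reference implementation), the update of a whole population X with shape N × D might look as follows in NumPy; the weight W and the probability p used here are defined in the next subsection, and vb and vc are treated as plain scalars for readability:

import numpy as np

def approach_food_step(X, X_b, W, p, vb, vc):
    # One sweep of equation (1) over the population X (shape: N x D).
    # p holds one switching probability per agent; W holds one weight per agent.
    rng = np.random.default_rng()
    N = X.shape[0]
    X_new = np.empty_like(X)
    for i in range(N):
        if rng.random() < p[i]:
            # Explore: blend the best-known location with two random agents
            a_idx, b_idx = rng.integers(0, N, size=2)
            X_new[i] = X_b + vb * (W[i] * X[a_idx] - X[b_idx])
        else:
            # Exploit: contract around the agent's current location
            X_new[i] = vc * X[i]
    return X_new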

4.2. Breaking Down the Formula

(2)    \begin{equation*} p = tanh|S(i)-DF| \end{equation*}

Here, S(i) is the fitness of the i-th individual, and DF is the best fitness obtained across all iterations so far.

(3)    \begin{equation*} a = arctanh(-(\frac{t}{max_{t}})+1) \end{equation*}

With a defined above, we now define vb as a random sample from the range [-a, a], and vc as a random variable drawn from the linearly decreasing range [-(1-\frac{t}{max_{t}}),(1-\frac{t}{max_{t}})]. Next, we define the weight W:

(4)    \begin{equation*} W(SmellIndex(i)) = \begin{cases} 1+r \cdot log(\frac{bF-S(i)}{bF-wF}+1), condition \\ 1-r\cdot log(\frac{bF-S(i)}{bF-wF}+1), otherwise \end{cases} \end{equation*}

W is the weight parameter, r is a random value in [0, 1], and bF and wF denote the best and worst fitness values in the current iteration. The condition holds when S(i) ranks in the first half of the population. We define W this way to mimic the oscillation pattern of slime moulds: it lets the mould move towards high-quality food sources quickly while slowing down the approach when the food concentration is lower:

(5)    \begin{equation*} SmellIndex = sort(S) \end{equation*}

The SmellIndex corresponds to the ranking of the slime mould based on its fitness value. Fitness serves as the metric employed to assess a particular solution.
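
A possible NumPy sketch of these quantities, for a population with fitness values S at iteration t of a minimization problem, is shown below; the function name, the small eps guard against division by zero, and the per-agent (rather than per-dimension) sampling of vb and vc are our own simplifications rather than part of the original formulation:

import numpy as np

def sma_parameters(S, DF, t, max_t, eps=1e-12):
    # S: fitness of each agent (lower is better), shape (N,)
    # DF: best fitness obtained over all iterations so far
    # t: current iteration, assumed to satisfy 1 <= t <= max_t
    N = S.shape[0]
    rng = np.random.default_rng()

    # Equation (2): per-agent switching probability
    p = np.tanh(np.abs(S - DF))

    # Equation (3): bound a for vb, plus the linearly decreasing bound for vc
    a = np.arctanh(1.0 - t / max_t)
    vb = rng.uniform(-a, a, size=N)
    b = 1.0 - t / max_t
    vc = rng.uniform(-b, b, size=N)

    # Equations (4) and (5): fitness-ranked weights
    smell_index = np.argsort(S)                  # best (smallest) fitness first
    bF, wF = S[smell_index[0]], S[smell_index[-1]]
    r = rng.uniform(size=N)
    # Same ratio as in equation (4), rearranged so numerator and denominator are non-negative
    log_term = np.log((S - bF) / (wF - bF + eps) + 1.0)
    W = np.empty(N)
    first_half = np.isin(np.arange(N), smell_index[: N // 2])
    W[first_half] = 1.0 + r[first_half] * log_term[first_half]
    W[~first_half] = 1.0 - r[~first_half] * log_term[~first_half]
    return p, vb, vc, W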

4.3. Oscillation

During the execution of the complete algorithm, a level of random exploration is preserved, captured by the first case of the complete update equation below. The hyperparameter z can be tuned to suit the specific problem at hand:

(6)    \begin{equation*} X(t+1) = \begin{cases}rand\cdot (UB-LB) + LB, rand<z\\ X_{b}(t) +vb\cdot (W\cdot X_{A}(t) - X_{B}(t)), r < p \\ vc\cdot X(t), r \geq p \end{cases} \end{equation*}

The movement of food within the venous structure of the slime mould exhibits a natural rhythm, which is encapsulated in the weight W and the random parameters vb and vc.

This produces an oscillation in the thickness of the slime mould’s veins, allowing the flow of food to increase or decrease. This adaptive behavior helps the slime mould avoid local optima, and the parameters in our model are designed to produce a similar effect, enhancing the algorithm’s ability to explore and optimize effectively.
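
The random re-initialization governed by z serves a similar purpose in the algorithm; here is a minimal, self-contained sketch of that first case of equation (6), where the array X_new, the bounds, and the value of z are illustrative placeholders rather than values prescribed by the algorithm:

import numpy as np

rng = np.random.default_rng()

# Illustrative placeholders: 50 agents in 10 dimensions that have already been
# moved by the approach/wrap update, plus the search bounds and a small z
X_new = rng.uniform(-5.0, 5.0, size=(50, 10))
lb, ub, z = -5.0, 5.0, 0.03

# First case of equation (6): with probability z, an agent is re-initialized
# uniformly inside the bounds instead of keeping its updated position
restart = rng.random(50) < z
X_new[restart] = rng.uniform(lb, ub, size=(int(restart.sum()), 10))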

5. Pseudocode

Here is an example pseudocode of the algorithm:

Input: PopulationSize, MaxIters, z
Output: the optimal agent X_b

Initialize the slime mould positions X_i, i = 1, 2, ..., PopulationSize
t ← 0
while t < MaxIters do
    Calculate the fitness of each agent
    Update bestFitness and X_b
    Calculate W
    a ← arctanh(1 - t / MaxIters)
    b ← 1 - t / MaxIters
    AgentIndex ← 0
    while AgentIndex < PopulationSize do
        Update p, vb, vc
        Update the mould positions
        AgentIndex ← AgentIndex + 1
    end while
    t ← t + 1
end while
return X_b

6. Computational Complexity

The runtime of the algorithm can be divided into five primary operations, each associated with its corresponding computational complexity:

  • initialization: O(D), where D is the dimension of the problem
  • fitness evaluation: O(N), where N is the number of slime mould cells (agents)
  • sorting: O(N log N)
  • weight update: O(N × D)
  • location update: O(N × D)

Total time complexity: O(D + T × N × (1 + log N + D)), where T is the maximum number of iterations.
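
As a rough, purely illustrative calculation using the parameter values we’ll choose in the implementation below (D = 10, N = 50, T = 100, and base-2 logarithms), the dominant term evaluates to approximately:

\begin{equation*} T \cdot N \cdot (1 + \log_{2} N + D) \approx 100 \cdot 50 \cdot (1 + 5.6 + 10) \approx 8.3 \times 10^{4} \end{equation*}

In other words, the cost grows linearly with both the population size and the problem dimension.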

7. Implementation

Let’s walk through a step-by-step Python implementation of the Slime Mould Algorithm, covering each stage of the algorithmic process.

7.1. Required Libraries

Before starting the implementation, we first import the required libraries; NumPy must be installed, while random comes with the Python standard library:

import numpy as np
import random

7.2. Define the Objective Function

The objective function is the essential factor directing the algorithm’s search for optimal solutions: it frames the problem mathematically and lets us assess the goodness, or fitness, of a candidate solution.

The objective function accepts a solution candidate (represented as a list or array of values) as input, performs calculations based on the problem’s specific requirements, and returns a single value representing the fitness of that solution. The algorithm uses this fitness value to steer the search process towards better solutions.

The code below uses a simple objective_function() that calculates the sum of squares. This is a common test function, but in real-world applications, we’d replace it with a function tailored to our specific problem:

def objective_function(x):
    return np.sum(x**2)
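
For instance, evaluating a candidate vector such as [1, 2, 3] yields 14, since 1 + 4 + 9 = 14; smaller values indicate better solutions for this test function. The quick check below is purely illustrative and not part of the algorithm itself:

print(objective_function(np.array([1.0, 2.0, 3.0])))  # prints 14.0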

7.3. Initialize Parameters

Next, we need to set up the key values that control the algorithm’s behavior and search space, much like tuning knobs for an optimization task. This ensures the algorithm starts with appropriate settings for the problem at hand.

Let’s choose the following to start, but we should always fine-tune them to suit our specific problem better:

num_agents = 50  # Number of slime mold agents
dimensions = 10  # Number of dimensions in the search space
max_iters = 100   # Maximum number of iterations
lb = -5  #  Lower bound of the search space
ub = 5   #  Upper bound of the search space
w = 0.9  #  Weight parameter, used as the probability of moving towards the best agent
vb = 0   #  Vibration parameter (0 disables the random vibration step)
z  = 0.1 #  Random parameter scaling the exploratory moves

7.4. Initialize Slime Mold Positions

Next, we need to establish the initial positions for the algorithm’s search space exploration. Effectively distributed starting points can improve the algorithm’s efficiency in finding the optimal solution. The array is filled with random values, which helps avoid premature convergence to a suboptimal solution.

A 2D NumPy array named positions is created with shape (num_agents, dimensions). Each row of this array represents the position of a single slime mold agent. Each column corresponds to a dimension in the search space:

positions = np.random.uniform(lb, ub, size=(num_agents, dimensions))
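
As a quick, purely illustrative sanity check, we can inspect the array’s shape and one sample agent:

print(positions.shape)  # (50, 10) -> one row per agent, one column per dimension
print(positions[0])     # the first agent's coordinates, each within [lb, ub]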

7.5. Slime Mold Loop

It’s the heart of the algorithm, where the iterative search for optimal solutions takes place. It mimics the foraging behavior of slime mold to explore and exploit the search space effectively.

The algorithm calculates the fitness value for each agent’s position using the objective_function(). This measures how well each solution candidate solves the problem. Agents are sorted based on their fitness values, from best to worst. This ensures that better solutions are prioritized for further exploration.

The weight parameter w is adjusted, typically decreasing over iterations using an exponential function. This promotes exploration in the early stages and exploitation in the later stages.

The loop repeats for a specified number of iterations max_iters, allowing for continuous improvement of solutions:

for i in range(max_iters):
    # Calculate the fitness value for each agent
    fitness_value = np.array([objective_function(pos) for pos in positions])

    # Sort the agents based on fitness value from best to worst
    sorted_indices = np.argsort(fitness_value)
    positions = positions[sorted_indices]

    # Update the weights
    w = w * np.exp(-i / max_iters)

    # Update the positions using the SMA equation
    for j in range(num_agents):
        if random.random() < w:
            # Move towards the best agent found so far (exploitation)
            best_pos = positions[0]
            positions[j] = positions[j] + np.random.rand() * (best_pos - positions[j])
        else:
            # Move towards a randomly chosen agent (exploration)
            random_index = random.randint(0, num_agents - 1)
            random_pos = positions[random_index]
            positions[j] = positions[j] + z * np.random.rand() * (random_pos - positions[j])

        # Add a small random vibration (has no effect while vb = 0)
        positions[j] = positions[j] + vb * np.random.randn(dimensions)

        # Ensure positions stay within bounds
        positions[j] = np.clip(positions[j], lb, ub)

# Get the best solution: re-evaluate all agents, since positions changed after the last sort
fitness_value = np.array([objective_function(pos) for pos in positions])
best_index = np.argmin(fitness_value)
best_pos = positions[best_index]
best_fitness = fitness_value[best_index]

print("Best final solution:", best_pos)
print("Best final fitness:", best_fitness)

The loop’s effectiveness depends on carefully tuning parameters like w, z, vb, and max_iters.

Let’s now explain the code above with an analogy. A group of slime mold agents works together to find the best spot in a maze-like space. The maze has 10 directions to explore (10 dimensions), and 50 agents search for the best spot. They share information to decide where to move, and they also try random paths, in case a better location is waiting to be discovered.

The code keeps track of how close each agent is to the best spot found so far. The agents keep moving and exploring for 100 rounds. In the end, the code reports which agent found the very best spot and how good that spot is.

In other words, the code is a recipe for solving a puzzle where we must find the best possible answer out of many choices, using the cooperative behavior of slime molds as inspiration.

Visualizing the final positions of all agents in a 2D grid view provides insights into their exploration patterns and convergence behavior. Tracking the best solution and fitness value at each iteration allows analysis of the algorithm’s progress and convergence characteristics.
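
As a minimal sketch (assuming matplotlib is installed and that we append the best fitness of each iteration to a list named history inside the loop above, which the code as written doesn’t do), both views could be produced like this:

import matplotlib.pyplot as plt

# history: best fitness per iteration (recorded manually inside the loop)
# positions: final (num_agents x dimensions) array from the loop above
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Convergence curve: best fitness value at each iteration
ax1.plot(history)
ax1.set_xlabel("Iteration")
ax1.set_ylabel("Best fitness")
ax1.set_title("Convergence")

# Final agent positions projected onto the first two dimensions
ax2.scatter(positions[:, 0], positions[:, 1])
ax2.set_xlabel("Dimension 1")
ax2.set_ylabel("Dimension 2")
ax2.set_title("Final agent positions")

plt.tight_layout()
plt.show()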

[Figure: grid view of the agents’ final positions]

8. Conclusion

In this article, we discussed the inner workings of the Slime Mould Algorithm (SMA), explored its Python implementation, and examined the role of adaptive weights in balancing exploration and exploitation.
