**1. Introduction**

The aim of this series is to explain the idea of genetic algorithms and show their best-known implementations.

In this tutorial, we’ll **describe Jenetics, a very powerful Java library that can be used for solving various optimization problems**.

If you feel that you need to learn more about genetic algorithms, we recommend starting with this article.

**2. How Does It Work?**

According to its official documentation, Jenetics is a library based on an evolutionary algorithm written in Java. Evolutionary algorithms have their roots in biology, as they use mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection.
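Before diving into the library itself, the core loop of any evolutionary algorithm can be sketched in a few lines of plain Java. The sketch below is purely illustrative and uses none of the Jenetics API: it keeps the fitter half of a random bit-string population each generation (selection) and flips one random bit of each copied survivor (mutation):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

public class EvolutionSketch {

    // Fitness to maximize: the number of 1-bits in the chromosome
    static int fitness(boolean[] chromosome) {
        int count = 0;
        for (boolean bit : chromosome) {
            if (bit) {
                count++;
            }
        }
        return count;
    }

    // One run of the loop: selection keeps the fitter half (elitism),
    // reproduction copies each survivor and flips one random bit (mutation).
    static int evolve(int length, int populationSize, int generations, long seed) {
        Random rnd = new Random(seed);
        List<boolean[]> population = new ArrayList<>();
        for (int i = 0; i < populationSize; i++) {
            boolean[] c = new boolean[length];
            for (int j = 0; j < length; j++) {
                c[j] = rnd.nextBoolean();
            }
            population.add(c);
        }
        for (int g = 0; g < generations; g++) {
            population.sort(Comparator.comparingInt(EvolutionSketch::fitness).reversed());
            List<boolean[]> next = new ArrayList<>(population.subList(0, populationSize / 2));
            for (int i = 0; i < populationSize / 2; i++) {
                boolean[] child = next.get(i).clone();
                int bit = rnd.nextInt(length);
                child[bit] = !child[bit];
                next.add(child);
            }
            population = next;
        }
        population.sort(Comparator.comparingInt(EvolutionSketch::fitness).reversed());
        return fitness(population.get(0));
    }

    public static void main(String[] args) {
        System.out.println(evolve(10, 20, 50, 42));
    }
}
```

Because the survivors are carried over unchanged, the best fitness never decreases between generations. Jenetics wraps exactly this kind of loop behind a fluent, stream-based API.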

**Jenetics is implemented using the Java Stream interface, so it works smoothly with the rest of the Java Stream API.**

The main features are:

- **frictionless minimization** – there is no need to change or tweak the fitness function; we can just change the configuration of the *Engine* class and we are ready to start our first application
- **dependency free** – no runtime third-party libraries are needed to use Jenetics
- **Java 8 ready** – full support for *Stream* and lambda expressions
- **multithreaded** – evolutionary steps can be executed in parallel

In order to use Jenetics, we need to add the following dependency into our *pom.xml*:

```xml
<dependency>
    <groupId>io.jenetics</groupId>
    <artifactId>jenetics</artifactId>
    <version>3.7.0</version>
</dependency>
```

The latest version can be found in Maven Central.

**3. Use Cases**

To test all features of Jenetics, we’ll try to solve various well-known optimization problems, starting from the simple binary algorithm and ending with the Knapsack problem.

**3.1. Simple Genetic Algorithm**

Let’s assume that we need to solve the simplest binary problem, where we need to optimize the positions of the 1 bits in the chromosome consisting of 0’s and 1’s. First, we need to define the factory suitable for the problem:

```java
Factory<Genotype<BitGene>> gtf = Genotype.of(BitChromosome.of(10, 0.5));
```

We created a *BitChromosome* with a length of 10, where the probability of a bit being 1 is 0.5.

Now, let’s create the execution environment:

```java
Engine<BitGene, Integer> engine = Engine
    .builder(SimpleGeneticAlgorithm::eval, gtf)
    .build();
```

The *eval()* method returns the bit count:

```java
private Integer eval(Genotype<BitGene> gt) {
    return gt.getChromosome()
      .as(BitChromosome.class)
      .bitCount();
}
```
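As an aside, the same bit-count fitness can be mimicked outside of Jenetics with the JDK's *BitSet*, whose *cardinality()* plays the role of *bitCount()*. This is only an illustrative stand-in, not the library call:

```java
import java.util.BitSet;

public class BitCountFitness {

    // Counts the set bits, mirroring what BitChromosome.bitCount() returns
    static int eval(BitSet chromosome) {
        return chromosome.cardinality();
    }

    public static void main(String[] args) {
        BitSet gt = new BitSet(10);
        gt.set(1);
        gt.set(4);
        gt.set(7);
        System.out.println(eval(gt)); // 3 set bits
    }
}
```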

In the final step, we start the evolution and collect the results:

```java
Genotype<BitGene> result = engine.stream()
    .limit(500)
    .collect(EvolutionResult.toBestGenotype());
```

The final result will look similar to this:

```text
Before the evolution: [00000010|11111100]
After the evolution:  [00000000|11111111]
```

We managed to optimize the position of 1’s in the gene.

**3.2. Subset Sum Problem**

Another use case for Jenetics is the subset sum problem. In brief: given a set of integers, we need to find a non-empty subset whose sum is zero.
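For intuition, a brute-force baseline (plain Java, not part of Jenetics) checks every non-empty subset. It is only feasible for tiny sets, which is exactly why a heuristic such as a genetic algorithm becomes attractive as the set grows:

```java
public class SubsetSumBruteForce {

    // Returns true if some non-empty subset of `values` sums to zero.
    // Each bit of `mask` decides whether the element at that index is included.
    static boolean hasZeroSubset(int[] values) {
        for (int mask = 1; mask < (1 << values.length); mask++) {
            int sum = 0;
            for (int i = 0; i < values.length; i++) {
                if ((mask & (1 << i)) != 0) {
                    sum += values[i];
                }
            }
            if (sum == 0) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasZeroSubset(new int[]{3, -1, -2, 8})); // {3, -1, -2} sums to 0 -> true
        System.out.println(hasZeroSubset(new int[]{1, 2, 4}));      // no zero-sum subset -> false
    }
}
```

With n elements there are 2^n - 1 non-empty subsets, so this approach collapses quickly; the Jenetics solution below explores the search space far more economically.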

There are predefined interfaces in Jenetics to solve such problems:

```java
public class SubsetSum implements Problem<ISeq<Integer>, EnumGene<Integer>, Integer> {
    // implementation
}
```

As we can see, we implement the *Problem<T, G, C>* interface, which has three type parameters:

- **<T>** – the argument type of the fitness function, in our case an immutable, ordered, fixed-sized *Integer* sequence *ISeq<Integer>*
- **<G>** – the gene type, here *Integer* genes *EnumGene<Integer>*
- **<C>** – the result type of the fitness function; here it is an *Integer*

In order to use the *Problem<T, G, C>* interface, we need to override two methods:

```java
@Override
public Function<ISeq<Integer>, Integer> fitness() {
    return subset -> Math.abs(subset.stream()
      .mapToInt(Integer::intValue)
      .sum());
}

@Override
public Codec<ISeq<Integer>, EnumGene<Integer>> codec() {
    return codecs.ofSubSet(basicSet, size);
}
```

In the first one, we define our fitness function, whereas in the second one we use *codecs*, a class containing factory methods for creating common problem encodings, for example, for finding the best fixed-size subset of a given basic set, as in our case.

Now we can proceed to the main part. At the beginning, we need to create a subset to use in the problem:

```java
SubsetSum problem = of(500, 15, new LCG64ShiftRandom(101010));
```

Please note that we are using the *LCG64ShiftRandom* generator provided by Jenetics. In the next step, we are building the engine of our solution:

```java
Engine<EnumGene<Integer>, Integer> engine = Engine.builder(problem)
    .minimizing()
    .maximalPhenotypeAge(5)
    .alterers(
      new PartiallyMatchedCrossover<>(0.4),
      new Mutator<>(0.3))
    .build();
```

We try to minimize the result (optimally the result will be 0) by setting the phenotype age and alterers used to alter the offspring. In the next step we can obtain the result:

```java
Phenotype<EnumGene<Integer>, Integer> result = engine.stream()
    .limit(limit.bySteadyFitness(55))
    .collect(EvolutionResult.toBestPhenotype());
```

Please note that we are using *bySteadyFitness()*, which returns a predicate that truncates the evolution stream if no better phenotype can be found after the given number of generations, and then we collect the best result. If we get lucky, and there is a solution to the randomly created set, we’ll see something similar to this:

```text
[85|-76|178|-197|91|-106|-70|-243|-41|-98|94|-213|139|238|219] --> 0
```

Otherwise, the sum of the subset will be different from 0.

**3.3. Knapsack First Fit Problem**

The Jenetics library allows us to solve even more sophisticated problems, such as the Knapsack problem. Briefly speaking, in this problem we have limited space in our knapsack, and we need to decide which items to put inside.
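The First Fit packing rule used below is simple enough to sketch in plain Java: walk through the items in order and take each one that still fits in the remaining capacity. This sketch is only illustrative; the article's *KnapsackFF* class encapsulates the same idea for Jenetics:

```java
public class FirstFitSketch {

    // First Fit: walk the items in order and take each one that still fits.
    // Returns the total value packed (sizes and values are parallel arrays).
    static double pack(double[] sizes, double[] values, double capacity) {
        double used = 0;
        double totalValue = 0;
        for (int i = 0; i < sizes.length; i++) {
            if (used + sizes[i] <= capacity) {
                used += sizes[i];
                totalValue += values[i];
            }
        }
        return totalValue;
    }

    public static void main(String[] args) {
        double[] sizes = {4.0, 3.0, 5.0, 2.0};
        double[] values = {10.0, 7.0, 12.0, 4.0};
        // Capacity 9.0: items 0 and 1 fit (7.0 used), item 2 does not, item 3 does
        System.out.println(pack(sizes, values, 9.0)); // 21.0
    }
}
```

The genetic algorithm's job is then to find the *ordering and selection of items* for which this greedy packing yields the highest total value.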

Let’s start with defining the bag size and number of items:

```java
int nItems = 15;
double ksSize = nItems * 100.0 / 3.0;
```

In the next step, we’ll generate a random array containing *KnapsackItem* objects (defined by *size* and *value* fields) and we’ll put those items randomly inside the knapsack, using the First Fit method:

```java
KnapsackFF ff = new KnapsackFF(Stream.generate(KnapsackItem::random)
    .limit(nItems)
    .toArray(KnapsackItem[]::new), ksSize);
```

Next, we need to create the *Engine*:

```java
Engine<BitGene, Double> engine = Engine
    .builder(ff, BitChromosome.of(nItems, 0.5))
    .populationSize(500)
    .survivorsSelector(new TournamentSelector<>(5))
    .offspringSelector(new RouletteWheelSelector<>())
    .alterers(
      new Mutator<>(0.115),
      new SinglePointCrossover<>(0.16))
    .build();
```

There are a few points to note here:

- the population size is 500
- the survivors will be chosen through tournament selection and the offspring through roulette-wheel selection
- as we did in the previous subsection, we also need to define the alterers for the newly created offspring
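For readers unfamiliar with tournament selection, a minimal sketch in plain Java (not the Jenetics *TournamentSelector*) shows the idea: sample *k* random candidates from the population and keep the fittest of them. A larger *k* increases selection pressure:

```java
import java.util.Random;

public class TournamentSketch {

    // Samples `k` random candidates (with replacement) and
    // returns the index of the fittest one.
    static int select(double[] fitness, int k, Random rnd) {
        int best = rnd.nextInt(fitness.length);
        for (int i = 1; i < k; i++) {
            int challenger = rnd.nextInt(fitness.length);
            if (fitness[challenger] > fitness[best]) {
                best = challenger;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] fitness = {0.2, 0.9, 0.5, 0.7};
        Random rnd = new Random(7);
        int winner = select(fitness, 3, rnd);
        System.out.println("selected fitness: " + fitness[winner]);
    }
}
```

Roulette-wheel selection works differently: instead of a fixed-size tournament, each individual is drawn with probability proportional to its fitness.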

**There is also one very important feature of Jenetics: we can easily collect all statistics and insights from the whole simulation duration.** We are going to do this by using the *EvolutionStatistics* class:

```java
EvolutionStatistics<Double, ?> statistics = EvolutionStatistics.ofNumber();
```

Finally, let’s run the simulations:

```java
Phenotype<BitGene, Double> best = engine.stream()
    .limit(bySteadyFitness(7))
    .limit(100)
    .peek(statistics)
    .collect(toBestPhenotype());
```

Please note that we are updating the evolution statistics after each generation, and that the stream is limited to 7 steady generations and a maximum of 100 generations in total. In more detail, there are two possible scenarios:

- we achieve 7 steady generations, and then the simulation stops
- we cannot get 7 steady generations in fewer than 100 generations, so the simulation stops due to the second *limit()*

**It’s important to have a maximum generations limit; otherwise, the simulations may not stop in a reasonable time.**
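The interplay of the two *limit()* calls can be seen on a plain *IntStream* (illustrative only, not a Jenetics snippet): chained limits compose, so the stream ends as soon as either condition is satisfied.

```java
import java.util.stream.IntStream;

public class DoubleLimitDemo {

    public static void main(String[] args) {
        // The first limit() stands in for a steady-fitness predicate that
        // fires late; the second is the hard cap on total generations.
        long generations = IntStream.iterate(0, i -> i + 1)
            .limit(250)   // would allow up to 250 elements
            .limit(100)   // hard cap: only 100 ever pass through
            .count();
        System.out.println(generations); // 100: the hard cap wins here
    }
}
```

If the first *limit()* truncated earlier than 100, the stream would end there instead; whichever limit is hit first terminates the evolution.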

The final result contains a lot of information:

```text
+---------------------------------------------------------------------------+
|  Time statistics                                                          |
+---------------------------------------------------------------------------+
|             Selection: sum=0,039207931000 s; mean=0,003267327583 s        |
|              Altering: sum=0,065145069000 s; mean=0,005428755750 s        |
|   Fitness calculation: sum=0,029678433000 s; mean=0,002473202750 s        |
|     Overall execution: sum=0,111383965000 s; mean=0,009281997083 s        |
+---------------------------------------------------------------------------+
|  Evolution statistics                                                     |
+---------------------------------------------------------------------------+
|           Generations: 12                                                 |
|               Altered: sum=7 664; mean=638,666666667                      |
|                Killed: sum=0; mean=0,000000000                            |
|              Invalids: sum=0; mean=0,000000000                            |
+---------------------------------------------------------------------------+
|  Population statistics                                                    |
+---------------------------------------------------------------------------+
|                   Age: max=10; mean=1,792167; var=4,657748                |
|               Fitness:                                                    |
|                   min = 0,000000000000                                    |
|                   max = 716,684883338605                                  |
|                  mean = 587,012666759785                                  |
|                   var = 17309,892287851708                                |
|                   std = 131,567063841418                                  |
+---------------------------------------------------------------------------+
```

In this particular run, we were able to pack items with a total value of 716,68 in the best scenario. We can also see the detailed statistics of the evolution and timing.

**3.4. How to Test?**

This is a fairly simple process: just open the main class for the given problem and run the algorithm first. Once we have a general idea, we can start playing with the parameters.

**4. Conclusion**

In this article, we covered the Jenetics library features based on real optimization problems.

The code is available as a Maven project on GitHub. Please note that we provided the code examples for more optimization challenges, such as the Springsteen Record (yes, it exists!) and Traveling Salesman problems.

For all articles in the series, including other examples of genetic algorithms, check out the following links:

- How to Design a Genetic Algorithm in Java
- The Traveling Salesman Problem in Java
- Ant Colony Optimization
- Introduction to the Jenetics Library (this article)