1. Overview

Performance and processing speed are crucial to the vast majority of software. As a result, parallel computing, the process of breaking a complex task down into smaller subtasks that are computed simultaneously, has become a central concept of software engineering.

In Computer Science, parallelism is not a new topic. It has been studied since the first mechanical machines and is applied extensively in today’s transistor-based processors.

The Scala programming language already provides collections for quickly implementing parallel computing: the Parallel Collections.

In this tutorial, we’ll check out some concepts of parallelism with Scala and the usage of parallel collections.

2. Parallelism Overview

It’s important to understand that parallelism has efficiency as its primary goal. It exploits existing hardware architectures to perform the computation, across varying granularities and architectures.

In other words, parallelism is complex because optimizing hardware usage while dividing work into subprocesses directly affects accuracy and leads to logical challenges.

There are several levels of parallelism: bit-level, instruction-level, and task-level. Bit-level and instruction-level parallelism refer to how the hardware architecture parallelizes work, while task-level parallelism deals with code instructions.

Parallelism is also closely tied to the hardware. The way processing units handle instructions directly affects the parallelism capability. Multi-core processors, multiprocessors, graphics processing units (GPUs), and computer clusters are some well-known architectures applied.

Worth highlighting: when looking into computer cluster parallelism, there is a framework written in Scala to perform big data processing across clusters, Apache Spark.

2.1. Parallel Limitations

As important as parallelism is, there are cases in which it’s impossible to apply. Some examples are side-effecting operations, non-associative operations, race conditions, and so on.

Side-effecting operations are operations that touch mutable state, where changes from multiple threads may conflict. For example, consider an iteration that updates an external variable: the threads may evaluate simultaneously, each overwriting the other’s result.

Let’s take a look at a simple demonstration. For now, we’re going to ignore the implementation of parallel collections, so let’s assume that parallelExecutionList is a parallel collection of the Integers from 0 to 300, whose operations all execute in parallel:

val parallelExecutionList = ...

var count = 0

// Each thread performs a read-modify-write on the shared count,
// so concurrent increments can interleave and lose updates
def countTo150(num: Int): Unit = {
  if (num < 150) {
    count += 1
  }
}

parallelExecutionList.foreach(countTo150)

After executing this code three times, the count result varied: the first run gave 144, the second 138, and the third 140, never the expected 150.
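To get a deterministic result, we can avoid the shared mutable state entirely and let the collection aggregate the count itself. Here’s a minimal sketch using the built-in count method, which combines per-thread partial counts safely:

// No shared state: each worker counts its own chunk, and the
// partial counts are combined, so the result is always 150
val result = parallelExecutionList.count(_ < 150)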

Let’s see another example and suppose that parallelExecutionList is a parallel List of random numbers:

val parallelExecutionList = ...

parallelExecutionList.reduce(_ % _)

When executing this code, we receive a different result on each run: the modulo operator isn’t associative, so the outcome depends on how the collection is split and recombined. Race conditions are those scenarios in which the evaluation must happen sequentially to be correct.
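If the computation genuinely depends on evaluation order, one option is to fall back to sequential evaluation. A minimal sketch, reusing the same list:

// reduceLeft evaluates strictly left-to-right, so the result is
// deterministic, at the cost of giving up parallelism
parallelExecutionList.seq.reduceLeft(_ % _)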

So, it’s essential to understand each task we’re performing in order to decide whether parallelism is possible and worthwhile.

3. Setup

Before Scala 2.13, the parallel collections were part of the standard library. From 2.13 onwards, they were moved out into a separate library. So, for Scala 2.13 or above, we need to add the scala-parallel-collections dependency to build.sbt:

libraryDependencies += "org.scala-lang.modules" %% "scala-parallel-collections" % "1.0.4"

Additionally, we need to add the following import statement to bring the parallel methods into scope:

import scala.collection.parallel.CollectionConverters._
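With this import in scope, any standard collection gains the par conversion, and the seq method converts back. As a quick sanity check:

import scala.collection.parallel.CollectionConverters._

val par = List(1, 2, 3).par // a parallel version of the List
val seq = par.seq           // back to a sequential collection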

4. Scala Parallel Collections

As already said, Scala implements collections that work in parallel. The main sequential collections have parallel counterparts. Let’s see some of these classes:

  • ParArray
  • ParRange
  • ParVector
  • immutable.ParHashSet
  • immutable.ParHashMap

Let’s create a parallel sequence from a List of Integers from 0 to 100:

import scala.collection.parallel.ParSeq

val parallelList: ParSeq[Int] = (0 to 100).toList.par

As we can see, parallel collections integrate easily with sequential ones: the par method converts a sequential collection into its parallel counterpart. It’s also possible to instantiate the parallel classes directly, just as with sequential collections. For example, let’s create a parallel Vector with the numbers 0 until 1000:

import scala.collection.parallel.immutable.ParVector

val parallelVector = ParVector.range(0, 1000)
val otherParallelVector = ParVector.tabulate(1000)(x => x)

Now, to execute computations in parallel, we just use the conventional methods like filter, fold, and map.

4.1. Map and Filter Functions

map, filter, and other functions behave just like their sequential counterparts:

parallelList.map(_ * 2) // ParVector(0, 2, 4, 6, 8, 10, ...) 
parallelList.filter(_ % 2 != 0) // ParVector(1, 3, 5, 7, 9, ...)

As we can see, there is no difference in implementation or result, only the type.

4.2. Fold Function

The fold function can be tricky because it allows non-associative operations:

parallelList.fold(0)(_ + _) // Int = 5050 

We can see that this fold sums all the elements of the List starting from zero, just as it would with sequential collections, because addition is associative. However, if we change the function to subtraction, we fall into a non-associative scenario.
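As a quick illustration of that pitfall, the following call compiles and runs, but its result depends on how the elements are partitioned across threads and can change between executions:

// Subtraction is not associative: (a - b) - c != a - (b - c), so the
// parallel fold may combine partial results in a different order on
// each run and produce a different value
parallelList.fold(0)(_ - _)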

4.3. Parallelism Configuration

In addition, Parallel Collections allow us to customize the parallelization task management, which defines when and how tasks are parallelized. We can use the tasksupport field to set up our configuration:

import scala.collection.parallel.ForkJoinTaskSupport

parallelList.tasksupport = new ForkJoinTaskSupport()
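To control the degree of parallelism explicitly, we can pass the task support a pool of a specific size, using the standard java.util.concurrent.ForkJoinPool:

import java.util.concurrent.ForkJoinPool
import scala.collection.parallel.ForkJoinTaskSupport

// Limit this collection's operations to two worker threads
parallelList.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(2))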

The Scala collections architecture makes it possible to efficiently use, maintain, and extend all collections and collection operations, both sequential and parallel.

5. Conclusion

In conclusion, we presented some theoretical concepts of parallel computing applied with Scala. We looked into some ideas and limitations when working with parallelism.

To summarize, we mainly use parallelism for complex tasks and extensive data. For instance, if we analyze the performance of the operations above, we’ll see that sequential collections can sometimes outperform parallel ones, because these operations are so simple and small that the cost of parallelizing them isn’t worth it.
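As a rough way to observe this (a reliable measurement would use a benchmarking harness such as JMH, since naive JVM timings are noisy), we can compare a tiny workload sequentially and in parallel; the time helper below is a hypothetical utility added just for illustration:

import scala.collection.parallel.CollectionConverters._

// Naive timing helper, for illustration only
def time[A](block: => A): Long = {
  val start = System.nanoTime()
  block
  System.nanoTime() - start
}

val xs = (1 to 1000).toList
val seqNanos = time(xs.map(_ * 2))     // sequential map
val parNanos = time(xs.par.map(_ * 2)) // parallel map: often slower for
                                       // such small work, due to overhead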

As usual, the sample code used in this tutorial is available over on GitHub.
