
Kotlin Coroutines

Deep Dive

Marcin Moskała
This book is for sale at [Link]

This version was published on 2022-04-01

ISBN 978-83-963958-1-8

This is a Leanpub book. Leanpub empowers authors and publishers


with the Lean Publishing process. Lean Publishing is the act of
publishing an in-progress ebook using lightweight tools and many
iterations to get reader feedback, pivot until you have the right book
and build traction once you do.

© 2021 - 2022 Marcin Moskała


Tweet This Book!
Please help Marcin Moskała by spreading the word about this book
on Twitter!
The suggested tweet for this book is:
I just bought Kotlin Coroutines by Marcin Moskała
The suggested hashtag for this book is #KotlinCoroutines.
Find out what other people are saying about the book by clicking on
this link to search for this hashtag on Twitter:
#KotlinCoroutines
For my beloved wife, Maja.
Contents

Introduction

Part 1: Understanding Kotlin Coroutines
    Why Kotlin Coroutines?
    Sequence builder
    How does suspension work?
    Coroutines under the hood
    Coroutines: built-in support vs library

Part 2: Kotlin Coroutines library
    Coroutine builders
    Coroutine context
    Jobs and awaiting children
    Cancellation
    Exception handling
    Coroutine scope functions
    Dispatchers
    Constructing a coroutine scope
    The problem with shared state
    Testing Kotlin Coroutines

Part 3: Channel and Flow
    Channel
    Actors
    Hot and cold data sources
    Flow introduction
    Flow building
    Flow lifecycle functions
    Flow processing
    SharedFlow and StateFlow

Ending

Introduction

Why do you want to learn about Kotlin Coroutines? I like to ask my


workshop attendees this question. “Because they’re cool” and “Be-
cause everyone is talking about them” are common answers. Then I
dive deeper and I hear “because they are lighter threads”, “because
they are easier than RxJava”, or “because they allow concurrency
while allowing our code to have an imperative style”. However,
coroutines are much more than that. They are the holy grail of con-
currency. As a concept, they have been known in Computer Science
since the 1960s, but in mainstream programming they have only
been used in a very limited form (like async/await). This changed
with Golang, which introduced much more general-purpose corou-
tines. Kotlin built on that, creating what I believe is currently the
most powerful and practical implementation of this idea.
The importance of concurrency is growing, but the classic tech-
niques are not enough. Current trends suggest that coroutines are
the direction in which our industry is clearly heading, and Kotlin
Coroutines are a very solid step. Let me show them to you, with
examples of how well they help in common use cases. I hope you will
have a lot of fun reading this book.

Who is this book for?

As a developer experienced in both backend and Android, in this


book I try to mainly focus on these two perspectives. These are
currently the two major industry applications of Kotlin, and it can
be seen that coroutines were largely designed to suit these use cases
well¹. So, you might say that this book is primarily designed for
Android and backend developers, but it should be just as useful for
other developers using Kotlin.
This book assumes that readers know Kotlin well. If you do not, I
recommend starting with Kotlin in Action by Dmitry Jemerov and
Svetlana Isakova.

¹Google’s Android team cooperated in designing and creating


some features we will present in this book.

The structure of this book

The book is divided into the following parts:

• Part 1: Understanding Kotlin Coroutines - dedicated to ex-


plaining what Kotlin Coroutines are and how they really work.
• Part 2: Kotlin Coroutines library - explaining the most impor-
tant concepts from the kotlinx.coroutines library and how to
use them well.
• Part 3: Channel and Flow - focused on Channel and Flow from
the kotlinx.coroutines library.

What will be covered?

This book is based on a workshop I conduct. During its iterations I


have been able to observe what interested attendees and what did not.
These are the elements that are most often mentioned by attendees:

• How do coroutines really work? (Part 1)


• How to use coroutines in practice? (Part 2 and 3)
• What are the best practices? (Part 2 and 3)
• Testing Kotlin coroutines (Testing Kotlin Coroutines in Part 2)
• What is Flow and how does it work? (Part 3)

Conventions

When I use a concrete element from code, I will use code-font. When
naming a concept, I will capitalize the word. To reference an arbi-
trary element of some type, I will use lowercase. For example:

• Flow is a type or an interface, it is printed with code-font (like


in “Function needs to return Flow”);
• Flow represents a concept, so it starts with an uppercase letter
(like in “This explains the essential difference between Chan-
nel and Flow”);
• a flow is an instance, like a list or a set, therefore it is in
lowercase (like in “Every flow consists of a few elements”).

Another example: List means an interface or a type (“The type of l is


List”); List represents a concept (a data structure), while a list is one
of many lists (“the list variable holds a list”).
I have used American English in the book, except for the spelling
of “cancellation/cancelled”, which I chose due to the spelling of the
“Cancelled” coroutine state.

Code conventions

Most of the presented snippets are executable code with no import


statements. In the online version of this book on the Kt. Academy
website, most snippets can be executed, so readers can play with the
code.
Snippet results are presented using the println function. The result
will often be placed at the end of these snippets in comments. If there
is a delay between output lines, it will be shown in brackets. Here is
an example:

suspend fun main(): Unit = coroutineScope {
    launch {
        delay(1000L)
        println("World!")
    }
    println("Hello,")
}
// Hello,
// (1 sec)
// World!

Sometimes, some parts of code or a result are shortened with .... In


such cases, you can read it as “there should be more here, but it is not
relevant to the example”.

launch(CoroutineName("Name1")) { ... }
launch(CoroutineName("Name2") + Job()) { ... }

In some cases, I will show comments next to the line that prints them.
I do this when the order is clear:

suspend fun main(): Unit = coroutineScope {
    println("Hello,") // Hello,
    delay(1000L) // (1 sec)
    println("World!") // World!
}

In a few snippets I have added a number after the line to more easily
explain the snippet’s behavior. This is what it might look like:

suspend fun main(): Unit = coroutineScope {
    println("Hello,") // 1
    delay(1000L) // 2
    println("World!") // 3
}

At 1 we print “Hello,”, then we wait for a second because line 2


contains delay, and we print “World!” at line 3.

Acknowledgments

This book would not be so good without the reviewers’ great sugges-
tions and comments. I would like to thank all of them. Here is the
whole list of reviewers, starting from the most active ones.

Nicola Corti - a Google Developer Expert for


Kotlin. He has been working with the language
since before version 1.0, and he is the main-
tainer of several open-source libraries and
tools for mobile developers (Detekt, Chucker,
AppIntro). He’s currently working in the React
Native core team at Meta, building one of the
most popular cross-platform mobile frameworks. Furthermore, he
is an active member of the developer community. His involvement
goes from speaking at international conferences to being a member
of CFP committees and supporting developer communities across
Europe. In his free time, he also loves baking, podcasting, and run-
ning.

Garima Jain - a Google Developer Expert in


Android from India. She is also known around
the community as @ragdroid. Garima works
as a Principal Android Engineer at GoDaddy.
She is also an international speaker and an
active technical blogger. She enjoys interact-
ing with other people from the community
and sharing her thoughts with them. In her leisure time, she loves
watching television shows, playing TT, and basketball. Due to her
love for fiction and coding, she loves to mix technology with fiction
and then shares her ideas with others through talks and blog posts.

Ilmir Usmanov - a software developer at Jet-


Brains, working on coroutine support in the
Kotlin compiler since 2017. Was responsible
for stabilization and implementation of the
coroutines design. Since then, he has moved
to other features, namely inline classes. Cur-
rently, his work with coroutines is limited to
bug fixing and optimization, since coroutines as a language feature
is complete and stable and does not require much attention.

Sean McQuillan - a Developer Advocate at


Google. With a decade of experience at Twilio
and other San Francisco startups, he is an ex-
pert at building apps that scale. Sean is pas-
sionate about using great tooling to build high-
quality apps quickly. When he is not working
on Android, you can find him fiddling on the
piano or crocheting hats.

Igor Wojda - a passionate engineer with over


a decade of software development experience.
He is deeply interested in Android application
architecture and the Kotlin language, and he
is an active member of the open-source com-
munity. Igor is a conference speaker, technical
proofreader for the ‘Kotlin In Action’ book,
and author of the ‘Android Development with Kotlin’ book. Igor
enjoys sharing his passion for coding with other developers.
Jana Jarolimova - an Android developer at Avast. She started her ca-
reer teaching Java classes at Prague City University, before moving
on to mobile development, which inevitably led to Kotlin and her
love thereof.
Richard Schielek - an experienced developer and an early adopter of
Kotlin and coroutines, using both in production before they became
stable. Worked in the European space industry for several years.
Vsevolod Tolstopyatov - a team lead of the Kotlin Libraries team. He
works at JetBrains and is interested in API design, concurrency, JVM
internals, performance tuning and methodologies.
Ibrahim Yilmaz, Dean Djermanović and Dan O’Neill.
I would also like to thank Michael Timberlake, our language re-
viewer, for his excellent corrections to the whole book.
Part 1: Understanding Kotlin
Coroutines
Before we start our adventure with the Kotlin Coroutines library,
let’s start with some more basic concepts. What are coroutines? How
does suspension work? What does it all look like under the hood? We
will explore all this while learning some useful tools and practices.


Why Kotlin Coroutines?

Why do we need to learn Kotlin Coroutines? We already have well-


established JVM libraries like RxJava or Reactor. Moreover, Java
itself has support for multithreading, while many people also choose
to just use plain old callbacks instead. Clearly, we already have many
options for performing asynchronous operations.
Kotlin Coroutines offer much more than that. They are an imple-
mentation of a concept that was first described in 1963² but waited
years for a proper industry-ready implementation³. Kotlin Corou-
tines connects powerful capabilities presented by half-century-old
papers to a library that is designed to perfectly help in real-life use
cases. What is more, Kotlin Coroutines are multiplatform, which
means they can be used across all Kotlin platforms (like JVM, JS,
iOS, and also in the common modules). Finally, they do not change
the code structure drastically. We can use most Kotlin coroutines’
capabilities nearly effortlessly (which we cannot say about RxJava or
callbacks). This makes them beginner-friendly⁴.
Let’s see it in practice. We will explore how different common use
cases are solved by coroutines and other well-known approaches. I
will show two typical use cases: Android and backend business logic
implementation. Let’s start with the first one.

²Conway, Melvin E. (July 1963). “Design of a Separable Transition-diagram Compiler”. Communications of the ACM. ACM. 6 (7): 396–408. doi:10.1145/366663.366704. ISSN 0001-0782. S2CID 10559786
³I believe that the first industry-ready and universal coroutines
were introduced by Go in 2009. However, it is worth mentioning
that coroutines were also implemented in some older languages, like
Lisp, but they didn’t become popular. I believe this is because their
implementation wasn’t designed to support real-life cases. Lisp (just
like Haskell) was mostly treated as a playground for scientists rather
than as a language for professionals.
⁴This does not change the fact that we should understand corou-
tines to use them well.

Coroutines on Android (and other frontend platforms)

When you implement application logic on the frontend, what you


most often need to do is:

1. get some data from one or many sources (API, view element,
database, preferences, another application);
2. process this data;
3. do something with this data (display it in the view, store it in a
database, send it to an API).

To make our discussion more practical, let’s first assume we are


developing an Android application. We will start with a situation in
which we need to get news from an API, sort it, and display it on the
screen. This is a direct representation of what we want our function
to do:

fun onCreate() {
    val news = getNewsFromApi()
    val sortedNews = news
        .sortedByDescending { it.publishedAt }
    view.showNews(sortedNews)
}

Sadly, this cannot be done so easily. On Android, each application


has only one thread that can modify the view. This thread is very im-
portant and should never be blocked. That is why the above function
cannot be implemented in this way. If it were started on the main
thread, getNewsFromApi would block it, and our application would
crash. If we started it on another thread, our application would crash
when we call showNews because it needs to run on the main thread.

Thread switching

We could solve these problems by switching threads. First to a thread


that can be blocked, and then to the main thread.

fun onCreate() {
    thread {
        val news = getNewsFromApi()
        val sortedNews = news
            .sortedByDescending { it.publishedAt }
        runOnUiThread {
            view.showNews(sortedNews)
        }
    }
}

Such thread switching can still be found in some applications, but it


is known for being problematic for several reasons:

• There is no mechanism here to cancel these threads, so we


often face memory leaks.
• Making so many threads is costly.
• Frequently switching threads is confusing and hard to man-
age.
• The code will unnecessarily get bigger and more complicated.

To see those problems well, imagine the following situation: You


open and quickly close a view. While opening, you might have
started multiple threads that fetch and process data. Without
cancelling them, they will still be doing their job and trying to
modify a view that does not exist anymore. This means unnecessary
work for your device, possible exceptions in the background, and
who knows what other unexpected results.
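Below is a minimal sketch of what manual cancellation could look like with plain threads. It reuses the hypothetical getNewsFromApi, view and runOnUiThread helpers from the snippets above and assumes onDestroy is called when the view is closed.

var newsThread: Thread? = null

fun onCreate() {
    newsThread = thread {
        val news = getNewsFromApi()
        if (!Thread.currentThread().isInterrupted) {
            runOnUiThread {
                view.showNews(news)
            }
        }
    }
}

fun onDestroy() {
    // Easy to forget, and it only helps if the blocking calls
    // inside the thread actually react to interruption.
    newsThread?.interrupt()
    newsThread = null
}

Every started thread has to be tracked and interrupted by hand, and every blocking call on the way has to cooperate with interruption, which is exactly the kind of bookkeeping we would like to avoid.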
Considering all these problems, let’s look for a better solution.

Callbacks

Callbacks are another pattern that might be used to solve our prob-
lems. The idea is that we make our functions non-blocking, but we
pass to them a function that should be executed once the process
started by the callback function has finished. This is how our func-
tion might look if we use this pattern:

fun onCreate() {
    getNewsFromApi { news ->
        val sortedNews = news
            .sortedByDescending { it.publishedAt }
        view.showNews(sortedNews)
    }
}

Notice that this implementation does not support cancellation. We


might make cancellable callback functions, but it is not easy. Not
only does each callback function need to be specially implemented
for cancellation, but to cancel them we need to collect all the objects
separately.

fun onCreate() {
    startedCallbacks += getNewsFromApi { news ->
        val sortedNews = news
            .sortedByDescending { it.publishedAt }
        view.showNews(sortedNews)
    }
}

Callback architecture solves this simple problem, but it has many


downsides. To explore them, let’s discuss a more complex case in
which we need to get data from three endpoints:

fun showNews() {
    getConfigFromApi { config ->
        getNewsFromApi(config) { news ->
            getUserFromApi { user ->
                view.showNews(user, news)
            }
        }
    }
}

This code is far from perfect for several reasons:

• Getting news and user data might be parallelized, but our


current callback architecture doesn’t support that (it would be
hard to achieve this with callbacks).

• As mentioned before, supporting cancellation would require a


lot of additional effort.
• The increasing number of indentations makes this code hard to
read (code with multiple callbacks is often considered highly
unreadable). Such a situation is called “callback hell”, which
can be found especially in some older JavaScript projects.

• When we use callbacks, it is hard to control what happens after


what. The following way of showing a progress indicator will
not work:

fun onCreate() {
showProgressBar()
showNews()
hideProgressBar() // Wrong
}

The progress bar will be hidden just after starting the process of
showing news, so practically immediately after it has been shown.
To make this work, we would need to make showNews a callback
function as well.

fun onCreate() {
showProgressBar()
showNews {
hideProgressBar()
}
}

That’s why the callback architecture is far from perfect for non-
trivial cases. Let’s take a look at another approach: RxJava and other
reactive streams.

RxJava and other reactive streams

An alternative approach that is popular in Java (both in Android and


backend) is using reactive streams (or Reactive Extensions): RxJava
or its successor Reactor. With this approach, all operations happen
inside a stream of data that can be started, processed, and observed.
These streams support thread-switching and concurrent processing,
so they are often used to parallelize processing in applications.
This is how we might solve our problem using RxJava:

fun onCreate() {
    disposables += getNewsFromApi()
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .map { news ->
            news.sortedByDescending { it.publishedAt }
        }
        .subscribe { sortedNews ->
            view.showNews(sortedNews)
        }
}

The disposables in the above example are needed to


cancel this stream if (for example) the user exits the
screen.

This is definitely a better solution than callbacks: no memory leaks,


cancellation is supported, proper use of threads. The only problem
is that it is complicated. If you compare it with the “ideal” code from
the beginning (also shown below), you’ll see that they have very little
in common.

fun onCreate() {
    val news = getNewsFromApi()
    val sortedNews = news
        .sortedByDescending { it.publishedAt }
    view.showNews(sortedNews)
}

All these functions, like subscribeOn, observeOn, map, or subscribe,


need to be learned. Cancelling needs to be explicit. Functions need
to return objects wrapped inside Observable or Single classes. In
practice, when we introduce RxJava, we need to reorganize our code
a lot.

fun getNewsFromApi(): Single<List<News>>

Think of the second problem, for which we need to call three end-
points before showing the data. This can be solved properly with
RxJava, but it is even more complicated.

fun showNews() {
    disposables += Observable.zip(
        getConfigFromApi().flatMap { getNewsFromApi(it) },
        getUserFromApi(),
        Function2 { news: List<News>, config: Config ->
            Pair(news, config)
        })
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe { (news, config) ->
            view.showNews(news, config)
        }
}

This code is truly concurrent and has no memory leaks, but we need
to introduce RxJava functions such as zip and flatMap, pack a value
into Pair, and destructure it. This is a correct implementation, but
it’s quite complicated. So finally, let’s see what coroutines offer us.

Using Kotlin coroutines

The core functionality that Kotlin coroutines introduce is the ability


to suspend a coroutine at some point and resume it in the future.
Thanks to that, we might run our code on the Main thread and
suspend it when we request data from an API. When a coroutine is
suspended, the thread is not blocked and is free to go, therefore it
can be used to change the view or process other coroutines. Once
the data is ready, the coroutine waits for the Main thread (this is a
rare situation, but there might be a queue of coroutines waiting for
it); once it gets the thread, it can continue from the point where it
stopped.

This picture shows the updateNews and updateProfile functions running on the Main
thread in separate coroutines. They can do this interchangeably because they suspend
their coroutines instead of blocking the thread. When the updateNews function is
waiting for a network response, the Main thread is used by updateProfile. Here, it’s
assumed that getUserData did not suspend because the user’s data was already cached,
therefore it can run until its completion. This wasn’t enough time for the network
response, so the main thread is not used at that time (it can be used by other functions).
Once the data appears, we grab the Main thread and use it on the updateNews function,
starting from the point straight after getNewsFromApi().

By definition, coroutines are components that can be sus-


pended and resumed. Concepts like async/await and gen-
erators, which can be found in languages like JavaScript,
Rust or Python, also use coroutines, but their capabili-
ties are very limited.

So, our first problem might be solved by using Kotlin coroutines in


the following way:

fun onCreate() {
    viewModelScope.launch {
        val news = getNewsFromApi()
        val sortedNews = news
            .sortedByDescending { it.publishedAt }
        view.showNews(sortedNews)
    }
}

In the above code, I used viewModelScope, which is cur-


rently quite common on Android. We might instead use
a custom scope. We will discuss both options later.

This code is nearly identical to what we’ve wanted since the begin-
ning! In this solution, the code runs on the Main thread but it never
blocks it. Thanks to the suspension mechanism, we are suspending
(instead of blocking) the coroutine when we need to wait for data.
When the coroutine is suspended, the Main thread can go do other
things, like drawing a beautiful progress bar animation. Once the
data is ready, our coroutine takes the Main thread again and starts
from where it previously stopped.
How about the other problem with three calls? It could be solved
similarly:

fun showNews() {
    viewModelScope.launch {
        val config = getConfigFromApi()
        val news = getNewsFromApi(config)
        val user = getUserFromApi()
        view.showNews(user, news)
    }
}

This solution looks good, but how it works is not optimal. These
calls will happen sequentially (one after another), so if each of them
takes 1 second, the whole function will take 3 seconds instead of 2
seconds, which we can achieve if the API calls execute in parallel.
This is where the Kotlin coroutines library helps us with functions
like async, which can be used to immediately start another coroutine
with some request and wait for its result to arrive later (with the
await function).

fun showNews() {
    viewModelScope.launch {
        val config = async { getConfigFromApi() }
        val news = async { getNewsFromApi(config.await()) }
        val user = async { getUserFromApi() }
        view.showNews(user.await(), news.await())
    }
}

This code is still simple and readable. It uses the async/await pattern
that is popular in other languages, including JavaScript or C#. It is
also efficient and does not cause memory leaks. The code is both
simple and well implemented.
With Kotlin coroutines, we can easily implement different use cases
and use other Kotlin features. For instance, they do not block us from
using for-loops or collection-processing functions. Below, you can
see how the next pages might be downloaded in parallel or one after
another.

// all pages will be loaded simultaneously
fun showAllNews() {
    viewModelScope.launch {
        val allNews = (0 until getNumberOfPages())
            .map { page -> async { getNewsFromApi(page) } }
            .flatMap { it.await() }
        view.showAllNews(allNews)
    }
}

// next pages are loaded one after another
fun showPagesFromFirst() {
    viewModelScope.launch {
        for (page in 0 until getNumberOfPages()) {
            val news = getNewsFromApi(page)
            view.showNews(news)
        }
    }
}

Coroutines on the backend

In my opinion, the biggest advantage of using coroutines on the back-


end is simplicity. Unlike RxJava, using coroutines barely changes
how our code looks. In most cases, migrating from threads to corou-
tines only involves adding the suspend modifier. When we do this,
we can easily introduce concurrence, test concurrent behavior, can-
cel coroutines, and use all the other powerful features we will ex-
plore in this book.

suspend fun getArticle(
    articleKey: String,
    lang: Language
): ArticleJson? {
    return articleRepository.getArticle(articleKey, lang)
        ?.let { toArticleJson(it) }
}

suspend fun getAllArticles(
    userUuid: String?,
    lang: Language
): List<ArticleJson> = coroutineScope {
    val user = async { userRepo.findUserByUUID(userUuid) }
    val articles = articleRepo.getArticles(lang)
    articles
        .filter { hasAccess(user.await(), it) }
        .map { toArticleJson(it) }
}

Except for all these features, there is one more important reason to
use coroutines: threads are costly. They need to be created, main-
tained, and they need their memory allocated⁵. If your application
is used by millions of users and you are blocking whenever you wait
for a response from a database or another service, this adds up to
a significant cost in memory and processor use (for the creation,
maintenance, and synchronization of these threads).

⁵Most often, the default size of the thread stack is 1 MB. Due to
Java optimizations, this does not necessarily mean 1 MB times the
number of threads will be used, but a lot of extra memory is spent
just because we create threads.

This problem can be visualized with the following snippets that sim-
ulate a backend service with 100,000 users asking for data. The first
snippet starts 100,000 threads and makes them sleep for a second (to
simulate waiting for a response from a database or other service). If
you run it on your computer, you will see it takes a while to print
all those dots, or it will break with an OutOfMemoryError exception.
This is the cost of running so many threads. The second snippet uses
coroutines instead of threads and suspends them instead of making
them sleep. If you run it, the program will wait for a second and then
print all the dots. The cost of starting all these coroutines is so cheap
that it is barely noticeable.

fun main() {
    repeat(100_000) {
        thread {
            Thread.sleep(1000L)
            print(".")
        }
    }
}

fun main() = runBlocking {
    repeat(100_000) {
        launch {
            delay(1000L)
            print(".")
        }
    }
}

Conclusion

I hope you feel convinced to learn more about Kotlin coroutines now.
They are much more than just a library, and they make concurrent
programming as easy as possible with modern tools. If we have
that settled, let’s start learning. For the rest of this chapter, we will
explore how suspension works: first from the usage point of view,
then under the hood.

Sequence builder

In some other languages, like Python or JavaScript, you can find


structures that use limited forms of coroutines:

• async functions (also called async/await);


• generator functions (functions in which subsequent values are
yielded).

We’ve already seen how async can be used in Kotlin, but this will
be explained in detail in the Coroutine builders chapter. Instead of
generators, Kotlin provides a sequence builder - a function used to
create a sequence⁶.
A Kotlin sequence is a similar concept to a collection (like List or
Set), but it is evaluated lazily, meaning the next element is always
calculated on demand, when it is needed. As a result, sequences:

• do the minimal number of required operations;


• can be infinite;
• are more memory-efficient⁷ (the short sketch below illustrates the first of these points).
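To make the first of these points concrete, here is a minimal sketch comparing eager list processing with lazy sequence processing:

fun main() {
    listOf(1, 2, 3)
        .map { println("List map $it"); it * 2 }
        .first()
    // List map 1
    // List map 2
    // List map 3

    sequenceOf(1, 2, 3)
        .map { println("Seq map $it"); it * 2 }
        .first()
    // Seq map 1
}

The list version transforms every element before first is called, while the sequence version computes only the single element that is actually needed.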

Due to these characteristics, it makes a lot of sense to define a builder


where subsequent elements are calculated and “yielded” on demand.
We define it using the function sequence. Inside its lambda expres-
sion, we can call the yield function to produce the next elements of
this sequence.

val seq = sequence {
    yield(1)
    yield(2)
    yield(3)
}

⁶Even better, it offers flow builders. Flow is a similar but much more powerful concept which we will explain later in the book.
⁷See the item Prefer Sequence for big collections with more than one processing step in Effective Kotlin.

fun main() {
for (num in seq) {
print(num)
} // 123
}

The sequence function here is a small DSL. Its argument is a lambda expression with a receiver (suspend SequenceScope<T>.() -> Unit). Inside it, the receiver this refers to an object of type SequenceScope<T>. It has functions like yield. When we call yield(1), it is equivalent to calling this.yield(1) because this can be used implicitly. If this is your first contact with lambda expressions with receivers, I recommend starting from learning about them and about DSL creation, as they are used intensively in Kotlin Coroutines.
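If lambda expressions with receivers are new to you, the short sketch below (unrelated to coroutines) shows the mechanism in isolation:

class Greeter {
    fun hello(name: String) = println("Hello, $name")
}

fun greeting(block: Greeter.() -> Unit) {
    Greeter().block() // inside block, `this` is a Greeter
}

fun main() {
    greeting {
        hello("reader") // equivalent to this.hello("reader")
    }
}
// Hello, reader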

What is essential here is that each number is generated on demand,


not in advance. You can observe this process clearly if we print
something in both the builder and in the place where we handle our
sequence.

val seq = sequence {
    println("Generating first")
    yield(1)
    println("Generating second")
    yield(2)
    println("Generating third")
    yield(3)
    println("Done")
}

fun main() {
for (num in seq) {
println("The next number is $num")
}
}
// Generating first
// The next number is 1
// Generating second

// The next number is 2
// Generating third
// The next number is 3
// Done

Let’s analyze how it works. We ask for the first number, so we enter
the builder. We print “Generating first”, and we yield number 1.
Then we get back to the loop with the yielded value, and so “The next
number is 1” is printed. Then something crucial happens: execution
jumps to the place where we previously stopped to find another num-
ber. This would be impossible without a suspension mechanism, as
it wouldn’t be possible to stop a function in the middle and resume
it from the same point in the future. Thanks to suspension, we
can do this as execution can jump between main and the sequence
generator.

When we ask for the next value in the sequence, we resume in the builder straight
after the previous yield.

To see it more clearly, let’s manually ask for a few values from the
sequence.

val seq = sequence {
    println("Generating first")
    yield(1)
    println("Generating second")
    yield(2)
    println("Generating third")
    yield(3)
    println("Done")
}

fun main() {
    val iterator = seq.iterator()
    println("Starting")
    val first = iterator.next()
    println("First: $first")
    val second = iterator.next()
    println("Second: $second")
    // ...
}

// Prints:
// Starting
// Generating first
// First: 1
// Generating second
// Second: 2

Here, we used an iterator to get the next values. At any point, we


can call it again to jump into the middle of the builder function and
generate the next value. Would this be possible without coroutines?
Maybe, if we dedicated a thread for that. Such a thread would need
to be maintained, and that would be a huge cost. With coroutines,
it is fast and simple. Moreover, we can keep this iterator for as long
as we wish as it costs nearly nothing. Soon we will learn how this
mechanism works under the hood (in the Coroutines under the hood
chapter).
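For comparison, a generator built on a dedicated thread could look roughly like the sketch below (it relies on java.util.concurrent.SynchronousQueue and kotlin.concurrent.thread). Every such generator keeps a whole thread alive, blocked between calls:

fun threadBasedNumbers(): Iterator<Int> {
    val queue = SynchronousQueue<Int>()
    thread(isDaemon = true) {
        var i = 1
        while (true) queue.put(i++) // blocks until a value is requested
    }
    return object : Iterator<Int> {
        override fun hasNext() = true
        override fun next(): Int = queue.take() // blocks the consumer thread
    }
}

fun main() {
    val numbers = threadBasedNumbers()
    println(numbers.next()) // 1
    println(numbers.next()) // 2
}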

Real-life usages

There are a few use cases where sequence builders are used. The most
typical one is generating a mathematical sequence, like a Fibonacci
sequence.

val fibonacci: Sequence<BigInteger> = sequence {
    var first = 0.toBigInteger()
    var second = 1.toBigInteger()
    while (true) {
        yield(first)
        val temp = first
        first += second
        second = temp
    }
}

fun main() {
    print(fibonacci.take(10).toList())
}
// [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

This builder can also be used to generate random numbers or texts.

fun randomNumbers(
    seed: Long = System.currentTimeMillis()
): Sequence<Int> = sequence {
    val random = Random(seed)
    while (true) {
        yield(random.nextInt())
    }
}

fun randomUniqueStrings(
    length: Int,
    seed: Long = System.currentTimeMillis()
): Sequence<String> = sequence {
    val random = Random(seed)
    val charPool = ('a'..'z') + ('A'..'Z') + ('0'..'9')
    while (true) {
        val randomString = (1..length)
            .map { i -> random.nextInt(charPool.size) }
            .map(charPool::get)
            .joinToString("")
        yield(randomString)
    }
}.distinct()

The sequence builder should not use suspending operations other


than yielding operations⁸. If you need, for instance, to fetch data,
it’s better to use Flow, as will be explained later in the book. The
way its builder works is similar to the sequence builder, but Flow has
support for other coroutine features.

fun allUsersFlow(
    api: UserApi
): Flow<User> = flow {
    var page = 0
    do {
        val users = api.takePage(page++) // suspending
        emitAll(users)
    } while (!users.isNullOrEmpty())
}

We’ve learned about the sequence builder and why it needs suspen-
sion to work correctly. Now that we’ve seen suspension in action,
it is time to dive even deeper to understand how suspension works
when we use it directly.
⁸And it cannot be, since SequenceScope is annotated with
RestrictsSuspension, which prevents the suspend function being
called unless its receiver is SequenceScope.
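To see this restriction in practice, here is a small sketch: calling a general suspending function like delay inside the sequence builder is rejected by the compiler.

val seqWithDelay = sequence {
    // delay(1000) // does not compile: inside sequence we can only call
    //             // suspending members of SequenceScope, such as yield
    yield(1)
}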

How does suspension work?

Suspending functions are the hallmark of Kotlin coroutines. The


suspension capability is the single most essential feature upon which
all other Kotlin Coroutines concepts are built. That is why our goal
in this chapter is to forge a solid understanding of how it works.
Suspending a coroutine means stopping it in the middle. It is similar
to stopping a video game: you save at a checkpoint, turn off the game,
and both you and your computer can focus on doing different things.
Then, when you would like to continue some time later, you turn
on the game again, resume from the saved checkpoint, and thus you
can play from where you previously left off. This is an analogy to
coroutines. When they are suspended, they return a Continuation.
It is like a save in a game: we can use it to continue from the point
where we stopped.
Notice that this is very different from a thread, which cannot be
saved, only blocked. A coroutine is much more powerful. When
suspended, it does not consume any resources. A coroutine can be
resumed on a different thread, and (at least in theory) a continuation
can be serialized, deserialized and then resumed.

Resume

So let’s see it in action. For this, we need a coroutine. We start


coroutines using coroutine builders (like runBlocking or launch),
which we will introduce later. However, there is also a simpler way:
we can use a suspending main function.
Suspending functions are functions that can suspend a coroutine.
This means that they must be called from a coroutine (or another
suspending function). In the end, they need to have something to
suspend. Function main is the starting point, so Kotlin will start it in
a coroutine when we run it.

suspend fun main() {
    println("Before")

    println("After")
}
// Before
// After

This is a simple program that will print “Before” and “After”. What
will happen if we suspend in between these two prints? For that,
we can use the suspendCoroutine function provided by the standard
Kotlin library⁹.

suspend fun main() {
    println("Before")

    suspendCoroutine<Unit> { }

    println("After")
}
// Before

If you call the above code, you will not see the “After”, and the code
will not stop running (as our main function never finished). The
coroutine is suspended after “Before”. Our game was stopped and
never resumed. So, how can we resume? Where is this aforemen-
tioned Continuation?
Take a look again at the suspendCoroutine invocation and notice that
it ends with a lambda expression ({ }). The function passed as an
argument will be invoked before the suspension. This function gets
a continuation as an argument.

suspend fun main() {
    println("Before")

    suspendCoroutine<Unit> { continuation ->
        println("Before too")
    }

    println("After")
}
// Before
// Before too

⁹It directly calls suspendCoroutineUninterceptedOrReturn, which is a primitive function, i.e. a function with an intrinsic implementation.

Such a function calling another function in place is nothing new.


This is similar to let, apply, or useLines. The suspendCoroutine
function is designed in the same way, which makes it possible to use
continuation just before the suspension. After the suspendCoroutine
call, it would be too late. So, the lambda expression passed as a
parameter to the suspendCoroutine function is invoked just before
the suspension. This lambda is used to store this continuation some-
where or to plan whether to resume it.
We could use it to resume immediately:

suspend fun main() {
    println("Before")

    suspendCoroutine<Unit> { continuation ->
        continuation.resume(Unit)
    }

    println("After")
}
// Before
// After

Notice that “After” in the example above is printed because we call


resume in suspendCoroutine¹⁰.

Since Kotlin 1.3, the definition of Continuation has been


changed. Instead of resume and resumeWithException,
there is one resumeWith function that expects Result.
The resume and resumeWithException functions we are
using are extension functions from the standard library
that use resumeWith.

¹⁰This statement is true, but I need to clarify. You might imagine


that here we suspend and immediately resume. This is a good intu-
ition, but the truth is that there is an optimization that prevents a
suspension if resuming is immediate.

inline fun <T> Continuation<T>.resume(value: T): Unit =
    resumeWith(Result.success(value))

inline fun <T> Continuation<T>.resumeWithException(
    exception: Throwable
): Unit = resumeWith(Result.failure(exception))

We could also start a different thread that will sleep for a set duration
and resume after that time:

suspend fun main() {
    println("Before")

    suspendCoroutine<Unit> { continuation ->
        thread {
            println("Suspended")
            Thread.sleep(1000)
            continuation.resume(Unit)
            println("Resumed")
        }
    }

    println("After")
}
// Before
// Suspended
// (1 second delay)
// After
// Resumed

This is an important observation. Notice that we can make a func-


tion that will resume our continuation after a defined period. In
such a case, the continuation is captured by the lambda expression,
as shown in the code snippet below.

fun continueAfterSecond(continuation: Continuation<Unit>) {
    thread {
        Thread.sleep(1000)
        continuation.resume(Unit)
    }
}

suspend fun main() {
    println("Before")

    suspendCoroutine<Unit> { continuation ->
        continueAfterSecond(continuation)
    }

    println("After")
}
// Before
// (1 sec)
// After

Such a mechanism works, but it unnecessarily creates threads only


to end them after just a second of inactivity. Threads are not cheap,
so why waste them? A better way would be to set up an “alarm clock”.
In JVM, we can use ScheduledExecutorService for that. We can set
it to call continuation.resume(Unit) after a defined amount of time.

private val executor =
    Executors.newSingleThreadScheduledExecutor {
        Thread(it, "scheduler").apply { isDaemon = true }
    }

suspend fun main() {
    println("Before")

    suspendCoroutine<Unit> { continuation ->
        executor.schedule({
            continuation.resume(Unit)
        }, 1000, TimeUnit.MILLISECONDS)
    }

    println("After")
}
// Before
// (1 second delay)
// After

Suspending for a set amount of time seems like a useful feature. Let’s
extract it into a function. We will name it delay.

private val executor =
    Executors.newSingleThreadScheduledExecutor {
        Thread(it, "scheduler").apply { isDaemon = true }
    }

suspend fun delay(timeMillis: Long): Unit =
    suspendCoroutine { cont ->
        executor.schedule({
            cont.resume(Unit)
        }, timeMillis, TimeUnit.MILLISECONDS)
    }

suspend fun main() {
    println("Before")

    delay(1000)

    println("After")
}
// Before
// (1 second delay)
// After

The executor still uses a thread, but it is one thread for all coroutines
using the delay function. This is much better than blocking one
thread every time we need to wait for some time.
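As a rough sketch (using the launch and coroutineScope builders, which are properly introduced later in the book, and assuming the executor-based delay above is in scope), several coroutines can wait on this single “scheduler” thread at the same time; the program below takes about one second in total, not three, and the order of its output lines may vary:

suspend fun main(): Unit = coroutineScope {
    repeat(3) { id ->
        launch {
            delay(1000) // the delay defined above, not the library one
            println("Done $id")
        }
    }
}
// (1 second delay)
// Done 0
// Done 1
// Done 2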

This is exactly how delay from the Kotlin Coroutines library used to
be implemented. The current implementation is more complicated,
mainly so as to support testing, but the essential idea remains the
same.

Resuming with a value

One thing that might concern you is why we passed Unit to the
resume function. You might also be wondering why we used Unit as a
type argument for the suspendCoroutine. The fact that these two are
the same is no coincidence. Unit is also returned from the function
and is the generic type of the Continuation parameter.

val ret: Unit =
    suspendCoroutine<Unit> { cont: Continuation<Unit> ->
        cont.resume(Unit)
    }

When we call suspendCoroutine, we can specify which type will be


returned in its continuation. The same type needs to be used when
we call resume.

suspend fun main() {
    val i: Int = suspendCoroutine<Int> { cont ->
        cont.resume(42)
    }
    println(i) // 42

    val str: String = suspendCoroutine<String> { cont ->
        cont.resume("Some text")
    }
    println(str) // Some text

    val b: Boolean = suspendCoroutine<Boolean> { cont ->
        cont.resume(true)
    }
    println(b) // true
}

This does not fit well with the game analogy. I don’t know of any
game in which you can put something inside the game when re-
suming a save¹¹ (unless you cheated and googled how to solve the
next challenge). However, it makes perfect sense with coroutines.
Often we are suspended because we are waiting for some data, such
as a network response from an API. This is a common scenario.
Your thread is running business logic until it reaches a point where
it needs some data. So, it asks your network library to deliver it.
Without coroutines, this thread would then need to sit and wait.
This would be a huge waste as threads are expensive, especially if
this is an important thread, like the Main Thread on Android. With
coroutines, it just suspends and gives the library a continuation with
the instruction “Once you’ve got this data, just send it to the resume
function”. Then the thread can go do other things. Once the data is
there, the thread will be used to resume from the point where the
coroutine was suspended.
To see this in action, let’s see how we might suspend until we re-
ceive some data. In the example below, we use a callback function
requestUser that is implemented externally.

suspend fun main() {
    println("Before")
    val user = suspendCoroutine<User> { cont ->
        requestUser { user ->
            cont.resume(user)
        }
    }
    println(user)
    println("After")
}
// Before
// (1 second delay)
// User(name=Test)
// After

¹¹During a workshop discussion it turned out there is such a game:


in Don’t Starve Together, when you resume, you can change players.
I haven’t played it myself, but this sounds like a nice metaphor for
resuming with a value.

Calling suspendCoroutine directly is not convenient. We would pre-


fer to have a suspending function instead. We can extract it our-
selves.

suspend fun requestUser(): User {
    return suspendCoroutine<User> { cont ->
        requestUser { user ->
            cont.resume(user)
        }
    }
}

suspend fun main() {
    println("Before")
    val user = requestUser()
    println(user)
    println("After")
}

Currently, suspending functions are already supported by many pop-


ular libraries, such as Retrofit and Room. This is why we rarely need
to use callback functions in suspending functions. However, if you
have such a need, I recommend using suspendCancellableCoroutine
(instead of suspendCoroutine), which will be explained in the Cancel-
lation chapter.

suspend fun requestUser(): User {
    return suspendCancellableCoroutine<User> { cont ->
        requestUser { user ->
            cont.resume(user)
        }
    }
}

You might wonder what happens if the API gives us not data but
some kind of problem. What if the service is dead or responds
with an error? In such a case, we cannot return data; instead, we
should throw an exception from the place where the coroutine was
suspended. This is where we need to resume with an exception.

Resume with an exception

Every function we call might return some value or throw an excep-


tion. The same is true for suspendCoroutine. When resume is called,
it returns data passed as an argument. When resumeWithException
is called, the exception that is passed as an argument is conceptually
thrown from the suspension point.

class MyException : Throwable("Just an exception")

suspend fun main() {
    try {
        suspendCoroutine<Unit> { cont ->
            cont.resumeWithException(MyException())
        }
    } catch (e: MyException) {
        println("Caught!")
    }
}
// Caught!

This mechanism is used for different kinds of problems. For instance,


to signal network exceptions.

suspend fun requestUser(): User {
    return suspendCancellableCoroutine<User> { cont ->
        requestUser { resp ->
            if (resp.isSuccessful) {
                cont.resume(resp.data)
            } else {
                val e = ApiException(
                    resp.code,
                    resp.message
                )
                cont.resumeWithException(e)
            }
        }
    }
}

suspend fun requestNews(): News {
    return suspendCancellableCoroutine<News> { cont ->
        requestNews(
            onSuccess = { news -> cont.resume(news) },
            onError = { e -> cont.resumeWithException(e) }
        )
    }
}

Suspending a coroutine, not a function

One thing that needs to be stressed here is that we suspend a corou-


tine, not a function. Suspending functions are not coroutines, just
functions that can suspend a coroutine¹². Imagine that we store a
function in some variable and try to resume it after the function call.

// Do not do this
var continuation: Continuation<Unit>? = null

suspend fun suspendAndSetContinuation() {
    suspendCoroutine<Unit> { cont ->
        continuation = cont
    }
}

suspend fun main() {
    println("Before")

    suspendAndSetContinuation()
    continuation?.resume(Unit)

    println("After")
}
// Before

This makes no sense. It is equivalent to stopping a game and planning


to resume it at a later point in the game. resume will never be called.
¹²A suspending main function is a special case: the Kotlin compiler starts it in a coroutine.

You will only see “Before”, and your program will never end unless
we resume on another thread or another coroutine. To show this, we
can set another coroutine to resume after a second.

// Do not do this, potential memory leak
var continuation: Continuation<Unit>? = null

suspend fun suspendAndSetContinuation() {
    suspendCoroutine<Unit> { cont ->
        continuation = cont
    }
}

suspend fun main() = coroutineScope {
    println("Before")

    launch {
        delay(1000)
        continuation?.resume(Unit)
    }

    suspendAndSetContinuation()
    println("After")
}
// Before
// (1 second delay)
// After

Summary

I hope now you have a clear picture of how suspension works from
the user’s point of view. It is important, and we will see it throughout
the book. It is also practical, as now you can take callback functions
and make them suspending functions. If you are like me and like to
know exactly how things work, you are likely still wondering about
how it is implemented. If you’re curious about this, it will be covered
in the next chapter. If you don’t feel you need to know, just skip it. It
is not very practical, it just reveals the magic of Kotlin coroutines.

Coroutines under the hood

There is a certain kind of person who cannot accept that a car can just
be driven. They need to open its hood to understand how it works. I
am one of these people, so I just had to find out how coroutines work.
If you’re like this too, you will enjoy this chapter. If not, you can just
skip it.
This chapter won’t introduce any new tools that you might use. It
is purely explanatory. It tries to explain to a satisfactory level how
coroutines work. The key lessons are:

• Suspending functions are like state machines, with a possible


state at the beginning of the function and after each suspend-
ing function call.
• Both the number identifying the state and the local data are
kept in the continuation object.
• Continuation of a function decorates a continuation of its
caller function; as a result, all these continuations represent a
call stack that is used when we resume or a resumed function
completes.

If you are interested in learning some internals (simplified, of


course), read on.

Continuation-passing style

There are a few ways in which suspending functions could have


been implemented, but the Kotlin team decided on an option
called continuation-passing style. This means that continuations
(explained in the previous chapter) are passed from function to
function as arguments. By convention, a continuation takes the last
parameter position.

suspend fun getUser(): User?
suspend fun setUser(user: User)
suspend fun checkAvailability(flight: Flight): Boolean

// under the hood is
fun getUser(continuation: Continuation<*>): Any?
fun setUser(user: User, continuation: Continuation<*>): Any
fun checkAvailability(
    flight: Flight,
    continuation: Continuation<*>
): Any

You might have also noticed that the result type under the hood is
different from the originally declared one. It has changed to Any or
Any?. Why so? The reason is that a suspending function might be
suspended, and so it might not return a declared type. In such a case,
it returns a special COROUTINE_SUSPENDED marker, which we will later
see in practice. For now, just notice that since getUser might return
User? or COROUTINE_SUSPENDED (which is of type Any), its result type
must be the closest supertype of User? and Any, so it is Any?. Perhaps
one day Kotlin will introduce union types, in which case we will have
User? | COROUTINE_SUSPENDED instead.

A very simple function

To dive deeper, let’s start with a very simple function that prints
something before and after a delay.

suspend fun myFunction() {
    println("Before")
    delay(1000) // suspending
    println("After")
}

You can already deduce how the myFunction function signature will
look under the hood:

fun myFunction(continuation: Continuation<*>): Any



The next thing is that this function needs its own continuation in
order to remember its state. Let’s name it MyFunctionContinuation
(the actual continuation is an object expression and has no name,
but it will be easier to explain this way). At the beginning of its body,
myFunction will wrap the continuation (the parameter) with its own
continuation (MyFunctionContinuation).

val continuation = MyFunctionContinuation(continuation)

This should be done only if the continuation isn’t wrapped already.


If it is, this is part of the resume process, and we should keep the
continuation unchanged¹³ (this might be confusing now, but later
you will better see why).

val continuation =
if (continuation is MyFunctionContinuation) continuation
else MyFunctionContinuation(continuation)

This condition can be simplified to:

val continuation = continuation as? MyFunctionContinuation
    ?: MyFunctionContinuation(continuation)

Finally, let’s talk about the body of our function.

suspend fun myFunction() {
    println("Before")
    delay(1000) // suspending
    println("After")
}

The function could be started from two places: either from the be-
ginning (in the case of a first call) or from the point after suspension
(in the case of resuming from continuation). To identify the current
state, we use a field called label. At the start, it is 0, therefore the
function will start from the beginning. However, it is set to the next
state before each suspension point so that we start from just after the
suspension point after a resume.
¹³The actual mechanism here is a little more complicated as the
first bit of the label is also changed, and this change is checked by
the suspending function. This mechanism is needed for suspending
functions in order to support recurrence. This has been skipped for
the sake of simplicity.

// A simplified picture of how myFunction looks under the hood
fun myFunction(continuation: Continuation<Unit>): Any {
    val continuation = continuation as? MyFunctionContinuation
        ?: MyFunctionContinuation(continuation)

    if (continuation.label == 0) {
        println("Before")
        continuation.label = 1
        if (delay(1000, continuation) == COROUTINE_SUSPENDED) {
            return COROUTINE_SUSPENDED
        }
    }
    if (continuation.label == 1) {
        println("After")
        return Unit
    }
    error("Impossible")
}

The last important piece is also presented in the snippet above.


When delay is suspended, it returns COROUTINE_SUSPENDED, then
myFunction returns COROUTINE_SUSPENDED; the same is done by the
function that called it, and the function that called this function,
and all other functions until the top of the call stack¹⁴. This is how a
suspension ends all these functions and leaves the thread available
for other runnables (including coroutines) to be used.
Before we go any further, let’s analyze the code above. What would
happen if this delay call didn’t return COROUTINE_SUSPENDED? What
if it just returned Unit instead (we know it won’t, but let’s hypothe-
size)? Notice that if the delay just returned Unit, we would just move
to the next state, and the function would behave like any other.
Now, let’s talk about the continuation, which is implemented as an
anonymous class. Simplified, it looks like this:

¹⁴More concretely, COROUTINE_SUSPENDED is propagated until it


reaches either the builder function or the ‘resume’ function.

cont = object : ContinuationImpl(continuation) {
    var result: Any? = null
    var label = 0

    override fun invokeSuspend(`$result`: Any?): Any? {
        this.result = `$result`;
        return myFunction(this);
    }
};

To improve the readability of our function, I decided to represent it


as a class named MyFunctionContinuation. I also decided to hide the
inheritance by inlining the ContinuationImpl body. The resulting
class is simplified: I’ve skipped many optimizations and functionali-
ties so as to keep only what is essential.

In JVM, type arguments are erased during compilation;


so, for instance, both Continuation<Unit> or
Continuation<String> become just Continuation. Since
everything we present here is Kotlin representation of
JVM bytecode, you should not worry about these type
arguments at all.

The code below presents a complete simplification of how our func-


tion looks under the hood:

fun myFunction(continuation: Continuation<Unit>): Any {
    val continuation = continuation as? MyFunctionContinuation
        ?: MyFunctionContinuation(continuation)

    if (continuation.label == 0) {
        println("Before")
        continuation.label = 1
        if (delay(1000, continuation) == COROUTINE_SUSPENDED) {
            return COROUTINE_SUSPENDED
        }
    }
    if (continuation.label == 1) {
        println("After")
        return Unit
    }
    error("Impossible")
}

class MyFunctionContinuation(
    val completion: Continuation<Unit>
) : Continuation<Unit> {
    override val context: CoroutineContext
        get() = completion.context

    var label = 0
    var result: Result<Any>? = null

    override fun resumeWith(result: Result<Unit>) {
        this.result = result
        val res = try {
            val r = myFunction(this)
            if (r == COROUTINE_SUSPENDED) return
            Result.success(r as Unit)
        } catch (e: Throwable) {
            Result.failure(e)
        }
        completion.resumeWith(res)
    }
}

If you want to analyze by yourself what suspending functions are


under the hood, open the function in IntelliJ IDEA, use Tools >
Kotlin > Show Kotlin bytecode, and click the “Decompile” button. As
a result, you will see this code decompiled to Java (so more or less
how this code would look if it were written in Java).

How to show the bytecode generated from the file.

The bytecode generated from the file. Notice the “Decompile” button, which lets us
decompile this bytecode to Java.

Bytecode from the Kotlin suspending function decompiled into Java.

A function with a state

If a function has some state (like local variables or parameters) that


needs to be restored after suspension, this state needs to be kept in
this function’s continuation. Let’s consider the following function:

suspend fun myFunction() {


println("Before")
var counter = 0
delay(1000) // suspending
counter++
println("Counter: $counter")
println("After")
}

Here counter is needed in two states (for a label equal to 0 and 1),
so it needs to be kept in the continuation. It will be stored right be-
fore suspension. Restoring these kinds of properties happens at the
beginning of the function. So, this is how the (simplified) function
looks under the hood:

fun myFunction(continuation: Continuation<Unit>): Any {
    val continuation = continuation as? MyFunctionContinuation
        ?: MyFunctionContinuation(continuation)

    var counter = continuation.counter

    if (continuation.label == 0) {
        println("Before")
        counter = 0
        continuation.counter = counter
        continuation.label = 1
        if (delay(1000, continuation) == COROUTINE_SUSPENDED) {
            return COROUTINE_SUSPENDED
        }
    }
    if (continuation.label == 1) {
        counter = (counter as Int) + 1
        println("Counter: $counter")
        println("After")
        return Unit
    }
    error("Impossible")
}

class MyFunctionContinuation(
    val completion: Continuation<Unit>
) : Continuation<Unit> {
    override val context: CoroutineContext
        get() = completion.context

    var result: Result<Unit>? = null
    var label = 0
    var counter = 0

    override fun resumeWith(result: Result<Unit>) {
        this.result = result
        val res = try {
            val r = myFunction(this)
            if (r == COROUTINE_SUSPENDED) return
            Result.success(r as Unit)
        } catch (e: Throwable) {
            Result.failure(e)
        }
        completion.resumeWith(res)
    }
}

A function resumed with a value

The situation is slightly different if we actually expect some data


from the suspension. Let’s analyze the function below:

suspend fun printUser(token: String) {


println("Before")
val userId = getUserId(token) // suspending
println("Got userId: $userId")
val userName = getUserName(userId, token) // suspending
println(User(userId, userName))
println("After")
}

Here there are two suspending functions: getUserId and


getUserName. We also added a parameter token, and our suspending
function also returns some values. This all needs to be stored in the
continuation:

• token, because it is needed in states 0 and 1,


• userId, because it is needed in states 1 and 2,
• result of type Result, which represents how this function was
resumed.

If the function was resumed with a value, the result will be
Result.success(value). In such a case, we can get and use this
value. If it was resumed with an exception, the result will be
Result.failure(exception). In such a case, this exception will be
thrown.

fun printUser(
    token: String,
    continuation: Continuation<*>
): Any {
    val continuation = continuation as? PrintUserContinuation
        ?: PrintUserContinuation(
            continuation as Continuation<Unit>,
            token
        )

    var result: Result<Any>? = continuation.result
    var userId: String? = continuation.userId
    val userName: String

    if (continuation.label == 0) {
        println("Before")
        continuation.label = 1
        val res = getUserId(token, continuation)
        if (res == COROUTINE_SUSPENDED) {
            return COROUTINE_SUSPENDED
        }
        result = Result.success(res)
    }
    if (continuation.label == 1) {
        userId = result!!.getOrThrow() as String
        println("Got userId: $userId")
        continuation.label = 2
        continuation.userId = userId
        val res = getUserName(userId, continuation)
        if (res == COROUTINE_SUSPENDED) {
            return COROUTINE_SUSPENDED
        }
        result = Result.success(res)
    }
    if (continuation.label == 2) {
        userName = result!!.getOrThrow() as String
        println(User(userId as String, userName))
        println("After")
        return Unit
    }
    error("Impossible")
}

class PrintUserContinuation(
    val completion: Continuation<Unit>,
    val token: String
) : Continuation<String> {
    override val context: CoroutineContext
        get() = completion.context

    var label = 0
    var result: Result<Any>? = null
    var userId: String? = null

    override fun resumeWith(result: Result<String>) {
        this.result = result
        val res = try {
            val r = printUser(token, this)
            if (r == COROUTINE_SUSPENDED) return
            Result.success(r as Unit)
        } catch (e: Throwable) {
            Result.failure(e)
        }
        completion.resumeWith(res)
    }
}

The call stack

When function a calls function b, the virtual machine needs to store


the state of a somewhere, as well as the address where execution
should return once b is finished. All this is stored in a structure called
call stack¹⁵. The problem is that when we suspend, we free a thread;
as a result, we clear our call stack. Therefore, the call stack is not
useful when we resume. Instead, the continuations serve as a call
¹⁵The call stack has limited space. When it has all been used,
StackOverflowError occurs. Does this remind you of some popular
website we use to ask or answer technical questions?
stack. Each continuation keeps the state where we suspended (as a
label), the function's local variables and parameters (as fields), and
the reference to the continuation of the function that called this
function. One continuation references another, which references
another, etc. As a result, our continuation is like a huge onion: it
keeps everything that is generally kept on the call stack. Take a look
at the following example:

suspend fun a() {


val user = readUser()
b()
b()
b()
println(user)
}

suspend fun b() {


for (i in 1..10) {
c(i)
}
}

suspend fun c(i: Int) {


delay(i * 100L)
println("Tick")
}

A sample continuation could be represented as follows:

CContinuation(
i = 4,
label = 1,
completion = BContinuation(
i = 4,
label = 1,
completion = AContinuation(
label = 2,
user = User@1234,
completion = ...

)
)
)

Looking at the above representation, how many times


was “Tick” already printed (assume readUser is not a
suspending function)¹⁶?

When a continuation is resumed, each continuation first calls its


function; once this is done, that continuation resumes the contin-
uation of the function that called the function. This continuation
calls its function, and the process repeats until the top of the stack
is reached.

override fun resumeWith(result: Result<String>) {
    this.result = result
    val res = try {
        val r = printUser(token, this)
        if (r == COROUTINE_SUSPENDED) return
        Result.success(r as Unit)
    } catch (e: Throwable) {
        Result.failure(e)
    }
    completion.resumeWith(res)
}

For example, think of a situation where function a calls function


b, which calls function c, which is suspended. During resuming,
the c continuation first resumes the c function. Once this function
is done, the c continuation resumes the b continuation that calls
the b function. Once it is done, the b continuation resumes the a
continuation, which calls the a function.
The whole process can be visualized with the following sketch:
¹⁶The answer is 13. Since the label on AContinuation is 1, one b
function call has already finished (this means 10 ticks). Since i equals
4, three ticks have already been printed in this b function.

It’s similar with exceptions: an uncaught exception is caught in


resumeWith and then wrapped with Result.failure(e), and then the
function that called our function is resumed with this result.
I hope this all gives you some picture of what is going on when we
suspend. The state needs to be stored in a continuation, and the
suspension mechanism needs to be supported. When we resume, we
need to restore the state from the continuation and either use the
result or throw an exception.

The actual code

The actual code that continuations and suspending functions are


compiled to is more complicated as it includes optimizations and
some additional mechanisms, like:

• constructing a better exception stack trace;


• adding coroutine suspension interception (we will talk about
this feature later);
• optimizations on different levels (like removing unused vari-
ables or tail-call optimization).

Here is a part of the BaseContinuationImpl from Kotlin version


“1.5.30”; it shows the actual resumeWith implementation (other meth-
ods and some comments skipped):

internal abstract class BaseContinuationImpl(
    val completion: Continuation<Any?>?
) : Continuation<Any?>, CoroutineStackFrame, Serializable {
    // This implementation is final. This fact is used to
    // unroll resumeWith recursion.
    final override fun resumeWith(result: Result<Any?>) {
        // This loop unrolls recursion in
        // current.resumeWith(param) to make saner and
        // shorter stack traces on resume
        var current = this
        var param = result
        while (true) {
            // Invoke "resume" debug probe on every resumed
            // continuation, so that a debugging library
            // infrastructure can precisely track what part
            // of suspended call stack was already resumed
            probeCoroutineResumed(current)
            with(current) {
                val completion = completion!! // fail fast
                // when trying to resume continuation
                // without completion
                val outcome: Result<Any?> =
                    try {
                        val outcome = invokeSuspend(param)
                        if (outcome === COROUTINE_SUSPENDED)
                            return
                        Result.success(outcome)
                    } catch (exception: Throwable) {
                        Result.failure(exception)
                    }
                releaseIntercepted()
                // this state machine instance is terminating
                if (completion is BaseContinuationImpl) {
                    // unrolling recursion via loop
                    current = completion
                    param = outcome
                } else {
                    // top-level completion reached --
                    // invoke and return
                    completion.resumeWith(outcome)
                    return
                }
            }
        }
    }

    // ...
}

As you can see, it uses a loop instead of recursion. This change allows
the actual code to make some optimizations and simplifications.

The performance of suspending functions

What is the cost of using suspending functions instead of regular


ones? When looking under the hood, many people might have an
impression that the cost is significant, but this is not true. Dividing
a function into states is cheap as number comparison and execution
jumping costs nearly nothing. Saving a state in a continuation is
also cheap. We do not copy local variables: we make new variables
point to the same points in memory. The only operation that costs
something is creating a continuation class, but this is still not a big
deal. If you are not worried about the performance of RxJava or
callbacks, you should definitely not worry about the performance of
suspending functions.

Summary

What is actually under the hood is more complicated than what I’ve
described, but I hope that you’ve got some idea of the internals of
coroutines. The key lessons are:

• Suspending functions are like state machines, with a possible


state at the beginning of the function and after each suspend-
ing function call.
• Both the label identifying the state and the local data are kept
in the continuation object.
• Continuation of one function decorates a continuation of its
caller function; as a result, all these continuations represent a
call stack that is used when we resume or a resumed function
completes.

Coroutines: built-in support vs library

When talking about coroutines, it’s common to refer to them as


a single concept. In fact, they consist of two components: built-
in support provided by the Kotlin language (compiler support and
elements in the Kotlin standard library), and the Kotlin Coroutines
library (named kotlinx.coroutines). Sometimes they are treated as
the same entity, but they are very different from each other.
Built-in language support is designed to be minimalistic and give as
much freedom as possible. It can be used to reproduce practically
any concurrency style known from other programming languages,
but it is not convenient to use it directly. Most of its elements, such
as suspendCoroutine or Continuation, are supposed to be used by
library creators rather than by application developers.
On the other hand, we have the kotlinx.coroutines library. This is a
separate dependency that needs to be added to the project. It's built
on top of the built-in language support. It is way easier to use and
gives developers a concrete concurrency style.
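As a rough illustration of this difference (the callbackApi function
here is just a made-up stand-in for any callback-based API), the sketch
below wraps a callback with the built-in suspendCoroutine primitive and
then consumes it with the library's launch builder:

import kotlin.coroutines.resume
import kotlin.coroutines.suspendCoroutine
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

// A made-up callback-based API, used only for illustration.
fun callbackApi(onDone: (String) -> Unit) {
    onDone("result")
}

// Built-in support: suspendCoroutine and Continuation live in
// kotlin.coroutines and are aimed mainly at library authors.
suspend fun suspendingApi(): String = suspendCoroutine { continuation ->
    callbackApi { value -> continuation.resume(value) }
}

// kotlinx.coroutines: builders like launch and runBlocking offer a
// concrete, convenient concurrency style for application code.
fun main() = runBlocking {
    launch {
        println(suspendingApi()) // result
    }
    println("Waiting...")
}
// Waiting...
// result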

Built-in support                        kotlinx.coroutines library

Compiler support and elements in        Separate dependency that needs
the Kotlin standard library.            to be added to the project.

Elements are in the                     Elements are in the
kotlin.coroutines package.              kotlinx.coroutines package.

Minimalistic, provides a few basic      Provides many elements (like
elements (like Continuation or          launch, async, Deferred).
suspendCoroutine) and the suspend
keyword.

Hard to use directly.                   Designed for direct use.

Allows nearly any concurrency style.    Designed for one concrete
                                        concurrency style.

Currently, built-in support and the kotlinx.coroutines library are
nearly always used together, but that's not a requirement. Many
computer science papers¹⁷ show the universality of the suspension
concept. It was also shown by the team working on the Kotlin
Coroutines library. When looking for the best concurrency style,
they used built-in Kotlin support to reproduce the concurrency
styles from many other languages (like Go's Goroutines). The current
concurrency style offered by kotlinx.coroutines is elegant,
convenient, and aligned with other patterns in the programming
ecosystem. However, patterns and programming styles change over
time. Maybe one day our community will come up with a better
concurrency style. If so, someone will most likely be able to
implement it by using built-in Kotlin support and ship it as a
separate library. This promising new library might even replace the
kotlinx.coroutines library. Who knows what the future will bring?
So far in this book we have mostly focused on the built-in support.
From now on, we will be concentrating on the kotlinx.coroutines
library.

¹⁷For example, Revisiting coroutines (2009) by Ana Lúcia De


Moura and Roberto Ierusalimschy, and Continuations and corou-
tines (1984), by Christopher T. Haynes, Daniel P. Friedman, and
Mitchell Wand.
Part 2: Kotlin Coroutines
library
Now that we understand how built-in support works, it is time to
concentrate on the kotlinx.coroutines library. In this part, we will
learn everything that is needed to use it properly. We will explore
coroutine builders, different coroutine contexts, and how cancella-
tion works. We will finish with practical knowledge on how to set
up coroutines, how to test them, and how to safely access a shared
state.


Coroutine builders

Suspending functions need to pass continuations to one another.


They do not have any trouble calling normal functions, but normal
functions cannot call suspending functions.
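For instance, the compiler rejects a direct call from a regular
function; the short sketch below (with made-up function names) shows
what is and isn't allowed:

import kotlinx.coroutines.delay

suspend fun fetchGreeting(): String {
    delay(1000L)
    return "Hello"
}

suspend fun suspendingCaller() {
    println(fetchGreeting()) // fine: called from a suspending function
}

fun normalCaller() {
    // fetchGreeting() // does not compile: a suspend function may only
    // be called from a coroutine or another suspend function
}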

Every suspending function needs to be called by another suspending


function, which is called by another suspending function, and so on.
This all needs to start somewhere. It starts with a coroutine builder,
a bridge from the normal to the suspending world¹⁸.
We are going to explore the three essential coroutine builders pro-
vided by the kotlinx.coroutines library:

• launch
• runBlocking
• async

Each has its own use cases. Let’s explore them.

¹⁸It can also start from a suspending main, but even though we
use it a lot in examples, it is not helpful when we write Android or
backend applications. It is also good to know that a suspending main
is also just started by a builder, but we don’t see it because Kotlin does
that for us.

launch builder

The way launch works is conceptually similar to starting a new


thread (thread function). We just start a coroutine, and it will run
independently, like a firework that is launched into the air. This is
how we use launch - to start a process.

fun main() {
    GlobalScope.launch {
        delay(1000L)
        println("World!")
    }
    GlobalScope.launch {
        delay(1000L)
        println("World!")
    }
    GlobalScope.launch {
        delay(1000L)
        println("World!")
    }
    println("Hello,")
    Thread.sleep(2000L)
}
// Hello,
// (1 sec)
// World!
// World!
// World!

The launch function is an extension function on the CoroutineScope


interface. This is part of an important mechanism called structured
concurrency, whose purpose is to build a relationship between the
parent coroutine and a child coroutine. Later in this chapter, we
will learn about structured concurrency, but for now we will avoid
this topic by calling launch (and later async) on the GlobalScope
object. This is not a standard practice though as we should rarely use
GlobalScope in real-life projects.

Another thing you might have noticed is that at the end of the main
function we need to call Thread.sleep. Without doing this, this func-
tion would end immediately after launching the coroutines, so they

wouldn’t have a chance to do their job. This is because delay does


not block the thread: it suspends a coroutine. You might remember
from the How does suspension work? chapter that delay just sets a
timer to resume after a set amount of time, and suspends a coroutine
until then. If the thread is not blocked, nothing is busy, so nothing
stops the program from finishing (later we will see that if we use
structured concurrency, Thread.sleep is not needed).
To some degree, how launch works is similar to a daemon thread¹⁹
but much cheaper. This metaphor is useful initially but it becomes
problematic later. Maintaining a blocked thread is always costly,
while maintaining a suspended coroutine is almost free (as explained
in the Coroutines under the hood chapter). They both start some
independent processes and need something that will prevent the
program ending before they are done (in the example below, this is
Thread.sleep(2000L)).

fun main() {
    thread(isDaemon = true) {
        Thread.sleep(1000L)
        println("World!")
    }
    thread(isDaemon = true) {
        Thread.sleep(1000L)
        println("World!")
    }
    thread(isDaemon = true) {
        Thread.sleep(1000L)
        println("World!")
    }
    println("Hello,")
    Thread.sleep(2000L)
}

runBlocking builder

The general rule is that coroutines should never block threads, only
suspend them. On the other hand, there are cases in which blocking
¹⁹A daemon thread is a low-priority thread that runs in the back-
ground. Here I compare launch to a daemon thread because both do
not stop the program from ending.

is necessary. Like in the main function, we need to block the thread,


otherwise our program will end too early. For such cases, we might
use runBlocking.
runBlocking is a very atypical builder. It blocks the thread it has
been started on whenever its coroutine is suspended²⁰ (similar to
suspending main). This means that delay(1000L) inside runBlocking
will behave like Thread.sleep(1000L)²¹.

fun main() {
runBlocking {
delay(1000L)
println("World!")
}
runBlocking {
delay(1000L)
println("World!")
}
runBlocking {
delay(1000L)
println("World!")
}
println("Hello,")
}
// (1 sec)
// World!
// (1 sec)
// World!
// (1 sec)
// World!
// Hello,

²⁰To be more precise, it runs a new coroutine and blocks the


current thread interruptibly until its completion.
²¹Using a dispatcher, we can make runBlocking run on a different
thread. But still, the thread on which this builder has been started
will be blocked until the coroutine is done.

fun main() {
    Thread.sleep(1000L)
    println("World!")
    Thread.sleep(1000L)
    println("World!")
    Thread.sleep(1000L)
    println("World!")
    println("Hello,")
}
// (1 sec)
// World!
// (1 sec)
// World!
// (1 sec)
// World!
// Hello,

There are actually a couple of specific use cases in which runBlocking


is used. The first one is the main function, where we need to block
the thread, because otherwise the program will end. Another com-
mon use case is unit tests, where we need to block the thread for the
same reason.

fun main() = runBlocking {


// ...
}

class MyTests {

@Test
fun `a test`() = runBlocking {

}
}

We might use runBlocking in our example to replace


Thread.sleep(2000) with delay(2000). Later we will see that it
is even more useful once we introduce structured concurrency.

fun main() = runBlocking {
    GlobalScope.launch {
        delay(1000L)
        println("World!")
    }
    GlobalScope.launch {
        delay(1000L)
        println("World!")
    }
    GlobalScope.launch {
        delay(1000L)
        println("World!")
    }
    println("Hello,")
    delay(2000L) // still needed
}
// Hello,
// (1 sec)
// World!
// World!
// World!

runBlocking used to be an important builder, but in modern pro-


gramming it is used rather rarely. In unit tests, we often use its suc-
cessor runTest instead, which makes coroutines operate in virtual
time (a very useful feature for testing which we will describe in the
Testing coroutines chapter). For the main function, we often make it
suspending.

suspend fun main() {
    GlobalScope.launch {
        delay(1000L)
        println("World!")
    }
    GlobalScope.launch {
        delay(1000L)
        println("World!")
    }
    GlobalScope.launch {
        delay(1000L)
        println("World!")
    }
    println("Hello,")
    delay(2000L)
}
// Hello,
// (1 sec)
// World!
// World!
// World!

Suspending main is convenient, but for now we will keep using


runBlocking²².

async builder

The async coroutine builder is similar to launch, but it is designed


to produce a value. This value needs to be returned by the lambda ex-
pression²³. The async function returns an object of type Deferred<T>,
where T is the type of the produced value. Deferred has a suspending
method await, which returns this value once it is ready. In the
example below, the produced value is 42, and its type is Int, so
Deferred<Int> is returned, and await returns 42 of type Int.

fun main() = runBlocking {
    val resultDeferred: Deferred<Int> = GlobalScope.async {
        delay(1000L)
        42
    }
    // do other stuff...
    val result: Int = resultDeferred.await() // (1 sec)
    println(result) // 42
    // or just
    println(resultDeferred.await()) // 42
}

²²The reason is that runBlocking creates a scope, while suspending
main does not unless we use the coroutineScope function, which we
will introduce later.
²³To be strict with wording: by the argument of a functional type
placed on the last position.

Just like the launch builder, async starts a coroutine immediately


when it is called. So, it is a way to start a few processes at once
and then await all their results. The returned Deferred stores a
value inside itself once it is produced, so once it is ready it will be
immediately returned from await. However, if we call await before
the value is produced, we are suspended until the value is ready.

fun main() = runBlocking {
    val res1 = GlobalScope.async {
        delay(1000L)
        "Text 1"
    }
    val res2 = GlobalScope.async {
        delay(3000L)
        "Text 2"
    }
    val res3 = GlobalScope.async {
        delay(2000L)
        "Text 3"
    }
    println(res1.await())
    println(res2.await())
    println(res3.await())
}
// (1 sec)
// Text 1
// (2 sec)
// Text 2
// Text 3

How the async builder works is very similar to launch, but it has
additional support for returning a value. If all launch functions were
replaced with async, the code would still work fine. But don’t do
that! async is about producing a value, so if we don’t need a value, we
should use launch.

fun main() = runBlocking {
    // Don't do that!
    // this is misleading to use async as launch
    GlobalScope.async {
        delay(1000L)
        println("World!")
    }
    println("Hello,")
    delay(2000L)
}
// Hello,
// (1 sec)
// World!

The async builder is often used to parallelize two processes, such as


obtaining data from two different places, to combine them together.

scope.launch {
    val news = async {
        newsRepo.getNews()
            .sortedByDescending { it.date }
    }
    val newsSummary = newsRepo.getNewsSummary()
    // we could wrap it with async as well,
    // but it would be redundant
    view.showNews(
        newsSummary,
        news.await()
    )
}

Structured Concurrency

If a coroutine is started on GlobalScope, the program will not wait


for it. As previously mentioned, coroutines do not block any threads,
and nothing prevents the program from ending. This is why, in the
below example, an additional delay at the end of runBlocking needs
to be called if we want to see “World!” printed.

fun main() = runBlocking {
    GlobalScope.launch {
        delay(1000L)
        println("World!")
    }
    GlobalScope.launch {
        delay(2000L)
        println("World!")
    }
    println("Hello,")
    // delay(3000L)
}
// Hello,

Why do we need this GlobalScope in the first place? It is because


launch and async are extension functions on the CoroutineScope.
However, if you take a look at the definitions of these and of
runBlocking, you will see that the block parameter is a function type
whose receiver type is also CoroutineScope.

fun <T> runBlocking(
    context: CoroutineContext = EmptyCoroutineContext,
    block: suspend CoroutineScope.() -> T
): T

fun CoroutineScope.launch(
    context: CoroutineContext = EmptyCoroutineContext,
    start: CoroutineStart = CoroutineStart.DEFAULT,
    block: suspend CoroutineScope.() -> Unit
): Job

fun <T> CoroutineScope.async(
    context: CoroutineContext = EmptyCoroutineContext,
    start: CoroutineStart = CoroutineStart.DEFAULT,
    block: suspend CoroutineScope.() -> T
): Deferred<T>

This means that we can get rid of the GlobalScope; instead, launch
can be called on the receiver provided by runBlocking, so with
this.launch or simply launch. As a result, launch becomes a child of


runBlocking. As parents might recognize, a parental responsibility
is to wait for all their children, so runBlocking will suspend until all
its children are finished.

fun main() = runBlocking {
    this.launch { // same as just launch
        delay(1000L)
        println("World!")
    }
    launch { // same as this.launch
        delay(2000L)
        println("World!")
    }
    println("Hello,")
}
// Hello,
// (1 sec)
// World!
// (1 sec)
// World!

A parent provides a scope for its children, and they are called in
this scope. This builds a relationship that is called structured
concurrency. Here are the most important effects of the parent-child
relationship:

• children inherit context from their parent (but they can also
overwrite it, as will be explained in the Coroutine context chap-
ter);
• a parent suspends until all the children are finished (this will
be explained in the Job and children awaiting chapter);
• when the parent is cancelled, its child coroutines are cancelled
too (this will be explained in the Cancellation chapter);
• when a child raises an error, it destroys the parent as well (this
will be explained in the Exception handling chapter).

Notice that, unlike other coroutine builders, runBlocking is not an


extension function on CoroutineScope. This means that it cannot be
a child: it can only be used as a root coroutine (the parent of all the

children in a hierarchy). This means that runBlocking will be used in


different cases than other coroutines. As we mentioned before, this
is very different from other builders.

The bigger picture

Suspending functions need to be called from other suspending func-


tions. This all needs to start with a coroutine builder. Except for
runBlocking, builders need to be started on CoroutineScope. In our
simple examples, the scope is provided by runBlocking, but in bigger
applications it is either constructed by us (we will explain how to do
this in the Constructing coroutine scope chapter) or it is provided by
the framework we use (for instance, Ktor on a backend or Android
KTX on Android). Once the first builder is started on a scope, other
builders can be started on the scope of the first builder, and so on.
This is in essence how our applications are structured.
Here are a few examples of how coroutines are used in real-life
projects. The first two are typical for both backend and Android.
MainPresenter represents a case typical for Android. UserController
represents a case typical for backend applications.

class NetworkUserRepository(
    private val api: UserApi,
) : UserRepository {
    suspend fun getUser(): User = api.getUser().toDomainUser()
}

class NetworkNewsRepository(
    private val api: NewsApi,
    private val settings: SettingsRepository,
) : NewsRepository {

    suspend fun getNews(): List<News> = api.getNews()
        .map { it.toDomainNews() }

    suspend fun getNewsSummary(): List<News> {
        val type = settings.getNewsSummaryType()
        return api.getNewsSummary(type)
    }
}

class MainPresenter(
    private val view: MainView,
    private val userRepo: UserRepository,
    private val newsRepo: NewsRepository
) : BasePresenter {

    fun onCreate() {
        scope.launch {
            val user = userRepo.getUser()
            view.showUserData(user)
        }
        scope.launch {
            val news = async {
                newsRepo.getNews()
                    .sortedByDescending { it.date }
            }
            val newsSummary = async {
                newsRepo.getNewsSummary()
            }
            view.showNews(newsSummary.await(), news.await())
        }
    }
}

@Controller
class UserController(
    private val tokenService: TokenService,
    private val userService: UserService,
) {
    @GetMapping("/me")
    suspend fun findUser(
        @PathVariable userId: String,
        @RequestHeader("Authorization") authorization: String
    ): UserJson {
        val userId = tokenService.readUserId(authorization)
        val user = userService.findUserById(userId)
        return user.toJson()
    }
}

There is one problem though: what about suspending functions? We


can suspend there, but we do not have any scope. Passing scope as an
argument is not a good solution (as we will see in the Scoping func-
tions chapter). Instead, we should use the coroutineScope function,
which is a suspending function that creates a scope for builders.

Using coroutineScope

Imagine that in some repository function you need to


asynchronously load two resources, for example user data and
a list of articles. In this case, you want to return only those articles
that should be seen by the user. To call async, we need a scope, but
we don’t want to pass it to a function²⁴. To create a scope out of a
suspending function, we use the coroutineScope function.

suspend fun getArticlesForUser(
    userToken: String?,
): List<ArticleJson> = coroutineScope {
    val articles = async { articleRepository.getArticles() }
    val user = userService.getUser(userToken)
    articles.await()
        .filter { canSeeOnList(user, it) }
        .map { toArticleJson(it) }
}

coroutineScope is just a suspending function that creates a scope for


its lambda expression. The function returns whatever is returned by
the lambda expression (like let, run, use, or runBlocking). So, in the
above example, it returns List<ArticleJson> because this is what is
returned from the lambda expression.
coroutineScope is a standard function we use when we need a scope
inside a suspending function. It is really important. The way it is
designed is perfect for this use case, but to analyze it we first need to
learn a bit about context, cancelling, and exception handling. This
is why the function will be explained in detail later in a dedicated
chapter (Coroutine scope functions).
²⁴In the Coroutine scope functions chapter, we will explain in detail
why.

We can also start using the suspending main function together with
coroutineScope, which is a modern alternative to using the main
function with runBlocking.

suspend fun main(): Unit = coroutineScope {


launch {
delay(1000L)
println("World!")
}
println("Hello,")
}
// Hello,
// (1 sec)
// World!

A diagram showing how different kinds of elements of the kotlinx.coroutines library


are used. We generally start with a scope or runBlocking. In these, we can call other
builders or suspending functions. We cannot run builders on suspending functions,
so we use coroutine scope functions (like coroutineScope).

Summary

This knowledge is enough for most Kotlin coroutine usages. In most


cases, we just have suspending functions calling other suspending or
normal functions. If we need to introduce concurrent processing, we
wrap a function with a coroutineScope and use builders on its scope.
Everything needs to start with some builders called on some scope.
We will learn in later parts how to construct such a scope, but for
most projects it needs to be defined once and is touched only rarely.

Even though we have learned the essentials, there is still a lot to


learn about. In the next chapters, we will dive deeper into coroutines.
We will learn to use different contexts, how to tame cancellations,
exception handling, how to test coroutines, etc. There are still a lot
of great features to discover.

Coroutine context

If you take a look at the coroutine builders’ definitions, you will see
that their first parameter is of type CoroutineContext.

public fun CoroutineScope.launch(
    context: CoroutineContext = EmptyCoroutineContext,
    start: CoroutineStart = CoroutineStart.DEFAULT,
    block: suspend CoroutineScope.() -> Unit
): Job {
    ...
}

The receiver and the last argument’s receiver are of type


CoroutineScope²⁵. This CoroutineScope seems to be an important
concept, so let’s check out its definition:

public interface CoroutineScope {


public val coroutineContext: CoroutineContext
}

It seems to be just a wrapper around CoroutineContext. So, you


might want to recall how Continuation is defined.

public interface Continuation<in T> {


public val context: CoroutineContext
public fun resumeWith(result: Result<T>)
}

Continuation contains CoroutineContext as well. This type is used by


the most important Kotlin coroutine elements. This must be a really
important concept, so what is it?

²⁵Let’s clear up the nomenclature. launch is an extension function


on CoroutineContext, so CoroutineContext is its receiver type. The
extension function’s receiver is the object we reference with this.
Coroutine context 76

CoroutineContext interface

CoroutineContext is an interface that represents an element or a col-


lection of elements. It is conceptually similar to a map or a set collec-
tion: it is an indexed set of Element instances like Job, CoroutineName,
CoroutineDispatcher, etc. The unusual thing is that each Element
is also a CoroutineContext. So, every element in a collection is a
collection in itself.
This concept is quite intuitive. Imagine a mug. It is a single element,
but it is also a collection that contains a single element. When you
add another mug, you have a collection with two elements.
In order to allow convenient context specification and modification,
each CoroutineContext element is a CoroutineContext itself, as in
the example below (adding contexts and setting a coroutine builder
context will be explained later). Just specifying or adding contexts is
much easier than creating an explicit set.

launch(CoroutineName("Name1")) { ... }
launch(CoroutineName("Name2") + Job()) { ... }

Every element in this set has a unique Key that is used to identify it.
These keys are compared by reference.
For example, CoroutineName or Job implement CoroutineContext.Element,
which implements the CoroutineContext interface.

fun main() {
    val name: CoroutineName = CoroutineName("A name")
    val element: CoroutineContext.Element = name
    val context: CoroutineContext = element

    val job: Job = Job()
    val jobElement: CoroutineContext.Element = job
    val jobContext: CoroutineContext = jobElement
}

It’s the same with SupervisorJob, CoroutineExceptionHandler and


dispatchers from the Dispatchers object. These are the most impor-
tant coroutine contexts. They will be explained in the next chapters.

Finding elements in CoroutineContext

Since CoroutineContext is like a collection, we can find an element


with a concrete key using get. Another option is to use square brack-
ets, because in Kotlin the get method is an operator and can be
invoked using square brackets instead of an explicit function call.
Just like in Map: when an element is in the context, it will be returned.
If it is not, null will be returned instead.

fun main() {
    val ctx: CoroutineContext = CoroutineName("A name")

    val coroutineName: CoroutineName? = ctx[CoroutineName]
    // or ctx.get(CoroutineName)
    println(coroutineName?.name) // A name
    val job: Job? = ctx[Job] // or ctx.get(Job)
    println(job) // null
}

CoroutineContext is part of the built-in support
for Kotlin coroutines, so it is imported from
kotlin.coroutines, while contexts like Job or
CoroutineName are part of the kotlinx.coroutines library,
so they need to be imported from kotlinx.coroutines.

To find a CoroutineName, we use just CoroutineName. This is not a type


or a class: it is a companion object. It is a feature of Kotlin that a name
of a class used by itself acts as a reference to its companion object, so
ctx[CoroutineName] is just a shortcut to ctx[CoroutineName.Key].

data class CoroutineName(
    val name: String
) : AbstractCoroutineContextElement(CoroutineName) {

    override fun toString(): String = "CoroutineName($name)"

    companion object Key : CoroutineContext.Key<CoroutineName>
}

It is common practice in the kotlinx.coroutines library to use


companion objects as keys to elements with the same name. This

makes it easier to remember²⁶. A key might point to a class (like


CoroutineName) or to an interface (like Job) that is implemented by
many classes with the same key (like Job and SupervisorJob).

interface Job : CoroutineContext.Element {

    companion object Key : CoroutineContext.Key<Job>

    // ...
}

Adding contexts

What makes CoroutineContext truly useful is the ability to merge


two of them together.
When two elements with different keys are added, the resulting
context responds to both keys.

fun main() {
    val ctx1: CoroutineContext = CoroutineName("Name1")
    println(ctx1[CoroutineName]?.name) // Name1
    println(ctx1[Job]?.isActive) // null

    val ctx2: CoroutineContext = Job()
    println(ctx2[CoroutineName]?.name) // null
    println(ctx2[Job]?.isActive) // true, because "Active"
    // is the default state of a job created this way

    val ctx3 = ctx1 + ctx2
    println(ctx3[CoroutineName]?.name) // Name1
    println(ctx3[Job]?.isActive) // true
}

When another element with the same key is added, just like in a map,
the new element replaces the previous one.

²⁶The companion object below is named Key. We can name com-


panion objects, but this changes little in terms of how they are used.
The default companion object name is Companion, so this name is
used when we need to reference this object using reflection or when
we define an extension function on it. Here we use Key instead.

fun main() {
val ctx1: CoroutineContext = CoroutineName("Name1")
println(ctx1[CoroutineName]?.name) // Name1

val ctx2: CoroutineContext = CoroutineName("Name2")


println(ctx2[CoroutineName]?.name) // Name2

val ctx3 = ctx1 + ctx2


println(ctx3[CoroutineName]?.name) // Name2
}

Empty coroutine context

Since CoroutineContext is like a collection, we also have an empty


context. Such a context by itself returns no elements; if we add it to
another context, it behaves exactly like this other context.

fun main() {
val empty: CoroutineContext = EmptyCoroutineContext
println(empty[CoroutineName]) // null
println(empty[Job]) // null

val ctxName = empty + CoroutineName("Name1") + empty


println(ctxName[CoroutineName]) // CoroutineName(Name1)
}

Subtracting elements

Elements can also be removed from a context by their key using the
minusKey function.

The minus operator is not overloaded for


CoroutineContext. I believe this is because its meaning
would not be clear enough, as explained in Effective
Kotlin Item 12: An operator’s meaning should be consistent
with its function name.

fun main() {
val ctx = CoroutineName("Name1") + Job()
println(ctx[CoroutineName]?.name) // Name1
println(ctx[Job]?.isActive) // true

    val ctx2 = ctx.minusKey(CoroutineName)


println(ctx2[CoroutineName]?.name) // null
println(ctx2[Job]?.isActive) // true

val ctx3 = (ctx + CoroutineName("Name2"))


.minusKey(CoroutineName)
println(ctx3[CoroutineName]?.name) // null
println(ctx3[Job]?.isActive) // true
}

Folding context

If we need to do something for each element in a context, we can


use the fold method, which is similar to fold for other collections.
It takes:

• an initial accumulator value;


• an operation to produce the next state of the accumulator,
based on the current state, and the element it is currently
invoked in.

fun main() {
    val ctx = CoroutineName("Name1") + Job()

    ctx.fold("") { acc, element -> "$acc$element " }
        .also(::println)
    // CoroutineName(Name1) JobImpl{Active}@dbab622e

    val empty = emptyList<CoroutineContext>()
    ctx.fold(empty) { acc, element -> acc + element }
        .joinToString()
        .also(::println)
    // CoroutineName(Name1), JobImpl{Active}@dbab622e
}

Coroutine context and builders

So CoroutineContext is just a way to hold and pass data. By default,


the parent passes its context to the child, which is one of the parent-
child relationship effects. We say that the child inherits context from
its parent.

fun CoroutineScope.log(msg: String) {
    val name = coroutineContext[CoroutineName]?.name
    println("[$name] $msg")
}

fun main() = runBlocking(CoroutineName("main")) {
    log("Started") // [main] Started
    val v1 = async {
        delay(500)
        log("Running async") // [main] Running async
        42
    }
    launch {
        delay(1000)
        log("Running launch") // [main] Running launch
    }
    log("The answer is ${v1.await()}")
    // [main] The answer is 42
}

Each child might have a specific context defined in the argument.


This context overrides the one from the parent.

fun main() = runBlocking(CoroutineName("main")) {
    log("Started") // [main] Started
    val v1 = async(CoroutineName("c1")) {
        delay(500)
        log("Running async") // [c1] Running async
        42
    }
    launch(CoroutineName("c2")) {
        delay(1000)
        log("Running launch") // [c2] Running launch
    }
    log("The answer is ${v1.await()}")
    // [main] The answer is 42
}

A simplified formula to calculate a coroutine context is:

defaultContext + parentContext + childContext

Since new elements always replace old ones with the same key,
the child context always overrides elements with the same key
from the parent context. The defaults are used only for keys that
are not specified anywhere else. Currently, the defaults only set
Dispatchers.Default when no ContinuationInterceptor is set, and
they only set CoroutineId when the application is in debug mode.
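To make the formula concrete, here is a minimal sketch (the names are
arbitrary) using plain context addition: an element from the child
context replaces the parent's element with the same key, while keys
the child does not specify are kept:

import kotlinx.coroutines.CoroutineName
import kotlinx.coroutines.Job

fun main() {
    val parentContext = CoroutineName("parent") + Job()
    val childContext = CoroutineName("child")
    val combined = parentContext + childContext

    println(combined[CoroutineName]?.name) // child
    println(combined[Job]?.isActive) // true, the parent's Job element is kept
}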
There is a special context called Job, which is mutable and is used to
communicate between a coroutine’s child and its parent. The next
chapters will be dedicated to the effects of this communication.

Accessing context in a suspending function

CoroutineScope has a coroutineContext property that can be used


to access the context. But what if we are in a regular suspending
function? As you might remember from the Coroutines under the
hood chapter, context is referenced by continuations, which are
passed to each suspending function. So, it is possible to access a
parent’s context in a suspending function. To do this, we use the
coroutineContext property, which is available on every suspending
scope.

suspend fun printName() {


println(coroutineContext[CoroutineName]?.name)
}

suspend fun main() = withContext(CoroutineName("Outer")) {


printName() // Outer
launch(CoroutineName("Inner")) {
printName() // Inner

}
delay(10)
printName() // Outer
}

Creating our own context

It is not a common need, but we can create our own coroutine context
pretty easily. To do this, the easiest way is to create a class that
implements the [Link] interface. Such a class
needs a property key of type [Link]<*>. This key will
be used as the key that identifies this context. The common practice
is to use this class’s companion object as a key. This is how a very
simple coroutine context can be implemented:

class MyCustomContext : CoroutineContext.Element {

    override val key: CoroutineContext.Key<*> = Key

    companion object Key :
        CoroutineContext.Key<MyCustomContext>
}

Such a context will behave a lot like CoroutineName: it will propagate


from parent to child, but any children will be able to override it with
a different context with the same key. To see this in practice, below
you can see an example context that is designed to print consecutive
numbers.

class CounterContext(
    private val name: String
) : CoroutineContext.Element {
    override val key: CoroutineContext.Key<*> = Key
    private var nextNumber = 0

    fun printNext() {
        println("$name: $nextNumber")
        nextNumber++
    }

    companion object Key : CoroutineContext.Key<CounterContext>
}

suspend fun printNext() {


coroutineContext[CounterContext]?.printNext()
}

suspend fun main(): Unit =


withContext(CounterContext("Outer")) {
printNext() // Outer: 0
launch {
printNext() // Outer: 1
launch {
printNext() // Outer: 2
}
launch(CounterContext("Inner")) {
printNext() // Inner: 0
printNext() // Inner: 1
launch {
printNext() // Inner: 2
}
}
}
printNext() // Outer: 3
}

I have seen custom contexts in use as a kind of dependency injection -


to easily inject different values in production than in tests. However,
I don’t think this will become standard practice.

data class User(val id: String, val name: String)

abstract class UuidProviderContext :
    CoroutineContext.Element {

    abstract fun nextUuid(): String

    override val key: CoroutineContext.Key<*> = Key

    companion object Key :
        CoroutineContext.Key<UuidProviderContext>
}

class RealUuidProviderContext : UuidProviderContext() {
    override fun nextUuid(): String =
        UUID.randomUUID().toString()
}

class FakeUuidProviderContext(
private val fakeUuid: String
) : UuidProviderContext() {
override fun nextUuid(): String = fakeUuid
}

suspend fun nextUuid(): String =


checkNotNull(coroutineContext[UuidProviderContext]) {
"UuidProviderContext not present" }
.nextUuid()

// function under test


suspend fun makeUser(name: String) = User(
id = nextUuid(),
name = name
)

suspend fun main(): Unit {


// production case
withContext(RealUuidProviderContext()) {
println(makeUser("Michał"))
// e.g. User(id=d260482a-..., name=Michał)
}

// test case
withContext(FakeUuidProviderContext("FAKE_UUID")) {
val user = makeUser("Michał")
println(user) // User(id=FAKE_UUID, name=Michał)
assertEquals(User("FAKE_UUID", "Michał"), user)

}
}

Summary

CoroutineContext is conceptually similar to a map or a set collection.


It is an indexed set of Element instances, where each Element is also
a CoroutineContext. Every element in it has a unique Key that is
used to identify it. This way, CoroutineContext is just a universal
way to group and pass objects to coroutines. These objects are kept
by the coroutines and can determine how these coroutines should
be running (what their state is, in which thread, etc). In the next
chapters, we will discuss the most essential coroutine contexts in the
Kotlin coroutines library.

Jobs and awaiting children

In the Structured Concurrency chapter, we mentioned the following


consequences of the parent-child relationship:

• children inherit context from their parent;


• a parent suspends until all the children are finished;
• when the parent is cancelled, its child coroutines are also
cancelled;
• when a child raises an error, it destroys the parent as well.

The fact that a child inherits its context from its parent is a basic part
of a coroutine builder’s behavior.

fun main(): Unit = runBlocking(CoroutineName("main")) {


val name = coroutineContext[CoroutineName]?.name
println(name) // main
launch {
delay(1000)
val name = coroutineContext[CoroutineName]?.name
println(name) // main
}
}

The other three important consequences of structured concurrency


depend fully on the Job context. Furthermore, Job can be used to
cancel coroutines, track their state, and much more. It is really
important and useful, so this and the next two chapters are dedicated
to the Job context and the essential Kotlin Coroutine mechanisms
that are connected to it.

What is Job?

Conceptually, a job represents a cancellable thing with a lifecycle.


Formally, Job is an interface, but it has a concrete contract and state,
so it might be treated similarly to an abstract class.
A job lifecycle is represented by its state. Here is a graph of states and
the transitions between them:

A diagram of job (so also coroutine) states.

In the “Active” state, a job is running and doing its job. If the job
is created with a coroutine builder, this is the state where the body
of this coroutine will be executed. In this state, we can start child
coroutines. Most coroutines will start in the “Active” state. Only
those that are started lazily will start with the “New” state. These
need to be started in order for them to move to the “Active” state.
When a coroutine is executing its body, it is surely in the “Active”
state. When it is done, its state changes to “Completing”, where it
waits for its children. Once all its children are done, the job changes
its state to “Completed”, which is a terminal one. Alternatively, if a
job cancels or fails when running (in the “Active” or “Completing”
state), its state will change to “Cancelling”. In this state, we have the
last chance to do some clean-up, like closing connections or freeing
resources (we will see how to do this in the next chapter). Once this
is done, the job will move to the “Cancelled” state.

The state is displayed in a job’s toString²⁷. In the example below,


we see different jobs as their states change. The last one is started
lazily, which means it does not start automatically. All the others
will immediately become active once created.
The code below presents Job in different states. I use join to await
coroutine completion. This will be explained later.

suspend fun main() = coroutineScope {
    // Job created with a builder is active
    val job = Job()
    println(job) // JobImpl{Active}@ADD
    // until we complete it with a method
    job.complete()
    println(job) // JobImpl{Completed}@ADD

    // launch is initially active by default
    val activeJob = launch {
        delay(1000)
    }
    println(activeJob) // StandaloneCoroutine{Active}@ADD
    // here we wait until this job is done
    activeJob.join() // (1 sec)
    println(activeJob) // StandaloneCoroutine{Completed}@ADD

    // launch started lazily is in New state
    val lazyJob = launch(start = CoroutineStart.LAZY) {
        delay(1000)
    }
    println(lazyJob) // LazyStandaloneCoroutine{New}@ADD
    // we need to start it, to make it active
    lazyJob.start()
    println(lazyJob) // LazyStandaloneCoroutine{Active}@ADD
    lazyJob.join() // (1 sec)
    println(lazyJob) // LazyStandaloneCoroutine{Completed}@ADD
}

²⁷I hope I do not need to remind the reader that toString should
be used for debugging and logging purposes; it should not be parsed
in code as this would break this function’s contract, as I described in
Effective Kotlin.

To check the state in code, we use the properties isActive,


isCompleted, and isCancelled.

State                              isActive   isCompleted   isCancelled

New (optional initial state)       false      false         false
Active (default initial state)     true       false         false
Completing (transient state)       true       false         false
Cancelling (transient state)       false      false         true
Cancelled (final state)            false      true          true
Completed (final state)            false      true          false
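To see these flags in code, here is a minimal sketch (not taken from
the examples above) that reads them before and after a job completes:

import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val job = launch {
        delay(1000)
    }
    // started eagerly, so already in the "Active" state
    println(job.isActive) // true
    println(job.isCompleted) // false
    println(job.isCancelled) // false

    job.join()

    // the job has reached the final "Completed" state
    println(job.isActive) // false
    println(job.isCompleted) // true
    println(job.isCancelled) // false
}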

As mentioned above, each coroutine has its own job. Let’s see how we
can access and use it.

Coroutine builders create their jobs based on their


parent job

Every coroutine builder from the Kotlin Coroutines library creates


its own job. Most coroutine builders return their jobs, so it can be
used elsewhere. This is clearly visible for launch, where Job is an
explicit result type.

fun main(): Unit = runBlocking {


val job: Job = launch {
delay(1000)
println("Test")
}
}

The type returned by the async function is Deferred<T>, and


Deferred<T> also implements the Job interface, so it can also be used
in the same way.

fun main(): Unit = runBlocking {


val deferred: Deferred<String> = async {
delay(1000)
"Test"
}
val job: Job = deferred
}

Since Job is a coroutine context, we can access it using


coroutineContext[Job]. However, there is also an extension
property job, which lets us access the job more easily.

// extension
val CoroutineContext.job: Job
    get() = get(Job) ?: error("Current context doesn't...")

// usage
fun main(): Unit = runBlocking {
    print(coroutineContext.job.isActive) // true
}

There is a very important rule: Job is the only coroutine context that
is not inherited by a coroutine from a coroutine. Every coroutine cre-
ates its own Job, and the job from an argument or parent coroutine
is used as a parent of this new job.

fun main(): Unit = runBlocking {
    val name = CoroutineName("Some name")
    val job = Job()

    launch(name + job) {
        val childName = coroutineContext[CoroutineName]
        println(childName == name) // true
        val childJob = coroutineContext[Job]
        println(childJob == job) // false
        println(childJob == job.children.first()) // true
    }
}

The parent can reference all its children, and the children can refer
to the parent. This parent-child relationship (Job reference storing)
enables the implementation of cancellation and exception handling
inside a coroutine’s scope.

fun main(): Unit = runBlocking {
    val job: Job = launch {
        delay(1000)
    }

    val parentJob: Job = coroutineContext.job
    // or coroutineContext[Job]!!
    println(job == parentJob) // false
    val parentChildren: Sequence<Job> = parentJob.children
    println(parentChildren.first() == job) // true
}

Structured concurrency mechanisms will not work if a new Job


context replaces the one from the parent. To see this, we might
use the Job() function, which creates a Job context (this will be
explained later).

fun main(): Unit = runBlocking {


launch(Job()) { // the new job replaces one from parent
delay(1000)
println("Will not be printed")
}
}
// (prints nothing, finishes immediately)

In the above example, the parent does not wait for its children
because it has no relation with them. This is because the child uses
the job from the argument as a parent, so it has no relation to the
runBlocking.

When a coroutine has its own (independent) job, it has nearly no


relation to its parent. It only inherits other contexts, but other
results of the parent-child relationship will not apply. This causes us
to lose structured concurrency, which is a problematic situation that
should be avoided.

Waiting for children

The first important advantage of a job is that it can be used to wait


until the coroutine is completed. For that, we use the join method.
This is a suspending function that suspends until a concrete job
reaches a final state (either Completed or Cancelled).

fun main(): Unit = runBlocking {


val job1 = launch {
delay(1000)
println("Test1")
}
val job2 = launch {
delay(2000)
println("Test2")
}

    job1.join()
    job2.join()
println("All tests are done")
}
// (1 sec)
// Test1
// (1 sec)
// Test2
// All tests are done

The Job interface also exposes a children property that lets us refer-
ence all its children. We might as well use it to wait until all children
are in a final state.

fun main(): Unit = runBlocking {
    launch {
        delay(1000)
        println("Test1")
    }
    launch {
        delay(1000)
        println("Test2")
    }

    val children = coroutineContext[Job]
        ?.children

    val childrenNum = children?.count()
    println("Number of children: $childrenNum")
    children?.forEach { it.join() }
    println("All tests are done")
}
// Number of children: 2
// (1 sec)
// Test1
// Test2
// All tests are done

Job factory function

A Job can be created without a coroutine using the Job() factory


function. It creates a job that isn’t associated with any coroutine and
can be used as a context. This also means that we can use such a job
as a parent of many coroutines.
A common mistake is to create a job using the Job() factory function,
use it as a parent for some coroutines, and then use join on the job.
Such a program will never end because Job is still in an active state,
even when all its children are finished. This is because this context
is still ready to be used by other coroutines.

suspend fun main(): Unit = coroutineScope {


val job = Job()
launch(job) { // the new job replaces one from parent
delay(1000)
println("Text 1")
}
launch(job) { // the new job replaces one from parent
delay(2000)
println("Text 2")
}
    job.join() // Here we will await forever
    println("Will not be printed")

}
// (1 sec)
// Text 1
// (1 sec)
// Text 2
// (runs forever)

A better approach would be to join all the current children of the job.

suspend fun main(): Unit = coroutineScope {


val job = Job()
launch(job) { // the new job replaces one from parent
delay(1000)
println("Text 1")
}
launch(job) { // the new job replaces one from parent
delay(2000)
println("Text 2")
}
    job.children.forEach { it.join() }
}
// (1 sec)
// Text 1
// (1 sec)
// Text 2

Job() is a great example of a factory function. At first, you might


think that you’re calling a constructor of Job, but you might then
realize that Job is an interface, and interfaces cannot have construc-
tors. The reality is that it is a fake constructor²⁸ - a simple function
that looks like a constructor. Moreover, the actual type returned by
this function is not a Job but its subinterface CompletableJob.

public fun Job(parent: Job? = null): CompletableJob

The CompletableJob interface extends the functionality of the Job


interface by providing two additional methods:
²⁸A pattern that is well described in Effective Kotlin Item 33: Con-
sider factory functions instead of constructors.

• complete(): Boolean - used to complete a job. Once it is used,


all the child coroutines will keep running until they are all
done, but new coroutines cannot be started in this job. The
result is true if this job was completed as a result of this
invocation; otherwise, it is false (if it was already completed).

fun main() = runBlocking {
    val job = Job()

    launch(job) {
        repeat(5) { num ->
            delay(200)
            println("Rep$num")
        }
    }

    launch {
        delay(500)
        job.complete()
    }

    job.join()

    launch(job) {
        println("Will not be printed")
    }

    println("Done")
}
// Rep0
// Rep1
// Rep2
// Rep3
// Rep4
// Done

• completeExceptionally(exception: Throwable): Boolean - Completes this job with a given exception. This means that all children will be cancelled immediately (with CancellationException wrapping the exception provided as an argument). The result, just like in the above function, responds to the question: “Was this job finished because of the invocation?”.

fun main() = runBlocking {
    val job = Job()

    launch(job) {
        repeat(5) { num ->
            delay(200)
            println("Rep$num")
        }
    }

    launch {
        delay(500)
        job.completeExceptionally(Error("Some error"))
    }

    job.join()

    launch(job) {
        println("Will not be printed")
    }

    println("Done")
}
// Rep0
// Rep1
// Done

The complete function is often used after we start the last coroutine
on a job. Thanks to this, we can just wait for the job completion using
the join function.

suspend fun main(): Unit = coroutineScope {
    val job = Job()
    launch(job) { // the new job replaces one from parent
        delay(1000)
        println("Text 1")
    }
    launch(job) { // the new job replaces one from parent
        delay(2000)
        println("Text 2")
    }
    job.complete()
    job.join()
}
// (1 sec)
// Text 1
// (1 sec)
// Text 2

You can pass a reference to the parent as an argument of the Job


function. Thanks to this, such a job will be cancelled when the parent
is.

suspend fun main(): Unit = coroutineScope {
    val parentJob = Job()
    val job = Job(parentJob)
    launch(job) {
        delay(1000)
        println("Text 1")
    }
    launch(job) {
        delay(2000)
        println("Text 2")
    }
    delay(1100)
    parentJob.cancel()
    job.children.forEach { it.join() }
}
// Text 1

The next two chapters describe cancellation and exception handling


in Kotlin Coroutines. These two important mechanisms fully de-
pend on the child-parent relationship created using Job.

Cancellation

A very important functionality of Kotlin Coroutines is cancellation.


It is so important that some classes and libraries use suspending
functions primarily to support cancellation²⁹. There is a good reason
for this: a good cancellation mechanism is worth its weight in gold³⁰.
Just killing a thread is a terrible solution as there should be an
opportunity to close connections and free resources. Forcing devel-
opers to frequently check if some state is still active isn’t convenient
either. The problem of cancellation waited for a good solution for
a very long time, but what Kotlin Coroutines offer is surprisingly
simple: they are convenient and safe. This is the best cancellation
mechanism I’ve seen in my career. Let’s explore it.

Basic cancellation

The Job interface has a cancel method, which allows its cancellation.
Calling it triggers the following effects:

• Such a coroutine ends the job at the first suspension point


(delay in the example below).
• If a job has some children, they are also cancelled (but its
parent is not affected).
• Once a job is cancelled, it cannot be used as a parent for any
new coroutines. It is first in the “Cancelling” and then in the
“Cancelled” state.

²⁹A good example is CoroutineWorker on Android, where accord-


ing to the presentation Understand Kotlin Coroutines on Android on
Google I/O’19 by Sean McQuillan and Yigit Boyar (both working on
Android at Google), support for coroutines was added primarily to
use the cancellation mechanism.
³⁰Actually, it’s worth much more since the code is currently not
very heavy (it used to be, when it was stored on punched cards).

suspend fun main(): Unit = coroutineScope {
    val job = launch {
        repeat(1_000) { i ->
            delay(200)
            println("Printing $i")
        }
    }

    delay(1100)
    job.cancel()
    job.join()
    println("Cancelled successfully")
}
// Printing 0
// Printing 1
// Printing 2
// Printing 3
// Printing 4
// Cancelled successfully

We might cancel with a different exception (by passing an exception as an argument to the cancel function) to specify the cause. This cause needs to be a subtype of CancellationException, because only an exception of this type can be used to cancel a coroutine.
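Below is a minimal sketch (not from the book) of cancelling with a custom cause; UserLeftScreenException is a hypothetical CancellationException subtype made up for this example, and the cause can be inspected in the coroutine that is being cancelled.

class UserLeftScreenException :
    CancellationException("User left the screen")

suspend fun main(): Unit = coroutineScope {
    val job = launch {
        try {
            delay(2000)
            println("Will not be printed")
        } catch (e: CancellationException) {
            // the cause passed to cancel is delivered here
            println("Cancelled because: ${e.message}")
            throw e
        }
    }
    delay(500)
    job.cancel(UserLeftScreenException())
    job.join()
}
// (0.5 sec)
// Cancelled because: User left the screen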
After cancel, we often also add join to wait for the cancellation to
finish before we can proceed. Without this, we would have some race
conditions. The snippet below shows an example in which without
join we will see “Printing 4” after “Cancelled successfully”.

suspend fun main() = coroutineScope {
    val job = launch {
        repeat(1_000) { i ->
            delay(100)
            Thread.sleep(100) // We simulate long operation
            println("Printing $i")
        }
    }

    delay(1000)
    job.cancel()
    println("Cancelled successfully")
}
// Printing 0
// Printing 1
// Printing 2
// Printing 3
// Cancelled successfully
// Printing 4

Adding job.join() would change this because it suspends until a coroutine has finished cancellation.

suspend fun main() = coroutineScope {
    val job = launch {
        repeat(1_000) { i ->
            delay(100)
            Thread.sleep(100) // We simulate long operation
            println("Printing $i")
        }
    }

    delay(1000)
    job.cancel()
    job.join()
    println("Cancelled successfully")
}
// Printing 0
// Printing 1
// Printing 2
// Printing 3
// Printing 4
// Cancelled successfully

To make it easier to call cancel and join together, the kotlinx.coroutines library offers a convenient extension function with a self-descriptive name, cancelAndJoin.

// The most explicit function name I've ever seen
public suspend fun Job.cancelAndJoin() {
    cancel()
    return join()
}

A job created using the Job() factory function can be cancelled in


the same way. This is often used to make it easy to cancel many
coroutines at once.

suspend fun main(): Unit = coroutineScope {
    val job = Job()
    launch(job) {
        repeat(1_000) { i ->
            delay(200)
            println("Printing $i")
        }
    }
    delay(1100)
    job.cancelAndJoin()
    println("Cancelled successfully")
}
// Printing 0
// Printing 1
// Printing 2
// Printing 3
// Printing 4
// Cancelled successfully

This is a crucial capability. On many platforms, we often need to


cancel a group of concurrent tasks. For instance, in Android, we
cancel all the coroutines started in a view when a user leaves this
view.

class ProfileViewModel : ViewModel() {
    private val scope =
        CoroutineScope(Dispatchers.Main + SupervisorJob())

    fun onCreate() {
        scope.launch { loadUserData() }
    }

    override fun onCleared() {
        scope.coroutineContext.cancelChildren()
    }

    // ...
}

How does cancellation work?

When a job is cancelled, it changes its state to “Cancelling”. Then, at


the first suspension point, a CancellationException is thrown. This
exception can be caught using a try-catch, but it is recommended to
rethrow it.

suspend fun main(): Unit = coroutineScope {
    val job = Job()
    launch(job) {
        try {
            repeat(1_000) { i ->
                delay(200)
                println("Printing $i")
            }
        } catch (e: CancellationException) {
            println(e)
            throw e
        }
    }
    delay(1100)
    job.cancelAndJoin()
    println("Cancelled successfully")
    delay(1000)
}
// Printing 0
// Printing 1
// Printing 2
// Printing 3
// Printing 4
// JobCancellationException...
// Cancelled successfully

Keep in mind that a cancelled coroutine is not just stopped: it is


cancelled internally using an exception. Therefore, we can freely
clean up everything inside the finally block. For instance, we can
use a finally block to close a file or a database connection. Since
most resource-closing mechanisms rely on the finally block (for
instance, if we read a file using useLines), we simply do not need to
worry about them.

suspend fun main(): Unit = coroutineScope {
    val job = Job()
    launch(job) {
        try {
            delay(Random.nextLong(2000))
            println("Done")
        } finally {
            print("Will always be printed")
        }
    }
    delay(1000)
    job.cancelAndJoin()
}
// Will always be printed
// (or)
// Done
// Will always be printed

Just one more call

Since we can catch CancellationException and invoke more operations before the coroutine truly ends, you might be wondering where the limit is. The coroutine can run as long as it needs to clean up all the resources. However, suspension is no longer allowed. The Job is already in a “Cancelling” state, in which suspension or starting another coroutine is not possible at all. If we try to start another coroutine, it will just be ignored. If we try to suspend, it will throw CancellationException.

suspend fun main(): Unit = coroutineScope {
    val job = Job()
    launch(job) {
        try {
            delay(2000)
            println("Job is done")
        } finally {
            println("Finally")
            launch { // will be ignored
                println("Will not be printed")
            }
            delay(1000) // here exception is thrown
            println("Will not be printed")
        }
    }
    delay(1000)
    job.cancelAndJoin()
    println("Cancel done")
}
// (1 sec)
// Finally
// Cancel done

Sometimes, we truly need to use a suspending call when a coroutine


is already cancelled. For instance, we might need to roll back changes
in a database. In this case, the preferred way is to wrap this call with
the withContext(NonCancellable) function. We will later explain in
detail how withContext works. For now, all we need to know is that it
changes the context of a block of code. Inside withContext, we used
the NonCancellable object, which is a Job that cannot be cancelled.
So, inside the block the job is in the active state, and we can call
whatever suspending functions we want.

suspend fun main(): Unit = coroutineScope {
    val job = Job()
    launch(job) {
        try {
            delay(200)
            println("Coroutine finished")
        } finally {
            println("Finally")
            withContext(NonCancellable) {
                delay(1000L)
                println("Cleanup done")
            }
        }
    }
    delay(100)
    job.cancelAndJoin()
    println("Done")
}
// Finally
// Cleanup done
// Done

invokeOnCompletion

Another mechanism that is often used to free resources is the


invokeOnCompletion function from Job. It is used to set a handler
to be called when the job reaches a terminal state, namely either
“Completed” or “Cancelled”.

suspend fun main(): Unit = coroutineScope {
    val job = launch {
        delay(1000)
    }
    job.invokeOnCompletion { exception: Throwable? ->
        println("Finished")
    }
    delay(400)
    job.cancelAndJoin()
}
// Finished

One of this handler’s parameters is an exception:

• null if the job finished with no exception;


• CancellationException if the coroutine was cancelled;
• the exception that finished a coroutine (more about this in the
next chapter).

If a job was completed before invokeOnCompletion was called, the


handler will be invoked immediately. The onCancelling³¹ and
invokeImmediately³² parameters allow further customization.

suspend fun main(): Unit = coroutineScope {
    val job = launch {
        delay(Random.nextLong(2400))
        println("Finished")
    }
    delay(800)
    job.invokeOnCompletion { exception: Throwable? ->
        println("Will always be printed")
        println("The exception was: $exception")
    }
    delay(800)
    job.cancelAndJoin()
}
// Will always be printed
// The exception was:
// kotlinx.coroutines.JobCancellationException...
// (or)
// Finished
// Will always be printed
// The exception was null

³¹If true, the function is called in the “Cancelling” state (i.e.,


before “Cancelled”). false by default.
³²This parameter determines whether the handler should be
called immediately if the handler is set when a coroutine is already
in the desired state. true by default.

invokeOnCompletion is called synchronously during cancellation,


and we do not control the thread in which it will be running.

Stopping the unstoppable

Because cancellation happens on suspension points, it will not happen if there is no suspension point. To simulate such a situation, we could use Thread.sleep instead of delay. This is a terrible practice, so please don't do this in any real-life projects. We are just trying to simulate a case in which we are using our coroutines extensively but not suspending them. In practice, such a situation might happen if we have some complex calculations, like neural network learning (yes, we also use coroutines for such cases in order to simplify processing parallelization), or when we need to do some blocking calls (for instance, reading files).
The example below presents a situation in which a coroutine cannot be cancelled because there is no suspension point inside it (we use Thread.sleep instead of delay). The execution needs over 3 minutes, even though it should be cancelled after 1 second.

suspend fun main(): Unit = coroutineScope {
    val job = Job()
    launch(job) {
        repeat(1_000) { i ->
            Thread.sleep(200) // We might have some
            // complex operations or reading files here
            println("Printing $i")
        }
    }
    delay(1000)
    job.cancelAndJoin()
    println("Cancelled successfully")
    delay(1000)
}
// Printing 0
// Printing 1
// Printing 2
// ... (up to 1000)

There are a few ways to deal with such situations. The first one is to use the yield() function from time to time. This function suspends and immediately resumes a coroutine. This gives an opportunity to do whatever is needed during suspension (or resuming), including cancellation (or changing a thread using a dispatcher).

suspend fun main(): Unit = coroutineScope {
    val job = Job()
    launch(job) {
        repeat(1_000) { i ->
            Thread.sleep(200)
            yield()
            println("Printing $i")
        }
    }
    delay(1100)
    job.cancelAndJoin()
    println("Cancelled successfully")
    delay(1000)
}
// Printing 0
// Printing 1
// Printing 2
// Printing 3
// Printing 4
// Cancelled successfully

Another option is to track the state of the job. Inside a coroutine builder, this (the receiver) references the scope of this builder. CoroutineScope has a context we can reference using the coroutineContext property. Thus, we can access the coroutine job (coroutineContext[Job] or coroutineContext.job) and check what its current state is. Since a job is often used to check if a coroutine is active, the Kotlin Coroutines library provides a function to simplify that:

public val CoroutineScope.isActive: Boolean
    get() = coroutineContext[Job]?.isActive ?: true

We can use the isActive property to check if a job is still active and
stop calculations when it is inactive.

suspend fun main(): Unit = coroutineScope {
    val job = Job()
    launch(job) {
        do {
            Thread.sleep(200)
            println("Printing")
        } while (isActive)
    }
    delay(1100)
    job.cancelAndJoin()
    println("Cancelled successfully")
}
// Printing
// Printing
// Printing
// Printing
// Printing
// Printing
// Cancelled successfully

Alternatively, we might use the ensureActive() function, which


throws CancellationException if Job is not active.

suspend fun main(): Unit = coroutineScope {
    val job = Job()
    launch(job) {
        repeat(1000) { num ->
            Thread.sleep(200)
            ensureActive()
            println("Printing $num")
        }
    }
    delay(1100)
    job.cancelAndJoin()
    println("Cancelled successfully")
}
// Printing 0
// Printing 1
// Printing 2
// Printing 3
// Printing 4
// Cancelled successfully

The results of ensureActive() and yield() seem similar, but they are very different. The function ensureActive() needs to be called on a CoroutineScope (or CoroutineContext, or Job). All it does is throw an exception if the job is no longer active. It is lighter, so generally it should be preferred. The function yield is a regular top-level suspension function. It does not need any scope, so it can be used in regular suspending functions. Since it does suspension and resuming, other effects might arise, such as thread changing if we use a dispatcher with a pool of threads (more about this in the Dispatchers chapter). yield is more often used just in suspending functions that are CPU intensive or are blocking threads.
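For illustration, here is a minimal sketch (not from the book) of a CPU-intensive suspending function that stays cancellable by calling yield() between chunks of work; the function name and chunk size are made up for the example.

import kotlinx.coroutines.yield

suspend fun sumInChunks(numbers: List<Long>): Long {
    var sum = 0L
    for (chunk in numbers.chunked(10_000)) {
        sum += chunk.sum() // the heavy, non-suspending work
        yield() // cooperation point: allows cancellation here
    }
    return sum
}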

suspendCancellableCoroutine

Here, you might remind yourself of the suspendCancellableCoroutine


function introduced in the How does suspension work? chapter. It
behaves like suspendCoroutine, but its continuation is wrapped
into CancellableContinuation<T>, which provides some additional
methods. The most important one is invokeOnCancellation, which
we use to define what should happen when a coroutine is cancelled.
Most often we use it to cancel processes in a library or to free some
resources.

suspend fun someTask() = suspendCancellableCoroutine { cont ->
    cont.invokeOnCancellation {
        // do cleanup
    }
    // rest of the implementation
}

Here is a full example in which we wrap a Retrofit Call with a


suspending function.

suspend fun getOrganizationRepos(
    organization: String
): List<Repo> =
    suspendCancellableCoroutine { continuation ->
        val orgReposCall = apiService
            .getOrganizationRepos(organization)
        orgReposCall.enqueue(object : Callback<List<Repo>> {
            override fun onResponse(
                call: Call<List<Repo>>,
                response: Response<List<Repo>>
            ) {
                if (response.isSuccessful) {
                    val body = response.body()
                    if (body != null) {
                        continuation.resume(body)
                    } else {
                        continuation.resumeWithException(
                            ResponseWithEmptyBody
                        )
                    }
                } else {
                    continuation.resumeWithException(
                        ApiException(
                            response.code(),
                            response.message()
                        )
                    )
                }
            }

            override fun onFailure(
                call: Call<List<Repo>>,
                t: Throwable
            ) {
                continuation.resumeWithException(t)
            }
        })
        continuation.invokeOnCancellation {
            orgReposCall.cancel()
        }
    }

It’s so good that Retrofit now supports suspending functions!

class GithubApi {
@GET("orgs/{organization}/repos?per_page=100")
suspend fun getOrganizationRepos(
@Path("organization") organization: String
): List<Repo>
}

The CancellableContinuation<T> also lets us check the job state (using the isActive, isCompleted and isCancelled properties) and cancel this continuation with an optional cancellation cause.
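As a rough sketch (not from the book), cancelling a continuation with a cause might look as follows; registerMessageCallback and Message are hypothetical names used only for illustration.

class Message(val text: String)

// Hypothetical callback-based API.
fun registerMessageCallback(callback: (Message?) -> Unit) { /* ... */ }

suspend fun awaitMessage(): String =
    suspendCancellableCoroutine { cont ->
        registerMessageCallback { message ->
            if (message != null) {
                cont.resume(message.text)
            } else if (cont.isActive) {
                // cancel the continuation with an explicit cause
                cont.cancel(IllegalStateException("No more messages"))
            }
        }
    }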

Summary

Cancellation is a powerful feature. It is generally easy to use, but it


can sometimes be tricky. So, it is important to understand how it
works.
A properly used cancellation means fewer wasted resources and
fewer memory leaks. It is important for our application’s perfor-
mance, and I hope you will use these advantages from now on.

Exception handling

A very important part of how coroutines behave is their exception


handling. Just as a program breaks when an uncaught exception
slips by, a coroutine breaks in the case of an uncaught exception.
This behavior is nothing new: for instance, threads also end in such
cases. The difference is that coroutine builders also cancel their
parents, and each cancelled parent cancels all its children. Let’s look
at the example below. Once a coroutine receives an exception, it
cancels itself and propagates the exception to its parent (launch).
The parent cancels itself and all its children, then it propagates the
exception to its parent (runBlocking). runBlocking is a root coroutine
(it has no parent), so it just ends the program (runBlocking rethrows
the exception).

fun main(): Unit = runBlocking {


launch {
launch {
delay(1000)
throw Error("Some error")
}

launch {
delay(2000)
println("Will not be printed")
}

launch {
delay(500) // faster than the exception
println("Will be printed")
}
}

launch {
delay(2000)
println("Will not be printed")
}
}
// Will be printed
// Exception in thread "main" java.lang.Error: Some error...

Adding additional launch coroutines wouldn’t change anything. Ex-


ception propagation is bi-directional: the exception is propagated
from child to parent, and when those parents are cancelled, they
cancel their children. Thus, if exception propagation is not stopped,
all coroutines in the hierarchy will be cancelled.

Stop breaking my coroutines

Catching an exception before it breaks a coroutine is helpful, but any


later is too late. Communication happens via a job, so wrapping a
coroutine builder with a try-catch is not helpful at all.

fun main(): Unit = runBlocking {
    // Don't wrap in a try-catch here. It will be ignored.
    try {
        launch {
            delay(1000)
            throw Error("Some error")
        }
    } catch (e: Throwable) { // nope, does not help here
        println("Will not be printed")
    }

    launch {
        delay(2000)
        println("Will not be printed")
    }
}
// Exception in thread "main" java.lang.Error: Some error...

SupervisorJob

The most important way to stop coroutines breaking is by using a


SupervisorJob. This is a special kind of job that ignores all exceptions
in its children.

SupervisorJob is generally used as part of a scope in which we start


multiple coroutines (more about this in the Constructing coroutine
scope chapter).

fun main(): Unit = runBlocking {
    val scope = CoroutineScope(SupervisorJob())
    scope.launch {
        delay(1000)
        throw Error("Some error")
    }

    scope.launch {
        delay(2000)
        println("Will be printed")
    }

    delay(3000)
}
// Exception...
// Will be printed

A common mistake is to use a SupervisorJob as an argument to a par-


ent coroutine, like in the code below. It won’t help us handle excep-
tions, because in such a case SupervisorJob has only one direct child,
namely the launch defined at 1 that received this SupervisorJob
as an argument. So, in such a case there is no advantage of using
SupervisorJob over Job (in both cases, the exception will not prop-
agate to runBlocking because we are not using its job).

fun main(): Unit = runBlocking {
    // Don't do that, SupervisorJob with one child
    // and no parent works similar to just Job
    launch(SupervisorJob()) { // 1
        launch {
            delay(1000)
            throw Error("Some error")
        }

        launch {
            delay(2000)
            println("Will not be printed")
        }
    }

    delay(3000)
}
// Exception...

It would make more sense if we used the same job as a context for
multiple coroutine builders because each of them can be cancelled,
but they won’t cancel each other.

fun main(): Unit = runBlocking {
    val job = SupervisorJob()
    launch(job) {
        delay(1000)
        throw Error("Some error")
    }
    launch(job) {
        delay(2000)
        println("Will be printed")
    }
    job.children.forEach { it.join() }
}
// (1 sec)
// Exception...
// (1 sec)
// Will be printed

supervisorScope

Another way to stop exception propagation is to wrap coroutine


builders with supervisorScope. This is very convenient as we still
keep a connection to the parent, yet any exceptions from the corou-
tine will be silenced.

fun main(): Unit = runBlocking {


supervisorScope {
launch {
delay(1000)
throw Error("Some error")
}

launch {
delay(2000)
println("Will be printed")
}
}
delay(1000)
println("Done")
}
// Exception...
// Will be printed
// (1 sec)
// Done

supervisorScope is just a suspending function and can be used to


wrap suspending function bodies. This and other functionalities of
supervisorScope will be described better in the next chapter. The
common way to use it is to start multiple independent tasks.

suspend fun notifyAnalytics(actions: List<UserAction>) =
    supervisorScope {
        actions.forEach { action ->
            launch {
                notifyAnalytics(action)
            }
        }
    }

Another way to stop exception propagation is to use coroutineScope. Instead of influencing a parent, this function throws an exception that can be caught using try-catch (in contrast to coroutine builders). Both will be described in the next chapter.
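To preview that behavior, here is a minimal sketch (not from the book): the exception raised inside coroutineScope is rethrown by it, so it can be handled with a regular try-catch instead of cancelling the outer coroutine.

fun main(): Unit = runBlocking {
    try {
        coroutineScope {
            launch {
                delay(1000)
                throw Error("Some error")
            }
        }
    } catch (e: Error) {
        println("Caught: ${e.message}")
    }
    println("Still running")
}
// (1 sec)
// Caught: Some error
// Still running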

Await

So, we know how to stop exception propagation, but sometimes


this is not enough. In the case of an exception, the async coroutine
builder breaks its parent, just like launch and other coroutine
builders that have a relation with their parents. However, what
if this process is silenced (for instance, using SupervisorJob or
supervisorScope) and await is called? Let’s look at the following
example:

class MyException : Throwable()

suspend fun main() = supervisorScope {
    val str1 = async<String> {
        delay(1000)
        throw MyException()
    }

    val str2 = async {
        delay(2000)
        "Text2"
    }

    try {
        println(str1.await())
    } catch (e: MyException) {
        println(e)
    }

    println(str2.await())
}
// MyException
// Text2

We have no value to return since the coroutine ended with an excep-


tion, so instead the MyException exception is thrown by await. This is
why MyException is printed. The other async finishes uninterrupted
because we’re using the supervisorScope.

CancellationException does not propagate to its parent

If an exception is a subclass of CancellationException, it will not be propagated to its parent. It will only cause cancellation of the current coroutine. CancellationException is an open class, so it can be extended by our own classes or objects.

object MyNonPropagatingException : CancellationException()

suspend fun main(): Unit = coroutineScope {


launch { // 1
launch { // 2
delay(2000)
println("Will not be printed")
}
throw MyNonPropagatingException // 3
}
launch { // 4
delay(2000)
println("Will be printed")
}
}
// (2 sec)
// Will be printed

In the above snippet, we start two coroutines with builders at 1 and 4. At 3, we throw a MyNonPropagatingException exception, which is a subtype of CancellationException. This exception is caught by the launch started at 1. This builder cancels itself, then it also cancels its children, namely the builder defined at 2. The launch started at 4 is not affected, so it prints “Will be printed” after 2 seconds.

Coroutine exception handler

When dealing with exceptions, sometimes it is useful to


define default behavior for all of them. This is where the
CoroutineExceptionHandler context comes in handy. It does not
stop the exception propagating, but it can be used to define what
should happen in the case of an exception (by default, it prints the
exception stack trace).

fun main(): Unit = runBlocking {
    val handler =
        CoroutineExceptionHandler { ctx, exception ->
            println("Caught $exception")
        }
    val scope = CoroutineScope(SupervisorJob() + handler)
    scope.launch {
        delay(1000)
        throw Error("Some error")
    }

    scope.launch {
        delay(2000)
        println("Will be printed")
    }

    delay(3000)
}
// Caught java.lang.Error: Some error
// Will be printed

This context is useful on many platforms to add a default way of dealing with exceptions. For Android, it often informs the user about a problem by showing a dialog or an error message.
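A rough sketch (not from the book) of what this might look like on Android; showErrorDialog is a hypothetical function that presents the problem to the user.

val exceptionHandler = CoroutineExceptionHandler { _, throwable ->
    // default reaction for otherwise unhandled exceptions
    showErrorDialog(throwable.message ?: "Unknown error")
}

val scope = CoroutineScope(
    SupervisorJob() + Dispatchers.Main + exceptionHandler
)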

Summary

Exception handling is an important part of the kotlinx.coroutines library. Over time, we will inevitably come back to these topics. For now, I hope that you understand how exceptions propagate from child to parent in basic builders, and how they can be stopped. Now it's time for a long-awaited, strongly connected topic. Time to talk about coroutine scope functions.

Coroutine scope functions

Imagine that in a suspending function you need to concurrently get


data from two (or more) endpoints. Before we explore how to do this
correctly, let’s see some suboptimal approaches.

Approaches that were used before coroutine scope functions were introduced

The first approach is calling suspending functions from a suspending function. The problem with this solution is that it is not concurrent (so, if getting data from one endpoint takes 1 second, a function will take 2 seconds instead of 1).

// Data loaded sequentially, not simultaneously


suspend fun getUserProfile(): UserProfileData {
val user = getUserData() // (1 sec)
val notifications = getNotifications() // (1 sec)

return UserProfileData(
user = user,
notifications = notifications,
)
}

To make two suspending calls concurrently, the easiest way is by


wrapping them with async. However, async requires a scope, and
using GlobalScope is not a good idea.

// DON'T DO THAT
suspend fun getUserProfile(): UserProfileData {
    val user = GlobalScope.async { getUserData() }
    val notifications = GlobalScope.async {
        getNotifications()
    }

    return UserProfileData(
        user = user.await(), // (1 sec)
        notifications = notifications.await(),
    )
}

GlobalScope is just a scope with EmptyCoroutineContext.

public object GlobalScope : CoroutineScope {


override val coroutineContext: CoroutineContext
get() = EmptyCoroutineContext
}

If we call async on a GlobalScope, we will have no relationship to the


parent coroutine. This means:

• it cannot be cancelled (if the parent is cancelled, functions


inside async still run, wasting resources until they are done);
• it does not inherit a scope from any parent (it will always run
on the default dispatcher and will not respect any context from
the parent).

The most important consequences are:

• potential memory leaks and redundant CPU usage;


• the tools for unit testing coroutines will not work here, so
testing this function is very hard.

This is not a good solution. Let’s take a look at another one, in which
we pass a scope as an argument:

// DON'T DO THAT
suspend fun getUserProfile(
    scope: CoroutineScope
): UserProfileData {
    val user = scope.async { getUserData() }
    val notifications = scope.async { getNotifications() }

    return UserProfileData(
        user = user.await(), // (1 sec)
        notifications = notifications.await(),
    )
}

// or

// DON'T DO THAT
suspend fun CoroutineScope.getUserProfile(): UserProfileData {
    val user = async { getUserData() }
    val notifications = async { getNotifications() }

    return UserProfileData(
        user = user.await(), // (1 sec)
        notifications = notifications.await(),
    )
}

This one is a bit better as cancellation and proper unit testing are
now possible. The problem is that this solution requires this scope
to be passed from function to function. Also, such functions can
cause unwanted side effects in the scope; for instance, if there is an
exception in one async, the whole scope will be shut down (assuming
it is using Job, not SupervisorJob). What is more, a function that has
access to the scope could easily abuse this access and, for instance,
cancel this scope with the cancel method. This is why this approach
can be tricky and potentially dangerous.

data class Details(val name: String, val followers: Int)
data class Tweet(val text: String)

fun getFollowersNumber(): Int =
    throw Error("Service exception")

suspend fun getUserName(): String {
    delay(500)
    return "marcinmoskala"
}

suspend fun getTweets(): List<Tweet> {
    return listOf(Tweet("Hello, world"))
}

suspend fun CoroutineScope.getUserDetails(): Details {
    val userName = async { getUserName() }
    val followersNumber = async { getFollowersNumber() }
    return Details(userName.await(), followersNumber.await())
}

fun main() = runBlocking {
    val details = try {
        getUserDetails()
    } catch (e: Error) {
        null
    }
    val tweets = async { getTweets() }
    println("User: $details")
    println("Tweets: ${tweets.await()}")
}
// Only Exception...

In the above code, we would like to at least see Tweets, even if we have a problem fetching user details. Unfortunately, an exception on getFollowersNumber breaks async, which breaks the whole scope and ends the program. Instead, we would prefer a function that just throws an exception if it occurs. Time to introduce our hero: coroutineScope.

coroutineScope

coroutineScope is a suspending function that starts a scope. It re-


turns the value produced by the argument function.

suspend fun <R> coroutineScope(


block: suspend CoroutineScope.() -> R
): R

Unlike async or launch, the body of coroutineScope is called in-place.


It formally creates a new coroutine, but it suspends the previous one
until the new one is finished, so it does not start any concurrent
process. Take a look at the below example, in which both delay calls
suspend runBlocking.

fun main() = runBlocking {


val a = coroutineScope {
delay(1000)
10
}
println("a is calculated")
val b = coroutineScope {
delay(1000)
20
}
println(a) // 10
println(b) // 20
}
// (1 sec)
// a is calculated
// (1 sec)
// 10
// 20

The provided scope inherits its coroutineContext from the outer


scope, but it overrides the context’s Job. Thus, the produced scope
respects its parental responsibilities:

• inherits a context from its parent;


• waits for all its children before it can finish itself;
• cancels all its children when the parent is cancelled.

In the example below, you can observe that “After” will be printed at
the end because coroutineScope will not finish until all its children
are finished. Also, CoroutineName is properly passed from parent to
child.

suspend fun longTask() = coroutineScope {


launch {
delay(1000)
val name = coroutineContext[CoroutineName]?.name
println("[$name] Finished task 1")
}
launch {
delay(2000)
val name = coroutineContext[CoroutineName]?.name
println("[$name] Finished task 2")
}
}

fun main() = runBlocking(CoroutineName("Parent")) {


println("Before")
longTask()
println("After")
}
// Before
// (1 sec)
// [Parent] Finished task 1
// (1 sec)
// [Parent] Finished task 2
// After

In the next snippet, you can observe how cancellation works. A


cancelled parent leads to the cancellation of unfinished children.

suspend fun longTask() = coroutineScope {
    launch {
        delay(1000)
        val name = coroutineContext[CoroutineName]?.name
        println("[$name] Finished task 1")
    }
    launch {
        delay(2000)
        val name = coroutineContext[CoroutineName]?.name
        println("[$name] Finished task 2")
    }
}

fun main(): Unit = runBlocking {
    val job = launch(CoroutineName("Parent")) {
        longTask()
    }
    delay(1500)
    job.cancel()
}
// [Parent] Finished task 1

Unlike coroutine builders, if there is an exception in coroutineScope


or any of its children, it cancels all other children and rethrows it.
This is why using coroutineScope would fix our previous “Twitter
example”. To show that the same exception is rethrown, I changed a
generic Error into a concrete ApiException.

data class Details(val name: String, val followers: Int)
data class Tweet(val text: String)
class ApiException(
    val code: Int,
    message: String
) : Throwable(message)

fun getFollowersNumber(): Int =
    throw ApiException(500, "Service unavailable")

suspend fun getUserName(): String {
    delay(500)
    return "marcinmoskala"
}

suspend fun getTweets(): List<Tweet> {
    return listOf(Tweet("Hello, world"))
}

suspend fun getUserDetails(): Details = coroutineScope {
    val userName = async { getUserName() }
    val followersNumber = async { getFollowersNumber() }
    Details(userName.await(), followersNumber.await())
}

fun main() = runBlocking<Unit> {
    val details = try {
        getUserDetails()
    } catch (e: ApiException) {
        null
    }
    val tweets = async { getTweets() }
    println("User: $details")
    println("Tweets: ${tweets.await()}")
}
// User: null
// Tweets: [Tweet(text=Hello, world)]

This all makes coroutineScope a perfect candidate for most cases


when we just need to start a few concurrent calls in a suspending
function.

suspend fun getUserProfile(): UserProfileData =
    coroutineScope {
        val user = async { getUserData() }
        val notifications = async { getNotifications() }

        UserProfileData(
            user = user.await(),
            notifications = notifications.await(),
        )
    }

As we’ve already mentioned, coroutineScope is nowadays often used


to wrap a suspending main body. You can think of it as the modern
replacement for the runBlocking function:

suspend fun main(): Unit = coroutineScope {


launch {
delay(1000)
println("World")
}
println("Hello, ")
}
// Hello
// (1 sec)
// World

The function coroutineScope creates a scope out of a suspending


context. It inherits a scope from its parent and supports structured
concurrency.
To make it clear, there is practically no difference between the below
functions, except that the first one calls getProfile and getFriends
sequentially, where the second one calls them simultaneously.

suspend fun produceCurrentUserSeq(): User {
    val profile = repo.getProfile()
    val friends = repo.getFriends()
    return User(profile, friends)
}

suspend fun produceCurrentUserSym(): User = coroutineScope {
    val profile = async { repo.getProfile() }
    val friends = async { repo.getFriends() }
    User(profile.await(), friends.await())
}

coroutineScope is a useful function, but it’s not the only one of its
kind.

Coroutine scope functions

There are more functions that create a scope and behave similarly
to coroutineScope. supervisorScope is like coroutineScope but it
uses SupervisorJob instead of Job. withContext is a coroutineScope
that can modify coroutine context. withTimeout is a coroutineScope with a timeout. Each of those functions will be better explained


in the following parts of this chapter. For now, I just want you to
know there are such functions because if there is a group of similar
functions, it makes sense that it should have a name. So how should
we name this group? Some people call them “scoping functions”, but
I find this confusing as I am not sure what is meant by “scoping”. I
guess that whoever started using this term just wanted to make it
different from “scope functions” (functions like let, with or apply).
It is not really helpful as those two terms are still often confused.
This is why I decided to use the term “coroutine scope functions”.
It is longer but should cause fewer misunderstandings, and I find
it more correct. Just think about that: coroutine scope functions
are those that are used to create a coroutine scope in suspending
functions.
On the other hand, coroutine scope functions are often confused
with coroutine builders, but this is incorrect because they are very
different, both conceptually and practically. To clarify this, the table
below presents the comparison between them.

Coroutine builders                    Coroutine scope functions
(except for runBlocking)

launch, async, produce                coroutineScope,
                                      supervisorScope,
                                      withContext, withTimeout

Are extension functions on            Are suspending functions.
CoroutineScope.

Take coroutine context from           Take coroutine context from
CoroutineScope receiver.              suspending function
                                      continuation.

Exceptions are propagated to          Exceptions are thrown in the
the parent through Job.               same way as they are from/by
                                      regular functions.

Starts an asynchronous                Starts a coroutine that is
coroutine.                            called in-place.

Now think about runBlocking. You might notice that it looks like
it has more in common with coroutine scope functions than with
builders. runBlocking also calls its body in-place and returns its
result. The biggest difference is that runBlocking is a blocking
function, while coroutine scope functions are suspending functions.
This is why runBlocking must be at the top of the hierarchy of
coroutines, while coroutine scope functions must be in the middle.

withContext

The withContext function is similar to coroutineScope, but it


additionally allows some changes to be made to the scope. The
context provided as an argument to this function overrides the
context from the parent scope (the same way as in coroutine
builders). This means that withContext(EmptyCoroutineContext)
and coroutineScope() behave in exactly the same way.

fun CoroutineScope.log(text: String) {
    val name = coroutineContext[CoroutineName]?.name
    println("[$name] $text")
}

fun main() = runBlocking(CoroutineName("Parent")) {
    log("Before")

    withContext(CoroutineName("Child 1")) {
        delay(1000)
        log("Hello 1")
    }

    withContext(CoroutineName("Child 2")) {
        delay(1000)
        log("Hello 2")
    }

    log("After")
}
// [Parent] Before
// (1 sec)
// [Child 1] Hello 1
// (1 sec)
// [Child 2] Hello 2
// [Parent] After

The function withContext is often used to set a different coroutine


scope for part of our code. Usually, you should use it together with
dispatchers, as will be described in the next chapter.

launch(Dispatchers.Main) {
    view.showProgressBar()
    withContext(Dispatchers.IO) {
        fileRepository.saveData(data)
    }
    view.hideProgressBar()
}

You might notice that the way coroutineScope { /*...*/ } works is very similar to async with immediate await: async { /*...*/ }.await(). Also, withContext(context) { /*...*/ } is in a way similar to async(context) { /*...*/ }.await(). The biggest difference is that async requires a scope, whereas coroutineScope and withContext take the scope from suspension. In both cases, it's better to use coroutineScope and withContext, and avoid async with immediate await.

supervisorScope

The supervisorScope function also behaves a lot like coroutineScope:


it creates a CoroutineScope that inherits from the outer scope and
calls the specified suspend block in it. The difference is that it
overrides the context’s Job with SupervisorJob, so it is not cancelled
when a child raises an exception.

fun main() = runBlocking {
    println("Before")

    supervisorScope {
        launch {
            delay(1000)
            throw Error()
        }

        launch {
            delay(2000)
            println("Done")
        }
    }

    println("After")
}
// Before
// (1 sec)
// Exception...
// (1 sec)
// Done
// After

supervisorScope is mainly used in functions that start multiple inde-


pendent tasks.

suspend fun notifyAnalytics(actions: List<UserAction>) =
    supervisorScope {
        actions.forEach { action ->
            launch {
                notifyAnalytics(action)
            }
        }
    }

If you use async, silencing its exception propagation to the parent


is not enough. When we call await and the async coroutine finishes
with an exception, then await will rethrow it. This is why if we want
to truly ignore exceptions, we should also wrap await calls with a try-
catch block.

class ArticlesRepositoryComposite(
    private val articleRepositories: List<ArticleRepository>,
) : ArticleRepository {
    override suspend fun fetchArticles(): List<Article> =
        supervisorScope {
            articleRepositories
                .map { async { it.fetchArticles() } }
                .mapNotNull {
                    try {
                        it.await()
                    } catch (e: Throwable) {
                        e.printStackTrace()
                        null
                    }
                }
                .flatten()
                .sortedByDescending { it.publishedAt }
        }
}

In my workshops, I am often asked if we can use


withContext(SupervisorJob()) instead of supervisorScope. No,
we can’t. When we use withContext(SupervisorJob()), then
withContext is still using a regular Job, and the SupervisorJob()
becomes its parent. As a result, when one child raises an exception,
the other children will be cancelled as well. withContext will also
throw an exception, so its SupervisorJob() is practically useless.
This is why I find withContext(SupervisorJob()) pointless and
misleading, and I consider it a bad practice.

fun main() = runBlocking {


println("Before")

withContext(SupervisorJob()) {
launch {
delay(1000)
throw Error()
}

launch {
delay(2000)
println("Done")
}
}

println("After")
}
// Before
// (1 sec)
// Exception...

withTimeout

Another function that behaves a lot like coroutineScope


is withTimeout. It also creates a scope and returns a value.
Actually, withTimeout with a very big timeout behaves just like
coroutineScope. The difference is that withTimeout additionally sets
a time limit for its body execution. If it takes too long, it cancels
this body and throws TimeoutCancellationException (a subtype of
CancellationException).

suspend fun test(): Int = withTimeout(1500) {


delay(1000)
println("Still thinking")
delay(1000)
println("Done!")
42
}

suspend fun main(): Unit = coroutineScope {


try {
test()
} catch (e: TimeoutCancellationException) {
println("Cancelled")
}
delay(1000) // Extra timeout does not help,
// `test` body was cancelled
}
// (1 sec)
// Still thinking
// (0.5 sec)
// Cancelled

The function withTimeout is especially useful for testing. It can be


used to test if some function takes more or less than some time. If it
is used inside runTest, it will operate on virtual time. We also use it
inside runBlocking to just limit the execution time of some function
(this is then like setting timeout on @Test).

class Test {
@Test
fun testTime2() = runTest {
withTimeout(1000) {
// something that should take less than 1000
delay(900) // virtual time
}
}

@Test(expected = TimeoutCancellationException::class)
fun testTime1() = runTest {
withTimeout(1000) {
// something that should take more than 1000
delay(1100) // virtual time
}
}

@Test
fun testTime3() = runBlocking {
withTimeout(1000) {
// normal test, that should not take too long
delay(900) // really waiting 900 ms
}
}
}

Beware that withTimeout throws TimeoutCancellationException,


which is a subtype of CancellationException (the same exception
that is thrown when a coroutine is cancelled). So, when this
exception is thrown in a coroutine builder, it only cancels it and
does not affect its parent (as explained in the previous chapter).

suspend fun main(): Unit = coroutineScope {


launch { // 1
launch { // 2, cancelled by its parent
delay(2000)
println("Will not be printed")
}
withTimeout(1000) { // we cancel launch
delay(1500)
}
}
launch { // 3
delay(2000)
println("Done")
}
}
// (2 sec)
// Done

In the above example, delay(1500) takes longer than withTimeout(1000) expects, so it throws TimeoutCancellationException. The exception is caught by the launch started at 1, which cancels itself and its children, namely the launch started at 2. The launch started at 3 is not affected.
A less aggressive variant of withTimeout is withTimeoutOrNull, which does not throw an exception. If the timeout is exceeded, it just cancels its body and returns null. I find withTimeoutOrNull useful for wrapping functions in which waiting times that are too long signal that something went wrong. For instance, network operations: if we wait over 5 seconds for a response, it is unlikely we will ever receive it (some libraries might wait forever).

suspend fun fetchUser(): User {
    // Runs forever
    while (true) {
        yield()
    }
}

suspend fun getUserOrNull(): User? =
    withTimeoutOrNull(5000) {
        fetchUser()
    }

suspend fun main(): Unit = coroutineScope {


val user = getUserOrNull()
println("User: $user")
}
// (5 sec)
// User: null

Connecting coroutine scope functions

If you need to use functionalities from two coroutine scope func-


tions, you need to use one inside another. For instance, to set both
a timeout and a dispatcher, you can use withTimeoutOrNull inside
withContext.

suspend fun calculateAnswerOrNull(): User? =
    withContext(Dispatchers.Default) {
        withTimeoutOrNull(1000) {
            calculateAnswer()
        }
    }

Additional operations

Imagine a case in which in the middle of some processing you need


to execute an additional operation. For example, after showing a
user profile you want to send a request for analytics purposes. People
often do this with just a regular launch on the same scope:

class ShowUserDataUseCase(
    private val repo: UserDataRepository,
    private val view: UserDataView,
) {

    suspend fun showUserData() = coroutineScope {
        val name = async { repo.getName() }
        val friends = async { repo.getFriends() }
        val profile = async { repo.getProfile() }

        val user = User(
            name = name.await(),
            friends = friends.await(),
            profile = profile.await()
        )
        view.show(user)
        launch { repo.notifyProfileShown() }
    }
}

However, there are some problems with this approach. Firstly, this
launch does nothing here because coroutineScope needs to await its
completion anyway. So if you are showing a progress bar when up-
dating the view, the user needs to wait until this notifyProfileShown
is finished as well. This does not make much sense.

fun onCreate() {
    viewModelScope.launch {
        _progressBar.value = true
        showUserData()
        _progressBar.value = false
    }
}

The second problem is cancellation. Coroutines are designed (by


default) to cancel other operations when there is an exception. This
is great for essential operations. If getProfile has an exception, we
should cancel getName and getFriends because their response would
be useless anyway. However, canceling a process just because an
analytics call has failed does not make much sense.
So what should we do? When you have an additional (non-essential)
operation that should not influence the main process, it is better to
start it on a separate scope. Creating your own scope is easy. In this
example, we create an analyticsScope.

val analyticsScope = CoroutineScope(SupervisorJob())

For unit testing and controlling this scope, it is better to inject it via
a constructor:

class ShowUserDataUseCase(
    private val repo: UserDataRepository,
    private val view: UserDataView,
    private val analyticsScope: CoroutineScope,
) {

    suspend fun showUserData() = coroutineScope {
        val name = async { repo.getName() }
        val friends = async { repo.getFriends() }
        val profile = async { repo.getProfile() }
        val user = User(
            name = name.await(),
            friends = friends.await(),
            profile = profile.await()
        )
        view.show(user)
        analyticsScope.launch { repo.notifyProfileShown() }
    }
}

Starting operations on an injected scope is common. Passing a scope clearly signals that such a class can start independent calls. This means suspending functions might not wait for all the operations they start. Otherwise, we know a suspending call will not finish until all its operations are done.
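As a small sketch (not from the book): because the scope is injected, production code can pass a long-living scope, while a test can pass a scope it fully controls; userDataRepository and userDataView are assumed to exist.

// Production wiring: a long-living scope owned by the application.
val analyticsScope = CoroutineScope(SupervisorJob())

val useCase = ShowUserDataUseCase(
    repo = userDataRepository,
    view = userDataView,
    analyticsScope = analyticsScope,
)

// In a test, we could instead inject a scope we control,
// e.g. to await the analytics call or assert it was cancelled.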

Summary

Coroutine scope functions are really useful, especially since they can
be used in any suspending function. Most often they are used to wrap
the whole function body. Although they are often used to just wrap
a bunch of calls with a scope (especially withContext), I hope you
can appreciate their usefulness. They are a very important part of
the Kotlin Coroutines ecosystem. You will see how we will use them
through the rest of the book.

Dispatchers

An important functionality offered by the Kotlin Coroutines library


is letting us decide on which thread (or pool of threads) a coroutine
should be running (starting and resuming). This is done using dis-
patchers.
In the English dictionary, a dispatcher is defined as “a person
who is responsible for sending people or vehicles to where they
are needed, especially emergency vehicles”. In Kotlin coroutines,
CoroutineContext determines on which thread a certain coroutine
will run.

Dispatchers in Kotlin Coroutines are a similar concept to


RxJava Schedulers.

Default dispatcher

If you don’t set any dispatcher, the one chosen by default is


[Link], which is designed to run CPU-intensive
operations. It has a pool of threads with a size equal to the number
of cores on the machine your code is running on (but not less than
two). At least theoretically, this is the optimal number of threads,
assuming you are using these threads efficiently, i.e., performing
CPU-intensive calculations and not blocking them.
To see this dispatcher in action, run the following code:

suspend fun main() = coroutineScope {
    repeat(1000) {
        launch { // or launch(Dispatchers.Default) {
            // To make it busy
            List(1000) { Random.nextLong() }.maxOrNull()

            val threadName = Thread.currentThread().name
            println("Running on thread: $threadName")
        }
    }
}

Example result on my machine (I have 12 cores, so there are 12


threads in the pool):

Running on thread: DefaultDispatcher-worker-1


Running on thread: DefaultDispatcher-worker-5
Running on thread: DefaultDispatcher-worker-7
Running on thread: DefaultDispatcher-worker-6
Running on thread: DefaultDispatcher-worker-11
Running on thread: DefaultDispatcher-worker-2
Running on thread: DefaultDispatcher-worker-10
Running on thread: DefaultDispatcher-worker-4
...

Warning: runBlocking sets its own dispatcher if no other one is set; so, inside it, Dispatchers.Default is not the one that is chosen automatically. So, if we used runBlocking instead of coroutineScope in the above example, all coroutines would be running on “main”.

Limiting the default dispatcher

Let’s say that you have an expensive process, and you suspect that
it might use all [Link] threads and starve other
coroutines using the same dispatcher. In such cases, we can use
limitedParallelism on [Link] to make a dispatcher
that runs on the same threads but is limited to using not more than
a certain number of them at the same time.

private val dispatcher = Dispatchers.Default
    .limitedParallelism(5)

This mechanism is rarely needed to limit Dispatchers.Default, but it is worth remembering because we will soon present limitedParallelism for Dispatchers.IO, which is much more important and common.

limitedParallelism was introduced in kotlinx-


coroutines version 1.6.

Main dispatcher

Android and many other application frameworks have a concept of a main or UI thread, which is generally the most important thread. On Android, it is the only one that can be used to interact with the UI. Therefore, it needs to be used very often but also with great care. When the Main thread is blocked, the whole application is frozen. To run a coroutine on the Main thread, we use Dispatchers.Main.
Dispatchers.Main is available on Android if we use the kotlinx-coroutines-android artifact. Similarly, it's available on JavaFX if we use kotlinx-coroutines-javafx, and on Swing if we use kotlinx-coroutines-swing. If you do not have a dependency that defines the main dispatcher, it is not available and cannot be used.
Notice that frontend libraries are typically not used in unit tests, so Dispatchers.Main is not defined there. To be able to use it, you need to set a dispatcher using Dispatchers.setMain(dispatcher) from kotlinx-coroutines-test.

class SomeTest {

    private val dispatcher = Executors
        .newSingleThreadExecutor()
        .asCoroutineDispatcher()

    @Before
    fun setup() {
        Dispatchers.setMain(dispatcher)
    }

    @After
    fun tearDown() {
        // reset main dispatcher to
        // the original Main dispatcher
        Dispatchers.resetMain()
        dispatcher.close()
    }

    @Test
    fun testSomeUI() = runBlocking {
        launch(Dispatchers.Main) {
            // ...
        }
    }
}

On Android, we typically use the Main dispatcher as the default one. If you use libraries that are suspending instead of blocking, and you don't do any complex calculations, in practice you can often use only Dispatchers.Main. If you do some CPU-intensive operations, you should run them on Dispatchers.Default. These two are enough for many applications, but what if you need to block the thread? For instance, if you need to perform long I/O operations (e.g., read big files) or if you need to use a library with blocking functions. You cannot block the Main thread, because your application would freeze. If you block your default dispatcher, you risk blocking all the threads in the thread pool, in which case you wouldn't be able to do any calculations. This is why we need a dispatcher for such a situation, and it is Dispatchers.IO.

IO dispatcher

Dispatchers.IO is designed to be used when we block threads with I/O operations, for instance when we read/write files, use Android shared preferences, or call blocking functions. The code below takes around 1 second because Dispatchers.IO allows more than 50 active threads at the same time.

suspend fun main() {
    val time = measureTimeMillis {
        coroutineScope {
            repeat(50) {
                launch(Dispatchers.IO) {
                    Thread.sleep(1000)
                }
            }
        }
    }
    println(time) // ~1000
}

How does it work? Imagine an unlimited pool of threads. Initially it is empty, but as we need more threads, they are created and kept active until they are not used for some time. Such a pool exists, but it wouldn't be safe to use it directly. With too many active threads, the performance degrades in a slow but unlimited manner, eventually causing out-of-memory errors. This is why we create dispatchers that have a limited number of threads they can use at the same time. Dispatchers.Default is limited by the number of cores in your processor. The limit of Dispatchers.IO is 64 (or the number of cores if there are more).

suspend fun main() = coroutineScope {
    repeat(1000) {
        launch(Dispatchers.IO) {
            Thread.sleep(200)

            val threadName = Thread.currentThread().name
            println("Running on thread: $threadName")
        }
    }
}
// Running on thread: DefaultDispatcher-worker-1
//...
// Running on thread: DefaultDispatcher-worker-53
// Running on thread: DefaultDispatcher-worker-14

As we mentioned, both Dispatchers.Default and Dispatchers.IO share the same pool of threads. This is an important optimization. Threads are reused, and often redispatching is not needed. For instance, let's say you are running on Dispatchers.Default and then execution reaches withContext(Dispatchers.IO) { ... }. Most often, you will stay on the same thread³³, but what changes is that this thread counts not towards the Dispatchers.Default limit but towards the Dispatchers.IO limit. Their limits are independent, so they will never starve each other.

³³This mechanism is not deterministic.



suspend fun main(): Unit = coroutineScope {
    launch(Dispatchers.Default) {
        println(Thread.currentThread().name)
        withContext(Dispatchers.IO) {
            println(Thread.currentThread().name)
        }
    }
}
// DefaultDispatcher-worker-2
// DefaultDispatcher-worker-2

To see this more clearly, imagine that you use both Dispatchers.Default and Dispatchers.IO to the maximum. As a result, your number of active threads will be the sum of their limits. If you allow 64 threads in Dispatchers.IO and you have 8 cores, you will have 72 active threads in the shared pool. This means we have efficient thread reuse and both dispatchers have strong independence.
The most typical case where we use Dispatchers.IO is when we need to call blocking functions from libraries. The best practice is to wrap them with withContext(Dispatchers.IO) to make them suspending functions. Such functions can be used without any special care: they can be treated like all other properly implemented suspending functions.

class DiscUserRepository(
    private val discReader: DiscReader
) : UserRepository {
    override suspend fun getUser(): UserData =
        withContext(Dispatchers.IO) {
            UserData(discReader.read("userName"))
        }
}

The only problem is when such functions block too many threads. Dispatchers.IO is limited to 64. One service that is massively blocking threads might make all others wait for their turn. To help us deal with this, we again use limitedParallelism.

IO dispatcher with a custom pool of threads

Dispatchers.IO has a special behavior defined for the limitedParallelism function. It creates a new dispatcher with an independent pool of threads. What is more, this pool is not limited to 64, as we can decide to limit it to as many threads as we want.
For example, imagine you start 100 coroutines, each of which blocks a thread for a second. If you run these coroutines on Dispatchers.IO, it will take 2 seconds. If you run them on Dispatchers.IO with limitedParallelism set to 100 threads, it will take 1 second. Execution time for both dispatchers can be measured at the same time because the limits of these two dispatchers are independent anyway.

suspend fun main(): Unit = coroutineScope {
    launch {
        printCoroutinesTime(Dispatchers.IO)
        // Dispatchers.IO took: 2074
    }

    launch {
        val dispatcher = Dispatchers.IO
            .limitedParallelism(100)
        printCoroutinesTime(dispatcher)
        // LimitedDispatcher@XXX took: 1082
    }
}

suspend fun printCoroutinesTime(
    dispatcher: CoroutineDispatcher
) {
    val test = measureTimeMillis {
        coroutineScope {
            repeat(100) {
                launch(dispatcher) {
                    Thread.sleep(1000)
                }
            }
        }
    }
    println("$dispatcher took: $test")
}

Conceptually, you might imagine it in the following way:

// Dispatcher with an unlimited pool of threads
private val pool = ...

Dispatchers.IO = pool.limitedParallelism(64)
Dispatchers.IO.limitedParallelism(x) = pool.limitedParallelism(x)

The best practice for classes that might intensively block threads is to define their own dispatchers that have their own independent limits. How big should this limit be? You need to decide for yourself. Too many threads are an inefficient use of our resources. On the other hand, waiting for an available thread is not good for performance. What is most essential is that this limit is independent from Dispatchers.IO and other dispatchers' limits. Thanks to that, one service will not block another.

class DiscUserRepository(
    private val discReader: DiscReader
) : UserRepository {
    private val dispatcher = Dispatchers.IO
        .limitedParallelism(5)

    override suspend fun getUser(): UserData =
        withContext(dispatcher) {
            UserData(discReader.read("userName"))
        }
}

Dispatcher with a fixed pool of threads

Some developers like to have more control over the pools of threads
they use, and Java offers a powerful API for that. For example, we
can create a fixed or cached pool of threads with the Executors
class. These pools implement the ExecutorService or Executor
interfaces, which we can transform into a dispatcher using the
asCoroutineDispatcher function.

val NUMBER_OF_THREADS = 20
val dispatcher = Executors
.newFixedThreadPool(NUMBER_OF_THREADS)
.asCoroutineDispatcher()

limitedParallelism was introduced in


kotlinx-coroutines version 1.6; in previous versions,
we often created dispatchers with independent pools of
threads using the Executors class.

The biggest problem with this approach is that a dispatcher created with ExecutorService.asCoroutineDispatcher() needs to be closed with the close function. Developers often forget about this, which leads to leaking threads. Another problem is that when you create a fixed pool of threads, you are not using them efficiently: you keep unused threads alive without sharing them with other services.
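Below is a minimal sketch of how such a dispatcher can be closed once it is no longer needed; the pool size and the work inside are only illustrative:

import java.util.concurrent.Executors
import kotlinx.coroutines.*

suspend fun processWithDedicatedPool() {
    val dispatcher = Executors.newFixedThreadPool(5)
        .asCoroutineDispatcher()
    try {
        withContext(dispatcher) {
            // blocking work would go here
        }
    } finally {
        // forgetting this call keeps the pool's threads alive
        dispatcher.close()
    }
}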

Dispatcher limited to a single thread

For all dispatchers using multiple threads, we need to consider the


shared state problem. Notice that in the example below 10,000 corou-
tines are increasing i by 1. So, its value should be 10,000, but it
is a smaller number. This is a result of a shared state (i property)
modification on multiple threads at the same time.

var i = 0

suspend fun main(): Unit = coroutineScope {
    repeat(10_000) {
        launch(Dispatchers.IO) { // or Default
            i++
        }
    }
    delay(1000)
    println(i) // ~9930
}

There are many ways to solve this problem (most will be described in The problem with shared state chapter), but one option is to use a dispatcher with just a single thread. If we use just a single thread at a time, we do not need any other synchronization. The classic way to do this was to create such a dispatcher using Executors.

val dispatcher = Executors.newSingleThreadExecutor()
    .asCoroutineDispatcher()

// previously:
// val dispatcher = newSingleThreadContext("My name")

The problem is that this dispatcher keeps an extra thread active, and it needs to be closed when it is not used anymore. A modern solution is to use Dispatchers.Default or Dispatchers.IO (if we block threads) with parallelism limited to 1.

var i = 0

suspend fun main(): Unit = coroutineScope {
    val dispatcher = Dispatchers.Default
        .limitedParallelism(1)

    repeat(10000) {
        launch(dispatcher) {
            i++
        }
    }
    delay(1000)
    println(i) // 10000
}

The biggest disadvantage is that because we have only one thread,


our calls will be handled sequentially if we block it.

suspend fun main(): Unit = coroutineScope {
    val dispatcher = Dispatchers.Default
        .limitedParallelism(1)

    val job = Job()

    repeat(5) {
        launch(dispatcher + job) {
            Thread.sleep(1000)
        }
    }

    job.complete()
    val time = measureTimeMillis { job.join() }
    println("Took $time") // Took 5006
}

Unconfined dispatcher

The last dispatcher we need to discuss is Dispatchers.Unconfined. This dispatcher is different from the previous ones as it does not change any threads. When it is started, it runs on the thread on which it was started. If it is resumed, it runs on the thread that resumed it.

suspend fun main(): Unit =
    withContext(newSingleThreadContext("Thread1")) {
        var continuation: Continuation<Unit>? = null

        launch(newSingleThreadContext("Thread2")) {
            delay(1000)
            continuation?.resume(Unit)
        }

        launch(Dispatchers.Unconfined) {
            println(Thread.currentThread().name) // Thread1

            suspendCancellableCoroutine<Unit> {
                continuation = it
            }

            println(Thread.currentThread().name) // Thread2

            delay(1000)

            println(Thread.currentThread().name)
            // kotlinx.coroutines.DefaultExecutor
            // (used by delay)
        }
    }

This is sometimes useful for unit testing. Imagine that you need to test a function that calls launch. Synchronizing the time might not be easy. One solution is to use Dispatchers.Unconfined instead of all other dispatchers. If it is used in all scopes, everything runs on the same thread, and we can more easily control the order of operations. This trick is not needed if we use runTest from kotlinx-coroutines-test. We will discuss this later in the book.
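Below is a minimal sketch of this trick; the class, its callback, and the way the dispatcher is injected are invented for illustration, not taken from a real project:

import kotlinx.coroutines.*

class SomePresenter(
    private val onDone: () -> Unit,
    // production code would pass Dispatchers.Main here
    private val dispatcher: CoroutineDispatcher,
) {
    private val scope = CoroutineScope(dispatcher)

    fun onCreate() {
        scope.launch {
            // some work without suspension points
            onDone()
        }
    }
}

fun main() {
    var called = false
    val presenter = SomePresenter(
        onDone = { called = true },
        dispatcher = Dispatchers.Unconfined,
    )
    presenter.onCreate()
    // With Unconfined, launch ran immediately on this thread,
    // so the callback has already been invoked.
    println(called) // true
}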

From the performance point of view, this dispatcher is the cheapest


as it never requires thread switching. So, we might choose it if we
do not care at all on which thread our code is running. However,
in practice, it is not considered good to use it so recklessly. What if
by accident we miss a blocking call and we are running on the Main
thread? This could lead to blocking the entire application.
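A small sketch of this risk (the blocking call stands in for any accidentally blocking code):

import kotlinx.coroutines.*

fun main() {
    CoroutineScope(Dispatchers.Unconfined).launch {
        println("Running on ${Thread.currentThread().name}") // main
        Thread.sleep(1000) // blocks the thread that started the coroutine
    }
    // This line prints only after the sleep above has finished.
    println("The main thread was blocked for a second")
}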

Immediate main dispatching

There is a cost associated with dispatching a coroutine. When


withContext is called, the coroutine needs to be suspended, possibly
wait in a queue, and then resumed. This is a small but unnecessary
cost if we are already on this thread. Look at the function below:

suspend fun showUser(user: User) =
    withContext(Dispatchers.Main) {
        userNameElement.text = user.name
        // ...
    }

If this function had already been called on the main dispatcher, we would have an unnecessary cost of re-dispatching. What is more, if there were a long queue for the Main thread, then because of withContext the user data might be shown with some delay (this coroutine would need to wait for other coroutines to do their job first). To prevent this, there is Dispatchers.Main.immediate, which dispatches only if it is needed. So, if the function below is called on the Main thread, it won't be re-dispatched; it will be called immediately.

suspend fun showUser(user: User) =
    withContext(Dispatchers.Main.immediate) {
        userNameElement.text = user.name
        // ...
    }

We prefer Dispatchers.Main.immediate as the withContext argument whenever this function might have already been called from the main dispatcher. Currently, the other dispatchers do not support immediate dispatching.

Continuation interceptor

Dispatching works based on the mechanism of continuation


interception, which is built into the Kotlin language. There
is a coroutine context named ContinuationInterceptor,
whose interceptContinuation method is used to modify a
continuation when a coroutine is suspended³⁴. It also has a
releaseInterceptedContinuation method that is called when a
continuation is ended.

public interface ContinuationInterceptor :
    CoroutineContext.Element {

    companion object Key :
        CoroutineContext.Key<ContinuationInterceptor>

    fun <T> interceptContinuation(
        continuation: Continuation<T>
    ): Continuation<T>

    fun releaseInterceptedContinuation(
        continuation: Continuation<*>
    ) {
    }

    //...
}

³⁴Wrapping needs to happen only once per continuation thanks to


the caching mechanism.

The capability to wrap a continuation gives a lot of control. Dis-


patchers use interceptContinuation to wrap a continuation with
DispatchedContinuation, which runs on a specific pool of threads.
This is how dispatchers work.
The problem is that the same context is also used by many testing
libraries, for instance by runTest from kotlinx-coroutines-test.
Each element in a context has to have a unique key. This is why we
sometimes inject dispatchers to replace them in unit tests with test
dispatchers. We will get back to this topic in the chapter dedicated to
coroutine testing.

class DiscUserRepository(
    private val discReader: DiscReader,
    private val dispatcher: CoroutineContext = Dispatchers.IO,
) : UserRepository {
    override suspend fun getUser(): UserData =
        withContext(dispatcher) {
            UserData(discReader.read("userName"))
        }
}

class UserReaderTests {

    @Test
    fun `some test`() = runTest {
        // given
        val discReader = FakeDiscReader()
        val repo = DiscUserRepository(
            discReader,
            // one of coroutines testing practices
            this.coroutineContext[ContinuationInterceptor]!!
        )
        //...
    }
}

Performance of dispatchers against different tasks

To show how different dispatchers perform against different tasks, I made some benchmarks. In all these cases, the task is to run 100 independent coroutines with the same task. Different columns represent different tasks: suspending for a second, blocking for a second, a CPU-intensive operation, and a memory-intensive operation (where the majority of the time is spent on accessing, allocating, and freeing memory). Different rows represent the different dispatchers used for running these coroutines. The table below shows the average execution time in milliseconds.

                      Suspending   Blocking   CPU      Memory
Single thread         1 002        100 003    39 103   94 358
Default (8 threads)   1 002        13 003     8 473    21 461
IO (64 threads)       1 002        2 003      9 893    20 776
100 threads           1 002        1 003      16 379   21 004

There are a few important observations you can make:

1. When we are just suspending, it doesn't really matter how many threads we are using.
2. When we are blocking, the more threads we are using, the faster all these coroutines will be finished.
3. With CPU-intensive operations, Dispatchers.Default is the best option³⁵.
4. If we are dealing with a memory-intensive problem, more threads might provide some (but not a significant) improvement.

Here is how the tested functions look³⁶:

³⁵The main reason is that the more threads we use, the more time the processor needs to spend on switching between them, so it has less time for meaningful operations. Also, Dispatchers.IO should not be used for CPU-intensive operations because it is meant for blocking operations, and some other process might block all its threads.
³⁶The whole code can be found on [Link]

fun cpu(order: Order): Coffee {
    var i = Int.MAX_VALUE
    while (i > 0) {
        i -= if (i % 2 == 0) 1 else 2
    }
    return Coffee(order.copy(customer = order.customer + i))
}

fun memory(order: Order): Coffee {
    val list = List(1_000) { it }
    val list2 = List(1_000) { list }
    val list3 = List(1_000) { list2 }
    return Coffee(
        order.copy(
            customer = order.customer + list3.hashCode()
        )
    )
}

fun blocking(order: Order): Coffee {
    Thread.sleep(1000)
    return Coffee(order)
}

suspend fun suspending(order: Order): Coffee {
    delay(1000)
    return Coffee(order)
}

Summary

Dispatchers determine on which thread or thread pool a coroutine


will be running (starting and resuming). The most important op-
tions are:

• Dispatchers.Default, which we use for CPU-intensive operations;
• Dispatchers.Main, which we use to access the Main thread on Android, Swing, or JavaFX;
• Dispatchers.Main.immediate, which runs on the same thread as Dispatchers.Main but is not re-dispatched if it is not necessary;
• Dispatchers.IO, which we use when we need to do some blocking operations;
• Dispatchers.IO with limited parallelism or a custom dispatcher with a pool of threads, which we use when we might have a big number of blocking calls;
• Dispatchers.Default or Dispatchers.IO with parallelism limited to 1, or a custom dispatcher with a single thread, which is used when we need to secure shared state modifications;
• Dispatchers.Unconfined, which we use when we do not care where the coroutine will be executed.

Constructing a coroutine scope

In previous chapters, we’ve learned about the tools needed to con-


struct a proper scope. Now it is time to summarize this knowledge
and see how it is typically used. We will see two common examples:
one for Android, and one for backend development.

CoroutineScope factory function

CoroutineScope is an interface with a single property


coroutineContext.

interface CoroutineScope {
val coroutineContext: CoroutineContext
}

Therefore, we can make a class implement this interface and just


directly call coroutine builders in it.

class SomeClass : CoroutineScope {


override val coroutineContext: CoroutineContext = Job()

fun onStart() {
launch {
// ...
}
}
}

However, this approach is not very popular. On one hand, it is


convenient; on the other, it is problematic that in such a class
we can directly call other CoroutineScope methods like cancel or
ensureActive. Even accidentally, someone might cancel the whole
scope, and coroutines will not start anymore. Instead, we generally
prefer to hold a coroutine scope as an object in a property and use it
to call coroutine builders.

class SomeClass {
    val scope: CoroutineScope = ...

    fun onStart() {
        scope.launch {
            // ...
        }
    }
}

The easiest way to create a coroutine scope object is by using the


CoroutineScope factory function³⁷. It creates a scope with provided
context (and an additional Job for structured concurrency if no job is
already part of the context).

public fun CoroutineScope(


context: CoroutineContext
): CoroutineScope =
ContextScope(
if (context[Job] != null) context
else context + Job()
)

internal class ContextScope(


context: CoroutineContext
) : CoroutineScope {
override val coroutineContext: CoroutineContext = context
override fun toString(): String =
"CoroutineScope(coroutineContext=$coroutineContext)"
}
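For example, this is how the factory function might typically be used to build a scope. This is only a minimal sketch; the chosen context elements are an illustration of what the following sections construct step by step:

import kotlinx.coroutines.*

val exceptionHandler = CoroutineExceptionHandler { _, throwable ->
    println("Uncaught exception: $throwable")
}

val someScope = CoroutineScope(
    SupervisorJob() + Dispatchers.Default + exceptionHandler
)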

Constructing a scope on Android

In most Android applications, we use an architecture that is a de-


scendant of MVC: currently mainly MVVM or MVP. In these architec-
tures, we extract presentation logic into objects called ViewModels
³⁷A function that looks like a constructor is known as a fake
constructor. This pattern is explained in Effective Kotlin Item 33:
Consider factory functions instead of constructors.

or Presenters. This is where coroutines are generally started. In


other layers, like in Use Cases or Repositories, we generally just
use suspending functions. Coroutines might also be started in Frag-
ments or Activities. Regardless of where coroutines are started on
Android, how they are constructed will most likely be the same. Let’s
take a MainViewModel as an example: let’s say it needs to fetch some
data in onCreate (which is called when a user enters the screen). This
data fetching needs to happen in a coroutine which needs to be called
on some scope. We will construct a scope in the BaseViewModel so it
is defined just once for all view models. So, in the MainViewModel, we
can just use the scope property from BaseViewModel.

abstract class BaseViewModel : ViewModel() {
    protected val scope = CoroutineScope(TODO())
}

class MainViewModel(
    private val userRepo: UserRepository,
    private val newsRepo: NewsRepository,
) : BaseViewModel() {

    fun onCreate() {
        scope.launch {
            val user = userRepo.getUser()
            view.showUserData(user)
        }
        scope.launch {
            val news = newsRepo.getNews()
                .sortedByDescending { it.date }
            view.showNews(news)
        }
    }
}

Time to define a context for this scope. Given that many functions in Android need to be called on the Main thread, Dispatchers.Main is considered the best option as the default dispatcher. We will use it as a part of our default context on Android.

abstract class BaseViewModel : ViewModel() {
    protected val scope = CoroutineScope(Dispatchers.Main)
}

Second, we need to make our scope cancellable. It is a common


feature to cancel all unfinished processes once a user exits a screen
and onDestroy (or onCleared in case of ViewModels) is called. To
make our scope cancellable, we need it to have some Job (we do not
really need to add it, because if we don’t it will be added by the
CoroutineScope function anyway, but it is more explicit this way).
Then, we can cancel it in onCleared.

abstract class BaseViewModel : ViewModel() {
    protected val scope =
        CoroutineScope(Dispatchers.Main + Job())

    override fun onCleared() {
        scope.cancel()
    }
}

Even better, it is a common practice to not cancel the whole scope but
only its children. Thanks to that, as long as this view model is active,
new coroutines can start on its scope.

abstract class BaseViewModel : ViewModel() {
    protected val scope =
        CoroutineScope(Dispatchers.Main + Job())

    override fun onCleared() {
        scope.coroutineContext.cancelChildren()
    }
}

We also want different coroutines started on this scope to be indepen-


dent. When we use Job, if any of the children is cancelled due to an
error, the parent and all its other children are cancelled as well. Even
if there was an exception when loading user data, it should not stop
us from seeing the news. To have such independence, we should use
SupervisorJob instead of Job.

abstract class BaseViewModel : ViewModel() {
    protected val scope =
        CoroutineScope(Dispatchers.Main + SupervisorJob())

    override fun onCleared() {
        scope.coroutineContext.cancelChildren()
    }
}

The last important functionality is the default way of handling un-


caught exceptions. On Android, we often define what should happen
in the case of different kinds of exceptions. If we receive a 401
Unauthorized from an HTTP call, we might open the login screen.
On a 503 Service Unavailable, we might show a server problem
message. In other cases, we might show dialogs, snackbars, or toasts.
We often define these exception handlers once, for instance in some
BaseActivity, and then pass them to view models (often via con-
structor). Then, we can use CoroutineExceptionHandler to call this
function if there is an unhandled exception.

abstract class BaseViewModel(
    private val onError: (Throwable) -> Unit
) : ViewModel() {
    private val exceptionHandler =
        CoroutineExceptionHandler { _, throwable ->
            onError(throwable)
        }

    private val context =
        Dispatchers.Main + SupervisorJob() + exceptionHandler

    protected val scope = CoroutineScope(context)

    override fun onCleared() {
        scope.coroutineContext.cancelChildren()
    }
}

An alternative would be to hold exceptions as a live data property,


which is observed in the BaseActivity or another view element.

abstract class BaseViewModel : ViewModel() {
    private val _failure: MutableLiveData<Throwable> =
        MutableLiveData()
    val failure: LiveData<Throwable> = _failure

    private val exceptionHandler =
        CoroutineExceptionHandler { _, throwable ->
            _failure.value = throwable
        }

    private val context =
        Dispatchers.Main + SupervisorJob() + exceptionHandler

    protected val scope = CoroutineScope(context)

    override fun onCleared() {
        scope.coroutineContext.cancelChildren()
    }
}

viewModelScope and lifecycleScope

In modern Android applications, instead of defining your own scope, you can also use viewModelScope (which needs androidx.lifecycle:lifecycle-viewmodel-ktx version 2.2.0 or higher) or lifecycleScope (which needs androidx.lifecycle:lifecycle-runtime-ktx version 2.2.0 or higher). How they work is nearly identical to what we've just constructed: they use Dispatchers.Main.immediate and SupervisorJob, and they cancel the job when the view model or lifecycle owner gets destroyed.

// Implementation from lifecycle-viewmodel-ktx version 2.4.0
public val ViewModel.viewModelScope: CoroutineScope
    get() {
        val scope: CoroutineScope? = this.getTag(JOB_KEY)
        if (scope != null) {
            return scope
        }
        return setTagIfAbsent(
            JOB_KEY,
            CloseableCoroutineScope(
                SupervisorJob() +
                    Dispatchers.Main.immediate
            )
        )
    }

internal class CloseableCoroutineScope(
    context: CoroutineContext
) : Closeable, CoroutineScope {
    override val coroutineContext: CoroutineContext = context

    override fun close() {
        coroutineContext.cancel()
    }
}

Using viewModelScope and lifecycleScope is convenient and rec-


ommended if we do not need any special context as a part of our
scope (like CoroutineExceptionHandler). This is why this approach
is chosen by many (maybe most) Android applications.

class ArticlesListViewModel(
    private val produceArticles: ProduceArticlesUseCase,
) : ViewModel() {

    private val _progressBarVisible =
        MutableStateFlow(false)
    val progressBarVisible: StateFlow<Boolean> =
        _progressBarVisible

    private val _articlesListState =
        MutableStateFlow<ArticlesListState>(Initial)
    val articlesListState: StateFlow<ArticlesListState> =
        _articlesListState

    fun onCreate() {
        viewModelScope.launch {
            _progressBarVisible.value = true
            val articles = produceArticles.produce()
            _articlesListState.value =
                ArticlesLoaded(articles)
            _progressBarVisible.value = false
        }
    }
}

Constructing a coroutine on the backend

Many backend frameworks have built-in support for suspending functions. Spring Boot allows controller functions to be suspending. In Ktor, all handlers are suspending functions by default. Thanks to that, we rarely need to create a scope ourselves. However, assuming that we do (maybe because we need to start a task or work with an older version of Spring), what we most likely need is:

• a custom dispatcher with a pool of threads (or Dispatchers.IO);
• SupervisorJob to make different coroutines independent;
• probably some CoroutineExceptionHandler to respond with proper error codes, send dead letters³⁸, or log problems.

³⁸This is a popular microservices pattern that is used when we use


a software bus, like in Apache Kafka.

@Configuration
public class CoroutineScopeConfiguration {

    @Bean(name = "coroutineDispatcher")
    fun coroutineDispatcher(): CoroutineDispatcher =
        Dispatchers.IO.limitedParallelism(5)

    @Bean(name = "coroutineExceptionHandler")
    fun coroutineExceptionHandler() =
        CoroutineExceptionHandler { _, throwable ->
            FirebaseCrashlytics.getInstance()
                .recordException(throwable)
        }

    @Bean
    fun coroutineScope(
        coroutineDispatcher: CoroutineDispatcher,
        coroutineExceptionHandler: CoroutineExceptionHandler,
    ) = CoroutineScope(
        SupervisorJob() +
            coroutineDispatcher +
            coroutineExceptionHandler
    )
}

Such a scope is most often injected into classes via the constructor.
Thanks to that, the scope can be defined once to be used on many
classes, and it can be easily replaced with a different scope for testing
purposes.
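A minimal sketch of such constructor injection might look as follows; the notification-sending class and its types are invented for illustration, not taken from a real project:

import kotlinx.coroutines.*

class Notification(val message: String)

interface NotificationsClient {
    suspend fun send(notification: Notification)
}

class NotificationsSender(
    private val client: NotificationsClient,
    // the scope defined in the configuration above is injected here,
    // so tests can pass a scope of their own
    private val scope: CoroutineScope,
) {
    fun sendNotifications(notifications: List<Notification>) {
        for (notification in notifications) {
            scope.launch { client.send(notification) }
        }
    }
}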

Constructing a scope for additional calls

As explained in the Additional operations section of the Coroutine scope functions chapter, we often make scopes for starting additional operations. These scopes are then typically injected via arguments to functions or the constructor. If we only plan to use these scopes to start suspending calls, it is enough if they just have a SupervisorJob.

val analyticsScope = CoroutineScope(SupervisorJob())

All their exceptions will only be shown in logs; so, if you want to send
them to a monitoring system, use CoroutineExceptionHandler.

private val exceptionHandler =
    CoroutineExceptionHandler { _, throwable ->
        FirebaseCrashlytics.getInstance()
            .recordException(throwable)
    }

val analyticsScope = CoroutineScope(
    SupervisorJob() + exceptionHandler
)

Another common customization is setting a different dispatcher. For instance, use Dispatchers.IO if you might have blocking calls on this scope, or use Dispatchers.Main if you might want to modify the main view on Android (if we set Dispatchers.Main, testing on Android is easier).

val analyticsScope = CoroutineScope(
    SupervisorJob() + Dispatchers.IO
)

Summary

I hope that after this chapter you will know how to construct scopes
in most typical situations. This is important when using coroutines
in real-life projects. This is enough for many small and simple appli-
cations, but for those that are more serious we still need to cover two
more topics: proper synchronization and testing.

The problem with shared state

Before we start, take a look at the UserDownloader class below. It


allows us to fetch a user by id or to get all the users that were
downloaded before. What is wrong with this implementation?

class UserDownloader(
    private val api: NetworkService
) {
    private val users = mutableListOf<User>()

    fun downloaded(): List<User> = users.toList()

    suspend fun fetchUser(id: Int) {
        val newUser = api.fetchUser(id)
        users.add(newUser)
    }
}

Notice the use of the defensive copy toList. This is done


to avoid a conflict between reading the object returned
by downloaded and adding an element to the mutable list.
We could also represent users using the read-only list
(List<User>) and the read-write property (var). Then, we
would not need to make a defensive copy, and downloaded
would not need to be protected at all, but we would
decrease the performance of adding elements to the col-
lection. I personally prefer the second approach, but I
decided to show the one using a mutable collection as I
see it more often in real-life projects.

The above implementation is not prepared for concurrent use. Each


fetchUser call modifies users. This is fine as long as this function is
not started on more than one thread at the same time. Since it can be
started on more than one thread at the same time, we say users is a
shared state, therefore it needs to be secured. This is because concur-
rent modifications can lead to conflicts. This problem is presented
below:

class FakeNetworkService : NetworkService {
    override suspend fun fetchUser(id: Int): User {
        delay(2)
        return User("User$id")
    }
}

suspend fun main() {
    val downloader = UserDownloader(FakeNetworkService())
    coroutineScope {
        repeat(1_000_000) {
            launch {
                downloader.fetchUser(it)
            }
        }
    }
    print(downloader.downloaded().size) // ~998242
}

Because there are multiple threads interacting with the same in-
stance, the above code will print a number smaller than 1,000,000
(like 998,242 for example), or it might throw an exception.

Exception in thread "main"
java.lang.ArrayIndexOutOfBoundsException: 22
    at java.util.ArrayList.add(ArrayList.java)
    ...

This is a typical problem with shared state modifications. To see it more clearly, I will present a simpler example: multiple threads incrementing an integer. I am using massiveRun to call an operation 1,000 times on 1,000 coroutines using Dispatchers.Default. After these operations, the number should be 1,000,000 (1,000 * 1,000). However, without any synchronization the real result will be smaller because of the conflicts.

var counter = 0

fun main() = runBlocking {
    massiveRun {
        counter++
    }
    println(counter) // ~567231
}

suspend fun massiveRun(action: suspend () -> Unit) =
    withContext(Dispatchers.Default) {
        repeat(1000) {
            launch {
                repeat(1000) { action() }
            }
        }
    }

To understand why the result is not 1,000,000, imagine a scenario in which two threads try to increment the same number at the same time. Let's say that the initial value is 0. The first thread reads the current value 0, and then the processor switches to the second thread. The second thread reads 0 as well, increments it to 1, and stores it in the variable. Then we switch back to the first thread, which finishes what it started: it has 0, so it increments it to 1 and stores it. As a result, the variable is 1, but it should be 2. This is how some operations get lost.

Blocking synchronization

The problem above can be solved using classic tools we know from
Java, like synchronized block or synchronized collections.

var counter = 0

fun main() = runBlocking {


val lock = Any()
massiveRun {
synchronized(lock) { // We are blocking threads!
counter++
}
}
println("Counter = $counter") // 1000000
}

This solution works, but there are a few problems. The biggest one is that inside a synchronized block you cannot use suspending functions. The second one is that this block blocks threads while a coroutine is waiting for its turn. I hope that after the chapter about dispatchers you understand that we do not want to block threads. What if it is the main thread? What if we only have a limited pool of threads? Why waste these resources? We should use coroutine-specific tools instead: ones that do not block but instead suspend or avoid conflicts. So, let's set aside this solution and explore some others.

Atomics

There is another Java solution that can help us in some simple cases.
Java has a set of atomic values. All their operations are fast and
guaranteed to be “thread-safe”. They are called atomic. Their oper-
ations are implemented at a low level without locks, so this solution
is efficient and appropriate for us. There are different atomic values
we can use. For our case, we can use AtomicInteger.

private var counter = AtomicInteger()

fun main() = runBlocking {
    massiveRun {
        counter.incrementAndGet()
    }
    println(counter.get()) // 1000000
}

It works perfectly here, but the utility of atomic values is generally


very limited, therefore we need to be careful: just knowing a single
operation will be atomic does not help us when we have a bundle of
operations.

private var counter = AtomicInteger()

fun main() = runBlocking {
    massiveRun {
        counter.set(counter.get() + 1)
    }
    println(counter.get()) // ~430467
}

To secure our UserDownloader, we could use the AtomicReference


wrapping around the read-only list of users. We can use the
getAndUpdate atomic function to update its value without conflicts.

class UserDownloader(
    private val api: NetworkService
) {
    private val users = AtomicReference(listOf<User>())

    fun downloaded(): List<User> = users.get()

    suspend fun fetchUser(id: Int) {
        val newUser = api.fetchUser(id)
        users.getAndUpdate { it + newUser }
    }
}

We often use atomics to secure a single primitive or a single refer-


ence, but for more complicated cases we still need better tools.

A dispatcher limited to a single thread

We saw a dispatcher with parallelism limited to a single thread in the Dispatchers chapter. This is the easiest solution for most problems with a shared state.

val dispatcher = Dispatchers.IO
    .limitedParallelism(1)

var counter = 0

fun main() = runBlocking {
    massiveRun {
        withContext(dispatcher) {
            counter++
        }
    }
    println(counter) // 1000000
}

In practice, this approach can be used in two ways. The first approach is known as coarse-grained thread confinement. This is an easy approach whereby we just wrap the whole function with withContext, with a dispatcher limited to a single thread. This solution is easy and eliminates conflicts, but the problem is that we lose multithreading capabilities for the whole function. Let's take a look at the example below. api.fetchUser(id) could be started concurrently on many threads, but its body will be running on a dispatcher that is limited to a single thread. As a result, this function's execution could slow down when we invoke functions that are blocking or CPU-intensive.

class UserDownloader(
    private val api: NetworkService
) {
    private val users = mutableListOf<User>()
    private val dispatcher = Dispatchers.IO
        .limitedParallelism(1)

    suspend fun downloaded(): List<User> =
        withContext(dispatcher) {
            users.toList()
        }

    suspend fun fetchUser(id: Int) = withContext(dispatcher) {
        val newUser = api.fetchUser(id)
        users += newUser
    }
}

The second approach is known as fine-grained thread confinement.


In this approach, we wrap only those statements which modify the
state. In our example, these are all the lines where users is used. This
approach is more demanding, but it offers us better performance if
the functions excluded from our critical section (like fetchUser in
our example) are blocking or CPU-intensive. If they are just plain
suspending functions, the performance improvement is unlikely to
be seen.

class UserDownloader(
    private val api: NetworkService
) {
    private val users = mutableListOf<User>()
    private val dispatcher = Dispatchers.IO
        .limitedParallelism(1)

    suspend fun downloaded(): List<User> =
        withContext(dispatcher) {
            users.toList()
        }

    suspend fun fetchUser(id: Int) {
        val newUser = api.fetchUser(id)
        withContext(dispatcher) {
            users += newUser
        }
    }
}

In most cases, using a dispatcher with a single thread is not only easy,
but also efficient, thanks to the fact that standard dispatchers share
the same pool of threads.

Mutex

The last popular approach is to use a Mutex. You can imagine it as


a room with a single key (or maybe a toilet at a cafeteria). Its most
important function is lock. When the first coroutine calls it, it kind
of takes the key and passes through lock without suspension. If
another coroutine then calls lock, it will be suspended until the first
coroutine calls unlock (like a person waiting for a key to the toilet³⁹).
If another coroutine reaches the lock function, it is suspended and
³⁹To avoid giving a false impression of my home country, asking
for the key to a toilet is something I mainly experience outside of
Poland. For instance, in Poland practically every petrol station has a
toilet available for everyone, no key required (and they are generally
clean and tidy). However, in many other European countries, toilets
are better protected from people who might try to use them without
buying anything.

put in a queue, just after the second coroutine. When the first
coroutine finally calls the unlock function, it gives back the key, so
the second coroutine (the first one in the queue) is now resumed and
can finally pass through the lock function. Thus, only one coroutine
will be between lock and unlock.

suspend fun main() = coroutineScope {


repeat(5) {
launch {
delayAndPrint()
}
}
}

val mutex = Mutex()

suspend fun delayAndPrint() {
    mutex.lock()
    delay(1000)
    println("Done")
    mutex.unlock()
}
// (1 sec)
// Done
// (1 sec)
// Done
// (1 sec)
// Done
// (1 sec)
// Done
// (1 sec)
// Done

Using lock and unlock directly is risky, as any exception (or premature return) in between would lead to the key never being given back (unlock never being called); as a result, no other coroutines would be able to pass through the lock. This is a serious problem known as a deadlock (imagine a toilet that cannot be used because someone was in a hurry and forgot to give back the key). So, instead we can use the withLock function, which starts with lock but calls unlock in the finally block so that any exceptions thrown inside the block will successfully release the lock. In use, it is similar to a synchronized block.

val mutex = Mutex()

var counter = 0

fun main() = runBlocking {
    massiveRun {
        mutex.withLock {
            counter++
        }
    }
    println(counter) // 1000000
}

The important advantage of mutex over a synchronized block is


that we suspend a coroutine instead of blocking a thread. This is
a safer and lighter approach. Compared to using a dispatcher with
parallelism limited to a single thread, mutex is lighter, and in some
cases it might offer better performance. On the other hand, it is also
harder to use it properly. It has one important danger: a coroutine
cannot get past the lock twice (maybe the key stays in the door, so
another door requiring the same key would be impossible to get past).
Execution of the code below will result in a program state called
deadlock - it will be blocked forever.

suspend fun main() {
    val mutex = Mutex()
    println("Started")
    mutex.withLock {
        mutex.withLock {
            println("Will never be printed")
        }
    }
}
// Started
// (runs forever)

The second problem with mutex is that it is not unlocked when a


coroutine is suspended. Take a look at the code below. It takes over 5
seconds because mutex is still locked during delay.

class MessagesRepository {
    private val messages = mutableListOf<String>()
    private val mutex = Mutex()

    suspend fun add(message: String) = mutex.withLock {
        delay(1000) // we simulate network call
        messages.add(message)
    }
}

suspend fun main() {
    val repo = MessagesRepository()

    val timeMillis = measureTimeMillis {
        coroutineScope {
            repeat(5) {
                launch {
                    repo.add("Message$it")
                }
            }
        }
    }
    println(timeMillis) // ~5120
}

When we use a dispatcher that is limited to a single thread, we don’t


have such a problem. When a delay or a network call suspends a
coroutine, the thread can be used by other coroutines.

class MessagesRepository {
    private val messages = mutableListOf<String>()
    private val dispatcher = Dispatchers.IO
        .limitedParallelism(1)

    suspend fun add(message: String) =
        withContext(dispatcher) {
            delay(1000) // we simulate network call
            messages.add(message)
        }
}

suspend fun main() {
    val repo = MessagesRepository()

    val timeMillis = measureTimeMillis {
        coroutineScope {
            repeat(5) {
                launch {
                    repo.add("Message$it")
                }
            }
        }
    }
    println(timeMillis) // 1058
}

This is why we avoid using mutex to wrap whole functions (coarse-


grained approach). When we use it at all, we need to do so with great
care to avoid locking twice and calling suspending functions.

class MongoUserRepository(
    //...
) : UserRepository {
    private val mutex = Mutex()

    override suspend fun updateUser(
        userId: String,
        userUpdate: UserUpdate
    ): Unit = mutex.withLock {
        // Yes, update should happen on db,
        // not via multiple functions,
        // this is just an example.
        val currentUser = getUser(userId) // Deadlock!
        deleteUser(userId) // Deadlock!
        addUser(currentUser.updated(userUpdate)) // Deadlock!
    }

    override suspend fun getUser(
        userId: String
    ): User = mutex.withLock {
        // ...
    }

    override suspend fun deleteUser(
        userId: String
    ): Unit = mutex.withLock {
        // ...
    }

    override suspend fun addUser(
        user: User
    ): User = mutex.withLock {
        // ...
    }
}

Fine-grained thread confinement (wrapping only the place where


we modify the shared state) would help, but in the above example I
would prefer to use a dispatcher that is limited to a single thread.
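To illustrate the fine-grained alternative, here is a small self-contained sketch; the in-memory repository and its operations are invented for illustration. Only the state mutation is wrapped in the lock, so no public function calls another one while holding it:

import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

class InMemoryUserRepository {
    private val users = mutableListOf<String>()
    private val mutex = Mutex()

    suspend fun getUsers(): List<String> =
        mutex.withLock { users.toList() }

    suspend fun renameUser(oldName: String, newName: String) {
        // reading happens outside the lock, so there is no nested locking
        if (getUsers().contains(oldName)) {
            mutex.withLock {
                users.remove(oldName)
                users.add(newName)
            }
        }
    }
}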

Summary

There are many ways in which coroutines can be orchestrated to avoid conflicts when modifying a shared state. The most practical solution is to modify a shared state in a dispatcher that is limited to a single thread. This can be a fine-grained thread confinement that only wraps concrete places where synchronization is needed; alternatively, it can be a coarse-grained thread confinement that wraps the whole function. The second approach is easier, but it might be slower. We might also use atomic values or a mutex.

Testing Kotlin Coroutines

Testing suspending functions in most cases is not different from testing normal functions. Take a look at fetchUserData below from FetchUserUseCase. Checking whether it constructs the user as expected can be easily achieved thanks to a few fakes⁴⁰ (or mocks⁴¹) and simple assertions.

class FetchUserUseCase(
    private val repo: UserDataRepository,
) {

    suspend fun fetchUserData(): User = coroutineScope {
        val name = async { repo.getName() }
        val friends = async { repo.getFriends() }
        val profile = async { repo.getProfile() }
        User(
            name = name.await(),
            friends = friends.await(),
            profile = profile.await()
        )
    }
}

class FetchUserDataTest {

@Test
fun `should construct user`() = runBlocking {
// given
val repo = FakeUserDataRepository()
val useCase = FetchUserUseCase(repo)

⁴⁰A fake is a class that implements an interface but contains fixed


data and no logic. They are useful to mimic a concrete behavior for
testing.
⁴¹Mocks are universal simulated objects that mimic the behavior
of real objects in controlled ways. We generally create them using
libraries, like MockK, which support mocking suspending functions.
In the examples below, I decided to use fakes to avoid using an
external library.

// when
val result = useCase.fetchUserData()

// then
val expectedUser = User(
name = "Ben",
friends = listOf(Friend("some-friend-id-1")),
profile = Profile("Example description")
)
assertEquals(expectedUser, result)
}

class FakeUserDataRepository : UserDataRepository {


override suspend fun getName(): String = "Ben"

override suspend fun getFriends(): List<Friend> =


listOf(Friend("some-friend-id-1"))

override suspend fun getProfile(): Profile =


Profile("Example description")
}
}

My method for testing logic should not be used as a


reference. There are many conflicting ideas for how tests
should look. I’ve used fakes here instead of mocks so as
not to introduce any external library (I also personally
prefer them). I’ve also tried to keep all tests minimalistic
to make them easier to read.

Similarly, in many other cases, if we are interested in what the


suspending function does, we practically need nothing else but
runBlocking and classic tools for asserting. This is what unit
tests look like in many projects. Here are a few unit tests from Kt.
Academy backend:

class UserTests : KtAcademyFacadeTest() {

@Test
fun `should modify user details`() = runBlocking {
// given
thereIsUser(aUserToken, aUserId)

// when
[Link](
aUserToken,
PatchUserSelfRequest(
bio = aUserBio,
bioPl = aUserBioPl,
publicKey = aUserPublicKey,
customImageUrl = aCustomImageUrl
)
)

// then
with(findUser(aUserId)) {
assertEquals(aUserBio, bio)
assertEquals(aUserBioPl, bioPl)
assertEquals(aUserPublicKey, publicKey)
assertEquals(aCustomImageUrl, customImageUrl)
}
}

//...
}

Integration tests can be implemented in the same way. We just use


runBlocking, and there is nearly no difference between testing how
suspending and blocking functions behave.

Testing time dependencies

The difference arises when we want to start testing time dependen-


cies. For example, think of the following functions:

suspend fun produceCurrentUserSeq(): User {
    val profile = repo.getProfile()
    val friends = repo.getFriends()
    return User(profile, friends)
}

suspend fun produceCurrentUserSym(): User = coroutineScope {
    val profile = async { repo.getProfile() }
    val friends = async { repo.getFriends() }
    User(profile.await(), friends.await())
}

Both functions produce the same result; the difference is that the first one does it sequentially, while the second one does it simultaneously. So, if fetching the profile and the friends takes 1 second each, the first function would require around 2 seconds, whereas the second would require only 1. How would you test this?
Notice that the difference arises only when the execution of getProfile and getFriends truly takes some time. If they are immediate, both ways of producing the user are indistinguishable. So, we might help ourselves by delaying fake functions using delay to simulate a delayed data loading scenario:

class FakeDelayedUserDataRepository : UserDataRepository {

override suspend fun getProfile(): Profile {


delay(1000)
return Profile("Example description")
}

override suspend fun getFriends(): List<Friend> {


delay(1000)
return listOf(Friend("some-friend-id-1"))
}
}

Now, the difference will be visible in unit tests: the produceCurrentUserSeq call will take around 2 seconds, while the produceCurrentUserSym call will take around 1 second. The problem is that we do not want a single unit test to take so much time. We typically have thousands of unit tests in our projects, and we want all of them to execute as quickly as possible. How to have your cake and eat it too? For that, we need to operate on simulated time. Here comes the kotlinx-coroutines-test library to the rescue with its StandardTestDispatcher.

This chapter presents the kotlinx-coroutines-test functions and classes introduced in version 1.6. If you use an older version of this library, in most cases it should be enough to use runBlockingTest instead of runTest, TestCoroutineDispatcher instead of StandardTestDispatcher, and TestCoroutineScope instead of TestScope. Also, advanceTimeBy in older versions behaves like advanceTimeBy followed by runCurrent in versions 1.6 and newer. The detailed differences are described in the migration guide in the kotlinx-coroutines-test module.

TestCoroutineScheduler and StandardTestDispatcher

When we call delay, our coroutine is suspended and resumed


after a set time. This behavior can be altered thanks to
TestCoroutineScheduler from kotlinx-coroutines-test, which
makes delay operate on virtual time, which is fully simulated and
does not depend on real time.

fun main() {
    val scheduler = TestCoroutineScheduler()

    println(scheduler.currentTime) // 0
    scheduler.advanceTimeBy(1_000)
    println(scheduler.currentTime) // 1000
    scheduler.advanceTimeBy(1_000)
    println(scheduler.currentTime) // 2000
}

TestCoroutineScheduler as well as StandardTestDispatcher,


TestScope and runTest are still experimental.

To use TestCoroutineScheduler on coroutines, we should


use a dispatcher that supports it. The standard option is
StandardTestDispatcher. Unlike most dispatchers, it is not used
just to decide on which thread a coroutine should run. Coroutines
started with such a dispatcher will not run until we advance virtual
time. The most typical way to do this is by using advanceUntilIdle,
which advances virtual time and invokes all the operations that
would be called during that time if this were real time.

fun main() {
val scheduler = TestCoroutineScheduler()
val testDispatcher = StandardTestDispatcher(scheduler)

CoroutineScope(testDispatcher).launch {
println("Some work 1")
delay(1000)
println("Some work 2")
delay(1000)
println("Coroutine done")
}

println("[${scheduler.currentTime}] Before")
scheduler.advanceUntilIdle()
println("[${scheduler.currentTime}] After")
}
// [0] Before
// Some work 1
// Some work 2
// Coroutine done
// [2000] After

StandardTestDispatcher creates TestCoroutineScheduler by default,


so we do not need to do so explicitly. We can access it with the
scheduler property.

fun main() {
val dispatcher = StandardTestDispatcher()

CoroutineScope(dispatcher).launch {
println("Some work 1")
delay(1000)
println("Some work 2")
delay(1000)
println("Coroutine done")
}

println("[${dispatcher.scheduler.currentTime}] Before")
dispatcher.scheduler.advanceUntilIdle()
println("[${dispatcher.scheduler.currentTime}] After")
}
// [0] Before
// Some work 1
// Some work 2
// Coroutine done
// [2000] After

It is important to notice that StandardTestDispatcher does not ad-


vance time by itself. We need to do this, otherwise our coroutine will
never be resumed.

fun main() {
val testDispatcher = StandardTestDispatcher()

runBlocking(testDispatcher) {
delay(1)
println("Coroutine done")
}
}
// (code runs forever)

Another way to push time is using advanceTimeBy with a concrete number of milliseconds. This function advances time and executes all operations that happened in the meantime. This means that if we push by 2 milliseconds, everything that was delayed by less than that time will be resumed. To resume operations scheduled exactly at the second millisecond, we need to additionally invoke the runCurrent function.

fun main() {
val testDispatcher = StandardTestDispatcher()

CoroutineScope(testDispatcher).launch {
delay(1)
println("Done1")
}
CoroutineScope(testDispatcher).launch {
delay(2)
println("Done2")
}
testDispatcher.scheduler.advanceTimeBy(2) // Done1
testDispatcher.scheduler.runCurrent() // Done2
}

Here is a bigger example of using advanceTimeBy together with


runCurrent.

fun main() {
val testDispatcher = StandardTestDispatcher()

CoroutineScope(testDispatcher).launch {
delay(2)
print("Done")
}

CoroutineScope(testDispatcher).launch {
delay(4)
print("Done2")
}

CoroutineScope(testDispatcher).launch {
delay(6)
print("Done3")
}
for (i in 1..5) {
print(".")
testDispatcher.scheduler.advanceTimeBy(1)
testDispatcher.scheduler.runCurrent()
}
}
// ..Done..Done2.

How does it work under the hood? When delay


is called, it checks if the dispatcher (class with
ContinuationInterceptor key) implements the Delay
interface (StandardTestDispatcher does). For such
dispatchers, it calls their scheduleResumeAfterDelay
function instead of the one from the DefaultDelay,
which waits in real time.

To see that virtual time is truly independent of real time, see the example below. Adding Thread.sleep will not influence the coroutine with StandardTestDispatcher. Note also that the call to advanceUntilIdle takes only a few milliseconds, so it does not wait for any real time. It immediately pushes the virtual time and executes coroutine operations.

fun main() {
val dispatcher = StandardTestDispatcher()

CoroutineScope(dispatcher).launch {
delay(1000)
println("Coroutine done")
}

Thread.sleep(Random.nextLong(2000)) // Does not matter
// how much time we wait here, it will not influence
// the result

val time = measureTimeMillis {
println("[${dispatcher.scheduler.currentTime}] Before")
dispatcher.scheduler.advanceUntilIdle()
println("[${dispatcher.scheduler.currentTime}] After")
}
println("Took $time ms")
}
// [0] Before
// Coroutine done
// [1000] After
// Took 15 ms (or other small number)

In the previous examples, we were using StandardTestDispatcher


and wrapping it with a scope. Instead, we could use TestScope,
which does the same (and it collects all exceptions with
CoroutineExceptionHandler). The trick is that on this scope we
can also use functions like advanceUntilIdle, advanceTimeBy, or the
currentTime property , all of which are delegated to the scheduler
used by this scope. This is very convenient.

fun main() {
    val scope = TestScope()

    scope.launch {
        delay(1000)
        println("First done")
        delay(1000)
        println("Coroutine done")
    }

    println("[${scope.currentTime}] Before") // [0] Before
    scope.advanceTimeBy(1000)
    scope.runCurrent() // First done
    println("[${scope.currentTime}] Middle") // [1000] Middle
    scope.advanceUntilIdle() // Coroutine done
    println("[${scope.currentTime}] After") // [2000] After
}

We will later see that StandardTestDispatcher is often used


directly on Android to test ViewModels, Presenters, Fragments,
etc. We could also use it to test the produceCurrentUserSeq and
produceCurrentUserSym functions by starting them in a coroutine,
advancing time until idle, and checking how much simulated time
they took. However, this would be quite complicated; instead, we
should use runTest, which is designed for such purposes.

runTest

runTest is the most commonly used function from


kotlinx-coroutines-test. It starts a coroutine with TestScope
and immediately advances it until idle. Within its coroutine, the
scope is of type TestScope, so we can check currentTime at any point.
Therefore, we can check how time flows in our coroutines, while
our tests take milliseconds.

class TestTest {

@Test
fun test1() = runTest {
assertEquals(0, currentTime)
delay(1000)
assertEquals(1000, currentTime)
}

@Test
fun test2() = runTest {
assertEquals(0, currentTime)
coroutineScope {
launch { delay(1000) }
launch { delay(1500) }
launch { delay(2000) }
}
assertEquals(2000, currentTime)
}
}

Let’s get back to our functions, where we loaded user data sequen-
tially and simultaneously. With runTest, testing them is easy. As-
suming that our fake repository needs 1 second for each function
call, sequential processing should take 2 seconds, and simultaneous
processing should take only 1. Thanks to the fact we are using virtual
time, our tests are immediate, and the values of currentTime are
precise.

@Test
fun `Should produce user sequentially`() = runTest {
    // given
    val userDataRepository = FakeDelayedUserDataRepository()
    val useCase = ProduceUserUseCase(userDataRepository)

    // when
    useCase.produceCurrentUserSeq()

    // then
    assertEquals(2000, currentTime)
}

@Test
fun `Should produce user simultaneously`() = runTest {
    // given
    val userDataRepository = FakeDelayedUserDataRepository()
    val useCase = ProduceUserUseCase(userDataRepository)

    // when
    useCase.produceCurrentUserSym()

    // then
    assertEquals(1000, currentTime)
}

Since it is an important use case, let’s see a full example of testing a


sequential function with all required classes and interfaces:

class FetchUserUseCase(
    private val repo: UserDataRepository,
) {

    suspend fun fetchUserData(): User = coroutineScope {
        val name = async { repo.getName() }
        val friends = async { repo.getFriends() }
        val profile = async { repo.getProfile() }
        User(
            name = name.await(),
            friends = friends.await(),
            profile = profile.await()
        )
    }
}

class FetchUserDataTest {

@Test
fun `should load data concurrently`() = runTest {
// given
val userRepo = FakeUserDataRepository()
val useCase = FetchUserUseCase(userRepo)

// when
useCase.fetchUserData()

// then
assertEquals(1000, currentTime)
}

@Test
fun `should construct user`() = runTest {
// given
val userRepo = FakeUserDataRepository()
val useCase = FetchUserUseCase(userRepo)

// when
val result = useCase.fetchUserData()

// then
val expectedUser = User(
name = "Ben",
friends = listOf(Friend("some-friend-id-1")),
profile = Profile("Example description")
)
assertEquals(expectedUser, result)
}

class FakeUserDataRepository : UserDataRepository {


override suspend fun getName(): String {
delay(1000)
return "Ben"
}

override suspend fun getFriends(): List<Friend> {


delay(1000)
return listOf(Friend("some-friend-id-1"))
}

override suspend fun getProfile(): Profile {


delay(1000)
return Profile("Example description")
}
}
}

interface UserDataRepository {
suspend fun getName(): String
suspend fun getFriends(): List<Friend>
suspend fun getProfile(): Profile
}

data class User(


val name: String,
val friends: List<Friend>,
val profile: Profile
)

data class Friend(val id: String)


data class Profile(val description: String)

runTest includes TestScope, which includes StandardTestDispatcher, which includes TestCoroutineScheduler.

UnconfinedTestDispatcher

Except for StandardTestDispatcher we also have


UnconfinedTestDispatcher. The biggest difference is that
StandardTestDispatcher does not invoke any operations until we
use its scheduler. UnconfinedTestDispatcher immediately invokes
all the operations before the first delay on started coroutines, which
is why the code below prints “C”.

fun main() {
CoroutineScope(StandardTestDispatcher()).launch {
print("A")
delay(1)
print("B")
}
CoroutineScope(UnconfinedTestDispatcher()).launch {
print("C")
delay(1)
print("D")
}
}
// C

The runTest function was introduced in version 1.6 of kotlinx-coroutines-test. Previously, we used runBlockingTest, whose behavior is much closer to runTest using UnconfinedTestDispatcher. So, if we want to directly migrate from runBlockingTest to runTest, this is how our tests might look:

@Test
fun testName() = runTest(UnconfinedTestDispatcher()) {
//...
}

Using mocks

Using delay in fakes is easy but not very explicit. Many developers
prefer to call delay in the test function. One way to do this is using
mocks⁴²:

@Test
fun `should load data concurrently`() = runTest {
    // given
    val userRepo = mockk<UserDataRepository>()
    coEvery { userRepo.getName() } coAnswers {
        delay(600)
        aName
    }
    coEvery { userRepo.getFriends() } coAnswers {
        delay(700)
        someFriends
    }
    coEvery { userRepo.getProfile() } coAnswers {
        delay(800)
        aProfile
    }

⁴²Not everyone likes mocking. On one hand, mocking libraries have plenty of powerful features. On the other hand, think of the following situation: you have thousands of tests, and you change an interface of a repository that is used by all of them. If you use fakes, it is typically enough to update only a few classes, but if you mock this interface, you might need to update every test that mocks it. This is a big problem, which is why I generally prefer to use fakes.

val useCase = FetchUserUseCase(userRepo)

// when
useCase.fetchUserData()

// then
assertEquals(800, currentTime)
}

In the above example, I’ve used the MockK library.

Testing functions that change a dispatcher

In the Dispatchers chapter, we covered typical cases where we set concrete dispatchers. For example, we use Dispatchers.IO (or a custom dispatcher) for blocking calls, or Dispatchers.Default for CPU-intensive calls. Such functions rarely need to run simultaneously, so typically it is enough to test them with runBlocking. This case is easy and practically indistinguishable from testing blocking functions. For example, think of the following functions:

suspend fun readSave(name: String): GameState =
    withContext(Dispatchers.IO) {
        csvReader.readCsvBlocking(name, GameState::class.java)
    }

suspend fun calculateModel() =
    withContext(Dispatchers.Default) {
        model.fit(
            dataset = newTrain,
            epochs = 10,
            batchSize = 100,
            verbose = false
        )
    }

We could test the behavior of such functions in tests wrapped with


runBlocking, but how about checking if these functions truly do
change the dispatcher? This can also be done if we mock the func-
tions we call, and inside we capture the name of the used thread.

@Test
fun `should change dispatcher`() = runBlocking {
    // given
    val csvReader = mockk<CsvReader>()
    val startThreadName = "MyName"
    var usedThreadName: String? = null
    every {
        csvReader.readCsvBlocking(
            aFileName,
            GameState::class.java
        )
    } coAnswers {
        usedThreadName = Thread.currentThread().name
        aGameState
    }
    val saveReader = SaveReader(csvReader)

    // when
    withContext(newSingleThreadContext(startThreadName)) {
        saveReader.readSave(aFileName)
    }

    // then
    assertNotNull(usedThreadName)
    val expectedPrefix = "DefaultDispatcher-worker-"
    assert(usedThreadName!!.startsWith(expectedPrefix))
}

In the above function, I couldn’t use fakes because


CsvReader is a class, so I used mocks.
Remember, that [Link] and
[Link] share the same pool of threads.

However, in rare cases we might want to test time dependencies


in functions that do change the dispatcher. This is a tricky case
because the new dispatcher replaces our StandardTestDispatcher, so
we stop operating on virtual time. To make this clear, let’s wrap the
fetchUserData function with withContext(Dispatchers.IO).

suspend fun fetchUserData() = withContext(Dispatchers.IO) {

val name = async { userRepo.getName() }
val friends = async { userRepo.getFriends() }
val profile = async { userRepo.getProfile() }
User(
name = name.await(),
friends = friends.await(),
profile = profile.await()
)
}

Now all our previously implemented tests will wait in real time, and
currentTime will keep returning 0. The easiest way to prevent this is
by injecting the dispatcher via a constructor and replacing it in unit
tests.

class FetchUserUseCase(
private val userRepo: UserDataRepository,
private val ioDispatcher: CoroutineDispatcher =
Dispatchers.IO
) {

suspend fun fetchUserData() = withContext(ioDispatcher) {


val name = async { userRepo.getName() }
val friends = async { userRepo.getFriends() }
val profile = async { userRepo.getProfile() }
User(
name = name.await(),
friends = friends.await(),
profile = profile.await()
)
}
}

Now, instead of providing Dispatchers.IO in unit tests, we should
use StandardTestDispatcher from runTest. We can get it from
coroutineContext using the ContinuationInterceptor key.

val testDispatcher = this


.coroutineContext[ContinuationInterceptor]
as CoroutineDispatcher

val useCase = FetchUserUseCase(


userRepo = userRepo,
ioDispatcher = testDispatcher,
)

Another possibility is to cast ioDispatcher as CoroutineContext, and


replace it in unit tests with EmptyCoroutineContext. The final behav-
ior will be the same: the function will never change the dispatcher.

val useCase = FetchUserUseCase(


userRepo = userRepo,
ioDispatcher = EmptyCoroutineContext,
)
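To make this option concrete, here is a minimal sketch of how the class might look with the widened property type. It mirrors the FetchUserUseCase shown above and is only an illustration of the idea, not code from the library:

class FetchUserUseCase(
    private val userRepo: UserDataRepository,
    // CoroutineContext instead of CoroutineDispatcher
    private val ioDispatcher: CoroutineContext = Dispatchers.IO
) {
    suspend fun fetchUserData() = withContext(ioDispatcher) {
        val name = async { userRepo.getName() }
        val friends = async { userRepo.getFriends() }
        val profile = async { userRepo.getProfile() }
        User(
            name = name.await(),
            friends = friends.await(),
            profile = profile.await()
        )
    }
}

Since withContext accepts any CoroutineContext, passing EmptyCoroutineContext in tests simply means the dispatcher is never changed.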

Testing what happens during function execution

Imagine a function which during its execution first shows a progress


bar and later hides it.

suspend fun sendUserData() {


val userData = database.getUserData()
progressBarVisible.value = true
database.sendUserData(userData)
progressBarVisible.value = false
}

If we only check the final result, we cannot verify that the progress
bar changed its state during function execution. The trick that is
helpful in such cases is to start this function in a new coroutine
and control virtual time from outside. Notice that runTest creates
a coroutine with the StandardTestDispatcher dispatcher and ad-
vances its time until idle (using the advanceUntilIdle function).
This means that its children’s time will start once the parent starts
waiting for them, so once its body is finished. Before that, we can
advance virtual time by ourselves.

@Test
fun `should show progress bar when sending data`() = runTest {
// given
val database = FakeDatabase()
val vm = UserViewModel(database)

// when
launch {
vm.sendUserData()
}

// then
assertEquals(false, vm.progressBarVisible.value)

// when
advanceTimeBy(1000)

// then
assertEquals(false, vm.progressBarVisible.value)

// when
runCurrent()

// then
assertEquals(true, vm.progressBarVisible.value)

// when
advanceUntilIdle()

// then
assertEquals(false, vm.progressBarVisible.value)
}

Notice that, thanks to runCurrent, we can precisely


check when some value changes.

A similar effect could be achieved if we used delay. This is like having


two independent processes: one is doing things, while the other is
checking if the first one is behaving properly.

@Test
fun `should show progress bar when sending data`() =
runTest {
val database = FakeDatabase()
val vm = UserViewModel(database)
launch {
vm.sendUserData()
}

// then
assertEquals(false, vm.progressBarVisible.value)
delay(1000)
assertEquals(true, vm.progressBarVisible.value)
delay(1000)
assertEquals(false, vm.progressBarVisible.value)
}

Using explicit functions like advanceTimeBy is considered


more readable than using delay.

Testing functions that launch new coroutines

Coroutines need to start somewhere. On the backend, they are often


started by the framework we use (for instance Spring or Ktor), but
sometimes we might also need to construct a scope ourselves and
launch coroutines on it.

@Scheduled(fixedRate = 5000)
fun sendNotifications() {
notificationsScope.launch {
val notifications = notificationsRepository
.notificationsToSend()
for (notification in notifications) {
launch {
notificationsService.send(notification)
notificationsRepository
.markAsSent(notification.id)
}
}

}
}

How can we test sendNotifications if the notifications are


truly sent concurrently? Again, in unit tests we need to use
StandardTestDispatcher as part of our scope. We should also
add some delays to send and markAsSent.

@Test
fun testSendNotifications() {
// given
val notifications = List(100) { Notification(it) }
val repo = FakeNotificationsRepository(
delayMillis = 200,
notifications = notifications,
)
val service = FakeNotificationsService(
delayMillis = 300,
)
val testScope = TestScope()
val sender = NotificationsSender(
notificationsRepository = repo,
notificationsService = service,
notificationsScope = testScope
)

// when
sender.sendNotifications()
testScope.advanceUntilIdle()

// then all notifications are sent and marked


assertEquals(
notifications.toSet(),
service.notificationsSent.toSet()
)
assertEquals(
notifications.map { it.id }.toSet(),
repo.notificationsMarkedAsSent.toSet()
)

// and notifications are sent concurrently


assertEquals(700, testScope.currentTime)
}

Notice that runBlocking is not needed in the code above.


Both sendNotifications and advanceUntilIdle are regu-
lar functions.
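For reference, the fakes used in this test might be sketched as below. The interface and property names (NotificationsRepository, NotificationsService, notificationsSent, notificationsMarkedAsSent) are assumptions chosen to match the test above; what really matters for the timing is the delays.

class FakeNotificationsRepository(
    private val delayMillis: Long,
    private val notifications: List<Notification>,
) : NotificationsRepository {
    val notificationsMarkedAsSent = mutableSetOf<Int>()

    override suspend fun notificationsToSend(): List<Notification> {
        delay(delayMillis) // 200 ms in the test
        return notifications
    }

    override suspend fun markAsSent(id: Int) {
        delay(delayMillis) // 200 ms in the test
        notificationsMarkedAsSent += id
    }
}

class FakeNotificationsService(
    private val delayMillis: Long,
) : NotificationsService {
    val notificationsSent = mutableListOf<Notification>()

    override suspend fun send(notification: Notification) {
        delay(delayMillis) // 300 ms in the test
        notificationsSent += notification
    }
}

With these delays, fetching the notifications takes 200 ms, and then each notification needs 300 ms (send) plus 200 ms (markAsSent); since all notifications are processed concurrently, the virtual time ends at 700 ms, which is what the test asserts.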

Replacing the main dispatcher

There is no Main dispatcher available in unit tests. This means that if we try to
use Dispatchers.Main, our tests will fail with the “Module with the Main
dispatcher is missing” exception. On the other hand, injecting the Main
dispatcher every time would be demanding, so the “kotlinx-coroutines-test”
library provides the setMain extension function on Dispatchers instead.

class MainPresenter(
private val mainView: MainView,
private val dataRepository: DataRepo
) {
suspend fun onCreate() = coroutineScope {
launch(Dispatchers.Main) {
val data = dataRepository.fetchData()
mainView.show(data)
}
}
}

class FakeMainView : MainView {


var dispatchersUsedToShow: List<CoroutineContext?> =
emptyList()

override suspend fun show(data: Data) {


dispatchersUsedToShow +=
coroutineContext[ContinuationInterceptor]
}
}

class FakeDataRepo : DataRepo {


override suspend fun fetchData(): Data {
delay(1000)
return Data()
}
}

class SomeTest {

private val mainDispatcher = Executors


.newSingleThreadExecutor()
.asCoroutineDispatcher()

@Before
fun setup() {
Dispatchers.setMain(mainDispatcher)
}

@After
fun tearDown() {
Dispatchers.resetMain()
}

@Test
fun testSomeUI() = runBlocking {
// given
val view = FakeMainView()
val repo = FakeDataRepo()
val presenter = MainPresenter(view, repo)

// when
presenter.onCreate()

// then show was called on the main dispatcher


assertEquals(
listOf(Dispatchers.Main),
view.dispatchersUsedToShow
)
}
}

Notice that in the above example, in assertEquals I
compare dispatchersUsedToShow to Dispatchers.Main,
not to mainDispatcher. It needs to be like this because
mainDispatcher is set as a delegate inside the dispatcher
provided by Dispatchers.Main.

We often set the Main dispatcher in a setup function (a function with @Before or
@BeforeEach) in a base class extended by all unit tests (see the sketch below).
As a result, we can always be sure that our coroutines can run on Dispatchers.Main.
We should also reset the Main dispatcher to its initial state with
Dispatchers.resetMain().
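A minimal sketch of such a base class might look as follows; the class name is an assumption, and JUnit 4 annotations are used here:

abstract class CoroutineTestBase {
    @Before
    fun setUpMainDispatcher() {
        Dispatchers.setMain(StandardTestDispatcher())
    }

    @After
    fun tearDownMainDispatcher() {
        Dispatchers.resetMain()
    }
}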

Testing Android functions that launch coroutines

On Android, we typically start coroutines on ViewModels, Presen-


ters, Fragments, or Activities. These are very important classes, and
we should test them. Think of the MainViewModel implementation
below:

class MainViewModel(
private val userRepo: UserRepository,
private val newsRepo: NewsRepository,
) : BaseViewModel() {

private val _userName: MutableLiveData<String> =


MutableLiveData()
val userName: LiveData<String> = _userName

private val _news: MutableLiveData<List<News>> =


MutableLiveData()
val news: LiveData<List<News>> = _news

private val _progressVisible: MutableLiveData<Boolean> =


MutableLiveData()
val progressVisible: LiveData<Boolean> =
_progressVisible

fun onCreate() {
viewModelScope.launch {
val user = userRepo.getUser()
_userName.value = user.name
}
viewModelScope.launch {
_progressVisible.value = true
val news = newsRepo.getNews()
.sortedByDescending { it.date }
_news.value = news
_progressVisible.value = false
}
}
}

Instead of viewModelScope, there might be our own scope, and in-


stead of ViewModel, it might be Presenter, Activity, or some other
class. It does not matter for our example. As in every class that starts
coroutines, we should use StandardTestDispatcher as a part of the
scope. Previously, we needed to inject a different scope with a de-
pendency injection, but now there is a simpler way: on Android, we
use Dispatchers.Main as the default dispatcher, and we can replace
it with StandardTestDispatcher thanks to the Dispatchers.setMain
function:

private val testDispatcher = StandardTestDispatcher()

@Before
fun setUp() {
Dispatchers.setMain(testDispatcher)
}

@After
fun tearDown() {
Dispatchers.resetMain()
}

After setting the Main dispatcher this way, onCreate coroutines will
be running on testDispatcher, so we can control their time. We can
use the advanceTimeBy function to pretend that a certain amount of
time has passed. We can also use advanceUntilIdle to resume all
coroutines until they are done.

class MainViewModelTests {
private lateinit var scheduler: TestCoroutineScheduler
private lateinit var viewModel: MainViewModel

@BeforeEach
fun setUp() {
scheduler = TestCoroutineScheduler()
Dispatchers.setMain(StandardTestDispatcher(scheduler))
viewModel = MainViewModel(
userRepo = FakeUserRepository(aName),
newsRepo = FakeNewsRepository(someNews)
)
}

@AfterEach
fun tearDown() {
Dispatchers.resetMain()
}

@Test
fun `should show user name and sorted news`() {
// when
viewModel.onCreate()
scheduler.advanceUntilIdle()

// then
assertEquals(aName, viewModel.userName.value)
val someNewsSorted =
listOf(News(date1), News(date2), News(date3))
assertEquals(someNewsSorted, viewModel.news.value)
}

@Test
fun `should show progress bar when loading news`() {
// given
assertEquals(null, viewModel.progressVisible.value)

// when

viewModel.onCreate()

// then
assertEquals(false, viewModel.progressVisible.value)

// when
scheduler.advanceTimeBy(200)

// then
assertEquals(true, viewModel.progressVisible.value)

// when
scheduler.runCurrent()

// then
assertEquals(false, viewModel.progressVisible.value)
}

@Test
fun `user and news are called concurrently`() {
// when
viewModel.onCreate()

scheduler.advanceUntilIdle()

// then
assertEquals(300, scheduler.currentTime)
}

class FakeUserRepository(
private val name: String
) : UserRepository {
override suspend fun getUser(): UserData {
delay(300)
return UserData(name)
}
}

class FakeNewsRepository(

private val news: List<News>


) : NewsRepository {
override suspend fun getNews(): List<News> {
delay(200)
return news
}
}
}

Setting a test dispatcher with a rule

JUnit 4 allows us to use rules. These are classes that contain logic
that should be invoked on some test class lifecycle events. For in-
stance, a rule can define what to do before and after all tests, there-
fore it can be used in our case to set our test dispatcher and clean it
up later. Here is a good implementation of such a rule:

class MainCoroutineRule : TestWatcher() {


lateinit var scheduler: TestCoroutineScheduler
private set
lateinit var dispatcher: TestDispatcher
private set

override fun starting(description: Description) {


scheduler = TestCoroutineScheduler()
dispatcher = StandardTestDispatcher(scheduler)
Dispatchers.setMain(dispatcher)
}

override fun finished(description: Description) {


Dispatchers.resetMain()
}
}

This rule needs to extend TestWatcher, which provides test life-


cycle methods, like starting and finished, which we override. It
composes TestCoroutineScheduler and TestDispatcher. Before each
test in a class using this rule, TestDispatcher will be set as Main.
After each test, the Main dispatcher will be reset. We can access the
scheduler with the scheduler property of this rule.

class MainViewModelTests {
@get:Rule
var mainCoroutineRule = MainCoroutineRule()

// ...

@Test
fun `should show user name and sorted news`() {
// when
viewModel.onCreate()
mainCoroutineRule.scheduler.advanceUntilIdle()

// then
assertEquals(aName, viewModel.userName.value)
val someNewsSorted =
listOf(News(date1), News(date2), News(date3))
assertEquals(someNewsSorted, viewModel.news.value)
}

@Test
fun `should show progress bar when loading news`() {
// given
assertEquals(null, viewModel.progressVisible.value)

// when
viewModel.onCreate()

// then
assertEquals(true, viewModel.progressVisible.value)

// when
mainCoroutineRule.scheduler.advanceTimeBy(200)

// then
assertEquals(false, viewModel.progressVisible.value)
}

@Test
fun `user and news are called concurrently`() {

// when
viewModel.onCreate()

mainCoroutineRule.scheduler.advanceUntilIdle()

// then
assertEquals(300, mainCoroutineRule.scheduler.currentTime)
}
}

If you want to call advanceUntilIdle, advanceTimeBy,


runCurrent and currentTime directly on
MainCoroutineRule, you can define them as extension
functions and properties.
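A sketch of such helpers (these definitions are assumptions, not part of the library):

fun MainCoroutineRule.advanceUntilIdle() =
    scheduler.advanceUntilIdle()

fun MainCoroutineRule.advanceTimeBy(delayTimeMillis: Long) =
    scheduler.advanceTimeBy(delayTimeMillis)

fun MainCoroutineRule.runCurrent() = scheduler.runCurrent()

val MainCoroutineRule.currentTime: Long
    get() = scheduler.currentTime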

This way of testing Kotlin coroutines is fairly common on Android.


It is even explained in Google’s Codelabs materials (Advanced An-
droid in Kotlin 05.3: Testing Coroutines and Jetpack integrations)
(currently, for older kotlinx-coroutines-test API).
It is similar with JUnit 5, where we can define an extension:

@ExperimentalCoroutinesApi
class MainCoroutineExtension:
BeforeEachCallback, AfterEachCallback {

lateinit var scheduler: TestCoroutineScheduler


private set
lateinit var dispatcher: TestDispatcher
private set

override fun beforeEach(context: ExtensionContext?) {


scheduler = TestCoroutineScheduler()
dispatcher = StandardTestDispatcher(scheduler)
Dispatchers.setMain(dispatcher)
}

override fun afterEach(context: ExtensionContext?) {


Dispatchers.resetMain()
}
}

Using MainCoroutineExtension is nearly identical to using the


MainCoroutineRule rule. The difference is that instead of @get:Rule
annotation, we need to use @JvmField and @RegisterExtension.

@JvmField
@RegisterExtension
var mainCoroutineExtension = MainCoroutineExtension()

Summary

In this chapter, we’ve discussed the most important use cases for
testing Kotlin coroutines. There are some tricks we need to know,
but in the end our tests can be very elegant and everything can be
tested quite easily. I hope you feel inspired to write good tests in your
applications using Kotlin Coroutines.
Part 3: Channel and Flow
In my workshops, many attendees are eager to finally hear about
Flow - also known as coroutines’ reactive streams. We are finally
getting there: in this chapter, we will learn about Channel and Flow,
both of which are useful tools that are worth knowing. We will start
with Channel, as it is a bit more of a basic concept, then we will get
deeply into Flow.


Channel

The Channel API was added as an inter-coroutine communication


primitive. Many imagine a channel as a pipe, but I prefer a different
metaphor. Are you familiar with public bookcases for exchanging
books? One person needs to bring a book for another person to find it.
This is very similar to how Channel from the kotlinx.coroutines library works.

Channel supports any number of senders and receivers, and every


value that is sent to a channel is received only once.

Channel is an interface that implements two other interfaces:

• SendChannel, which is used to send elements (adding elements)


and to close the channel;
• ReceiveChannel, which receives (or takes) the elements.

interface SendChannel<in E> {


suspend fun send(element: E)
fun close(): Boolean
//...
}

interface ReceiveChannel<out E> {


suspend fun receive(): E
fun cancel(cause: CancellationException? = null)
// ...
}

interface Channel<E> : SendChannel<E>, ReceiveChannel<E>

Thanks to this distinction, we can expose just ReceiveChannel or


SendChannel in order to restrict the channel entry points.
You might notice that both send and receive are suspending func-
tions. This is an essential feature:

• When we try to receive and there are no elements in the chan-


nel, the coroutine is suspended until the element is available.
Like in our metaphorical bookshelf, when someone goes to the
shelf to find a book but the bookshelf is empty, that person
needs to suspend until someone puts an element there.

• On the other hand, send will be suspended when the channel


reaches its capacity. We will soon see that most channels have
limited capacity. Like in our metaphorical bookshelf, when
someone tries to put a book on a shelf but it is full, that person
needs to suspend until someone takes a book and makes space.

If you need to send or receive from a non-suspending


function, you can use trySend and tryReceive. Both
operations are immediate and return ChannelResult,
which contains information about the success or failure
of the operation, as well as its result. Use trySend and
tryReceive only for channels with limited capacity
because they will not work for the rendezvous channel.
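Here is a minimal sketch of how trySend and tryReceive behave on a buffered channel:

fun main() {
    val channel = Channel<Int>(capacity = 2)

    println(channel.trySend(1).isSuccess) // true
    println(channel.trySend(2).isSuccess) // true
    println(channel.trySend(3).isSuccess) // false, the buffer is full

    println(channel.tryReceive().getOrNull()) // 1
    println(channel.tryReceive().getOrNull()) // 2
    println(channel.tryReceive().getOrNull()) // null, the channel is empty
}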

A channel might have any number of senders and receivers. How-


ever, the most common situation is when there is one coroutine on
both sides of the channel.

To see the simplest example of a channel, we need to have a producer


(sender) and a consumer (receiver) in separate coroutines. The pro-
ducer will send elements, and the consumer will receive them. This
is how it can be implemented:

suspend fun main(): Unit = coroutineScope {


val channel = Channel<Int>()
launch {
repeat(5) { index ->
delay(1000)
println("Producing next one")
channel.send(index * 2)
}
}

launch {
repeat(5) {
val received = channel.receive()
println(received)
}
}
}
// (1 sec)
// Producing next one
// 0
// (1 sec)
// Producing next one
// 2
// (1 sec)
// Producing next one
// 4
// (1 sec)
// Producing next one
// 6
// (1 sec)
// Producing next one
// 8

Such an implementation is far from perfect. First, the receiver needs


to know how many elements will be sent; however, this is rarely the
case, so we would prefer to listen for as long as the sender is willing to
send. To receive elements on the channel until it is closed, we could
use a for-loop or consumeEach function⁴³.

suspend fun main(): Unit = coroutineScope {


val channel = Channel<Int>()
launch {
repeat(5) { index ->
println("Producing next one")
delay(1000)
channel.send(index * 2)
}
channel.close()
}

launch {
for (element in channel) {
println(element)
}
// or
// channel.consumeEach { element ->
// println(element)
// }
}
}

The common problem with this way of sending elements is that it


is easy to forget to close the channel, especially in the case of excep-
tions. If one coroutine stops producing because of an exception, the
other will wait for elements forever. It is much more convenient to
use the produce function, which is a coroutine builder that returns
ReceiveChannel.

⁴³The consumeEach function uses a for-loop under the hood, but


it also cancels the channel once it has consumed all its elements (so,
once it is closed).

// This function produces a channel with


// next positive integers from 0 to `max`
fun CoroutineScope.produceNumbers(
max: Int
): ReceiveChannel<Int> = produce {
var x = 0
while (x < max) send(x++)
}

The produce function closes the channel whenever the builder corou-
tine ends in any way (finished, stopped, cancelled). Thanks to this,
we will never forget to call close. The produce builder is a very
popular way to create a channel, and for good reason: it offers a lot
of safety and convenience.

suspend fun main(): Unit = coroutineScope {


val channel = produce {
repeat(5) { index ->
println("Producing next one")
delay(1000)
send(index * 2)
}
}

for (element in channel) {


println(element)
}
}

Channel types

Depending on the capacity size we set, we distinguish four types of


channels:

• Unlimited - channel with capacity Channel.UNLIMITED that has
an unlimited capacity buffer, and send never suspends.
• Buffered - channel with concrete capacity
size or Channel.BUFFERED (which is 64 by default
and can be overridden by setting the
kotlinx.coroutines.channels.defaultBuffer system
property in the JVM).
• Rendezvous⁴⁴ (default) - channel with capacity 0 or
Channel.RENDEZVOUS (which is equal to 0), meaning that
an exchange can happen only if sender and receiver meet (so it
is like a book exchange spot, instead of a bookshelf).
• Conflated - channel with capacity Channel.CONFLATED which
has a buffer of size 1, and each new element replaces the previous one.

Let’s now see these capacities in action. We can set them directly on
Channel, but we can also set them when we call the produce function.

We will make our producer fast and our receiver slow. With unlim-
ited capacity, the channel should accept all the elements and then
let them be received one after another.

suspend fun main(): Unit = coroutineScope {


val channel = produce(capacity = Channel.UNLIMITED) {
repeat(5) { index ->
send(index * 2)
delay(100)
println("Sent")
}
}

delay(1000)
for (element in channel) {
println(element)
delay(1000)
}
}

// Sent
// (0.1 sec)
// Sent
// (0.1 sec)
// Sent

⁴⁴The origin is the French word “rendez-vous”, which commonly


means “appointment”. This beautiful word has crossed borders: in
English, there is the less popular word “rendezvous”; in Polish, there
is the word “randka”, which means a date (a romantic appointment).

// (0.1 sec)
// Sent
// (0.1 sec)
// Sent
// (1 - 4 * 0.1 = 0.6 sec)
// 0
// (1 sec)
// 2
// (1 sec)
// 4
// (1 sec)
// 6
// (1 sec)
// 8
// (1 sec)

With a capacity of concrete size, we will first produce until the buffer
is full, after which the producer will need to start waiting for the
receiver.

suspend fun main(): Unit = coroutineScope {


val channel = produce(capacity = 3) {
repeat(5) { index ->
send(index * 2)
delay(100)
println("Sent")
}
}

delay(1000)
for (element in channel) {
println(element)
delay(1000)
}
}

// Sent
// (0.1 sec)
// Sent

// (0.1 sec)
// Sent
// (1 - 2 * 0.1 = 0.8 sec)
// 0
// Sent
// (1 sec)
// 2
// Sent
// (1 sec)
// 4
// (1 sec)
// 6
// (1 sec)
// 8
// (1 sec)

With a channel of default (or Channel.RENDEZVOUS) capacity, the


producer will always wait for a receiver.

suspend fun main(): Unit = coroutineScope {


val channel = produce {
// or produce(capacity = Channel.RENDEZVOUS) {
repeat(5) { index ->
send(index * 2)
delay(100)
println("Sent")
}
}

delay(1000)
for (element in channel) {
println(element)
delay(1000)
}
}

// 0
// Sent
// (1 sec)

// 2
// Sent
// (1 sec)
// 4
// Sent
// (1 sec)
// 6
// Sent
// (1 sec)
// 8
// Sent
// (1 sec)

Finally, we will not be storing past elements when using the


Channel.CONFLATED capacity. New elements will replace the previous
ones, so we will be able to receive only the last one, therefore we lose
elements that were sent earlier.

suspend fun main(): Unit = coroutineScope {


val channel = produce(capacity = Channel.CONFLATED) {
repeat(5) { index ->
send(index * 2)
delay(100)
println("Sent")
}
}

delay(1000)
for (element in channel) {
println(element)
delay(1000)
}
}

// Sent
// (0.1 sec)
// Sent
// (0.1 sec)
// Sent

// (0.1 sec)
// Sent
// (0.1 sec)
// Sent
// (1 - 4 * 0.1 = 0.6 sec)
// 8

On buffer overflow

To customize channels further, we can control what happens when


the buffer is full (onBufferOverflow parameter). There are the follow-
ing options:

• SUSPEND (default) - when the buffer is full, suspend on the send


method.
• DROP_OLDEST - when the buffer is full, drop the oldest element.
• DROP_LATEST - when the buffer is full, drop the latest element.

As you might guess, the channel capacity Channel.CONFLATED is the
same as setting the capacity to 1 and onBufferOverflow to
DROP_OLDEST. Currently, the produce function does not allow us to set
a custom onBufferOverflow, so to set it we need to define a channel
using the Channel function⁴⁵.

⁴⁵Channel is an interface, so Channel() is a call of a function that is
pretending to be a constructor.

suspend fun main(): Unit = coroutineScope {


val channel = Channel<Int>(
capacity = 2,
onBufferOverflow = BufferOverflow.DROP_OLDEST
)

launch {
repeat(5) { index ->
channel.send(index * 2)
delay(100)
println("Sent")
}
channel.close()
}

delay(1000)
for (element in channel) {
println(element)
delay(1000)
}
}

// Sent
// (0.1 sec)
// Sent
// (0.1 sec)
// Sent
// (0.1 sec)
// Sent
// (0.1 sec)
// Sent
// (1 - 4 * 0.1 = 0.6 sec)
// 6
// (1 sec)
// 8

On undelivered element handler

One more Channel function parameter which we should know about


is onUndeliveredElement. It is called when an element couldn’t be
handled for some reason. Most often this means that a channel was
closed or cancelled, but it might also happen when send, receive,
receiveOrNull, or hasNext throw an error. We generally use it to
close resources that are sent by this channel.

val channel = Channel<Resource>(capacity) { resource ->


resource.close()
}
// or
// val channel = Channel<Resource>(
// capacity,
// onUndeliveredElement = { resource ->
// resource.close()
// }
// )

// Producer code
val resourceToSend = openResource()
channel.send(resourceToSend)

// Consumer code
val resourceReceived = channel.receive()
try {
// work with received resource
} finally {
resourceReceived.close()
}

Fan-out

Multiple coroutines can receive from a single channel; however, to


receive them properly we should use a for-loop (consumeEach is not
safe to use from multiple coroutines).

fun CoroutineScope.produceNumbers() = produce {


repeat(10) {
delay(100)
send(it)
}
}

fun CoroutineScope.launchProcessor(
id: Int,
channel: ReceiveChannel<Int>
) = launch {
for (msg in channel) {
println("#$id received $msg")
}
}

suspend fun main(): Unit = coroutineScope {


val channel = produceNumbers()
repeat(3) { id ->
delay(10)
launchProcessor(id, channel)
}
}

// #0 received 0
// #1 received 1
// #2 received 2
// #0 received 3
// #1 received 4
// #2 received 5
// #0 received 6
// ...

The elements are distributed fairly. The channel has a FIFO (first-in-
first-out) queue of coroutines waiting for an element. This is why in
the above example you can see that the elements are received by the
next coroutines (0, 1, 2, 0, 1, 2, etc).

To better understand why, imagine kids in a kinder-


garten queuing for candies. Once they get some, they

immediately eat them and go to the last position in the


queue. Such distribution is fair (assuming the number of
candies is a multiple of the number of kids, and assuming
their parents are fine with their children eating candies).

Fan-in

Multiple coroutines can send to a single channel. In the below ex-


ample, you can see two coroutines sending elements to the same
channel.

suspend fun sendString(


channel: SendChannel<String>,
text: String,
time: Long
) {
while (true) {
delay(time)
channel.send(text)
}
}

fun main() = runBlocking {


val channel = Channel<String>()
launch { sendString(channel, "foo", 200L) }
launch { sendString(channel, "BAR!", 500L) }
repeat(50) {
println(channel.receive())
}
coroutineContext.cancelChildren()
}

Sometimes, we need to merge multiple channels into one. For that,


you might find the following function useful as it merges multiple
channels using the produce function:

fun <T> CoroutineScope.fanIn(


channels: List<ReceiveChannel<T>>
): ReceiveChannel<T> = produce {
for (channel in channels) {
launch {
for (elem in channel) {
send(elem)
}
}
}
}

Pipelines

Sometimes we set two channels such that one produces elements


based on those received from another. In such cases, we call it a
pipeline.

// A channel of numbers from 1 to 3

fun CoroutineScope.numbers(): ReceiveChannel<Int> =
produce {
repeat(3) { num ->
send(num + 1)
}
}

fun CoroutineScope.square(numbers: ReceiveChannel<Int>) =


produce {
for (num in numbers) {
send(num * num)
}
}

suspend fun main() = coroutineScope {


val numbers = numbers()
val squared = square(numbers)
for (num in squared) {
println(num)
}
}
// 1
// 4
// 9

Channels as a communication primitive

Channels are useful when different coroutines need to communicate


with each other. They guarantee no conflicts (i.e., no problem with
the shared state) and fairness.
To see them in action, imagine that different baristas are making
coffees. Each barista should be a separate coroutine working inde-
pendently. Different coffee types take different amounts of time to
prepare, but we want to handle orders in the order they appear. The
easiest way to solve this problem is by sending both the orders and
the resulting coffees in channels. A barista can be defined using the
produce builder:

suspend fun CoroutineScope.serveOrders(


orders: ReceiveChannel<Order>,
baristaName: String
): ReceiveChannel<CoffeeResult> = produce {
for (order in orders) {
val coffee = prepareCoffee(order.type)
send(
CoffeeResult(
coffee = coffee,
customer = order.customer,
baristaName = baristaName
)
)
}
}

When we set up a pipeline, we can use the previously defined fanIn


function to merge the results produced by the different baristas into
one:

val coffeeResults = fanIn(
listOf(
serveOrders(ordersChannel, "Alex"),
serveOrders(ordersChannel, "Bob"),
serveOrders(ordersChannel, "Celine"),
)
)
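The example above assumes a few supporting definitions that are not shown in this chapter. A minimal sketch of what they might look like (the types and the prepareCoffee body are assumptions made only to complete the picture):

enum class CoffeeType { ESPRESSO, LATTE }
class Coffee(val type: CoffeeType)
data class Order(val customer: String, val type: CoffeeType)
data class CoffeeResult(
    val coffee: Coffee,
    val customer: String,
    val baristaName: String
)

// Different coffee types take different amounts of time to prepare
private suspend fun prepareCoffee(type: CoffeeType): Coffee {
    val preparationTime = when (type) {
        CoffeeType.ESPRESSO -> 500L
        CoffeeType.LATTE -> 1000L
    }
    delay(preparationTime)
    return Coffee(type)
}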

You will find more practical examples in the next chapter.

Practical usage

A typical case in which we use channels is when values are produced


on one side, and we want to process them on the other side. Examples
include responding to user clicks, new notifications from a server,
or updating search results over time (a good example is SkyScan-
ner, which searches for the cheapest flights by querying multiple
airline websites). However, in most of these cases it’s better to use
channelFlow or callbackFlow, both of which are a hybrid of Channel
and Flow (we will explain them in the Flow building chapter).

SkyScanner displays better and better flight search results as more and more airlines
respond.

In their pure form, I find channels useful in some more complex


processing cases. For example, let’s say that we are maintaining an
online shop, like Amazon. Let’s say that your service receives a big
number of seller changes that might affect their offers. For each
change, we need to first find a list of offers to update, and then we
update them one after another.

Doing this the traditional way wouldn’t be optimal. One seller might
even have hundreds of thousands of offers. Doing it all in a single
long process is not the best idea.
First, either an internal exception or a server restart might leave us
not knowing where we stopped. Second, one big seller might block
the server for a long time, thus leaving small sellers waiting for their
changes to apply. Moreover, we should not send too many network
requests at the same time so as to not overload the service that needs
to handle them (and our network interface).
The solution to this problem might be to set up a pipeline. The first
channel could contain the sellers to process, while the second one
would contain the offers to be updated. These channels would have a
buffer. The buffer in the second one could prevent our service from
getting more offers when too many are already waiting. Thus, our
server would be able to balance the number of offers we are updating
at the same time.
We might also easily add some intermediate steps, like removing
duplicates. By defining the number of coroutines that listen on each
channel, we decide how many concurrent requests to the external
service we wish to make. Manipulating these parameters gives us
a lot of freedom. There are also many improvements that can be
added quite easily, like persistence (for cases where the server is
restarted) or element uniqueness (for cases where the seller made
another change before the previous one was processed).

// A simplified implementation
suspend fun handleOfferUpdates() = coroutineScope {
val sellerChannel = listenOnSellerChanges()

val offerToUpdateChannel = produce(capacity = UNLIMITED) {


repeat(NUMBER_OF_CONCURRENT_OFFER_SERVICE_REQUESTS) {
launch {
for (seller in sellerChannel) {
val offers = offerService
.requestOffers(seller.id)
offers.forEach { send(it) }
}
}
}
}

repeat(NUMBER_OF_CONCURRENT_UPDATE_SENDERS) {
launch {
for (offer in offerToUpdateChannel) {
sendOfferUpdate(offer)
}
}
}
}

Summary

Channel is a powerful inter-coroutine communication primitive. It


supports any number of senders and receivers, and every value that
is sent to a channel is received once. We often create a channel using
the produce builder. Channels can also be used to set up a pipeline
where we control the number of coroutines working on some tasks.
Nowadays, we most often use channels in connection with Flow,
which will be presented later in the book.

Actors

In computer science, there is a model of concurrent computation


known as the actor model, in which the most important concept
is the actor. This is a computational entity that, in response to a
message it receives, can concurrently:

• send a finite number of messages to other actors;


• create a finite number of new actors;
• designate the behavior to be used for the next message it
receives.

Actors may modify their own private state but can only affect each
other indirectly through messaging, therefore there is no need to
synchronize them. Each actor runs on a single thread and handles
its messages one after another.
In Kotlin coroutines, we can implement this model quite easily. We
use channels to synchronize a queue of messages to the actor, then
we just need a coroutine that will handle these messages one after
another. In the snippet below, you can see our massiveRun problem
solved with the actor model.

sealed class CounterMsg


object IncCounter : CounterMsg()
class GetCounter(
val response: CompletableDeferred<Int>
) : CounterMsg()

fun CoroutineScope.counterActor(): Channel<CounterMsg> {


val channel = Channel<CounterMsg>()
launch {
var counter = 0
for (msg in channel) {
when (msg) {
is IncCounter -> {
counter++
}
is GetCounter -> {
msg.response.complete(counter)

}
}
}
}
return channel
}

suspend fun main(): Unit = coroutineScope {


val counter: SendChannel<CounterMsg> = counterActor()
massiveRun { counter.send(IncCounter) }
val response = CompletableDeferred<Int>()
counter.send(GetCounter(response))
println(response.await()) // 1000000
counter.close()
}

There are no problems with synchronization here because the actor


works on a single thread.
To simplify this model, there is the actor coroutine builder, which
does what was already shown (creates a channel and starts a corou-
tine) and gives better support for exception handling (i.e., an excep-
tion in the builder coroutine closes the channel).

fun CoroutineScope.counterActor() = actor<CounterMsg> {


var counter = 0
for (msg in channel) {
when (msg) {
is IncCounter -> counter++
is GetCounter -> msg.response.complete(counter)
}
}
}

Summary

The Actor Model is an important model for concurrent processing.


It is currently not popular in Kotlin, but it is worth knowing about
as there are cases where it fits perfectly. In this model, the most
important concept is the actor, which is a computational entity that

responds to messages. Since it operates on a single coroutine, it


has no conflicts when accessing its state. We can wrap actors with
standard functionalities that might offer persistency or priorities,
thus opening up many interesting possibilities.
There was a time when the actor model was popular as a promising
way to design backend applications (like in Akka), but in my per-
ception this does not seem like a trend now. In my opinion, this
is because most developers prefer instead to use ad-hoc queues or
streaming software, such as Kafka or RabbitMQ.

Hot and cold data sources

Kotlin Coroutines initially had only Channel, but creators noticed


that this was not enough. Channels are a hot stream of values, but
we often need a stream that is cold.

Understanding the difference between hot and cold streams of


data is useful software-craftsmanship knowledge because most
data sources you use daily fall into one of these two categories.
Collections (List, Set, etc.) are hot, while Sequence and Java Stream
are cold. Channel is hot, while Flow and RxJava streams (Observable,
Single, etc.) are cold⁴⁶.

Hot Cold
Collections (List, Set) Sequence, Stream
Channel Flow, RxJava streams

Hot vs cold

Hot data streams are eager, produce elements independently of their


consumption, and store the elements. Cold data streams are lazy,
perform their operations on-demand, and store nothing.
We can observe these differences when we use lists (hot) and se-
quences (cold). Builders and operations on hot data streams start
immediately. On cold data streams, they are not started until the
elements are needed.

⁴⁶This is true in general, but there are exceptions. Some functions


and builders, like buffer or channelFlow, introduce some hotness
into Flow. Also, SharedFlow and StateFlow are hot.

@OptIn(ExperimentalStdlibApi::class)
fun main() {
val l = buildList {
repeat(3) {
add("User$it")
println("L: Added User")
}
}

val l2 = l.map {
println("L: Processing")
"Processed $it"
}

val s = sequence {
repeat(3) {
yield("User$it")
println("S: Added User")
}
}

val s2 = s.map {
println("S: Processing")
"Processed $it"
}
}
// L: Added User
// L: Added User
// L: Added User
// L: Processing
// L: Processing
// L: Processing

As a result, cold data streams (like Sequence, Stream or Flow):

• can be infinite;
• do a minimal number of operations;
• use less memory (no need to allocate all the intermediate col-
lections).
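The short sketch below illustrates these three points: an infinite sequence is processed lazily, only as many elements as needed are computed, and no intermediate collection is allocated.

fun main() {
    val naturals = generateSequence(1) { it + 1 } // infinite and cold
    val result = naturals
        .map { it * it }      // nothing is computed yet
        .first { it > 100 }   // computes squares only until one exceeds 100
    println(result)           // 121
}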

Sequence processing does fewer operations because it processes el-


ements lazily. The way it works is very simple. Each intermediate
operation (like map or filter) just decorates the previous sequence
with a new operation. The terminal operation⁴⁷ does all the work.
Think of the example below. In the case of a sequence, find asks the
result of the map for the first element. It asks the sequence returned
from sequenceOf (which returns 1), then maps it (to 1) and returns it to
find. find checks if this element fulfills its criteria. If the element
does not fulfill the criteria, find asks again and again until the
proper element is found.
This is very different from list processing, which at every interme-
diate step calculates and returns a fully processed collection. This
is why the order of element processing is different and collection
processing takes more memory and might require more operations
(like in the example below).

fun m(i: Int): Int {


print("m$i ")
return i * i
}

fun f(i: Int): Boolean {


print("f$i ")
return i >= 10
}

fun main() {
listOf(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
.map { m(it) }
.find { f(it) }
.let { print(it) }
// m1 m2 m3 m4 m5 m6 m7 m8 m9 m10 f1 f4 f9 f16 16

println()

sequenceOf(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
.map { m(it) }

⁴⁷A terminal operation is an operation that does the actual processing
and returns a result that is not a sequence (like find or toList),
in contrast to intermediate operations (like map or filter).

.find { f(it) }
.let { print(it) }
// m1 f1 m2 f4 m3 f9 m4 f16 16
}

This means that a list is a collection of elements, but a sequence is


just a definition of how these elements should be calculated. Hot data
streams:

• are always ready to be used (each operation can be a terminal


operation);
• do not need to recalculate the result when used multiple times.

fun m(i: Int): Int {


print("m$i ")
return i * i
}

fun main() {
val l = listOf(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
.map { m(it) } // m1 m2 m3 m4 m5 m6 m7 m8 m9 m10

println(l) // [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]


println(l.find { it > 10 }) // 16
println(l.find { it > 10 }) // 16
println(l.find { it > 10 }) // 16

val s = sequenceOf(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)


.map { m(it) }

println(s.toList())
// [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
println(s.find { it > 10 }) // m1 m2 m3 m4 16
println(s.find { it > 10 }) // m1 m2 m3 m4 16
println(s.find { it > 10 }) // m1 m2 m3 m4 16
}

Java Stream shares characteristics with Kotlin’s Sequence. They are


both cold streams of values.
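Java streams can also be used from Kotlin; the sketch below shows the same laziness (nothing is processed until the terminal collect call):

import java.util.stream.Collectors
import java.util.stream.Stream

fun main() {
    val stream = Stream.of(1, 2, 3)
        .map { println("Processing $it"); it * it }

    println("Stream defined") // nothing has been processed yet

    val result = stream.collect(Collectors.toList())
    // "Processing 1", "Processing 2", "Processing 3" are printed only now
    println(result) // [1, 4, 9]
}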

Hot channels, cold flow

Time to get back to coroutines. The most typical way to create a flow
is by using a builder, which is similar to the produce function. It is
called flow.

val channel = produce {


while (true) {
val x = computeNextValue()
send(x)
}
}

val flow = flow {


while (true) {
val x = computeNextValue()
emit(x)
}
}

These builders are conceptually equivalent, but since the behavior


of channel and flow is very different, there are also important dif-
ferences between these two functions. Take a look at the example
below. Channels are hot, so they immediately start calculating the
values. This calculation starts in a separate coroutine. This is why
produce needs to be a coroutine builder that is defined as an ex-
tension function on CoroutineScope. The calculation starts imme-
diately, but since the default buffer size is 0 (rendezvous) it will
soon be suspended until the receiver is ready in the example below.
Note that there is a difference between stopping production when
there is no receiver and producing on-demand. Channels, as hot data
streams, produce elements independently of their consumption and
then keep them. They do not care how many receivers there are.
Since each element can be received only once, after the first receiver
consumes all the elements, the second one will find a channel that is
empty and closed already. This is why it will receive no elements at
all.

private fun CoroutineScope.makeChannel() = produce {


println("Channel started")
for (i in 1..3) {
delay(1000)
send(i)
}
}

suspend fun main() = coroutineScope {


val channel = makeChannel()

delay(1000)
println("Calling channel...")
for (value in channel) { println(value) }
println("Consuming again...")
for (value in channel) { println(value) }
}
// Channel started
// (1 sec)
// Calling channel...
// 1
// (1 sec)
// 2
// (1 sec)
// 3
// Consuming again...

The same processing using Flow is very different. Since it is a cold


data source, the production happens on demand. This means that
flow is not a builder and does no processing. It is only a definition
of how elements should be produced that will be used when a ter-
minal operation (like collect) is used. This is why the flow builder
does not need a CoroutineScope. It will run on the scope from the
terminal operation that executed it (it takes the scope from the
suspending function’s continuation, just like coroutineScope and
other coroutine scope functions). Each terminal operation on a flow
starts processing from scratch. Compare the examples above and
below because they show the key differences between Channel and
Flow.

private fun makeFlow() = flow {


println("Flow started")
for (i in 1..3) {
delay(1000)
emit(i)
}
}

suspend fun main() = coroutineScope {


val flow = makeFlow()

delay(1000)
println("Calling flow...")
flow.collect { value -> println(value) }
println("Consuming again...")
flow.collect { value -> println(value) }
}
// (1 sec)
// Calling flow...
// Flow started
// (1 sec)
// 1
// (1 sec)
// 2
// (1 sec)
// 3
// Consuming again...
// Flow started
// (1 sec)
// 1
// (1 sec)
// 2
// (1 sec)
// 3

RxJava streams share most characteristics with Kotlin’s Flow. Some



even say that Flow could be called “RxCoroutines”⁴⁸.

Summary

Most data sources are either hot or cold:

• Hot data sources are eager. They make elements as soon as


possible and store them. They create elements independently
of their consumption. These are collections (List, Set) and
Channel.
• Cold data sources are lazy. They process elements on-demand
on the terminal operation. All intermediate functions just de-
fine what should be done (most often using the Decorator pat-
tern). They generally do not store elements and create them on
demand. They do the minimal number of operations and can
be infinite. Their creation and processing of elements is typ-
ically the same process as consumption. These elements are
Sequence, Java Stream, Flow and RxJava streams (Observable,
Single, etc).

This explains the essential difference between Channel and Flow.


Now it is time to discuss all the different features supported by the
latter.
⁴⁸I heard this first from Alex Piotrowski during Kotlin/Everywhere
Warsaw, 21.11.2019. Who knows, maybe he is the one who popularized
this term.

Flow introduction

A flow represents a stream of values that are computed


asynchronously. The Flow interface itself only allows the flowing
elements to be collected, which means handling each element as
it reaches the end of the flow (collect for Flow is like forEach for
collections).

interface Flow<out T> {


suspend fun collect(collector: FlowCollector<T>)
}

As you can see, collect is the only member function in Flow. All oth-
ers are defined as extensions. This is similar to Iterable or Sequence,
both of which only have iterator as a member function.

interface Iterable<out T> {


operator fun iterator(): Iterator<T>
}

interface Sequence<out T> {


operator fun iterator(): Iterator<T>
}

Comparing flow to other ways of representing values

The concept of Flow should be well known to those using RxJava or


Reactor, but others might need a better explanation. Imagine that
you need a function to return more than a single value. When all
these values are provided at the same time, we use a collection like
List or Set.

fun allUsers(): List<User> =
api.getAllUsers().map { it.toUser() }

The essence here is that List and Set represent a fully calculated
collection. Since the process of calculating these values takes time,
we need to wait for all the values before we can have them returned.

fun getList(): List<String> = List(3) {
Thread.sleep(1000)
"User$it"
}

fun main() {
val list = getList()
println("Function started")
list.forEach { println(it) }
}
// (3 sec)
// Function started
// User0
// User1
// User2

If the elements are calculated one by one, we prefer to have the next
elements as soon as they appear. One way of doing this is by using
Sequence, which we’ve already learned about in the Sequence builder
chapter.

fun getSequence(): Sequence<String> = sequence {


repeat(3) {
Thread.sleep(1000)
yield("User$it")
}
}

fun main() {
val list = getSequence()
println("Function started")
list.forEach { println(it) }
}
// Function started
// (1 sec)
// User0
// (1 sec)
// User1
// (1 sec)

// User2

Sequences are perfect for representing a flow of values calculated on


demand when calculating them might be CPU-intensive (like calcu-
lating complex results) or blocking (like reading files). However, it is
essential to know that sequence terminal operations (like forEach)
are not suspending, so any suspension inside a sequence builder
means blocking the thread that waits for the value. This is why,
in the scope of a sequence builder, you cannot use any suspending
function except for those called on the SequenceScope receiver (yield
and yieldAll).

fun getSequence(): Sequence<String> = sequence {


repeat(3) {
delay(1000) // Compilation error
yield("User$it")
}
}

This mechanism was introduced so sequences are not misused. For


example, someone might want to use a sequence to fetch, in a pag-
inated manner, a list of all the users from an HTTP endpoint until
an empty page is received. Even if the above example could compile,
it wouldn’t be correct anyway because the terminal operation (like
forEach) would be blocking the thread instead of the suspending
coroutine, which could lead to unexpected thread-blocking.

// Don't do that, we should use Flow instead of Sequence


fun allUsersSequence(
api: UserApi
): Sequence<User> = sequence {
var page = 0
do {
val users = api.takePage(page++) // suspending,
// so compilation error
yieldAll(users)
} while (!users.isNullOrEmpty())
}

I hope you already have some idea that thread blocking can be very
dangerous and lead to unexpected situations. To make this crystal

clear, take a look at the example below. We use Sequence, so its


forEach is a blocking operation. This is why a coroutine started on
the same thread with launch will wait, so one coroutine’s execution
blocks another’s.

fun getSequence(): Sequence<String> = sequence {


repeat(3) {
Thread.sleep(1000)
// the same result as if there were delay(1000) here
yield("User$it")
}
}

suspend fun main() {


withContext(newSingleThreadContext("main")) {
launch {
repeat(3) {
delay(100)
println("Processing on coroutine")
}
}

val list = getSequence()


list.forEach { println(it) }
}
}
// (1 sec)
// User0
// (1 sec)
// User1
// (1 sec)
// User2
// Processing on coroutine
// (0.1 sec)
// Processing on coroutine
// (0.1 sec)
// Processing on coroutine

This is a case where we should use Flow instead of Sequence. Such


an approach fully supports coroutines on its operations. Its builder

and operations are suspending functions, and it supports structured


concurrency and proper exception handling. We will explain all this
in the next chapters, but for now let’s see how it helps with this case.

fun getFlow(): Flow<String> = flow {


repeat(3) {
delay(1000)
emit("User$it")
}
}

suspend fun main() {


withContext(newSingleThreadContext("main")) {
launch {
repeat(3) {
delay(100)
println("Processing on coroutine")
}
}

val list = getFlow()


list.collect { println(it) }
}
}
// (0.1 sec)
// Processing on coroutine
// (0.1 sec)
// Processing on coroutine
// (0.1 sec)
// Processing on coroutine
// (1 - 3 * 0.1 = 0.7 sec)
// User0
// (1 sec)
// User1
// (1 sec)
// User2

Flow should be used for streams of data that need to use coroutines.
For example, it can be used to produce a stream of users that

are fetched from an API page by page. Notice that the caller
of this function can handle the next pages as they come and
decide how many pages will be fetched. For instance, if we call
allUsersFlow(api).first(), we will fetch only the first page; if we
call allUsersFlow(api).toList(), we will fetch all of them; if we
call allUsersFlow(api).find { it.id == id }, we will fetch pages
until we find the one we’re looking for.

fun allUsersFlow(
api: UserApi
): Flow<User> = flow {
var page = 0
do {
val users = api.takePage(page++) // suspending
emitAll(users.asFlow())
} while (!users.isNullOrEmpty())
}

The characteristics of Flow

Flow’s terminal operations (like collect) suspend a coroutine in-


stead of blocking a thread. They also support other coroutine func-
tionalities, such as respecting the coroutine context and handling
exceptions. Flow processing can be cancelled, and structured concur-
rency is supported out of the box. The flow builder is not suspending
and does not require any scope. It is the terminal operation that is
suspending and builds a relation to its parent coroutine (similar to
the coroutineScope function).
The below example shows how CoroutineName context is passed
from collect to the lambda expression in the flow builder. It also
shows that launch cancellation also leads to proper flow processing
cancellation.

// Notice, that this function is not suspending


// and does not need CoroutineScope
fun usersFlow(): Flow<String> = flow {
repeat(3) {
delay(1000)
val ctx = currentCoroutineContext()
val name = ctx[CoroutineName]?.name
emit("User$it in $name")
}
}

suspend fun main() {


val users = usersFlow()

withContext(CoroutineName("Name")) {
val job = launch {
// collect is suspending
users.collect { println(it) }
}

launch {
delay(2100)
println("I got enough")
job.cancel()
}
}
}
// (1 sec)
// User0 in Name
// (1 sec)
// User1 in Name
// (0.1 sec)
// I got enough

Flow nomenclature

Every flow consists of a few elements:



• Flow needs to start somewhere. It often starts with a flow


builder, conversion from a different object, or from some
helper function. The most important option will be explained
in the next chapter, Flow building.
• The last operation on the flow is called the terminal operation,
which is very important as it is often the only one that is sus-
pending or requires a scope. The typical terminal operation is
collect, either with or without a lambda expression. However,
there are also other terminal operations. Some of them will be
explained in the Flow processing chapter.
• Between the start operation and the terminal operation, we
might have intermediate operations, each of which modifies
the flow in some way. We will learn about different inter-
mediate operations in the Flow lifecycle and Flow processing
chapters.
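A minimal sketch that names these three parts:

suspend fun main() {
    flow { emit(1); emit(2); emit(3) } // start: the flow builder
        .map { it * it }               // intermediate operation
        .collect { print(it) }         // terminal operation, prints 149
}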

Real-life use cases

Practice shows that we more often need a flow instead of a channel.


If you request a stream of data, you typically want to request it on-
demand. If you need to observe something, such as changes in your
database or events from UI widgets or sensors, you likely want these
events to be received by each observer. You also need to stop listening
when no one is observing. This is why, for all these cases, using a flow
is preferred over using a channel (although in some cases we will use
a hybrid of these two).
The most typical usages of flow include:

• receiving messages that are communicated through Server-


Sent Events, such as WebSockets, notifications, etc.;
• observing user actions, such as text changes or clicks;

• receiving updates from sensors or other information about a


device, such as its location or orientation;
• observing changes in databases.

Here is how we can observe changes in an SQL database using the


Room library:

@Dao
interface MyDao {
@Query("SELECT * FROM somedata_table")
fun getData(): Flow<List<SomeData>>
}

Let’s see some examples of how we might use a flow to handle a


stream of responses from an API. First, imagine that you implement
a chat in which messages are sent via Server-Sent Events and noti-
fications. It is convenient to have both sources of changes as a flow,
merge them together, and this flow then updates the view. Another
example might be a search that provides better and better responses.
For example, when we search for the best flight on SkyScanner,
some offers arrive quickly, but then more arrive over time; therefore,
you see better and better results. This is also a great case for a flow.

On SkyScanner we can see better and better flight search results as airlines respond
to your offer request.

In addition to these situations, a flow is also a useful tool for different


concurrent processing cases. For example, imagine that you have a
list of sellers, for each of which you need to fetch their offers. We’ve
already learned that we can do this using async inside collection
processing:

suspend fun getOffers(


sellers: List<Seller>
): List<Offer> = coroutineScope {
sellers
.map { seller ->
async { api.requestOffers(seller.id) }
}
.flatMap { it.await() }
}

The above approach is correct in many cases, but it has one downside:

if the list of sellers is big, sending so many requests at once would be


good neither for us nor for the server we are requesting from. Sure,
this can be limited in the repository with a rate limiter, but we also
might want to control it on the use side, for which we might use
Flow. In this case, to limit the number of concurrent calls to 20, we
can use flatMapMerge (one of the flow processing functions we will
explain in the Flow processing chapter) with concurrency modifier
set to 20.

suspend fun getOffers(


sellers: List<Seller>
): List<Offer> = sellers
.asFlow()
.flatMapMerge(concurrency = 20) { seller ->
suspend { api.requestOffers(seller.id) }.asFlow()
}
.toList()

Operating on Flow instead of on a collection gives us much more


control over concurrency behavior, contexts, exceptions, and much
more. We will discover these functionalities in the next chapters.
This is where (in my experience) Flow is most useful. I hope you will
see this clearly once we have covered all its different functionalities.
Finally, because they prefer a reactive style of programming, some
teams like to use flow instead of suspending functions. Such a style
became popular on Android, where RxJava was popular, but now
Flow is often treated as a better alternative. In such teams, Flow is
often used when only a single value is returned from functions. I
prefer just suspending functions in such cases, but both approaches
are valid.
As you can see, there are quite a few use cases for flows. In some
projects, they will be used commonly, while in others they will be
used only from time to time, but I hope you can see that they are
useful and worth learning about.

Summary

In this chapter, we’ve introduced the concept of Flow. It represents a


stream of asynchronously computed values that supports coroutines
(unlike sequences). There are quite a few use cases where Flow is
useful. We will explore them in the next chapters as we learn more
about Flow capabilities.

Flow building

Each flow needs to start somewhere. There are many ways to do this,
depending on what we need. In this chapter, we will focus on the
most important options.

Flow raw values

The simplest way to create a flow is by using the flowOf function,


where we just define what values this flow should have (similar to the
listOf function for a list).

suspend fun main() {


flowOf(1, 2, 3, 4, 5)
.collect { print(it) } // 12345
}

At times, we might also need a flow with no values. For this, we have
the emptyFlow() function (similar to the emptyList function for a
list).

suspend fun main() {


emptyFlow<Int>()
.collect { print(it) } // (nothing)
}

Converters

We can also convert every Iterable, Iterator or Sequence into a Flow


using the asFlow function.

suspend fun main() {


listOf(1, 2, 3, 4, 5)
// or setOf(1, 2, 3, 4, 5)
// or sequenceOf(1, 2, 3, 4, 5)
.asFlow()
.collect { print(it) } // 12345
}

These functions produce a flow of elements that are available
immediately. They are useful to start a flow of elements that we can then
process using the flow processing functions.
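For instance, here is a small sketch (using map, one of the flow processing functions covered later) in which a converted range is immediately processed like any other flow:

suspend fun main() {
    (1..5).asFlow()            // [1, 2, 3, 4, 5]
        .map { it * it }       // [1, 4, 9, 16, 25]
        .collect { print(it) } // 1491625
}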

Converting a function to a flow

Flow is frequently used to represent a single value delayed in time


(like a Single in RxJava). So, it makes sense to convert a suspending
function into a flow. The result of this function will be the only value
in this flow. For that, there is the asFlow extension function, which
works on function types (both suspend () -> T and () -> T). Here
it is used to convert a suspending lambda expression into Flow.

suspend fun main() {


val function = suspend {
// this is suspending lambda expression
delay(1000)
"UserName"
}

function.asFlow()
.collect { println(it) }
}
// (1 sec)
// UserName

To convert a regular function, we need to reference it first. We do this


using :: in Kotlin.

suspend fun getUserName(): String {


delay(1000)
return "UserName"
}

suspend fun main() {


::getUserName
.asFlow()
.collect { println(it) }
}
// (1 sec)
// UserName

Flow and Reactive Streams

If you use Reactive Streams in your application (like Reactor, RxJava


2.x or RxJava 3.x), you don’t need to make big changes in your
code. All objects like Flux, Flowable or Observable implement the
Publisher interface, which can be converted to Flow with the asFlow
function from the kotlinx-coroutines-reactive library.

suspend fun main() = coroutineScope {
    Flux.range(1, 5).asFlow()
        .collect { print(it) } // 12345
    Flowable.range(1, 5).asFlow()
        .collect { print(it) } // 12345
    Observable.range(1, 5).asFlow()
        .collect { print(it) } // 12345
}

To convert the other way around, you need more specific libraries.
With kotlinx-coroutines-reactor, you can convert Flow to Flux.
With kotlinx-coroutines-rx3 (or kotlinx-coroutines-rx2), you can
convert Flow to Flowable or Observable.

suspend fun main(): Unit = coroutineScope {
    val flow = flowOf(1, 2, 3, 4, 5)

    flow.asFlux()
        .doOnNext { print(it) } // 12345
        .subscribe()

    flow.asFlowable()
        .subscribe { print(it) } // 12345

    flow.asObservable()
        .subscribe { print(it) } // 12345
}

Flow builders

The most popular way to make a flow is using the flow builder,
which we’ve already used in previous chapters. It behaves similarly

to sequence builder for building a sequence, or produce builder for


building a channel. We start the builder with the flow function call,
and inside its lambda expression we emit the next values using the
emit function. We can also use emitAll to emit all the values from
Channel or Flow (emitAll(flow) is shorthand for flow.collect { emit(it) }).

fun makeFlow(): Flow<Int> = flow {


repeat(3) { num ->
delay(1000)
emit(num)
}
}

suspend fun main() {


makeFlow()
.collect { println(it) }
}
// (1 sec)
// 0
// (1 sec)
// 1
// (1 sec)
// 2

This builder has already been used in previous chapters, and it will
be used many times in the upcoming ones, so we will see many use
cases for it. For now, I will just revisit one from the Sequence builder
chapter. Here, the flow builder is used to produce a stream of users
that need to be requested page by page from our network API.

fun allUsersFlow(
    api: UserApi
): Flow<User> = flow {
    var page = 0
    do {
        val users = api.takePage(page++) // suspending
        emitAll(users)
    } while (!users.isNullOrEmpty())
}

Understanding flow builder

The flow builder is the most basic way to create a flow. All other
options are based on it.

public fun <T> flowOf(vararg elements: T): Flow<T> = flow {


for (element in elements) {
emit(element)
}
}

When we understand how this builder works, we will understand
how flow works. The flow builder is very simple under the hood: it just
creates an object implementing the Flow interface, which calls
the block function inside the collect method⁴⁹.

fun <T> flow(
    block: suspend FlowCollector<T>.() -> Unit
): Flow<T> = object : Flow<T> {
    override suspend fun collect(collector: FlowCollector<T>) {
        collector.block()
    }
}

interface Flow<out T> {


suspend fun collect(collector: FlowCollector<T>)
}

fun interface FlowCollector<in T> {


suspend fun emit(value: T)
}

Knowing this, let’s analyze how the following code works:

⁴⁹The code below is simplified. In real code, there would be an


additional mechanism for releasing a continuation interceptor.

fun main() = runTest {


flow { // 1
emit("A")
emit("B")
emit("C")
}.collect { value -> // 2
println(value)
}
}
// A
// B
// C

When we call a flow builder, we just create an object. However,


calling collect means calling the block function on the collector
interface. The block function in this example is the lambda expres-
sion defined at 1. Its receiver is the collector, which is defined at 2
with a lambda expression. When we define a functional interface (like
FlowCollector) with a lambda expression, the body of this lambda
expression will be used as the body of that interface's only
function (emit in this case). So, the body of the emit function is
println(value). Thus, when we call collect, we start executing the
lambda expression defined at 1, and when it calls emit, it calls the
lambda expression defined at 2. This is how flow works. Everything
else is built on top of that.

channelFlow

Flow is a cold data stream, so it produces values on demand when


they are needed. If you think of the allUsersFlow presented above,
the next page of users will be requested when the receiver asks for it.
This is desired in some situations. For example, imagine that we are
looking for a specific user. If it is on the first page, we don’t need to
request any more pages. To see this in practice, in the example below
we produce the next elements using the flow builder. Notice that the
next page is requested lazily when it is needed.

data class User(val name: String)

interface UserApi {
suspend fun takePage(pageNumber: Int): List<User>
}

class FakeUserApi : UserApi {


private val users = List(20) { User("User$it") }
private val pageSize: Int = 3

override suspend fun takePage(


pageNumber: Int
): List<User> {
delay(1000) // suspending
return users
.drop(pageSize * pageNumber)
.take(pageSize)
}
}

fun allUsersFlow(api: UserApi): Flow<User> = flow {
    var page = 0
    do {
        println("Fetching page $page")
        val users = api.takePage(page++) // suspending
        emitAll(users.asFlow())
    } while (!users.isNullOrEmpty())
}

suspend fun main() {


val api = FakeUserApi()
val users = allUsersFlow(api)
val user = users
.first {
println("Checking $it")
delay(1000) // suspending
[Link] == "User3"
}
println(user)

}
// Fetching page 0
// (1 sec)
// Checking User(name=User0)
// (1 sec)
// Checking User(name=User1)
// (1 sec)
// Checking User(name=User2)
// (1 sec)
// Fetching page 1
// (1 sec)
// Checking User(name=User3)
// (1 sec)
// User(name=User3)

On the other hand, we might have cases in which we want to fetch


pages in advance when we are still processing the elements. Doing
this in the presented case could lead to more network calls, but it
might also produce a faster result. To achieve this, we would need
independent production and consumption. Such independence is
typical of hot data streams, like channels. So, we need a hybrid of
Channel and Flow. Yes, this is supported: we just need to use the
channelFlow function, which is like Flow because it implements the
Flow interface. This builder is a regular function, and it is started
with a terminal operation (like collect). It is also like a Channel be-
cause once it is started, it produces the values in a separate coroutine
without waiting for the receiver. Therefore, fetching the next pages
and checking users happens concurrently.

fun allUsersFlow(api: UserApi): Flow<User> = channelFlow {


var page = 0
do {
println("Fetching page $page")
val users = [Link](page++) // suspending
users.forEach { send(it) }
} while (!users.isNullOrEmpty())
}

suspend fun main() {


val api = FakeUserApi()

val users = allUsersFlow(api)


val user = users
.first {
println("Checking $it")
delay(1000)
[Link] == "User3"
}
println(user)
}
// Fetching page 0
// (1 sec)
// Checking User(name=User0)
// Fetching page 1
// (1 sec)
// Checking User(name=User1)
// Fetching page 2
// (1 sec)
// Checking User(name=User2)
// Fetching page 3
// (1 sec)
// Checking User(name=User3)
// Fetching page 4
// (1 sec)
// User(name=User3)

Inside channelFlow we operate on ProducerScope<T>. ProducerScope


is the same type as used by the produce builder. It implements
CoroutineScope, so we can use it to start new coroutines with
builders. To produce elements, we use send instead of emit. We
can also access the channel or control it directly with SendChannel
functions.

interface ProducerScope<in E>:


CoroutineScope, SendChannel<E> {

val channel: SendChannel<E>


}

A typical use case for channelFlow is when we need to independently


compute values. To support this, channelFlow creates a coroutine

scope, so we can directly start coroutine builders like launch. The


code below would not work for flow because it does not create the
scope needed by coroutine builders.

fun <T> Flow<T>.merge(other: Flow<T>): Flow<T> =


channelFlow {
launch {
collect { send(it) }
}
other.collect { send(it) }
}

fun <T> contextualFlow(): Flow<T> = channelFlow {


launch(Dispatchers.IO) {
send(computeIoValue())
}
launch(Dispatchers.Default) {
send(computeCpuValue())
}
}

Just like all the other coroutines, channelFlow doesn’t finish until all
its children are in a terminal state.

callbackFlow

Let’s say that you need a flow of events you listen for, like user
clicks or other kinds of actions. The listening process should
be independent from the process of handling these events, so
channelFlow would be a good candidate. However, there is a better
one: callbackFlow.
For a very long time, there was no difference between channelFlow
and callbackFlow. In version 1.3.4, small changes were introduced
to make it less error-prone when using callbacks. However, the
biggest difference is in how people understand these functions:
callbackFlow is for wrapping callbacks.

Inside callbackFlow, we also operate on ProducerScope<T>. Here are


a few functions that might be useful for wrapping callbacks:

• awaitClose { ... } - a function that suspends until the


channel is closed. Once it is closed, it invokes its argument.
awaitClose is very important for callbackFlow. Take a look
at the example below. Without awaitClose, the coroutine will
end immediately after registering a callback. This is natural
for a coroutine: its body has ended and it has no children to
wait for, so it ends. We use awaitClose (even with an empty
body) to prevent this, and we listen for elements until the
channel is closed in some other way.
• trySendBlocking(value) - similar to send, but it is blocking
instead of suspending, so it can be used on non-suspending
functions.
• close() - ends this channel.
• cancel(throwable) - ends this channel and sends an exception
to the flow.

Here is a typical example of how callbackFlow is used:

fun flowFrom(api: CallbackBasedApi): Flow<T> = callbackFlow {


val callback = object : Callback {
override fun onNextValue(value: T) {
try {
trySendBlocking(value)
} catch (e: Exception) {
// Handle exception from the channel:
// failure in flow or premature closing
}
}
override fun onApiError(cause: Throwable) {
cancel(CancellationException("API Error", cause))
}
override fun onCompleted() = channel.close()
}
api.register(callback)
awaitClose { api.unregister(callback) }
}

Summary

In this chapter, we’ve reviewed different ways in which flows can be


created. There are many functions for starting a flow, from simple
ones like flowOf or emptyFlow, through conversion with asFlow, up to flow builders.


The simplest flow builder is just a flow function, where you can use
the emit function to produce the next values. There are also the
channelFlow and callbackFlow builders, which create a flow that has
some of the characteristics of Channel. Each of these functions has
its own use cases, and it’s useful to know them in order to leverage
the full potential of Flow.

Flow lifecycle functions

Flow can be imagined as a pipe in which requests for next values flow
in one direction, and the corresponding produced values flow in the
other direction. When flow is completed or an exception occurs, this
information is also propagated and it closes the intermediate steps
on the way. So, as they all flow, we can listen for values, exceptions,
or other characteristic events (like starting or completing). To do
this, we use methods such as onEach, onStart, onCompletion, onEmpty
and catch. Let’s explain these one by one.

onEach

To react to each flowing value, we use the onEach function.

suspend fun main() {


flowOf(1, 2, 3, 4)
.onEach { print(it) }
.collect() // 1234
}

The onEach lambda expression is suspending, and elements are pro-


cessed one after another in order (sequentially). So, if we add delay
in onEach, we will delay each value as it flows.

suspend fun main() {


flowOf(1, 2)
.onEach { delay(1000) }
.collect { println(it) }
}
// (1 sec)
// 1
// (1 sec)
// 2

onStart

The onStart function sets a listener that should be called immedi-


ately once the flow is started, i.e., once the terminal operation is
called. It is important to note that onStart does not wait for the first
element: it is called when we request the first element.

suspend fun main() {


flowOf(1, 2)
.onEach { delay(1000) }
.onStart { println("Before") }
.collect { println(it) }
}
// Before
// (1 sec)
// 1
// (1 sec)
// 2

It is good to know that in onStart (as well as in onCompletion, onEmpty


and catch) we can emit elements. Such elements will flow down-
stream from this place.

suspend fun main() {


flowOf(1, 2)
.onEach { delay(1000) }
.onStart { emit(0) }
.collect { println(it) }
}
// 0
// (1 sec)
// 1
// (1 sec)
// 2

onCompletion

There are a few ways in which a flow can be completed. The most
common one is when the flow builder is done (i.e., the last element
has been sent), although this also happens in the case of an uncaught
exception or a coroutine cancellation. In all these cases, we can add
a listener for flow completion by using the onCompletion method.

suspend fun main() = coroutineScope {


flowOf(1, 2)
.onEach { delay(1000) }
.onCompletion { println("Completed") }
.collect { println(it) }
}
// (1 sec)
// 1
// (1 sec)
// 2
// Completed

suspend fun main() = coroutineScope {


val job = launch {
flowOf(1, 2)
.onEach { delay(1000) }
.onCompletion { println("Completed") }
.collect { println(it) }
}
delay(1100)
[Link]()
}
// (1 sec)
// 1
// (0.1 sec)
// Completed

In Android, we often use onStart to show a progress bar (the in-


dicator that we are waiting for a network response), and we use
onCompletion to hide it.

fun updateNews() {
    scope.launch {
        newsFlow()
            .onStart { showProgressBar() }
            .onCompletion { hideProgressBar() }
            .collect { view.showNews(it) }
    }
}

onEmpty

A flow might complete without emitting any value, which might be


an indication of an unexpected event. For such cases, there is the
onEmpty function, which invokes the given action when this flow
completes without emitting any elements. onEmpty might then be
used to emit some default value.

suspend fun main() = coroutineScope {


flow<List<Int>> { delay(1000) }
.onEmpty { emit(emptyList()) }
.collect { println(it) }
}
// (1 sec)
// []

catch

At any point of flow building or processing, an exception might


occur. Such an exception will flow down, closing each processing
step on the way; however, it can be caught and managed. To do so,
we can use the catch method. This listener receives the exception as
an argument and allows you to perform recovering operations.

class MyError : Throwable("My error")

val flow = flow {


emit(1)
emit(2)
throw MyError()
}

suspend fun main(): Unit {


[Link] { println("Got $it") }
.catch { println("Caught $it") }
.collect { println("Collected $it") }
}
// Got 1
// Collected 1
// Got 2
// Collected 2
// Caught MyError: My error

In the example above, notice that onEach does not react


to an exception. The same happens with other functions
like map, filter etc. Only the onCompletion handler will
be called.
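It is also worth knowing that the onCompletion lambda receives the cause of the completion as an argument: it is null when the flow completes successfully, and it is the exception otherwise. Here is a small sketch (reusing the MyError class defined above):

suspend fun main() {
    flow {
        emit(1)
        throw MyError()
    }
        .onCompletion { cause ->
            if (cause == null) println("Completed successfully")
            else println("Completed with $cause")
        }
        .catch { println("Caught $it") }
        .collect { println("Collected $it") }
}
// Collected 1
// Completed with MyError: My error
// Caught MyError: My error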

The catch method stops an exception by catching it. The previous


steps have already been completed, but catch can still emit new
values and keep the rest of the flow alive.

val flow = flow {


emit("Message1")
throw MyError()
}

suspend fun main(): Unit {


[Link] { emit("Error") }
.collect { println("Collected $it") }
}
// Collected Message1
// Collected Error

The catch will only react to the exceptions thrown in the function
defined upstream (you can imagine that the exception needs to be
caught as it flows down).

In Android, we often use catch to show exceptions that happened in


a flow.

fun updateNews() {
    scope.launch {
        newsFlow()
            .catch { view.handleError(it) }
            .onStart { showProgressBar() }
            .onCompletion { hideProgressBar() }
            .collect { view.showNews(it) }
    }
}

We could also use catch to emit default data to display on the screen,
such as an empty list.

fun updateNews() {
    scope.launch {
        newsFlow()
            .catch {
                view.handleError(it)
                emit(emptyList())
            }
            .onStart { showProgressBar() }
            .onCompletion { hideProgressBar() }
            .collect { view.showNews(it) }
    }
}

Uncaught exceptions

Uncaught exceptions in a flow immediately cancel this flow, and


collect rethrows this exception. This behavior is typical of suspend-
ing functions, and coroutineScope behaves the same way. Excep-
tions can be caught outside flow using the classic try-catch block.

val flow = flow {


emit("Message1")
throw MyError()
}

suspend fun main(): Unit {


try {
[Link] { println("Collected $it") }
} catch (e: MyError) {
println("Caught")
}
}
// Collected Message1
// Caught

Notice that using catch does not protect us from an exception in the
terminal operation (because catch cannot be placed after the last
operation). So, if there is an exception in the collect, it won’t be
caught, and an error will be thrown.

val flow = flow {


emit("Message1")
emit("Message2")
}

suspend fun main(): Unit {


[Link] { println("Before") }
.catch { println("Caught $it") }
.collect { throw MyError() }
}
// Before
// Exception in thread "..." MyError: My error

Therefore, it is common practice to move the operation from


collect to onEach and place it before the catch. This is specifically
useful if we suspect that collect might raise an exception. If we
move the operation from collect, we can be sure that catch will
catch all exceptions.

val flow = flow {


emit("Message1")
emit("Message2")
}

suspend fun main(): Unit {


[Link] { println("Before") }
.onEach { throw MyError() }
.catch { println("Caught $it") }
.collect()
}
// Before
// Caught MyError: My error

flowOn

Lambda expressions used as arguments for flow operations (like


onEach, onStart, onCompletion, etc.) and its builders (like flow or
channelFlow) are all suspending in nature. Suspending functions
need to have a context and should be in relation to their parent (for

structured concurrency). So, you might be wondering where these


functions take their context from. The answer is: from the context
where collect is called.

fun usersFlow(): Flow<String> = flow {


repeat(2) {
val ctx = currentCoroutineContext()
val name = ctx[CoroutineName]?.name
emit("User$it in $name")
}
}

suspend fun main() {


val users = usersFlow()
withContext(CoroutineName("Name1")) {
users.collect { println(it) }
}
withContext(CoroutineName("Name2")) {
users.collect { println(it) }
}
}
// User0 in Name1
// User1 in Name1
// User0 in Name2
// User1 in Name2

How does this code work? The terminal operation call requests el-
ements from upstream, thereby propagating the coroutine context.
However, it can also be modified by the flowOn function.

suspend fun present(place: String, message: String) {


val ctx = coroutineContext
val name = ctx[CoroutineName]?.name
println("[$name] $message on $place")
}

fun messagesFlow(): Flow<String> = flow {


present("flow builder", "Message")
emit("Message")
Flow lifecycle functions 285

suspend fun main() {


val users = messagesFlow()
withContext(CoroutineName("Name1")) {
users
.flowOn(CoroutineName("Name3"))
.onEach { present("onEach", it) }
.flowOn(CoroutineName("Name2"))
.collect { present("collect", it) }
}
}
// [Name3] Message on flow builder
// [Name2] Message on onEach
// [Name1] Message on collect

Remember that flowOn works only for functions that are upstream
in the flow.

launchIn

collect is a suspending operation that suspends a coroutine until


the flow is completed. It is common to wrap it with a launch builder
so that flow processing can start on another coroutine. To help with
such cases, there is the launchIn function, which launches collect
in a new coroutine on the scope passed as the only argument.

fun <T> Flow<T>.launchIn(scope: CoroutineScope): Job =


scope.launch { collect() }

launchIn is often used to start flow processing in a separate corou-


tine.

suspend fun main(): Unit = coroutineScope {


flowOf("User1", "User2")
.onStart { println("Users:") }
.onEach { println(it) }
.launchIn(this)
}
// Users:
// User1
// User2

Summary

In this chapter, we’ve learned about different Flow functionalities.


Now we know how to do something when our flow starts, when it is
closing, or on each element; we also know how to catch exceptions
and how to launch a flow in a new coroutine. These are typical
tools that are widely used, especially in Android development. For
instance, here is how a flow might be used on Android:

fun updateNews() {
newsFlow()
.onStart { showProgressBar() }
.onCompletion { hideProgressBar() }
.onEach { view.showNews(it) }
.catch { view.handleError(it) }
.launchIn(viewModelScope)
}

Flow processing

We’ve presented Flow as a pipe through which values flow. As they


do so, they can be changed in different ways: dropped, multiplied,
transformed, or combined. These operations between flow creation
and the terminal operation are called Flow processing. In this chapter,
we will learn about the functions we use for this.

The functions presented here might remind you of the


functions we use for Collection processing. This is no
coincidence as they represent the same concepts, with
the difference that flow elements can be spread in time.

map

The first important function we need to learn about is map, which


transforms each flowing element according to its transformation
function. So, if you have a flow of numbers and your operation is
calculating the squares of these numbers, then the resulting flow
will have the squares of these numbers.

suspend fun main() {


flowOf(1, 2, 3) // [1, 2, 3]
.map { it * it } // [1, 4, 9]
.collect { print(it) } // 149
}

I will use the diagrams shown above to visualize how


flow processing functions change elements over time.
The horizontal line represents time, and the elements on
this line are elements that are emitted in the flow at this
point in time. The line above represents a flow before
the presented operation, and the line below represents
a flow after the operation. This diagram can also be used
to represent multiple operations used one after another,
like map and filter in the diagram below.

Most of the Flow processing functions are quite simple to implement


with tools we already know from previous chapters. To implement
map, we might use the flow builder to create a new flow. Then, we
might collect elements from the previous flow and emit each
collected element transformed. The implementation below
is just a slightly simplified version of the actual one from the
kotlinx.coroutines library.

fun <T, R> Flow<T>.map(


transform: suspend (value: T) -> R
): Flow<R> = flow { // here we create a new flow
collect { value -> // here we collect from receiver
emit(transform(value))
}
}

map is a very popular function. Its use cases include unpacking or


converting values into a different type.

// Here we use map to have user actions from input events


fun actionsFlow(): Flow<UserAction> =
observeInputEvents()
.map { toAction(it.code) }

// Here we use map to convert from User to UserJson


fun getAllUser(): Flow<UserJson> =
    userRepository.getAllUsers()
        .map { it.toUserJson() }

filter

The next important function is filter, which returns a flow con-


taining only values from the original flow that match the given
predicate.

suspend fun main() {


(1..10).asFlow() // [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
.filter { it <= 5 } // [1, 2, 3, 4, 5]
.filter { isEven(it) } // [2, 4]
.collect { print(it) } // 24
}

fun isEven(num: Int): Boolean = num % 2 == 0



This function can also be implemented quite easily using the flow
builder. We would just need to introduce an if statement with the
predicate (instead of transformation).

fun <T> Flow<T>.filter(


predicate: suspend (T) -> Boolean
): Flow<T> = flow { // here we create a new flow
collect { value -> // here we collect from receiver
if (predicate(value)) {
emit(value)
}
}
}

filter is typically used to eliminate elements we are not interested


in.

// Here we use filter to drop invalid actions


fun actionsFlow(): Flow<UserAction> =
observeInputEvents()
.filter { isValidAction(it.code) }
.map { toAction(it.code) }

take and drop

We use take to pass only a certain number of elements.



suspend fun main() {


('A'..'Z').asFlow()
.take(5) // [A, B, C, D, E]
.collect { print(it) } // ABCDE
}

We use drop to ignore a certain number of elements.

suspend fun main() {


('A'..'Z').asFlow()
.drop(20) // [U, V, W, X, Y, Z]
.collect { print(it) } // UVWXYZ
}

How does flow processing work?

We’ve seen quite a few flow processing and lifecycle functions. Their
implementation is quite simple, so you can guess that there is no
magic going on there. Most such functions can be implemented with
flow builder and collect with a lambda. Here is a simple example
of flow processing and some simplified map and flowOf implementa-
tions:

suspend fun main() {


flowOf('a', 'b')
.map { it.uppercase() }
.collect { print(it) } // AB
}

fun <T, R> Flow<T>.map(


transform: suspend (value: T) -> R
): Flow<R> = flow {
collect { value ->
emit(transform(value))
}
}

fun <T> flowOf(vararg elements: T): Flow<T> = flow {


for (element in elements) {
emit(element)
}
}

If you inline the flowOf and map functions, you will end up with the
following code (I added labels on lambdas and comments with numbers).

suspend fun main() {


flow map@{ // 1
flow flowOf@{ // 2
for (element in arrayOf('a', 'b')) { // 3
this@flowOf.emit(element) // 4
}
}.collect { value -> // 5
this@map.emit(value.uppercase()) // 6
}
}.collect { // 7
print(it) // 8
}
}

Let’s analyze this step by step. We start a flow at 1 and collect it at


7. When we start collecting, we invoke the lambda @map (which
starts at 1), which calls another builder at 2 and collects it at 5.
When we collect, we start lambda @flowOf (which starts at 2). This
lambda (at 2) iterates over an array with 'a' and 'b'. The first value
'a' is emitted at 4, which calls the lambda at 5. This lambda (at 5)
transforms the value to 'A' and emits it to the flow @map, thus
calling the lambda at 7. The value is printed; we then finish the
lambda at 7 and resume the lambda at 6. It finishes, so we resume
@flowOf at 4. We continue the iteration and emit 'b' at 4. So, we call
the lambda at 5, transform the value to 'B', and emit it at 6 to the
flow @map. The value is collected at 7 and printed at 8. The lambda
at 7 is finished, so we resume the lambda at 6. This is finished, so we
resume the lambda @flowOf at 4. This is also finished, so we resume
the @map on collect at 5. Since there is nothing more, we reach the
end of @map. With that, we resume the collect at 7, and we reach
the end of the main function.
The same happens in most flow processing and lifecycle functions,
so understanding this gives us quite a good understanding of how
Flow works.

merge, zip and combine

Let’s talk about combining two flows into one. There are a few
ways to do this. The simplest involves merging the elements from
two flows into one. No modifications are made, no matter from
which flow elements originate. To do this, we use the top-level merge
function.

suspend fun main() {


val ints: Flow<Int> = flowOf(1, 2, 3)
val doubles: Flow<Double> = flowOf(0.1, 0.2, 0.3)

val together: Flow<Number> = merge(ints, doubles)


print(together.toList())
// [1, 0.1, 0.2, 0.3, 2, 3]
// or [0.1, 1, 2, 3, 0.2, 0.3]
// or any other combination
}

It is important to know that when we use merge the elements from


one flow do not wait for another flow. For instance, in the example
below, elements from the first flow are delayed, but this does not stop
the elements from the second flow.

suspend fun main() {


val ints: Flow<Int> = flowOf(1, 2, 3)
.onEach { delay(1000) }
val doubles: Flow<Double> = flowOf(0.1, 0.2, 0.3)

val together: Flow<Number> = merge(ints, doubles)


together.collect { println(it) }
}

// 0.1
// 0.2
// 0.3
// (1 sec)
// 1
// (1 sec)
// 2
// (1 sec)
// 3

We use merge when we have multiple sources of events that should


lead to the same actions.

fun listenForMessages() {
merge(userSentMessages, messagesNotifications)
.onEach { displayMessage(it) }
.launchIn(scope)
}

The next function is zip, which makes pairs from both flows. We
also need to specify a function that decides how elements are paired
(transformed into the one that will be emitted in the new flow). Each
element can only be part of one pair, so it needs to wait for its pair.
Elements left without a pair are lost, so when the zipping of a flow is
complete, the resulting flow is also complete (as is the other flow).

suspend fun main() {


val flow1 = flowOf("A", "B", "C")
.onEach { delay(400) }
val flow2 = flowOf(1, 2, 3, 4)
.onEach { delay(1000) }
flow1.zip(flow2) { f1, f2 -> "${f1}_${f2}" }
.collect { println(it) }
}
// (1 sec)
// A_1
// (1 sec)
// B_2
// (1 sec)
// C_3

The zip function reminds me of the polonaise - a tradi-


tional Polish dance. One feature of this dance is that a
line of pairs is separated down the middle, then these
pairs reform when they meet again.

A still from the movie Pan Tadeusz, directed by Andrzej Wajda, presenting the
polonaise dance.

The last important function when combining two flows is combine.


Just like zip, it also forms pairs from elements, which have to
wait for the slower flow to produce the first pair. However, the
similarities to the polonaise dance end here. When we use combine,
every new element replaces its predecessor. If the first pair has been
formed already, it will produce a new pair together with the previous
element from the other flow.

Notice that zip needs pairs, so it closes when the first flow closes.
combine does not have such a limitation, so it will emit until both
flows are closed.

suspend fun main() {


val flow1 = flowOf("A", "B", "C")
.onEach { delay(400) }
val flow2 = flowOf(1, 2, 3, 4)
.onEach { delay(1000) }
flow1.combine(flow2) { f1, f2 ->
.collect { println(it) }
}
// (1 sec)
// B_1
// (0.2 sec)
// C_1
// (0.8 sec)
// C_2
// (1 sec)
// C_3
// (1 sec)
// C_4

combine is typically used when we need to actively observe two


sources of changes. If you want to have elements emitted whenever

a change occurs, you can add initial values to each combined flow (to
have the initial pair).

userUpdateFlow.onStart { emit(currentUser) }

A typical use case might be when a view depends on changes to either of two
observable elements. For example, when a notification badge
depends on both the current state of a user and some notifications,
we might observe them both and combine their changes to update a
view.

userStateFlow
.combine(notificationsFlow) { userState, notifications ->
updateNotificationBadge(userState, notifications)
}
.collect()

fold and scan

If you use collection processing functions, you might recognize fold.


It is used to combine all the values in this collection into one by
applying an operation that combines two values into one for each
element (starting from the initial value).
For example, if the initial value is 0 and the operation is addition,
then the result is the sum of all the numbers: we first take the initial
value 0; then, we add the first element 1 to it; to the result 1, we add
the second number 2; to the result 3, we add the third number 3; to
the result 6, we add the last number 4. The result of this operation,
10, is what will be returned from fold.

fun main() {
    val list = listOf(1, 2, 3, 4)
    val res = list.fold(0) { acc, i -> acc + i }
    println(res) // 10
    val res2 = list.fold(1) { acc, i -> acc * i }
    println(res2) // 24
}

fold is a terminal operation. It can also be used for Flow, but it will
suspend until this flow is completed (just like collect).

suspend fun main() {


val list = flowOf(1, 2, 3, 4)
.onEach { delay(1000) }
val res = list.fold(0) { acc, i -> acc + i }
println(res)
}
// (4 sec)
// 10

There is an alternative to fold called scan. It is an intermediate


operation that produces all intermediate accumulator values.

fun main() {
val list = listOf(1, 2, 3, 4)
val res = list.scan(0) { acc, i -> acc + i }
println(res) // [0, 1, 3, 6, 10]
}

scan is useful with Flow because it produces a new value immediately


after receiving one from the previous step.

suspend fun main() {


flowOf(1, 2, 3, 4)
.onEach { delay(1000) }
.scan(0) { acc, v -> acc + v }
.collect { println(it) }
}
// 0
// (1 sec)
// 1
// (1 sec)
// 3
// (1 sec)
// 6

// (1 sec)
// 10

We can implement scan easily using the flow builder and collect. We
first emit the initial value, then with each new element we emit the
result of the next value accumulation.

fun <T, R> Flow<T>.scan(


initial: R,
operation: suspend (accumulator: R, value: T) -> R
): Flow<R> = flow {
var accumulator: R = initial
emit(accumulator)
collect { value ->
accumulator = operation(accumulator, value)
emit(accumulator)
}
}

The typical use case for scan is when we have a flow of updates or
changes, and we need an object that is the result of these changes.

val userStateFlow: Flow<User> = userChangesFlow


.scan(user) { acc, change -> acc.withChange(change) }

val messagesListFlow: Flow<List<Message>> = messagesFlow


.scan(messages) { acc, message -> acc + message }

flatMapConcat, flatMapMerge and flatMapLatest

Another well-known function for collections is flatMap. In the case


of collections, it is similar to a map, but the transformation function
needs to return a collection that is then flattened. For example, if you
have a list of departments, each of which has a list of employees, you
can use flatMap to make a list of all employees in all departments.

val allEmployees: List<Employee> = departments


.flatMap { department -> department.employees }

// If we had used map, we would have a list of lists instead


val listOfListsOfEmployee: List<List<Employee>> = departments
.map { department -> department.employees }

How should flatMap look on a flow? It seems intuitive that we might


expect its transformation function to return a flow that should then
be flattened. The problem is that flow elements can be spread in
time. So, should the flow produced from the second element wait
for the one produced from the first one, or should it process them
concurrently? Since there is no clear answer, there is no flatMap
function for Flow, but instead there are flatMapConcat, flatMapMerge
and flatMapLatest.
The flatMapConcat function processes the produced flows one after
another. So, the second flow can start when the first one is done.
In the following example, we make a flow from the characters “A”,
“B”, and “C”. The flow produced by each of them includes these
characters and the numbers 1, 2, and 3, with a 1-second delay in
between.

fun flowFrom(elem: String) = flowOf(1, 2, 3)


.onEach { delay(1000) }
.map { "${it}_${elem} " }

suspend fun main() {


flowOf("A", "B", "C")
.flatMapConcat { flowFrom(it) }
.collect { println(it) }
}
// (1 sec)
// 1_A
// (1 sec)
// 2_A
// (1 sec)
// 3_A
// (1 sec)
// 1_B
// (1 sec)
// 2_B
// (1 sec)
// 3_B
// (1 sec)
// 1_C
// (1 sec)
// 2_C
// (1 sec)
// 3_C

The second mentioned function, flatMapMerge, is the most intuitive


to me. It processes produced flows concurrently.

fun flowFrom(elem: String) = flowOf(1, 2, 3)


.onEach { delay(1000) }
.map { "${it}_${elem} " }

suspend fun main() {


flowOf("A", "B", "C")
.flatMapMerge { flowFrom(it) }
.collect { println(it) }
}
// (1 sec)
// 1_A
// 1_B
// 1_C
// (1 sec)
// 2_A
// 2_B
// 2_C
// (1 sec)
// 3_A
// 3_B
// 3_C

The number of flows that can be concurrently processed can be set


using the concurrency parameter. The default value of this param-
eter is 16, but it can be changed in the JVM using the
DEFAULT_CONCURRENCY_PROPERTY_NAME property. Beware of this default limita-
tion because if you use flatMapMerge on a flow with many elements,
only 16 will be processed at the same time.

suspend fun main() {


flowOf("A", "B", "C")
.flatMapMerge(concurrency = 2) { flowFrom(it) }
.collect { println(it) }
}
// (1 sec)
// 1_A
// 1_B
// (1 sec)
// 2_A
// 2_B
// (1 sec)
// 3_A
// 3_B
// (1 sec)
// 1_C
// (1 sec)
// 2_C

// (1 sec)
// 3_C

The typical use of flatMapMerge is when we need to request data for


each element in a flow. For instance, we have a list of categories, and
you need to request offers for each of them. You already know that
you can do this with the async function. There are two advantages of
using a flow with flatMapMerge instead:

• we can control the concurrency parameter and decide how


many categories we want to fetch at the same time (to avoid
sending hundreds of requests at the same time);
• we can return Flow and send the next elements as they arrive
(so, on the function-use side, they can be handled immedi-
ately).

suspend fun getOffers(


categories: List<Category>
): List<Offer> = coroutineScope {
categories
.map { async { api.requestOffers(it) } }
.flatMap { it.await() }
}

// A better solution
suspend fun getOffers(
categories: List<Category>
): Flow<Offer> = categories
.asFlow()
.flatMapMerge(concurrency = 20) {
suspend { api.requestOffers(it) }.asFlow()
// or flow { emit(api.requestOffers(it)) }
}

The last function is flatMapLatest. It forgets about the previous flow


once a new one appears. With every new value, the previous flow
processing is forgotten. So, if there is no delay between “A”, “B”, and
“C”, then you will only see “1_C”, “2_C”, and “3_C”.

fun flowFrom(elem: String) = flowOf(1, 2, 3)


.onEach { delay(1000) }
.map { "${it}_${elem} " }

suspend fun main() {


flowOf("A", "B", "C")
.flatMapLatest { flowFrom(it) }
.collect { println(it) }
}
// (1 sec)
// 1_C
// (1 sec)
// 2_C
// (1 sec)
// 3_C

It gets more interesting when the elements from the initial flow
are delayed. What happens in the example below is that (after 1.2
sec) “A” starts its flow, which was created using flowFrom. This flow
produces an element “1_A” in 1 second, but 200 ms later “B” appears
and this previous flow is closed and forgotten. “B” flow managed
to produce “1_B” when “C” appeared and started producing its flow.
This one will finally produce elements “1_C”, “2_C”, and “3_C”, with
a 1-second delay in between.

suspend fun main() {


flowOf("A", "B", "C")
.onEach { delay(1200) }
.flatMapLatest { flowFrom(it) }
.collect { println(it) }
}
// (2.2 sec)
// 1_A
// (1.2 sec)
// 1_B
// (1.2 sec)
// 1_C
// (1 sec)
// 2_C
// (1 sec)
// 3_C

Terminal operations

Finally, we have operations that end flow processing. These are


called terminal operations. Until now, we have only used collect,
but there are also others that are similar to those offered by collec-
tions and Sequence: count (counts the number of elements in the
flow), first and firstOrNull (to get the first element emitted by

the flow), fold and reduce (to accumulate elements into an object).
Terminal operations are suspending, and they return the value once
the flow is complete (or they complete the flow themselves).

suspend fun main() {


val flow = flowOf(1, 2, 3, 4) // [1, 2, 3, 4]
.map { it * it } // [1, 4, 9, 16]

println(flow.first()) // 1
println(flow.count()) // 4

println(flow.reduce { acc, value -> acc * value }) // 576
println(flow.fold(0) { acc, value -> acc + value }) // 30
}

There are currently not many more terminal operations for flow,
but if you need something different you can always implement it
yourself. This is, for instance, how you can implement sum for a flow
of Int:

suspend fun Flow<Int>.sum(): Int {


var sum = 0
collect { value ->
sum += value
}
return sum
}

Similarly, you can implement nearly any terminal operation with


just the collect method.
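For example, this is a sketch of how an average function for a flow of Int could be built in the same way (for an empty flow, the division yields NaN):

suspend fun Flow<Int>.average(): Double {
    var sum = 0
    var count = 0
    collect { value ->
        sum += value
        count++
    }
    return sum.toDouble() / count
}

suspend fun main() {
    println(flowOf(1, 2, 3, 4).average()) // 2.5
}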

Summary

There are many tools that support Flow processing. It is good to have
some idea about them because they are useful in both backend and
Android development. Also, if you need some different functions,
they can be implemented quite easily thanks to the collect method
and the flow builder.

SharedFlow and StateFlow

Flow is typically cold, so its values are calculated on demand. How-


ever, there are cases in which we want multiple receivers to be
subscribed to one source of changes. This is where we use Shared-
Flow, which is conceptually similar to a mailing list. We also have
StateFlow, which is similar to an observable value. Let’s explain
them both step by step.

SharedFlow

Let’s start with MutableSharedFlow, which is like a broadcast chan-


nel: everyone can send (emit) messages which will be received by
every coroutine that is listening (collecting).

suspend fun main(): Unit = coroutineScope {


val mutableSharedFlow =
MutableSharedFlow<String>(replay = 0)
// or MutableSharedFlow<String>()

launch {
mutableSharedFlow.collect {
println("#1 received $it")
}
}
launch {
mutableSharedFlow.collect {
println("#2 received $it")
}
}

delay(1000)
[Link]("Message1")
[Link]("Message2")
}
// (1 sec)
// #1 received Message1
// #2 received Message1
// #1 received Message2

// #2 received Message2
// (program never ends)

The above program never ends because the


coroutineScope is waiting for the coroutines that
were started with launch and which keep listening on
MutableSharedFlow. Apparently, MutableSharedFlow is
not closable, so the only way to fix this problem is to
cancel the whole scope.

MutableSharedFlow can also keep the messages it has sent. If we set the


replay parameter (it defaults to 0), the defined number of last values
will be kept. If a coroutine now starts observing, it will receive these
values first. This cache can also be reset with resetReplayCache.

suspend fun main(): Unit = coroutineScope {


val mutableSharedFlow = MutableSharedFlow<String>(
replay = 2,
)
[Link]("Message1")
[Link]("Message2")
[Link]("Message3")

println([Link])
// [Message2, Message3]

launch {
mutableSharedFlow.collect {
println("#1 received $it")
}
// #1 received Message2
// #1 received Message3
}

delay(100)
mutableSharedFlow.resetReplayCache()
println(mutableSharedFlow.replayCache) // []
}

MutableSharedFlow is conceptually similar to RxJava


Subjects. When the replay parameter is set to 0, it

is similar to a PublishSubject. When replay is 1,


it is similar to a BehaviorSubject. When replay is
Int.MAX_VALUE, it is similar to ReplaySubject.

In Kotlin, we like to have a distinction between interfaces


that are used to only listen and those that are used to modify.
For instance, we’ve already seen the distinction between
SendChannel, ReceiveChannel and just Channel. The same rule
applies here. MutableSharedFlow inherits from both SharedFlow
and FlowCollector. The former inherits from Flow and is used to
observe, while FlowCollector is used to emit values.

interface MutableSharedFlow<T> :
SharedFlow<T>, FlowCollector<T> {

fun tryEmit(value: T): Boolean


val subscriptionCount: StateFlow<Int>
fun resetReplayCache()
}

interface SharedFlow<out T> : Flow<T> {


val replayCache: List<T>
}

interface FlowCollector<in T> {


suspend fun emit(value: T)
}

These interfaces are often used to expose only the ability to emit, or


only to collect.

suspend fun main(): Unit = coroutineScope {


val mutableSharedFlow = MutableSharedFlow<String>()
val sharedFlow: SharedFlow<String> = mutableSharedFlow
val collector: FlowCollector<String> = mutableSharedFlow

launch {
sharedFlow.collect {
println("#1 received $it")

}
}
launch {
sharedFlow.collect {
println("#2 received $it")
}
}

delay(1000)
[Link]("Message1")
[Link]("Message2")
}
// (1 sec)
// #1 received Message1
// #2 received Message1
// #1 received Message2
// #2 received Message2

Here is an example of typical usage on Android:

class UserProfileViewModel {
private val _userChanges =
MutableSharedFlow<UserChange>()
val userChanges: SharedFlow<UserChange> = _userChanges

fun onCreate() {
viewModelScope.launch {
userChanges.collect(::applyUserChange)
}
}

fun onNameChanged(newName: String) {


// ...
_userChanges.emit(NameChange(newName))
}

fun onPublicKeyChanged(newPublicKey: String) {


// ...
_userChanges.emit(PublicKeyChange(newPublicKey))

}
}

shareIn

Flow is often used to observe changes, like user actions, database


modifications, or new messages. We already know the different ways
in which these events can be processed and handled. We’ve learned
how to merge multiple flows into one. But what if multiple classes
are interested in these changes and we would like to turn one flow
into multiple flows? The solution is SharedFlow, and the easiest way
to turn a Flow into a SharedFlow is by using the shareIn function.

suspend fun main(): Unit = coroutineScope {


val flow = flowOf("A", "B", "C")
.onEach { delay(1000) }

val sharedFlow: SharedFlow<String> = flow.shareIn(


scope = this,
started = SharingStarted.Eagerly,
// replay = 0 (default)
)

delay(500)

launch {
[Link] { println("#1 $it") }
}

delay(1000)

launch {
[Link] { println("#2 $it") }
}

delay(1000)

launch {
[Link] { println("#3 $it") }
SharedFlow and StateFlow 315

}
}
// (1 sec)
// #1 A
// (1 sec)
// #1 B
// #2 B
// (1 sec)
// #1 C
// #2 C
// #3 C

The shareIn function creates a SharedFlow and sends elements from


its Flow. Since we need to start a coroutine to collect elements from
the flow, shareIn expects a coroutine scope as the first argument. The
third argument is replay, which is 0 by default. The second argu-
ment is interesting: started determines when listening for values
should start, depending on the number of listeners. The following
options are supported:

• SharingStarted.Eagerly - immediately starts listening for val-


ues and sending them to a flow. Notice that if you have a
limited replay value and your values appear before you start
subscribing, you might lose some values (if your replay is 0, you
will lose all such values).

suspend fun main(): Unit = coroutineScope {


val flow = flowOf("A", "B", "C")

val sharedFlow: SharedFlow<String> = flow.shareIn(


scope = this,
started = SharingStarted.Eagerly,
)

delay(100)
launch {
[Link] { println("#1 $it") }
}
print("Done")

}
// (0.1 sec)
// Done

• SharingStarted.Lazily - starts listening when the first sub-


scriber appears. This guarantees that this first subscriber gets
all the emitted values, while subsequent subscribers are only
guaranteed to get the most recent replay values. The upstream
flow continues to be active even when all subscribers disap-
pear, but only the most recent replay values are cached without
subscribers.

suspend fun main(): Unit = coroutineScope {


val flow1 = flowOf("A", "B", "C")
val flow2 = flowOf("D")
.onEach { delay(1000) }

val sharedFlow = merge(flow1, flow2).shareIn(


scope = this,
started = SharingStarted.Lazily,
)

delay(100)
launch {
[Link] { println("#1 $it") }
}
delay(1000)
launch {
[Link] { println("#2 $it") }
}
}
// (0.1 sec)
// #1 A
// #1 B
// #1 C
// (1 sec)
// #2 D
// #1 D

• WhileSubscribed() - starts listening on the flow when the first


subscriber appears; it stops when the last subscriber disap-
pears. If a new subscriber appears when our SharedFlow is
stopped, it will start again. WhileSubscribed has additional
optional configuration parameters: stopTimeoutMillis (how
long to listen after the last subscriber disappears, 0 by default)
and replayExpirationMillis (how long to keep replay after
stopping, Long.MAX_VALUE by default). A small sketch of these parameters is shown after this list.

suspend fun main(): Unit = coroutineScope {


val flow = flowOf("A", "B", "C", "D")
.onStart { println("Started") }
.onCompletion { println("Finished") }
.onEach { delay(1000) }

val sharedFlow = flow.shareIn(


scope = this,
started = SharingStarted.WhileSubscribed(),
)

delay(3000)
launch {
println("#1 ${[Link]()}")
}
launch {
println("#2 ${[Link](2).toList()}")
}
delay(3000)
launch {
println("#3 ${[Link]()}")
}
}
// (3 sec)
// Started
// (1 sec)
// #1 A
// (1 sec)
// #2 [A, B]
// Finished

// (1 sec)
// Started
// (1 sec)
// #3 A
// Finished

• It is also possible to define a custom strategy by implementing


the SharingStarted interface.
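Here is a small sketch of the WhileSubscribed parameters mentioned above (the Location class and the timings are assumed only for illustration). stopTimeoutMillis keeps the upstream flow active for a while after the last subscriber disappears, so quick resubscriptions (for instance, after a screen rotation) do not restart it; replayExpirationMillis = 0 drops the cached value as soon as sharing stops.

class Location(val lat: Double, val lon: Double)

fun locationUpdates(
    locationFlow: Flow<Location>,
    scope: CoroutineScope,
): SharedFlow<Location> = locationFlow.shareIn(
    scope = scope,
    started = SharingStarted.WhileSubscribed(
        // keep listening for 5 seconds after the last subscriber leaves
        stopTimeoutMillis = 5000,
        // forget the replayed value as soon as sharing stops
        replayExpirationMillis = 0,
    ),
    replay = 1,
)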

Using shareIn is very convenient when multiple services are inter-


ested in the same changes. Let’s say that you need to observe how
stored locations change over time. This is how a DAO (Data Access
Object) could be implemented on Android using the Room library:

@Dao
interface LocationDao {
@Insert(onConflict = OnConflictStrategy.IGNORE)
suspend fun insertLocation(location: Location)

@Query("DELETE FROM location_table")


suspend fun deleteLocations()

@Query("SELECT * FROM location_table ORDER BY time")


fun observeLocations(): Flow<List<Location>>
}

The problem is that if multiple services need to depend on these


locations, then it would not be optimal for each of them to observe
the database separately. Instead, we could make a service that listens
to these changes and shares them into SharedFlow. This is where we
will use shareIn. But how should we configure it? You need to decide
for yourself. Do you want your subscribers to immediately receive
the last list of locations? If so, set replay to 1. If you only want to react
to changes, set it to 0. How about started? WhileSubscribed() sounds
best for this use case.

class LocationService(
private val locationDao: LocationDao,
private val scope: CoroutineScope
) {
private val locations = locationDao.observeLocations()
.shareIn(
scope = scope,
started = SharingStarted.WhileSubscribed(),
)

fun observeLocations(): Flow<List<Location>> = locations


}

Beware! Do not create a new SharedFlow for each call.


Create one, and store it in a property.

StateFlow

StateFlow is an extension of the SharedFlow concept. It works simi-


larly to SharedFlow when the replay parameter is set to 1. It always
stores one value, which can be accessed using the value property.

interface StateFlow<out T> : SharedFlow<T> {


val value: T
}

interface MutableStateFlow<T> :
StateFlow<T>, MutableSharedFlow<T> {

override var value: T

fun compareAndSet(expect: T, update: T): Boolean


}

Please note how the value property is overridden inside


MutableStateFlow. In Kotlin, an open val property can
be overridden with a var property. val only allows get-
ting a value (getter), while var also supports setting a new
value (setter).

The initial value needs to be passed to the constructor. We both


access and set the value using the value property. As you can see,
MutableStateFlow is like an observable holder for a value.

suspend fun main() = coroutineScope {


val state = MutableStateFlow("A")
println(state.value) // A
launch {
[Link] { println("Value changed to $it") }
// Value changed to A
}

delay(1000)
[Link] = "B" // Value changed to B

delay(1000)
launch {
[Link] { println("and now it is $it") }
// and now it is B
}

delay(1000)
[Link] = "C" // Value changed to C and now it is C
}
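Since value is a plain read-modify-write property, concurrent updates from many coroutines can overwrite each other. For such cases, the kotlinx.coroutines library provides the update function, which is built on top of compareAndSet and retries the modification until it is applied atomically. A minimal sketch:

suspend fun main() {
    val counter = MutableStateFlow(0)

    coroutineScope {
        repeat(1000) {
            launch {
                // retried with compareAndSet, so no increment is lost
                counter.update { it + 1 }
            }
        }
    }

    println(counter.value) // 1000
}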

On Android, StateFlow is used as a modern alternative to LiveData.


First, it has full support for coroutines. Second, it has an initial value,
so it does not need to be nullable. So, StateFlow is often used on
ViewModels to represent its state. This state is observed, and a view
is displayed and updated on this basis.

class LatestNewsViewModel(
private val newsRepository: NewsRepository
) : ViewModel() {
private val _uiState =
MutableStateFlow<NewsState>(LoadingNews)
val uiState: StateFlow<NewsState> = _uiState

fun onCreate() {

viewModelScope.launch {
_uiState.value =
NewsLoaded(newsRepository.getNews())
}
}
}

stateIn

stateIn is a function that transforms Flow<T> into StateFlow<T>. It


can only be called with a scope, but it is a suspending function. Re-
member that StateFlow needs to always have a value; so, if you don’t
specify it, then you need to wait until the first value is calculated.

suspend fun main() = coroutineScope {


val flow = flowOf("A", "B", "C")
.onEach { delay(1000) }
.onEach { println("Produced $it") }
val stateFlow: StateFlow<String> = flow.stateIn(this)

println("Listening")
println(stateFlow.value)
stateFlow.collect { println("Received $it") }
}
// (1 sec)
// Produced A
// Listening
// A
// Received A
// (1 sec)
// Produced B
// Received B
// (1 sec)
// Produced C
// Received C

The second variant of stateIn is not suspending but it requires an


initial value and a started mode. This mode has the same options as
shareIn (as previously explained).

suspend fun main() = coroutineScope {


val flow = flowOf("A", "B")
.onEach { delay(1000) }
.onEach { println("Produced $it") }

val stateFlow: StateFlow<String> = flow.stateIn(


scope = this,
started = SharingStarted.Lazily,
initialValue = "Empty"
)

println(stateFlow.value)

delay(2000)
[Link] { println("Received $it") }
}
// Empty
// (2 sec)
// Received Empty
// (1 sec)
// Produced A
// Received A
// (1 sec)
// Produced B
// Received B

We typically use stateIn when we want to observe a value from one


source of changes. On the way, these changes can be processed, and
in the end they can be observed by our views.

class LocationsViewModel(
private val locationService: LocationService
) : ViewModel() {

private val location = locationService.observeLocations()


.map { it.toLocationsDisplay() }
.stateIn(
scope = viewModelScope,
started = SharingStarted.Lazily,

initialValue = emptyList(),
)

// ...
}

Summary

In this chapter, we’ve learned about SharedFlow and StateFlow, both


of which are especially important for Android developers as they are
commonly used as a part of the MVVM pattern. Remember them and
consider using them, especially if you use view models in Android
development.
Ending
All good things must come to an end, and the same is sadly true of
this book. Of course, there is still much more to say about Kotlin
Coroutines, but I believe we’ve covered the essentials well. Now it is
your turn to start using this knowledge in practice and deepen your
understanding.
As a short summary, I would like to present the most important use
cases and reflections on good style. Firstly, if we introduce corou-
tines into our code, it is best to use them from the bottom to the top
of your application. We should avoid mixing suspending functions
with blocking functions or other concurrency styles, like callbacks or
explicit thread starting. The easiest way to achieve this is by using
libraries that have built-in support for suspending functions.

// Retrofit
class GithubApi {
@GET("orgs/{organization}/repos?per_page=100")
suspend fun getOrganizationRepos(
@Path("organization") organization: String
): List<Repo>
}

// Room
@Dao
interface LocationDao {
@Insert(onConflict = OnConflictStrategy.IGNORE)
suspend fun insertLocation(location: Location)

@Query("DELETE FROM location_table")


suspend fun deleteLocations()

@Query("SELECT * FROM location_table ORDER BY time")


fun observeLocations(): Flow<List<Location>>
}


If you cannot avoid calling a blocking function, wrap it with


withContext and set a proper dispatcher⁵⁰ (either Dispatchers.IO
or a custom dispatcher that is built using Dispatchers.IO and
limitedParallelism).

class DiscUserRepository(
private val discReader: DiscReader
) : UserRepository {
private val dispatcher = Dispatchers.IO
.limitedParallelism(5)

override suspend fun getUser(): UserData =


withContext(dispatcher) {
[Link]<UserData>("userName")
}
}

If you need to use a callback function, wrap it with


suspendCancellableCoroutine. If possible, handle exceptions⁵¹
and cancellation⁵².

suspend fun requestNews(): News {


return suspendCancellableCoroutine<News> { cont ->
val call = requestNews(
onSuccess = { news -> cont.resume(news) },
onError = { e -> cont.resumeWithException(e) }
)
cont.invokeOnCancellation {
call.cancel()
}
}
}

If a process involves CPU-intensive operations, use
Dispatchers.Default; if it involves UI modifications that need
to happen on the main thread, use Dispatchers.Main⁵³.
⁵⁰For details, see the Dispatchers chapter.
⁵¹For details, see the How does suspension work? chapter.
⁵²For details, see the Cancellation chapter.
⁵³For details, see the Dispatchers chapter.

suspend fun calculateModel() =
    withContext(Dispatchers.Default) {
        model.fit(
            dataset = newTrain,
            epochs = 10,
            batchSize = 100,
            verbose = false
        )
    }

suspend fun setUserName(name: String) =
    withContext(Dispatchers.Main) {
        // update the view on the main thread
        userNameView.text = name
    }

Coroutines need to start somewhere. On backend frameworks, like
Spring or Ktor, there is built-in support for suspending functions, so
it is mainly the framework’s responsibility to start coroutines.

@Controller
class UserController(
    private val tokenService: TokenService,
    private val userService: UserService,
) {
    @GetMapping("/me")
    suspend fun findUser(
        @RequestHeader("Authorization") authorization: String
    ): UserJson {
        val userId = tokenService.readUserId(authorization)
        val user = userService.findUserById(userId)
        return user.toJson()
    }
}

However, Android and some other libraries and frameworks do not
offer such convenience. The solution is to make our own scope and
use it to start coroutines. On Android, we can use viewModelScope or
lifecycleScope⁵⁴.
⁵⁴For details, see the Constructing a coroutine scope chapter.

class UserProfileViewModel(
    private val loadProfileUseCase: LoadProfileUseCase,
    private val updateProfileUseCase: UpdateProfileUseCase,
) : ViewModel() {
    // replay = 1 lets new observers (and the update function below)
    // read the latest profile
    private val userProfile =
        MutableSharedFlow<UserProfileData>(replay = 1)

    val userName: Flow<String> = userProfile
        .map { it.name }
    val userSurname: Flow<String> = userProfile
        .map { it.surname }
    // ...

    fun onCreate() {
        viewModelScope.launch {
            val userProfileData = loadProfileUseCase.execute()
            userProfile.emit(userProfileData)
            // ...
        }
    }

    fun onNameChanged(newName: String) {
        viewModelScope.launch {
            val newProfile = userProfile.first().copy(name = newName)
            userProfile.emit(newProfile)
            updateProfileUseCase.execute(newProfile)
        }
    }
}

class UserProfileViewModel : ViewModel() {
    private val _userChanges =
        MutableSharedFlow<UserChange>()
    val userChanges: SharedFlow<UserChange> = _userChanges

    fun onCreate() {
        viewModelScope.launch {
            userChanges.collect(::applyUserChange)
        }
    }

    fun onNameChanged(newName: String) {
        // ...
        viewModelScope.launch {
            _userChanges.emit(NameChange(newName))
        }
    }

    fun onPublicKeyChanged(newPublicKey: String) {
        // ...
        viewModelScope.launch {
            _userChanges.emit(PublicKeyChange(newPublicKey))
        }
    }

    private fun applyUserChange(change: UserChange) {
        // ...
    }
}

In other cases, we need to create a custom scope. This is described in
detail in the Constructing a coroutine scope chapter, but here are a few
examples:

// On any platform
val analyticsScope = CoroutineScope(SupervisorJob())

// Android example with cancellation and exception handler
abstract class BaseViewModel : ViewModel() {
    private val _failure: MutableLiveData<Throwable> =
        MutableLiveData()
    val failure: LiveData<Throwable> = _failure

    private val exceptionHandler =
        CoroutineExceptionHandler { _, throwable ->
            _failure.value = throwable
        }

    private val context =
        Dispatchers.Main + SupervisorJob() + exceptionHandler

    protected val scope = CoroutineScope(context)

    override fun onCleared() {
        context.cancelChildren()
    }
}

// Spring example with a custom exception handler
@Configuration
class CoroutineScopeConfiguration {

    @Bean(name = "coroutineDispatcher")
    fun coroutineDispatcher(): CoroutineDispatcher =
        Dispatchers.IO.limitedParallelism(5)

    @Bean(name = "coroutineExceptionHandler")
    fun exceptionHandler(): CoroutineExceptionHandler =
        CoroutineExceptionHandler { _, throwable ->
            FirebaseCrashlytics.getInstance()
                .recordException(throwable)
        }

    @Bean
    fun coroutineScope(
        coroutineDispatcher: CoroutineDispatcher,
        exceptionHandler: CoroutineExceptionHandler,
    ) = CoroutineScope(
        SupervisorJob() +
            coroutineDispatcher +
            exceptionHandler
    )
}

Such scopes might be useful when we need to start new processes:
for example, when some task scheduler regularly starts a process
that sends notifications. In this case, we need a regular function
that starts coroutines. If we need this function to be blocked until
all coroutines are finished, use runBlocking, as presented in the
example below.

class NotificationsSender(
    private val client: NotificationsClient,
    private val exceptionCollector: ExceptionCollector,
) {
    private val handler = CoroutineExceptionHandler { _, e ->
        exceptionCollector.collectException(e)
    }
    private val job = SupervisorJob()
    private val scope = CoroutineScope(job + handler)

    fun sendNotifications(notifications: List<Notification>) {
        val jobs = notifications.map { notification ->
            scope.launch {
                client.send(notification)
            }
        }
        runBlocking { jobs.joinAll() }
    }

    fun cancel() {
        job.cancelChildren()
    }
}

Between the places where coroutines are started and the suspending
functions from repositories, we mainly have suspending functions
calling other suspending functions.

class NetworkUserRepository(
    private val api: UserApi,
) : UserRepository {
    override suspend fun getUser(): User =
        api.getUser().toDomainUser()
}

class NetworkNewsService(
    private val newsRepo: NewsRepository,
    private val settings: SettingsRepository,
) {
    suspend fun getNews(): List<News> = newsRepo
        .getNews()
        .map { it.toDomainNews() }

    suspend fun getNewsSummary(): List<News> {
        val type = settings.getNewsSummaryType()
        return newsRepo.getNewsSummary(type)
    }
}

When you need to introduce concurrency into suspending functions,
wrap their body with coroutineScope and use the async builder⁵⁵.

suspend fun getArticlesForUser(
    userToken: String?,
): List<ArticleJson> = coroutineScope {
    val articles = async { articleRepository.getArticles() }
    val user = userService.getUser(userToken)
    articles.await()
        .filter { canSeeOnList(user, it) }
        .map { toArticleJson(it) }
}

This can be scaled to a bigger number of async processes, but we can
also limit their concurrency by using Flow and flatMapMerge⁵⁶.
⁵⁵For details, see the Coroutine builders chapter.
⁵⁶For details, see the Flow processing chapter.

suspend fun getOffers(
    categories: List<Category>
): List<Offer> = coroutineScope {
    categories
        .map { async { api.requestOffers(it) } }
        .flatMap { it.await() }
}

// A better solution
suspend fun getOffers(
    categories: List<Category>
): Flow<List<Offer>> = categories
    .asFlow()
    .flatMapMerge(concurrency = 20) {
        suspend { api.requestOffers(it) }.asFlow()
        // or flow { emit(api.requestOffers(it)) }
    }

If you need exceptions from some coroutines to be ignored rather than
cancel the whole operation, use supervisorScope instead of
coroutineScope.

suspend fun notifyAnalytics(actions: List<UserAction>) =
    supervisorScope {
        actions.forEach { action ->
            launch {
                notifyAnalytics(action)
            }
        }
    }

If you use async, you also need to catch exceptions that are thrown
from await⁵⁷.

⁵⁷For details, see the Coroutine scope functions chapter.



class ArticlesRepositoryComposite(
    private val articleRepositories: List<ArticleRepository>,
) : ArticleRepository {
    override suspend fun fetchArticles(): List<Article> =
        supervisorScope {
            articleRepositories
                .map { async { it.fetchArticles() } }
                .mapNotNull {
                    try {
                        it.await()
                    } catch (e: Throwable) {
                        e.printStackTrace()
                        null
                    }
                }
                .flatten()
                .sortedByDescending { it.publishedAt }
        }
}

To set a timeout for a coroutine, use withTimeout or
withTimeoutOrNull⁵⁸.

suspend fun getUserOrNull(): User? =
    withTimeoutOrNull(5000) {
        fetchUser()
    }

For coroutine testing use cases, see the Testing Kotlin Coroutines
chapter. For use cases that involve safely accessing a shared state, see
The problem with shared state chapter.
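
As a quick reminder of what those chapters cover, here is a minimal
sketch that touches on both topics. The UserDownloader, UserApi, and
User types are hypothetical and simplified: a Mutex protects the shared
list of downloaded users, and runTest from kotlinx-coroutines-test
verifies the concurrent logic on a virtual clock.

import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock
import kotlinx.coroutines.test.currentTime
import kotlinx.coroutines.test.runTest
import kotlin.test.Test
import kotlin.test.assertEquals

data class User(val name: String)

interface UserApi {
    suspend fun fetchUser(id: Int): User
}

// Shared mutable state guarded by a Mutex
class UserDownloader(private val api: UserApi) {
    private val users = mutableListOf<User>()
    private val mutex = Mutex()

    suspend fun downloaded(): List<User> =
        mutex.withLock { users.toList() }

    suspend fun fetchUser(id: Int) {
        val newUser = api.fetchUser(id)
        mutex.withLock { users += newUser }
    }
}

class UserDownloaderTest {

    @Test
    fun `should download all users concurrently`() = runTest {
        val api = object : UserApi {
            override suspend fun fetchUser(id: Int): User {
                delay(1000) // virtual time, skipped by runTest
                return User("User$id")
            }
        }
        val downloader = UserDownloader(api)

        coroutineScope {
            repeat(100) { id ->
                launch { downloader.fetchUser(id) }
            }
        }

        assertEquals(100, downloader.downloaded().size)
        assertEquals(1000, currentTime) // all delays ran concurrently
    }
}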

⁵⁸For details, see the Coroutine scope functions chapter.
