
Processes and Threads

In concurrent programming, there are two basic units of execution: processes and threads. In the Java
programming language, concurrent programming is mostly concerned with threads. However, processes
are also important.

A computer system normally has many active processes and threads. This is true even in systems that
only have a single execution core, and thus only have one thread actually executing at any given moment.
Processing time for a single core is shared among processes and threads through an OS feature called
time slicing.

It's becoming more and more common for computer systems to have multiple processors or processors
with multiple execution cores. This greatly enhances a system's capacity for concurrent execution of
processes and threads — but concurrency is possible even on simple systems, without multiple
processors or execution cores.

Processes

A process has a self-contained execution environment. A process generally has a complete, private set of
basic run-time resources; in particular, each process has its own memory space.

Processes are often seen as synonymous with programs or applications. However, what the user sees as a
single application may in fact be a set of cooperating processes. To facilitate communication between
processes, most operating systems support Inter Process Communication (IPC) resources, such as pipes
and sockets. IPC is used not just for communication between processes on the same system, but also for processes on different systems.

Most implementations of the Java virtual machine run as a single process. A Java application can create
additional processes using a ProcessBuilder object. Multiprocess applications are beyond the scope of
this lesson.
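
As a brief illustration (not part of the original lesson), the following sketch uses ProcessBuilder to launch an external command and wait for it to finish; the command shown ("ls" with the "-l" flag) is just an assumption for the example:

import java.io.IOException;

public class LaunchProcess {
    public static void main(String[] args)
            throws IOException, InterruptedException {
        // Describe the external command to run.
        // "ls -l" is only an example command.
        ProcessBuilder builder = new ProcessBuilder("ls", "-l");

        // Send the child process's output to this process's output.
        builder.inheritIO();

        // Start the child process and wait for it to terminate.
        Process process = builder.start();
        int exitCode = process.waitFor();
        System.out.println("Child exited with code " + exitCode);
    }
}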

Threads

Threads are sometimes called lightweight processes. Both processes and threads provide an execution
environment, but creating a new thread requires fewer resources than creating a new process.

Threads exist within a process — every process has at least one. Threads share the process's resources,
including memory and open files. This makes for efficient, but potentially problematic, communication.

Multithreaded execution is an essential feature of the Java platform. Every application has at least one
thread — or several, if you count "system" threads that do things like memory management and signal
handling. But from the application programmer's point of view, you start with just one thread, called
the main thread. This thread has the ability to create additional threads, as we'll demonstrate in the next
section.

Thread Objects
Each thread is associated with an instance of the class Thread. There are two basic strategies for
using Thread objects to create a concurrent application.
 To directly control thread creation and management, simply instantiate Thread each time the
application needs to initiate an asynchronous task.
 To abstract thread management from the rest of your application, pass the application's tasks to an executor (a brief sketch follows this list).
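
As a rough sketch of the second strategy, a task is handed to an ExecutorService from java.util.concurrent instead of to a Thread created by hand; executors belong to the high-level APIs covered later, and the pool size used here (2) is an arbitrary assumption:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExecutorSketch {
    public static void main(String[] args) {
        // A small fixed-size thread pool; the size is arbitrary.
        ExecutorService executor = Executors.newFixedThreadPool(2);

        // The task itself is an ordinary Runnable.
        Runnable task = new Runnable() {
            public void run() {
                System.out.println("Hello from " +
                    Thread.currentThread().getName());
            }
        };

        // Hand the task to the executor rather than
        // instantiating and starting a Thread directly.
        executor.submit(task);

        // Let the application exit once submitted tasks complete.
        executor.shutdown();
    }
}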

Defining and Starting a Thread


An application that creates an instance of Thread must provide the code that will run in that thread. There
are two ways to do this:

 Provide a Runnable object. The Runnable interface defines a single method, run, meant to contain the code executed in the thread. The Runnable object is passed to the Thread constructor, as in the HelloRunnable example:

public class HelloRunnable implements Runnable {

    public void run() {
        System.out.println("Hello from a thread!");
    }

    public static void main(String args[]) {
        (new Thread(new HelloRunnable())).start();
    }

}

 Subclass Thread. The Thread class itself implements Runnable, though its run method does nothing. An application can subclass Thread, providing its own implementation of run, as in the HelloThread example:

public class HelloThread extends Thread {

    public void run() {
        System.out.println("Hello from a thread!");
    }

    public static void main(String args[]) {
        (new HelloThread()).start();
    }

}

Notice that both examples invoke Thread.start in order to start the new thread.

Which of these idioms should you use? The first idiom, which employs a Runnable object, is more
general, because the Runnable object can subclass a class other than Thread. The second idiom is easier to
use in simple applications, but is limited by the fact that your task class must be a descendant of Thread.
This lesson focuses on the first approach, which separates the Runnable task from the Thread object that
executes the task. Not only is this approach more flexible, but it is applicable to the high-level thread
management APIs covered later.
The Thread class defines a number of methods useful for thread management. These
include static methods, which provide information about, or affect the status of, the thread invoking the
method. The other methods are invoked from other threads involved in managing the thread
and Thread object. We'll examine some of these methods in the following sections.
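
As a small orienting sketch (the worker thread and its task are assumptions), the static methods below act on the thread that calls them, while start, join, and interrupt are instance methods invoked on a Thread object by the thread that manages it:

public class ThreadMethods {
    public static void main(String[] args)
            throws InterruptedException {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                // Static methods act on the current (worker) thread.
                System.out.println("Running in " +
                    Thread.currentThread().getName());
                try {
                    Thread.sleep(100);   // pause the worker thread
                } catch (InterruptedException e) {
                    return;              // interrupted: stop early
                }
            }
        });

        // Instance methods are invoked on the Thread object,
        // here from the main thread that manages the worker.
        worker.start();      // begin execution
        worker.join();       // wait for the worker to finish
    }
}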

Pausing Execution with Sleep


Thread.sleep causes the current thread to suspend execution for a specified period. This is an efficient
means of making processor time available to the other threads of an application or other applications that
might be running on a computer system. The sleep method can also be used for pacing, as shown in the
example that follows, and waiting for another thread with duties that are understood to have time
requirements, as with the SimpleThreads example in a later section.

Two overloaded versions of sleep are provided: one that specifies the sleep time to the millisecond and
one that specifies the sleep time to the nanosecond. However, these sleep times are not guaranteed to be
precise, because they are limited by the facilities provided by the underlying OS. Also, the sleep period
can be terminated by interrupts, as we'll see in a later section. In any case, you cannot assume that
invoking sleep will suspend the thread for precisely the time period specified.

The SleepMessages example uses sleep to print messages at four-second intervals:

public class SleepMessages {


public static void main(String args[])
throws InterruptedException {
String importantInfo[] = {
"Mares eat oats",
"Does eat oats",
"Little lambs eat ivy",
"A kid will eat ivy too"
};

for (int i = 0;
i < importantInfo.length;
i++) {
//Pause for 4 seconds
Thread.sleep(4000);
//Print a message
System.out.println(importantInfo[i]);
}
}
}

Notice that main declares that it throws InterruptedException. This is an exception that sleep throws when another thread interrupts the current thread while sleep is active. Since this application has not defined another thread to cause the interrupt, it doesn't bother to catch InterruptedException.

Interrupts
An interrupt is an indication to a thread that it should stop what it is doing and do something else. It's up
to the programmer to decide exactly how a thread responds to an interrupt, but it is very common for the
thread to terminate. This is the usage emphasized in this lesson.
A thread sends an interrupt by invoking interrupt on the Thread object for the thread to be interrupted.
For the interrupt mechanism to work correctly, the interrupted thread must support its own interruption.

Supporting Interruption

How does a thread support its own interruption? This depends on what it's currently doing. If the thread
is frequently invoking methods that throw InterruptedException, it simply returns from the run method
after it catches that exception. For example, suppose the central message loop in
the SleepMessages example were in the run method of a thread's Runnable object. Then it might be
modified as follows to support interrupts:
for (int i = 0; i < importantInfo.length; i++) {
    // Pause for 4 seconds
    try {
        Thread.sleep(4000);
    } catch (InterruptedException e) {
        // We've been interrupted: no more messages.
        return;
    }
    // Print a message
    System.out.println(importantInfo[i]);
}

Many methods that throw InterruptedException, such as sleep, are designed to cancel their current
operation and return immediately when an interrupt is received.

What if a thread goes a long time without invoking a method that throws InterruptedException? Then it must periodically invoke Thread.interrupted, which returns true if an interrupt has been received. For example:
for (int i = 0; i < inputs.length; i++) {
    heavyCrunch(inputs[i]);
    if (Thread.interrupted()) {
        // We've been interrupted: no more crunching.
        return;
    }
}

In this simple example, the code simply tests for the interrupt and exits the thread if one has been
received. In more complex applications, it might make more sense to throw an InterruptedException:
if (Thread.interrupted()) {
    throw new InterruptedException();
}

This allows interrupt handling code to be centralized in a catch clause.

The Interrupt Status Flag

The interrupt mechanism is implemented using an internal flag known as the interrupt status.
Invoking Thread.interrupt sets this flag. When a thread checks for an interrupt by invoking the static
method Thread.interrupted, interrupt status is cleared. The non-static isInterrupted method, which is
used by one thread to query the interrupt status of another, does not change the interrupt status flag.
By convention, any method that exits by throwing an InterruptedException clears interrupt status when
it does so. However, it's always possible that interrupt status will immediately be set again, by another
thread invoking interrupt.
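
A minimal sketch of the difference between the two query methods; for illustration only, the main thread interrupts itself:

public class InterruptStatus {
    public static void main(String[] args) {
        Thread current = Thread.currentThread();

        // Set the interrupt status of the current thread.
        current.interrupt();

        // isInterrupted reports the flag without clearing it.
        System.out.println(current.isInterrupted());   // true

        // Thread.interrupted reports the flag and clears it.
        System.out.println(Thread.interrupted());      // true
        System.out.println(Thread.interrupted());      // false
    }
}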

Joins
The join method allows one thread to wait for the completion of another. If t is a Thread object whose
thread is currently executing,
t.join();

causes the current thread to pause execution until t's thread terminates. Overloads of join allow the
programmer to specify a waiting period. However, as with sleep, join is dependent on the OS for timing,
so you should not assume that join will wait exactly as long as you specify.

Like sleep, join responds to an interrupt by exiting with an InterruptedException.

The SimpleThreads Example


The following example brings together some of the concepts of this section. SimpleThreads consists of
two threads. The first is the main thread that every Java application has. The main thread creates a new
thread from the Runnable object, MessageLoop, and waits for it to finish. If the MessageLoop thread takes
too long to finish, the main thread interrupts it.

The MessageLoop thread prints out a series of messages. If interrupted before it has printed all its
messages, the MessageLoop thread prints a message and exits.

public class SimpleThreads {

    // Display a message, preceded by
    // the name of the current thread
    static void threadMessage(String message) {
        String threadName =
            Thread.currentThread().getName();
        System.out.format("%s: %s%n",
                          threadName,
                          message);
    }

    private static class MessageLoop
        implements Runnable {
        public void run() {
            String importantInfo[] = {
                "Mares eat oats",
                "Does eat oats",
                "Little lambs eat ivy",
                "A kid will eat ivy too"
            };
            try {
                for (int i = 0;
                     i < importantInfo.length;
                     i++) {
                    // Pause for 4 seconds
                    Thread.sleep(4000);
                    // Print a message
                    threadMessage(importantInfo[i]);
                }
            } catch (InterruptedException e) {
                threadMessage("I wasn't done!");
            }
        }
    }

    public static void main(String args[])
        throws InterruptedException {

        // Delay, in milliseconds before
        // we interrupt MessageLoop
        // thread (default one hour).
        long patience = 1000 * 60 * 60;

        // If command line argument
        // present, gives patience
        // in seconds.
        if (args.length > 0) {
            try {
                patience = Long.parseLong(args[0]) * 1000;
            } catch (NumberFormatException e) {
                System.err.println("Argument must be an integer.");
                System.exit(1);
            }
        }

        threadMessage("Starting MessageLoop thread");
        long startTime = System.currentTimeMillis();
        Thread t = new Thread(new MessageLoop());
        t.start();

        threadMessage("Waiting for MessageLoop thread to finish");
        // loop until MessageLoop
        // thread exits
        while (t.isAlive()) {
            threadMessage("Still waiting...");
            // Wait maximum of 1 second
            // for MessageLoop thread
            // to finish.
            t.join(1000);
            if (((System.currentTimeMillis() - startTime) > patience)
                  && t.isAlive()) {
                threadMessage("Tired of waiting!");
                t.interrupt();
                // Shouldn't be long now
                // -- wait indefinitely
                t.join();
            }
        }
        threadMessage("Finally!");
    }
}

Synchronization
Threads communicate primarily by sharing access to fields and the objects that reference fields refer to. This
form of communication is extremely efficient, but makes two kinds of errors possible: thread
interference and memory consistency errors. The tool needed to prevent these errors is synchronization.

However, synchronization can introduce thread contention, which occurs when two or more threads try
to access the same resource simultaneously and cause the Java runtime to execute one or more threads
more slowly, or even suspend their execution. Starvation and livelock are forms of thread contention. See
the section Liveness for more information.

This section covers the following topics:

 Thread Interference describes how errors are introduced when multiple threads access shared
data.
 Memory Consistency Errors describes errors that result from inconsistent views of shared
memory.
 Synchronized Methods describes a simple idiom that can effectively prevent thread interference
and memory consistency errors.
 Intrinsic Locks and Synchronization describes a more general synchronization idiom, and describes how synchronization is based on intrinsic locks.
 Atomic Access talks about the general idea of operations that can't be interfered with by other
threads.

Thread Interference
Consider a simple class called Counter:

class Counter {
    private int c = 0;

    public void increment() {
        c++;
    }

    public void decrement() {
        c--;
    }

    public int value() {
        return c;
    }
}

Counter is designed so that each invocation of increment will add 1 to c, and each invocation
of decrement will subtract 1 from c. However, if a Counter object is referenced from multiple threads,
interference between threads may prevent this from happening as expected.

Interference happens when two operations, running in different threads, but acting on the same
data, interleave. This means that the two operations consist of multiple steps, and the sequences of steps
overlap.

It might not seem possible for operations on instances of Counter to interleave, since both operations
on c are single, simple statements. However, even simple statements can translate to multiple steps by the
virtual machine. We won't examine the specific steps the virtual machine takes — it is enough to know
that the single expression c++ can be decomposed into three steps:

1. Retrieve the current value of c.
2. Increment the retrieved value by 1.
3. Store the incremented value back in c.

The expression c-- can be decomposed the same way, except that the second step decrements instead of
increments.

Suppose Thread A invokes increment at about the same time Thread B invokes decrement. If the initial
value of c is 0, their interleaved actions might follow this sequence:

1. Thread A: Retrieve c.
2. Thread B: Retrieve c.
3. Thread A: Increment retrieved value; result is 1.
4. Thread B: Decrement retrieved value; result is -1.
5. Thread A: Store result in c; c is now 1.
6. Thread B: Store result in c; c is now -1.

Thread A's result is lost, overwritten by Thread B. This particular interleaving is only one possibility.
Under different circumstances it might be Thread B's result that gets lost, or there could be no error at all.
Because they are unpredictable, thread interference bugs can be difficult to detect and fix.
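
The effect is easy to reproduce with a small driver (a hypothetical addition, not part of the original example): two threads apply increment and decrement many times each, and with the unsynchronized Counter above the final value is frequently not the expected 0.

public class InterferenceDemo {
    public static void main(String[] args)
            throws InterruptedException {
        final Counter counter = new Counter();

        Thread a = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 10000; i++) {
                    counter.increment();
                }
            }
        });
        Thread b = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 10000; i++) {
                    counter.decrement();
                }
            }
        });

        a.start();
        b.start();
        a.join();
        b.join();

        // Expected 0, but interference often produces another value.
        System.out.println("Final value: " + counter.value());
    }
}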

Memory Consistency Errors


Memory consistency errors occur when different threads have inconsistent views of what should be the
same data. The causes of memory consistency errors are complex and beyond the scope of this tutorial.
Fortunately, the programmer does not need a detailed understanding of these causes. All that is needed is
a strategy for avoiding them.

The key to avoiding memory consistency errors is understanding the happens-before relationship. This relationship is simply a guarantee that memory writes by one specific statement are visible to another specific statement. To see this, consider the following example. Suppose a simple int field is defined and initialized:
int counter = 0;

The counter field is shared between two threads, A and B. Suppose thread A increments counter:
counter++;

Then, shortly afterwards, thread B prints out counter:
System.out.println(counter);

If the two statements had been executed in the same thread, it would be safe to assume that the value
printed out would be "1". But if the two statements are executed in separate threads, the value printed out
might well be "0", because there's no guarantee that thread A's change to counter will be visible to thread
B — unless the programmer has established a happens-before relationship between these two statements.

There are several actions that create happens-before relationships. One of them is synchronization, as we
will see in the following sections.
We've already seen two actions that create happens-before relationships.

 When a statement invokes Thread.start, every statement that has a happens-before relationship with that statement also has a happens-before relationship with every statement executed by the new thread. The effects of the code that led up to the creation of the new thread are visible to the new thread.
 When a thread terminates and causes a Thread.join in another thread to return, then all the
statements executed by the terminated thread have a happens-before relationship with all the
statements following the successful join. The effects of the code in the thread are now visible to
the thread that performed the join.

For a list of actions that create happens-before relationships, refer to the Summary page of the java.util.concurrent package.
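
As an illustration of the Thread.start and Thread.join guarantees listed above, a minimal sketch (the class and field names are assumptions):

public class HappensBefore {
    // An ordinary, non-volatile field shared with the new thread.
    static int counter = 0;

    public static void main(String[] args)
            throws InterruptedException {
        counter = 1;   // happens-before the start() call below

        Thread t = new Thread(new Runnable() {
            public void run() {
                // Guaranteed to see 1, because of Thread.start.
                System.out.println("In thread: " + counter);
                counter = 2;
            }
        });

        t.start();
        t.join();      // happens-before the read below

        // Guaranteed to see 2, because of Thread.join.
        System.out.println("After join: " + counter);
    }
}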

Synchronized Methods
The Java programming language provides two basic synchronization idioms: synchronized
methods and synchronized statements. The more complex of the two, synchronized statements, are
described in the next section. This section is about synchronized methods.

To make a method synchronized, simply add the synchronized keyword to its declaration:
public class SynchronizedCounter {
    private int c = 0;

    public synchronized void increment() {
        c++;
    }

    public synchronized void decrement() {
        c--;
    }

    public synchronized int value() {
        return c;
    }
}

If count is an instance of SynchronizedCounter, then making these methods synchronized has two effects:

 First, it is not possible for two invocations of synchronized methods on the same object to
interleave. When one thread is executing a synchronized method for an object, all other threads
that invoke synchronized methods for the same object block (suspend execution) until the first
thread is done with the object.
 Second, when a synchronized method exits, it automatically establishes a happens-before
relationship with any subsequent invocation of a synchronized method for the same object. This
guarantees that changes to the state of the object are visible to all threads.

Note that constructors cannot be synchronized — using the synchronized keyword with a constructor is a syntax error. Synchronizing constructors doesn't make sense, because only the thread that creates an object should have access to it while it is being constructed.

Warning: When constructing an object that will be shared between threads, be very careful that a reference to the object does not "leak" prematurely. For example, suppose you want to maintain a List called instances containing every instance of a class. You might be tempted to add the following line to your constructor:
instances.add(this);
But then other threads can use instances to access the object before construction of the object is complete.

Synchronized methods enable a simple strategy for preventing thread interference and memory consistency errors: if an object is visible to more than one thread, all reads or writes to that object's variables are done through synchronized methods. (An important exception: final fields, which cannot be modified after the object is constructed, can be safely read through non-synchronized methods, once the object is constructed.) This strategy is effective, but can present problems with liveness, as we'll see later in this lesson.

Intrinsic Locks and Synchronization


Synchronization is built around an internal entity known as the intrinsic lock or monitor lock. (The API
specification often refers to this entity simply as a "monitor.") Intrinsic locks play a role in both aspects
of synchronization: enforcing exclusive access to an object's state and establishing happens-before
relationships that are essential to visibility.

Every object has an intrinsic lock associated with it. By convention, a thread that needs exclusive and
consistent access to an object's fields has to acquire the object's intrinsic lock before accessing them, and
then release the intrinsic lock when it's done with them. A thread is said to own the intrinsic lock
between the time it has acquired the lock and released the lock. As long as a thread owns an intrinsic
lock, no other thread can acquire the same lock. The other thread will block when it attempts to acquire
the lock.

When a thread releases an intrinsic lock, a happens-before relationship is established between that action and any subsequent acquisition of the same lock.

Locks In Synchronized Methods

When a thread invokes a synchronized method, it automatically acquires the intrinsic lock for that
method's object and releases it when the method returns. The lock release occurs even if the return was
caused by an uncaught exception.

You might wonder what happens when a static synchronized method is invoked, since a static method is
associated with a class, not an object. In this case, the thread acquires the intrinsic lock for
the Class object associated with the class. Thus access to the class's static fields is controlled by a lock that's distinct from the lock for any instance of the class.
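
A brief sketch of a static synchronized method (the class and field names are assumptions); the lock acquired is the one belonging to SomeClass.class, not to any instance:

public class SomeClass {
    private static int counter = 0;

    // Acquires the intrinsic lock of SomeClass.class,
    // not the lock of any particular SomeClass object.
    public static synchronized void increment() {
        counter++;
    }

    public static synchronized int value() {
        return counter;
    }
}
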
Synchronized Statements

Another way to create synchronized code is with synchronized statements. Unlike synchronized methods,
synchronized statements must specify the object that provides the intrinsic lock:
public void addName(String name) {
    synchronized(this) {
        lastName = name;
        nameCount++;
    }
    nameList.add(name);
}

In this example, the addName method needs to synchronize changes to lastName and nameCount, but also needs to avoid synchronizing invocations of other objects' methods. (Invoking other objects' methods from synchronized code can create problems that are described in the section on Liveness.) Without synchronized statements, there would have to be a separate, unsynchronized method for the sole purpose of invoking nameList.add.

Synchronized statements are also useful for improving concurrency with fine-grained synchronization.
Suppose, for example, class MsLunch has two instance fields, c1 and c2, that are never used together. All
updates of these fields must be synchronized, but there's no reason to prevent an update of c1 from being
interleaved with an update of c2 — and doing so reduces concurrency by creating unnecessary blocking.
Instead of using synchronized methods or otherwise using the lock associated with this, we create two
objects solely to provide locks.
public class MsLunch {
    private long c1 = 0;
    private long c2 = 0;
    private Object lock1 = new Object();
    private Object lock2 = new Object();

    public void inc1() {
        synchronized(lock1) {
            c1++;
        }
    }

    public void inc2() {
        synchronized(lock2) {
            c2++;
        }
    }
}

Use this idiom with extreme care. You must be absolutely sure that it really is safe to interleave access of
the affected fields.

Reentrant Synchronization

Recall that a thread cannot acquire a lock owned by another thread. But a thread can acquire a lock that it
already owns. Allowing a thread to acquire the same lock more than once enables reentrant
synchronization. This describes a situation where synchronized code, directly or indirectly, invokes a
method that also contains synchronized code, and both sets of code use the same lock. Without reentrant
synchronization, synchronized code would have to take many additional precautions to avoid having a
thread cause itself to block.
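
A minimal sketch of reentrant synchronization (the class and method names are assumptions): addAll holds the object's intrinsic lock when it calls add, and the thread is allowed to reacquire that same lock without blocking itself.

public class ReentrantExample {
    private int total = 0;

    public synchronized void add(int value) {
        // When called from addAll, this reacquires a lock
        // the current thread already owns.
        total += value;
    }

    public synchronized void addAll(int[] values) {
        for (int value : values) {
            add(value);   // nested acquisition of the same lock
        }
    }

    public synchronized int total() {
        return total;
    }
}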

Atomic Access
In programming, an atomic action is one that effectively happens all at once. An atomic action cannot
stop in the middle: it either happens completely, or it doesn't happen at all. No side effects of an atomic
action are visible until the action is complete.

We have already seen that an increment expression, such as c++, does not describe an atomic action. Even
very simple expressions can define complex actions that can decompose into other actions. However,
there are actions you can specify that are atomic:

 Reads and writes are atomic for reference variables and for most primitive variables (all types
except long and double).
 Reads and writes are atomic for all variables
declared volatile (including long and double variables).

Atomic actions cannot be interleaved, so they can be used without fear of thread interference. However,
this does not eliminate all need to synchronize atomic actions, because memory consistency errors are
still possible. Using volatile variables reduces the risk of memory consistency errors, because any write
to a volatile variable establishes a happens-before relationship with subsequent reads of that same
variable. This means that changes to a volatile variable are always visible to other threads. What's more,
it also means that when a thread reads a volatile variable, it sees not just the latest change to
the volatile, but also the side effects of the code that led up to the change.
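
A common use of volatile that relies on exactly this visibility guarantee is a shutdown flag; in the sketch below (the names are assumptions), one thread can ask a worker to stop and the worker is guaranteed to see the change:

public class VolatileFlag implements Runnable {
    // Writes to this flag by one thread are visible to
    // any thread that subsequently reads it.
    private volatile boolean running = true;

    public void stop() {
        running = false;
    }

    public void run() {
        while (running) {
            // do one unit of work, then re-check the flag
        }
        System.out.println("Worker stopped cleanly.");
    }
}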

Using simple atomic variable access is more efficient than accessing these variables through
synchronized code, but requires more care by the programmer to avoid memory consistency errors.
Whether the extra effort is worthwhile depends on the size and complexity of the application.

Some of the classes in the java.util.concurrent package provide atomic methods that do not rely on
synchronization. We'll discuss them in the section on High Level Concurrency Objects.
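
As a brief preview (covered properly in that later section), the Counter class from earlier could be rewritten around java.util.concurrent.atomic.AtomicInteger, whose read-modify-write operations are atomic without explicit synchronization:

import java.util.concurrent.atomic.AtomicInteger;

class AtomicCounter {
    private final AtomicInteger c = new AtomicInteger(0);

    public void increment() {
        c.incrementAndGet();   // atomic read-modify-write
    }

    public void decrement() {
        c.decrementAndGet();
    }

    public int value() {
        return c.get();
    }
}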

Liveness
A concurrent application's ability to execute in a timely manner is known as its liveness. This section
describes the most common kind of liveness problem, deadlock, and goes on to briefly describe two other
liveness problems, starvation and livelock.

Deadlock
Deadlock describes a situation where two or more threads are blocked forever, waiting for each other.
Here's an example.

Alphonse and Gaston are friends, and great believers in courtesy. A strict rule of courtesy is that when
you bow to a friend, you must remain bowed until your friend has a chance to return the bow.
Unfortunately, this rule does not account for the possibility that two friends might bow to each other at
the same time. This example application, Deadlock, models this possibility:
public class Deadlock {
    static class Friend {
        private final String name;
        public Friend(String name) {
            this.name = name;
        }
        public String getName() {
            return this.name;
        }
        public synchronized void bow(Friend bower) {
            System.out.format("%s: %s"
                + " has bowed to me!%n",
                this.name, bower.getName());
            bower.bowBack(this);
        }
        public synchronized void bowBack(Friend bower) {
            System.out.format("%s: %s"
                + " has bowed back to me!%n",
                this.name, bower.getName());
        }
    }

    public static void main(String[] args) {
        final Friend alphonse =
            new Friend("Alphonse");
        final Friend gaston =
            new Friend("Gaston");
        new Thread(new Runnable() {
            public void run() { alphonse.bow(gaston); }
        }).start();
        new Thread(new Runnable() {
            public void run() { gaston.bow(alphonse); }
        }).start();
    }
}

When Deadlock runs, it's extremely likely that both threads will block when they attempt to
invoke bowBack. Neither block will ever end, because each thread is waiting for the other to exit bow.

Starvation and Livelock


Starvation and livelock are much less common problems than deadlock, but they are still problems that every designer of concurrent software is likely to encounter.

Starvation

Starvation describes a situation where a thread is unable to gain regular access to shared resources and is
unable to make progress. This happens when shared resources are made unavailable for long periods by
"greedy" threads. For example, suppose an object provides a synchronized method that often takes a long
time to return. If one thread invokes this method frequently, other threads that also need frequent
synchronized access to the same object will often be blocked.

Livelock

A thread often acts in response to the action of another thread. If the other thread's action is also a
response to the action of another thread, then livelock may result. As with deadlock, livelocked threads
are unable to make further progress. However, the threads are not blocked — they are simply too busy
responding to each other to resume work. This is comparable to two people attempting to pass each other
in a corridor: Alphonse moves to his left to let Gaston pass, while Gaston moves to his right to let
Alphonse pass. Seeing that they are still blocking each other, Alphonse moves to his right, while Gaston
moves to his left. They're still blocking each other, so...
