Thread Safe LIFO Data Structure Implementations

1. Introduction

In this tutorial, we’ll discuss various options for thread-safe LIFO data structure implementations.

In the LIFO data structure, elements are inserted and retrieved according to the Last-In-First-Out principle. This means the last inserted element is retrieved first.

In computer science, stack is the term used to refer to such a data structure.

A stack is handy for dealing with interesting problems like expression evaluation, implementing undo operations, and so on. Since it can be used in concurrent execution environments, we might need to make it thread-safe.

2. Understanding Stacks

Basically, a Stack must implement the following methods:

  1. push() – add an element at the top
  2. pop() – fetch and remove the top element
  3. peek() – fetch the top element without removing it from the underlying container

As mentioned in the introduction, let’s assume we want to build a command processing engine.

In this system, undoing executed commands is an important feature.

In general, all executed commands are pushed onto the stack, and then an undo operation can simply be implemented in two steps (see the sketch after this list):

  • call the pop() method to get the last executed command
  • call the undo() method on the popped command object
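
Here’s a minimal sketch of that idea, assuming a simple Command interface of our own (it’s not a JDK type) and ignoring thread safety for now:

import java.util.ArrayDeque;
import java.util.Deque;

interface Command {
    void execute();
    void undo();
}

public class CommandProcessor {

    // History of executed commands; the most recent one sits on top.
    private final Deque<Command> history = new ArrayDeque<>();

    public void process(Command command) {
        command.execute();
        history.push(command);
    }

    public void undoLast() {
        if (!history.isEmpty()) {
            // Fetch and remove the last executed command, then undo it.
            history.pop().undo();
        }
    }
}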

3. Understanding Thread Safety in Stacks

If a data structure is not thread-safe, when accessed concurrently, it might end up having race conditions.

Race conditions, in a nutshell, occur when the correct execution of code depends on the timing and sequence of threads. This happens mainly when more than one thread shares a data structure that isn’t designed for concurrent access.

Let’s examine the method below from the Java collections class ArrayDeque:

public E pollFirst() {
    int h = head;
    E result = (E) elements[h]; // read the element at the index 'head'
    // ... other book-keeping operations removed, for simplicity
    head = (h + 1) & (elements.length - 1); // advance 'head' to the next index
    return result;
}

To explain the potential race condition in the above code, let’s assume two threads execute it in the following sequence:

  • The first thread executes the third line: it sets the result object to the element at the index ‘head’
  • The second thread executes the third line: it also sets the result object to the element at the index ‘head’
  • The first thread executes the fifth line: it resets the index ‘head’ to the next element in the backing array
  • The second thread executes the fifth line: it resets the index ‘head’ to the next element in the backing array

Oops! Now both executions would return the same result object.

To avoid such race conditions, in this case, a thread shouldn’t read the element at the index ‘head’ (the third line) until the other thread has finished resetting the ‘head’ index at the fifth line. In other words, accessing the element at the index ‘head’ and resetting the index ‘head’ should happen atomically for a thread.

Clearly, in this case, correct execution of code depends on the timing of threads and hence it’s not thread-safe.
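
For illustration only, one way to get that atomicity is to guard both steps with a single lock. The sketch below simply adds synchronized to the earlier snippet; it is not how the JDK’s ArrayDeque works, and it only helps if every other method touching head and elements is synchronized in the same way:

public synchronized E pollFirst() {
    // With the intrinsic lock held, no other thread can interleave between
    // reading the element and advancing 'head'.
    int h = head;
    E result = (E) elements[h];
    head = (h + 1) & (elements.length - 1);
    return result;
}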

4. Thread-safe Stacks Using Locks

In this section, we’ll discuss two possible options for concrete implementations of a thread-safe stack. 

In particular, we’ll cover the Java Stack and a thread-safe decorated ArrayDeque. 

Both use locks for mutually exclusive access.

4.1. Using the Java Stack

The Java Collections Framework has a legacy thread-safe Stack implementation, based on Vector, which is basically a synchronized variant of ArrayList.

However, the official documentation itself suggests considering ArrayDeque instead, so we won’t go into too much detail here.

Although the Java Stack is thread-safe and straightforward to use, there are major disadvantages with this class:

  • It doesn’t have support for setting the initial capacity
  • It uses locks for all the operations, which might hurt performance for single-threaded executions
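
Despite these drawbacks, basic usage of java.util.Stack is straightforward:

Stack<String> commands = new Stack<>();
commands.push("command-1");
commands.push("command-2");
commands.pop();  // returns "command-2"
commands.peek(); // returns "command-1"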

4.2. Using ArrayDeque

Using the Deque interface is the most convenient approach for LIFO data structures as it provides all the needed stack operations. ArrayDeque is one such concrete implementation.  

Since it doesn’t use locks for its operations, single-threaded executions work just fine. But for multi-threaded executions, this is problematic.

However, we can implement a synchronization decorator for ArrayDeque. Although this performs similarly to the Java Collections Framework’s Stack class, it solves Stack’s important shortcoming: the lack of initial capacity setting.

Let’s have a look at this class:

import java.util.ArrayDeque;

public class DequeBasedSynchronizedStack<T> {

    // Internal Deque which gets decorated for synchronization.
    private ArrayDeque<T> dequeStore;

    public DequeBasedSynchronizedStack(int initialCapacity) {
        this.dequeStore = new ArrayDeque<>(initialCapacity);
    }

    public DequeBasedSynchronizedStack() {
        dequeStore = new ArrayDeque<>();
    }

    public synchronized T pop() {
        return this.dequeStore.pop();
    }

    public synchronized void push(T element) {
        this.dequeStore.push(element);
    }

    public synchronized T peek() {
        return this.dequeStore.peek();
    }

    public synchronized int size() {
        return this.dequeStore.size();
    }
}

Note that, for simplicity, our solution does not implement Deque itself, as that would require implementing many more methods.
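
Using the decorator defined above is no different from using any other stack; for example:

DequeBasedSynchronizedStack<Integer> stack = new DequeBasedSynchronizedStack<>(4);
stack.push(5);
stack.push(10);
stack.pop();  // returns 10
stack.peek(); // returns 5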

Also, Guava contains SynchronizedDeque, which is a production-ready implementation of a decorated ArrayDeque.

5. Lock-free Thread-safe Stacks

ConcurrentLinkedDeque is a lock-free implementation of the Deque interface. It’s completely thread-safe, as it uses an efficient lock-free algorithm.

Unlike lock-based ones, lock-free implementations are immune to the following issues:

  • Priority inversion – occurs when a low-priority thread holds a lock needed by a high-priority thread, which might cause the high-priority thread to block
  • Deadlocks – occur when different threads lock the same set of resources in a different order

On top of that, lock-free implementations have some characteristics that make them well suited for use in both single- and multi-threaded environments:

  • For unshared data structures and single-threaded access, performance is on par with ArrayDeque
  • For shared data structures, performance varies according to the number of threads accessing them simultaneously

And in terms of usability, it’s no different from ArrayDeque, as both implement the Deque interface.
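
For example, here’s a quick (illustrative) snippet using ConcurrentLinkedDeque as a stack:

Deque<String> stack = new ConcurrentLinkedDeque<>();
stack.push("first");
stack.push("second");
stack.pop();  // returns "second"
stack.peek(); // returns "first"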

6. Conclusion

In this article, we’ve discussed the stack data structure and its benefits in designing systems like command processing engines and expression evaluators.

Also, we have analyzed various stack implementations in the Java collections framework and discussed their performance and thread-safety nuances.

As usual, code examples can be found over on GitHub.


Viewing all articles
Browse latest Browse all 3522

Trending Articles



<script src="https://jsc.adskeeper.com/r/s/rssing.com.1596347.js" async> </script>