
22 JVM Thread, Heap, and Other Data Analysis: After Observing a Thousand Swords, One Knows the Blade #

Introduction to Java Threads with Examples #

The use and tuning of multithreading is an important part of Java application performance, and it is the main topic of this section.

Threads are important kernel-level resources and cannot be created without limit: creating a thread is expensive, and managing threads is complex. In multithreaded code, a single setting that is not configured correctly can produce baffling bugs.

In development, the resource pool pattern is generally used, which is the “thread pool”. By delegating the scheduling and management of threads to the thread pool, applications can use a small number of threads to perform a large number of tasks.

The idea behind thread pools is roughly as follows: instead of creating a thread for each task and destroying it after execution, a small number of threads is created up front; the execution logic is wrapped as individual tasks, which are submitted to the pool for scheduling and execution. When a task needs to run, the pool finds an idle thread and hands it the work; when the task completes, the thread is returned to the pool to await the next assignment.

This avoids the overhead of repeatedly creating and destroying threads, and separates task processing from thread management into two distinct parts of the code, letting developers focus on the task logic. At the same time, centralized management and scheduling keeps the actual number of threads under control, avoiding the creation of far more threads than CPU cores at once, which would cause excessive thread context switching and reduced performance.

Java has supported multithreading since its inception, but in early versions, developers had to manually create and manage threads.

Starting from Java 5.0, standard thread pool APIs were provided: the Executor and ExecutorService interfaces, which define the thread pool and the operations it supports. The relevant classes and interfaces live in the java.util.concurrent package and can be used directly when writing simple concurrent tasks. Generally speaking, we instantiate an ExecutorService via the static factory methods of Executors.

Now let’s explain through example code.

First, create a thread factory:

package demo.jvm0205;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;
// Demo thread factory
public class DemoThreadFactory implements ThreadFactory {
    // Thread name prefix
    private String threadNamePrefix;
    // Thread ID counter
    private AtomicInteger counter = new AtomicInteger();
    public DemoThreadFactory(String threadNamePrefix) {
        this.threadNamePrefix = threadNamePrefix;
    }
    @Override
    public Thread newThread(Runnable r) {
        // Create a new thread
        Thread t = new Thread(r);
        // Set a meaningful name
        t.setName(threadNamePrefix + "-" + counter.incrementAndGet());
        // Set as daemon thread
        t.setDaemon(true);
        // Set different priorities; for example, we have multiple thread pools, each handling normal tasks and urgent tasks respectively.
        t.setPriority(Thread.MAX_PRIORITY);
        // Set the class loader of a certain class or a custom one
        // t.setContextClassLoader();
        // Set the outermost exception handler for this thread
        // t.setUncaughtExceptionHandler();
        // No need to start; return directly;
        return t;
    }
}

Generally speaking, in the thread factory, it is recommended to assign a name to each thread for easy monitoring, diagnostics, and debugging.

As needed, the “daemon thread” flag may also be set. Daemon threads are background threads; once the JVM finds that all remaining threads are daemon threads, it exits automatically.
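As an aside, the effect of the daemon flag can be sketched as follows (the DaemonDemo class is our own illustration, not part of the chapter’s demo code):

```java
public class DaemonDemo {
    // Mirrors what a thread factory does with setDaemon
    public static Thread newDaemon(Runnable r) {
        Thread t = new Thread(r);
        // Must be called before start(); afterwards it throws IllegalThreadStateException
        t.setDaemon(true);
        return t;
    }

    public static void main(String[] args) {
        Thread t = newDaemon(() -> {
            while (true) { /* endless background work */ }
        });
        t.start();
        // main is the only non-daemon thread; when it returns, the JVM exits
        // even though the daemon thread above never finishes.
        System.out.println("daemon? " + t.isDaemon());
    }
}
```

If the setDaemon(true) call were removed, this program would hang forever instead of exiting.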

Then we create a “heavy” task class that implements the Runnable interface:

package demo.jvm0205;
import java.util.Random;
import java.util.concurrent.TimeUnit;
// Simulate a heavy task
public class DemoHeavyTask implements Runnable {
    // Task ID
    private int taskId;

    public DemoHeavyTask(int taskId) {
        this.taskId = taskId;
    }

    @Override
    public void run() {
        // Perform some business logic
        try {
            int mod = taskId % 50;
            if (0 == mod) {
                // Simulate waiting indefinitely
                synchronized (this) {
                    this.wait();
                }
            }
            // Simulate a time-consuming task
            TimeUnit.MILLISECONDS.sleep(new Random().nextInt(400) + 50);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        String threadName = Thread.currentThread().getName();
        System.out.println("JVM core technology: " + taskId + "; by: " + threadName);
    }
}

Finally, create a thread pool and submit tasks to be executed:

package demo.jvm0205;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
/**
 * Thread pool example;
 */
public class GitChatThreadDemo {
    public static void main(String[] args) throws Exception {
        // 1. Thread factory
        DemoThreadFactory threadFactory
                = new DemoThreadFactory("JVM.GitChat");
        // 2. Create a cached thread pool; FIXME: there's a pit here...
        ExecutorService executorService =
                Executors.newCachedThreadPool(threadFactory);
        // 3. Submit tasks
        int taskSum = 10000;
        for (int i = 0; i < taskSum; i++) {
            // Execute task
            executorService.execute(new DemoHeavyTask(i + 1));
            // Interval between task submissions
            TimeUnit.MILLISECONDS.sleep(5);
        }
        // 4. Shut down the thread pool
        executorService.shutdownNow();
    }
}



After starting the execution, the output is roughly like this:



```java
......
JVM core technology: 9898; by: JVM.GitChat-219
JVM core technology: 9923; by: JVM.GitChat-185
JVM core technology: 9918; by: JVM.GitChat-204
JVM core technology: 9922; by: JVM.GitChat-209
JVM core technology: 9903; by: JVM.GitChat-246
JVM core technology: 9886; by: JVM.GitChat-244
......
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Object.wait(Object.java:502)
    at demo.jvm0205.DemoHeavyTask.run(DemoHeavyTask.java:23)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
```

As you can see, InterruptedException is thrown here.

This is because in our code, the main method submits tasks and then forcibly shuts down the thread pool without waiting for these tasks to complete.

This is something to be aware of. If you don’t need to force shutdown, you should use the shutdown method.

In general, the shutdown logic of the thread pool will be added to the application’s shutdown hook, such as registering a listener for web applications and executing it in the destroy method. This is also known as “graceful shutdown”.
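A hedged sketch of such a graceful shutdown follows (the class and method names GracefulShutdownDemo and shutdownGracefully are ours; the shutdown-then-await-then-force pattern follows the ExecutorService documentation):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GracefulShutdownDemo {
    // Orderly shutdown: stop accepting new tasks, wait, then force if needed.
    public static boolean shutdownGracefully(ExecutorService pool, long timeoutSeconds) {
        pool.shutdown(); // no new tasks accepted; already-queued tasks still run
        try {
            if (!pool.awaitTermination(timeoutSeconds, TimeUnit.SECONDS)) {
                pool.shutdownNow(); // interrupt tasks that ignored the deadline
                return pool.awaitTermination(timeoutSeconds, TimeUnit.SECONDS);
            }
            return true;
        } catch (InterruptedException e) {
            pool.shutdownNow();
            Thread.currentThread().interrupt(); // preserve the interrupt status
            return false;
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Register the orderly shutdown as a JVM shutdown hook ("graceful shutdown")
        Runtime.getRuntime().addShutdownHook(
                new Thread(() -> shutdownGracefully(pool, 5)));
        pool.execute(() -> System.out.println("task done"));
    }
}
```

Had the example earlier in this section used this pattern instead of shutdownNow(), the InterruptedException would not have appeared.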

JVM Thread Model #

From the previous example, we can see that multiple threads can be executed concurrently in Java.

So how does the JVM implement the underlying threads and scheduling?

Each thread has its own thread stack, and of course, the heap memory is shared by all threads.

Taking Hotspot as an example, this JVM maps Java threads (java.lang.Thread) to underlying operating system threads in a 1:1 manner. It’s very simple! But this is the most basic JVM thread model.

To troubleshoot issues, we need to understand some of the details.

Thread Creation and Destruction #

At the language level, the class corresponding to a thread is java.lang.Thread, and the startup method is Thread#start().

When a Java thread is started, it will create an underlying thread (native Thread) and automatically reclaim it after the task is completed.

All threads in the JVM are scheduled by the operating system to be executed on available CPUs.

Based on an understanding of the Hotspot thread model, we have created the following diagram:

62445939.png

From the diagram, we can see that when the start() method of the Thread object is called, the JVM will perform a series of operations internally.

Because the Hotspot JVM is written in C++, there are many C++ objects related to threads at the JVM level.

  • In Java code, the java.lang.Thread object represents a thread.
  • The JavaThread instance represents the java.lang.Thread internally in the JVM. This instance is a C++ object that stores various additional information to support thread state tracking and monitoring.
  • The OSThread instance represents an operating system thread (sometimes referred to as a physical thread), which includes system-level information required for tracking thread status. Of course, OSThread holds the corresponding “handle” to identify the underlying thread.

The associated java.lang.Thread object and JavaThread instance hold references to each other (address/OOP pointers). Of course, JavaThread also holds a reference to the corresponding OSThread.

When a java.lang.Thread is started, the JVM creates the corresponding JavaThread and OSThread objects, and finally creates the native thread.

After preparing all the VM states (such as thread-local storage, object allocation buffers, synchronization objects, etc.), the native thread is started.

After the native thread completes initialization, it executes a startup method, in which the run() method of the java.lang.Thread object is called.

After the run() method is finished, the result or exception returned is captured and processed accordingly.

Then the thread terminates and notifies the VM thread, which decides whether to stop the entire virtual machine (by checking whether any non-daemon, i.e. foreground, threads remain).

When a thread ends, it releases all allocated resources, removes the JavaThread instance from the known thread set, calls the destructor of OSThread and JavaThread, and finally stops after the hook method executed by the underlying thread is finished.

Now we know that in Java code, a thread is started by calling the start() method of a java.lang.Thread object. Is there any other way to add threads to the JVM? Yes: JNI code can attach an existing native thread to the JVM. Once attached, the data structures the JVM creates for it are basically the same as for a normal Java thread.

The relationship between Java thread priority and operating system thread priority is complex and varies among different systems. This article does not provide a detailed explanation.

Thread States #

The JVM uses different states to indicate what each thread is doing. This helps to coordinate the interaction between threads and provides useful debugging information when problems occur.

The thread state changes when the thread performs different operations. The corresponding code at these transition points checks whether the thread is suitable for executing the requested operation at that time. For specific information, please refer to the safepoint section later.

From the perspective of the JVM, thread states mainly include four types:

  • _thread_new: A new thread being initialized.
  • _thread_in_Java: A thread executing Java code.
  • _thread_in_vm: A thread executing within the JVM.
  • _thread_blocked: A thread blocked for some reason (such as acquiring a lock, waiting for a condition, sleeping, performing blocking I/O operations, etc.)

For debugging purposes, additional state information is maintained in the OSThread, some of which is now deprecated.

This information is used by tools when producing thread dumps and stack traces.

The states used in thread dumps and other reports include:

  • MONITOR_WAIT: The thread is waiting to acquire a contested monitor.
  • CONDVAR_WAIT: The thread is waiting on an internal condition variable used by the JVM (not associated with any Java-level objects).
  • OBJECT_WAIT: The thread is executing an Object.wait() call.

Other subsystems and libraries may also add their own state information, such as the JVMTI system, and the java.lang.Thread class itself exposes ThreadState.

Generally speaking, this additional information is irrelevant to the JVM’s internal thread management, and the JVM itself does not use it.
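For completeness, the Java-level ThreadState mentioned above can be observed directly via Thread.getState(). A small sketch (the ThreadStateDemo class and its polling helper are our own illustration):

```java
public class ThreadStateDemo {
    // Poll until the thread has parked in Object.wait() (bounded to ~2 seconds)
    static Thread.State waitUntilWaiting(Thread t) throws InterruptedException {
        for (int i = 0; i < 200 && t.getState() != Thread.State.WAITING; i++) {
            Thread.sleep(10);
        }
        return t.getState();
    }

    public static void main(String[] args) throws InterruptedException {
        final Object lock = new Object();
        Thread t = new Thread(() -> {
            synchronized (lock) {
                try { lock.wait(); } catch (InterruptedException ignored) { }
            }
        });
        System.out.println(t.getState());        // NEW: created but not started
        t.start();
        System.out.println(waitUntilWaiting(t)); // WAITING: parked in Object.wait()
        synchronized (lock) { lock.notify(); }
        t.join();
        System.out.println(t.getState());        // TERMINATED: run() has finished
    }
}
```

These Java-level states map loosely onto the JVM-internal states above: WAITING here corresponds to _thread_blocked inside the VM.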

Internal JVM Threads #

We will find that even when running a simple “Hello World” program, dozens of threads are created in the Java process.

These dozens of threads are mainly internal JVM threads and library-related threads (such as the reference handler and finalizer threads).

The internal JVM threads can be mainly divided into the following categories:

  • VM thread: A singleton object of VMThread responsible for executing VM operations, which will be discussed later;
  • Timer thread: A singleton object of WatcherThread simulating a timer interrupt for executing timed operations in the VM;
  • GC thread: Thread in the garbage collector used to support parallel and concurrent garbage collection;
  • Compiler thread: Compiles bytecode into native machine code;
  • Signal dispatcher thread: Waits for signals indicated by the process and assigns them to Java-level signal handlers.

All threads in the JVM are instances of the Thread class, and all threads that execute Java code are instances of JavaThread (a subclass of Thread).

The JVM tracks all threads in the Threads_list linked list and uses the Threads_lock (a core synchronization lock used internally by the JVM) to protect it.

Thread Coordination and Communication #

In most cases, a subthread only needs to focus on its own tasks. However, there are also situations where multiple threads need to collaborate to complete a task, which involves inter-thread communication.

There are several ways for threads to communicate, including:

  • Thread waiting: Using the threadA.join() method allows the current thread to wait for another thread to complete before “joining” with it.
  • Synchronization: Including the synchronized keyword and object.wait(), object.notify(), etc.
  • Using concurrent utility classes, such as CountdownLatch, CyclicBarrier, etc.
  • Manageable thread pool-related interfaces, such as the FutureTask class, Callable interface, etc.
  • Java also supports other synchronization mechanisms, such as volatile fields and classes in the java.util.concurrent package (sometimes referred to as juc).
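The first and third items on the list can be sketched together: waiting for workers with a CountDownLatch (the class and method names CoordinationDemo and sumWithLatch are ours, for illustration):

```java
import java.util.concurrent.CountDownLatch;

public class CoordinationDemo {
    // Each worker computes its share; the main thread waits for all of them.
    public static int sumWithLatch(int workers) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(workers);
        int[] results = new int[workers];
        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(() -> {
                results[id] = id + 1; // this write happens-before done.await() returns
                done.countDown();     // signal completion
            }).start();
        }
        done.await();                 // block until every worker has counted down
        int sum = 0;
        for (int r : results) sum += r;
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(sumWithLatch(4)); // 1 + 2 + 3 + 4 = 10
    }
}
```

The same effect could be achieved by calling join() on each worker thread; the latch variant also works when the workers come from a thread pool, where there is no Thread object to join on.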

The most basic and simplest is synchronization, which can be implemented by the JVM using the monitor provided by the operating system, commonly referred to as an object lock or monitor lock.

Basics of synchronized #

Broadly speaking, “synchronization” refers to a mechanism used to prevent unexpected contamination (commonly known as “race conditions”) in concurrent operations.

HotSpot provides a monitor lock (Monitor) for Java, which allows threads to achieve mutual exclusion through the monitor while executing program code. The monitor has two states: locked and unlocked. When a thread obtains ownership of the monitor, it can enter the critical section protected by the monitor. In Java, this critical section is called a “synchronized block” and is identified by the synchronized statement in the code.

Each Java object has an associated monitor by default, and a thread can lock and unlock the monitor it holds. A thread can lock the same monitor multiple times, and unlocking is the reverse operation of locking.

At any given time, only one thread can hold the monitor lock, and other threads attempting to acquire the same monitor will be blocked. In other words, different threads are mutually exclusive on the monitor lock, allowing only one thread to access the protected code or data at any given time.

In Java, using a synchronized block requires the thread to first acquire the monitor lock on a specific object. Only after obtaining the corresponding monitor lock can the thread continue to run and execute the code inside the synchronized block. The lock is automatically released once normal or exceptional execution is complete.

Calling a method marked as synchronized also automatically performs a lock operation and requires the corresponding lock to execute the method. An instance method of a class locks the object lock pointed to by this, while a static method locks the monitor of the Class object, affecting all instances. When entering/exiting a method, the appropriate monitor’s lock/unlock operation is automatically triggered. If a thread tries to lock a monitor and the monitor is not locked, the thread immediately gains ownership of the monitor.

If, in the case of a locked monitor, a second thread attempts to gain ownership of the monitor, it is not allowed to enter the critical section (i.e., the code inside the synchronized block). After the owner of the monitor unlocks it, the second thread must also try to acquire (or be granted) exclusive ownership of the lock.

Here are some terms related to monitor locks:

  • “Enter” means gaining exclusive ownership of the monitor lock and being able to execute the critical section.
  • “Exit” means releasing ownership of the monitor and exiting the critical section.
  • “Owns” means that the thread locking the monitor owns it.
  • “Uncontended” refers to only one thread performing synchronization operations on an unlocked monitor.

Additionally, it should be mentioned that the Java language does not handle deadlock detection; it is the responsibility of the programmer.

To summarize, the synchronized keyword, which uses monitor locks, is used to coordinate access to a section of code by multiple threads. It can be applied to static methods, instance methods, and code blocks.

Lock scope, from widest to narrowest: a static synchronized method locks the Class object (affecting all instances) > a synchronized instance method locks the instance (this) > a synchronized block locks exactly the object named in the statement.
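The three forms can be sketched side by side (the SyncFormsDemo class and its counters are our own illustration):

```java
public class SyncFormsDemo {
    private static int staticCount = 0;          // guarded by the SyncFormsDemo.class monitor
    private int instanceCount = 0;               // guarded by "this"
    private final Object blockLock = new Object();
    private int blockCount = 0;                  // guarded by blockLock

    // Static synchronized method: locks the Class object, affecting all instances
    public static synchronized void incStatic() { staticCount++; }
    public static synchronized int getStatic() { return staticCount; }

    // Instance synchronized method: locks the object pointed to by "this"
    public synchronized void incInstance() { instanceCount++; }
    public synchronized int getInstance() { return instanceCount; }

    // Synchronized block: locks exactly the object named in the statement
    public void incBlock() { synchronized (blockLock) { blockCount++; } }
    public int getBlock() { synchronized (blockLock) { return blockCount; } }

    public static void main(String[] args) throws InterruptedException {
        SyncFormsDemo d = new SyncFormsDemo();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    incStatic();
                    d.incInstance();
                    d.incBlock();
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        // All three counters end at 4000: each form provides mutual exclusion
        System.out.println(getStatic() + " " + d.getInstance() + " " + d.getBlock());
    }
}
```

Without the synchronized keyword, the unsynchronized ++ operations would race and the final counts would typically be below 4000.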

Wait and Notify #

Every object has an associated monitor lock, and the JVM maintains a corresponding wait set, which contains references to threads.

For newly created objects, their wait set is empty. The addition or removal from the wait set is an atomic operation, performed by the methods Object#wait, Object#notify, and Object#notifyAll.

Thread interruption also affects the wait set, but Thread#sleep and Thread#join are not included in this scope.
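A classic use of the wait set is a one-slot mailbox; here is a minimal sketch (the Mailbox class is our own illustration). Note that each wait() call sits inside a loop, which guards against spurious wakeups:

```java
public class Mailbox {
    private String message;          // the shared slot, guarded by "this"
    private boolean full = false;

    public synchronized void put(String m) throws InterruptedException {
        while (full) wait();         // join this object's wait set until the slot empties
        message = m;
        full = true;
        notifyAll();                 // move waiting consumers out of the wait set
    }

    public synchronized String take() throws InterruptedException {
        while (!full) wait();        // wait until a producer fills the slot
        full = false;
        notifyAll();                 // wake producers blocked in put()
        return message;
    }

    public static void main(String[] args) throws InterruptedException {
        Mailbox box = new Mailbox();
        Thread producer = new Thread(() -> {
            try { box.put("hello"); } catch (InterruptedException ignored) { }
        });
        producer.start();
        System.out.println(box.take());
        producer.join();
    }
}
```

Both methods must be synchronized: the JVM requires the caller of wait()/notifyAll() to own the object's monitor, otherwise an IllegalMonitorStateException is thrown.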

Optimizations for Synchronization in Hotspot JVM #

The HotSpot JVM employs two advanced techniques, “uncontended synchronization” and “contended synchronization,” to improve the performance of synchronized statements.

In the case of uncontended synchronization, which is the most common synchronization scenario in most business scenarios, optimization is achieved using biased locking. In general, this type of synchronization has minimal performance overhead.

This is because, in the lifecycle of most objects, they are usually locked and used by only one thread. Thus, the lock is biased towards that thread.

Once biased, the thread can effortlessly lock and unlock the object in subsequent operations, without using expensive atomic instructions.

For synchronization in a contention scenario, advanced adaptive spin techniques are used to optimize and increase throughput. This optimization is also effective for heavily concurrent and contention-prone scenarios.

With these optimizations, on most systems Java’s built-in synchronization operations no longer suffer from the performance issues present in earlier versions.

The cost of thread context switching:

The time slice for Linux is typically 0.75~6ms, while for Windows XP it is about 10~15ms. There may be slight variations between different systems, but they are generally in the millisecond range. Assuming the CPU frequency is 2GHz, each time slice corresponds to approximately 2 million clock cycles. If there is such a large overhead for each context switch, the system’s performance will be greatly affected.

That’s why the semaphore implementation in the JDK has been optimized with spin operations, making full use of the time slice already allocated to the current thread by the operating system. Otherwise, this time slice would be wasted.

If you perform performance tests on multiple threads’ synchronized and wait-notify operations in Java code, you will find that the performance of the program is not significantly affected by the time slice cycle.

In HotSpot JVM, most synchronization operations are handled by so-called “fast paths” code.

JVM has two just-in-time compilers (JIT) and an interpreter, both of which generate fast path code.

These two JITs are “C1” (the -client compiler) and “C2” (the -server compiler). Both C1 and C2 generate fast path code directly at the synchronization points.

In the absence of contention, the synchronization operations will be completed entirely in the fast path. However, if we need to block or wake up threads (in the monitorenter or monitorexit, respectively), the fast path code will call the slow path.

The slow path implementation is done in native C++ code, while the fast path is generated by the JIT.

Mark Word #

The synchronization status of object locks needs to be recorded somewhere. HotSpot encodes it into the first position in the object header in memory, which is called the “mark word”.

The mark word is used to identify various states, and this position can also be reused to point to other synchronization metadata.

In addition, the mark word can also be used to store GC age data and the unique hash code value of the object.

The states of the mark word include:

  • Neutral: representing an unlocked state.
  • Biased: representing a locked/unlocked or non-shared state.
  • Stack-Locked: representing locked and shared, but uncontended; the mark points to the displaced mark word on the owner thread’s stack.
  • Inflated: representing locked/unlocked and shared, with competing threads blocked in monitorenter or wait(). This mark points to the “object monitor” structure corresponding to the heavyweight lock.

Safepoints #

There are several concepts related to safepoints that need to be distinguished:

  • The safepoint check entry implanted in method code.
  • Thread in safepoint state: the thread pauses execution, and at this time, the thread stack no longer changes.
  • The JVM’s safepoint state: all threads are in the safepoint state.

In short, when the JVM is at a safepoint, all other threads in the JVM are blocked. At this time, when the VMThread performs operations, no business thread will modify the Java heap memory, and all threads are in a checkable state, meaning that their thread stack will not change (think about why?).

JVM has a special internal thread called “VMThread”. VMThread waits for operations appearing in the VMOperationQueue and executes them after the virtual machine reaches a safepoint.

Why do we need to extract these operations and execute them in a separate thread?

Because many operations require the JVM to reach the so-called “safepoint” before execution. As mentioned earlier, the heap memory does not change during a safepoint.

These operations can only be sent to VMThread for execution, such as the STW phase in garbage collection algorithms, biased lock revocation, thread stack dumping, thread suspension or termination, and many check/modification operations requested by JVMTI, etc.

Safepoints are initiated using a poll-based cooperative mechanism.

In simple terms, threads may frequently perform checks like “should I pause at the safepoint?”.

Efficient checking is not simple. Places where safepoint checks are performed include:

  • During thread state transitions. Most state transitions perform this operation, but not all, for example, when a thread leaves the JVM and enters native code.
  • Other places where inquiries are issued include returning from compiled native code methods or certain stages in loop iterations.

After requesting a safepoint, VMThread must wait for all known threads to be in the safepoint state before executing the VM operation.

During the safepoint, all running threads are blocked using the Threads_lock. After performing the VM operation, VMThread releases the Threads_lock.

Many VM operations are synchronous, meaning that the requester is blocked until the operation is completed. However, there are also asynchronous or concurrent operations, which means that the requester can be executed in parallel with VMThread (of course, before entering the safepoint state).

Thread Dump #

A thread dump is a snapshot of the state of all threads in the JVM. It is usually in text format, which can be saved to a text file for manual viewing and analysis, or automatically analyzed by a program.

The state of each thread can be represented by a call stack. The thread dump shows the behavior of each thread, which is very useful for diagnosis and troubleshooting.

In short, a thread dump is a snapshot of the thread’s status, mainly the well-known StackTrace, which is the method call stack.

JVM supports multiple ways to perform a thread dump, including:

  • JDK tools, including: jstack, jcmd, jconsole, jvisualvm, Java Mission Control, etc.
  • Shell command or system console, such as Linux’s kill -3 or Windows’s Ctrl + Break;
  • JMX technology, mainly using ThreadMxBean. We can call it in the program or JMX client, and the returned result is a text string that can be flexibly processed.

We usually use the command line tools provided by JDK to obtain thread dumps of Java applications.

jstack Tool #

In the previous chapters, we have detailed the jstack tool, which is used specifically for thread dumps. Generally, it connects to the local JVM:

jstack [-F] [-l] [-m] <pid>

where pid refers to the Java process id. The following options are supported:

  • -F option forces a thread dump to be performed; sometimes jstack pid may freeze, in which case the -F flag can be used.
  • -l option searches for synchronizers and locks in the heap memory.
  • -m option additionally prints native stack frames (C and C++).

Example usage:

jstack 8248 > ./threaddump.txt

jcmd Tool #

In the previous chapters, we have also detailed the jcmd tool, which essentially sends a series of commands to the target JVM. Here is an example:

jcmd 8248 Thread.print

JMX Approach #

JMX technology supports various fancy operations. We can use ThreadMxBean for thread dumps.

Here is an example code:

package demo.jvm0205;
import java.lang.management.*;
/**
 * Thread dump example
 */
public class JMXDumpThreadDemo {
    public static void main(String[] args) {
        String threadDump = snapThreadDump();
        System.out.println("=================");
        System.out.println(threadDump);
    }
    public static String snapThreadDump() {
        StringBuilder threadDump = new StringBuilder(System.lineSeparator());
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
        for (ThreadInfo threadInfo : threadMXBean.dumpAllThreads(true, true)) {
            threadDump.append(threadInfo.toString());
        }
        return threadDump.toString();
    }
}

Thread Dump Result #

Because they are all in string form, the results of thread dumps obtained in various ways are similar.

For example, running the previous JMX thread dump example program in debug mode yields the following result:

"JDWP Command Reader" Id=7 RUNNABLE (in native)

"JDWP Event Helper Thread" Id=6 RUNNABLE

"JDWP Transport Listener: dt_socket" Id=5 RUNNABLE

"Signal Dispatcher" Id=4 RUNNABLE
"Finalizer" Id=3 WAITING on java.lang.ref.ReferenceQueue$Lock@606d8acf
 at java.lang.Object.wait(Native Method)
 - waiting on java.lang.ref.ReferenceQueue$Lock@606d8acf
 at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
 at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:164)
 at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:212)

"Reference Handler" Id=2 WAITING on java.lang.ref.Reference$Lock@782830e
 at java.lang.Object.wait(Native Method)
 - waiting on java.lang.ref.Reference$Lock@782830e
 at java.lang.Object.wait(Object.java:502)
 at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
 at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153)

"main" Id=1 RUNNABLE
 at sun.management.ThreadImpl.dumpThreads0(Native Method)
 at sun.management.ThreadImpl.dumpAllThreads(ThreadImpl.java:454)
 at demo.jvm0205.JMXDumpThreadDemo.snapThreadDump(JMXDumpThreadDemo.java:21)
 at demo.jvm0205.JMXDumpThreadDemo.main(JMXDumpThreadDemo.java:13)

Analysis:

A simple analysis shows that in a basic Java program, the following threads are present:

  • JDWP-related threads, which were discussed in previous courses.
  • Signal Dispatcher, which dispatches operating system signals (e.g. from kill -3) to the appropriate handlers. We can also register our own signal handlers in a program; interested readers can search for the relevant keywords.
  • Finalizer, the finalizer thread, which handles the finalization method for resource release. This is generally not something we need to pay much attention to nowadays.
  • Reference Handler, the reference processor.
  • main, the main thread, which is the foreground thread and is essentially no different from ordinary threads.

If the program runs for a long time, in addition to business threads, there may be GC threads and other threads. Please refer to the previous text for specific situations.

I recommend that students try using different commands and perform simple analysis.

Deadlock Example and Analysis #

Let’s simulate a thread deadlock with the following simple code:

package demo.jvm0207;
import java.util.concurrent.TimeUnit;
public class DeadLockSample {
    private static Object lockA = new Object();
    private static Object lockB = new Object();

    public static void main(String[] args) {
        ThreadTask1 task1 = new ThreadTask1();
        ThreadTask2 task2 = new ThreadTask2();
        //
        new Thread(task1).start();
        new Thread(task2).start();
    }

    private static class ThreadTask1 implements Runnable {
        public void run() {
            synchronized (lockA) {
                System.out.println("lockA by thread:"
                        + Thread.currentThread().getId());
                try {
                    TimeUnit.SECONDS.sleep(2);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                synchronized (lockB) {
                    System.out.println("lockB by thread:"
                            + Thread.currentThread().getId());
                }
            }
        }
    }

    private static class ThreadTask2 implements Runnable {
        public void run() {
            synchronized (lockB) {
                System.out.println("lockB by thread:"
                        + Thread.currentThread().getId());
                try {
                    TimeUnit.SECONDS.sleep(2);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                synchronized (lockA) {
                    System.out.println("lockA by thread:"
                            + Thread.currentThread().getId());
                }
            }
        }
    }
}

The code is a few dozen lines long, but the logic is simple: the two threads acquire the two locks in opposite orders, so each ends up waiting forever for the lock held by the other.

Discovering Deadlocks with Thread Stack Dumps #

After the program is started, we can use various tools introduced earlier to take thread stack dumps, for example:

# Get the process ID
jps -v
# Dump the threads using jstack
jstack 8248
# Dump the threads using jcmd
jcmd 8248 Thread.print

The content obtained by these two command line tools is similar:

Found one Java-level deadlock:
=============================
"Thread-1":
  waiting to lock monitor 0x00007f8d9d030818 (object 0x000000076abef128, a java.lang.Object),
  which is held by "Thread-0"
"Thread-0":
  waiting to lock monitor 0x00007f8d9d032e98 (object 0x000000076abef138, a java.lang.Object),
  which is held by "Thread-1"

Java stack information for the threads listed above:
===================================================
"Thread-1":
 at demo.jvm0207.DeadLockSample$ThreadTask2.run(DeadLockSample.java:46)
 - waiting to lock <0x000000076abef128> (a java.lang.Object)
 - locked <0x000000076abef138> (a java.lang.Object)
 at java.lang.Thread.run(Thread.java:748)
"Thread-0":
 at demo.jvm0207.DeadLockSample$ThreadTask1.run(DeadLockSample.java:28)
 - waiting to lock <0x000000076abef138> (a java.lang.Object)
 - locked <0x000000076abef128> (a java.lang.Object)
 at java.lang.Thread.run(Thread.java:748)

Found 1 deadlock.

As seen, these tools automatically detect deadlocks and print the call stacks of the relevant threads.
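Besides these tools, deadlocks can also be detected programmatically through the same ThreadMXBean API used earlier; a minimal sketch (the DeadlockDetector class name is ours):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockDetector {
    // Returns info on deadlocked threads, or an empty array if none are found.
    public static ThreadInfo[] findDeadlocks() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] ids = mx.findDeadlockedThreads(); // null when no deadlock exists
        return ids == null
                ? new ThreadInfo[0]
                : mx.getThreadInfo(ids, true, true); // include monitors and synchronizers
    }

    public static void main(String[] args) {
        ThreadInfo[] deadlocked = findDeadlocks();
        if (deadlocked.length == 0) {
            System.out.println("No deadlocks found");
        }
        for (ThreadInfo info : deadlocked) {
            System.out.println("Deadlocked: " + info.getThreadName()
                    + " waiting on " + info.getLockName());
        }
    }
}
```

A watchdog thread running this check periodically is a common way to surface deadlocks in long-running services without waiting for someone to take a manual thread dump.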

Discovering Deadlocks with Visualization Tools #

Of course, we can also use the visualization tools mentioned earlier, such as jconsole:

79277126.png

Or JVisualVM:

79394987.png

The thread dumps obtained by these tools are similar to the ones previously mentioned. Please refer to the previous text for details.

Are there any tools for automatically analyzing threads? Please refer to the section “Introduction to fastthread-related Tools” later in this course.
