Description

When there is a performance bottleneck in an application, most of the threads start to accumulate at the problematic area, and those threads end up with the same stack trace. Thus, whenever a significant number of threads exhibit identical, repetitive stack traces, those traces should be investigated; they may be indicative of a performance problem.

Here are a few such scenarios:

  1. Say your system of record (SOR) or an external service slows down; a significant number of threads will then be waiting for its response, and those threads will exhibit the same stack trace.
  2. Say a thread acquires a lock and never releases it; several other threads on the same execution path will then enter the BLOCKED state, exhibiting the same stack trace.
  3. If a loop (for, while, do..while) condition never terminates, the threads executing that loop will exhibit the same stack trace.

When any of these scenarios occurs, the application's performance and availability are compromised.
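As a minimal sketch of scenario 2 (the method name is illustrative, not from any real application), the following program makes one thread hold a monitor indefinitely; any other thread calling the same synchronized block then shows up as BLOCKED, exactly as such threads appear in a thread dump:

```java
public class LockContentionDemo {
    static final Object LOCK = new Object();

    // Simulates scenario 2: the lock holder never returns, so every
    // other thread entering the synchronized block goes BLOCKED.
    static void generateReferenceNumber() {
        synchronized (LOCK) {
            try {
                Thread.sleep(Long.MAX_VALUE); // stand-in for a call that never returns
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Thread holder = new Thread(LockContentionDemo::generateReferenceNumber, "holder");
        holder.start();
        Thread.sleep(200); // let the holder acquire the lock

        Thread waiter = new Thread(LockContentionDemo::generateReferenceNumber, "waiter");
        waiter.start();
        Thread.sleep(200); // let the waiter reach the monitor

        // The waiter shows the same state a thread dump would: BLOCKED (on object monitor).
        System.out.println(waiter.getName() + " state: " + waiter.getState());

        holder.interrupt(); // release the lock so the demo can finish
        waiter.interrupt();
        holder.join();
        waiter.join();
    }
}
```

With many waiter threads instead of one, a thread dump of this program would show them all with the identical BLOCKED stack trace, just like the example below.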

Example

Below is an excerpt from a thread dump of a major B2B application. The application had been running fine, but all of a sudden it became unresponsive. A thread dump was captured, and it revealed that 225 out of 400 threads were exhibiting the same stack trace. Here is that stack trace:

"ajp-bio-192.168.100.128-9022-exec-79" daemon prio=10 tid=0x00007f4d2001c000 nid=0x1d1c waiting for monitor entry [0x00007f4ce91fa000]
   java.lang.Thread.State: BLOCKED (on object monitor)
	at com.xxxxxxxxx.xx.xxx.xxxx.ReferenceNumberGeneratorDaoImpl.getNextItineryReferenceNumber(ReferenceNumberGeneratorDaoImpl.java:55)
	- waiting to lock 0x00000006afaa5a60 (a com.xxxxxxxxx.sp.dao.impl.ReferenceNumberGeneratorDaoImpl)
	at sun.reflect.GeneratedMethodAccessor3112.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:601)
	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
	at org.springframework.orm.hibernate3.HibernateInterceptor.invoke(HibernateInterceptor.java:111)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
	at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
	at com.sun.proxy.$Proxy36.getNextItineryReferenceNumber(Unknown Source)
	at com.xxxxxxxxx.xx.xxxxxxxx.xxx.ReferenceNumberGeneratorServiceImpl.getNextItineryReferenceNumber(ReferenceNumberGeneratorServiceImpl.java:15)
	:
	:

From the stack trace, you can infer that the thread was blocked, waiting for the lock on the object 0x00000006afaa5a60. 225 such threads were waiting to obtain the lock on this same object. That is definitely a bad sign and a clear indication of thread starvation.

Apparently this lock was held by "ajp-bio-192.168.100.128-9022-exec-84". Below is the stack trace of that thread. You can see that it acquired the lock on the object 0x00000006afaa5a60, but after acquiring it, got stuck waiting for a response from the database. No database timeout had been set for this application, so the database call never returned, which left the other 225 threads stuck and the application unresponsive.

After a proper database timeout value was set, the problem went away.
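The post does not show the actual fix, but one common way to enforce a database timeout with plain JDBC is `Statement.setQueryTimeout`. The class, method, and SQL below are hypothetical placeholders, not the application's real code:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical DAO; illustrates the timeout idea only.
public class ReferenceNumberDao {
    private static final int QUERY_TIMEOUT_SECONDS = 30;

    public long nextReferenceNumber(Connection conn) throws SQLException {
        String sql = "SELECT NEXT VALUE FOR itinerary_ref_seq"; // placeholder query
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            // If the database does not respond within the timeout, the driver
            // aborts the call with a SQLTimeoutException instead of letting the
            // thread hang forever while it holds an application-level lock.
            ps.setQueryTimeout(QUERY_TIMEOUT_SECONDS);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getLong(1);
            }
        }
    }
}
```

Many drivers also support a socket-level timeout (e.g. a `socketTimeout` connection property) that covers network hangs the statement timeout cannot catch; which knob applies depends on the driver in use.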

"ajp-bio-192.168.100.128-9022-exec-84" daemon prio=10 tid=0x00007f4d2000a800 nid=0x1d26 runnable [0x00007f4ce6ce1000]
    java.lang.Thread.State: RUNNABLE
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:150)
	at java.net.SocketInputStream.read(SocketInputStream.java:121)
	at java.io.DataInputStream.readFully(DataInputStream.java:195)
	at java.io.DataInputStream.readFully(DataInputStream.java:169)
	at net.sourceforge.jtds.jdbc.SharedSocket.readPacket(SharedSocket.java:850)
	at net.sourceforge.jtds.jdbc.SharedSocket.getNetPacket(SharedSocket.java:731)
	- locked  0x0000000b8dcc8c81 (a java.util.concurrent.ConcurrentHashMap)
	at net.sourceforge.jtds.jdbc.RexxonseStream.getPacket(RexxonseStream.java:477)
	at net.sourceforge.jtds.jdbc.RexxonseStream.read(RexxonseStream.java:114)
	at net.sourceforge.jtds.jdbc.RexxonseStream.peek(RexxonseStream.java:99)
	at net.sourceforge.jtds.jdbc.TdsCore.wait(TdsCore.java:4127)
	at net.sourceforge.jtds.jdbc.TdsCore.executeSQL(TdsCore.java:1086)
	- locked 0x0000000d1cdd7b17 (a net.sourceforge.jtds.jdbc.TdsCore)
	at net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:493)
	at net.sourceforge.jtds.jdbc.JtdxxreparedStatement.executeQuery(JtdxxreparedStatement.java:1032)
	at com.jolbox.bonecp.PreparedStatementHandle.executeQuery(PreparedStatementHandle.java:174)
	at com.xxxxxxxx.xx.xxx.xxx.ReferenceNumberGeneratorxxxxxx.getNextItineryReferenceNumber(ReferenceNumberGeneratorxxxxxx.java:65)
	- locked  0x00000006afaa5a60(a com.xxxxxxxx.xx.xxx.xxx.ReferenceNumberGeneratorxxxxxx)
	at sun.reflect.GeneratedMethodAccessor3112.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorxxx.invoke(DelegatingMethodAccessorxxx.java:43)
	at java.lang.reflect.Method.invoke(Method.java:601)
	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
	at org.springframework.orm.hibernate3.HibernateInterceptor.invoke(HibernateInterceptor.java:111)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
	at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
	at com.sun.proxy.$Proxy36.getNextItineryReferenceNumber(Unknown Source)
	at com.xxxxxxxx.xx.service.xxx.ReferenceNumberGeneratorServicexxx.getNextItineryReferenceNumber(ReferenceNumberGeneratorServicexxx.java:15)
	at sun.reflect.GeneratedMethodAccessor3031.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorxxx.invoke(DelegatingMethodAccessorxxx.java:43)
	at java.lang.reflect.Method.invoke(Method.java:601)
	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
        :
        :

So the bottom line: always look for repeating stack traces.
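You can even spot repeating stack traces programmatically from inside the JVM. The sketch below uses the standard `Thread.getAllStackTraces()` API to group live threads by identical stack trace and count each group:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class RepeatingStackFinder {
    // Groups live threads by identical stack trace and returns a map of
    // stack-trace text -> number of threads currently showing it.
    public static Map<String, Integer> countIdenticalStacks() {
        Map<String, Integer> counts = new HashMap<>();
        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            String key = Arrays.toString(e.getValue());
            counts.merge(key, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Report only the suspicious groups: more than one thread on one trace.
        countIdenticalStacks().forEach((stack, n) -> {
            if (n > 1) {
                System.out.println(n + " threads share:\n" + stack);
            }
        });
    }
}
```

The same grouping is what thread-dump analysis tools do on a captured dump file; this in-process variant is useful for a quick health check or a diagnostics endpoint.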

Why the name RSI?

A Repetitive Strain Injury (RSI) happens when you exercise body parts (hands, fingers, wrists, neck, ...) repeatedly in a wrong posture. Similarly, when there is a performance bottleneck, multiple threads start to exhibit the same stack trace again and again. Those stack traces should be studied in detail.
