Imagine you're an operations guy and you've just received a phone call or alert telling you that the application you're responsible for is running slow. You bring up your console, check all related processes, and notice that one java.exe process isn't using any CPU while the other Java processes are. The average sys admin at this point would simply kill and restart the Java process, cross their fingers, and hope everything returns to normal (this actually does work most of the time). An experienced sys admin might first perform a kill -3 on the Java process, capture a thread dump, and pass it back to development for analysis. Now suppose your application returns to normal: end users stop complaining, you pat yourself on the back and beat your chest, and you resume whatever you were doing before you were rudely interrupted. The story I've just told may seem contrived, but I've witnessed it several times with customers over the years. The stark reality is that no one in operations has the time or visibility to figure out the real business impact behind issues like this, so little pressure is applied to development to investigate data like thread dumps, find root causes, and prevent the same production slowdowns from recurring. It's true that restarting a JVM or CLR will solve a fair few issues in production, but it's only a temporary fix that papers over the real problems in the application logic and configuration; the deadlock check sketched below shows the kind of information a thread dump would have surfaced.
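A thread dump is one way to see a deadlock; the JVM can also report one programmatically through the java.lang.management API. Here is a minimal sketch of that kind of check as a simple polling watcher (the class name DeadlockWatcher and the five-second interval are illustrative choices, not anything from the original article):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

/** Periodically asks the JVM for deadlocked threads and prints the culprits. */
public class DeadlockWatcher {

    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        while (true) {
            long[] deadlocked = threads.findDeadlockedThreads(); // null when no deadlock exists
            if (deadlocked != null) {
                ThreadInfo[] infos = threads.getThreadInfo(deadlocked, true, true);
                for (ThreadInfo info : infos) {
                    System.out.printf("%s is blocked on %s held by %s%n",
                            info.getThreadName(), info.getLockName(), info.getLockOwnerName());
                }
            }
            Thread.sleep(5_000); // poll every five seconds
        }
    }
}
```

This is essentially what analysing a kill -3 thread dump tells you, just automated: which threads are stuck, on which monitors, and who holds them.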
Now imagine for one minute that operations could actually figure out the business impact of production issues, identify the root cause, and communicate this information to development so problems could be fixed rapidly. Sounds too good to be true, right? Well, a few weeks ago an AppDynamics customer did just that, and the story they told was quite compelling.

Code Deadlock in a distributed E-Commerce Application

The customer application in question was a busy e-commerce retail website in the US. At a high level, the architecture was heavily distributed, with several hundred application tiers spanning JVMs, LDAP servers, a CMS server, message queues, databases, and third-party web services. Looking at the AppDynamics problem pane as the customer saw it shows just how severe their issues were. AppDynamics dynamically baselines the performance of every business transaction type and classifies each execution as normal, slow, very slow, or stalled depending on its deviation from that unique performance baseline. The application was handling just over 4,000 business transactions per minute during busy periods, and just under 1 million transactions a day. Approximately 2.5% of those transactions, roughly 25,000 a day, were impacted by the slowdown, which was the result of the 92 code deadlocks that occurred during peak hours.
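AppDynamics' actual baselining algorithm isn't described in the article, but the idea of classifying each execution by its deviation from a per-transaction-type baseline can be illustrated with a toy sketch. Everything here is an assumption for illustration only: the TransactionBaseline name, the use of Welford's running statistics, the 3/4 standard-deviation thresholds, and the 60-second stall cutoff.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy illustration of dynamic baselining: keep a running mean and standard
 * deviation of response times per business-transaction type and classify
 * each new execution by how far it deviates from that baseline.
 */
public class TransactionBaseline {

    enum Verdict { NORMAL, SLOW, VERY_SLOW, STALLED }

    /** Running mean/variance via Welford's algorithm. */
    private static class Stats {
        long count;
        double mean;
        double m2; // sum of squared deviations from the mean

        void add(double x) {
            count++;
            double delta = x - mean;
            mean += delta / count;
            m2 += delta * (x - mean);
        }

        double stdDev() {
            return count > 1 ? Math.sqrt(m2 / (count - 1)) : 0.0;
        }
    }

    private final Map<String, Stats> baselines = new HashMap<>();

    /** Classify one execution, then fold it into the baseline for its type. */
    Verdict record(String transactionType, double responseTimeMs) {
        Stats stats = baselines.computeIfAbsent(transactionType, k -> new Stats());
        Verdict verdict = classify(stats, responseTimeMs);
        stats.add(responseTimeMs);
        return verdict;
    }

    private Verdict classify(Stats stats, double responseTimeMs) {
        if (responseTimeMs > 60_000) return Verdict.STALLED;  // arbitrary hard stall cutoff
        if (stats.count < 30) return Verdict.NORMAL;          // not enough history for a baseline yet
        double deviations = (responseTimeMs - stats.mean) / Math.max(stats.stdDev(), 1.0);
        if (deviations > 4) return Verdict.VERY_SLOW;
        if (deviations > 3) return Verdict.SLOW;
        return Verdict.NORMAL;
    }
}
```

Calling record("Checkout", 8500) for every completed transaction would, over time, separate a genuinely slow checkout from one that is merely slower than the site-wide average, which is the point of baselining each transaction type separately.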
("Thread2 locked resource1: " + resource1) ("Thread1 locked resource2: " + resource2) ("Thread2 locked resource3: " + resource3) ("Thread2 locked resource2: " + resource2) ("Thread1 locked resource1: " + resource1) */ public class DeadLockExample //start threads. * This program is used to show deadlock situation in multithreading.