When using a polling database adapter in an OSB proxy service, one may notice the following behaviour in WebLogic Server:
- a warning in the server logs about one or more stuck threads, like this:
<[STUCK] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)' has been busy for "705" seconds working on the request "weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl@21b9db0", which is more than the configured time (StuckThreadMaxTime) of "600" seconds. Stack trace:
Thread-118 "[STUCK] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)'"
-- Waiting for notification on: oracle.tip.adapter.db.InboundWork@21b8e86[fat lock]
- and/or the server health state of the OSB managed server changing from "Ok" to "Warning"
Such behaviour alarms administrators, who suspect that something is wrong with the deployed applications or OSB services.
The Oracle documentation states that this behaviour is by design and can be ignored. To verify that the OSB proxy service's database adapter is the source, simply disable the proxy service in the OSB console: doing so makes the stuck threads disappear. Still, the behaviour seems strange at first glance, so why does it occur?
When an inbound database adapter is defined, WebLogic threads are used to poll for events in the configured database. Because OSB is designed for high performance and throughput, a number of threads, determined by the NumberOfThreads property in the adapter's JCA file, is exclusively reserved for the database adapter to perform the inbound polling. For performance reasons, these reserved threads are never released and never returned to the corresponding thread pool. Once the configured thread timeout is exceeded, which is 600 seconds by default, the threads are reported as stuck.
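As an illustration, the relevant part of such a database adapter JCA file might look like the following sketch. The adapter name, connection factory location, descriptor and query names are hypothetical placeholders; only the general shape of the activation spec is meant to be representative:

```xml
<adapter-config name="PollOrders" adapter="Database Adapter"
                xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <!-- JNDI location of a connection factory configured on the DbAdapter deployment (hypothetical) -->
  <connection-factory location="eis/DB/OrdersDataSource"/>
  <endpoint-activation portType="PollOrders_ptt" operation="receive">
    <activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
      <property name="DescriptorName" value="PollOrders.Order"/>
      <property name="QueryName" value="PollOrdersSelect"/>
      <property name="PollingInterval" value="5"/>
      <!-- each of these threads is reserved exclusively for inbound polling
           and will eventually be reported as [STUCK] -->
      <property name="NumberOfThreads" value="3"/>
    </activation-spec>
  </endpoint-activation>
</adapter-config>
```

With NumberOfThreads set to 3 as in this sketch, three WebLogic threads would be permanently occupied by the polling work and show up as stuck after the timeout.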
Although this is the default behaviour, it is confusing and could lead to serious problems: real threading issues in deployed applications or services, not caused by the adapter threads, might not be noticed and handled in time. So what can be done to get rid of the stuck adapter threads?
The Oracle documentation proposes defining a separate Work Manager and configuring it as the proxy service transport's dispatch policy. To do so, the following steps have to be performed:
- Define a custom global Work Manager using the WebLogic console, with the OSB managed server as deployment target
- Configure the newly defined Work Manager to ignore stuck threads
- Configure the OSB proxy service transport's dispatch policy to use the newly defined Work Manager
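The console steps above end up as entries in the domain's config.xml. A sketch of what the generated Work Manager definition might look like is shown below; the Work Manager and server names are hypothetical:

```xml
<!-- domain config.xml: global Work Manager targeted to the OSB managed server -->
<self-tuning>
  <work-manager>
    <name>DbAdapterNoStuckThreadsWM</name>
    <target>osb_server1</target>
    <!-- threads running under this Work Manager are excluded from stuck thread detection -->
    <ignore-stuck-threads>true</ignore-stuck-threads>
  </work-manager>
</self-tuning>
```

In the proxy service's transport configuration, the dispatch policy is then set to this Work Manager name instead of the default, so the adapter's polling threads run under the Work Manager that ignores stuck threads.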
Afterwards, the stuck thread behaviour caused by the OSB proxy service or by its configured inbound database adapter should not show up again.