logback / LOGBACK-268

Introduce fair locking in OutputStreamAppender



    • Type: Improvement
    • Resolution: Fixed
    • Priority: Major
    • Fix Version: 1.1.2
    • Component: logback-core


      Hi all!

      In a multithreaded system with large amounts of logging, under heavy load some processing threads run fast (0 ms execution time) while others run very slowly.

      Profiling traced the problem to the logging framework (logback): many threads were blocked on the ch.qos.logback.core.OutputStreamAppender lock. A plain "synchronized" block does not guarantee the order in which waiting threads acquire the lock, so the last thread to arrive sometimes gets the lock while the first one may wait a very long time (in my case, up to 70000-80000 ms).

      I tried replacing this lock with a java.util.concurrent.locks.ReentrantLock in "fair" mode, and the situation improved greatly: when threads acquire the lock in exactly the order they requested it, the running times of the different threads stay close to each other, and none of them waits too long. Server responsiveness became much closer to real time.
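To illustrate the suggested change, here is a minimal sketch (simplified names, not logback's actual code) of an appender-like class that serializes writes with a fair ReentrantLock instead of a synchronized block, so that waiting threads are handed the lock in roughly FIFO order:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of an appender that guards its output stream with a fair lock.
class FairOutputStreamAppender {
    // Passing 'true' requests fair mode: under contention, the
    // longest-waiting thread is granted the lock next (FIFO order),
    // unlike synchronized, which makes no ordering guarantee.
    private final ReentrantLock lock = new ReentrantLock(true);
    private final OutputStream out;

    FairOutputStreamAppender(OutputStream out) {
        this.out = out;
    }

    void append(String event) throws IOException {
        lock.lock();          // blocks in FIFO order when contended
        try {
            out.write(event.getBytes(StandardCharsets.UTF_8));
            out.flush();
        } finally {
            lock.unlock();    // always release, even if the write fails
        }
    }
}
```

The try/finally shape is important: unlike synchronized, a ReentrantLock is not released automatically on exceptions, so the unlock must sit in a finally block.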

      I have attached a simple test application (test-logback-concurrency.zip) and the results of testing with the different lock types (results.odt).

      So I would suggest making the same improvement in logback trunk, so it is available to all logback users. Fair locking adds a little overhead, while the benefits are large: much better real-time behaviour, and no more thread execution reordering (as I see it, a logging framework should not interfere with thread execution order...).

      This solution has been tested both in the test application and in a real-world system, and it works well.

      Alternatively, one could refactor the logback core to reduce overall lock contention time (I haven't examined the logback architecture deeply, so I am not sure whether that is possible).




            Assignee: Tony Trinh (tony19)
            Reporter: Sergey Bykov (sbykov)
            Votes: 1
            Watchers: 5