Using HikariCP instead of C3P0

I created this issue to propose moving away from C3P0 to HikariCP. HikariCP is a newer connection pooling library and a much better one than C3P0. I have been adding HikariCP support to a Clojure-based ORM, and I found HikariCP’s code better than C3P0’s by miles. I don’t mean to say HikariCP is good code (I have never come across good code in a connection pooling library), but C3P0 is a huge codebase and a pretty bad one at that. HikariCP is comparatively better and smaller.

Advantages of using HikariCP:

  • Significant performance benefits.
  • A better and smaller codebase, which makes it easier to understand.
  • Staying up to date with what’s current rather than legacy.
  • More active development than C3P0 (though both have concentrated rather than distributed contributors).

Disadvantages:

  • Any module that uses C3P0’s constructs directly, rather than going through OpenMRS or Hibernate’s SessionFactory abstractions, will break.
  • The usual downsides of replacing a core library in an open source codebase with many contributors: lost shared knowledge/understanding, etc.

Wanted to know the community’s thoughts on this proposal.

  • Is there a performance problem that needs to be fixed? I doubt changing the connection pool used by Hibernate will have a significant impact, because connection pooling is not rocket science. Most of the processing time is spent executing queries (the DB engine, specifically the optimizer, is key).

  • You claim that it’s much better; is there any documentation to show it? Maybe it’s faster but not as mature, or its Hibernate integration is not tested enough. The links you provide are from the creator, so they may be biased.


This feels like an “if it ain’t broke, don’t fix it” situation to me.

If there is some real-world scenario where c3p0 is underperforming, it would make sense to look for a replacement. But we shouldn’t do it just to have a “better” library under the hood, if there’s no end-user-visible impact.

(In practice, I doubt that there are any modules that do anything non-standard with connection pooling, so swapping this out in openmrs-core and testing it in the reference application is probably sufficient to be confident it works across the board.)

I know one place in Bahmni where it is definitely going to help: we have a bulk CSV import that calls multiple OpenMRS APIs per insert of a single CSV row, and the import happens over multiple threads concurrently. I don’t know about other modules that could use this. But with performance, it’s best if we keep up with the best standards currently available.
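For readers unfamiliar with the Bahmni importer, the pattern described is roughly this shape. This is a hypothetical sketch, not the actual Bahmni code: `importRow` stands in for the real per-row OpenMRS API calls, each of which would borrow a connection from the pool, which is exactly where pool behavior under concurrency matters.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a multi-threaded CSV import.
public class CsvImportSketch {
    static final AtomicInteger imported = new AtomicInteger();

    static void importRow(String row) {
        // In the real importer this would call OpenMRS APIs, each of which
        // persists data through a pooled DB connection.
        imported.incrementAndGet();
    }

    public static int importAll(List<String> rows, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (String row : rows) {
            pool.submit(() -> importRow(row)); // rows processed concurrently
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return imported.get();
    }
}
```

With N worker threads, up to N connections are checked out at once, so the importer’s throughput is sensitive to how quickly the pool hands out and recycles connections.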

@darius: At Bahmni we were at one point bypassing the Hibernate abstraction and using the C3P0 DataSource directly, because we needed a DataSource object; we fixed it later on. But the same could be true of other modules.

And do you know that the bottleneck of this process (the bulk CSV import) is the connection pooling, and not something else? Perhaps the DB shows contention when multiple threads are creating rows and updating indexes. Have you tried loading the CSV in a single thread?

There is not really a bottleneck; it’s just quite slow at the moment. It is usable, but I was hoping for a speed bump, and this proposal was more about keeping up with what’s more performant than about a hard requirement. Moving to a faster connection pooling library will help the bulk CSV import somewhat, but it won’t make a dramatic impact; rather, it should bring small improvements in many places.

The problem is that there’s almost no information about HikariCP in the Hibernate forums (or anywhere else). C3P0 may be slower and bloated, but at least it is tested.

First, you should locate the culprit of the problem (a bad query, lack of indexes, too many Hibernate objects in memory; the list goes on). Is the source code available? I’m curious how you parallelize the loading of the file; if I understood correctly, you use multiple threads.


Maybe I should have posted this here, instead of in the originating enhancement request in JIRA…

Hi, guys. I’m the original developer of HikariCP, and wanted to offer my thoughts, hopefully promoting without over-promoting.

HikariCP is young in world-time (started September 2013), but mature in internet-time. HikariCP is now the default connection pool of Play Framework, Slick, and others. Tens of thousands of developers use HikariCP every day.

While HikariCP is ridiculously efficient, our unwavering target is not speed but reliability. Everybody plugs in a connection pool, and “Hey, look, it works!” Until it doesn’t.

Many developers and deployment teams test their products under fairly ideal conditions, never testing what happens to their application under adverse conditions. As a repository of medical records, I would expect that OpenMRS’ need for “availability” is high.

As far as I know, our team is the only team that has tortured pools under adverse conditions, and HikariCP is the only pool that is specifically hardened for fast recovery. Whenever someone asks why they should use HikariCP, “reliability” is the first word out of our mouths.

If you have time, you might find reason to use HikariCP in reading our analysis Bad Behavior: Handling Database Down.



Interesting that you mention Hibernate, because RedHat/Hibernate core developer Brett Meyer is the one who created the ConnectionProvider implementation for Hibernate. He tweeted this last Fall:

If you are interested in another 3rd-party analysis, the guys over at Wix have written a nice one. Wix has over 100 million users.

We attribute the relatively low traffic on various forums to HikariCP being extremely simple to configure (there is not much to go wrong), especially when switching from another pool. It would be easy to attribute it to low popularity, but our latest download numbers from the Sonatype Maven repository alone have broken through 10,000/month.


@mihirkh, @lluismf, @darius, and especially @hikaricp… thanks for your suggestions and comments. :smile:

Not sure one could ask for a better rounded group of folks to work through this. :smile:

@mihirkh, and @hikaricp: we tend to be driven by needs above all else. It would be nice to identify specific benefits or fixes we’d arrive at when we make a code change like this.

@hikaricp: if we were to consider such a change, can we rely upon you and your team for the technical assistance needed to see this through successfully? :smile:

Thanks a ton for the contributions in advance!

@paul We pride ourselves on supporting our users as quickly and effectively as possible. Feel free to peruse the issue tracker to get a feel for the support we provide.


Certainly, justifying a change when there is no immediate problem to be solved can be difficult. I’ll lay it out in a short skit:

:grinning:: “You should get this new car.”

:smirk:: “My car is running fine, never had a minute’s trouble, why should I replace it?”

A perfectly natural response, to which I say…

:grinning: : “The model of car you are driving has a history of random fires. Brake seizures when traveling at high speed have also been reported. And finally, your car has no airbags.”

Aaaand scene.

HikariCP has never had a reported hang or deadlock (it would be difficult with the lock-free design), and as noted in my previous comment, has been designed and tested to provide predictable behavior even in the face of network failures and recoveries.


Ditto to what @darius said. I don’t see any harm in setting up an OpenMRS instance with HikariCP. Assuming things simply keep working, then – at a minimum – we can document how any implementation can choose their preferred pooling library. If we can demonstrate significant performance benefits or even some behind-the-scenes benefits… for example, like this:

then we may decide that HikariCP is a better out-of-the-box connection pooling solution relative to c3p0 and make it our default.

@mihirkh, would you be willing/able to set up a demonstration server and get some data on how OpenMRS runs with HikariCP for us?


-@burke :burke:

Testing a scenario with multiple concurrent users is hard; do we have any tool for that?

@Burke I don’t understand your graph. Is a flat line a good thing?

Agree that load testing would be the way to exercise the connection pool. We’ve played with Grinder before for load testing. Nothing is currently set up for it. The bad/good news is it won’t take too much of a load to induce performance issues with OpenMRS. :wink:

The graph represents a relatively unstable pool of connections becoming stable with the introduction of HikariCP. It comes from a post at this link that @hikaricp shared. I don’t know if the stability (straight line) would be meaningful to an implementation… the point I was trying to make is that there could be tangible, measurable (and worthwhile) benefits beyond simply performance improvements.

There’s no harm if someone wants to test out OpenMRS with HikariCP in place of C3P0. It will take some real-world experience before we’d consider switching to it (e.g., implementations convinced to try it out and reporting tangible benefits, and/or the Hibernate community migrating to it). On one hand, if it’s simple to switch, then the barrier to trying it out will be low. On the other hand, if none of the implementations are experiencing problems with C3P0, it may be harder to convince them to change.

Can you expand on that statement a bit? It seems that hibernate-hikaricp is already a core module in Hibernate 4.3.6 and later:
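With that module on the classpath, the switch would (in a sketch, assuming the property names the hibernate-hikaricp module documents for Hibernate 4.3.x) come down to something like this in `hibernate.properties`; the specific pool-size values here are illustrative, not a recommendation:

```properties
# Tell Hibernate to obtain connections through the HikariCP-backed provider
hibernate.connection.provider_class=org.hibernate.hikaricp.internal.HikariCPConnectionProvider

# Pool settings are passed through using the hibernate.hikari prefix
hibernate.hikari.minimumIdle=10
hibernate.hikari.maximumPoolSize=50
hibernate.hikari.idleTimeout=120000
```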


I meant that OpenMRS is one of the crusty ol’ communities that’s still using C3P0 while everyone else has moved on to HikariCP, looking at us in their rear-view mirrors.


Ha! Don’t feel too special yet. It would be interesting to find out the % of adoption among Hibernate users though… Just because it’s joined the others doesn’t necessarily mean it’s becoming commonly used yet. (Especially if it’s still poorly documented!)


I’m not a connection pool expert by any means, but you can allocate, for instance, 10 connections from the DB and keep them ad infinitum even if no users are working. So the “stable” argument doesn’t show much, in my opinion. In periods of low activity, maybe it’s convenient to give the connections back so other apps can use them. Maybe the “unstable” scenario has its advantages.

@lluismf The graph cited above is from a website with almost constant traffic/activity. However, even in relatively inactive applications you typically have “turnover” occurring in the pool.

The reason you see troughs and spikes in the graph is that connections in pools are typically configured with idle timeouts as well as “maximum lifetimes”. For example, say that a user configures a minimum of 10 connections, an idle timeout of 2 minutes, and a maximum lifetime of 20 minutes…
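In HikariCP terms, that example setup might be sketched like this (the JDBC URL is a placeholder and the snippet is purely illustrative; C3P0 exposes equivalent knobs under names like `maxIdleTime` and `maxConnectionAge`):

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Sketch only: the URL is a placeholder; HikariCP timeouts are milliseconds.
public class PoolSetupSketch {
    public static HikariDataSource createPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/openmrs"); // placeholder
        config.setMinimumIdle(10);               // minimum of 10 connections
        config.setIdleTimeout(2 * 60 * 1000L);   // idle timeout: 2 minutes
        config.setMaxLifetime(20 * 60 * 1000L);  // maximum lifetime: 20 minutes
        return new HikariDataSource(config);     // opens the pool
    }
}
```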

In an inactive pool, many connections will reach the idle timeout and be closed/removed from the pool, followed by new connections replacing them (to get back to the minimum capacity setting).

In an active pool, connections rarely if ever reach the idle timeout, but reach a maximum lifetime at 20 minutes and are similarly closed/removed from the pool, and then replaced.

Many pools, C3P0 apparently among them, have difficulty primarily with the latter. If the pool is populated with 10 connections at startup, then all 10 connections reach maximum lifetime at nearly the same time and are closed. Now the pool is in deficit. Incoming getConnection() calls create connections “on demand” if the pool is still below maximum capacity. C3P0 also has background worker threads that try to keep the pool at the minimum setting.

Poor timing/synchronization/threading logic leads to the pool falling into deficit and subsequently overcompensating. You can see this clearly in the graph:


  • First a dip.
  • Followed by a spike to maximum connection capacity.
  • Then a few minutes later a “die off” of the over capacity that returns back to steady state.

The trouble with this behavior is that, according to Edulify, during the dips and spikes connection timeouts were experienced by the application, and memory/load on the DB server also spiked.

Under the same constant load, and configured with the same pool sizes, idle timeout, and maximum lifetime, HikariCP maintains a very stable pool. If you zoom-in you can see a few “wiggles” in the connection count. The application never experiences connection timeouts, and the database is never overloaded with new connection attempts.
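The “whole cohort expires at once” dynamic described above can be shown with a deliberately toy, discrete-time simulation. This is plain Java for illustration, not real pool code: all 10 connections are created in the same tick, so they all hit maxLifetime together and the pool momentarily falls into full deficit before the filler tops it back up.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy simulation of cohort expiry in a connection pool (illustrative only).
public class CohortExpiryDemo {
    static final int MIN_IDLE = 10;
    static final int MAX_LIFETIME = 20; // lifetime, in simulation ticks

    static int computeWorstDeficit() {
        Deque<Integer> pool = new ArrayDeque<>(); // values = creation tick
        for (int i = 0; i < MIN_IDLE; i++) pool.push(0); // all created at startup

        int worstDeficit = 0;
        for (int tick = 1; tick <= MAX_LIFETIME + 1; tick++) {
            final int now = tick;
            // retire every connection whose lifetime has elapsed
            pool.removeIf(created -> now - created >= MAX_LIFETIME);
            worstDeficit = Math.max(worstDeficit, MIN_IDLE - pool.size());
            // background filler restores minimum capacity on the next pass
            while (pool.size() < MIN_IDLE) pool.push(now);
        }
        return worstDeficit;
    }

    public static void main(String[] args) {
        // All 10 connections expire in the same tick, so the pool briefly
        // sits at a full deficit of 10: the dip visible in the graph.
        System.out.println("worst deficit: " + computeWorstDeficit());
    }
}
```

Staggering creation times avoids this; as I understand it, HikariCP applies a small random variance to each connection’s maximum lifetime precisely so that connections do not retire in lockstep.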


Thanks for the detailed explanation. I’m getting to know a bit more about connection pool internals. :slight_smile:

@hikaricp do you have the time to create a pull request showing us the changes required to switch to HikariCP?

Then developers with spare cycles can try it out with OpenMRS instances and see the benefits we would get compared to C3P0.