Thoughts on ranking ORM tools

Posting the previous performance comparison results (see here) drew a much greater response than I ever expected, as well as some confusion about the goal of the test, which I would like to address.

How real (world) is this?

Most importantly, a performance test is only useful if it can predict the performance of a real-world application. The test I did was a really primitive, batch-based CRUD test. Unless you are developing an application that performs batch operations like those in this test, the results can’t really be used to predict real-world performance. We’re talking about two mature frameworks with lots of additional services – e.g. caching and transaction handling – that this test does not evaluate, even though they can have a huge impact on performance.
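To make concrete what "a primitive, batch-based CRUD test" means, here is a minimal sketch of such a micro-benchmark. It is not the original test (which used .NET ORM frameworks); it uses Python's built-in sqlite3 as a stand-in backend, and the table name, row count, and timing structure are all illustrative assumptions.

```python
import sqlite3
import time

def time_batch_crud(n=1000):
    """Time a primitive create/read/update/delete batch against an
    in-memory SQLite database. A stand-in for the kind of batch CRUD
    micro-benchmark discussed above; all names/sizes are illustrative."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
    timings = {}

    # Create: insert n rows in one batch.
    start = time.perf_counter()
    conn.executemany("INSERT INTO person (name) VALUES (?)",
                     [(f"name-{i}",) for i in range(n)])
    conn.commit()
    timings["create"] = time.perf_counter() - start

    # Read: fetch everything back.
    start = time.perf_counter()
    rows = conn.execute("SELECT id, name FROM person").fetchall()
    timings["read"] = time.perf_counter() - start

    # Update: rewrite every row.
    start = time.perf_counter()
    conn.executemany("UPDATE person SET name = ? WHERE id = ?",
                     [(f"updated-{i}", i + 1) for i in range(n)])
    conn.commit()
    timings["update"] = time.perf_counter() - start

    # Delete: wipe the table.
    start = time.perf_counter()
    conn.execute("DELETE FROM person")
    conn.commit()
    timings["delete"] = time.perf_counter() - start

    conn.close()
    return len(rows), timings

count, timings = time_batch_crud()
print(count, {k: round(v, 4) for k, v in timings.items()})
```

Note what such a loop does *not* exercise: caching layers, transaction management strategies, lazy loading, change tracking – exactly the services that dominate real-world ORM performance.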

Be a little sceptical

I would suggest treating the results with caution and checking the source. There are a lot of people who are experts on a single ORM framework; some are experts on two, but it mostly stops there. Unless the authors state that they are experts on the topic, I would approach such results sceptically.

As for my own case: I do have experience with the tested frameworks, but I am far from being an expert with any of them – so treat my results with caution. I’ve published the source code so anyone can analyze it, and if there are issues in the code causing a major performance drop, hopefully someone will let me know – just as some members of the NHibernate community did after I published the first results.

There is no “best”

I believe there is no valid way to decide which framework is ultimately better. There is an ongoing debate about ranking ORM tools and about whether a test like this is good for deciding which framework is better. I think it is not. Just to mention a few reasons:

  • Most real-world applications work in a far more complex way than this test simulates. What’s more, commonly used framework features – e.g. caching, transaction handling – are not tested at all, even though they can have a great impact on real-world performance.
  • Performance isn’t everything. Even if these tests reflected performance in a real-world environment (which they don’t), an ORM is not chosen on performance alone. When using an ORM tool you accept worse performance than hand-written SQL in exchange for additional features. It’s usually a tradeoff: the more features you get, the bigger the overhead.

There is one case where this kind of test can be useful, though. If you find a huge performance difference between two frameworks (e.g. 50–1000 times), it’s a good indicator that there is an issue either with the test itself or with one of the frameworks. If you’re sure the test is written correctly, the comparison can lead to an analysis of why one framework performs so much worse than the other. If you can find a valid answer, good. If you can’t – that’s when things get interesting, and the test was worth doing.

The best way to compare ORM tools, in my opinion, is based on the project you need them for. Write down your requirements, weigh them, and evaluate the frameworks accordingly. ORM tools are comparable the way cars are: you can’t decide which one is better unless you know what you’ll use it for. An F1 car may be great for a speed race but would utterly fail in a desert rally.
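The "write down your requirements, weigh them, and evaluate accordingly" approach amounts to a simple weighted decision matrix. A minimal sketch, assuming entirely made-up criteria, weights, and ratings – substitute your own project's needs:

```python
def weighted_score(weights, scores):
    """Combine per-criterion ratings into one number using your own weights.
    weights: {criterion: importance}, scores: {criterion: rating 0-10}."""
    total_weight = sum(weights.values())
    return sum(weights[c] * scores[c] for c in weights) / total_weight

# Illustrative criteria and ratings only -- not measurements of any real ORM.
weights = {"performance": 2, "caching": 3, "learning curve": 1, "tooling": 2}
framework_a = {"performance": 7, "caching": 9, "learning curve": 4, "tooling": 8}
framework_b = {"performance": 9, "caching": 5, "learning curve": 8, "tooling": 6}

print(round(weighted_score(weights, framework_a), 2))
print(round(weighted_score(weights, framework_b), 2))
```

Note how the outcome flips with the weights: here framework A wins because caching is weighted heavily, even though B has the better raw-performance rating. That is the whole point – the "best" framework depends on what you need it for.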
