> This problem is usually caused by a leaky abstraction; the ORM, or whatever database abstraction you are using, can’t anticipate that it would need to send N queries, so it can’t automatically optimize this down to a single query.
If you evaluate a condition through an ORM before issuing an update, the result of that check can already be stale by the time the update runs. If you skip the ORM and express the condition as a where-clause instead, the check and the update happen in a single atomic statement, so it's much less of a problem.
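As a sketch of the difference (plain sqlite3 here, with made-up table and column names): the first pattern reads, decides in application code, then writes, so the read can go stale under concurrency; the second pushes the decision into the WHERE clause, making check and update one atomic statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, stock INTEGER)")
conn.execute("INSERT INTO items VALUES (1, 1)")

# Racy pattern: the read can be stale by the time the write runs.
(stock,) = conn.execute("SELECT stock FROM items WHERE id = 1").fetchone()
if stock > 0:
    conn.execute("UPDATE items SET stock = stock - 1 WHERE id = 1")

# Safer pattern: the condition travels with the update.
cur = conn.execute(
    "UPDATE items SET stock = stock - 1 WHERE id = 1 AND stock > 0"
)
print(cur.rowcount)  # 0 here: stock already hit 0 above, so no row matched
```

The second statement can never drive `stock` negative, no matter how many writers race.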
> However, this only works since Haskell is declarative / pure; the low-level operational semantics (like evaluation order) are abstracted away from the programmer, and as such are amenable to optimization.
In C#, there is LINQ to SQL, an ORM. It does SQL code generation, even from user-provided code, as long as that code is in LINQ expression-tree form.
With DelegateDecompiler, you can turn native lambdas into LINQ expression trees. (You just need to re-wrap the decompiled method body to remove the extra "this" parameter from the lambda.) With this, you can write plain C# code and have it generate SQL.
Thank you. People so often forget (or don’t realize) that their RDBMS is also doing a ton of optimization on their query, and a primary reason it’s able to do so in real-time is because SQL is declarative.
The author is right to note that Haskell can optimise across module (abstraction) boundaries. However, I remember from my childhood that Debray [1] did a lot of work on link-time optimisation (late 1980s). And of course there's the classic Self work that fed into the JVM [2], and the whole-of-program compilers that are receiving renewed attention now; MLton [3] being a classic of the genre, with "supercompilation" being the active area of research AIUI. So at least sometimes abstraction boundaries are transparent to optimisers.
On the other hand the classic data abstraction story (signatures/interfaces for structures/modules) naturally allows for selecting or optimising implementations depending on uses. There was some great work done in the early 2000s on that (see [4]) and I'm sure the state-of-the-art has moved on since [5].
To answer the OP: no. Your compiler will never do the optimization you want it to do, no matter how far up the abstraction you try to move.
The fundamental issue isn't just that your compiler doesn't understand SQL. The problem is that your compiler doesn't understand how data is or will be stored. It's blind to the current state of the dataset.
For example, maybe it's the case that the data is stored in a hashtable and it's rarely written. In that case, N+1 might actually be the right way to query the data to avoid excessive read locks across the database.
Perhaps the data has one, two, or several indexes created against it. Those indexes can change at any time and have to be created deliberately, since building them can take a lot of resources.
An RDBMS builds up a huge amount of statistics for optimization purposes. That's all information you'd have to have the compiler JIT into the application to get similar performance, or even to have a good sense of how to do the optimization.
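One way to see that this information lives in the database rather than in any compiler is to ask the database for its plan. A small sqlite3 sketch (table and index names made up): the plan for the same query changes the moment an index appears, with no application code recompiled.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")

# Plan before any index exists: a full table scan.
before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?", ("a@b.c",)
).fetchall()

conn.execute("CREATE INDEX idx_users_email ON users(email)")

# Same SQL, new plan: the database switches to an index search on its own.
after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?", ("a@b.c",)
).fetchall()

print(before[-1][-1])  # describes a table scan
print(after[-1][-1])   # describes a search using idx_users_email
```

The planner's choice depends on schema and statistics the application never sees, which is exactly why a host-language compiler can't make this call.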
Is the goal to make good ORM queries easier, or to prevent bad queries? It's not clear there's really a compiler solution to the latter. Should further database queries be prohibited inside a loop in which a database cursor is in scope? It's hard to see how that could be enforced other than by something like What Color Is Your Function (https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...), with some functions marked as making queries and others as not.
To solve this, maybe best practice should instead be to ensure the database connection is not held in a global variable and must be passed down. That would make it more obvious when a database is improperly used within a loop.
The same problem exists for any expensive operation within a loop (say, a database query while parsing the results of an API call, or vice versa).
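A minimal Python sketch of that practice (all names hypothetical): the connection is an explicit parameter, so any function that can touch the database says so in its signature, and a helper without that parameter provably cannot query inside a loop.

```python
import sqlite3

def fetch_order_total(conn: sqlite3.Connection, order_id: int) -> int:
    # Takes the connection explicitly: its signature advertises database access.
    (total,) = conn.execute(
        "SELECT SUM(qty) FROM order_lines WHERE order_id = ?", (order_id,)
    ).fetchone()
    return total or 0

def format_order(order_id: int, total: int) -> str:
    # No conn parameter: this function provably can't hit the database.
    return f"order {order_id}: {total} items"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE order_lines (order_id INTEGER, qty INTEGER)")
conn.executemany("INSERT INTO order_lines VALUES (?, ?)", [(1, 2), (1, 3)])
print(format_order(1, fetch_order_total(conn, 1)))  # order 1: 5 items
```

This is the poor man's version of the "colored functions" idea: the type of the parameter list marks which functions query and which don't.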
Technically ORMs attempt to deal with the mismatch between relations[1] and the rich data structures (objects) general purpose programming languages normally allow expression of. Hence the literal name: Object-Relation Mapping. That SQL, the language, is declarative is immaterial.
> but one of Codd's goals when developing the relational model was to allow non-programmers to access data
That is unlikely. He was staunchly opposed to database vendors adding "veneer" to make use more friendly and considered SQL an abomination. Codd was dead set on mathematical purity, which is, I'd argue, also why his vision ultimately failed in practice as actual relational databases are too hard to understand for those not well versed in the math.
[1] Technically tables, since we're talking about SQL, which isn't relational. But the mapping idea is the same either way.
> However, what if we raise the abstraction boundary and make the ORM part of the language? This means that we could formulate rewrite rules for the ORM, allowing it to eg merge the N queries into a single query.
It's correct that abstraction boundaries are optimization boundaries, but I don't think you need to make queries part of the language itself to raise the boundary.
To give a concrete example, take the Django ORM in Python. If you write a function which returns a single database record, then calling that function many times in a loop is naturally going to result in N+1 queries. However, if you instead return a QuerySet, then what you're returning is a lazily-evaluated sequence of records. The caller can then choose whether to immediately evaluate the query (when they only need one record) or to collect together a bunch of QuerySets and union them into a single query.
In other words we give the caller more control and thus more opportunity to optimize for their use case.
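The pattern can be sketched with a toy analogue in plain sqlite3 (not Django's actual API; all names made up): the function returns a description of a query rather than its result, so the caller decides between N round trips and one batched IN query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO authors VALUES (?, ?)", [(1, "Ann"), (2, "Ben"), (3, "Cy")]
)

def author_by_id(author_id):
    # Return a *description* of the query, not its result (lazy).
    return ("SELECT id, name FROM authors WHERE id = ?", author_id)

wanted = [author_by_id(i) for i in (1, 3)]

# Caller's choice: evaluate one by one (N round trips)...
one_by_one = [conn.execute(sql, (arg,)).fetchone() for sql, arg in wanted]

# ...or collapse the batch into a single query (one round trip).
ids = [arg for _, arg in wanted]
placeholders = ",".join("?" * len(ids))
batched = conn.execute(
    f"SELECT id, name FROM authors WHERE id IN ({placeholders}) ORDER BY id",
    ids,
).fetchall()

print(one_by_one)  # [(1, 'Ann'), (3, 'Cy')]
print(batched)     # [(1, 'Ann'), (3, 'Cy')]
```

Same results either way; the difference is purely who gets to decide when and how the queries execute.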
Abstraction boundaries can be optimization opportunities if you choose the right abstractions. You want to present interfaces that go well with the underlying capabilities. In the case of ORMs, the underlying capabilities include various kinds of set manipulation, so you should present an interface that supports filtering, unions, lazy evaluation, and so on.
The key is capabilities rather than implementation. If your data structure is good at iteration and bad at random access, present an abstraction that supports enumeration but not indexing. But don’t present an abstraction that hands out Nodes and lets the caller mess around with their Next pointers.
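A minimal Python sketch of that principle (names made up): the wrapper exposes iteration, which a linked list is good at, and deliberately omits indexing and any access to its internal nodes.

```python
class _Node:
    # Internal representation; never handed out to callers.
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedSeq:
    """Iterable view over a linked list; no __getitem__ on purpose."""

    def __init__(self, values):
        self._head = None
        for v in reversed(list(values)):
            self._head = _Node(v, self._head)

    def __iter__(self):
        # Enumeration is what the structure is good at, so it's all we expose.
        node = self._head
        while node is not None:
            yield node.value
            node = node.next

seq = LinkedSeq([1, 2, 3])
print(list(seq))                    # [1, 2, 3]
print(hasattr(seq, "__getitem__"))  # False: random access isn't offered
```

Because callers never see a `_Node`, the implementation is free to switch to an array, a generator, or a database cursor without breaking anyone.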
Just avoid ORMs. They're designed to encapsulate, not to be efficient. Lazy loading is part of the initial design; turning it off destroys encapsulation and requires you to know what the code below will fetch.
At that point you might as well abandon the ORM and preload the context with something more efficient.
> However, this only works since Haskell is declarative / pure; the low-level operational semantics (like evaluation order) are abstracted away from the programmer, and as such are amenable to optimization.
What else is declarative / pure? SQL. Not ORMs.

[1] https://dblp.org/pid/d/SKDebray.html
[2] https://en.wikipedia.org/wiki/Self_(programming_language)
[3] http://mlton.org/
[4] https://dblp.org/pid/59/4501.html
[5] https://en.wikipedia.org/wiki/Interprocedural_optimization
SQL's declarative model has an impedance mismatch with imperative programming; ORMs attempt to deal with that mismatch.
SQL hides complexity, but one of Codd's goals when developing the relational model was to allow non-programmers to access data.
The design decisions and tradeoff analysis were massively different from those of an abstraction targeting only developers.
There are many different persistence models with vastly different tradeoffs, costs and benefits.
Balancing integration and disintegration drivers is complex and can impact optimization in many ways.