Re: pg, mysql comparison with "group by" clause

From: Greg Stark <gsstark(at)mit(dot)edu>
To: Scott Marlowe <smarlowe(at)g2switchworks(dot)com>
Cc: Greg Stark <gsstark(at)MIT(dot)EDU>, Stephan Szabo <sszabo(at)megazone(dot)bigpanda(dot)com>, Rick Schumeyer <rschumeyer(at)ieee(dot)org>, pgsql-sql(at)postgresql(dot)org
Subject: Re: pg, mysql comparison with "group by" clause
Date: 2005-10-13 18:26:58
Message-ID: 878xwxi87h.fsf@stark.xeocode.com
Lists: pgsql-sql


Scott Marlowe <smarlowe(at)g2switchworks(dot)com> writes:

> Sorry, but it's worse than that. It is quite possible that two people
> could run this query at the same time and get different data from the
> same set and the same point in time. That shouldn't happen accidentally
> in SQL, you should know it's coming.

I'm pretty unsympathetic to the "we should make a language less powerful and
more awkward because someone might use it wrong" argument.

> > In standard SQL you have to
> > write GROUP BY ... and list every single column you need from the master
> > table. Forcing the database to do a lot of redundant comparisons and sort on
> > uselessly long keys where in fact you only really need it to sort and group by
> > the primary key.
>
> But again, you're getting whatever row the database feels like giving
> you. A use of a simple, stupid aggregate like an any() aggregate would
> be fine here, and wouldn't require a lot of overhead, and would meet the
> SQL spec.

Great, so I have a user table with, oh, say, 40 columns. And I want to return
all those columns plus their current account balance in a single query.

The syntax under discussion would be:

select user.*, sum(money) from user join user_money using (user_id) group by user_id

You would prefer:

select user_id,
any(username) as username, any(firstname) as firstname,
any(lastname) as lastname, any(address) as address,
any(city) as city, any(street) as street, any(phone) as phone,
any(last_update) as last_update, any(last_login) as last_login,
any(referrer_id) as referrer_id, any(register_date) as register_date,
...
sum(money) as balance,
count(money) as num_txns
from user join user_money using (user_id) group by user_id

Having a safety is fine, but when I have to disengage it for every single
column it starts to get more than a little annoying.
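The short form is well-defined precisely because the grouping key is the
primary key: every other column of the master table is functionally determined
by it. A minimal sketch of this, using SQLite (which, like MySQL, accepts bare
columns under GROUP BY) and illustrative `users`/`user_money` tables that are
my own, not from the thread:

```python
import sqlite3

# Sketch: grouping by the primary key alone is deterministic, because every
# other column of `users` is pinned down by user_id. Table and column names
# here are hypothetical stand-ins for the ones discussed above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (user_id INTEGER PRIMARY KEY, username TEXT);
    CREATE TABLE user_money (user_id INTEGER, money INTEGER);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO user_money VALUES (1, 100), (1, 250), (2, 40);
""")

rows = conn.execute("""
    SELECT users.user_id, users.username, SUM(money) AS balance
    FROM users JOIN user_money ON user_money.user_id = users.user_id
    GROUP BY users.user_id
    ORDER BY users.user_id
""").fetchall()

# Same answer on every run: username cannot vary within a group keyed
# by the primary key.
print(rows)   # [(1, 'alice', 350), (2, 'bob', 40)]
```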

Note that you cannot write the above as a subquery, since there are two
aggregates. You could write it as a join against a view, but don't expect to
get the same plans from Postgres for that.
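For concreteness, the view-style rewrite mentioned above pushes both
aggregates into a derived table and joins it back, so no column needs an
any() wrapper. A sketch, again with hypothetical table names:

```python
import sqlite3

# Sketch: compute both aggregates once in a derived table (an inline view),
# then join the result back to the master table. Names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (user_id INTEGER PRIMARY KEY, username TEXT);
    CREATE TABLE user_money (user_id INTEGER, money INTEGER);
    INSERT INTO users VALUES (1, 'alice');
    INSERT INTO user_money VALUES (1, 100), (1, 250);
""")

row = conn.execute("""
    SELECT u.*, m.balance, m.num_txns
    FROM users u
    JOIN (SELECT user_id,
                 SUM(money)   AS balance,
                 COUNT(money) AS num_txns
          FROM user_money
          GROUP BY user_id) m
      ON m.user_id = u.user_id
""").fetchone()
print(row)   # (1, 'alice', 350, 2)
```

Whether the planner treats this as well as the direct GROUP BY form is
exactly the caveat above.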

> Actually, for things like aggregates, I've often been able to improve
> performance with sub selects in PostgreSQL.

If your experience is like mine, it's a case of two wrongs cancelling each
other out. The optimizer underestimates the efficiency of nested loops, which
is a separate problem. Since a subquery's only eligible plan is basically a
nested loop, it often turns out to be faster than the more exotic plans a join
can reach.
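To make the equivalence concrete: for a single aggregate, the correlated
subquery and the join/GROUP BY forms return the same rows even though a
planner may reach them by very different plans. A sketch (hypothetical
tables; note the forms differ for users with no money rows, which the
subquery keeps as NULL and the inner join drops):

```python
import sqlite3

# Sketch: the same per-user balance written two ways. With every user
# present in user_money, the two queries agree row for row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (user_id INTEGER PRIMARY KEY, username TEXT);
    CREATE TABLE user_money (user_id INTEGER, money INTEGER);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO user_money VALUES (1, 100), (1, 250), (2, 40);
""")

# Correlated scalar subquery: the only real plan is a nested loop.
sub = conn.execute("""
    SELECT u.user_id,
           (SELECT SUM(money) FROM user_money m
             WHERE m.user_id = u.user_id) AS balance
    FROM users u ORDER BY u.user_id
""").fetchall()

# Equivalent join form: the planner has more plan shapes to choose from.
join = conn.execute("""
    SELECT u.user_id, SUM(m.money) AS balance
    FROM users u JOIN user_money m ON m.user_id = u.user_id
    GROUP BY u.user_id ORDER BY u.user_id
""").fetchall()

print(sub == join, sub)   # True [(1, 350), (2, 40)]
```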

In an ideal world subqueries would be transformed into the equivalent join (or
some more general join structure that can cover both sets of semantics) and
then planned through the same code path. In an ideal world the user should be
guaranteed that equivalent queries would always result in the same plan
regardless of how they're written.

--
greg
