| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> | 
|---|---|
| To: | Chris Browne <cbbrowne(at)acm(dot)org> | 
| Cc: | pgsql-performance(at)postgresql(dot)org | 
| Subject: | Re: Very long SQL strings | 
| Date: | 2007-06-21 20:59:18 | 
| Message-ID: | 5574.1182459558@sss.pgh.pa.us | 
| Lists: | pgsql-performance | 
Chris Browne <cbbrowne(at)acm(dot)org> writes:
> I once ran into a situation where Slony-I generated a query that
> made the parser blow out (some sort of memory problem, apparently
> running out of stack space); the query was just short of 640K long,
> so we figured it was evidently wrong to conclude that "640K ought to
> be enough for anybody."
> Neil Conway was an observer; he was speculating that, with some
> (possibly nontrivial) change to the parser, we should have been able
> to cope with it.
> The query consisted mostly of a NOT IN clause whose list had an
> atrocious number of entries (all integers).
FWIW, we do seem to have improved that as of 8.2.  Assuming your entries
were 6-or-so-digit integers, that would have been on the order of 80K
entries, and we can manage it --- not amazingly fast, but it doesn't
blow out the stack anymore.
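[Editor's note: a quick sanity check of that arithmetic. Each 6-digit
entry plus its ", " separator is about 8 bytes, so 80K entries come to
roughly 640K of SQL text. A minimal sketch in Python; the table and
column names here are made up for illustration and are not from the
original Slony-I query.]

```python
# Rough scale check: ~80K six-digit integers in a NOT IN list
# comes out near the 640K query size mentioned above.
ids = range(100000, 180000)            # 80,000 six-digit integers
in_list = ", ".join(str(i) for i in ids)
query = f"SELECT * FROM log WHERE xid NOT IN ({in_list});"
print(len(query))                      # ~640,000 characters of SQL
```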
> (Aside: I wound up writing a "query compressor" (now in 1.2) which
> would read that list and, if it was at all large, try to squeeze any
> runs of consecutive integers into "NOT BETWEEN" clauses.  Usually
> the lists of XIDs were more or less consecutive, and frequently, in
> the cases where the query got to MBs in size, there would be runs of
> hundreds or even thousands of consecutive integers, leaving us with
> a tiny query after this...)
Probably still a win.
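[Editor's note: a minimal sketch of the compression idea described in
the aside above. This is an illustration of the technique, not
Slony-I's actual implementation; the function name and the "xid"
column are hypothetical, and it assumes a non-empty list of integers.]

```python
# Collapse runs of consecutive integers into BETWEEN ranges so a
# huge NOT IN list becomes a handful of NOT BETWEEN clauses.
def compress_not_in(column, values):
    """Return a WHERE fragment equivalent to `column NOT IN (values)`."""
    vals = sorted(set(values))         # assumes at least one value
    runs, start, prev = [], vals[0], vals[0]
    for v in vals[1:]:
        if v == prev + 1:              # extend the current run
            prev = v
        else:                          # run broken; start a new one
            runs.append((start, prev))
            start = prev = v
    runs.append((start, prev))
    clauses = [
        f"{column} <> {lo}" if lo == hi
        else f"{column} NOT BETWEEN {lo} AND {hi}"
        for lo, hi in runs
    ]
    return " AND ".join(clauses)

# A mostly-consecutive XID list collapses to two clauses:
print(compress_not_in("xid", list(range(5000, 9000)) + [9500]))
# -> xid NOT BETWEEN 5000 AND 8999 AND xid <> 9500
```

As the aside notes, when the XIDs are mostly consecutive this turns a
multi-megabyte IN-list into a few range comparisons, which is why the
compressed query is "tiny" even when the original was enormous.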
regards, tom lane