From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Bill Thoen <bthoen(at)gisnet(dot)com>
Cc: Andrej Ricnik-Bay <andrej(dot)groups(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: PG Seg Faults Performing a Query
Date: 2007-08-22 14:33:06
Message-ID: 7379.1187793186@sss.pgh.pa.us
Lists: pgsql-general, pgsql-hackers

Bill Thoen <bthoen(at)gisnet(dot)com> writes:
> My PostgreSQL is working great for small SQL queries even from my large
> table (18 million records). But when I ask it to retrieve anything that
> takes it more than 10 minutes to assemble, it crashes with this
> "Segmentation Fault" error. I get so little feedback and I'm still pretty
> unfamiliar with Postgresql that I don't even know where to begin.
Running the client under gdb and getting a stack trace would be a good
place to begin.
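(A sketch of what that looks like; "myprog" is a placeholder for whatever
client binary is actually crashing:

```shell
$ gdb ./myprog
(gdb) run
...
Program received signal SIGSEGV, Segmentation fault.
(gdb) bt
```

The output of "bt" is the stack trace to post.)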
FWIW, when I deliberately try to read a query result that's too large
for client memory, I get reasonable behavior:
regression=# select x, y, repeat('xyzzy',200) from generate_series(1,10000) x, generate_series(1,100) y;
out of memory for query result
regression=#
If you're seeing a segfault in psql then it sounds like a PG bug. If
you're seeing a segfault in a homebrew program then I wonder whether
it's properly checking for an error return from libpq ...
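(For illustration, a minimal sketch of the checks a libpq client should
make; the connection string and query are placeholders. PQexec can return
NULL when it runs out of memory, and dereferencing that NULL is exactly
the sort of thing that produces a segfault:

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("");     /* params taken from environment */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    PGresult *res = PQexec(conn, "SELECT * FROM bigtable");

    /* PQexec may return NULL (e.g., out of memory) -- check before use */
    if (res == NULL || PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);                   /* PQclear(NULL) is harmless */
        PQfinish(conn);
        return 1;
    }

    printf("%d rows\n", PQntuples(res));
    PQclear(res);
    PQfinish(conn);
    return 0;
}
```

A program that skips the NULL/PQresultStatus checks will crash instead of
reporting "out of memory for query result" the way psql does above.)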
regards, tom lane