> Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp> writes:
> > Kenji Sugita has identified a problem with cost_sort() in costsize.c.
> > In the following code fragment, sortmembytes is defined as long. So
> > double nruns = nbytes / (sortmembytes * 2);
> > may cause an integer overflow if sortmembytes exceeds 2^30, which in
> > turn makes the optimizer produce a wrong query plan (this actually
> > happened in a large PostgreSQL installation that has tons of memory).
> I find it really really hard to believe that it's wise to run with
> sort_mem exceeding 2 gig ;-). Does that installation have so much
> RAM that it can afford to run multiple many-Gb sorts concurrently?
The process is assigned 1 gig of sort memory to speed up a batch job,
using a backend-process-only sort_mem setting, and they do not modify
postgresql.conf for ordinary users.

BTW it is not 2 gig but 1 gig (remember that we do sortmembytes * 2).
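
Just to make the failure mode concrete, here is a minimal standalone
sketch (variable names and values are mine, not the actual costsize.c
code); it assumes a platform where long is 32 bits:

#include <stdio.h>

int
main(void)
{
    /* stand-in values: 1 GB of sort memory, 8 GB of data to sort */
    long   sortmembytes = 1024L * 1024L * 1024L;   /* 2^30 bytes */
    double nbytes = 8.0e9;

    /*
     * With a 32-bit long, sortmembytes * 2 is 2^31 and wraps to a
     * negative value, so nruns comes out as garbage and the planner
     * costs the sort wrongly.
     */
    double nruns_buggy = nbytes / (sortmembytes * 2);

    /* Doing the arithmetic in double avoids the wraparound. */
    double nruns_fixed = nbytes / ((double) sortmembytes * 2.0);

    printf("buggy nruns = %g, fixed nruns = %g\n",
           nruns_buggy, nruns_fixed);
    return 0;
}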
> This is far from being the only place that multiplies SortMem by 1024.
> My inclination is that a safer fix is to alter guc.c's entry for
> SortMem to establish a maximum value of INT_MAX/1024 for the variable.
> Probably some of the other GUC variables like shared_buffers ought to
> have overflow-related maxima established, too.
> regards, tom lane
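
For what it's worth, here is a rough sketch of the bound being
proposed (the helper below is hypothetical; the actual change would
just set the maximum in guc.c's table entry for SortMem).  sort_mem is
given in kilobytes, so anything above INT_MAX / 1024 makes
SortMem * 1024 overflow a 32-bit int; code that multiplies further,
like the * 2 in cost_sort(), would presumably still want to do that
part in double as sketched above:

#include <limits.h>

/*
 * Illustration only -- not the real guc.c mechanism.  The largest
 * setting for which sort_mem_kb * 1024 still fits in a 32-bit int is
 * INT_MAX / 1024 (2097151 kB, just under 2 GB).
 */
static int
clamp_sort_mem(int sort_mem_kb)
{
    const int max_sort_mem_kb = INT_MAX / 1024;

    return (sort_mem_kb > max_sort_mem_kb) ? max_sort_mem_kb : sort_mem_kb;
}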