Re: [HACKERS] sort on huge table

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: t-ishii(at)sra(dot)co(dot)jp
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] sort on huge table
Date: 1999-11-02 05:31:04
Message-ID: 10624.941520664@sss.pgh.pa.us
Lists: pgsql-hackers

Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp> writes:
> I have compared current with 6.5 using a 1,000,000-tuple table (243MB)
> (I wanted to try a 2GB+ table, but 6.5 does not work in that case). The
> result was strange in that current is *faster* than 6.5!

> RAID5
> current 2:29
> 6.5.2 3:15

> non-RAID
> current 1:50
> 6.5.2 2:13

> Seems my previous testing was done the wrong way, or perhaps the
> behavior of sorting differs as the table size changes?

Well, I feel better now, anyway ;-). I thought that my first cut
ought to have been about the same speed as 6.5, and after I added
the code to slurp up multiple tuples in sequence, it should've been
faster than 6.5. The above numbers seem to be in line with that
theory. Next question: is there some additional effect that comes
into play once the table size gets really huge? I am thinking maybe
there's some glitch affecting performance once the temp file size
goes past one segment (1GB). Tatsuo, can you try sorts of, say,
0.9GB and 1.1GB to see if something bad happens at 1GB? I could
try rebuilding here with a small RELSEG_SIZE, but right at the
moment I'm not certain I'd see the same behavior you do...

regards, tom lane
