Re: Large Tables(>1 Gb)

From: mikeo <mikeo(at)spectrumtelecorp(dot)com>
To: "Mitch Vincent" <mitch(at)venux(dot)net>, "Jeffery Collins" <collins(at)onyx-technologies(dot)com>, <Fred_Zellinger(at)seagate(dot)com>
Cc: <pgsql-general(at)hub(dot)org>
Subject: Re: Large Tables(>1 Gb)
Date: 2000-06-30 14:47:45
Message-ID: 3.0.1.32.20000630104745.009374e0@pop.spectrumtelecorp.com
Lists: pgsql-general

We use cursors and they perform well for us for selects.
Our largest table is just over 7.5 GB, containing 38 million+
rows, but we have a lot of tables over 1 GB.
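
For anyone who hasn't tried them, here's a minimal sketch of the
cursor pattern in psql (the table and cursor names are made up for
illustration). Note that a cursor has to live inside a transaction
block:

    BEGIN;
    DECLARE big_cur CURSOR FOR SELECT * FROM mytable;

    -- repeat this until it returns zero rows
    FETCH FORWARD 100 FROM big_cur;

    CLOSE big_cur;
    COMMIT;

Each FETCH pulls only the next batch of rows across the connection,
so the client never has to hold the whole table in memory.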

mikeo

At 10:19 AM 6/30/00 -0400, Mitch Vincent wrote:
>You could also use LIMIT and OFFSET. That's what I do (though my database
>isn't at a gigabyte yet).
>
>Maybe using a CURSOR is better, I'm not sure...
>
>-Mitch
>
>----- Original Message -----
>From: Jeffery Collins <collins(at)onyx-technologies(dot)com>
>To: <Fred_Zellinger(at)seagate(dot)com>
>Cc: <pgsql-general(at)hub(dot)org>
>Sent: Friday, June 30, 2000 8:47 AM
>Subject: Re: [GENERAL] Large Tables(>1 Gb)
>
>
>> Fred_Zellinger(at)seagate(dot)com wrote:
>>
>> > (I have a 500 MHz Pentium III with 256 MB RAM and UW SCSI, running
>> > Linux kernel 2.2.9 with libc-2.1.2.
>> > I am running Postgres 7.0, which I compiled myself.)
>> >
>> > So, I created a database, a table, and started dumping data into it.
>> > Then I added an index on the table. Life was good.
>> >
>> > After a few weeks, my table eclipsed approximately 1 GB, and when I
>> > looked at it in my PG_DATA/database directory, I noticed that there
>> > were two files: MYTABLE and MYTABLE.1. I was curious why this
>> > happened, but I figured that Postgres must break up tables over 1 GB
>> > into multiple files. (Right?)
>> >
>> > Then, while running psql, I did a "select * from MYTABLE;". Well,
>> > psql just sits there while the hard drive light blinks like crazy,
>> > pulling the table up into memory. I have 256 MB of RAM, so this takes
>> > a while. When I start up "top" and watch my process table, the
>> > postgres backend is sucking up the CPU time pulling the data and the
>> > psql frontend is sucking up the memory accepting the results.
>> >
>> > Fred
>>
>> Okay, I didn't laugh the entire time...
>>
>> I suggest you take a look at cursors. I have the same situation. There
>> are times I will need to select my entire >2 GB table, but instead of
>> doing:
>>
>> SELECT * FROM mytable ;
>>
>> I do
>>
>> DECLARE tmp CURSOR FOR SELECT * FROM mytable ;
>>
>> do {
>>     FETCH FORWARD 100 FROM tmp ;
>> } while there are rows left.
>>
>> This only pulls 100 rows (or whatever number you specify) into memory
>> at a time.
>>
>> Jeff
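
For comparison, the LIMIT/OFFSET approach Mitch mentions above would
look something like the following (again, the table and column names
are made up for illustration). Note that the backend still has to walk
past and throw away all the skipped rows, so each page gets slower as
OFFSET grows, and you want an ORDER BY to keep the paging stable:

    SELECT * FROM mytable ORDER BY id LIMIT 100 OFFSET 0;
    SELECT * FROM mytable ORDER BY id LIMIT 100 OFFSET 100;
    -- ...and so on, increasing OFFSET by 100 until no rows come back.

For a one-shot scan of a big table, a cursor is usually the better
fit; LIMIT/OFFSET is handier when you need random access to pages.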
