From: | Curt Sampson <cjs(at)cynic(dot)net> |
---|---|
To: | Yuva Chandolu <ychandolu(at)ebates(dot)com> |
Cc: | "'pgsql-hackers(at)postgresql(dot)org'" <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Will postgress handle too big tables? |
Date: | 2002-06-11 05:45:27 |
Message-ID: | Pine.NEB.4.43.0206111427370.3382-100000@angelic.cynic.net |
Lists: | pgsql-hackers |
On Mon, 10 Jun 2002, Yuva Chandolu wrote:
> We are moving to Postgres from Oracle. We have a few tables that have around
> 8 to 10 million rows, and their size increases very rapidly (deletions on
> these tables are rare). How will Postgres handle very big tables like
> this?
Uh..."what big tables?" :-)
Have a look back through the archives. I'm mucking about quite
happily with 500 million row tables, without much difficulty.
I've found that my main barrier is disk I/O. If you're doing it on a
little dual-IDE disk system as I am, things just ain't so fast. I'm
hoping that in the next couple of weeks I get the go-ahead to put
together a system with ten or so disks (based around a 3ware Escalade
IDE RAID controller) that will make trillion-row tables quite practical.
> Or would it be very slow when compared to Oracle? Do you have any case
> studies in this regard?
It depends entirely on the application. Really. Some applications
will work just as well on Postgres as they do on Oracle; others
will be almost impossible with Postgres.
cjs
--
Curt Sampson <cjs(at)cynic(dot)net> +81 90 7737 2974 http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light. --XTC