
Re: [PERFORM] Postgres and really huge tables

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Brian Hurt <bhurt(at)janestcapital(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org, pgsql-advocacy(at)postgresql(dot)org
Subject: Re: [PERFORM] Postgres and really huge tables
Date: 2007-01-18 21:52:58
Message-ID: 6593.1169157178@sss.pgh.pa.us
Lists: pgsql-advocacy, pgsql-performance
Brian Hurt <bhurt(at)janestcapital(dot)com> writes:
> Is there any experience with Postgresql and really huge tables?  I'm 
> talking about terabytes (plural) here in a single table.

The 2MASS sky survey point-source catalog
http://www.ipac.caltech.edu/2mass/releases/allsky/doc/sec2_2a.html
is 470 million rows by 60 columns; I don't have it loaded up but
a very conservative estimate would be a quarter terabyte.  (I've
got a copy of the data ... 5 double-sided DVDs, gzipped ...)
I haven't heard from Rae Stiening recently but I know he's been using
Postgres to whack that data around since about 2001 (PG 7.1 or so,
which is positively medieval compared to current releases).  So at
least for static data, it's certainly possible to get useful results.
What are your processing requirements?
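
A minimal sketch, assuming a hypothetical table named point_source_catalog, of
how the on-disk footprint could be sanity-checked once the data is loaded;
pg_relation_size reports the heap alone, while pg_total_relation_size adds
indexes and TOAST:

    -- Rough arithmetic behind the estimate: ~0.25 TB over 470 million rows
    -- is on the order of 500-600 bytes per row, i.e. roughly 9-10 bytes per
    -- column across 60 columns, before indexes.
    SELECT pg_size_pretty(pg_relation_size('point_source_catalog'));
    SELECT pg_size_pretty(pg_total_relation_size('point_source_catalog'));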

			regards, tom lane
