Re: hundreds of millions row dBs

From: Tom Lane <tgl@sss.pgh.pa.us>
To: "Dann Corbit" <DCorbit@connx.com>
Cc: "Wes" <wespvp@syntegra.com>, "Guy Rouillier" <guyr@masergy.com>, pgsql-general@postgresql.org, "Greer, Doug [NTK]" <doug.r.greer@mail.sprint.com>
Subject: Re: hundreds of millions row dBs
Date: 2005-01-04 21:36:36
Message-ID: 20464.1104874596@sss.pgh.pa.us
Lists: pgsql-general

"Dann Corbit" <DCorbit(at)connx(dot)com> writes:
> Here is an instance where a really big ram disk might be handy.
> You could create a database on a big ram disk and load it, then build
> the indexes.
> Then shut down the database and move it to hard disk.

Actually, if you have a RAM disk, just change the $PGDATA/base/nnn/pgsql_tmp
subdirectory into a symlink to some temp directory on the RAM disk.
Should get you pretty much all the win with no need to move stuff around
afterwards.

You have to be sure the RAM disk is bigger than your biggest index, though.
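
For illustration, a minimal Python sketch of that symlink swap is below. It assumes the server is stopped (or at least has no sorts in flight), and every path and the database OID (the "nnn" above) are invented example values, not anything taken from this thread.

    # Hypothetical sketch: point one database's pgsql_tmp at a RAM disk.
    # All paths and the OID below are made-up examples.
    import os
    import shutil

    pgdata = "/var/lib/pgsql/data"          # example $PGDATA location
    db_oid = "17230"                        # example database OID (the "nnn")
    ram_tmp = "/mnt/ramdisk/pgsql_tmp"      # example temp dir on the RAM disk

    pgsql_tmp = os.path.join(pgdata, "base", db_oid, "pgsql_tmp")

    os.makedirs(ram_tmp, exist_ok=True)

    # Drop the existing temp directory (empty while the server is idle) ...
    if os.path.isdir(pgsql_tmp) and not os.path.islink(pgsql_tmp):
        shutil.rmtree(pgsql_tmp)

    # ... and replace it with a symlink to the RAM disk.
    os.symlink(ram_tmp, pgsql_tmp)

Undoing it is just removing the symlink and recreating an empty pgsql_tmp directory in its place.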

regards, tom lane
