On Mon, Sep 18, 2006 at 07:14:56PM -0400, Alex Turner wrote:
>If you have a table with 100 million records, each of which is 200 bytes long,
>that gives you roughly 20 gig of data (assuming it was all written neatly
>and hasn't been updated much).
If you're in that range, it doesn't even count as big or challenging: you
can keep it memory resident for not all that much money.
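For reference, a quick sketch of the back-of-envelope arithmetic behind the quoted figure (the row count and row size are taken from the quote above; the GiB conversion is just an added convenience):

```python
# Back-of-envelope table size from the figures quoted above.
rows = 100_000_000   # 100 million records
row_bytes = 200      # 200 bytes per record

total_bytes = rows * row_bytes
print(total_bytes / 10**9)  # 20.0 (decimal GB)
print(total_bytes / 2**30)  # ~18.6 (binary GiB)
```

Either way you count it, the whole table fits comfortably in RAM on commodity server hardware.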