From: "Dan Browning" <danb(at)cyclonecomputers(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: Large database help
Date: 2001-04-24 06:10:28
Message-ID: 9c35de$1l4p$1@news.tht.net
Lists: pgsql-admin
> and dropped into a single table, which will become ~20GB. Analysis happens
> on a Windows client (over a network) that queries the data in chunks
> across parallel connections. I'm running the DB on a dual gig p3 w/ 512
> memory under Redhat 6 (.0 I think). A single index exists that gives the
> best case for lookups, and the table is clustered against this index.
Sorry for my ignorant question, but I think I'll learn if I ask it:
Wouldn't one *expect* heavy disk activity when querying a 20GB table on a
system with only 512MB of RAM? Does the same thing happen with, say, 300MB
of data?
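
If it helps, one quick way to check would be to build a ~300MB copy of the
table and run the same queries against it. A rough sketch, where big_table,
lookup_col, and the row count are placeholders for your actual schema:

    -- Placeholder names; adjust the LIMIT until the copy is ~300MB on disk.
    CREATE TABLE sample_300mb AS
        SELECT * FROM big_table LIMIT 2000000;

    -- Rebuild the same access path on the copy (7.x CLUSTER syntax).
    CREATE INDEX sample_300mb_idx ON sample_300mb (lookup_col);
    CLUSTER sample_300mb_idx ON sample_300mb;
    VACUUM ANALYZE sample_300mb;

If the 300MB copy stays quiet while the 20GB table thrashes, that would
point at the working set simply not fitting in RAM.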
-Clueless in Seattle,
Dan B.