Re: Table partition for very large table

From: Michael Fuhr <mike(at)fuhr(dot)org>
To: Yudie Gunawan <yudiepg(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Table partition for very large table
Date: 2005-03-28 17:59:10
Message-ID: 20050328175909.GA71569@winnie.fuhr.org
Lists: pgsql-general

On Mon, Mar 28, 2005 at 11:32:04AM -0600, Yudie Gunawan wrote:

> I have table with more than 4 millions records and when I do select
> query it gives me "out of memory" error.

What's the query and how are you issuing it? Where are you seeing
the error? This could be a client problem: the client might be
trying to fetch all rows before doing anything with them, thereby
exhausting all memory. If that's the case then a cursor might be
useful.
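
For example, something along these lines (run inside a transaction;
"mytable" and "bigscan" here are just placeholder names) would let the
client pull the rows in batches instead of loading them all at once:

    BEGIN;
    DECLARE bigscan CURSOR FOR SELECT * FROM mytable;
    FETCH 1000 FROM bigscan;   -- repeat until no more rows come back
    CLOSE bigscan;
    COMMIT;

Each FETCH returns only the next batch, so the client's memory use stays
bounded regardless of how many rows the query matches.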

> Does postgres has feature like table partition to handle table with
> very large records.

Let's identify the problem before guessing how to fix it.

--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
