Re: Deleting millions of rows

From: "Jerry Champlin" <jchamplin(at)absolute-performance(dot)com>
To: "'Brian Cox'" <brian(dot)cox(at)ca(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Deleting millions of rows
Date: 2009-02-02 19:15:29
Message-ID: 05d401c9856a$9c8b4410$d5a1cc30$@com
Lists: pgsql-performance

One approach we use for large tables is to partition and then drop
partitions as the data becomes obsolete. This way the problem of slow
bulk deletes never arises. Our general rule is to never delete data
row-by-row from a large table because it is too slow. We have found this
to be the preferred approach regardless of database platform.
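[Editor's note: on 8.3, the era of this thread, partitioning was built by hand with table inheritance; PostgreSQL 10+ supports it declaratively. A minimal sketch of the partition-and-drop pattern using declarative range partitioning; the events table and its columns are illustrative, not from the original post:]

```sql
-- Declarative range partitioning (PostgreSQL 10+). Table and column
-- names are hypothetical examples.
CREATE TABLE events (
    event_time  timestamptz NOT NULL,
    payload     text
) PARTITION BY RANGE (event_time);

CREATE TABLE events_2009_01 PARTITION OF events
    FOR VALUES FROM ('2009-01-01') TO ('2009-02-01');

CREATE TABLE events_2009_02 PARTITION OF events
    FOR VALUES FROM ('2009-02-01') TO ('2009-03-01');

-- When January's data becomes obsolete, dropping the partition is a
-- quick metadata operation: no row-by-row deletion, no dead tuples,
-- no vacuum debt.
DROP TABLE events_2009_01;
```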


Jerry Champlin|Absolute Performance Inc.|Mobile:  303-588-2547

-----Original Message-----
From: pgsql-performance-owner(at)postgresql(dot)org
[mailto:pgsql-performance-owner(at)postgresql(dot)org] On Behalf Of Brian Cox
Sent: Monday, February 02, 2009 11:18 AM
To: pgsql-performance(at)postgresql(dot)org
Subject: [PERFORM] Deleting millions of rows

I'm using 8.3.5. Table ts_defects has 48M rows. Through psql:
delete from ts_defects;
Result: out of memory/Can't allocate size: 32
I then did 10 or so smaller deletes to get rid of the rows. Afterwards,
inserts into or queries on this table performed significantly slower. I
tried a vacuum analyze, but this didn't help. To fix this, I dumped and
restored the database.

1) why can't postgres delete all rows in a table if it has millions of rows?
2) is there any other way to restore performance other than restoring
the database?
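[Editor's note: two remedies commonly suggested for this situation, sketched here under the assumption that the whole table is being emptied; the batch predicate below is illustrative, not from the original post:]

```sql
-- If the goal is to empty ts_defects entirely, TRUNCATE skips
-- row-by-row deletion (and any per-row trigger bookkeeping that can
-- exhaust memory) and reclaims the space immediately:
TRUNCATE TABLE ts_defects;

-- If only some rows must go, deleting in bounded batches and
-- vacuuming between them keeps memory use and dead-tuple bloat in
-- check. The LIMIT-by-ctid pattern here is one generic way to batch;
-- a real key or date column would normally be used instead:
DELETE FROM ts_defects
 WHERE ctid IN (SELECT ctid FROM ts_defects LIMIT 100000);
VACUUM ANALYZE ts_defects;
```

Note that plain VACUUM only marks dead rows as reusable; after deleting most of a 48M-row table, the on-disk bloat remains, which is why queries stayed slow until the dump and restore rewrote the table compactly.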


