
Re: DELETE with LIMIT (or my first hack)

From: Daniel Loureiro <loureirorg(at)gmail(dot)com>
To: Jaime Casanova <jaime(at)2ndquadrant(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: DELETE with LIMIT (or my first hack)
Date: 2010-11-30 02:55:00
Message-ID: AANLkTim+dhH9FQ4K8KPTUA3YqheC5xs_akvmkko58aAD@mail.gmail.com
Lists: pgsql-hackers
Good point. But when you use LIMIT in a SELECT statement you WANT n random
tuples -- is it wrong to get random tuples? By the same logic, is it wrong
to delete n random tuples? Besides, if you want to DELETE just one tuple, why
does the executor have to scan the entire table instead of stopping after it
finds that tuple? Why should the LIMIT clause be used to speed up only SELECT
statements? If the programmer knows the expected number of affected rows, why
not use it to speed up DELETE/UPDATE as well?
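
(For reference, the effect being asked for can already be approximated in stock PostgreSQL with a ctid subquery. This is a sketch, not part of the original thread; the table `t` and column `id` are hypothetical names for illustration:)

```sql
-- Delete at most 5 matching rows; WHICH 5 is unspecified,
-- mirroring the "n random tuples" semantics of SELECT ... LIMIT.
DELETE FROM t
WHERE ctid IN (
    SELECT ctid
    FROM t
    WHERE id < 100
    LIMIT 5
);
```

The inner SELECT stops after collecting 5 physical row identifiers, so the outer DELETE touches at most 5 rows rather than scanning for every match.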

cheers,
--
Daniel Loureiro
http://diffcoder.blogspot.com/

2010/11/30 Jaime Casanova <jaime(at)2ndquadrant(dot)com>

> On Mon, Nov 29, 2010 at 9:08 PM, Daniel Loureiro <loureirorg(at)gmail(dot)com>
> wrote:
> >
> > 3) change the executor to stop after “n” successful iterations. Is
> > this correct ?
> >
>
> No: it means you will delete whichever n tuples happen to be found
> first. Without a WHERE clause, it is very possible you will delete
> something you don't want to. The correct approach is to always run
> DELETEs inside a transaction, and issue a COMMIT only after you have
> verified that the right thing happened.
>
> Besides, I think this has been proposed and rejected before.
>
> --
> Jaime Casanova         www.2ndQuadrant.com
> Professional PostgreSQL: Soporte y capacitación de PostgreSQL
>
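
(Jaime's recommended workflow -- run the DELETE in a transaction and commit only after checking the result -- can be sketched as follows; the table `t` and its predicate are hypothetical:)

```sql
BEGIN;

DELETE FROM t WHERE name LIKE 'temp%';
-- psql reports "DELETE n" here; check that n matches what you
-- expected, and optionally re-inspect the data with a SELECT.

COMMIT;    -- or ROLLBACK; if the affected-row count looks wrong
```

Until COMMIT is issued, other sessions still see the original rows, so a mistaken DELETE costs nothing but a ROLLBACK.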

