From: Ron Johnson <ron(dot)l(dot)johnson(at)cox(dot)net>
To: Postgres general mailing list <pgsql-general(at)postgresql(dot)org>
Subject: Re: Delete/update with limit
Date: 2007-07-23 20:26:27
Message-ID: 46A50EF3.8010909@cox.net
Lists: pgsql-general
On 07/23/07 10:56, Csaba Nagy wrote:
> Hi all,
>
> This subject has been touched on a few times in the past; I looked into
> the archives... the result is invariably key developers saying such a
> feature is unsafe because the result is unpredictable, while the people
> requesting it say that is OK and expected... but no compelling use case
> for it has been given.
>
[snip]
>
> Now I don't put much hope in convincing anybody that a limit on the
> delete/update commands has valid usage scenarios. But can anybody help
> me find a good solution to chunk-wise process such a buffer table,
> where insert speed is the highest priority (thus no indexes, the
> minimum of fields)? Batch processing should still work fine with a big
> table, not impact the inserts at all, and finish quickly to avoid
> long-running transactions. I can't really think of one... other than
> our scheme with the delete-with-limit + trigger + private temp table
> thing.
Maybe add OIDs to the table, and delete based on the OID number?
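
Something along these lines, as a rough sketch. It assumes the buffer table (here called "queue_tab", a hypothetical name) was created WITH OIDS, and it sidesteps the lack of LIMIT on DELETE by limiting the subselect instead:

```sql
-- Hypothetical buffer table; OIDs must be enabled explicitly.
CREATE TABLE queue_tab (payload text) WITH OIDS;

-- Delete one chunk of rows, identified by their OIDs.
-- The subselect picks an arbitrary batch, much like DELETE ... LIMIT would.
DELETE FROM queue_tab
 WHERE oid IN (SELECT oid FROM queue_tab LIMIT 1000);
```

Caveat: OIDs are 32-bit counters shared across the cluster, so they wrap around and are not guaranteed unique without a unique index on the oid column.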
--
Ron Johnson, Jr.
Jefferson LA USA
Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!