From: | "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com> |
---|---|
To: | Shelby Cain <alyandon(at)yahoo(dot)com> |
Cc: | Csaba Nagy <nagy(at)ecircle-ag(dot)com>, SCassidy(at)overlandstorage(dot)com, Postgres general mailing list <pgsql-general(at)postgresql(dot)org>, pgsql-general-owner(at)postgresql(dot)org |
Subject: | Re: allow LIMIT in UPDATE and DELETE |
Date: | 2006-05-22 22:04:51 |
Message-ID: | 20060522220451.GZ64371@pervasive.com |
Lists: pgsql-general
On Fri, May 19, 2006 at 10:25:19AM -0700, Shelby Cain wrote:
> ----- Original Message ----
> >From: Csaba Nagy <nagy(at)ecircle-ag(dot)com>
> >To: Shelby Cain <alyandon(at)yahoo(dot)com>
> >Cc: SCassidy(at)overlandstorage(dot)com; Postgres general mailing list <pgsql-general(at)postgresql(dot)org>; pgsql-general-owner(at)postgresql(dot)org
> >Sent: Friday, May 19, 2006 11:46:42 AM
> >Subject: Re: [GENERAL] allow LIMIT in UPDATE and DELETE
> >
> >Well, sometimes it's not that easy. How would you handle a batch
> >processing system which stores the incoming requests in a queue table in
> >the database, and then periodically processes a batch of them, with the
> >additional constraint that it is allowed to process at most 1000 at a
> >time so it won't produce an overly long-running transaction? Suppose the
> >processing is quite costly, and the queue can have bursts of incoming
> >requests which then have to be slowly processed... the requests are
> >coming from the web and must be processed asynchronously, and the insert
> >into the database must be very fast.
>
> I can't imagine a case where a properly tuned PostgreSQL installation with appropriate hardware backing it couldn't handle that particular kind of workload pattern. However, I usually work with Oracle, so tables used as queues don't have the same performance issues you'd run into with PostgreSQL.
Just try to keep (what should stay) a small queue table in the same
database as long-running reporting transactions. As long as a
long-running report is going you might as well suspend all vacuuming on
that queue table, because it won't do you any good; the report
transaction means that vacuum can't remove anything.
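If you want to see what's pinning those dead rows, something like this
shows the oldest open transactions (a sketch against pg_stat_activity;
the column names are the modern ones, so adjust for your version):

-- List the oldest open transactions; any of these can hold back
-- vacuum on the queue table.
-- (pid, xact_start, state and query are modern column names; older
--  releases use procpid/current_query and lack xact_start.)
SELECT pid, now() - xact_start AS xact_age, state, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 10;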
I've seen a case where a queue table should always fit into a single
database page, two at most. But because some transactions will run for a
minute or two, that table is normally about 40 pages, almost entirely
dead space. Of course the same problem affects all the indexes on
that table as well.
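You can measure that bloat directly; a rough sketch, assuming a
made-up table name and the statistics columns of modern PostgreSQL
(pg_relation_size, n_live_tup, n_dead_tup):

-- Pages occupied by the queue table vs. live/dead row counts.
-- 8192 is the default block size.
SELECT pg_relation_size('request_queue') / 8192 AS pages,
       n_live_tup,
       n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'request_queue';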
I can't imagine how bad this would be if the database actually had
hour-long reports that had to run... and luckily the system is quiet at
night when pg_dump runs.
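FWIW, even without LIMIT on DELETE itself, Csaba's batch-of-1000
dequeue can be approximated by pushing the LIMIT into a subquery on
the key. A sketch only, with a hypothetical table:

-- Hypothetical queue table:
--   CREATE TABLE request_queue (id serial PRIMARY KEY, payload text);

-- Dequeue at most 1000 requests per batch, keeping each
-- transaction short.
BEGIN;
DELETE FROM request_queue
WHERE id IN (SELECT id FROM request_queue ORDER BY id LIMIT 1000);
COMMIT;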
> Regardless, this type of queue problem can also be tackled by having your data layer persist the input from the web in memory (which maintains a low perceived response time for the client) and post to the table as fast as the database allows.
Uh, and just what happens when your web front-end crashes then??
--
Jim C. Nasby, Sr. Engineering Consultant jnasby(at)pervasive(dot)com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461