With some experimentation, it seems critical that the SELECT statement use
the exact same order of columns as the table (which means I have to fill in
NULL values and the like). That is an acceptable nuisance, but I thought
I'd ask in case there's a better way.
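(For what it's worth, naming the target columns in the INSERT itself usually avoids this: the SELECT then only has to match the named list, not the table's full column order, and omitted columns take their defaults or NULL. A minimal sketch — the table `foo` and the source table `staging` here are hypothetical stand-ins for the real schema:)

```sql
-- Hypothetical tables; naming the target columns means the SELECT
-- only has to line up with this list, not the table's definition.
-- Columns of foo not listed here get their default value (or NULL).
INSERT INTO foo (id, name)
SELECT s.id, s.name
FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM foo f WHERE f.id = s.id);
```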
I'm not sure this approach will work anyway, though, since I have
concurrency (about a dozen API servers constantly writing to the database).
Locking tables seems like a bad idea in this case. What would happen if I
didn't lock and I tried this? It seems like it should just throw the same
error I'm already used to getting, though hopefully with less frequency (as
it would only occur if the same insert was attempted twice simultaneously).
Is there any chance I could actually end up getting dupes if I tried this
without a lock?
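(Sketching the interleaving I'm worried about — keys and table are hypothetical. Without a lock, each session's NOT EXISTS check runs against a snapshot that cannot see the other session's uncommitted insert, so both checks can pass; only a unique constraint would make one of them fail:)

```sql
-- Hypothetical interleaving of two concurrent sessions inserting key 42.
-- Session A: INSERT INTO foo SELECT 42, ...
--              WHERE NOT EXISTS (SELECT 1 FROM foo WHERE id = 42);
--            -- sees no row for 42, so it inserts one (not yet committed)
-- Session B: runs the same statement before A commits;
--            A's uncommitted row is invisible to B's snapshot,
--            so B's NOT EXISTS also passes and B inserts a second row.
-- Result: duplicate rows for id = 42, unless a unique constraint
-- exists, in which case the slower session gets a unique_violation:
ALTER TABLE foo ADD CONSTRAINT foo_id_key UNIQUE (id);
```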
On Tue, Feb 14, 2012 at 6:43 AM, Merlin Moncure <mmoncure(at)gmail(dot)com> wrote:
> If your insertion process is single threaded (you don't have to worry
> about concurrent inserts on the same key), convert your INSERT ...
> VALUES to an INSERT ... SELECT ... WHERE NOT EXISTS().
> If you have some concurrency, but little enough that you can
> serialize all your inserts, you can do the above like this:
> LOCK foo;
> INSERT INTO FOO SELECT ... WHERE NOT EXISTS().
> One reason to maybe not do that is if you have a high latency
> connection to the database and your client api does not support
> sending statements in batches.
> Finally, if you have a lot of concurrency, you have to do the try
> insert/loop on failure method on the client (which pollutes the log)
> or the server (which does not, at least in plpgsql).
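(The server-side variant Merlin mentions can look roughly like this — a sketch only, assuming a table `foo` with a unique constraint on `id`; the function name and parameters are made up for illustration. Trapping `unique_violation` in PL/pgSQL keeps the failed attempt out of the server log:)

```sql
-- Sketch: insert-if-not-exists that tolerates concurrent writers.
-- Requires a UNIQUE constraint (or unique index) on foo.id, which is
-- what raises unique_violation when another session wins the race.
CREATE OR REPLACE FUNCTION insert_foo(p_id integer, p_name text)
RETURNS void AS $$
BEGIN
    INSERT INTO foo (id, name) VALUES (p_id, p_name);
EXCEPTION WHEN unique_violation THEN
    -- Another session inserted this key first; the row exists,
    -- so silently treat the insert as done (nothing is logged).
    NULL;
END;
$$ LANGUAGE plpgsql;
```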