benchmarking update/insert and random record update

From: Ivan Sergio Borgonovo <mail(at)webthatworks(dot)it>
To: pgsql-general(at)postgresql(dot)org
Subject: benchmarking update/insert and random record update
Date: 2008-01-08 23:48:44
Message-ID: 20080109004844.3b4a6741@webthatworks.it
Lists: pgsql-general

I have to sync 2 tables with a pk (serial/identity). The source comes
from MS SQL, the destination is pg (ODBC is not an option currently).

One solution would be to truncate the destination and just copy the
new data into it, but I'd prefer a slower sync that avoids a period
where no data is available.

So I thought I'd delete the records not present in the source, then
do an update/insert:

UPDATE ... ;
IF NOT FOUND THEN
  INSERT ... ;
END IF;
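
To be concrete, the whole sync would look roughly like this (just a
sketch: src is a staging copy of the MS SQL data loaded elsewhere,
and dest(id, payload) stands in for the real table):

CREATE OR REPLACE FUNCTION sync_dest() RETURNS void AS $$
DECLARE
  r src%ROWTYPE;
BEGIN
  -- drop rows that disappeared from the source
  DELETE FROM dest WHERE id NOT IN (SELECT id FROM src);
  -- update existing rows, insert the missing ones
  FOR r IN SELECT * FROM src LOOP
    UPDATE dest SET payload = r.payload WHERE id = r.id;
    IF NOT FOUND THEN
      INSERT INTO dest (id, payload) VALUES (r.id, r.payload);
    END IF;
  END LOOP;
END;
$$ LANGUAGE plpgsql;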

Supposing this is the best way to keep 2 tables in sync[1], I was
going to simulate such an update to get a rough idea of how long it
takes.

I have a 700K-record table and I expect a maximum of 10K updates,
20K inserts and 2K deletes.
The inserts will have higher, mostly consecutive pks.
Deletes and updates will hit random pks.
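
Simulating the inserts looks easy enough with generate_series (again
a sketch on the placeholder dest(id, payload)):

-- 20K new rows with consecutive pks above the current maximum
INSERT INTO dest (id, payload)
SELECT g, md5(g::text)
FROM generate_series((SELECT max(id) + 1 FROM dest),
                     (SELECT max(id) + 20000 FROM dest)) AS g;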

Considering pg internals, does it make any sense to also simulate the
"position" of the deletes/updates/inserts?

I know how to delete random records, and it's not a problem to delete
a range of records whose pk falls in an interval, but...

How can I randomly update records?

I need to insert random values into some of the columns of randomly
picked records.
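
The only idea I've come up with so far is something like this
(sketch; picks 10K random rows and overwrites a column with a random
value):

UPDATE dest
SET payload = md5(random()::text)
WHERE id IN (SELECT id FROM dest ORDER BY random() LIMIT 10000);

but I don't know if that's a reasonable way to do it.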

[1] Will renaming the tables (dest -> old_dest, src -> dest) break
pk/fk relationships and function references to these objects?
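
To be clear, by renaming I mean something like:

ALTER TABLE dest RENAME TO old_dest;
ALTER TABLE src RENAME TO dest;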

thx

--
Ivan Sergio Borgonovo
http://www.webthatworks.it
