From: Ron Johnson <ron(dot)l(dot)johnson(at)cox(dot)net>
To: PgSQL Performance ML <pgsql-performance(at)postgresql(dot)org>
Subject: Re: performance of insert/delete/update
Date: 2002-11-26 01:41:03
Message-ID: 1038274863.26988.10.camel@haggis
Lists: pgsql-hackers pgsql-performance
On Mon, 2002-11-25 at 18:23, scott.marlowe wrote:
> On Mon, 25 Nov 2002, Tim Gardner wrote:
>
[snip]
>
> There are two factors that make Postgresql so weird in regards to
> transactions. One is that everything happens in a transaction (we won't
> mention truncate for a while; it's the only exception I know of.)
Why is this so weird? Do I use the /other/ weird RDBMS? (Rdb/VMS)
> The next factor that makes for fast inserts of large amounts of data in a
> transaction is MVCC. With Oracle and many other databases, transactions
> are written into a separate log file, and when you commit, they are
> inserted into the database as one big group. This means you write your
> data twice, once into the transaction log, and once into the database.
You are just deferring the pain. Whereas others must flush from log
to "database files", they do not have to VACUUM or VACUUM ANALYZE.
> With Postgresql's implementation of MVCC, all your data are inserted in
> real time, with a transaction date that makes the other clients ignore
> them (mostly, other read committed transactions may or may not see them.)
Is this unusual? (Except that Rdb/VMS uses a 64-bit integer (a
Transaction Sequence Number) instead of a timestamp, because Rdb,
coming from VAX/VMS, is natively cluster-aware, and it's not
guaranteed that all nodes have the exact same timestamp.)
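For what it's worth, the visibility idea being described — stamping each row with the id of the transaction that created it, and having readers filter by a snapshot of committed transactions — can be sketched in a few lines. This is a hypothetical toy model, not PostgreSQL's (or Rdb/VMS's) actual implementation; all names here are made up for illustration:

```python
# Toy sketch of MVCC-style visibility (NOT PostgreSQL's real internals):
# each row records the transaction id that created it (xmin) and, if
# deleted, the one that deleted it (xmax). A reader only sees rows whose
# creating transaction was committed as of the reader's snapshot.

import itertools

_txid = itertools.count(1)   # monotonically increasing transaction ids
committed = set()            # ids of committed transactions
table = []                   # rows: dicts with xmin, xmax, payload

def begin():
    return next(_txid)

def insert(txid, payload):
    table.append({"xmin": txid, "xmax": None, "payload": payload})

def commit(txid):
    committed.add(txid)

def visible(row, snapshot):
    # Visible if the creator had committed when the snapshot was taken,
    # and no committed transaction has deleted the row.
    return (row["xmin"] in snapshot
            and (row["xmax"] is None or row["xmax"] not in snapshot))

t1 = begin()
insert(t1, "hello")
snap_before = set(committed)   # snapshot taken before t1 commits
commit(t1)
snap_after = set(committed)    # snapshot taken after t1 commits

rows_before = [r["payload"] for r in table if visible(r, snap_before)]
rows_after = [r["payload"] for r in table if visible(r, snap_after)]
```

A rollback in this model is just "never add the id to `committed`" — the row stays in the table but no snapshot will ever see it, which is why dead tuples accumulate until something like VACUUM reclaims them.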
[snip]
> In the event you roll back a transaction, the tuples are all just marked
> as dead and they get ignored.
What if you are in a 24x365 environment? Doing a VACUUM ANALYZE would
really slow down the nightly operations.
> It's interesting when you work with folks who came from other databases.
> My coworker, who's been using Postgresql for about 2 years now, had an
> interesting experience when he first started here. He was inserting
> something like 10,000 rows. He comes over and tells me there must be
> something wrong with the database, as his inserts have been running for 10
> minutes, and he's not even halfway through. So I had him stop the
> inserts, clean out the rows (it was a new table for a new project) and
> wrap all 10,000 inserts into a transaction. What had been running for 10
> minutes now ran in about 30 seconds.
Again, why is this so unusual?
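Unusual or not, the anecdote above is easy to reproduce: the cost is one commit (and its disk flush) per row versus one commit for the whole batch. Here is a minimal sketch of the "wrap the inserts in a transaction" pattern, using Python's standard-library sqlite3 as a stand-in for PostgreSQL (the table name and row count are illustrative):

```python
# Batch 10,000 inserts into a single transaction instead of committing
# per statement. With per-row autocommit, each row pays a commit; with
# one explicit transaction, the whole batch pays it once.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")

# "with conn" opens a transaction and commits once on exit -- the
# equivalent of wrapping the inserts in BEGIN ... COMMIT.
with conn:
    conn.executemany(
        "INSERT INTO t (id, val) VALUES (?, ?)",
        ((i, f"row-{i}") for i in range(10_000)),
    )

count = conn.execute("SELECT count(*) FROM t").fetchone()[0]
```

On a real disk-backed database the difference is dramatic, because each commit forces a synchronous flush; in-memory sqlite only illustrates the pattern, not the speedup.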
--
+------------------------------------------------------------+
| Ron Johnson, Jr. mailto:ron(dot)l(dot)johnson(at)cox(dot)net |
| Jefferson, LA USA http://members.cox.net/ron.l.johnson |
| |
| "they love our milk and honey, but preach about another |
| way of living" |
| Merle Haggard, "The Fighting Side Of Me" |
+------------------------------------------------------------+
Next Message: Josh Berkus, 2002-11-26 01:41:42, Re: performance of insert/delete/update
Previous Message: Christopher Kings-Lynne, 2002-11-26 01:24:28, Postgres Security Expert???