Re: vacuum, performance, and MVCC

From: "Mark Woodward" <pgsql(at)mohawksoft(dot)com>
To: "Christopher Browne" <cbbrowne(at)acm(dot)org>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: vacuum, performance, and MVCC
Date: 2006-06-22 10:52:43
Message-ID: 18435.24.91.171.78.1150973563.squirrel@mail.mohawksoft.com
Lists: pgsql-hackers

> Clinging to sanity, pgsql(at)mohawksoft(dot)com ("Mark Woodward") mumbled into
> her beard:
>> We all know that PostgreSQL suffers performance problems when rows are
>> updated frequently prior to a vacuum. The most serious example can be
>> seen by using PostgreSQL as a session handler for a busy web site. You
>> may have thousands or millions of active sessions, each being updated
>> per page hit.
>>
>> Each time the record is updated, a new version is created, thus
>> lengthening the "correct" version search each time the row is
>> accessed, until, of course, the next vacuum comes along and corrects
>> the index to point to the latest version of the record.
>>
>> Is that a fair explanation?
>
> No, it's not.
>
> 1. The index points to all the versions, until they get vacuumed out.

It can't point to "all" versions; it points to the last "current" version
as updated by vacuum, or to the first version of the row.

>
> 2. There may simultaneously be multiple "correct" versions. The
> notion that there is one version that is The Correct One is wrong, and
> you need to get rid of that thought.

Sorry, this is a misunderstanding. By "correct version search" I meant
"correct for this transaction." Later I mention finding the first row
version with a transaction number lower than the current one.

>
>> If my assertion is fundamentally true, then PostgreSQL will always
>> suffer performance penalties under a heavy modification load. Of
>> course, tables with many inserts are not an issue; it is mainly
>> updates. The problem is that there are classes of problems where
>> updates are the primary operation.
>
> The trouble with your assertion is that it is true for *all* database
> systems except for those whose only transaction mode is READ
> UNCOMMITTED, where the only row visible is the "Latest" version.

Not true. Oracle does not seem to exhibit this problem, presumably
because it keeps old row versions in undo (rollback) segments rather
than in the table itself.

>
>> I was thinking, just as a hypothetical, what if we reversed the
>> problem, and always referenced the newest version of a row and
>> scanned backwards across the versions to the first that has a lower
>> transaction number?
>
> That would require an index on transaction number, which is an
> additional data structure not in place now. That would presumably
> worsen things.

All things being equal, perhaps not. It would probably be a loser if you
have a static database, but in a database that undergoes modification, it
would be the same or less work if the average row has two versions
(assuming nothing else changes).
>
>> One possible implementation: PostgreSQL could keep an indirection
>> array of index-to-table refs for use by all the indexes on a table.
>> The various indexes return offsets into the array, not direct table
>> refs. Because the table refs are separate from the index, they can be
>> updated each time a transaction is committed.
>
> You mean, this index would be "VACUUMed" as a part of each transaction
> COMMIT? I can't see that turning out well...

No, it would not be vacuumed!!!

Right now, the indexes point to the lowest (oldest) row version. When an
index returns the row ID, we check whether there are newer versions; if
so, the newer versions are searched until the last one is found or one
exceeds the current transaction number.
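
To illustrate the cost of that forward walk, here is a toy model (this is
not PostgreSQL source; Tuple, is_visible(), and fetch_visible() are all
invented for illustration, and real visibility rules involving xmax,
in-progress transactions, etc. are omitted). The point is that the work
to fetch a row grows with the number of old versions sitting in front of
the one your snapshot can see:

#include <stdio.h>

typedef unsigned int Xid;

typedef struct Tuple {
    Xid          xmin;   /* transaction that created this version */
    struct Tuple *next;  /* forward link to the newer version, if any */
    int          value;  /* payload */
} Tuple;

/* Crude visibility test: a version is visible if it was created by a
 * transaction no newer than our snapshot. */
static int is_visible(const Tuple *t, Xid snapshot_xid)
{
    return t->xmin <= snapshot_xid;
}

/* Starting from the version the index points at (the oldest), walk
 * forward and return the newest version visible to the snapshot.
 * The cost is linear in the length of the chain walked. */
static const Tuple *fetch_visible(const Tuple *oldest, Xid snapshot_xid)
{
    const Tuple *best = NULL;
    for (const Tuple *t = oldest; t != NULL; t = t->next) {
        if (is_visible(t, snapshot_xid))
            best = t;           /* newest visible version so far */
        else
            break;              /* everything past this is too new */
    }
    return best;
}

int main(void)
{
    /* Three versions of one row, created by xids 10, 20, 30. */
    Tuple v3 = { 30, NULL, 300 };
    Tuple v2 = { 20, &v3,  200 };
    Tuple v1 = { 10, &v2,  100 };

    /* A snapshot between v2 and v3 still starts at v1 and walks up. */
    const Tuple *t = fetch_visible(&v1, 25);
    printf("visible value: %d\n", t ? t->value : -1);   /* prints 200 */
    return 0;
}

A reader that only wants the latest committed version still pays for
every superseded version between the index entry and the one it needs.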

>
>> This way, the newest version of a row is always the first row
>> found. Also, on a heavily updated site, the most used rows would
>> always be at the end of the table, reducing the amount of disk reads
>> or cache memory required to find the correct row version for each
>> query.
>
> I can't see how it follows that most-used rows would migrate to the
> end of the table.

Sorry, OK, as an assumption it ignores the FSM, but the idea is that
there is only one lookup.
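
For what it's worth, here is a rough sketch of the indirection-array idea
from earlier in the message (all names are invented, and it deliberately
ignores the multi-version visibility question raised above): the indexes
hand back a stable slot number, and a single array lookup yields the
latest committed location.

#include <stdio.h>

typedef struct {
    unsigned block;   /* heap block number of the current version */
    unsigned offset;  /* line pointer within that block */
} HeapLocation;

typedef struct {
    HeapLocation *slots;   /* slot number -> current tuple location */
    unsigned      nslots;
} SlotArray;

/* An index probe returns a slot number; one array lookup then yields
 * the latest committed version, with no chain walk. */
static HeapLocation slot_lookup(const SlotArray *a, unsigned slot)
{
    return a->slots[slot];
}

/* At COMMIT, the updating transaction repoints the slot at the new
 * version; the indexes themselves are never touched. */
static void slot_commit(SlotArray *a, unsigned slot, HeapLocation newloc)
{
    a->slots[slot] = newloc;
}

int main(void)
{
    HeapLocation locs[1] = { { 7, 3 } };
    SlotArray a = { locs, 1 };

    slot_commit(&a, 0, (HeapLocation){ 9, 1 });   /* row 0 updated */
    HeapLocation l = slot_lookup(&a, 0);
    printf("slot 0 -> block %u, offset %u\n", l.block, l.offset);
    return 0;
}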

> That would only be true in a database that is never
> VACUUMed; as soon as a VACUUM is done, free space opens up in the
> interior, so that new tuples may be placed in the "interior."

Regardless, the point is that you have to search the [N] versions of a
row to find the latest correct version for your transaction. This is
done, AFAICT, from first to last version, meaning that the work required
to find a row increases with every update prior to vacuum.

PostgreSQL fails miserably as a web session handler because of this
behavior: it requires too-frequent vacuums and delivers inconsistent
performance.

OK, forget the version array, it was just an off-the-top-of-my-head
idea. How about this:

Currently a row version chain does this:

row_TID[0] -> row_TID[1] -> row_TID[2] ... row_TID[LAST-1] -> row_TID[LAST]

with each version pointing to the next. What if it did this instead:

row_TID[0] -> row_TID[LAST] -> row_TID[LAST-1] ... -> row_TID[2] -> row_TID[1]

The base tuple of a version chain gets updated to point to the latest
committed row. It should have fairly low impact on a static database, but
REALLY speed up PostgreSQL on a heavily modified database, provide more
consistent performance between vacuums, and require fewer vacuums to
maintain performance.
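
A minimal sketch of what the reversed chain buys, using the same invented
names as the earlier toy model (again, a thought experiment, not how
PostgreSQL stores chains today): the base tuple points at the newest
committed version, and the walk goes backwards, so a transaction reading
current data stops at the very first tuple no matter how many old
versions exist.

#include <stdio.h>

typedef unsigned int Xid;

typedef struct Tuple {
    Xid          xmin;   /* transaction that created this version */
    struct Tuple *prev;  /* backward link to the older version */
    int          value;  /* payload */
} Tuple;

typedef struct Base {
    Tuple *newest;       /* repointed at COMMIT to the latest version */
} Base;

/* Walk backwards from the newest version; the first visible version
 * we meet is the answer. Readers of current data stop after one hop. */
static const Tuple *fetch_visible(const Base *base, Xid snapshot_xid)
{
    for (const Tuple *t = base->newest; t != NULL; t = t->prev)
        if (t->xmin <= snapshot_xid)
            return t;
    return NULL;
}

int main(void)
{
    Tuple v1 = { 10, NULL, 100 };
    Tuple v2 = { 20, &v1,  200 };
    Tuple v3 = { 30, &v2,  300 };
    Base  b  = { &v3 };

    printf("xid 35 sees %d\n", fetch_visible(&b, 35)->value);  /* 300 */
    printf("xid 25 sees %d\n", fetch_visible(&b, 25)->value);  /* 200 */
    return 0;
}

Old snapshots pay the walk instead of new ones, which is arguably the
right trade for a workload like session handling, where almost every
reader wants the latest version.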
