Patches for TODO item: Avoid truncating empty OCDR temp tables on COMMIT

From: Gurjeet Singh <singh(dot)gurjeet(at)gmail(dot)com>
To: PGSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Patches for TODO item: Avoid truncating empty OCDR temp tables on COMMIT
Date: 2013-01-14 22:31:42
Message-ID: CABwTF4UgtAgX=xWrz+VL-Hj6K8XopnFTzq-rT62=mmWVNuqORA@mail.gmail.com
Lists: pgsql-hackers

TODO Item: Prevent temporary tables created with ON COMMIT DELETE ROWS from
repeatedly truncating the table on every commit if the table is already
empty

Please find attached two patches, implementing two different approaches to
fix the issue of COMMIT truncating empty TEMP tables that have the ON
COMMIT DELETE ROWS attribute.

v2.patch: This approach introduces a boolean 'rd_rows_inserted' in the
RelationData struct, and sets this field to true for every TEMP table into
which a row is inserted. During commit, we avoid truncating those OCDR temp
tables that haven't been inserted into in the current transaction.
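
In rough terms, the idea looks like this (only a sketch of the description
above, not the patch itself; the exact hook point, locking and variable names
in the attached patch may differ):

    /* Sketch only: new flag in RelationData (v2.patch's rd_rows_inserted) */
    bool        rd_rows_inserted;   /* row inserted into this temp relation
                                     * in the current transaction? */

    /* Sketch only: at commit, while collecting the ON COMMIT DELETE ROWS
     * relations to truncate, skip relations whose flag was never set */
    if (rel->rd_rows_inserted)
        oids_to_truncate = lappend_oid(oids_to_truncate, RelationGetRelid(rel));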

v3.patch: This is the original suggestion by Robert Haas, where we keep a
global variable indicating whether any TEMP table has been the target of an
INSERT; if none has, we skip truncating all OCDR temp tables. The downside I
see with this approach is that if a transaction inserts a row into even one of
the OCDR temp tables, we end up attempting to truncate all OCDR temp tables,
even those that are empty.
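
Roughly, that approach boils down to the following (again only a sketch of the
description; the variable name is an assumption, not necessarily what v3.patch
uses):

    /* Sketch only: one backend-local flag, set whenever any temp table
     * receives an INSERT in the current transaction */
    static bool any_temp_table_inserted = false;

    /* Sketch only: at commit time */
    if (!any_temp_table_inserted)
        return;     /* no temp table was written to; skip truncation entirely */
    /* otherwise fall through and truncate every OCDR temp table, even empty ones */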

I am attaching the test case, a psql script I used to get the timing of the
BEGIN and COMMIT operations. I executed the test like this:

$ for (( i = 1 ; i <= 4; ++i )) ; do psql -f ~/empty_temp_tables_test.psql | tee post_patchv2_run${i}.log; done

I then extracted the timing info of the BEGIN and COMMIT commands using this pipeline:

$ grep -A 1 -E 'BEGIN|COMMIT' post_patchv2_run4.log | grep Time: | cut -d : -f 2 | cut -d ' ' -f 2

Also attached is the PDF of the test runs. It includes the times, their
averages, and the '% Change' across the averages. The '% Change' column is
derived as round((pre_patch_avg - post_patch_avg)/pre_patch_avg*100, 2).
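For example, with a hypothetical pre_patch_avg of 10.00 ms and post_patch_avg
of 8.50 ms (illustrative numbers, not taken from the attached PDF), '% Change'
= round((10.00 - 8.50)/10.00*100, 2) = 15.00; a positive value means the
post-patch run is faster.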

The tests start with a VACUUM FULL of the database. This ensures that there
are no dead rows in pg_class and other system tables left over from a previous
run. It also brings the database tables into shared_buffers, which helps
reduce the variability of the test runs.

I tried quite hard to eliminate any variability in the test environment: I
disabled autovacuum, increased checkpoint_segments, increased shared_buffers,
etc. I then isolated each type of test into a session of its own, by
disconnecting and reconnecting again. But during the last test I realized that
the disconnection is not instantaneous; the backend process from the previous
session lingered around for a few seconds, for as long as 7-8 seconds,
consuming nearly 100% CPU. During this period the next connection running the
test was also consuming about 100% CPU.
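
For reference, the settings mentioned above look roughly like this in
postgresql.conf (the values shown are placeholders, not the exact ones I used):

    autovacuum = off            # no background vacuum activity during the runs
    checkpoint_segments = 64    # placeholder; large enough to avoid checkpoints mid-run
    shared_buffers = 1GB        # placeholder; large enough to cache the test tables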

So even though I tried to isolate the tests, I am sure this delay in backend
exit, and the CPU consumption by the dying process, must have interfered with
the results. The test results therefore need to be taken with a pinch of salt.
--
Gurjeet Singh

http://gurjeet.singh.im/

Attachment Content-Type Size
improve_commit_with_OCDR_v2.patch application/octet-stream 1.9 KB
improve_commit_with_OCDR_v3.patch application/octet-stream 2.1 KB
Test V2 Patches 2 and 3.pdf application/pdf 48.8 KB
empty_temp_tables_test.psql application/octet-stream 2.1 KB
