Re: Create and drop temp table in 8.3.4

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: pgsql-performance(at)postgresql(dot)org, Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
Subject: Re: Create and drop temp table in 8.3.4
Date: 2008-11-06 01:35:12
Message-ID: 6251.1225935312@sss.pgh.pa.us
Lists: pgsql-performance

"Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> writes:
> I'm trying to quantify the issue, and would appreciate any
> suggestions, either for mitigation or collecting useful data to find
> the cause of the performance regression. I create a script which
> brackets 1000 lines like the following within a single begin/commit:

> create temporary table tt (c1 int not null primary key, c2 text, c3
> text); drop table tt;

I poked at this a little bit. The test case is stressing the system
more than might be apparent: there's an index on c1 because of the
PRIMARY KEY, and the text columns force a toast table to be created,
which has its own index. So that means four separate filesystem
files get created for each iteration, and then dropped at the end of
the transaction. (The different behavior you notice at COMMIT must
be the cleanup phase where the unlink()s get issued.)

Even though nothing ever gets put in the indexes, their metapages get
created immediately, so we also allocate and write 8K per index.
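Both of those claims can be checked from a psql session. The following is a sketch, not part of the original test: it assumes a server new enough to have pg_relation_size() (8.1 or later), that no other relation named tt or tt_pkey exists, and that the temp table was created in the current session:

```sql
BEGIN;
CREATE TEMPORARY TABLE tt (c1 int NOT NULL PRIMARY KEY, c2 text, c3 text);

-- Four relations -- heap, PK index, TOAST table, TOAST index --
-- each with its own relfilenode, i.e. its own filesystem file:
SELECT c.relname, c.relkind, c.relfilenode
FROM pg_class c
WHERE c.relname IN ('tt', 'tt_pkey')
   OR c.oid IN (SELECT reltoastrelid FROM pg_class WHERE relname = 'tt')
   OR c.oid IN (SELECT indexrelid FROM pg_index
                WHERE indrelid IN (SELECT reltoastrelid FROM pg_class
                                   WHERE relname = 'tt'));

-- The index is empty but its metapage is already on disk:
SELECT pg_relation_size('tt_pkey');  -- one 8K page

ROLLBACK;
```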

So there are three cost components:

1. Filesystem overhead to create and eventually delete all those
thousands of files.

2. Write traffic for the index metapages.

3. System catalog tuple insertions and deletions (and the ensuing
WAL log traffic).

I'm not too sure which of these is the dominant cost --- it might
well vary from system to system anyway depending on what filesystem
you use. But I think it's not #2, since two 8K metapages per
iteration times 1000 iterations would only amount to 16MB over the
length of the transaction.

As far as I can tell with strace, the filesystem overhead ought to be
the same in 8.2 and 8.3 because pretty much the same series of syscalls
occurs. So I suspect that the slowdown you saw comes from making a
larger number of catalog updates in 8.3; though I can't think what that
would be offhand.
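For anyone who wants to repeat the syscall comparison: attach strace to the backend running the test script. A sketch (the PID placeholder and the syscall list are illustrative, not from the original test):

```shell
# Get the backend PID from the test session with: SELECT pg_backend_pid();
# Then, as a user allowed to trace that process, log file-related syscalls:
strace -f -p <backend_pid> -e trace=open,creat,unlink,write -o trace.out

# Or collect aggregate per-syscall counts and timings instead:
strace -c -p <backend_pid>
```

Diffing the trace output from an 8.2 backend against an 8.3 one is how you'd confirm that the series of syscalls is essentially unchanged.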

A somewhat worrisome point is that the filesystem overhead is going to
essentially double in CVS HEAD, because of the addition of per-relation
FSM files. (In fact, Heikki is proposing to triple the overhead by also
adding DSM files ...) If cost #1 is significant then that could really
hurt.

regards, tom lane
