Re: Drop indexes inside transaction?

From: Steve Lane <slane(at)moyergroup(dot)com>
To: <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Drop indexes inside transaction?
Date: 2004-02-06 06:54:01
Message-ID: BC489629.9893%slane@moyergroup.com
Lists: pgsql-admin

On 2/6/04 12:23 AM, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:

> Steve Lane <slane(at)moyergroup(dot)com> writes:
>> Now we want to use COPY to bring the data in. The target table has 6
>> indexes. Without indexes, naturally, we can load 80K rows in 2 seconds. With
>> indexes, 46 seconds. (oddly, ONE index contributes 40+ seconds of that, yet
>> they're all similar, single-column indexes. Anyway, that's another story).
>
> No, I'd say that's the key part of the story. Details?

I have another post out there called "index re-creation speed" that delves
into this. Here's the table:

Table "test_responses_2"
Attribute | Type | Modifier
-------------------+-------------------+----------
id_response | integer |
id_code | integer |
id_survey | integer |
id_topic | integer |
id_item | integer |
id_administration | integer |
response_int | smallint |
response_txt | character varying |
rec_handle | character varying |
Indices: id_administration_test_key,
id_code_test_key,
id_item_test_key,
id_response_test_key,
id_survey_test_key,
id_topic_test_key

The id_item_test_key index is the one that drives up the COPY time so
dramatically. There are 65 distinct values in that column for the given data set.
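
(In case anyone wants to reproduce the numbers: a quick way to confirm that
cardinality is a query along these lines -- nothing exotic, just counting the
distinct values in the offending column.)

    SELECT count(DISTINCT id_item) AS distinct_items,
           count(*)                AS total_rows
    FROM   test_responses_2;
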
>
>> So I hit on the idea of doing the same thing, but inside a transaction. In
>> theory that should affect no one else.
>
> ... other than locking them out of the table while the transaction runs.

Ha! I knew that that tasty-looking lunch had to have a price tag.
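
Just so it's clear what I was trying: the transactional version is roughly the
sketch below (the COPY source file is a placeholder, and only one index is
spelled out -- the other five follow the same pattern). As you say, it keeps
everyone else off the table until the COMMIT:

    BEGIN;
    -- Drop the indexes so COPY doesn't have to maintain them row by row.
    DROP INDEX id_item_test_key;
    -- ... and likewise for the other five indexes ...

    -- Bulk-load the new batch (file path is hypothetical).
    COPY test_responses_2 FROM '/tmp/new_responses.copy';

    -- Rebuild; this is the part that grows with the size of the table.
    CREATE INDEX id_item_test_key ON test_responses_2 (id_item);
    -- ... and likewise for the other five indexes ...
    COMMIT;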

> That doesn't sound like what you want to do. In any case, reindexing
> the table will get slower and slower as the pre-existing data in the
> table expands.

Yes, I've been running some tests, and reindexing time stops being acceptable
somewhere around 3-5M rows.

So am I more or less out of luck? The issue, again, is how to insert at most
80-100K rows at a shot into a fairly heavily-indexed table on a running
system, at times determined by users.

I thought of dropping the indexes and deferring index recreation -- maybe
even having a very frequent cron job rebuild the indexes. All that buys me
is being able to return control to the initiating user quickly. It still has
the issue of expanding reindex time and, I'm guessing, the issue of locking
out other users as well.
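
Concretely, the cron job would do nothing more than re-issue the CREATE INDEX
statements for whatever the load step dropped -- something like the following,
assuming each index covers the correspondingly named column from the \d output
above:

    -- Run periodically (e.g. from cron via psql); assumes a prior load
    -- dropped these indexes.  Still locks out writers while it runs.
    CREATE INDEX id_item_test_key           ON test_responses_2 (id_item);
    CREATE INDEX id_code_test_key           ON test_responses_2 (id_code);
    CREATE INDEX id_survey_test_key         ON test_responses_2 (id_survey);
    CREATE INDEX id_topic_test_key          ON test_responses_2 (id_topic);
    CREATE INDEX id_response_test_key       ON test_responses_2 (id_response);
    CREATE INDEX id_administration_test_key ON test_responses_2 (id_administration);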

Am I simply asking too much of my tools here?

-- sgl
