From: Wes <wespvp(at)syntegra(dot)com>
To: Zeugswetter Andreas DCP SD <ZeugswetterA(at)spardat(dot)at>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [GENERAL] Concurrency problem building indexes
Date: 2006-04-25 13:32:34
Message-ID: C0738F22.23B75%wespvp@syntegra.com
Lists: pgsql-hackers
On 4/25/06 2:18 AM, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> So invent some made-up data. I'd be seriously surprised if this
> behavior has anything to do with the precise data being indexed.
> Experiment around till you've got something you don't mind posting
> that exhibits the behavior you see.
My initial attempts last night at duplicating it with a small result set
were not successful. I'll see what I can do.
On 4/25/06 3:25 AM, "Zeugswetter Andreas DCP SD" <ZeugswetterA(at)spardat(dot)at>
wrote:
> Wes, you could most likely solve your immediate problem if you did an
> analyze before creating the indexes.
I can try that. Is that going to be a reasonable thing to do when there are
100 million rows per table? I obviously want to minimize the number of
sequential passes through the database.
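[For reference, the suggested sequence would look something like the following sketch; the table name `messages` and column `msg_id` are illustrative, not taken from the thread:]

```sql
-- Load the data first, then gather planner statistics before building indexes.
COPY messages FROM '/data/messages.dump';

-- ANALYZE samples a bounded number of pages rather than scanning the
-- whole table, so it should be far cheaper than a full sequential pass
-- even on a 100-million-row table.
ANALYZE messages;

-- Build the indexes afterwards, with fresh statistics in place.
CREATE INDEX messages_msg_id_idx ON messages (msg_id);
```

[Because ANALYZE works from a sample whose size is governed by the statistics target rather than the table size, it does not add another full pass over the data.]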
Wes