From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | David Raymond <David(dot)Raymond(at)tomtom(dot)com> |
Cc: | "pgsql-bugs(at)lists(dot)postgresql(dot)org" <pgsql-bugs(at)lists(dot)postgresql(dot)org> |
Subject: | Re: BUG #15922: Simple select with multiple exists filters returns duplicates from a primary key field |
Date: | 2019-07-23 21:49:39 |
Message-ID: | 31206.1563918579@sss.pgh.pa.us |
Lists: | pgsql-bugs |
David Raymond <David(dot)Raymond(at)tomtom(dot)com> writes:
> Update so far: I managed to replace all the UUIDs with random ones and it
> still reproduces, so I now have a sanitized version. No real luck trimming
> down the record count, though: deleting too many records changes the query
> plan to one that isn't broken. Even after replacing the UUIDs without
> deleting anything, a plain ANALYZE came up clean, and I had to VACUUM
> ANALYZE for it to pick the broken plan again (example pasted below). The
> dump file is at least consistent: immediately after loading, the chosen
> plan gives a correct answer, but once the tables are analyzed it returns
> the bad duplicates. As it stands the dump file is 130 MB (30 MB zipped);
> is that too big to send in to you?
Given that the problem seems to be specific to parallel query, likely
the reason is that reducing the number of rows brings it below the
threshold where the planner wants to use parallel query. So you could
probably reduce the parallel-query cost parameters to get a failure
with a smaller test case. However, if you don't feel like doing that,
that's fine.
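[For readers following along: the cost reduction Tom suggests can be sketched with the standard planner GUCs that make parallel plans attractive even for small tables. The table names and query below are placeholders, not the reporter's actual schema.]

```sql
-- Make parallel plans look cheap so the planner still chooses
-- them after the test data has been trimmed down.
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET min_parallel_table_scan_size = 0;
SET min_parallel_index_scan_size = 0;

-- Then confirm the plan still contains Gather / Parallel Seq Scan
-- nodes (substitute the actual failing query):
EXPLAIN (ANALYZE)
SELECT t.id
FROM some_table t
WHERE EXISTS (SELECT 1 FROM other_table o WHERE o.ref = t.id);
```

These settings apply only to the current session, so they will not disturb the rest of the installation while reproducing the bug.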
Please *don't* send a 30MB message to the whole list, but you can
send it to me privately.
regards, tom lane