From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Justin Pryzby <pryzby(at)telsasoft(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg11.1: dsa_area could not attach to segment
Date: 2019-02-06 09:40:25
Message-ID: CAEepm=1M8Db_5OWjg_-dD2S5nH1HZOaWL8Sr4L0_z8dRWnUJOA@mail.gmail.com
Lists: pgsql-hackers
On Wed, Feb 6, 2019 at 4:22 PM Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> On Wed, Feb 6, 2019 at 1:10 PM Justin Pryzby <pryzby(at)telsasoft(dot)com> wrote:
> > This is a contrived query which I made up to try to exercise/stress bitmap
> > scans based on Thomas's working hypothesis for this error/bug. This seems to
> > be easier to hit than the other error ("could not attach to segment") - a loop
> > around this query has run into "free pages" several times today.
>
> Thanks. I'll go and try to repro this with queries that look like that.
No luck so far. My colleague Robert pointed out that the
fpm->contiguous_pages_dirty mechanism (that lazily maintains
fpm->contiguous_pages) is suspicious here, but we haven't yet found a
theory to explain how fpm->contiguous_pages could have a value that is
too large. Clearly such a bug could result in a segment claiming more
contiguous free pages than it actually has, and that would produce this error.
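For anyone following along, here is a minimal sketch of the lazy-cache pattern
under suspicion. This is not PostgreSQL's freepage.c -- the struct, the toy
bitmap, and the function names are all simplified stand-ins -- but it shows how
a code path that changes the free map without setting the dirty flag leaves the
cached largest-run value stale and too large:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified stand-in for the real FreePageManager. */
typedef struct FreePageManager
{
	bool		page_free[64];			/* toy free-page bitmap */
	size_t		contiguous_pages;		/* cached largest free run */
	bool		contiguous_pages_dirty; /* cache must be recomputed */
} FreePageManager;

/* Scan the bitmap for the longest run of free pages. */
static size_t
recompute_contiguous(FreePageManager *fpm)
{
	size_t		best = 0,
				run = 0;

	for (size_t i = 0; i < 64; i++)
	{
		run = fpm->page_free[i] ? run + 1 : 0;
		if (run > best)
			best = run;
	}
	return best;
}

/* Lazily refresh the cached value only when a caller asks for it. */
static size_t
fpm_largest(FreePageManager *fpm)
{
	if (fpm->contiguous_pages_dirty)
	{
		fpm->contiguous_pages = recompute_contiguous(fpm);
		fpm->contiguous_pages_dirty = false;
	}
	return fpm->contiguous_pages;
}

/*
 * Any operation that changes the free map must set the dirty flag.
 * The set_dirty parameter lets us simulate a path that forgets to,
 * which is exactly the kind of bug being hypothesized above.
 */
static void
fpm_mark_used(FreePageManager *fpm, size_t page, bool set_dirty)
{
	fpm->page_free[page] = false;
	if (set_dirty)
		fpm->contiguous_pages_dirty = true;
}
```

With a correct caller, every allocation marks the cache dirty and
fpm_largest() stays accurate; if any path skips the flag, the cached
contiguous_pages keeps reporting a run that no longer exists, which is the
shape of bug that could make a segment over-advertise its free space.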
--
Thomas Munro
http://www.enterprisedb.com