Re: BRIN indexes - TRAP: BadArgument

From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Erik Rijkers <er(at)xs4all(dot)nl>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Emanuel Calvo <3manuek(at)esdebian(dot)org>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Nicolas Barbier <nicolas(dot)barbier(at)gmail(dot)com>, Claudio Freire <klaussfreire(at)gmail(dot)com>, Josh Berkus <josh(at)agliodbs(dot)com>, Andres Freund <andres(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: BRIN indexes - TRAP: BadArgument
Date: 2014-09-23 18:04:09
Lists: pgsql-hackers

Here's an updated version, rebased to current master.

Erik Rijkers wrote:

> I get into a BadArgument after:

Fixed in the attached, thanks.

Emanuel Calvo wrote:

> Debugging VACUUM VERBOSE ANALYZE over a concurrent table being
> updated/inserted.
> (gdb)
> Breakpoint 1, errfinish (dummy=0) at elog.c:411
> 411 ErrorData *edata = &errordata[errordata_stack_depth];
> The complete backtrace is at

The file/line info in the backtrace says that this is reporting this
message:

    (errmsg("scanned index \"%s\" to remove %d row versions", ...),
     errdetail("%s.", pg_rusage_show(&ru0)));

Not sure why you're reporting it, since this is expected.

There were thousands of WARNINGs being emitted by IndexBuildHeapScan
when concurrent insertions occurred; I fixed that by setting the
ii_Concurrent flag, which makes that function obtain a snapshot to use
for the scan. This is okay because concurrent insertions will be
detected via the placeholder tuple mechanism as previously described.
(There is no danger from serializable transactions and the like, because
this only runs in vacuum. I added an Assert() nevertheless.)

> Also, I found pages with an unknown type (using default parameters for
> the index creation):
> brin_page_type | array_agg
> ----------------+-----------
> unknown (00) | {3,4}
> revmap | {1}
> regular | {2}
> meta | {0}
> (4 rows)

Ah, we had an issue with the vacuuming of the FSM. I had to make that
more aggressive; I was able to reproduce the problem and it is now fixed.

Heikki Linnakangas wrote:

> Hmm. So the union support proc is only called if there is a race
> condition? That makes it very difficult to test, I'm afraid.

Yes. I guess we can fix that by having an assert-only block that uses
the union support proc to verify consistency of generated tuples. This
might be difficult for types involving floating point arithmetic.

> It would make more sense to pass BrinValues to the support
> functions, rather than DeformedBrTuple. An opclass's support
> function should never need to access the values for other columns.

Agreed -- fixed. I added attno to BrinValues, which makes this easier.

> Does minmaxUnion handle NULLs correctly?

Nope, fixed.

> minmaxUnion pfrees the old values. Is that necessary? What memory
> context does the function run in? If the code runs in a short-lived
> memory context, you might as well just let them leak. If it runs in
> a long-lived context, well, perhaps it shouldn't. It's nicer to
> write functions that can leak freely. IIRC, GiST and GIN runs the
> support functions in a temporary context. In any case, it might be
> worth noting explicitly in the docs which functions may leak and
> which may not.

Yeah, I had tried playing with contexts in general previously, but it
turned out that there was too much bureaucratic overhead (quite visible
in profiles), so I ripped it out and did careful retail pfrees instead
(it's not *that* difficult). Maybe I went overboard with it and we can
do better with more careful planning; I don't think this is critical
ATM -- we can certainly stand later cleanup in this area.

> If you add a new datatype, and define b-tree operators for it, what
> is required to create a minmax opclass for it? Would it be possible
> to generalize the functions in brin_minmax.c so that they can be
> reused for any datatype (with b-tree operators) without writing any
> new C code? I think we're almost there; the only thing that differs
> between each data type is the opcinfo function. Let's pass the type
> OID as argument to the opcinfo function. You could then have just a
> single minmax_opcinfo function, instead of the macro to generate a
> separate function for each built-in datatype.

Yeah, that's how I had it initially. I changed it to what it is now as
part of a plan to enable building cross-type opclasses, so you could
have "WHERE int8col=42" without requiring a cast of the constant to type
int8. This might have been a thinko, because AFAICS it's possible to
build them with a constant opcinfo as well (I changed several other
things to support this, as described in a previous email). I will look
into this later.

Thanks for the review!

Álvaro Herrera
PostgreSQL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
minmax-19.patch text/x-diff 194.4 KB
