From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Michael Paquier <michael(at)paquier(dot)xyz>, Sergei Kornilov <sk(at)zsrv(dot)org>, Amit Langote <langote_amit_f8(at)lab(dot)ntt(dot)co(dot)jp>, David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: ATTACH/DETACH PARTITION CONCURRENTLY
On Fri, Feb 1, 2019 at 9:00 AM Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> I don't think we'd be using pqmq here, or shm_mq either, but I think
> the bigger issue is that starting a parallel query is already a
> pretty heavy operation, and so the added overhead of this is probably
> not very noticeable. I agree that it seems a bit expensive, but since
> we're already waiting for the postmaster to fork() a new process which
> then has to initialize itself, this probably won't break the bank.
> What bothers me more is that it's adding a substantial amount of code
> that could very well contain bugs to fix something that isn't clearly
> a problem in the first place.
I spent most of the last 6 hours writing and debugging a substantial
chunk of the code that would be needed. Here's an 0006 patch that
adds functions to serialize and restore PartitionDesc in a manner
similar to what parallel query does for other object types. Since a
PartitionDesc includes a pointer to a PartitionBoundInfo, that meant
also writing functions to serialize and restore those. If we want to
go this route, I think the next thing to do would be to integrate this
into the PartitionDirectory infrastructure.
Basically what I'm imagining we would do there is have a hash table
stored in shared memory to go with the one that is already stored in
backend-private memory. The shared table stores serialized entries,
and the local table stores normal ones. Any lookups try the local
table first, then the shared table. If we get a hit in the shared
table, we deserialize whatever we find there and stash the result in
the local table. If we find it in neither place, we generate a new entry
in the local table and then serialize it into the shared table. It's
not quite clear to me at the moment how to solve the concurrency
problems associated with this design, but it's probably not too hard.
I don't have enough mental energy left to figure it out today, though.
After having written this code, I'm still torn about whether to go
further with this design. On the one hand, this is such boilerplate
code that it's kinda hard to imagine it having too many more bugs; on
the other hand, as you can see, it's a non-trivial amount of code to
add without a real clear reason, and I'm not sure we have one, even
though in the abstract it seems like a better way to go.
Still interested in hearing more opinions.