Re: proposal: possibility to read dumped table's name from file

From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Daniel Gustafsson <daniel(at)yesql(dot)se>
Cc: vignesh C <vignesh21(at)gmail(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: proposal: possibility to read dumped table's name from file
Date: 2020-07-13 15:33:38
Message-ID: CAFj8pRDXzY0GDbEF2mG25Nb8NvCmSyh1ZVxYD88ooDhDuja81A@mail.gmail.com
Lists: pgsql-hackers

On Mon, 13 Jul 2020 at 16:57, Daniel Gustafsson <daniel(at)yesql(dot)se>
wrote:

> > On 13 Jul 2020, at 13:02, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com> wrote:
>
> > I like the JSON format. But why here? For this purpose JSON is
> > over-engineered.
>
> I respectfully disagree, JSON is a commonly used and known format in
> systems administration and most importantly: we already have code to
> parse it in the frontend.
>

I disagree with the idea that if we have a client-side JSON parser we have
to use it everywhere. For this case, parsing JSON means more code, not
less. I checked parse_manifest.c. Moreover, the JSON API there is DOM
style; for this purpose a SAX-style interface would be better. But still,
things should be as simple as possible. There is no necessity to use JSON
here.
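
To show what I mean (this is only a minimal sketch for illustration, not
code from the patch; the "+name"/"-name" lines and the "#" comments are
assumptions for the example), such a file can be read in one streaming
pass without building any intermediate document:

/*
 * Illustrative sketch only - not code from the patch, and the
 * "+name" / "-name" syntax is only assumed for the example.  The point
 * is that a line-oriented filter can be consumed in a single streaming
 * pass, with no parser state beyond the current line.
 */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

int
main(int argc, char **argv)
{
    FILE       *fp;
    char        line[1024];

    if (argc < 2)
    {
        fprintf(stderr, "usage: %s filterfile\n", argv[0]);
        return 1;
    }

    fp = fopen(argv[1], "r");
    if (fp == NULL)
    {
        perror(argv[1]);
        return 1;
    }

    while (fgets(line, sizeof(line), fp) != NULL)
    {
        char       *p = line;

        /* strip the trailing newline, if any */
        line[strcspn(line, "\r\n")] = '\0';

        /* allow leading whitespace */
        while (isspace((unsigned char) *p))
            p++;

        /* skip empty lines and comments */
        if (*p == '\0' || *p == '#')
            continue;

        if (*p == '+')
            printf("include: %s\n", p + 1);
        else if (*p == '-')
            printf("exclude: %s\n", p + 1);
        else
            fprintf(stderr, "invalid line: %s\n", p);
    }

    fclose(fp);
    return 0;
}

There is no intermediate tree, no quoting or escaping rules, and an error
message can simply point at the offending line.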

JSON is good for a lot of purposes, and it can be a good fit when the
document uses more lexical types (numbers, ...). But nothing like that is
used here.

> > This input file has no nested structure - it is just a stream of lines.
>
> Well, it has a set of object types which in turn have objects. There is
> more structure than meets the eye.
>

> Also, the current patch allows arbitrary whitespace before object names,
> but no whitespace before comments etc. Using something where the rules
> of parsing are known is rarely a bad thing.
>

As far as I know, JSON has no comments at all.

> > I don't think introducing JSON here can be a good idea.
>
> Quite possibly it isn't, but not discussing options seems like a worse
> idea so I wanted to bring it up.
>
> > It is a really different case than the pg_dump manifest file - in this
> > case pg_dump is the consumer.
>
> Right, as I said these are two different, while tangentially related,
> things.
>

The backup manifest format has non-trivial complexity, and using JSON there
makes sense. The input filter file is trivial - a +/- list of strings (and
it always will be).
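
For example (the keywords below are hypothetical - I am not quoting the
exact syntax from the patch), the whole input can be just

    +t public.orders
    +t public.customers
    -t public.audit_log

while one possible JSON equivalent has to wrap the same three strings in
objects, arrays and quoting:

    {"include": {"tables": ["public.orders", "public.customers"]},
     "exclude": {"tables": ["public.audit_log"]}}

Producing the first form from a script or from psql output is trivial; the
second needs correct quoting and escaping on the producer side too.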

In this case I don't see any benefit from JSON on either side (producer or
consumer). It is (a little bit) harder to parse, and (a little bit) harder
to generate.

Regards

Pavel

> cheers ./daniel
