Re: Idea: INSERT INTO ... NATURAL SELECT ...

From: Sven Berkvens-Matthijsse <sven(at)postgresql(dot)berkvens(dot)net>
To: Martin Stöcker <martin(dot)stoecker(at)stb-datenservice(dot)de>, pgsql-sql(at)lists(dot)postgresql(dot)org
Subject: Re: Idea: INSERT INTO ... NATURAL SELECT ...
Date: 2019-02-04 16:19:29
Message-ID: 7edf1575-d544-f643-e331-d340b26e87b4@berkvens.net
Lists: pgsql-sql

Hi Martin,

On 01/02/2019 19.41, Martin Stöcker wrote:
> Hi Sven,
>
> in many cases I prefer "copy from" to import data, so that I can
> create my test data with a spreadsheet and CSV.

Sure, that works, but it would then be useful if the COPY command
actually read the first line of a CSV file and used it to map the data
to the target columns in the table. It does not: the HEADER option
merely skips that line.
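To illustrate (the table and file names are made up for the example):
even with HEADER, the column list still has to be spelled out by hand,
because COPY ignores the contents of the header line.

```sql
-- Hypothetical table and CSV file, just to show the point.
-- HEADER makes COPY skip the first line of the file; it does NOT use
-- that line to match CSV columns to table columns, so the column list
-- must still be written out explicitly:
COPY test (i, t) FROM '/tmp/test.csv' WITH (FORMAT csv, HEADER);
```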

> Sometimes it is helpful to do some shell stuff to create it or import
> with psql.

Yes, that definitely works.

> But if you prefer to have the column names and the column data near
> to each other, why not use JSON?
>
> postgres=# create table test ( i integer, t text);
> postgres=# insert into test(select * from
> json_to_recordset('[{"i":1,"t":"foo"},{"i":"7","t":"bar"}]') as x(i
> int, t text));

That also works, but it requires naming all the columns and their types
in the "AS x" part, which makes the statement very verbose. My proposal
would not require the types to be stated at all, and would need the
column names only in the "data part" of the statement.

> INSERT 0 2
> postgres=# select * from test;
>  i |  t
> ---+-----
>  1 | foo
>  7 | bar
> (2 rows)
>
> Regards Martin
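Incidentally, json_populate_recordset comes closer to what I am after:
it takes the column names and types from the table's row type, so the
"AS x(...)" clause disappears. An untested sketch against the same test
table:

```sql
-- json_populate_recordset derives the columns and their types from the
-- table's row type (null::test), so no "AS x(i int, t text)" is needed:
INSERT INTO test
SELECT * FROM json_populate_recordset(null::test,
    '[{"i":1,"t":"foo"},{"i":7,"t":"bar"}]');
```

The remaining verbosity is then only the repeated keys in the JSON
itself, which is roughly what my proposed syntax tries to avoid.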

Thanks for your thoughts!

With kind regards,
Sven
