Re: Load data from a csv file without using COPY

From: "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>
To: Ravi Krishna <srkrishna(at)yahoo(dot)com>
Cc: Alban Hertroys <haramrae(at)gmail(dot)com>, PG mailing List <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Load data from a csv file without using COPY
Date: 2018-06-19 21:32:10
Message-ID: CAKFQuwaUfLBHZp15dCQTddi37cwdGs2pV7VjfMQ-ae_NOdLiEQ@mail.gmail.com
Lists: pgsql-general

On Tue, Jun 19, 2018 at 2:17 PM, Ravi Krishna <srkrishna(at)yahoo(dot)com> wrote:

> >
> > I think an easy approach would be to COPY the CSV files into a separate
> > database using psql's \copy command and then pg_dump that as separate
> > insert statements with pg_dump --inserts.
> >
>
> This was my first thought too. However, as I understand it, pg_dump
> --inserts basically runs an INSERT INTO ... statement for every row.
> In other words, each row is un-prepared and executed individually. That
> is also not a real-life scenario.
>
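
For reference, the round-trip being discussed would look roughly like
this (the database, table, and file names here are only illustrative):

    createdb staging
    psql -d staging -c "CREATE TABLE t (id int, val text);"
    psql -d staging -c "\copy t FROM 'data.csv' WITH (FORMAT csv, HEADER)"
    pg_dump --data-only --inserts -t t staging > inserts.sql

The resulting inserts.sql then contains one plain INSERT statement per
row of the CSV file.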

You really need to describe what you consider to be a "real life
scenario", and probably give a better idea of how these CSV files are
created and how many there are, in addition to describing the relevant
behavior of the application you are testing.
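
To make the prepared/un-prepared distinction concrete: pg_dump --inserts
emits plain single-row statements, whereas a client that prepares would
do something like the following (the table and column types here are
hypothetical):

    PREPARE ins (int, text) AS
        INSERT INTO t (id, val) VALUES ($1, $2);
    EXECUTE ins(1, 'first row');
    EXECUTE ins(2, 'second row');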

If you want maximum realism, you should probably write integration tests
for your application and then execute those at high volume.

Or, at a minimum, give an example of the output you would want from this
unknown program...

David J.
