Re: Parallel copy

From: Greg Nancarrow <gregn4422(at)gmail(dot)com>
To: Bharath Rupireddy <bharath(dot)rupireddyforpostgres(at)gmail(dot)com>
Cc: vignesh C <vignesh21(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Parallel copy
Date: 2020-09-16 07:50:03
Message-ID: CAJcOf-dUchi35jTZu7Qdjs9P6=u3t73oLsLXSiW6EqK0=eY6dg@mail.gmail.com
Lists: pgsql-hackers

Hi Bharath,

On Tue, Sep 15, 2020 at 11:49 PM Bharath Rupireddy
<bharath(dot)rupireddyforpostgres(at)gmail(dot)com> wrote:
>
> Few questions:
> 1. Was the run performed with default postgresql.conf file? If not,
> what are the changed configurations?
Yes, just default settings.

> 2. Are the readings for normal copy(190.891sec, mentioned by you
> above) taken on HEAD or with patch, 0 workers?
With patch.

> How much is the runtime
> with your test case on HEAD(Without patch) and 0 workers(With patch)?
TBH, I didn't test that. Looking at the changes, I wouldn't expect any
performance degradation for normal copy (you have tested that, right?).

> 3. Was the run performed on release build?
For generating the perf data I sent (normal copy vs parallel copy with
1 worker), I used a debug build (-g -O0), as that is needed for
generating all the relevant perf data for Postgres code. Previously I
ran with a release build (-O2).
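For reference, the two build configurations mentioned above would typically be set up along these lines for a PostgreSQL source tree. This is only an illustrative sketch: the mail states just the optimization levels (-g -O0 vs -O2), so the exact configure invocations are my assumption.

```shell
# Assumed configure invocations -- illustrative only, not the exact
# commands used for the reported runs.

# Debug build: full symbols, no optimization, so perf can resolve
# Postgres internals accurately.
./configure --enable-debug CFLAGS="-g -O0"
make -j4 && make install

# Release-style build: optimized, used for representative timings.
./configure CFLAGS="-O2"
make -j4 && make install
```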

> 4. Were the readings taken on multiple runs(say 3 or 4 times)?
The readings I sent were from just one run (not averaged), but I did
run the tests several times to verify the readings were representative
of the pattern I was seeing.

Fortunately I have been given permission to share the exact table
definition and data I used, so you can check the behaviour and timings
on your own test machine. Please see the attachment.
1. Create the table using the table.sql and index_4.sql definitions in
the "sql" directory.
2. Create the data.csv file (to be loaded by COPY) with the included
"dupdata" tool in the "input" directory; build it, then run it,
specifying a suitable number of records and the path of the template
record (see README). Obviously, the larger the number of records, the
larger the file ...
3. Load the table using COPY with "format csv" (and "parallel N" if
testing parallel copy).
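Put together, a test session along those lines might look like the following psql sketch. The table name and file paths here are placeholders (the real definitions are in the attachment), so adjust them to match your setup.

```sql
-- Illustrative psql session; "test_tbl" and the paths are placeholders,
-- since the actual names come from the attached table.sql and data.
\i sql/table.sql
\i sql/index_4.sql

\timing on

-- Serial COPY (baseline):
COPY test_tbl FROM '/path/to/data.csv' WITH (format csv);

-- Parallel COPY with the patch applied, e.g. 4 workers:
COPY test_tbl FROM '/path/to/data.csv' WITH (format csv, parallel 4);
```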

Regards,
Greg Nancarrow
Fujitsu Australia

Attachment Content-Type Size
table_data_generation_files_to_share.zip application/zip 4.6 KB
