Re: [E] Re: Most effective and fast way to load few Tbyte of data from flat files into postgresql

From: "Saha, Sushanta K" <sushanta(dot)saha(at)verizonwireless(dot)com>
To: "Peter J(dot) Holzer" <hjp-pgsql(at)hjp(dot)at>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: [E] Re: Most effective and fast way to load few Tbyte of data from flat files into postgresql
Date: 2020-08-25 12:18:08
Message-ID: CAHty+vNGwmrLFMytWEB6v4x8ScPCMdMF0DKiW5m_XS2tYR8Kog@mail.gmail.com
Lists: pgsql-general

You can also explore "pgloader".
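
A minimal sketch of a pgloader invocation, in case it helps (the CSV path,
database name, and table name below are placeholders, not anything from this
thread):

    pgloader --type csv \
             --with "truncate" \
             --with "fields terminated by ','" \
             /data/big_table.csv \
             postgresql:///mydb?tablename=big_table

Unlike a plain COPY, pgloader batches its inserts and can keep loading past
rejected rows, logging them separately instead of aborting the whole load.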

.... Sushanta

On Tue, Aug 25, 2020 at 7:24 AM Peter J. Holzer <hjp-pgsql(at)hjp(dot)at> wrote:

> On 2020-08-24 21:17:36 +0000, Dirk Krautschick wrote:
> > What would be the fastest or most effective way to load a few (5-10)
> > TB of data from flat files into a PostgreSQL database, including some
> > 1 TB tables and blobs?
> >
> > There is the COPY command, but there is no native parallelism, right?
> > I have found pg_bulkload but haven't tested it yet. As far as I can
> > see, EDB has its EDB*Loader as a commercial option.
>
> A single COPY isn't parallel, but you can run several of them in
> parallel (that's what pg_restore -j N does). So the total time may be
> dominated by your largest table (or I/O bandwidth).
>
> hp
>
> --
>    _  | Peter J. Holzer    | Story must make more sense than reality.
> |_|_) |                    |
> | |   | hjp(at)hjp(dot)at  | -- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |    challenge!"
>
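
For reference, a minimal sketch of the parallel-COPY approach Peter describes,
splitting one big file and loading the pieces in concurrent psql sessions
(file, table, and database names are placeholders):

    # Split the input into 8 chunks on line boundaries (GNU split),
    # then run one COPY per chunk, each in its own psql session.
    split -n l/8 /data/big_table.csv part_
    for f in part_*; do
        psql -d mydb -c "\copy big_table FROM '$f' WITH (FORMAT csv)" &
    done
    wait

Each chunk loads in its own transaction, so an error in one chunk does not
roll back the others; dropping or deferring indexes until after the load
usually helps as well.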

--
Sushanta Saha | MTS IV-Cslt-Sys Engrg | WebIaaS_DB Group | HQ - Verizon Wireless
O 770.797.1260  C 770.714.6555  IaaS Support Line 949-286-8810
