Most effective and fast way to load a few TB of data from flat files into PostgreSQL

From: Dirk Krautschick <Dirk(dot)Krautschick(at)trivadis(dot)com>
To: "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Most effective and fast way to load a few TB of data from flat files into PostgreSQL
Date: 2020-08-24 21:17:36
Message-ID: AM0PR05MB6082D02993A44A96B2F6D70EE9560@AM0PR05MB6082.eurprd05.prod.outlook.com

Hi,

what would be the fastest or most effective way to load a few (5-10) TB of data from flat files into
a PostgreSQL database, including some 1 TB tables and blobs?

There is the COPY command, but it has no native parallelism, right? I have found pg_bulkload
but haven't tested it yet. As far as I can see, EDB has its EDB*Loader as a commercial option.
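
What I could do myself is client-side parallelism, i.e. split the input into chunks
and run one COPY per connection, something like this (a minimal sketch, not a tested
recipe; the DSN, the table name "target_table", and the file layout are just
placeholders):

    import glob
    from multiprocessing import Pool

    import psycopg2

    DSN = "dbname=mydb user=postgres"                # placeholder DSN
    FILES = sorted(glob.glob("/data/chunks/*.csv"))  # placeholder layout

    def load_one(path):
        conn = psycopg2.connect(DSN)  # one connection (= one backend) per worker
        try:
            with conn, conn.cursor() as cur, open(path) as f:
                # COPY FROM STDIN streams the file through this session
                cur.copy_expert(
                    "COPY target_table FROM STDIN WITH (FORMAT csv)", f)
        finally:
            conn.close()
        return path

    if __name__ == "__main__":
        # Four concurrent COPY streams; tune to the available I/O and CPU.
        with Pool(processes=4) as pool:
            for done in pool.imap_unordered(load_one, FILES):
                print("loaded", done)
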

Anything else to recommend?

Thanks and best regards

Dirk
