Re: Loading 500m json files to database

From: Rob Sargent <robjsargent(at)gmail(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Loading 500m json files to database
Date: 2020-03-23 15:16:40
Message-ID: acf05c41-3f9d-64fd-3823-bfd09bbeb009@gmail.com
Lists: pgsql-general

On 3/23/20 4:24 AM, pinker wrote:
> Hi, do you maybe have an idea of how to make the loading process faster?
>
> I have 500 million JSON files (one JSON document per file) that I need to load
> into the db.
> My test set is "only" 1 million files.
>
> What I came up with now is:
>
> time for i in datafiles/*; do
> psql -c "\copy json_parts(json_data) FROM $i"&
> done
>
> which is the fastest so far. But it's not what I expected. Loading 1m files
> takes me ~3h, so loading 500 times more is just unacceptable.
>
> some facts:
> * the target db is in the cloud, so there is no option for tricks like turning
> fsync off
> * version: PostgreSQL 11
> * I can spin up a huge Postgres instance if necessary in terms of CPU/RAM
> * I have already tried hash partitioning (writing to 10 different tables instead
> of 1)
>
>
> Any ideas?
>
Most advanced languages have a bulk copy implementation. I've found this to be blindingly fast when the receiving table has no indices or constraints. It's not clear how large your files are, but you might take this opportunity to "normalize" them: extract any id, datatype, etc. into table attributes.

> --
> Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
>
>
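To make that concrete, here is a minimal sketch of that approach in Python with psycopg2 (the json_parts(json_data) table comes from your post; the connection string, batch size, and helper names are just placeholders). The idea is to stream many files through one COPY per batch on a single connection, rather than forking one psql process per file:

# Sketch: batch many JSON files into a single COPY per round trip.
# Assumes psycopg2 and the json_parts(json_data) table from the thread;
# everything else here (dbname, batch size) is made up for illustration.
import glob
import io
import json
import psycopg2

BATCH_SIZE = 10_000  # files per COPY; tune for your setup

def copy_batch(cur, paths):
    buf = io.StringIO()
    for path in paths:
        with open(path, encoding="utf-8") as f:
            doc = json.load(f)  # also validates the JSON
        # one document per line; double backslashes for COPY's text format
        buf.write(json.dumps(doc).replace("\\", "\\\\") + "\n")
    buf.seek(0)
    cur.copy_expert("COPY json_parts (json_data) FROM STDIN", buf)

def main():
    paths = glob.glob("datafiles/*")
    with psycopg2.connect("dbname=target") as conn:
        with conn.cursor() as cur:
            for i in range(0, len(paths), BATCH_SIZE):
                copy_batch(cur, paths[i:i + BATCH_SIZE])
    # leaving the "with" block commits the transaction

if __name__ == "__main__":
    main()

If one connection doesn't saturate the server, you can run a few of these loaders in parallel on disjoint slices of the file list, which is far cheaper than spawning one psql process per file.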
