Bulk Insert into PostgreSQL

From: Srinivas Karthik V <skarthikv(dot)iitb(at)gmail(dot)com>
To: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Bulk Insert into PostgreSQL
Date: 2018-06-27 11:18:57
Message-ID: CAEfuzeRtZ-k14+ozqrQ+1T1wPG8tfw6xZit1MxH=Oh4Yb_+xYA@mail.gmail.com
Lists: pgsql-hackers

Hi,
I am performing a bulk insert of the 1 TB TPC-DS benchmark data into PostgreSQL
9.4. It is taking around two days to insert 100 GB of data. Please let me
know your suggestions for improving the performance. Below are the
configuration parameters I am using:
shared_buffers = 12GB
maintenance_work_mem = 8GB
work_mem = 1GB
fsync = off
synchronous_commit = off
checkpoint_segments = 256
checkpoint_timeout = 1h
checkpoint_completion_target = 0.9
checkpoint_warning = 0
autovacuum = off
Other parameters are set to their default values. Moreover, I specified the
primary key constraint during table creation, so its implicit index is the
only index created before data loading; I am sure there are no other
indexes apart from the primary key column(s).
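[A load of the kind described above is typically driven through COPY rather than row-by-row INSERTs, with the primary key added after the data is in. A minimal sketch — the table, columns, and file path are assumptions based on the TPC-DS schema, not taken from the original message:]

```sql
-- Hypothetical example: load one TPC-DS table from a pipe-delimited flat file.
-- COPY streams the file through a single command, avoiding per-row
-- INSERT parsing and planning overhead.
COPY store_sales FROM '/data/tpcds/store_sales.dat'
    WITH (FORMAT csv, DELIMITER '|');

-- Building the primary key after the load is usually much faster than
-- maintaining its index row by row during the load.
ALTER TABLE store_sales
    ADD PRIMARY KEY (ss_item_sk, ss_ticket_number);
```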

Regards,
Srinivas Karthik
