RE: Bulk Insert into PostgreSQL

From: ROS Didier <didier(dot)ros(at)edf(dot)fr>
To: "skarthikv(dot)iitb(at)gmail(dot)com" <skarthikv(dot)iitb(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: RE: Bulk Insert into PostgreSQL
Date: 2018-06-27 11:46:38
Message-ID: fbcff668e7e74cde8cc0cb060c9a0c13@PCYINTPEXMU001.NEOPROD.EDF.FR
Lists: pgsql-hackers

Hi
I suggest splitting the data to insert into several text files (as many as there are CPUs), creating the pg_background extension, and creating a main transaction which calls x (the number of CPUs) autonomous transactions.
Each one inserts the data from a specific text file via the COPY command.
NB : an autonomous transaction can commit
This should normally divide the duration of the import by the number of CPUs.
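The approach above can be sketched in SQL. This is only an illustration, assuming the pg_background extension is installed, four CPUs, and placeholder table and file names; the data files would have been split beforehand:

```sql
-- Sketch only: pg_background must be installed; target_table and the
-- /data/part_N.csv paths are placeholders.
CREATE EXTENSION IF NOT EXISTS pg_background;

-- Launch one background worker per input file. Each worker runs its COPY
-- in its own (autonomous) transaction and commits independently.
SELECT pg_background_launch(
         format('COPY target_table FROM %L WITH (FORMAT csv)',
                '/data/part_' || i || '.csv')) AS pid
FROM generate_series(1, 4) AS i;

-- Each returned pid can then be waited on to collect the result, e.g.:
--   SELECT * FROM pg_background_result(<pid>) AS (result text);
```

Because each worker holds its own session, the four COPY commands run concurrently rather than serially inside the main transaction.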

Best Regards

Didier ROS
Expertise SGBD
DS IT/IT DMA/Solutions Groupe EDF/Expertise Applicative - SGBD
Nanterre Picasso - E2 565D (aile nord-est)
32 Avenue Pablo Picasso
92000 Nanterre
didier(dot)ros(at)edf(dot)fr<mailto:didier(dot)ros(at)edf(dot)fr>

From: skarthikv(dot)iitb(at)gmail(dot)com [mailto:skarthikv(dot)iitb(at)gmail(dot)com]
Sent: Wednesday, 27 June 2018 13:19
To: pgsql-hackers(at)postgresql(dot)org
Subject: Bulk Insert into PostgreSQL

Hi,
I am performing a bulk insert of 1 TB of TPC-DS benchmark data into PostgreSQL 9.4. It is taking around two days to insert 100 GB of data. Please let me know your suggestions for improving the performance. Below are the configuration parameters I am using:
shared_buffers = 12GB
maintenance_work_mem = 8GB
work_mem = 1GB
fsync = off
synchronous_commit = off
checkpoint_segments = 256
checkpoint_timeout = 1h
checkpoint_completion_target = 0.9
checkpoint_warning = 0
autovacuum = off
Other parameters are set to their default values. Moreover, I specified the primary key constraint during table creation, so the primary key is the only index that exists before data loading; I am sure there are no other indexes apart from the primary key column(s).
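One standard lever with an index in place during a bulk load is to drop the primary key before loading and recreate it afterwards, so the index is built once at the end rather than maintained row by row. A hedged sketch, with placeholder table, column, and file names (the COPY options depend on how the data files were generated; TPC-DS text files are typically pipe-delimited):

```sql
-- Sketch only; store_sales, its key columns, and the file path are
-- placeholders for illustration.
ALTER TABLE store_sales DROP CONSTRAINT IF EXISTS store_sales_pkey;

-- COPY is much faster than row-by-row INSERT for loads of this size.
COPY store_sales FROM '/data/store_sales.dat'
     WITH (FORMAT csv, DELIMITER '|');

-- Rebuild the primary key in a single pass after the load.
ALTER TABLE store_sales ADD PRIMARY KEY (ss_item_sk, ss_ticket_number);
```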

Regards,
Srinivas Karthik


This message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.

If you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.

E-mail communication cannot be guaranteed to be timely, secure, or error- and virus-free.
