From: Rick Otten <rottenwindfish(at)gmail(dot)com>
To: Daniel Blanch Bataller <daniel(dot)blanch(dot)bataller(at)gmail(dot)com>
Cc: Mariel Cherkassky <mariel(dot)cherkassky(at)gmail(dot)com>, "pgsql-performa(dot)" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: performance problem on big tables
Date: 2017-08-14 15:45:01
Message-ID: CAMAYy4+C+FtGOJTB6OtK2cisJtkoZSMMrEKdxRQS13oWBgtpGA@mail.gmail.com
Lists: pgsql-performance
Moving that many gigs of data across your network could also take a long
time, simply depending on your network configuration. Before spending a
huge amount of energy tuning PostgreSQL, I'd probably look at how long it
takes to just copy 20 or 30 GB of data between the two machines.
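
For example, something along these lines gives a rough feel for the raw
transfer speed between the two boxes (the user, hostname, and paths are
placeholders; use /dev/urandom instead of /dev/zero if anything on the path
compresses data):

    # generate a ~20G test file, then time the copy to the remote machine
    dd if=/dev/zero of=/tmp/xfer_test.bin bs=1M count=20480
    time scp /tmp/xfer_test.bin someuser@remote-host:/tmp/
    rm -f /tmp/xfer_test.bin

If that copy already eats a large chunk of your load time, the bottleneck
probably isn't on the PostgreSQL side.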
> On 14 Aug 2017, at 15:24, Mariel Cherkassky <mariel(dot)cherkassky(at)gmail(dot)com> wrote:
>
> I have performance issues with two big tables. Those tables are located on
> a remote Oracle database. I'm running the query: insert into
> local_postgresql_table select * from oracle_remote_table.
>
> The first table has 45M records and its size is 23G. The import of the
> data from the remote Oracle database takes 1 hour and 38 minutes. After
> that I create 13 regular indexes on the table, and it takes 10 minutes per
> index -> 2 hours and 10 minutes in total.
>
> The second table has 29M records and its size is 26G. The import of the
> data from the remote Oracle database takes 2 hours and 30 minutes. The
> creation of the indexes takes 1 hour and 30 minutes (some are indexes on
> one column and their creation takes 5 min each, and some are indexes on
> multiple columns and take 11 min each).
>
>
>
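
For reference, a rough way to time each phase of the workflow described above
separately, so you can see whether the transfer or the index builds dominate
(the database name and the indexed column are placeholders; the table names
are taken from the post):

    psql -d mydb <<'SQL'
    \timing on
    INSERT INTO local_postgresql_table SELECT * FROM oracle_remote_table;
    CREATE INDEX ON local_postgresql_table (some_column);
    SQL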