
Re: Question about disk IO and index use and seeking advice

From: Matthew Wakeling <matthew(at)flymine(dot)org>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Question about disk IO and index use and seeking advice
Date: 2008-04-24 15:27:39
Lists: pgsql-performance
On Thu, 24 Apr 2008, Nikolas Everett wrote:
> The setup is kind of a beast.

No kidding.

> When I run dstat I see only around 2M/sec and it is not consistent at all.

Well, it is having to seek over the disc a little. Firstly, your table may 
not be wonderfully ordered for index scans, but goodness knows how long a 
CLUSTER operation might take with that much data. Secondly, when doing an 
index scan, Postgres unfortunately can only use the performance equivalent 
of a single disc, because it accesses the pages one by one in a 
single-threaded manner. A large RAID array will give you a performance 
boost if you are doing lots of index scans in parallel, but not if you are 
only doing one. Greg Stark has a patch in the pipeline to improve this.
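For reference, re-ordering the table to match the index is a one-liner
(8.3-style syntax, using the table and index names from your plan) -- but
note it rewrites the whole table and holds an exclusive lock throughout:

```sql
-- Rewrite bigtable in date_idx order so an index scan on date reads
-- mostly-sequential pages. Takes an ACCESS EXCLUSIVE lock while it runs.
CLUSTER bigtable USING date_idx;
```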

> When I do a similar set of queries on the hardware raid I see similar 
> performance except the numbers are both more than doubled.

Hardware RAID is often better than software RAID. 'Nuff said.

> Here is the explain output for the queries:

EXPLAIN ANALYSE is even better.

> Sort  (cost=16948.80..16948.81 rows=1 width=10)
>   Sort Key: count(*)
>   ->  HashAggregate  (cost=16948.78..16948.79 rows=1 width=10)
>         ->  Index Scan using date_idx on bigtable  (cost=0.00..16652.77 rows=59201 width=10)
>               Index Cond: (date > '2008-04-21 00:00:00'::timestamp without time zone)

That doesn't look like it should take too long. How long does it take? 
(EXPLAIN ANALYSE, in other words). It's a good plan, anyway.
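Judging from the plan, the query is something along these lines (the
grouped column name is a guess); just prefix it with EXPLAIN ANALYSE and
you get actual row counts and timings printed next to the estimates:

```sql
-- some_col is hypothetical; substitute whatever you are grouping on.
EXPLAIN ANALYSE
SELECT some_col, count(*)
  FROM bigtable
 WHERE date > '2008-04-21 00:00:00'
 GROUP BY some_col
 ORDER BY count(*);
```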

> So now the asking for advice part.  I have two questions:
> What is the fastest way to copy data from the smaller table to the larger
> table?

INSERT INTO bigtable (field1, field2) SELECT whatever FROM staging_table
        ORDER BY date;

That will do it all in Postgres. The ORDER BY clause may slow down the 
insert, but it will certainly speed up your subsequent index scans.

If the bigtable isn't getting any DELETE or UPDATE traffic, you don't need 
to vacuum it. However, make sure you do vacuum the staging table, 
preferably directly after moving all that data to the bigtable and 
deleting it from the staging table.
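Put together, a sketch of the whole move (column names are the
placeholders from above, and I'm assuming the date column from your plan
is what you want to order by):

```sql
BEGIN;
INSERT INTO bigtable (field1, field2)
    SELECT field1, field2 FROM staging_table
    ORDER BY date;
DELETE FROM staging_table;
COMMIT;
-- VACUUM cannot run inside a transaction block; run it afterwards so
-- the space from the deleted staging rows is reclaimed.
VACUUM staging_table;
```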

> Can someone point me to a good page on partitioning? My
> gut tells me it should be better, but I'd like to learn more about why.

You could possibly not bother with a staging table, and replace the mass 
copy with making a new partition. Not sure of the details myself though.
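If you do go that route, the 8.x mechanism is table inheritance plus
CHECK constraints. Very roughly (partition name made up):

```sql
-- One child table per month, constrained to that month's date range.
CREATE TABLE bigtable_2008_04 (
    CHECK (date >= '2008-04-01' AND date < '2008-05-01')
) INHERITS (bigtable);
-- With constraint_exclusion = on, a query on bigtable with a date
-- predicate skips child tables whose CHECK rules out the range.
-- Loading a month's data then means populating one child table
-- instead of doing a mass copy into one huge table.
```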


Me... a skeptic?  I trust you have proof?

