Re: Large DB

From: Manfred Koizar <mkoi-pg(at)aon(dot)at>
To: "Mooney, Ryan" <ryan(dot)mooney(at)pnl(dot)gov>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Large DB
Date: 2004-03-31 09:17:35
Message-ID: rlvk60p8o4oib9bsvglsqchiuug97u9i26@email.aon.at
Lists: pgsql-general

On Tue, 30 Mar 2004 17:48:14 -0800, "Mooney, Ryan" <ryan(dot)mooney(at)pnl(dot)gov>
wrote:
>I have a single table that just went over 234GB in size with about 290M+
>rows.

That would mean ~ 800 bytes/row which, given your schema, is hard to
believe unless there are lots of dead tuples lying around.
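
You can cross-check against what the planner has recorded; note that
these are the figures from the last VACUUM/ANALYZE, not live values:

  SELECT relpages, reltuples FROM pg_class WHERE relname = 'tp3';
  -- relpages * 8 KB is the heap size the planner believes in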

>queries use the indexes fairly well, although I suspect that the order
>of host/starttime is suboptimal (fewer hosts than starttime, and the
>table is naturally in starttime order). I'm going to try adding an
>index on just starttime (and later just host) and see if I can tune the
>queries on that more.

Yes. If you are prepared to switch operating systems for a 10%
performance gain, then getting your indices right should not even be a
question.
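
A sketch of what I mean (the index names are mine, and building them on
a table this size will take a while):

  CREATE INDEX tp3_starttime ON tp3 (starttime);
  -- and later, for the comparison:
  CREATE INDEX tp3_host ON tp3 (host);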

> I never delete rows from the table, only do
>inserts (up to around 11,000/minute mostly in one big burst every
>minute, this is anticipated to go up some over time).

How often do you ANALYSE?

Have there been DELETEs or UPDATEs or aborted transactions in the past?
Did you VACUUM or VACUUM FULL since then?
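
If in doubt, one command answers both questions (expect it to run for
quite a while on a table this size):

  VACUUM VERBOSE ANALYZE tp3;
  -- the VERBOSE output reports the dead row versions it found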

> I'm only doing sub 15MB from the disk
>array (from iostat) and I know it can do in the 40-60MB range when we
>tested the raw speed,

Sounds plausible for nonsequential I/O.

>However the two indexes are also - large (which may be part of the
>problem, which is why I'm trying just starttime for an index; They are
>currently in the 140-150G range).

This would be extreme index bloat which is only possible after massive
DELETEs/UPDATEs.
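
If that is what happened here, a REINDEX would shrink them back:

  REINDEX TABLE tp3;
  -- takes an exclusive lock on the table, so plan for downtime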

>stats=> explain select count(*) from tp3;
> -> Seq Scan on tp3 (cost=0.00..6906493.16 rows=290602016 width=0)

The planner thinks that the table size is 4M pages (32GB). An average
tuple size of ~110 bytes (including tuple header) would suit your schema
quite nicely. Note the contrast to the 234GB you reported: either those
statistics are hopelessly out of date, or most of your 234GB are dead
tuples and free space.
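
Here's the arithmetic behind that, assuming the default cpu_tuple_cost
of 0.01:

  seq scan cost = relpages + reltuples * cpu_tuple_cost
  6906493.16   = relpages + 290602016 * 0.01
  => relpages  = 4000473 pages * 8 KB = ~32 GB
  => 32 GB / 290M rows = ~110 bytes/row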

> Table "public.tp3"
> Column | Type | Modifiers
>-------------+-----------------------------+-----------
> host | character(4) |
> point | character varying(64) |
> type | character(1) |
> cooked | character(1) |
> starttime | timestamp without time zone |
> intervallen | interval |
> arrivetime | timestamp without time zone |
> pvalue | numeric |
>Indexes:
> "tp3_host_starttime" btree (host, starttime, cooked)
> "tp3_point_starttime" btree (point, starttime, cooked)

In my experience, any reduction in average tuple size translates
directly into a proportional throughput increase for large tables. So
here are some random thoughts:

You said there are only a few hosts. So moving the hosts into a
separate table with an integer primary key would save 4 bytes per row.
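
Something like this (table and column names are just placeholders):

  CREATE TABLE hosts (
      hostid int PRIMARY KEY,
      host   char(4) NOT NULL
  );
  -- tp3 then stores hostid (4 bytes) instead of host
  -- (8 bytes including the length header)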

Datatype "char" (with quotes) needs only 1 byte, char(1) needs 5 bytes,
both before padding. Changing type and cooked from char(1) to "char"
would save 12 bytes.

And if you want to push it, change hostid to smallint and rearrange the
fields, saving 4 more padding bytes (all of this is put together in the
sketch below):
hostid | smallint
type | "char"
cooked | "char"

What about point? If there is a known small number of different values,
move it into its own table.
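
Putting these ideas together, tp3 could end up looking like this
(pointid and the lookup tables are assumptions on my part, not
something you must do):

  CREATE TABLE tp3 (
      hostid      smallint,                    -- -> hosts
      type        "char",
      cooked      "char",
      pointid     int,                         -- -> points, if you split it out
      starttime   timestamp without time zone,
      intervallen interval,
      arrivetime  timestamp without time zone,
      pvalue      bigint                       -- see below
  ) WITHOUT OIDS;                              -- see below, too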

I'm not sure about the storage needs of numeric; it is a variable-length
type, so with its length header it should be at least 8 bytes, probably
more. Consider using bigint. Someone please correct me if I'm wrong.

Did you CREATE TABLE tp3 (...) WITHOUT OIDS? If not, every row carries
an extra 4 byte OID it doesn't need.

>Sample data mining query:
>----------------------------
>select point, avg(pvalue) as avg from tp3 where host in ('node', 'node',
>....) and starttime between 'timestamp' and 'timestamp' group by point

Show us EXPLAIN ANALYSE, please.
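
I.e. the same query with EXPLAIN ANALYZE in front of it:

  EXPLAIN ANALYZE
  SELECT point, avg(pvalue) AS avg
    FROM tp3
   WHERE host IN ('node', 'node', ...)
     AND starttime BETWEEN 'timestamp' AND 'timestamp'
   GROUP BY point;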

>shared_buffers = 60800

Looks a bit large to me. But if your tests have shown it to be the best
value, it should be ok.

>sort_mem = 1286720 # min 64, size in KB

This is more than 1GB, and sort_mem is allocated per sort operation, per
backend, so a handful of concurrent sorts could exhaust your RAM. I
think this is far too high.

>fsync=false # Play fast and loose - whee

How much did this help?

>effective_cache_size = 160000

Try more, say 320000 or even 400000.
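
For reference, the direction I would experiment in (the numbers are mine
and only starting points for your own tests):

  shared_buffers = 60800          # ~475 MB; keep if your tests confirm it
  sort_mem = 65536                # 64 MB; SET sort_mem per session when a big query needs more
  effective_cache_size = 320000   # ~2.5 GB; try 400000 as well
  fsync = false                   # only if you can afford to lose the database in a crash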

Servus
Manfred
