Re: Huge number of disk writes after migration to 8.1

From: "Magnus Hagander" <mha(at)sollentuna(dot)net>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Alvaro Herrera" <alvherre(at)alvh(dot)no-ip(dot)org>
Cc: <pgsql-bugs(at)postgreSQL(dot)org>
Subject: Re: Huge number of disk writes after migration to 8.1
Date: 2006-01-19 11:51:44
Message-ID: 6BCB9D8A16AC4241919521715F4D8BCE6C7ED8@algol.sollentuna.se
Lists: pgsql-bugs

> Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org> writes:
> > Maybe the fact that the stat file is completely rewritten
> > every 500 ms should be reconsidered, if in the future someone
> > chooses to rewrite the stat system. We can reconsider this
> > part then, as well.
>
> Yeah, it's becoming pretty obvious that that design does not
> scale very well. I don't immediately have any ideas about a
> better way though.
>
> I am working on some marginal hacks like not writing more of
> the backend activity strings than is needed, but it'd be
> nicer to think of a different solution.
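
The "marginal hack" mentioned above (not writing more of the activity string than is needed) could look roughly like this. A minimal sketch in Python rather than PostgreSQL's actual C code, with an assumed slot size; the function name is hypothetical:

```python
# Illustrative only: write a length-prefixed activity string instead of
# always flushing the full fixed-width slot to disk.
import struct

SLOT_SIZE = 1024  # assumed fixed per-backend buffer size


def serialize_activity(query: str) -> bytes:
    """Emit 4-byte length + used bytes, not the whole SLOT_SIZE buffer."""
    data = query.encode("utf-8")[:SLOT_SIZE - 1]
    return struct.pack("<I", len(data)) + data


naive = b"SELECT 1".ljust(SLOT_SIZE, b"\0")   # old way: always 1024 bytes
trimmed = serialize_activity("SELECT 1")      # new way: 4 + 8 = 12 bytes
print(len(naive), len(trimmed))               # → 1024 12
```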

In most cases you're going to see extremely few reads compared to writes
on pg_stats, right? So why not have the backends connect to the stats
process (or perhaps use UDP, or the pipe, or whatever) and fetch the
data only when it's needed? That way, when nobody fetches any data,
there is no overhead (except for the stats process adding up values, of
course).
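
To make the pull model concrete, here is a minimal sketch: the stats process keeps counters in memory and serializes them only when a backend asks, instead of rewriting a file every 500 ms. The counter names and the JSON-over-UDP wire format are illustrative assumptions, not PostgreSQL's actual protocol:

```python
# Pull-based stats sketch: serialization cost is paid only on a read.
import json
import socket
import threading

stats = {"xact_commit": 0, "blks_read": 0}   # counters the collector adds up


def stats_collector(sock: socket.socket) -> None:
    """Answer one fetch request, then exit (a real collector would loop)."""
    _, addr = sock.recvfrom(64)              # block until someone asks
    sock.sendto(json.dumps(stats).encode(), addr)


server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))                # ephemeral local port
threading.Thread(target=stats_collector, args=(server,), daemon=True).start()

# "Backend" side: bump a counter, then explicitly fetch the snapshot.
stats["xact_commit"] += 1
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"fetch", server.getsockname())
reply, _ = client.recvfrom(4096)
print(json.loads(reply))                     # → {'xact_commit': 1, 'blks_read': 0}
```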

Then you could also push down some filtering to the stats process - for
example, when you are reading from pg_stat_activity there is no need to
send over the row-level stats. IIRC, today you have to read (and write)
the whole stats file anyway.
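
The filtering idea could be sketched like this: the fetch request names the view being read, and the stats process serializes only that slice. The view names and layout are illustrative assumptions:

```python
# Filter pushed down to the stats process: only the requested subset is
# serialized, instead of the whole stats file.
import json

collector_state = {
    "activity": {"pid": 4711, "query": "SELECT 1"},
    "row_stats": {"pg_class": {"seq_scan": 3, "idx_scan": 17}},
}


def handle_request(view: str) -> bytes:
    """Serialize only the slice the backend asked for."""
    return json.dumps(collector_state.get(view, {})).encode()


# Reading pg_stat_activity no longer drags the row-level stats along.
reply = handle_request("activity")
print(json.loads(reply))                 # → {'pid': 4711, 'query': 'SELECT 1'}
```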

//Magnus
