
Re: Postgres Connections Requiring Large Amounts of Memory

From: SZŰCS Gábor <surrano(at)mailbox(dot)hu>
To: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Postgres Connections Requiring Large Amounts of Memory
Date: 2003-06-18 07:11:25
Message-ID: 004a01c33568$d48235c0$0403a8c0@fejleszt4
Lists: pgsql-performance
----- Original Message ----- 
From: "Dawn Hollingsworth" <dmh(at)airdefense(dot)net>
Sent: Tuesday, June 17, 2003 11:42 AM


> I'm not starting any of my own transactions and I'm not calling stored
> procedures from within stored procedures. The stored procedures do have
> large parameter lists, up to 100. The tables are from 300 to 500

Geez! I don't think it'll help you find the memory leak (if any), but
couldn't you normalize those tables into smaller ones? That may be a pain
when updating (views and rules), but I think it'd be worth it in resources
(time and memory, though maybe not disk space). I wonder what the maximum
number of updated columns is, and how weak the correlation between their
semantics is in a single transaction (i.e. one function call), since there
are "only" 100 params for a proc.
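A minimal sketch of the kind of split I mean, with invented table and
column names (your schema will differ): move groups of related columns
into narrower tables, then reassemble the wide row with a view and route
updates through a rule.

```sql
-- Hypothetical example: one wide table split into two narrower ones.
CREATE TABLE sensor_core (
    id  int8 PRIMARY KEY,
    ts  timestamptz NOT NULL
);

CREATE TABLE sensor_counters (
    id       int8 PRIMARY KEY REFERENCES sensor_core (id),
    pkts_in  int8,
    pkts_out int8
);

-- A view reassembles the original wide row for readers...
CREATE VIEW sensor AS
    SELECT c.id, c.ts, k.pkts_in, k.pkts_out
    FROM sensor_core c
    JOIN sensor_counters k USING (id);

-- ...and a rule routes updates on the view to the underlying tables.
CREATE RULE sensor_upd AS ON UPDATE TO sensor DO INSTEAD (
    UPDATE sensor_core
       SET ts = NEW.ts
     WHERE id = OLD.id;
    UPDATE sensor_counters
       SET pkts_in = NEW.pkts_in, pkts_out = NEW.pkts_out
     WHERE id = OLD.id
);
```

Each stored procedure would then touch only the narrow table(s) whose
columns it actually updates, instead of passing ~100 parameters at once.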

> columns. 90% of the columns are either INT4 or INT8.  Some of these
> tables are inherited. Could that be causing problems?

Huh. That's still 30-50 columns of other types (the size of a fairly large
table, for me) :)

G.
------------------------------- cut here -------------------------------


Copyright © 1996-2014 The PostgreSQL Global Development Group