Re: Analyzing foreign tables & memory problems

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Albe Laurenz" <laurenz(dot)albe(at)wien(dot)gv(dot)at>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Analyzing foreign tables & memory problems
Date: 2012-04-30 14:24:30
Message-ID:
Lists: pgsql-hackers
"Albe Laurenz" <laurenz(dot)albe(at)wien(dot)gv(dot)at> writes:
> During ANALYZE, in analyze.c, the functions compute_minimal_stats
> and compute_scalar_stats skip values whose length exceeds
> WIDTH_THRESHOLD (= 1024) when calculating statistics; such values
> are only counted as "too wide" rows and assumed to be all
> distinct.

> This works fine with regular tables; values exceeding that threshold
> don't get detoasted and won't consume excessive memory.

> With foreign tables the situation is different.  Even though
> values exceeding WIDTH_THRESHOLD won't get used, the complete
> rows will be fetched from the foreign table.  This can easily
> exhaust maintenance_work_mem.

I'm fairly skeptical that this is a real problem, and would prefer not
to complicate wrappers until we see some evidence from the field that
it's worth worrying about.  The WIDTH_THRESHOLD logic was designed a
dozen years ago when common settings for work_mem were a lot smaller
than today.  Moreover, to my mind it's always been at least as much
about avoiding detoasting operations as about saving memory, and we
don't have anything equivalent to that consideration in foreign data
wrappers.

			regards, tom lane
