"Albe Laurenz" <laurenz(dot)albe(at)wien(dot)gv(dot)at> writes:
> During ANALYZE, in analyze.c, the functions compute_minimal_stats
> and compute_scalar_stats exclude values whose length exceeds
> WIDTH_THRESHOLD (= 1024) from the statistics calculations;
> such values are only counted as "too wide" rows and assumed
> to be all different.
> This works fine with regular tables; values exceeding that threshold
> don't get detoasted and won't consume excessive memory.
> With foreign tables the situation is different. Even though
> values exceeding WIDTH_THRESHOLD won't be used, the complete
> rows will still be fetched from the foreign table. This can
> easily exhaust maintenance_work_mem.
I'm fairly skeptical that this is a real problem, and would prefer not
to complicate wrappers until we see some evidence from the field that
it's worth worrying about. The WIDTH_THRESHOLD logic was designed a
dozen years ago when common settings for work_mem were a lot smaller
than today. Moreover, to my mind it's always been about avoiding
detoasting operations as much as saving memory, and we don't have
anything equivalent to that consideration in foreign data wrappers.
regards, tom lane