
Re: Analyzing foreign tables & memory problems

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Albe Laurenz" <laurenz(dot)albe(at)wien(dot)gv(dot)at>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Analyzing foreign tables & memory problems
Date: 2012-04-30 15:23:25
Lists: pgsql-hackers
"Albe Laurenz" <laurenz(dot)albe(at)wien(dot)gv(dot)at> writes:
> Tom Lane wrote:
>> I'm fairly skeptical that this is a real problem, and would prefer not
>> to complicate wrappers until we see some evidence from the field that
>> it's worth worrying about.

> If I have a table with 100000 rows and default_statistics_target
> at 100, then a sample of 30000 rows will be taken.

> If each row contains binary data of 1MB (an image), then the
> data structure returned will use about 30 GB of memory, which
> will probably exceed maintenance_work_mem.

> Or is there a flaw in my reasoning?
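The arithmetic in the quoted message can be sketched as follows. ANALYZE targets 300 × the statistics target sampled rows (capped at the table's actual row count), so a statistics target of 100 yields a 30000-row sample; the numbers below are the assumed figures from the message, not measurements:

```python
MB = 1024 * 1024

def analyze_sample_size(stats_target, table_rows):
    # ANALYZE aims for 300 * statistics_target sampled rows,
    # but can never sample more rows than the table holds.
    return min(300 * stats_target, table_rows)

sample = analyze_sample_size(stats_target=100, table_rows=100_000)
mem_bytes = sample * 1 * MB  # assume each sampled row carries ~1 MB of binary data

print(sample)                   # 30000 rows sampled
print(mem_bytes / (1024 ** 3))  # ~29.3 GiB, i.e. roughly the 30 GB cited
```

With a default maintenance_work_mem of a few hundred MB, holding such a sample in memory would exceed it by two orders of magnitude, which is the concern being raised.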

Only that I don't believe this is a real-world scenario for a foreign
table.  If you have a foreign table in which all, or even many, of the
rows are that wide, its performance is going to suck so badly that
you'll soon look for a different schema design anyway.

I don't want to complicate FDWs for this until it's an actual bottleneck
in real applications, which it may never be, and certainly won't be
until we've gone through a few rounds of performance refinement for
basic operations.

			regards, tom lane

