Michael Akinde <michael(dot)akinde(at)met(dot)no> writes:
> Why does it make a difference to lo_open what the size of the blob is?
> Other than simply opening the blob to get the file descriptor, after
> all, we don't touch the blob itself.
I believe lo_open() fetches the first chunk of the blob's data,
essentially as a way of validating that there is a blob of that OID.
With larger blobs those first chunks would be spread across more pages
of pg_largeobject, thus this process would involve touching more buffers.
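A toy model can make this concrete. The numbers below (2 KB chunks, 8 KB pages, contiguous layout) are simplifying assumptions for illustration, not PostgreSQL's actual storage code: when blobs are small, several first chunks share a pg_largeobject page, but when blobs are large each first chunk lands on its own page, so a batch of lo_open() calls touches far more pages.

```python
# Toy model (not PostgreSQL source): pg_largeobject stores blob data as
# ~2 KB chunks packed into 8 KB heap pages. lo_open() reads a blob's
# first chunk, so the number of distinct pages touched by a batch of
# opens depends on how far apart those first chunks lie.

CHUNKS_PER_PAGE = 4  # 8 KB page / 2 KB chunk, ignoring page headers

def pages_touched_by_opens(num_blobs: int, chunks_per_blob: int) -> int:
    """Count distinct pages holding the first chunk of each blob,
    assuming chunks are laid out contiguously in creation order."""
    pages = set()
    next_chunk = 0
    for _ in range(num_blobs):
        pages.add(next_chunk // CHUNKS_PER_PAGE)  # page of the first chunk
        next_chunk += chunks_per_blob
    return len(pages)

small = pages_touched_by_opens(100_000, 1)    # tiny blobs: 4 share a page
large = pages_touched_by_opens(100_000, 64)   # ~128 KB blobs: 1 page each
print(small, large)  # prints: 25000 100000
```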
> Also, since the blob is opened and closed, why does the process allocate
> new memory to open a new blob, rather than reuse existing memory? If
> this is the intended behavior (as it seems), is there someway we could
> force lo_open to reuse the memory (as this would seem to be a desirable
> behavior, at least to us)?
It will recycle those buffers once it runs out of unused ones. Again,
if you don't like the amount of memory that's going into this, maybe
you need to back off your shared_buffers setting.
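The recycling behavior can be sketched with a toy buffer pool. This is an illustration of the general idea under a simple LRU policy, not PostgreSQL's actual clock-sweep replacement algorithm: buffers are allocated until the pool is full, after which each new page read evicts an old one, so memory plateaus at the configured pool size rather than growing with every lo_open().

```python
from collections import OrderedDict

# Toy LRU buffer pool (an illustration, not PostgreSQL's clock-sweep
# algorithm): page reads allocate buffers until the pool is full,
# after which the least-recently-used buffer is recycled.

class BufferPool:
    def __init__(self, nbuffers: int):
        self.nbuffers = nbuffers
        self.buffers = OrderedDict()  # page id -> page data

    def read_page(self, page_id: int) -> str:
        if page_id in self.buffers:
            self.buffers.move_to_end(page_id)      # cache hit: mark recent
        else:
            if len(self.buffers) >= self.nbuffers:
                self.buffers.popitem(last=False)   # recycle the LRU buffer
            self.buffers[page_id] = f"page-{page_id}"
        return self.buffers[page_id]

pool = BufferPool(nbuffers=8)
for page in range(100):      # touch far more pages than there are buffers
    pool.read_page(page)
print(len(pool.buffers))     # prints: 8 -- memory is capped at pool size
```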
> Firstly, we expect both much bigger retrieval queries in production (1
> million rows, rather than 100 thousand), and we've already seen that
> the database will max out physical memory usage at around 14 GB (shared
> memory usage is still reported at 2 GB) and allocate huge globs of
> virtual memory (~30 GB) for queries of this kind.
To be blunt, I'm not sure that either of us knows what you're measuring
here. Are you counting OS-level disk cache as consumed memory? It's
really not a problem if that's where unused memory is going.
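To illustrate why cache-resident memory can make a box look nearly full: on Linux, naive "used = MemTotal - MemFree" arithmetic counts the OS page cache, which the kernel reclaims on demand. The figures below are made up for the example, not measurements from the reporter's system; the subtraction mirrors what `free` reports on its buffers/cache-adjusted line.

```python
# Hypothetical figures (kB) for a 16 GB box after scanning big tables;
# these are invented for illustration, not real measurements.

def truly_used_kb(mem_total, mem_free, buffers, cached):
    """Memory actually held by applications, excluding reclaimable
    OS buffers and page cache."""
    return mem_total - mem_free - buffers - cached

mem_total = 16_000_000
mem_free  =    500_000
buffers   =    300_000
cached    = 11_000_000   # large page cache left behind by sequential scans

naive = mem_total - mem_free                               # looks alarming
real  = truly_used_kb(mem_total, mem_free, buffers, cached)
print(naive, real)  # prints: 15500000 4200000
```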
regards, tom lane