We've developed a function that reads a huge amount
of data from Postgres and, being recursive, performs
several memory-intensive computations and writes the
results back to two Postgres tables. No memory context
switch is done in our function.
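For context, by "memory context switch" we mean the usual pattern below. This is only a sketch of how a C function can keep its per-iteration allocations in a short-lived child context instead of letting every palloc() accumulate for the whole call; the context names and the process_all() wrapper are placeholders, and the three ALLOCSET_DEFAULT_* size constants are the ones used by the 7.x AllocSetContextCreate signature:

    #include "postgres.h"
    #include "utils/memutils.h"

    static void
    process_all(void)
    {
        MemoryContext scratch;
        MemoryContext old;

        /* child of the current context; freed as a whole below */
        scratch = AllocSetContextCreate(CurrentMemoryContext,
                                        "recursive scratch",
                                        ALLOCSET_DEFAULT_MINSIZE,
                                        ALLOCSET_DEFAULT_INITSIZE,
                                        ALLOCSET_DEFAULT_MAXSIZE);

        old = MemoryContextSwitchTo(scratch);
        /* ... recursive, memory-intensive work using palloc() ... */
        MemoryContextSwitchTo(old);

        MemoryContextDelete(scratch);   /* releases everything at once */
    }

We have not done anything like this; all our allocations simply go into whatever context is current when the function is called.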
Now we have to compare this function with another one
that performs the same computations but reads the
data from a binary file and stores the results in memory.
Both of them work in exactly the same way (we've
simply ported our Postgres module to work in memory),
but we've noticed rather different memory usage in
the two cases. The in-memory function seems to have
much more memory to work with, while the Postgres one
stops with a memory-exhausted error as soon as the
data size grows beyond a certain limit.
As far as we know, this could be due to the limited
size of the TopMemoryContext in which dynamically
loadable modules work.
Is there a way to expand the amount of memory available
to our function?
Thanks a lot!
alice and lorena
pgsql-hackers by date