Re: Does RelCache/SysCache shrink except when relations are deleted?

From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: MauMau <maumau307(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Does RelCache/SysCache shrink except when relations are deleted?
Date: 2011-09-28 20:29:26
Message-ID: CAHyXU0yccHzBmWJsXJM257zJ9twRWzLJARsEdeL0O-M8QtJjXQ@mail.gmail.com
Lists: pgsql-hackers

2011/9/28 MauMau <maumau307(at)gmail(dot)com>:
> Hello,
>
> Please let me ask you some questions about RelCache/SysCache/CatCache
> design. I know I should post this to pgsql-general, but I decided to post
> here because the content includes design questions.
>
> <<Background>>
> My customer is facing an "out of memory" problem during a batch job. I'd
> like to know the cause and possible solutions. The PostgreSQL version is
> 8.2.7 (32-bit on Linux).
>
> The batch job consists of two steps in a single psql session:
>
> 1. call some PL/pgSQL function (say "somefunc" here)
> 2. VACUUM tables (at this time, maintenance_work_mem=256MB)
>
> Step 2 emitted the following messages in syslog:
>
> ERROR:  out of memory
> DETAIL:  Failed on request of size 268435452.
> STATEMENT:  VACUUM some_table_name
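The failed request size lines up exactly with maintenance_work_mem = 256MB. As a sanity check (my own arithmetic, not from the thread, assuming lazy VACUUM sizes its dead-tuple array as whole 6-byte ItemPointerData entries, as in the PostgreSQL sources of that era):

```python
# VACUUM's lazy phase allocates a dead-tuple array sized from
# maintenance_work_mem: floor(mem / sizeof(ItemPointerData)) entries of
# 6 bytes each (the 6-byte size is assumed from the PostgreSQL sources).
maintenance_work_mem = 256 * 1024 * 1024       # 256MB, per the report
itemptr_size = 6                               # bytes per ItemPointerData
max_tuples = maintenance_work_mem // itemptr_size
print(max_tuples * itemptr_size)               # 268435452, the size in the ERROR
```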
>
> somefunc copies rows from a single table to 100,000 tables (table_1 -
> table_100000) as follows:
>
> [somefunc]
> FOR id IN 1 .. 100000 LOOP
>     -- check whether table_${id} exists by searching pg_class
>     IF EXISTS (SELECT 1 FROM pg_class WHERE relname = 'table_' || id) THEN
>         EXECUTE 'INSERT INTO table_' || id
>              || ' SELECT * FROM some_table WHERE pk = ' || id;
>     ELSE  /* the table does not exist */
>         EXECUTE 'CREATE TABLE table_' || id
>              || ' AS SELECT * FROM some_table WHERE pk = ' || id;
>     END IF;
> END LOOP;
>
> Before somefunc starts, the virtual memory of the backend postgres process
> is 1.6GB, as reported in the "VIRT" column of top. When somefunc completes,
> it has grown to 2.6GB. So VACUUM cannot allocate 256MB because the (32-bit)
> virtual address space is nearly exhausted.
>
> This is all the information I have now. I have asked the customer to
> collect the PostgreSQL server log, so that the memory context statistics
> will be available when "out of memory" occurs, plus the output of "SHOW
> ALL" and a minimal procedure for reproducing the problem. However, I'd
> like to ask for your opinions rather than wait for the problem to recur.
>
>
> <<Question>>
> I'm guessing that CacheMemoryContext might be using a lot of memory,
> because somefunc accesses as many as 100,000 tables. But I don't
> understand the RelCache/SysCache implementation yet.
>
> Q1: When are RelCache/SysCache entries removed from CacheMemoryContext?
> Are they removed only when the corresponding relations are deleted? If so,
> does the current PostgreSQL handle "many tables and indexes" poorly?
>
> Q2: The backend's virtual memory grew by 1GB after somefunc accessed
> 100,000 tables, which works out to about 10KB of local memory per table.
> Is it common for RelCache/SysCache or other control information to use
> this much memory?
> Does the number of attributes in a table strongly affect local memory usage?
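The roughly-10KB-per-table figure in Q2 is simple arithmetic over the observed growth (a back-of-the-envelope estimate, not a measurement of actual cache entry sizes):

```python
# Back-of-the-envelope check of the per-table figure quoted in Q2.
growth_bytes = 1 * 1024**3           # observed VIRT growth: ~1GB
num_tables = 100_000                 # tables touched by somefunc
per_table_kb = growth_bytes / num_tables / 1024
print(round(per_table_kb, 1))        # 10.5 KB of local memory per table
```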
>
> Q3: I think one solution is to run VACUUM in a separate psql session.
> Are there any other solutions you can think of?
>
> Q4: The customer reports one strange thing: if the 100,000 tables already
> exist before somefunc starts (i.e., somefunc only copies records), the
> virtual memory of postgres does not increase.
> Is there any way to explain this behavior?

can we see all of your memory settings plus physical memory? the
solution is probably going to be reducing shared buffers and/or adding
physical memory.

merlin
