
Does RelCache/SysCache shrink except when relations are deleted?

From: "MauMau" <maumau307(at)gmail(dot)com>
To: <pgsql-hackers(at)postgresql(dot)org>
Subject: Does RelCache/SysCache shrink except when relations are deleted?
Date: 2011-09-28 11:52:00
Message-ID: 876694F21A094BA8A8FADE489B926DAA@maumau
Lists: pgsql-hackers

Please let me ask some questions about the RelCache/SysCache/CatCache 
design. I know I could post this to pgsql-general, but I decided to post 
here because the questions concern internals design.

My customer is facing an "out of memory" problem during a batch job, and I'd 
like to identify the cause and possible solutions. The PostgreSQL version is 
8.2.7 (32-bit on

The batch job consists of two steps in a single psql session:

1. call some PL/pgSQL function (say "somefunc" here)
2. VACUUM tables (at this time, maintenance_work_mem=256MB)

Step 2 emitted the following messages in syslog (note that the failed 
request of 268435452 bytes is just under 256MB, i.e., the 
maintenance_work_mem setting):

ERROR:  out of memory
DETAIL:  Failed on request of size 268435452.
STATEMENT:  VACUUM some_table_name

somefunc copies rows from a single table into 100,000 tables (table_1 
through table_100000), in outline:

FOR id IN 1 .. 100000 LOOP
    -- check whether the table "table_${id}" exists by searching pg_class
    IF EXISTS (SELECT 1 FROM pg_class
               WHERE relname = 'table_' || id) THEN
        EXECUTE 'INSERT INTO table_' || id
             || ' SELECT * FROM some_table WHERE pk = ' || id;
    ELSE  -- the table does not exist
        EXECUTE 'CREATE TABLE table_' || id
             || ' AS SELECT * FROM some_table WHERE pk = ' || id;
    END IF;
END LOOP;
Before somefunc starts, the virtual memory of the backend postgres process 
is 1.6GB, as reported in top's VIRT column. When somefunc completes, it has 
grown to 2.6GB. So VACUUM cannot allocate its 256MB of maintenance_work_mem 
because the 32-bit virtual address space is nearly exhausted.
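For reference, the same numbers that top shows can be sampled directly from 
/proc on Linux. This sketch uses the current shell's own pid ($$) as a 
stand-in for the backend's pid, which you would obtain from ps or from 
pg_stat_activity:

```shell
# Sample a process's virtual (VmSize ~ top's VIRT) and resident (VmRSS)
# memory from /proc.  $$ is this shell's pid, standing in for the pid
# of the backend postgres process under investigation.
PID=$$
grep -E '^Vm(Size|RSS)' /proc/$PID/status
```

Sampling this before somefunc, after somefunc, and just before the VACUUM 
would pin down which step the growth occurs in.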

This is all the information I have now. I have asked the customer to collect 
the PostgreSQL server log, so that the memory context statistics dumped when 
"out of memory" occurs can be examined, along with the output of "SHOW ALL" 
and a minimal procedure to reproduce the problem. However, I'd like to ask 
for your opinions rather than wait for the problem to happen again.

I suspect that CacheMemoryContext is consuming most of this memory, because 
somefunc accesses as many as 100,000 tables. But I don't understand the 
RelCache/SysCache implementation yet.

Q1: When are RelCache/SysCache entries removed from CacheMemoryContext?
Are they removed only when the corresponding relations are dropped? If so, 
are workloads with many tables and indexes unfriendly to current PostgreSQL?

Q2: somefunc's backend grew by 1GB of virtual memory after accessing 100,000 
tables, which works out to about 10KB of local memory per table.
Is it common for RelCache/SysCache and other per-relation control 
information to use this much memory?
Does the number of attributes in a table strongly affect local memory usage?

Q3: I think one solution is to run VACUUM in a separate psql session.
Are there any other solutions you can think of?
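For example (a sketch; "mydb" and the table name are placeholders), the 
VACUUM step could be issued from a fresh connection, so the backend running 
it has not accumulated cache entries for the 100,000 tables:

```shell
# Run the VACUUM step in a new psql session instead of the session that
# ran somefunc; the fresh backend starts with an empty relcache, leaving
# address space free for maintenance_work_mem.
# "mydb" and "some_table_name" are placeholder names.
psql -d mydb -c "VACUUM some_table_name;"
```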

Q4: The customer reports one strange thing: if all 100,000 tables already 
exist before somefunc starts (i.e., somefunc only copies records), the 
backend's virtual memory does not increase.
Is there any explanation for this behavior?


