From: Thomas Poindessous <thomas(at)poindessous(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #5196: Excessive memory consumption when using csvlog
Date: 2009-11-19 05:59:51
Message-ID: 1e0e09af0911182159g5d87b27rcd3197cd49980beb@mail.gmail.com
Lists: pgsql-bugs
Hi,
For csv output, we have a 750 MB logfile. But on another site, we have
a 1.6 GB logfile and the logger process was using more than 3 GB of
RAM.
Even with our configuration (log collector, silent mode and
csv/stderr), we launched the postgresql daemon like this:
pg_ctl -l ${HOME}/pgsql/logs/postgres.log start
so we have three logfiles:
postgresql.log (always empty)
postgresql-YYYY-MM-DD.csv (big file if set to csvlog)
postgresql-YYYY-MM-DD.log (always empty if set to csvlog)
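For reference, a logging configuration along these lines would produce that file layout (a sketch of plausible settings, not our actual postgresql.conf; the date-only filenames suggest a log_filename like the one below):

```ini
# postgresql.conf (sketch)
logging_collector  = on           # start the separate logger process
log_destination    = 'csvlog'     # produces postgresql-YYYY-MM-DD.csv
log_filename       = 'postgresql-%Y-%m-%d.log'  # .csv replaces .log for csvlog
log_rotation_size  = 10MB         # default rotation size
silent_mode        = on           # detach from the controlling terminal
```

The -l file passed to pg_ctl (postgres.log) only captures output written before the log collector takes over, which is why it stays empty.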
Thanks.
2009/11/19 Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>:
> "Poindessous Thomas" <thomas(at)poindessous(dot)com> writes:
>> we have a weird bug. When using csvlog instead of stderr, the postgres
>> logger process uses a lot of memory. We even had an OOM error with kernel.
>
> I poked at this a bit and noted that if only one of the two possible
> output files is rotated, logfile_rotate() leaks a copy of the other
> file's name. At the default settings this would only amount to one
> filename string for every 10MB of output ... how much log output
> does your test scenario generate?
>
> regards, tom lane
>