From: Steve Kehlet <steve(dot)kehlet(at)gmail(dot)com>
To: Forums postgresql <pgsql-general(at)postgresql(dot)org>
Subject: Re: is there a way to firmly cap postgres worker memory consumption?
Date: 2014-04-10 00:39:12
Message-ID: CA+bfosEHQRtpySEc203XUq0bEz3jbgb=c0dRSZo6AOTzOasFSA@mail.gmail.com
Lists: pgsql-general
On Tue, Apr 8, 2014 at 6:32 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Well, here's the problem:
> > ExprContext: 812638208 total in 108 blocks; 183520 free (171 chunks); 812454688 used
>
> So something involved in expression evaluation is eating memory.
> Looking at the query itself, I'd have to bet on this:
>
> > ARRAY_TO_STRING(ARRAY_AGG(MM.ID::CHARACTER VARYING), ',')
>
> My guess is that this aggregation is being done across a lot more rows
> than you were expecting, and the resultant array/string therefore eats
> lots of memory. You might try replacing that with COUNT(*), or even
> better SUM(LENGTH(MM.ID::CHARACTER VARYING)), just to get some definitive
> evidence about what the query is asking to compute.
>
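For anyone following along, Tom's diagnostic substitution can be sketched against a hypothetical table (the table `mm` and its integer `id` column below stand in for the original query's MM.ID, which isn't shown in the thread):

```sql
-- Hypothetical stand-in for the table in the original query.
CREATE TABLE mm (id integer);
INSERT INTO mm SELECT generate_series(1, 100000);

-- The memory-hungry original: the whole array/string is built in memory
-- inside the aggregate's ExprContext before anything is returned.
SELECT array_to_string(array_agg(id::character varying), ',') FROM mm;

-- The suggested diagnostic replacements: they aggregate the same rows in
-- constant memory, and SUM(LENGTH(...)) tells you how large the final
-- string would have been.
SELECT count(*) FROM mm;
SELECT sum(length(id::character varying)) FROM mm;
```

If count(*) comes back far larger than expected, the aggregation is running over more rows than intended, which is what the ExprContext numbers above suggested.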
The devs have moved on and want to stick with their new query, so I'll just
chalk this up to a bad query and let it go. But I'm glad to have learned a
few new tricks, thanks.
> Meanwhile, it seems like ulimit -v would provide the safety valve
> you asked for originally.
Thank you Amador and Tom for the ulimit solution, that's exactly what I
needed.
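For the archives, the ulimit safety valve can be sketched like this; the 4 GiB figure and the idea of putting it in whatever startup script launches the postmaster are illustrative, not from the thread:

```shell
#!/bin/sh
# Illustrative sketch: cap virtual address space in the shell that starts
# postgres. Limits are inherited by child processes, so every backend
# forked from here is capped too; an allocation past the cap fails with
# an out-of-memory error instead of inviting the kernel OOM killer.

# ulimit -v takes a value in KiB: 4194304 KiB = 4 GiB (arbitrary figure).
ulimit -v 4194304

# Confirm the cap now in effect for this shell and its children.
ulimit -v
```

Note that this caps address space per process, not total memory across all backends, so it is a blunt safety valve rather than a real memory budget.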