Question about lazy_space_alloc() / linux over-commit

From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: <pgsql-hackers(at)postgresql(dot)org>
Subject: Question about lazy_space_alloc() / linux over-commit
Date: 2015-02-25 22:06:02
Message-ID: 54EE474A.4040601@BlueTreble.com
Lists: pgsql-hackers

Could the large allocation[2] for the dead tuple array in
lazy_space_alloc() cause problems with Linux OOM? [1] and some other
things I've read indicate that a large mmap counts towards total system
memory, and can even fail outright if overcommit is disabled.

Would it be worth avoiding the full-size allocation when we can?

If we did this, I think we'd allocate reltuples * autovacuum_vacuum_threshold
slots initially and then grow the array as needed.
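
Roughly what I'm picturing, just as a sketch (alloc_dead_tuples is a
made-up field, and the real code would still need the initial sizing and
the existing overflow checks):

/* Hypothetical sketch, not a patch: grow the dead tuple array on demand
 * instead of palloc'ing the full maintenance_work_mem-sized array up
 * front.  alloc_dead_tuples is an invented field tracking the current
 * allocation; max_dead_tuples stays the hard cap from lazy_space_alloc(). */
static void
lazy_record_dead_tuple(LVRelStats *vacrelstats, ItemPointer itemptr)
{
	if (vacrelstats->num_dead_tuples >= vacrelstats->alloc_dead_tuples &&
		vacrelstats->alloc_dead_tuples < vacrelstats->max_dead_tuples)
	{
		/* double the array, but never beyond the existing hard cap */
		int		newalloc = Min(vacrelstats->alloc_dead_tuples * 2,
							   vacrelstats->max_dead_tuples);

		vacrelstats->dead_tuples = (ItemPointer)
			repalloc(vacrelstats->dead_tuples,
					 newalloc * sizeof(ItemPointerData));
		vacrelstats->alloc_dead_tuples = newalloc;
	}

	if (vacrelstats->num_dead_tuples < vacrelstats->alloc_dead_tuples)
	{
		vacrelstats->dead_tuples[vacrelstats->num_dead_tuples] = *itemptr;
		vacrelstats->num_dead_tuples++;
	}
}

The repalloc means copying the array each time it doubles, but that should
be noise compared to the heap scan itself.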

1:
http://stackoverflow.com/questions/9129004/why-does-calling-mmap-with-large-size-not-fail

2:
In lazy_space_alloc() we palloc the dead tuple array to be as large as
maintenance_work_mem allows, the only limit being that we won't make it
larger than the maximum possible number of tuples in the relation.
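
For reference, the sizing logic there is roughly this (paraphrasing
vacuumlazy.c from memory, so details may be slightly off):

/* size the array from maintenance_work_mem ... */
maxtuples = (maintenance_work_mem * 1024L) / sizeof(ItemPointerData);
maxtuples = Min(maxtuples, INT_MAX);
maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));

/* ... but don't allocate more slots than the relation could possibly hold */
if ((BlockNumber) (maxtuples / MaxHeapTuplesPerPage) > relblocks)
	maxtuples = relblocks * MaxHeapTuplesPerPage;

/* and stay sane with a tiny maintenance_work_mem */
maxtuples = Max(maxtuples, MaxHeapTuplesPerPage);

vacrelstats->dead_tuples = (ItemPointer)
	palloc(maxtuples * sizeof(ItemPointerData));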

My understanding is that this doesn't suck only because palloc passes
such a large allocation directly to malloc, which in turn uses mmap, which
won't actually allocate the memory until we access it.
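
A quick stand-alone way to see both behaviors (nothing PostgreSQL-specific
here; just a big anonymous mapping, which is what glibc malloc does for a
request this size):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int
main(void)
{
	size_t	size = 1024L * 1024 * 1024;		/* 1GB */
	char   *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
					 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
	{
		perror("mmap");		/* this is what happens when overcommit says no */
		return 1;
	}
	printf("mapped %zu bytes; nothing is resident yet\n", size);

	/* touching pages is what actually consumes physical memory */
	memset(p, 0, 16 * 1024 * 1024);
	printf("touched 16MB; check VmRSS in /proc/self/status\n");

	munmap(p, size);
	return 0;
}

Under the default vm.overcommit_memory=0 heuristic the mmap succeeds and
RSS only grows as pages are written; with vm.overcommit_memory=2 the mmap
itself can fail once the commit limit is exceeded, which is the failure
mode I'm worried about.
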
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
