
patch review : Add ability to constrain backend temporary file space

From: Cédric Villemain <cedric(dot)villemain(dot)debian(at)gmail(dot)com>
To: Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: patch review : Add ability to constrain backend temporary file space
Date: 2011-05-31 21:24:48
Message-ID: BANLkTi=-MTigvB+q6AxXH6-hOgGVkwu2Tw@mail.gmail.com
Lists: pgsql-hackers
Hello,

Here is a partial review of your patch; better that, I hope, than
letting it sit idle in the commitfest queue.

Submission review
=================

    * The patch is not in context diff format.
    * The patch applies, but contains some extra whitespace.
    * Documentation is present, but says nothing explicit about
temporary tables; it may be worth adding that this setting will not
limit temporary table size.
    * No test is provided. One is expected, to check that the feature
works.

Code review
=========

* In fd.c, I think that "temporary_files_size -=
(double)vfdP->fileSize;" should be done later in the function, once we
have successfully unlinked the file, not before.

* I am not sure whether it is better to add a fileSize field as you
did, or to call RelationGetNumberOfBlocks() when the file is about to
be truncated or unlinked; in that case the seekPos should be enough to
increase the global counter.

* For temporary_files_size, I think it is better to count a number of
pages, à la PostgreSQL, rather than a size in kilobytes.

* For max_temp_files_size, I would prefer an approach like the
shared_buffers GUC: you can specify pages, or kB, MB, ...
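Concretely, the shared_buffers-style behaviour comes from tagging the
GUC with a GUC_UNIT_* flag in its guc.c table entry, so that values
like '1900kB' or '2MB' are parsed automatically. A sketch of what such
an entry could look like; the layout follows the existing int GUCs,
but the group, description, and bounds here are illustrative, not the
patch's actual code:

```c
/* Sketch of a guc.c ConfigureNamesInt entry; illustrative only. */
{
    {"max_temp_files_size", PGC_USERSET, RESOURCES_DISK,
        gettext_noop("Limits the total size of temporary files per backend."),
        NULL,
        GUC_UNIT_KB         /* lets users write 1900, '1900kB', '2MB', ... */
    },
    &max_temp_files_size,
    -1, -1, INT_MAX,        /* -1 would disable the limit */
    NULL, NULL, NULL
},
```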


Simple feature test
===================

Either EXPLAIN (BUFFERS) is wrong or the patch is wrong:
cedric=# explain (analyze,buffers) select * from foo  order by 1 desc ;
                                                   QUERY PLAN
-----------------------------------------------------------------------------------------------------------------
 Sort  (cost=10260.02..10495.82 rows=94320 width=4) (actual
time=364.373..518.940 rows=100000 loops=1)
   Sort Key: generate_series
   Sort Method: external merge  Disk: 1352kB
   Buffers: local hit=393, temp read=249 written=249
   ->  Seq Scan on foo  (cost=0.00..1336.20 rows=94320 width=4)
(actual time=0.025..138.754 rows=100000 loops=1)
         Buffers: local hit=393
 Total runtime: 642.874 ms
(7 rows)

cedric=# set max_temp_files_size to 1900;
SET
cedric=# explain (analyze,buffers) select * from foo  order by 1 desc ;
ERROR:  aborting due to exceeding max temp files size
STATEMENT:  explain (analyze,buffers) select * from foo  order by 1 desc ;
ERROR:  aborting due to exceeding max temp files size

Do you have a testing method I can use to track temporary file usage
without EXPLAIN (ANALYZE, BUFFERS), before resorting to low-level
monitoring?

Architecture review
===================

max_temp_files_size limits the total space used per backend. Based on
how work_mem works, I would expect something like "work_disk" to limit
each file, and maybe a backend_work_disk (and yes, maybe a
backend_work_mem?!) per backend.
So I propose renaming the current GUC to something like backend_work_disk.

The patch is small and easy to read.
I like the idea, and it sounds useful.

-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support
