| From: | "Rocco Altier" <RoccoA(at)Routescape(dot)com> |
|---|---|
| To: | "Martijn van Oosterhout" <kleptog(at)svana(dot)org>, <pgsql-patches(at)postgresql(dot)org> |
| Cc: | "Simon Riggs" <simon(at)2ndquadrant(dot)com> |
| Subject: | Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling |
| Date: | 2006-05-09 21:16:57 |
| Message-ID: | 6E0907A94904D94B99D7F387E08C4F57010A5E8C@FALCON.INSIGHT |
| Lists: | pgsql-patches |
> - To get this close it needs to get an estimate of the sampling
>   overhead. It does this by a little calibration loop that is run
>   once per backend. If you don't do this, you end up assuming all
>   tuples take the same time as tuples with the overhead, resulting
>   in nodes apparently taking longer than their parent nodes.
>   Incidentally, I measured the overhead to be about 3.6us per tuple
>   per node on my (admittedly slightly old) machine.
Could this be deferred until the first EXPLAIN ANALYZE, so that we
aren't paying the calibration overhead in all backends, even the ones
that won't be explaining anything?
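Something along these lines, perhaps: a minimal sketch of lazy, once-per-backend calibration, where the loop only runs the first time the overhead estimate is actually requested. Names here (get_sampling_overhead and friends) are illustrative, not the identifiers in the patch, and the loop simply times paired gettimeofday() calls the way per-tuple instrumentation would bracket a tuple.

```c
#include <stdbool.h>
#include <stddef.h>
#include <sys/time.h>

/* Cached per-backend estimate of the per-sample timing overhead.
 * Hypothetical names; the real patch may differ. */
static double sampling_overhead_usec = 0.0;
static bool   overhead_calibrated = false;

static double
elapsed_usec(const struct timeval *start, const struct timeval *end)
{
    return (end->tv_sec - start->tv_sec) * 1e6 +
           (double) (end->tv_usec - start->tv_usec);
}

/*
 * Return the estimated cost in microseconds of one start/stop timing
 * pair.  The calibration loop runs only on the first call, so backends
 * that never EXPLAIN ANALYZE never pay for it.
 */
static double
get_sampling_overhead(void)
{
    if (!overhead_calibrated)
    {
        const int       iterations = 10000;
        struct timeval  total_start, total_end, t0, t1;

        gettimeofday(&total_start, NULL);
        for (int i = 0; i < iterations; i++)
        {
            /* Mimic the instrumentation: one timestamp on entry,
             * one on exit, per tuple. */
            gettimeofday(&t0, NULL);
            gettimeofday(&t1, NULL);
            (void) t0;
            (void) t1;
        }
        gettimeofday(&total_end, NULL);

        sampling_overhead_usec =
            elapsed_usec(&total_start, &total_end) / iterations;
        overhead_calibrated = true;
    }
    return sampling_overhead_usec;
}
```

The trade-off is that the first EXPLAIN ANALYZE in a backend absorbs the calibration cost, which seems preferable to charging it to every backend at startup.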
-rocco
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Martijn van Oosterhout | 2006-05-09 21:38:54 | Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling |
| Previous Message | Simon Riggs | 2006-05-09 20:52:14 | Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling |