## Re: explain analyze rows=%.0f

From: Ron Mayer
To: Euler Taveira de Oliveira, Robert Haas, "pgsql-hackers(at)postgresql(dot)org"
Subject: Re: explain analyze rows=%.0f
Date: 2009-06-02 03:30:49
Message-ID: 4A249CE9.6050708@cheapcomplexdevices.com
```Euler Taveira de Oliveira wrote:
> Robert Haas escreveu:
>> ...EXPLAIN ANALYZE reports the number of rows as an integer...  Any
>> chance we could reconsider this decision?  I often find myself wanting
>> to know the value that is here called ntuples, but rounding
>> ntuples/nloops off to the nearest integer loses too much precision.
>>
> Don't you think it is too strange having, for example, 6.67 rows? It would
> confuse users and programs that parse the EXPLAIN output. However, I wouldn't object

I don't think it's that confusing. If it says "0.1 rows", I imagine most
people would infer that this means "typically 0, but sometimes 1 or a few" rows.

What I'd find strange about "6.67 rows" in your example is more that, on
the estimated-rows side, it seems to imply an unrealistically precise estimate,
in the same way that "667 rows" would seem unrealistically precise to me.
Maybe rounding to 2 significant digits would reduce confusion?

```
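To make the trade-off concrete: EXPLAIN ANALYZE divides the total tuple count by the loop count, so the per-loop average is often fractional, and `%.0f` rounds that away. The sketch below is not PostgreSQL source code; it is a minimal illustration, with hypothetical helper names, of the averaging and of the 2-significant-digit rounding Ron suggests.

```python
from math import floor, log10

def rows_per_loop(ntuples, nloops):
    """Per-loop average row count, as EXPLAIN ANALYZE computes it."""
    return ntuples / nloops

def round_sig(x, digits=2):
    """Round x to `digits` significant figures (hypothetical helper,
    illustrating the '2 significant digits' suggestion from the thread)."""
    if x == 0:
        return 0.0
    return round(x, digits - 1 - floor(log10(abs(x))))

avg = rows_per_loop(20, 3)          # 20 tuples over 3 loops = 6.666...
print(f"rows={avg:.0f}")            # current %.0f behavior: rows=7
print(f"rows={round_sig(avg)}")     # 2 significant digits: rows=6.7
print(f"rows={round_sig(0.1)}")     # small averages survive: rows=0.1
```

With `%.0f`, an average of 0.1 rows per loop would print as `rows=0`, hiding the fact that the node ever produced tuples; the significant-digit form keeps that information without implying spurious precision.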
