| From: | Julien Rouhaud <rjuju123(at)gmail(dot)com> |
|---|---|
| To: | Andrus <kobruleht2(at)hot(dot)ee> |
| Cc: | pgsql-general <pgsql-general(at)postgresql(dot)org> |
| Subject: | Re: Why there is 30000 rows is sample |
| Date: | 2020-04-04 07:28:51 |
| Message-ID: | 20200404072851.3w4xokdm5mlzkpos@nol |
| Lists: | pgsql-general |
Hi,
On Sat, Apr 04, 2020 at 10:07:51AM +0300, Andrus wrote:
> Hi!
>
> vacuumdb output:
>
> vacuumdb: vacuuming database "mydb"
> INFO: analyzing "public.mytable"
> INFO: "mytable": scanned 2709 of 2709 pages, containing 10834 live rows and
> 0 dead rows; 10834 rows in sample, 10834 estimated total rows
>
> For tables with more than 30000 rows, it shows that there are 30000 rows in the sample.
>
> postgresql.conf does not set default_statistics_target value.
> It contains
>
> #default_statistics_target = 100 # range 1-10000
>
> So I expect that there should be 100 rows in the sample.
> Why does Postgres use 30000, or the number of rows in the table for smaller tables?
>
> Is 30000 some magical value, and how can I control it?
That's because default_statistics_target's unit isn't 1 row but 300 rows: ANALYZE samples 300 rows per unit of the statistics target, so the default of 100 gives 300 * 100 = 30000 rows in the sample. So yeah, the 300 multiplier here is a little bit of a magical value.
You can see more details about this value at
https://github.com/postgres/postgres/blob/master/src/backend/commands/analyze.c#L1723-L1756.
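
As for the "how can I control it" part: since the sample size follows the statistics target, you can shrink or grow it by changing default_statistics_target or by setting a per-column target. A minimal sketch (the column name "mycolumn" is just an example):

```sql
-- Per-column target: statistics for this column are then built from a sample
-- of about 300 * 10 = 3000 rows (ANALYZE sizes the sample from the largest
-- target among the columns it analyzes).
ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 10;

-- Session-wide default, used by columns without an explicit per-column target.
SET default_statistics_target = 10;

ANALYZE mytable;
```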