| From: | Ranier Vilela <ranier(dot)vf(at)gmail(dot)com> |
|---|---|
| To: | Thomas Munro <thomas(dot)munro(at)gmail(dot)com> |
| Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: Parallel Seq Scan vs kernel read ahead |
| Date: | 2020-05-20 23:14:12 |
| Message-ID: | CAEudQApf+Q0kde4MVZJG6piJEezvDOc-g4uVMT2cTc8qzg9spg@mail.gmail.com |
| Lists: | pgsql-hackers |
On Wed, May 20, 2020 at 6:49 PM Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
wrote:
> On Wed, May 20, 2020 at 11:03 PM Ranier Vilela <ranier(dot)vf(at)gmail(dot)com>
> wrote:
> > Time: 47767,916 ms (00:47,768)
> > Time: 32645,448 ms (00:32,645)
>
> Just to make sure kernel caching isn't helping here, maybe try making
> the table 2x or 4x bigger? My test was on a virtual machine with only
> 4GB RAM, so the table couldn't be entirely cached.
>
4x bigger.
Postgres default settings.
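To double-check that, once created, the table really exceeds the machine's RAM
(so the kernel page cache can't hold it), its on-disk size can be compared with
available memory. A minimal sketch, assuming the table name t used below:

select pg_size_pretty(pg_table_size('t'));  -- total on-disk size of t
-- With 800 million int rows this should come out in the tens of gigabytes,
-- well beyond the RAM of a typical test machine.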
postgres=# create table t as select generate_series(1, 800000000)::int i;
SELECT 800000000
postgres=# \timing
Timing is on.
postgres=# set max_parallel_workers_per_gather = 0;
SET
Time: 8,622 ms
postgres=# select count(*) from t;
   count
-----------
 800000000
(1 row)
Time: 227238,445 ms (03:47,238)
postgres=# set max_parallel_workers_per_gather = 1;
SET
Time: 20,975 ms
postgres=# select count(*) from t;
   count
-----------
 800000000
(1 row)
Time: 138027,351 ms (02:18,027)
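For completeness, the plan output can confirm that the faster run actually took
the parallel path. A minimal sketch, assuming the same session and table t:

explain (analyze, verbose) select count(*) from t;
-- With max_parallel_workers_per_gather = 1, the Gather node should report
-- "Workers Planned: 1" / "Workers Launched: 1" above the Parallel Seq Scan on t.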
regards,
Ranier Vilela