From: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
To: James(王旭) <wangxu(at)gu360(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: How should I specify work_mem/max_worker_processes if I want to do big queries now and then?
Date: 2019-11-20 08:00:52
Message-ID: 31811a45f443e5dbf08352440d635ee739130ac4.camel@cybertec.at
Lists: pgsql-general
On Wed, 2019-11-20 at 15:56 +0800, James(王旭) wrote:
> I am doing a query to fetch about 10000000 records at one time, but the query seems
> very slow, like "mission impossible".
> I am very confident that these records should fit into my shared_buffers setting (20G),
> and my query runs entirely on my index, which is this big: 19M x 100 partitions. This index
> size can also fit into shared_buffers easily. (Actually, I even made a new, smaller partial
> index and deleted the bigger old index.)
>
> This kind of situation makes me very disappointed. How can I make my queries much faster
> if my data grows beyond 10000000 rows in one partition? I am using pg11.6.
There are no parameters that make queries faster wholesale.
If you need help with a query, please include the table definitions
and the EXPLAIN (ANALYZE, BUFFERS) output for the query.
Including a list of parameters you changed from the default is helpful too.
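For reference, both pieces of requested information can be gathered like this (a minimal sketch; "big_table" and the WHERE clause are placeholders, since the original query was not posted):

```sql
-- Capture the actual execution plan with buffer-access statistics
-- ("big_table" stands in for the real partitioned table):
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_table WHERE created_at >= '2019-11-01';

-- List all parameters changed from their defaults:
SELECT name, setting, source
FROM pg_settings
WHERE source NOT IN ('default', 'override');
```

The second query reads the pg_settings system view, so it shows the values actually in effect for the current session, including any set per-user or per-database.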
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
Next Message: James (王旭), 2019-11-20 08:05:50, Re: How should I specify work_mem/max_worker_processes if I want to do big queries now and then?
Previous Message: James (王旭), 2019-11-20 07:56:05, How should I specify work_mem/max_worker_processes if I want to do big queries now and then?