Re: PostgreSQL Read IOPS limit per connection

From: Mark Hogg <mark(dot)hogg(at)2ndquadrant(dot)com>
To: "Merlin Moncure (via Accelo)" <mmoncure(at)gmail(dot)com>
Cc: Haroldo Kerry <hkerry(at)callix(dot)com(dot)br>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: PostgreSQL Read IOPS limit per connection
Date: 2019-01-10 02:49:00
Message-ID: CAAw7-4bRikBKVPsJiAstqss3q8jtu+kZrAe65EHcFwj7GPKHzQ@mail.gmail.com
Lists: pgsql-performance

Hello,

I am happy to hear that you have received all the help you needed.

Please feel free to contact us any time you need professional assistance in the
future.

You are most welcome!

Regards,

Mark Avinash Hogg

Director of Business Development

2ndQuadrant

+1(647) 770 9821 Cell

www.2ndquadrant.com

mark(dot)hogg(at)2ndquadrant(dot)com

On Wed, 9 Jan 2019 at 19:20, Merlin Moncure (via Accelo) <mmoncure(at)gmail(dot)com>
wrote:

> On Wed, Jan 9, 2019 at 3:52 PM Haroldo Kerry <hkerry(at)callix(dot)com(dot)br> wrote:
>
>> @Justin, @Merlin, @Jeff,
>> Thanks so much for your time and insights. They improved our
>> understanding of the underpinnings of PostgreSQL and allowed us to deal with
>> the issues we were facing.
>> Using parallel query on our PG 9.6 improved query performance a lot - it
>> turns out that many of our real-world queries can benefit from parallel
>> query. We saw about 4x improvements after turning it on, and now we see
>> much higher storage IOPS thanks to the multiple workers.
>> In our tests effective_io_concurrency did not show as large an effect as in
>> the link you sent. I'll have another look at it; maybe we are doing something
>> wrong, or the fact that the SSDs are on the SAN rather than local affects the
>> results.
>> In the process we also learned that changing the default Linux I/O
>> scheduler from CFQ to Deadline worked wonders for our Dell SC2020 SAN
>> storage setup: we used to see latency peaks of 6,000 milliseconds during busy
>> periods (yes, 6 seconds); we now see 80 milliseconds, an almost 100-fold
>> improvement.
>>
>
> The link sent was using a contrived query to force a type of scan that
> benefits from that setting; it's a very situational benefit. It would be
> interesting if you couldn't reproduce it using the same mechanic.
>
> merlin
>
>>

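For anyone finding this thread in the archives, here is a minimal sketch (not
from the thread; "mydb" and "big_table" are placeholder names) of how parallel
query is typically turned on and verified on 9.6, which ships with it disabled:

psql -d mydb <<'SQL'
-- 9.6 defaults to max_parallel_workers_per_gather = 0, i.e. no parallelism.
SET max_parallel_workers_per_gather = 4;
-- A Gather node with "Workers Launched: N" in the output confirms the query
-- actually ran with parallel workers.
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM big_table;
SQL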
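On the effective_io_concurrency question, a minimal sketch with the same
placeholder names (plus a hypothetical "indexed_col"): the setting mainly
affects bitmap heap scans, which prefetch heap pages, so the test query has to
produce one for the setting to matter at all.

psql -d mydb <<'SQL'
SET effective_io_concurrency = 200;  -- larger values are common for SSD-backed storage
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_table WHERE indexed_col BETWEEN 10000 AND 20000;
-- The setting only has an effect when the plan shows a Bitmap Heap Scan node.
SQL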
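For the CFQ-to-Deadline change, a minimal sketch of checking and switching the
scheduler at runtime; "sdb" is a placeholder for whichever block device backs
the SAN LUN:

cat /sys/block/sdb/queue/scheduler            # shows e.g. "noop deadline [cfq]"
echo deadline | sudo tee /sys/block/sdb/queue/scheduler
# On multi-queue (blk-mq) kernels the equivalent scheduler is "mq-deadline".
# To persist across reboots, use the "elevator=deadline" kernel parameter or a
# udev rule, depending on the distribution.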