Re: reducing random_page_cost from 4 to 2 to force index scan

From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: reducing random_page_cost from 4 to 2 to force index scan
Date: 2011-05-15 18:08:53
Message-ID: 4DD016B5.2080202@agliodbs.com
Lists: pgsql-performance

Robert,

> All true. I suspect that in practice the difference between random and
> sequential memory page costs is small enough to be ignorable, although
> of course I might be wrong.

This hasn't been my experience, although I have not carefully measured
it. In fact, there's good reason to suppose that, if you were selecting
50% or more of a table, sequential access would still be faster even for
an entirely in-memory table.
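
For what it's worth, this is straightforward to test on a fully cached
table. A minimal sketch of the kind of experiment I mean, with an
entirely hypothetical table and illustrative numbers (not results from
an actual run):

  -- Build a test table small enough to stay in cache. ORDER BY random()
  -- decorrelates heap order from the index, so an index scan really does
  -- visit heap pages in random order.
  CREATE TABLE cache_test AS
    SELECT g AS id, repeat('x', 100) AS padding
    FROM generate_series(1, 1000000) g
    ORDER BY random();
  CREATE INDEX cache_test_id_idx ON cache_test (id);
  VACUUM ANALYZE cache_test;
  SELECT count(*) FROM cache_test;   -- one full pass to warm the cache

  -- Same ~50% selection, first forced through the index (random heap
  -- visits), then through a sequential scan; compare the timings.
  SET enable_seqscan = off;
  SET enable_bitmapscan = off;
  EXPLAIN (ANALYZE, BUFFERS)
    SELECT sum(id) FROM cache_test WHERE id <= 500000;

  SET enable_seqscan = on;
  SET enable_indexscan = off;
  EXPLAIN (ANALYZE, BUFFERS)
    SELECT sum(id) FROM cache_test WHERE id <= 500000;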

As a parallel to our development, Redis used to store all data as linked
lists, making every object lookup effectively a random lookup. They
found that even with a database which is pinned in memory, creating a
data page structure (they call it "ziplists") and supporting sequential
scans was up to 10X faster for large lists.

So I would assume that there is still a coefficient difference between
seeks and scans in memory until proven otherwise.
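
In concrete terms (illustrative values only, not a tuned recommendation),
that would argue for lowering random_page_cost on a mostly cached
database but keeping it above seq_page_cost rather than equal to it:

  -- Defaults are seq_page_cost = 1.0, random_page_cost = 4.0.
  SET seq_page_cost = 1.0;
  SET random_page_cost = 2.0;  -- still > seq_page_cost, even for cached data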

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
