On 22/06/12 09:02, Maxim Boguk wrote:
> Hi all,
> Maybe I'm completely wrong, but I always assumed that access to an array
> element in PostgreSQL should take close to constant time.
> But in tests I found that access speed degrades as O(N) of the array size.
> Test case (performed on a large, otherwise idle server with 1GB work_mem to
> ensure I'm working with memory only):
> WITH t AS (SELECT ARRAY(SELECT * FROM generate_series(1,N)) AS _array)
> SELECT count((SELECT _array[i] FROM t)) FROM generate_series(1,10000)
> AS g(i);
> Results for N between 1 and 10,000,000 (using a locally connected psql
> with \timing):
> N: Time:
> 1 5.8ms
> 10 5.8ms
> 100 5.8ms
> 1000 6.7ms
> --until there all reasonable
> 5k 21ms
> 10k 34ms
> 50k 177ms
> 100k 321ms
> 500k 4100ms
> 1M 8100ms
> 2M 22000ms
> 5M 61000ms
> 10M 220000ms = 22ms per single array element access.
> Is that behaviour correct?
> PS: What I'm actually looking for is a tuplestore with fast, constant-time
> access by position, for use with recursive queries and/or PL/pgSQL, but
> without resorting to C programming.
Default column storage is to "compress it, and store in TOAST" with large
values. This is what is causing the slowdown. Try changing the column's
storage to EXTERNAL instead and rerun the test:

ALTER TABLE <tablename> ALTER COLUMN <column name> SET STORAGE EXTERNAL;

The default is EXTENDED, which runs compression on the value; that in turn
makes it impossible to seek to a position in the array without reading and
decompressing everything before it.

Let us know what you get.
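Note that the SET STORAGE advice applies to on-disk tables, while the test
above builds its array inline in a CTE. A variant of the benchmark that
stores the array in a real table, so the storage setting can take effect,
might look like this (a minimal sketch; the table name big_array and its
columns are hypothetical):

```sql
-- Hypothetical table to hold the test array on disk.
CREATE TABLE big_array (id int PRIMARY KEY, _array int[]);

-- EXTERNAL = stored out-of-line in TOAST but uncompressed, so the Nth
-- element can be fetched without decompressing the whole datum.
ALTER TABLE big_array ALTER COLUMN _array SET STORAGE EXTERNAL;

-- Populate with a 1M-element array (adjust N as in the original test).
INSERT INTO big_array
SELECT 1, ARRAY(SELECT * FROM generate_series(1, 1000000));

-- Re-run the per-element access test against the stored column.
SELECT count((SELECT _array[g.i] FROM big_array WHERE id = 1))
FROM generate_series(1, 10000) AS g(i);
```

The storage mode only affects newly written rows, so it must be set before
the INSERT; rewriting an existing table would be needed otherwise.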