Re: Very large tables

From: Grzegorz Jaśkiewicz <gryzman(at)gmail(dot)com>
To: "William Temperley" <willtemperley(at)gmail(dot)com>
Cc: "Alvaro Herrera" <alvherre(at)commandprompt(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Very large tables
Date: 2008-11-28 16:58:11
Message-ID: 2f4958ff0811280858s370e7a62x94aa26e1a72f47a7@mail.gmail.com
Lists: pgsql-general

2008/11/28 William Temperley <willtemperley(at)gmail(dot)com>

>
> Any more normalized and I'd have 216 billion rows! Add an index and
> I'd have - well, a far bigger table than 432 million rows each
> containing a float array - I think?
>
> Really I'm worried about reducing storage space and network overhead
> - therefore a nicely compressed chunk of binary would be perfect for
> the 500 values - wouldn't it?
>

True, if you don't want to search on the values much, or at all, use
float[]. But otherwise, keep the data in normal tables as it is.
It might be humongous in size, but at the end of the day the prime concern
when designing a db is the speed of your queries.
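
For illustration, the two layouts would look roughly like this (table and
column names made up, just to sketch the idea):

  -- one row per cell, the 500 values packed into an array (~432M rows)
  CREATE TABLE cell_values (
      cell_id  bigint PRIMARY KEY,
      vals     float8[]          -- the 500 values
  );

  -- fully normalized: one row per value (~216 billion rows)
  CREATE TABLE cell_values_norm (
      cell_id  bigint,
      idx      int,              -- position 1..500 within the array
      val      float8,
      PRIMARY KEY (cell_id, idx)
  );

With the array layout you can still pull a single value out with vals[n],
but you can't index the individual values, so any search on them means
scanning whole arrays.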

Still, I wouldn't go too far down the 'compress and stick it in as bytea'
road, because it's quite slippery, even though it might look shiny at first.
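
(For the record, that road means compressing the 500 floats in the
application and storing the blob opaquely, roughly:

  CREATE TABLE cell_blobs (
      cell_id  bigint PRIMARY KEY,
      vals     bytea             -- e.g. zlib-compressed packed floats
  );

Every read then means fetching and decompressing the whole blob in the
application, and the values are invisible to SQL. Note also that Postgres
will already TOAST-compress a large float8[] value behind the scenes, so
you may not even save that much.)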

You can also consider vertical partitioning (separate machines). Honestly, I
would try the different approaches first on a scaled-down data set, focusing
on retrieval/update (well, whatever your applications are going to use it
for).
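
A quick way to compare approaches on the scaled-down set is simply to time
the queries your app will actually run against each layout (using the
hypothetical tables above), e.g.:

  EXPLAIN ANALYZE
  SELECT vals FROM cell_values WHERE cell_id = 12345;

  EXPLAIN ANALYZE
  SELECT val FROM cell_values_norm
  WHERE cell_id = 12345 AND idx BETWEEN 1 AND 500;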

--
GJ
