From: Björn Wittich <Bjoern_Wittich(at)gmx(dot)de>
To: Szymon Guz <mabewlun(at)gmail(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: query a table with lots of coulmns
Date: 2014-09-19 12:48:03
Message-ID: 541C2603.5060203@gmx.de
Lists: pgsql-performance
Hi Szymon,
yes, I have indexes on both columns (one in each table), which I am using
for the join operation.
Am 19.09.2014 14:04, schrieb Szymon Guz:
>
>
> On 19 September 2014 13:51, Björn Wittich <Bjoern_Wittich(at)gmx(dot)de
> <mailto:Bjoern_Wittich(at)gmx(dot)de>> wrote:
>
> Hi mailing list,
>
> I am relatively new to postgres. I have a table with 500 columns
> and about 40 million rows. I call this a cache table, where one column
> is a unique key (indexed) and the 499 columns (type integer) are some
> values belonging to this key.
>
> Now I have a second (temporary) table (only 2 columns, one being the
> key of my cache table), and I want to do an inner join between my
> temporary table and the large cache table and export all matching
> rows. I found out that the performance increases when I split the
> join into lots of small parts.
> But it seems that the database needs a lot of disk I/O to gather
> all 499 data columns.
> Is there a possibility to tell the database that all these columns
> are always treated as a tuple and that I always want to get the whole
> row? Perhaps the disk organization could then be optimized?
>
> Hi,
> do you have indexes on the columns you use for joins?
>
> Szymon
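For reference, the kind of join described above could be sketched like this (all table and column names here are illustrative, not from the original post):

```sql
-- Hypothetical schema: "cache" holds one indexed key column plus
-- 499 integer columns; "keys_tmp" is the small temporary table
-- holding the keys to look up.
CREATE TEMP TABLE keys_tmp (k bigint PRIMARY KEY);

-- Temporary tables are not covered by autovacuum, so collect
-- statistics manually before joining:
ANALYZE keys_tmp;

-- The straightforward inner join, exporting all matching cache rows:
SELECT c.*
FROM   keys_tmp t
JOIN   cache c ON c.k = t.k;

-- Verify that the planner actually uses the index on cache.k
-- and see where the I/O goes:
EXPLAIN (ANALYZE, BUFFERS)
SELECT c.*
FROM   keys_tmp t
JOIN   cache c ON c.k = t.k;
```

Running ANALYZE on the temporary table gives the planner accurate row estimates, which often influences the join strategy more than splitting the join into small batches by hand.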
Next Message: Pavel Stehule | 2014-09-19 13:32:00 | Re: query a table with lots of coulmns
Previous Message: Szymon Guz | 2014-09-19 12:04:30 | Re: query a table with lots of coulmns