From: | Robert DiFalco <robert(dot)difalco(at)gmail(dot)com> |
---|---|
To: | "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org> |
Subject: | Number of Columns and Update |
Date: | 2014-12-22 20:53:03 |
Message-ID: | CAAXGW-xXFDWHEBJExt40bWD1+vOo6_UbytCUjEe625GqBXmxUA@mail.gmail.com |
Lists: | pgsql-performance |
This may fall into the category of over-optimization, but I've become
curious.
I have a user table with about 14 columns that are all 1:1 with the user,
so they can't be normalized further.
When I insert a row, all columns need to be set. But when I update, I
sometimes only update one or two columns at a time. Does the number of
columns impact update speed?
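One way to check this empirically (a sketch, assuming a test database with the `users` table; the literal values are placeholders, not from the original post) is to time the narrow update with `EXPLAIN (ANALYZE, BUFFERS)` inside a transaction that is rolled back:

```sql
-- Hypothetical measurement: run in psql against a scratch copy of the
-- data, then roll back so nothing is actually changed.
BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
UPDATE users SET email = 'test@example.com' WHERE id = 1;
ROLLBACK;
```

Comparing the timing and buffer counts of a one-column update against an all-columns update on the same row would show whether the column count matters in practice.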
For example:
UPDATE users SET email = ? WHERE id = ?;
I can easily break this up into logical tables like user_profile,
user_credential, user_contact_info, user_summary, etc., with each table
having only 1-4 columns. But with multiple tables I would often be joining
them to bring back a collection of columns.
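The split described above might look like the following sketch (table and column names beyond those mentioned in the post are illustrative assumptions):

```sql
-- Hypothetical vertical split of the wide users table; each child table
-- shares the user's id as its primary key.
CREATE TABLE user_contact_info (
    user_id bigint PRIMARY KEY REFERENCES users(id),
    email   text,
    phone   text
);

CREATE TABLE user_profile (
    user_id    bigint PRIMARY KEY REFERENCES users(id),
    first_name text,
    last_name  text
);

-- Reading a collection of columns back then requires a join:
SELECT p.first_name, c.email
FROM user_profile p
JOIN user_contact_info c USING (user_id);
```

The trade-off in question is exactly this: narrower rows per table, at the cost of a join whenever columns from more than one group are needed together.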
I know I'm overthinking this, but I'm curious about what the performance
trade-offs are for breaking up a table into smaller, logically grouped tables.
Thanks.