From: | Jonathan Vanasco <postgres(at)2xlp(dot)com> |
---|---|
To: | PostgreSQL mailing lists <pgsql-general(at)postgresql(dot)org> |
Subject: | splitting up tables based on read/write frequency of columns |
Date: | 2015-01-19 21:47:41 |
Message-ID: | E6BFE6DF-C8A4-4886-A72A-1779ADD150A8@2xlp.com |
Lists: | pgsql-general |
This is really a theoretical/anecdotal question, as I'm not yet at a scale where this would be measurable. I want to investigate while this is fresh in my mind...
I recall reading that unless a row has columns that are TOASTed, an `UPDATE` is essentially an `INSERT + DELETE`, with the previous row version marked dead for vacuuming.
A few of my tables have the following characteristics:
- The primary key is referenced by FKEYs from many other tables/columns.
- Many columns (30+) of small data size
- Most columns (90%) see roughly 1 WRITE (UPDATE) per 1000 READS
- Some columns (10%) handle a bit of internal bookkeeping and see roughly 1 WRITE (UPDATE) per 50 READS
Has anyone done testing/benchmarking on the potential efficiency savings of consolidating the frequently updated columns into their own table?
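For concreteness, the split I have in mind would look something like this (all table and column names here are hypothetical, just to illustrate the shape):

```sql
-- Hypothetical current layout: one wide table with many rarely
-- updated columns plus a few frequently updated bookkeeping columns.
CREATE TABLE account (
    id          SERIAL PRIMARY KEY,
    name        TEXT,
    email       TEXT
    -- ... ~30 more small, rarely updated columns ...
);

-- Proposed split: move the hot bookkeeping columns into a narrow
-- side table, kept 1:1 with the parent via the same primary key.
CREATE TABLE account_activity (
    account_id  INTEGER PRIMARY KEY REFERENCES account(id),
    login_count INTEGER NOT NULL DEFAULT 0,
    last_seen   TIMESTAMPTZ
);

-- Frequent updates would then rewrite only the narrow row version:
UPDATE account_activity
   SET login_count = login_count + 1,
       last_seen   = now()
 WHERE account_id = 123;
```

Reads that need both halves would JOIN on the primary key, so the question is whether the smaller row versions written on each UPDATE outweigh the cost of that extra JOIN.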