> I have gone through the following stuff
> 1) previous emails on the patch
> 2) http://wiki.postgresql.org/wiki/In-place_upgrade
> 3) http://www.pgcon.org/2008/schedule/attachments/57_pg_upgrade_2008.pdf
> 4) http://wiki.postgresql.org/wiki/In-place_upgrade:Storage
> Here is what I have understood so far (correct me if I am wrong):
> The on-disk representation of data has changed from version to version
> over the years. For some reason (performance, perhaps) the newer
> versions of pg were not backwards compatible, meaning that a newer
> version could not read data written by an older version if the on-disk
> representation had changed in between.
> The end user would be required to port data stored with an older
> version to the newer version's format using an offline export/import.
> This project aims to support upgrades from older to newer versions on
> the fly.
> The on-disk representation is not the only change that the system
> should accommodate; it should also accommodate catalog changes, conf
> file changes, etc.
That is correct.
> Of the available design choices, I think you have chosen to go with
> on-line data conversion, meaning that pg would now be aware of all the
> previous page layouts and, based on a switch on the page version, would
> handle each page layout. This will only be done to read old data; newer
> data will be written in the newer format.
> I am supposed to test the patch, and for that I have downloaded pg
> versions 7.4, 8.0, 8.1, 8.2 and 8.3.
> I plan to create a data directory using each of these versions and then
> try to read it using 8.4 with your patch applied.
That does not work. The patch is only a prototype. It contains a framework for
implementing old page layout versions, and it contains a partial implementation
of layout version 3. The main purpose of this prototype is to make a decision
about whether this approach is acceptable or not, or whether some part of it is
acceptable; for example, it contains a useful page API rework and
implementation, which is (in my opinion) useful in its own right.
> What database objects should I create in the test database, should I
> just create objects of my choice?
> Does sizes (both length and breadth) of tables matter?
These tests do not make sense at this moment. I already have a test script
(created by Nidhi) for the catalog upgrade. However, it currently uses Sun's
internal framework. I will modify it and release it.
> Do I have to perform performance tests too?
Yes, please. My colleague tested it and got a 5% performance drop, but that was
not the complete version. I tested the full patch on Friday, and the result
surprised me: I got slightly better throughput (about 0.5%) with the patch. I'm
going to retest it, but it would be good to get results from others as well.
> On Fri, 2008-09-19 at 14:28 +0200, Zdenek Kotala wrote:
>> Abbas wrote:
>>> Even with that, a hunk failed for bufpage.c, but I applied that part
>>> manually to move on.
>>> On Thu, 2008-09-18 at 12:17 +0200, Zdenek Kotala wrote:
>>>> Abbas wrote:
>>>>> I downloaded the latest postgresql source code from
>>>>> git clone git://git.postgresql.org/git/postgresql.git
>>>>> and tried to apply the patch.
>>>>> It does not apply cleanly; see the failures in the attached file.
It clashes with the hash index patch which was committed four days ago. Try to
use a slightly older revision from git (without the hash index modification).
Zdenek Kotala Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql