Re: Thanks, naming conventions, and count()

From: Casey Lyon <casey(at)earthcars(dot)com>
To: The Hermit Hacker <scrappy(at)hub(dot)org>
Cc: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Thanks, naming conventions, and count()
Date: 2001-04-30 03:50:04
Message-ID: 3AECE0EC.50801@earthcars.com
Lists: pgsql-hackers

If this isn't incorporated into a utility, it would certainly be a prime
candidate for inclusion in the yet-to-be-written chapter 11 of the PG Admin
Manual, "Database Recovery."

Thanks for your responses, -Casey

The Hermit Hacker wrote:

> On Sun, 29 Apr 2001, Bruce Momjian wrote:
>
>
>>>> Yes, I like that idea, but the problem is that it is hard to update just
>>>> one table in the file. You sort of have to update the entire file each
>>>> time a table changes. That is why I liked symlinks because they are
>>>> per-table, but you are right that the symlink creation could fail
>>>> because the new table file was never created or something, leaving the
>>>> symlink pointing to nothing. Not sure how to address this. Is there a
>>>> way to update a flat file when a single table changes?
>>>
>>> Why not just dump the whole file? That way, if a previous dump failed for
>>> whatever reason, the new dump would correct that omission ...
>>
>> Yes, you can do that, but it is only updated during a dump, right?
>> Makes it hard to use during the day, no?
>>
>>
>>> Then again, why not some sort of 'lsdb' command that looks at where it is
>>> and gives you info as appropriate?
>>
>>
>> I want to do that for oid2name. I had the plan laid out, but never got
>> to it.
>>
>>
>>> if in data/base, then do a connect to template1 using postgres so that you
>>> can dump and parse the raw data from pg_database ... if in a directory,
>>> you should be able to connect to that database in a similar way to grab
>>> the contents of pg_class ...
>>>
>>> no server would need to be running for this to work, and if it was
>>> readonly, it should be workable if a server is running, no?
>>
>> I think parsing the file contents is too hard. The database would have
>> to be running and I would use psql.
>
>
> I don't know, I recovered someone's database using a "raw" connection ...
> wasn't that difficult once I figured out the format *shrug*
>
> the following gets the oid/relname pairs for a database:
>
> echo "select oid,relname from pg_class" | postgres -L -D /usr/local/pgsql/data eceb | egrep "oid|relname"
>
> the output comes back in the following format; then just parse it with a simple perl script:
>
> 1: oid = "163338" (typeid = 26, len = 4, typmod = -1, byval = t)
> 2: relname = "auth_info_uid_key" (typeid = 19, len = 32, typmod = -1, byval = f)
> 1: oid = "163341" (typeid = 26, len = 4, typmod = -1, byval = t)
> 2: relname = "auth_info_id" (typeid = 19, len = 32, typmod = -1, byval = f)
> 1: oid = "56082" (typeid = 26, len = 4, typmod = -1, byval = t)
> 2: relname = "auth_info" (typeid = 19, len = 32, typmod = -1, byval = f)
>
> the above won't work on a live database (I did try that), so best is to test
> for a connection first and use this as a fallback ... but you'd at least
> have a live *and* non-live way of parsing the data *shrug*
