From: Terry <td3201(at)gmail(dot)com>
To: John R Pierce <pierce(at)hogranch(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: continuous copy/update one table to another
Date: 2010-03-01 04:23:46
Message-ID: 8ee061011002282023l7bcf22c7s3de600e4398315f5@mail.gmail.com
Lists: pgsql-general
On Sun, Feb 28, 2010 at 7:12 PM, John R Pierce <pierce(at)hogranch(dot)com> wrote:
> Terry wrote:
>>
>> One more question. This is a pretty decent-sized table, estimated
>> at 19,038,200 rows. That said, should I see results immediately
>> pouring into the destination table while this is running?
>>
>
> SQL transactions are atomic. You won't see anything in the 'new' table
> until the INSERT finishes committing; then you'll see it all at once.
>
> You will see a fair amount of disk write activity while it's running. 20M
> rows will take a while to run the first time, and will probably use a fair
> amount of memory.
This is working very well. The initial load worked great; it took a
little while, but it has been fine since. I am using this:
INSERT INTO client_logs
SELECT * FROM clients_event_log AS t1
WHERE t1.ev_id > (SELECT max(t.ev_id) FROM client_logs AS t);
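As you said, nothing appeared in the destination until the commit. A quick
way to watch that (just a sketch, using the two tables above) is to run the
insert in one session and poll the count from a second one:

    -- session 1
    BEGIN;
    INSERT INTO client_logs
    SELECT * FROM clients_event_log AS t1
    WHERE t1.ev_id > (SELECT max(t.ev_id) FROM client_logs AS t);

    -- session 2: this count stays unchanged while session 1 is running,
    -- then jumps by the full number of inserted rows all at once
    SELECT count(*) FROM client_logs;

    -- session 1
    COMMIT;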
However, while sorting that out I overlooked another problem: I need to
convert the unix time in the ev_time column to a timestamp. I have the
idea from this little bit, but I'm not sure how to integrate it nicely:
SELECT timestamptz 'epoch' + 1267417261 * interval '1 second';
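Something like this seems like it should work: drop the SELECT * and apply
the conversion to ev_time directly. This is only a sketch; the explicit
column list here is hypothetical, so substitute the real columns of
clients_event_log in order:

    -- hypothetical column list; use the actual columns of the two tables
    INSERT INTO client_logs (ev_id, ev_time, ev_text)
    SELECT t1.ev_id,
           -- convert the unix epoch value to a timestamptz
           timestamptz 'epoch' + t1.ev_time * interval '1 second',
           t1.ev_text
    FROM clients_event_log AS t1
    WHERE t1.ev_id > (SELECT max(t.ev_id) FROM client_logs AS t);

This assumes client_logs.ev_time is declared timestamptz. The built-in
to_timestamp(double precision) does the same conversion, so
to_timestamp(t1.ev_time) should be an equivalent, more readable option.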