From: Terry <td3201(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: continuous copy/update one table to another
Date: 2010-03-01 05:21:03
Message-ID: 8ee061011002282121j4f4c1327i1f2b29c91e031b8d@mail.gmail.com
Lists: pgsql-general
On Sun, Feb 28, 2010 at 10:23 PM, Terry <td3201(at)gmail(dot)com> wrote:
> On Sun, Feb 28, 2010 at 7:12 PM, John R Pierce <pierce(at)hogranch(dot)com> wrote:
>> Terry wrote:
>>>
>>> One more question. This is a pretty decent sized table. It is
>>> estimated to be 19,038,200 rows. That said, should I see results
>>> immediately pouring into the destination table while this is running?
>>>
>>
>> SQL transactions are atomic. You won't see anything in the 'new' table
>> until the INSERT finishes committing; then you'll see it all at once.
>>
>> You will see a fair amount of disk write activity while it's running. 20M
>> rows will take a while to run the first time, and will probably use a fair
>> amount of memory.
>
> This is working very well. The initial load worked great. It took a
> little while, but it's been fine since then. I am using this:
> INSERT INTO client_logs SELECT * FROM clients_event_log as t1 where
> t1.ev_id > (select max(t.ev_id) from client_logs as t);
>
> However, I got lost in that little problem and overlooked another. I
> need to convert the Unix time in the ev_time column to a timestamp. I
> have the general idea from this snippet, but I'm not sure how to
> integrate it nicely:
> select timestamptz 'epoch' + 1267417261 * interval '1 second'
>
I love overcomplicating things:
SELECT *,to_timestamp(ev_time) FROM clients_event_log as t1 where
t1.ev_id > (select max(t.ev_id) from client_logs as t)
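
For completeness, a minimal sketch of folding that into the incremental
INSERT. It assumes client_logs has one extra timestamptz column appended
after the columns copied from clients_event_log (called ev_timestamp here,
a hypothetical name) to receive the converted value:

    -- assumes client_logs matches clients_event_log plus a trailing
    -- timestamptz column (ev_timestamp is a hypothetical name) that
    -- receives the converted epoch value
    INSERT INTO client_logs
    SELECT t1.*, to_timestamp(t1.ev_time)
    FROM clients_event_log AS t1
    WHERE t1.ev_id > (SELECT max(t.ev_id) FROM client_logs AS t);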