using pg_basebackup for point in time recovery

From: Pierre Timmermans <ptim007(at)yahoo(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: using pg_basebackup for point in time recovery
Date: 2018-06-19 12:03:58
Message-ID: 202988089.1841507.1529409838414@mail.yahoo.com
Lists: pgsql-general

Hi,

I find the documentation about pg_basebackup misleading: it states that standalone hot backups cannot be used for point-in-time recovery, but I don't get the point. If one has a combination of the nightly pg_basebackup and the archived WALs, then it is perfectly possible to do point-in-time recovery, I assume? (Of course the recovery.conf must be edited manually to set the restore_command and the recovery target time.)

Here is the doc. The sentence I find misleading is "These are backups that cannot be used for point-in-time recovery"; mentioning that they are faster than pg_dump dumps adds to the confusion, since pg_dump dumps cannot be used for PITR either.

Doc: https://www.postgresql.org/docs/current/static/continuous-archiving.html
It is possible to use PostgreSQL's backup facilities to produce standalone hot backups. These are backups that cannot be used for point-in-time recovery, yet are typically much faster to backup and restore than pg_dump dumps. (They are also much larger than pg_dump dumps, so in some cases the speed advantage might be negated.)
As with base backups, the easiest way to produce a standalone hot backup is to use the pg_basebackup tool. If you include the -X parameter when calling it, all the write-ahead log required to use the backup will be included in the backup automatically, and no special action is required to restore the backup.
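For illustration, this is roughly the procedure I have in mind; the directories, the archive location and the recovery target below are only placeholders, and it assumes WAL archiving is already set up with archive_command in postgresql.conf:

    # nightly standalone/base backup (tar format, compressed, with progress)
    pg_basebackup -D /backup/base/nightly -Ft -z -P

    # to recover to a point in time: restore the base backup into an empty
    # data directory, then create recovery.conf along these lines
    # (paths and timestamp are placeholders):
    restore_command = 'cp /backup/wal_archive/%f "%p"'
    recovery_target_time = '2018-06-19 02:00:00'

When the server starts it should replay the archived WAL up to that time, which to me is exactly point-in-time recovery, even though the base backup itself was taken with pg_basebackup.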
Thanks and regards,

Pierre

On Tuesday, June 19, 2018, 1:38:40 PM GMT+2, Ron <ronljohnsonjr(at)gmail(dot)com> wrote:

On 06/15/2018 11:26 AM, Data Ace wrote:


Well, I think my question strayed from my original intent because of my poor understanding and phrasing :(

Actually, I have 1 TB of data and hardware specs sufficient to handle that volume, but the problem is that the workload requires too many join operations and the analysis process is too slow right now.

I've searched and found that a graph model fits network data, such as social data, nicely in terms of query performance.


If your data is hierarchical, then storing it in a network database is perfectly reasonable.  I'm not sure, though, that there are many network databases for Linux.  Raima is the only one I can think of.



Should I change my DB (I mean my DB for analysis), or do I need some other solution or extension?


Thanks


--
Angular momentum makes the world go 'round.
