Re: PostgreSQL historical database

From: Keith Fiske <keith(dot)fiske(at)crunchydata(dot)com>
To: Erik Serrano <eserranos(at)gmail(dot)com>
Cc: Samed YILDIRIM <samed(at)reddoc(dot)net>, Pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org>, pgsql-admin <pgsql-admin(at)postgresql(dot)org>
Subject: Re: PostgreSQL historical database
Date: 2024-11-05 18:15:12
Message-ID: CAODZiv6e4fBROiXjZ8iKEcEMrt=qUeP+vyoc5RmjKpqu8Z_THw@mail.gmail.com
Lists: pgsql-admin

On Tue, Nov 5, 2024 at 12:30 PM Erik Serrano <eserranos(at)gmail(dot)com> wrote:

>
> Dear Sirs,
>
> Let me explain a little about what I need. During the day, records are
> written to the main database, and at the end of the day they are
> consolidated (accounting closings) and stored there as well. To keep the
> main database from growing without limit, it will retain only between 3
> months and 1 year of data. The consolidated data must therefore be
> transferred to another database so that it is preserved over time and can
> be queried by other areas. (Today this is done manually at the end of
> every day of the year.)
> The goal of the project is to perform this extraction of the consolidated
> data to another database automatically.
>
> I was thinking of doing this with some triggers, or with jobs that would
> let me carry out these actions. I also considered replicating only the
> consolidated tables to the new historical database server, but I have not
> yet settled on a method.
>
> That's why I need to know if there is a tool that allows me to create this
> database.
>
> I hope this clarifies a little the scope of the new historical database.
>
> Thank you very much in advance
> Regards
>
>
> *Erik R. Serrano Saavedra*
> * Data Base Administrator*
>
>

I would first recommend looking into partitioning for managing data
retention like this. As Ron says, you'll want to look into the performance
implications, but partitioning is the most efficient way to remove old data
from PostgreSQL and is typically worth the overhead. Otherwise you're
dealing with potentially expensive DELETE operations and the resulting
bloat, versus simply detaching or dropping a table.
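As a rough sketch of the approach above (table and column names here are
hypothetical, and this assumes PostgreSQL 10+ declarative partitioning),
retiring a month of data becomes a metadata-only DDL operation:

```sql
-- Hypothetical ledger table, range-partitioned by closing date
CREATE TABLE daily_closings (
    closing_date date    NOT NULL,
    account_id   bigint  NOT NULL,
    balance      numeric NOT NULL
) PARTITION BY RANGE (closing_date);

CREATE TABLE daily_closings_2024_11 PARTITION OF daily_closings
    FOR VALUES FROM ('2024-11-01') TO ('2024-12-01');

-- Retiring a month needs no row-by-row DELETE and leaves no bloat behind:
ALTER TABLE daily_closings DETACH PARTITION daily_closings_2024_11;
-- ...archive the detached table (e.g. with pg_dump), then:
DROP TABLE daily_closings_2024_11;
```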

From there, you can decide how to process the table to move it to another
database. The simplest method would be to pg_dump it from the old database,
drop it, then restore it to the "archive" database.
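That dump/drop/restore step might look something like the following (the
database and table names are placeholders; connection options depend on
your environment):

```shell
#!/bin/sh
# Dump one detached partition from the main database in custom format...
pg_dump -Fc -t daily_closings_2024_11 -f closings_2024_11.dump maindb

# ...restore it into the archive database...
pg_restore -d archivedb closings_2024_11.dump

# ...and only drop it from the main database once the restore succeeds.
psql -d maindb -c 'DROP TABLE daily_closings_2024_11;'
```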

For time-based partitioning, pg_partman is a commonly used tool. It also
includes features to manage retention, as well as to reliably dump out old
tables to be archived or used elsewhere.

https://github.com/pgpartman/pg_partman
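For reference, a pg_partman setup along these lines might look like the
sketch below (the table name is hypothetical, and the exact create_parent
arguments vary between pg_partman versions, so check the docs for yours):

```sql
CREATE EXTENSION pg_partman;

-- Partition the hypothetical daily_closings table by month
SELECT partman.create_parent(
    p_parent_table := 'public.daily_closings',
    p_control      := 'closing_date',
    p_interval     := '1 month'
);

-- Keep 1 year in the main database; detach rather than drop expired
-- partitions so they can be dumped to the archive database first
UPDATE partman.part_config
   SET retention = '1 year',
       retention_keep_table = true
 WHERE parent_table = 'public.daily_closings';
```

pg_partman's background worker (or a scheduled run of its maintenance
function) then creates new partitions and expires old ones automatically.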

--
Keith Fiske
Senior Database Engineer
Crunchy Data - http://crunchydata.com

Browse pgsql-admin by date

  From Date Subject
Next Message Wasim Devale 2024-11-06 13:05:30 Bytea datatype content to view
Previous Message Ron Johnson 2024-11-05 18:10:30 Re: PostgreSQL historical database