From: Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
To: pgsql-admin <pgsql-admin(at)postgresql(dot)org>
Subject: Re: PostgreSQL historical database
Date: 2024-11-05 18:10:30
Message-ID: CANzqJaBbmzeWoc1kK8S88Cr=exm4AwJuuUdAtdJB2-S-zA81GA@mail.gmail.com
Lists: pgsql-admin
On Tue, Nov 5, 2024 at 12:30 PM Erik Serrano <eserranos(at)gmail(dot)com> wrote:
>
> Dear Sirs,
>
> I'll explain a little about what I need. During the day, records are
> written to the main database; at the end of the day they are consolidated
> (accounting closings) and recorded. To keep the main database from growing
> without bound, it will retain only between 3 months and 1 year of data,
> so the consolidated data must be transferred to another database, where it
> persists over time and can be queried by other areas. (Today this is done
> manually at the end of every day of the year.)
> The goal of the project is therefore to perform this extraction of the
> consolidated data to another database automatically.
>
> I was thinking of doing this with triggers, or with jobs that would let me
> carry out these actions. I also considered replicating only the
> consolidated tables to the new historical database server, but I have not
> yet settled on a method.
>
> That's why I need to know if there is a tool that allows me to create this
> database.
>
psql in a bash script, run in a cron job. I wrote something quite similar
that runs every month to:
1. copy old records into CSV files,
2. move them to S3 buckets,
3. delete the rows,
4. VACUUM and ANALYZE the tables.
If there are FK relationships, you'll have to determine the proper order in
which to delete them, but that's relatively easy and just has to be done
once.
(The only difference is that your Step 2 would be "copy them into the new
database".)
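The monthly job described above, with Step 2 changed to "copy them into the new database", could be sketched roughly as below. The connection strings, table names, and the `closed_at` cutoff column are illustrative assumptions, not from the thread; the script defaults to a dry run that only prints the commands it would execute.

```shell
#!/usr/bin/env bash
# Sketch of a cron-driven archive job. Connection URIs, table names, and
# the closed_at column are illustrative assumptions.
set -eu

# DRY_RUN=1 (the default here) prints each command instead of running it.
run() {
    if [ "${DRY_RUN:-1}" = 1 ]; then echo "WOULD RUN: $1"; else eval "$1"; fi
}

# archive_table <src-conninfo> <dst-conninfo> <table> <predicate>
archive_table() {
    local src=$1 dst=$2 t=$3 pred=$4
    # 1. stream the old rows straight into the historical database
    run "psql '$src' -c \"COPY (SELECT * FROM $t WHERE $pred) TO STDOUT\" | psql '$dst' -c 'COPY $t FROM STDIN'"
    # 2. delete them from the main database
    run "psql '$src' -c \"DELETE FROM $t WHERE $pred\""
    # 3. reclaim space and refresh planner statistics
    run "psql '$src' -c \"VACUUM ANALYZE $t\""
}

# Children before parents, so FK constraints are satisfied; this ordering
# only has to be worked out once.
for t in line_items orders; do
    archive_table "service=prod" "service=hist" "$t" \
                  "closed_at < now() - interval '1 year'"
done
```

Scheduling is then a one-line crontab entry pointing at the script, e.g. nightly after the accounting close.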
Depending on the number of rows per year, table partitioning might also
simplify things. Or it could add complexity and slow things down; you'll
have to test that yourself.
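With range partitioning, retiring a month can become a metadata-only operation instead of a row-by-row DELETE. A minimal sketch, assuming native partitioning on a hypothetical `consolidated` table with a `closed_at` column (all names are illustrative):

```sql
-- Assumed table, partitioned by month on closed_at.
CREATE TABLE consolidated (
    id        bigint      NOT NULL,
    closed_at timestamptz NOT NULL,
    amount    numeric
) PARTITION BY RANGE (closed_at);

CREATE TABLE consolidated_2024_11 PARTITION OF consolidated
    FOR VALUES FROM ('2024-11-01') TO ('2024-12-01');

-- Retiring a month detaches the partition cheaply; the detached table
-- can then be copied to the historical database and dropped.
ALTER TABLE consolidated DETACH PARTITION consolidated_2024_11;
```

Whether this beats plain DELETE plus VACUUM depends on row counts and query patterns, as noted above.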
> I hope this clarifies a little the scope of the new historical database.
>
> Thank you very much in advance
> Regards
>
>
> *Erik R. Serrano Saavedra*
> *Database Administrator*
>
>
>
> On Tue, Nov 5, 2024 at 12:37, Samed YILDIRIM (<samed(at)reddoc(dot)net>)
> wrote:
>
>> Hello Erik,
>>
>> It is not very clear to me what you are looking for, but pg_bitemporal
>> may be the answer. I recommend checking the repository below. If this is
>> not what you want, can you elaborate a little more?
>>
>> https://github.com/hettie-d/pg_bitemporal
>>
>> Best regards.
>> Samed YILDIRIM
>>
>> On Tue, 5 Nov 2024, 17:31 Erik Serrano, <eserranos(at)gmail(dot)com> wrote:
>>
>>> Dear,
>>>
>>> Greetings. I would like to ask whether there is any product, approach
>>> (architecture), or system that would let me create a historical database
>>> from a main transactional database in PostgreSQL.
>>>
>>> I thank you in advance for any contributions that help me to approach
>>> this new project.
>>>
>>> Thank you very much, Guys,
>>> Regards
>>>
>>>
>>>
>>> *Erik R. Serrano Saavedra*
>>> *Database Administrator*
>>>
>>>
>>
--
Death to <Redacted>, and butter sauce.
Don't boil me, I'm still alive.
<Redacted> crustacean!
Next message: Keith Fiske, 2024-11-05 18:15:12, Re: PostgreSQL historical database
Previous message: Erik Serrano, 2024-11-05 17:30:35, Re: PostgreSQL historical database