From: Motog Plus <mplus7535(at)gmail(dot)com>
To: Pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org>
Subject: Seeking Suggestions for Best Practices: Archiving and Migrating Historical Data in PostgreSQL
Date: 2025-05-30 07:51:25
Message-ID: CAL5GnivMgBgRdY9YTLmAQKQa=TQVTRwghiGovK6Q6XxScdGOzg@mail.gmail.com
Lists: pgsql-admin
Hi Team,
We are currently planning a data archival initiative for our production
PostgreSQL databases and would appreciate suggestions or insights from the
community regarding best practices and proven approaches.
**Scenario:**
- We have a few large tables (several hundred million rows) where we want
to archive historical data (e.g., older than 1 year).
- The archived data should be moved to a separate PostgreSQL database (on
the same or a different server).
- Our goals are efficient data movement, minimal downtime, and safe
deletion from the source only after successful archival (a rough sketch of
the pattern we are considering follows this list).
- PostgreSQL version: 15.12
- Both source and target databases are PostgreSQL.
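For concreteness, the batched move-and-delete pattern we are considering looks roughly like the sketch below. It assumes `archive_orders` is a writable `postgres_fdw` foreign table pointing at the archive database; the table, column, and batch-size values are placeholders, not our actual schema:

```sql
-- One batch: move qualifying rows to the archive and delete them from
-- the source within a single local transaction. postgres_fdw ties the
-- remote work to this transaction, so a mid-run failure rolls back
-- both sides.
BEGIN;

WITH moved AS (
    DELETE FROM orders
    WHERE id IN (
        SELECT id
        FROM orders
        WHERE created_at < now() - interval '1 year'
        ORDER BY id
        LIMIT 10000   -- placeholder batch size; would be tuned per workload
    )
    RETURNING *
)
INSERT INTO archive_orders
SELECT * FROM moved;

COMMIT;
```

The idea would be to loop this until no rows qualify, keeping each batch small so locks, WAL volume, and replication lag stay bounded.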
We explored using `COPY TO` and `COPY FROM` with CSV files uploaded to
SharePoint or a similar storage system (sketched below). However, our
infrastructure team raised concerns about the computational load of
processing large CSV files and the potential security implications of the
file transfers.
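For reference, what we prototyped looks roughly like the following, with `orders`, `archive_orders`, and the file name as placeholders. Each `\copy` is a psql meta-command run against the respective database:

```sql
-- On the source database: export rows older than one year to CSV.
\copy (SELECT * FROM orders WHERE created_at < now() - interval '1 year') TO 'orders_archive.csv' WITH (FORMAT csv, HEADER)

-- After transferring the file, on the archive database: load it.
\copy archive_orders FROM 'orders_archive.csv' WITH (FORMAT csv, HEADER)
```

This works, but the intermediate CSV file is exactly what drew the objections noted above.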
We’d like to understand:
- What approaches have worked well for you in practice?
- Are there specific tools or strategies you’d recommend for ongoing
archival?
- Any performance or consistency issues we should watch out for?
Your insights or any relevant documentation/pointers would be immensely
helpful.
Thanks in advance for your guidance!
Best regards,
Ramzy