enhance the efficiency of migrating particularly large tables

From: David Zhang <david(dot)zhang(at)highgo(dot)ca>
To: Pgsql Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: enhance the efficiency of migrating particularly large tables
Date: 2024-04-08 21:52:13
Message-ID: 480e2d1e-9ed9-41e1-877d-a4c62e736975@highgo.ca
Lists: pgsql-hackers

Hi Postgres hackers,

I'm reaching out to gather comments on improving the efficiency of
migrating particularly large tables in PostgreSQL.

When migrating a particularly large table, users often split it into
multiple segments and use multiple sessions to process the segments in
parallel, aiming to improve throughput. Segmenting a large table is
challenging, however, if it lacks a column suitable for splitting the
work or if the data distribution is uneven. Since the data volume per
block should be relatively balanced as long as vacuum is enabled, the
ctid can be used to segment a large table. I am thinking the entire
process could be outlined as follows:
1) determine the minimum and maximum ctid.
2) calculate the number of data blocks based on the maximum and minimum
ctid.
3) generate one SQL query per segment, such as SELECT * FROM tbl WHERE ctid
>= '(xx,1)' AND ctid < '(xxx,1)' (see the sketch after this list).
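
Here is a rough sketch of how the per-segment queries in step 3 could
be generated; the table name tbl, the segment count of 4, and the use
of pg_class.relpages are placeholders and assumptions for illustration.
relpages is only an estimate maintained by vacuum/analyze, so in
practice the last range should be left open-ended:

    WITH params AS (
        SELECT relpages AS nblocks, 4 AS nsegments
        FROM pg_class
        WHERE relname = 'tbl'
    )
    SELECT format('SELECT * FROM tbl WHERE ctid >= ''(%s,1)'' AND ctid < ''(%s,1)'';',
                  nblocks * i / nsegments,         -- lower block bound of segment i
                  nblocks * (i + 1) / nsegments)   -- upper block bound of segment i
    FROM params, generate_series(0, nsegments - 1) AS i;

Each generated query covers a contiguous block range, and on recent
server versions such ctid range conditions can be executed as a TID
Range Scan, so every session only reads its own blocks.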

However, executing SELECT min(ctid) and max(ctid) performs a Seq Scan,
which can be slow for a large table. Is there a way to retrieve the
minimum and maximum ctid other than through the aggregate functions
min() and max()?
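
For reference, this is the query in question (assuming a server version
whose min()/max() aggregates accept tid); EXPLAIN shows it planned as a
Seq Scan over the whole table:

    EXPLAIN SELECT min(ctid), max(ctid) FROM tbl;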

Since a heap scan returns tuples in block order, retrieving the minimum
ctid should, theoretically, only require scanning forward from the
first block and stopping at the first visible tuple. Similarly,
retrieving the maximum ctid should only require scanning backward from
the last block and stopping at the first tuple found. Here is a code
snippet:

        /* scan the relation for minimum or maximum ctid */
        if (find_max_ctid)
            dir = BackwardScanDirection;
        else
            dir = ForwardScanDirection;

        /* first tuple found in this direction has the target ctid in tuple->t_self */
        while ((tuple = heap_getnext(scan, dir)) != NULL)
        ...

Attached is a simple POC, written by referring to the pgstattuple
extension. Any feedback, suggestions, or alternative solutions from the
community would be greatly appreciated.

Thank you,

David

Attachment Content-Type Size
get_ctid.c text/plain 2.9 KB
