5.9. Partitioning

PostgreSQL supports basic table partitioning. This section describes why and how you can implement partitioning as part of your database design.

5.9.1. Overview

Partitioning refers to splitting what is logically one large table into smaller physical pieces. Partitioning can provide several benefits:

  • Query performance can be improved dramatically for certain kinds of queries.

  • Update performance can be improved too, since each piece of the table has indexes smaller than an index on the entire data set would be. When an index no longer fits easily in memory, both read and write operations on the index take progressively more disk accesses.

  • Bulk deletes may be accomplished by simply removing one of the partitions, if that requirement is planned into the partitioning design. DROP TABLE is far faster than a bulk DELETE, to say nothing of the ensuing VACUUM overhead.

  • Seldom-used data can be migrated to cheaper and slower storage media.

The benefits will normally be worthwhile only when a table would otherwise be very large. The exact point at which a table will benefit from partitioning depends on the application, although a rule of thumb is that the size of the table should exceed the physical memory of the database server.

Currently, PostgreSQL supports partitioning via table inheritance. Each partition must be created as a child table of a single parent table. The parent table itself is normally empty; it exists just to represent the entire data set. You should be familiar with inheritance (see Section 5.8) before attempting to implement partitioning.

The following forms of partitioning can be implemented in PostgreSQL:

Range Partitioning

The table is partitioned into "ranges" defined by a key column or set of columns, with no overlap between the ranges of values assigned to different partitions. For example, one might partition by date ranges, or by ranges of identifiers for particular business objects.

List Partitioning

The table is partitioned by explicitly listing which key values appear in each partition.

Hash partitioning is not currently supported.

5.9.2. Implementing Partitioning

To set up a partitioned table, do the following:

  1. Create the "master" table, from which all of the partitions will inherit.

    This table will contain no data. Do not define any check constraints on this table, unless you intend them to be applied equally to all partitions. There is no point in defining any indexes or unique constraints on it, either.

  2. Create several "child" tables that each inherit from the master table. Normally, these tables will not add any columns to the set inherited from the master.

    We will refer to the child tables as partitions, though they are in every way normal PostgreSQL tables.

  3. Add table constraints to the partition tables to define the allowed key values in each partition.

    Typical examples would be:

    CHECK ( x = 1 )
    CHECK ( county IN ( 'Oxfordshire', 'Buckinghamshire', 'Warwickshire' ))
    CHECK ( outletID >= 100 AND outletID < 200 )
    

    Ensure that the constraints guarantee that there is no overlap between the key values permitted in different partitions. A common mistake is to set up range constraints like this:

    CHECK ( outletID BETWEEN 100 AND 200 )
    CHECK ( outletID BETWEEN 200 AND 300 )
    

    This is wrong since it is not clear which partition the key value 200 belongs in.

    Note that there is no difference in syntax between range and list partitioning; those terms are descriptive only.

  4. For each partition, create an index on the key column(s), as well as any other indexes you might want. (The key index is not strictly necessary, but in most scenarios it is helpful. If you intend the key values to be unique then you should always create a unique or primary-key constraint for each partition.)

  5. Optionally, define a rule or trigger to redirect modifications of the master table to the appropriate partition.

  6. Ensure that the constraint_exclusion configuration parameter is enabled in postgresql.conf. Without this, queries will not be optimized as desired.
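
    For example, the parameter can be enabled cluster-wide with a line like this in postgresql.conf (shown only as a sketch; nothing else in the file needs to change):

    constraint_exclusion = on

    It can also be turned on for the current session with SET constraint_exclusion = on, as is done in the examples in Section 5.9.3.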

For example, suppose we are constructing a database for a large ice cream company. The company measures peak temperatures every day as well as ice cream sales in each region. Conceptually, we want a table like this:

CREATE TABLE measurement (
    city_id         int not null,
    logdate         date not null,
    peaktemp        int,
    unitsales       int
);

We know that most queries will access just the last week's, month's or quarter's data, since the main use of this table will be to prepare online reports for management. To reduce the amount of old data that needs to be stored, we decide to keep only the most recent 3 years' worth of data. At the beginning of each month we will remove the oldest month's data.

In this situation we can use partitioning to help us meet all of our different requirements for the measurement table. Following the steps outlined above, partitioning can be set up as follows:

  1. The master table is the measurement table, declared exactly as above.

  2. Next we create one partition for each active month:

    CREATE TABLE measurement_yy04mm02 ( ) INHERITS (measurement);
    CREATE TABLE measurement_yy04mm03 ( ) INHERITS (measurement);
    ...
    CREATE TABLE measurement_yy05mm11 ( ) INHERITS (measurement);
    CREATE TABLE measurement_yy05mm12 ( ) INHERITS (measurement);
    CREATE TABLE measurement_yy06mm01 ( ) INHERITS (measurement);
    

    Each of the partitions is a complete table in its own right, but it inherits its definition from the measurement table.

    This solves one of our problems: deleting old data. Each month, all we will need to do is perform a DROP TABLE on the oldest child table and create a new child table for the new month's data (see the sketch just after this list).

  3. We must add non-overlapping table constraints, so that our table creation script becomes:

    CREATE TABLE measurement_yy04mm02 (
        CHECK ( logdate >= DATE '2004-02-01' AND logdate < DATE '2004-03-01' )
    ) INHERITS (measurement);
    CREATE TABLE measurement_yy04mm03 (
        CHECK ( logdate >= DATE '2004-03-01' AND logdate < DATE '2004-04-01' )
    ) INHERITS (measurement);
    ...
    CREATE TABLE measurement_yy05mm11 (
        CHECK ( logdate >= DATE '2005-11-01' AND logdate < DATE '2005-12-01' )
    ) INHERITS (measurement);
    CREATE TABLE measurement_yy05mm12 (
        CHECK ( logdate >= DATE '2005-12-01' AND logdate < DATE '2006-01-01' )
    ) INHERITS (measurement);
    CREATE TABLE measurement_yy06mm01 (
        CHECK ( logdate >= DATE '2006-01-01' AND logdate < DATE '2006-02-01' )
    ) INHERITS (measurement);
    
  4. We probably need indexes on the key columns too:

    CREATE INDEX measurement_yy04mm02_logdate ON measurement_yy04mm02 (logdate);
    CREATE INDEX measurement_yy04mm03_logdate ON measurement_yy04mm03 (logdate);
    ...
    CREATE INDEX measurement_yy05mm11_logdate ON measurement_yy05mm11 (logdate);
    CREATE INDEX measurement_yy05mm12_logdate ON measurement_yy05mm12 (logdate);
    CREATE INDEX measurement_yy06mm01_logdate ON measurement_yy06mm01 (logdate);
    

    We choose not to add further indexes at this time.

  5. If data will be added only to the latest partition, we can set up a very simple rule to insert data. We must redefine the rule each month so that it always points to the current partition.

    CREATE OR REPLACE RULE measurement_current_partition AS
    ON INSERT TO measurement
    DO INSTEAD
        INSERT INTO measurement_yy06mm01 VALUES ( NEW.city_id,
                                                  NEW.logdate,
                                                  NEW.peaktemp,
                                                  NEW.unitsales );
    

    We might want to insert data and have the server automatically locate the partition into which the row should be added. We could do this with a more complex set of rules as shown below.

    CREATE RULE measurement_insert_yy04mm02 AS
    ON INSERT TO measurement WHERE
        ( logdate >= DATE '2004-02-01' AND logdate < DATE '2004-03-01' )
    DO INSTEAD
        INSERT INTO measurement_yy04mm02 VALUES ( NEW.city_id,
                                                  NEW.logdate,
                                                  NEW.peaktemp,
                                                  NEW.unitsales );
    ...
    CREATE RULE measurement_insert_yy05mm12 AS
    ON INSERT TO measurement WHERE
        ( logdate >= DATE '2005-12-01' AND logdate < DATE '2006-01-01' )
    DO INSTEAD
        INSERT INTO measurement_yy05mm12 VALUES ( NEW.city_id,
                                                  NEW.logdate,
                                                  NEW.peaktemp,
                                                  NEW.unitsales );
    CREATE RULE measurement_insert_yy06mm01 AS
    ON INSERT TO measurement WHERE
        ( logdate >= DATE '2006-01-01' AND logdate < DATE '2006-02-01' )
    DO INSTEAD
        INSERT INTO measurement_yy06mm01 VALUES ( NEW.city_id,
                                                  NEW.logdate,
                                                  NEW.peaktemp,
                                                  NEW.unitsales );
    

    Note that the WHERE clause in each rule exactly matches the CHECK constraint for its partition.
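
The monthly maintenance mentioned in step 2 then comes down to dropping the oldest partition and creating the next one. As a sketch only (the table name measurement_yy06mm02 and its bounds are illustrative, assuming February 2006 is the next month to add):

DROP TABLE measurement_yy04mm02;

CREATE TABLE measurement_yy06mm02 (
    CHECK ( logdate >= DATE '2006-02-01' AND logdate < DATE '2006-03-01' )
) INHERITS (measurement);
CREATE INDEX measurement_yy06mm02_logdate ON measurement_yy06mm02 (logdate);

The insert rule from step 5 (or the rule set, if the more complex form is used) must also be updated to cover the new month.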

As we can see, a complex partitioning scheme could require a substantial amount of DDL. In the above example we would be creating a new partition each month, so it may be wise to write a script that generates the required DDL automatically.
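
One possible shape for such a script, sketched here as a PL/pgSQL helper (the function name create_measurement_partition and its layout are illustrative, not part of the example above, and PL/pgSQL must be installed in the database):

CREATE OR REPLACE FUNCTION create_measurement_partition(first_day date)
RETURNS void AS $$
DECLARE
    suffix     text := 'yy' || to_char(first_day, 'YY') || 'mm' || to_char(first_day, 'MM');
    range_from text := to_char(first_day, 'YYYY-MM-DD');
    range_to   text := to_char(first_day + interval '1 month', 'YYYY-MM-DD');
BEGIN
    -- Create the child table with its non-overlapping CHECK constraint ...
    EXECUTE 'CREATE TABLE measurement_' || suffix
         || ' ( CHECK ( logdate >= DATE ''' || range_from
         || ''' AND logdate < DATE ''' || range_to
         || ''' ) ) INHERITS (measurement)';
    -- ... and the index on the partition key.
    EXECUTE 'CREATE INDEX measurement_' || suffix || '_logdate'
         || ' ON measurement_' || suffix || ' (logdate)';
END;
$$ LANGUAGE plpgsql;

A call such as SELECT create_measurement_partition(DATE '2006-02-01'); then creates the partition and its index in one step; the insert rule still has to be adjusted separately.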

The following caveats apply:

  • There is currently no way to verify that all of the CHECK constraints are mutually exclusive. Care is required by the database designer.

  • There is currently no simple way to specify that rows must not be inserted into the master table. A CHECK (false) constraint on the master table would be inherited by all child tables, so that cannot be used for this purpose. One possibility is to set up an ON INSERT trigger on the master table that always raises an error. (Alternatively, such a trigger could be used to redirect the data into the proper child table, instead of using a set of rules as suggested above.)
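
For instance, such a trigger could both redirect rows and reject anything that does not match a partition. A minimal sketch (only the newest partition is handled; the function and trigger names are illustrative, and PL/pgSQL must be installed in the database):

CREATE OR REPLACE FUNCTION measurement_insert_trigger()
RETURNS trigger AS $$
BEGIN
    IF NEW.logdate >= DATE '2006-01-01' AND NEW.logdate < DATE '2006-02-01' THEN
        -- redirect the row into the matching partition
        INSERT INTO measurement_yy06mm01 VALUES ( NEW.city_id,
                                                  NEW.logdate,
                                                  NEW.peaktemp,
                                                  NEW.unitsales );
    ELSE
        RAISE EXCEPTION 'logdate % does not match any partition', NEW.logdate;
    END IF;
    RETURN NULL;    -- prevent the row from being stored in the master table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_measurement_trigger
    BEFORE INSERT ON measurement
    FOR EACH ROW EXECUTE PROCEDURE measurement_insert_trigger();

Like the rules shown earlier, the IF test has to be kept in step with the current set of partitions.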

Partitioning can also be arranged using a UNION ALL view:

CREATE VIEW measurement AS
          SELECT * FROM measurement_yy04mm02
UNION ALL SELECT * FROM measurement_yy04mm03
...
UNION ALL SELECT * FROM measurement_yy05mm11
UNION ALL SELECT * FROM measurement_yy05mm12
UNION ALL SELECT * FROM measurement_yy06mm01;

However, constraint exclusion is currently not supported for partitioned tables defined in this manner. Also, the need to recreate the view adds an extra step to adding and dropping individual partitions of the dataset.

5.9.3. Partitioning and Constraint Exclusion

Constraint exclusion is a query optimization technique that improves performance for partitioned tables defined in the fashion described above. As an example:

SET constraint_exclusion = on;
SELECT count(*) FROM measurement WHERE logdate >= DATE '2006-01-01';

Without constraint exclusion, the above query would scan each of the partitions of the measurement table. With constraint exclusion enabled, the planner will examine the constraints of each partition and try to prove that the partition need not be scanned because it could not contain any rows meeting the query's WHERE clause. When the planner can prove this, it excludes the partition from the query plan.

You can use the EXPLAIN command to show the difference between a plan with constraint_exclusion on and a plan with it off. A typical default plan for this type of table setup is:

SET constraint_exclusion = off;
EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2006-01-01';

                                          QUERY PLAN
-----------------------------------------------------------------------------------------------
 Aggregate  (cost=158.66..158.68 rows=1 width=0)
   ->  Append  (cost=0.00..151.88 rows=2715 width=0)
         ->  Seq Scan on measurement  (cost=0.00..30.38 rows=543 width=0)
               Filter: (logdate >= '2006-01-01'::date)
         ->  Seq Scan on measurement_yy04mm02 measurement  (cost=0.00..30.38 rows=543 width=0)
               Filter: (logdate >= '2006-01-01'::date)
         ->  Seq Scan on measurement_yy04mm03 measurement  (cost=0.00..30.38 rows=543 width=0)
               Filter: (logdate >= '2006-01-01'::date)
...
         ->  Seq Scan on measurement_yy05mm12 measurement  (cost=0.00..30.38 rows=543 width=0)
               Filter: (logdate >= '2006-01-01'::date)
         ->  Seq Scan on measurement_yy06mm01 measurement  (cost=0.00..30.38 rows=543 width=0)
               Filter: (logdate >= '2006-01-01'::date)

Some or all of the partitions might use index scans instead of full-table sequential scans, but the point here is that there is no need to scan the older partitions at all to answer this query. When we enable constraint exclusion, we get a significantly reduced plan that will deliver the same answer:

SET constraint_exclusion = on;
EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2006-01-01';
                                          QUERY PLAN
-----------------------------------------------------------------------------------------------
 Aggregate  (cost=63.47..63.48 rows=1 width=0)
   ->  Append  (cost=0.00..60.75 rows=1086 width=0)
         ->  Seq Scan on measurement  (cost=0.00..30.38 rows=543 width=0)
               Filter: (logdate >= '2006-01-01'::date)
         ->  Seq Scan on measurement_yy06mm01 measurement  (cost=0.00..30.38 rows=543 width=0)
               Filter: (logdate >= '2006-01-01'::date)

Note that constraint exclusion is driven only by CHECK constraints, not by the presence of indexes. Therefore it isn't necessary to define indexes on the key columns. Whether an index needs to be created for a given partition depends on whether you expect that queries that scan the partition will generally scan a large part of the partition or just a small part. An index will be helpful in the latter case but not the former.

The following caveats apply:

  • Constraint exclusion only works when the query's WHERE clause contains constants. A parameterized query will not be optimized, since the planner cannot know what partitions the parameter value might select at runtime. For the same reason, "stable" functions such as CURRENT_DATE must be avoided. Joining the partition key to a column of another table will not be optimized, either.

  • Avoid cross-datatype comparisons in the CHECK constraints, as the planner will currently fail to prove such conditions false. For example, the following constraint will work if x is an integer column, but not if x is a bigint:

    CHECK ( x = 1 )
    

    For a bigint column we must use a constraint like:

    CHECK ( x = 1::bigint )
    

    The problem is not limited to the bigint data type — it can occur whenever the default data type of the constant does not match the data type of the column to which it is being compared. Cross-datatype comparisons in the supplied queries are usually OK, just not in the CHECK conditions.

  • UPDATE and DELETE commands against the master table do not currently perform constraint exclusion.

  • All constraints on all partitions of the master table are considered for constraint exclusion, so large numbers of partitions are likely to increase query planning time considerably.

  • Don't forget that you still need to run ANALYZE on each partition individually. A command like

    ANALYZE measurement;
    

    will only process the master table.
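
    Each partition therefore has to be analyzed by name, for example (measurement_yy06mm01 standing in for whichever partition has received new data):

    ANALYZE measurement_yy06mm01;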