COPY — copy data between a file and a table
COPY table_name [ ( column_name [, ...] ) ]
    FROM { 'filename' | PROGRAM 'command' | STDIN }
    [ [ WITH ] ( option [, ...] ) ]

COPY { table_name [ ( column_name [, ...] ) ] | ( query ) }
    TO { 'filename' | PROGRAM 'command' | STDOUT }
    [ [ WITH ] ( option [, ...] ) ]

where option can be one of:

    FORMAT format_name
    OIDS [ boolean ]
    FREEZE [ boolean ]
    DELIMITER 'delimiter_character'
    NULL 'null_string'
    HEADER [ boolean ]
    QUOTE 'quote_character'
    ESCAPE 'escape_character'
    FORCE_QUOTE { ( column_name [, ...] ) | * }
    FORCE_NOT_NULL ( column_name [, ...] )
    FORCE_NULL ( column_name [, ...] )
    ENCODING 'encoding_name'
COPY moves data between
PostgreSQL tables and standard
file-system files. COPY TO copies the
contents of a table to a
file, while COPY FROM copies data
from a file to a table
(appending the data to whatever is in the table already).
COPY TO can also copy the results of a
SELECT query.
If a list of columns is specified, COPY will only copy the data in the specified
columns to or from the file. If there are any columns in the table
that are not in the column list, COPY
FROM will insert the default values for those columns.
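For example, assuming a hypothetical table measurements(city, temp, logdate) in which logdate has a column default, the following COPY FROM fills only the listed columns and lets logdate take its default for every row (the file path is likewise hypothetical):
COPY measurements (city, temp) FROM '/tmp/measurements.dat';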
COPY with a file name instructs the
PostgreSQL server to directly read
from or write to a file. The file must be accessible by the
PostgreSQL user (the user ID the
server runs as) and the name must be specified from the viewpoint
of the server. When PROGRAM is
specified, the server executes the given command and reads from the
standard output of the program, or writes to the standard input of
the program. The command must be specified from the viewpoint of
the server, and be executable by the PostgreSQL user. When STDIN or STDOUT is
specified, data is transmitted via the connection between the
client and the server.
table_name
The name (optionally schema-qualified) of an existing table.
column_name
An optional list of columns to be copied. If no column list is specified, all columns of the table will be copied.
query
A SELECT, VALUES, INSERT, UPDATE or DELETE command whose results are to be copied. Note that parentheses are required around the query.
For INSERT, UPDATE and DELETE
queries a RETURNING clause must be provided, and the target
relation must not have a conditional rule, nor an ALSO rule, nor an INSTEAD rule that expands to multiple
statements.
filename
The path name of the input or output file. An input file name
can be an absolute or relative path, but an output file name must
be an absolute path. Windows users might need to use an
E'' string and double any backslashes
used in the path name.
PROGRAM
A command to execute. In COPY FROM,
the input is read from standard output of the command, and in
COPY TO, the output is written to the
standard input of the command.
Note that the command is invoked by the shell, so if you need to pass any arguments to a shell command that come from an untrusted source, you must be careful to strip or escape any special characters that might have a special meaning for the shell. For security reasons, it is best to use a fixed command string, or at least avoid passing any user input in it.
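As an illustration only, with a hypothetical compressed server-side file and a fixed command string that contains no user input, reading through an external program could look like this:
COPY country FROM PROGRAM 'gunzip -c /tmp/country_data.gz';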
STDIN
Specifies that input comes from the client application.
STDOUT
Specifies that output goes to the client application.
boolean
Specifies whether the selected option should be turned on or
off. You can write TRUE, ON, or 1 to enable
the option, and FALSE, OFF, or 0 to disable
it. The boolean value can
also be omitted, in which case TRUE is
assumed.
FORMAT
Selects the data format to be read or written: text, csv (Comma
Separated Values), or binary. The
default is text.
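For instance, to read a CSV file into the country table used in the examples below (the file path is hypothetical):
COPY country FROM '/tmp/country.csv' WITH (FORMAT csv);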
OIDS
Specifies copying the OID for each row. (An error is raised if
OIDS is specified for a table that
does not have OIDs, or in the case of copying a query.)
FREEZE
Requests copying the data with rows already frozen, just as they
would be after running the VACUUM
FREEZE command. This is intended as a performance option for
initial data loading. Rows will be frozen only if the table being
loaded has been created or truncated in the current subtransaction,
there are no cursors open and there are no older snapshots held by
this transaction.
Note that all other sessions will immediately be able to see the data once it has been successfully loaded. This violates the normal rules of MVCC visibility, and users specifying this option should be aware of the potential problems it might cause.
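A minimal sketch of an initial load that meets these conditions, using a hypothetical table and file path, might be:
BEGIN;
TRUNCATE bulk_data;
COPY bulk_data FROM '/tmp/bulk_data.dat' WITH (FREEZE);
COMMIT;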
DELIMITER
Specifies the character that separates columns within each row
(line) of the file. The default is a tab character in text format,
a comma in CSV format. This must be a
single one-byte character. This option is not allowed when using
binary format.
NULL
Specifies the string that represents a null value. The default
is \N (backslash-N) in text format,
and an unquoted empty string in CSV
format. You might prefer an empty string even in text format for
cases where you don't want to distinguish nulls from empty strings.
This option is not allowed when using binary format.
When using COPY FROM, any data item
that matches this string will be stored as a null value, so you
should make sure that you use the same string as you used with
COPY TO.
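For example, to have the literal string NULL in a hypothetical input file read back as SQL null values:
COPY country FROM '/tmp/country_data' WITH (NULL 'NULL');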
HEADER
Specifies that the file contains a header line with the names of
each column in the file. On output, the first line contains the
column names from the table, and on input, the first line is
ignored. This option is allowed only when using CSV format.
QUOTE
Specifies the quoting character to be used when a data value is
quoted. The default is double-quote. This must be a single one-byte
character. This option is allowed only when using CSV format.
ESCAPE
Specifies the character that should appear before a data
character that matches the QUOTE
value. The default is the same as the QUOTE value (so that the quoting character is
doubled if it appears in the data). This must be a single one-byte
character. This option is allowed only when using CSV format.
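As a sketch, the following export keeps the default double-quote QUOTE character but escapes embedded quotes with a backslash instead of doubling them (the file path is hypothetical):
COPY country TO '/tmp/country.csv' WITH (FORMAT csv, QUOTE '"', ESCAPE '\');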
FORCE_QUOTE
Forces quoting to be used for all non-NULL values in each specified column. NULL output is never quoted. If * is specified, non-NULL values will be quoted in all columns. This
option is allowed only in COPY TO, and
only when using CSV format.
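For example, to always quote the non-NULL values of the country_name column on output (file path hypothetical):
COPY country TO '/tmp/country.csv' WITH (FORMAT csv, FORCE_QUOTE (country_name));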
FORCE_NOT_NULL
Do not match the specified columns' values against the null
string. In the default case where the null string is empty, this
means that empty values will be read as zero-length strings rather
than nulls, even when they are not quoted. This option is allowed
only in COPY FROM, and only when using
CSV format.
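For example, to read empty, unquoted values in the country_name column as empty strings rather than nulls (file path hypothetical):
COPY country FROM '/tmp/country.csv' WITH (FORMAT csv, FORCE_NOT_NULL (country_name));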
FORCE_NULL
Match the specified columns' values against the null string,
even if it has been quoted, and if a match is found set the value
to NULL. In the default case where the
null string is empty, this converts a quoted empty string into
NULL. This option is allowed only in COPY
FROM, and only when using CSV
format.
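For example, to convert quoted empty strings in the country_name column to NULL on input (file path hypothetical):
COPY country FROM '/tmp/country.csv' WITH (FORMAT csv, FORCE_NULL (country_name));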
ENCODING
Specifies that the file is encoded in the encoding_name. If this option is
omitted, the current client encoding is used. See the Notes below
for more details.
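For instance, to load a file known to be Latin-1 encoded regardless of the current client encoding (file path hypothetical):
COPY country FROM '/tmp/country_latin1.csv' WITH (FORMAT csv, ENCODING 'LATIN1');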
On successful completion, a COPY
command returns a command tag of the form
COPY count
The count is the
number of rows copied.
psql will print this command
tag only if the command was not COPY ... TO
STDOUT, or the equivalent psql meta-command \copy
... to stdout. This is to prevent confusing the command tag
with the data that was just printed.
COPY TO can only be used with plain
tables, not with views. However, you can write
COPY (SELECT * FROM viewname) TO ... to copy the
current contents of a view.
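For example, with a hypothetical view myview:
COPY (SELECT * FROM myview) TO '/tmp/myview.copy';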
COPY FROM can be used with plain
tables and with views that have INSTEAD OF
INSERT triggers.
COPY only deals with the specific
table named; it does not copy data to or from child tables. Thus
for example COPY table TO shows the same data
as SELECT * FROM ONLY table. But
COPY (SELECT * FROM table) TO ... can be used to
dump all of the data in an inheritance hierarchy.
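For example, assuming a hypothetical parent table cities that has child tables, the following dumps the whole hierarchy:
COPY (SELECT * FROM cities) TO '/tmp/cities_all.copy';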
You must have select privilege on the table whose values are
read by COPY TO, and insert privilege
on the table into which values are inserted by COPY FROM. It is sufficient to have column
privileges on the column(s) listed in the command.
If row-level security is enabled for the table, the relevant
SELECT policies will apply to
COPY statements.
Currently, COPY FROM is not supported
for tables with row-level security. Use equivalent INSERT statements instead.
Files named in a COPY command are
read or written directly by the server, not by the client
application. Therefore, they must reside on or be accessible to the
database server machine, not the client. They must be accessible to
and readable or writable by the PostgreSQL user (the user ID the server runs
as), not the client. Similarly, the command specified with
PROGRAM is executed directly by the
server, not by the client application, and must be executable by the
PostgreSQL user. COPY naming a file or command is only allowed to
database superusers, since it allows reading or writing any file
that the server has privileges to access.
Do not confuse COPY with the
psql instruction \copy.
\copy invokes COPY FROM STDIN or COPY TO
STDOUT, and then fetches/stores the data in a file
accessible to the psql client.
Thus, file accessibility and access rights depend on the client
rather than the server when \copy is
used.
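For example, the following psql meta-command writes the country table to a file on the client machine (client-side path hypothetical):
\copy country TO '/tmp/country_client.copy'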
It is recommended that the file name used in COPY always be specified as an absolute path. This
is enforced by the server in the case of COPY
TO, but for COPY FROM you do
have the option of reading from a file specified by a relative
path. The path will be interpreted relative to the working
directory of the server process (normally the cluster's data
directory), not the client's working directory.
Executing a command with PROGRAM
might be restricted by the operating system's access control
mechanisms, such as SELinux.
COPY FROM will invoke any triggers
and check constraints on the destination table. However, it will
not invoke rules.
For identity columns, the COPY FROM
command will always write the column values provided in the input
data, like the INSERT option
OVERRIDING SYSTEM VALUE.
COPY input and output is affected
by DateStyle. To ensure portability to
other PostgreSQL installations
that might use non-default DateStyle
settings, DateStyle should be set to
ISO before using COPY TO. It is also a good idea to avoid dumping
data with IntervalStyle set to
sql_standard, because negative
interval values might be misinterpreted by a server that has a
different setting for IntervalStyle.
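A sketch of pinning the style before a dump, assuming a hypothetical table orders that contains date columns:
SET DateStyle TO ISO;
COPY orders TO '/tmp/orders.copy';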
Input data is interpreted according to the ENCODING option or the current client encoding,
and output data is encoded in ENCODING
or the current client encoding, even if the data does not pass
through the client but is read from or written to a file directly
by the server.
COPY stops operation at the first
error. This should not lead to problems in the event of a
COPY TO, but the target table will
already have received earlier rows in a COPY
FROM. These rows will not be visible or accessible, but they
still occupy disk space. This might amount to a considerable amount
of wasted disk space if the failure happened well into a large copy
operation. You might wish to invoke VACUUM to recover the wasted space.
FORCE_NULL and FORCE_NOT_NULL can be used simultaneously on the
same column. This results in converting quoted null strings to null
values and unquoted null strings to empty strings.
When the text format is used, the
data read or written is a text file with one line per table row.
Columns in a row are separated by the delimiter character. The
column values themselves are strings generated by the output
function, or acceptable to the input function, of each attribute's
data type. The specified null string is used in place of columns
that are null. COPY FROM will raise an
error if any line of the input file contains more or fewer columns
than are expected. If OIDS is
specified, the OID is read or written as the first column,
preceding the user data columns.
End of data can be represented by a single line containing just
backslash-period (\.). An end-of-data
marker is not necessary when reading from a file, since the end of
file serves perfectly well; it is needed only when copying data to
or from client applications using pre-3.0 client protocol.
Backslash characters (\) can be
used in the COPY data to quote data
characters that might otherwise be taken as row or column
delimiters. In particular, the following characters must be preceded by a backslash if they
appear as part of a column value: backslash itself, newline,
carriage return, and the current delimiter character.
The specified null string is sent by COPY
TO without adding any backslashes; conversely, COPY FROM matches the input against the null
string before removing backslashes. Therefore, a null string such
as \N cannot be confused with the
actual data value \N (which would be
represented as \\N).
The following special backslash sequences are recognized by
COPY FROM:
| Sequence | Represents |
|---|---|
| \b | Backspace (ASCII 8) |
| \f | Form feed (ASCII 12) |
| \n | Newline (ASCII 10) |
| \r | Carriage return (ASCII 13) |
| \t | Tab (ASCII 9) |
| \v | Vertical tab (ASCII 11) |
| \digits | Backslash followed by one to three octal digits specifies the character with that numeric code |
| \xdigits | Backslash x followed by one or two hex digits specifies the character with that numeric code |
Presently, COPY TO will never emit
an octal or hex-digits backslash sequence, but it does use the
other sequences listed above for those control characters.
Any other backslashed character that is not mentioned in the
above table will be taken to represent itself. However, beware of
adding backslashes unnecessarily, since that might accidentally
produce a string matching the end-of-data marker (\.) or the null string (\N by default). These strings will be recognized
before any other backslash processing is done.
It is strongly recommended that applications generating
COPY data convert data newlines and
carriage returns to the \n and
\r sequences respectively. At present
it is possible to represent a data carriage return by a backslash
and carriage return, and to represent a data newline by a backslash
and newline. However, these representations might not be accepted
in future releases. They are also highly vulnerable to corruption
if the COPY file is transferred across
different machines (for example, from Unix to Windows or vice
versa).
COPY TO will terminate each row
with a Unix-style newline (“\n”). Servers
running on Microsoft Windows instead output carriage return/newline
(“\r\n”), but only for COPY to a server file; for consistency across
platforms, COPY TO STDOUT always sends
“\n” regardless of server platform.
COPY FROM can handle lines ending with
newlines, carriage returns, or carriage return/newlines. To reduce
the risk of error due to un-backslashed newlines or carriage
returns that were meant as data, COPY
FROM will complain if the line endings in the input are not
all alike.
This format option is used for importing and exporting the Comma
Separated Value (CSV) file format used
by many other programs, such as spreadsheets. Instead of the
escaping rules used by PostgreSQL's standard text format, it produces
and recognizes the common CSV escaping mechanism.
The values in each record are separated by the DELIMITER character. If the value contains the
delimiter character, the QUOTE
character, the NULL string, a carriage
return, or line feed character, then the whole value is prefixed
and suffixed by the QUOTE character,
and any occurrence within the value of a QUOTE character or the ESCAPE character is preceded by the escape
character. You can also use FORCE_QUOTE to force quotes when outputting
non-NULL values in specific
columns.
The CSV format has no standard way
to distinguish a NULL value from an
empty string. PostgreSQL's
COPY handles this by quoting. A
NULL is output as the NULL parameter string and is not quoted, while a
non-NULL value matching the
NULL parameter string is quoted. For
example, with the default settings, a NULL is written as an unquoted empty string, while
an empty string data value is written with double quotes
(""). Reading values follows similar
rules. You can use FORCE_NOT_NULL to
prevent NULL input comparisons for
specific columns. You can also use FORCE_NULL to convert quoted null string data
values to NULL.
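To see this distinction with the default settings, the following two statements can be run; per the rules above, the first writes DE, (null second column, unquoted) and the second writes DE,"" (empty-string second column, quoted):
COPY (SELECT 'DE'::text, NULL::text) TO STDOUT WITH (FORMAT csv);
COPY (SELECT 'DE'::text, ''::text) TO STDOUT WITH (FORMAT csv);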
Because backslash is not a special character in the CSV format, \., the
end-of-data marker, could also appear as a data value. To avoid any
misinterpretation, a \. data value
appearing as a lone entry on a line is automatically quoted on
output, and on input, if quoted, is not interpreted as the
end-of-data marker. If you are loading a file created by another
application that has a single unquoted column and might have a
value of \., you might need to quote
that value in the input file.
In CSV format, all characters are
significant. A quoted value surrounded by white space, or any
characters other than DELIMITER, will
include those characters. This can cause errors if you import data
from a system that pads CSV lines with
white space out to some fixed width. If such a situation arises you
might need to preprocess the CSV file
to remove the trailing white space, before importing the data into
PostgreSQL.
CSV format will both recognize and produce CSV files with quoted values containing embedded carriage returns and line feeds. Thus the files are not strictly one line per table row like text-format files.
Many programs produce strange and occasionally perverse CSV
files, so the file format is more a convention than a standard.
Thus you might encounter some files that cannot be imported using
this mechanism, and COPY might produce
files that other programs cannot process.
The binary format option causes all
data to be stored/read as binary format rather than as text. It is
somewhat faster than the text and CSV
formats, but a binary-format file is less portable across machine
architectures and PostgreSQL
versions. Also, the binary format is very data type specific; for
example it will not work to output binary data from a smallint column and read it into an integer column, even though that would work fine in
text format.
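For example, to round-trip the country table in binary format through a hypothetical file and a second table country_copy with identical column types:
COPY country TO '/tmp/country.bin' WITH (FORMAT binary);
COPY country_copy FROM '/tmp/country.bin' WITH (FORMAT binary);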
The binary file format consists of
a file header, zero or more tuples containing the row data, and a
file trailer. Headers and data are in network byte order.
PostgreSQL releases before 7.4 used a different binary file format.
The file header consists of 19 bytes of fixed fields (an 11-byte signature, a 4-byte flags field, and a 4-byte header extension length), followed by a variable-length header extension area. The fixed fields are:
Signature
11-byte sequence PGCOPY\n\377\r\n\0
— note that the zero byte is a required part of the signature. (The
signature is designed to allow easy identification of files that
have been munged by a non-8-bit-clean transfer. This signature will
be changed by end-of-line-translation filters, dropped zero bytes,
dropped high bits, or parity changes.)
Flags field
32-bit integer bit mask to denote important aspects of the file format. Bits are numbered from 0 (LSB) to 31 (MSB). Note that this field is stored in network byte order (most significant byte first), as are all the integer fields used in the file format. Bits 16-31 are reserved to denote critical file format issues; a reader should abort if it finds an unexpected bit set in this range. Bits 0-15 are reserved to signal backwards-compatible format issues; a reader should simply ignore any unexpected bits set in this range. Currently only one flag bit is defined, and the rest must be zero:
Bit 16
if 1, OIDs are included in the data; if 0, not
Header extension area length
32-bit integer, length in bytes of remainder of header, not including self. Currently, this is zero, and the first tuple follows immediately. Future changes to the format might allow additional data to be present in the header. A reader should silently skip over any header extension data it does not know what to do with.
The header extension area is envisioned to contain a sequence of self-identifying chunks. The flags field is not intended to tell readers what is in the extension area. Specific design of header extension contents is left for a later release.
This design allows for both backwards-compatible header additions (add header extension chunks, or set low-order flag bits) and non-backwards-compatible changes (set high-order flag bits to signal such changes, and add supporting data to the extension area if needed).
Each tuple begins with a 16-bit integer count of the number of fields in the tuple. (Presently, all tuples in a table will have the same count, but that might not always be true.) Then, repeated for each field in the tuple, there is a 32-bit length word followed by that many bytes of field data. (The length word does not include itself, and can be zero.) As a special case, -1 indicates a NULL field value. No value bytes follow in the NULL case.
There is no alignment padding or any other extra data between fields.
Presently, all data values in a binary-format file are assumed to be in binary format (format code one). It is anticipated that a future extension might add a header field that allows per-column format codes to be specified.
To determine the appropriate binary format for the actual tuple
data you should consult the PostgreSQL source, in particular the
*send and *recv functions for each column's data type
(typically these functions are found in the src/backend/utils/adt/ directory of the source
distribution).
If OIDs are included in the file, the OID field immediately follows the field-count word. It is a normal field except that it's not included in the field-count. In particular it has a length word — this will allow handling of 4-byte vs. 8-byte OIDs without too much pain, and will allow OIDs to be shown as null if that ever proves desirable.
The file trailer consists of a 16-bit integer word containing -1. This is easily distinguished from a tuple's field-count word.
A reader should report an error if a field-count word is neither -1 nor the expected number of columns. This provides an extra check against somehow getting out of sync with the data.
The following example copies a table to the client using the
vertical bar (|) as the field
delimiter:
COPY country TO STDOUT (DELIMITER '|');
To copy data from a file into the country table:
COPY country FROM '/usr1/proj/bray/sql/country_data';
To copy into a file just the countries whose names start with 'A':
COPY (SELECT * FROM country WHERE country_name LIKE 'A%') TO '/usr1/proj/bray/sql/a_list_countries.copy';
To copy into a compressed file, you can pipe the output through an external compression program:
COPY country TO PROGRAM 'gzip > /usr1/proj/bray/sql/country_data.gz';
Here is a sample of data suitable for copying into a table from
STDIN:
AF	AFGHANISTAN
AL	ALBANIA
DZ	ALGERIA
ZM	ZAMBIA
ZW	ZIMBABWE
Note that the white space on each line is actually a tab character.
The following is the same data, output in binary format. The
data is shown after filtering through the Unix utility od -c. The table has three columns; the first has
type char(2), the second has type
text, and the third has type integer. All the rows have a null value in the third
column.
0000000 P G C O P Y \n 377 \r \n \0 \0 \0 \0 \0 \0
0000020 \0 \0 \0 \0 003 \0 \0 \0 002 A F \0 \0 \0 013 A
0000040 F G H A N I S T A N 377 377 377 377 \0 003
0000060 \0 \0 \0 002 A L \0 \0 \0 007 A L B A N I
0000100 A 377 377 377 377 \0 003 \0 \0 \0 002 D Z \0 \0 \0
0000120 007 A L G E R I A 377 377 377 377 \0 003 \0 \0
0000140 \0 002 Z M \0 \0 \0 006 Z A M B I A 377 377
0000160 377 377 \0 003 \0 \0 \0 002 Z W \0 \0 \0 \b Z I
0000200 M B A B W E 377 377 377 377 377 377
There is no COPY statement in the
SQL standard.
The following syntax was used before PostgreSQL version 9.0 and is still supported:
COPY table_name [ ( column_name [, ...] ) ]
    FROM { 'filename' | STDIN }
    [ [ WITH ]
          [ BINARY ]
          [ OIDS ]
          [ DELIMITER [ AS ] 'delimiter' ]
          [ NULL [ AS ] 'null string' ]
          [ CSV [ HEADER ]
                [ QUOTE [ AS ] 'quote' ]
                [ ESCAPE [ AS ] 'escape' ]
                [ FORCE NOT NULL column_name [, ...] ] ] ]

COPY { table_name [ ( column_name [, ...] ) ] | ( query ) }
    TO { 'filename' | STDOUT }
    [ [ WITH ]
          [ BINARY ]
          [ OIDS ]
          [ DELIMITER [ AS ] 'delimiter' ]
          [ NULL [ AS ] 'null string' ]
          [ CSV [ HEADER ]
                [ QUOTE [ AS ] 'quote' ]
                [ ESCAPE [ AS ] 'escape' ]
                [ FORCE QUOTE { column_name [, ...] | * } ] ] ]
Note that in this syntax, BINARY
and CSV are treated as independent
keywords, not as arguments of a FORMAT
option.
The following syntax was used before PostgreSQL version 7.3 and is still supported:
COPY [ BINARY ] table_name [ WITH OIDS ]
    FROM { 'filename' | STDIN }
    [ [USING] DELIMITERS 'delimiter' ]
    [ WITH NULL AS 'null string' ]

COPY [ BINARY ] table_name [ WITH OIDS ]
    TO { 'filename' | STDOUT }
    [ [USING] DELIMITERS 'delimiter' ]
    [ WITH NULL AS 'null string' ]