Multibyte (MB) support is intended to allow PostgreSQL to handle multiple-byte character sets such as EUC (Extended Unix Code), Unicode, and Mule internal code. With MB enabled you can use multibyte character sets in regular expressions (regexp), LIKE, and some other functions. The default encoding system is selected while initializing your PostgreSQL installation using initdb. Note that this can be overridden when you create a database using createdb or by using the SQL command CREATE DATABASE. So you can have multiple databases each with a different encoding system.
Run configure with the multibyte option:

./configure --enable-multibyte[=encoding_system]

where encoding_system can be one of the values in the following table:
Table 5-1. Character Set Encodings
| Encoding | Description |
|---|---|
| MULE_INTERNAL | Mule internal code |
| LATIN1 | ISO 8859-1 ECMA-94 Latin Alphabet No.1 |
| LATIN2 | ISO 8859-2 ECMA-94 Latin Alphabet No.2 |
| LATIN3 | ISO 8859-3 ECMA-94 Latin Alphabet No.3 |
| LATIN4 | ISO 8859-4 ECMA-94 Latin Alphabet No.4 |
| LATIN5 | ISO 8859-9 ECMA-128 Latin Alphabet No.5 |
| LATIN6 | ISO 8859-10 ECMA-144 Latin Alphabet No.6 |
| LATIN7 | ISO 8859-13 Latin Alphabet No.7 |
| LATIN8 | ISO 8859-14 Latin Alphabet No.8 |
| LATIN9 | ISO 8859-15 Latin Alphabet No.9 |
| LATIN10 | ISO 8859-16 ASRO SR 14111 Latin Alphabet No.10 |
Important: Before PostgreSQL 7.2, LATIN5 mistakenly meant ISO 8859-5. From 7.2 on, LATIN5 means ISO 8859-9. If you have a LATIN5 database created on 7.1 or earlier and want to migrate to 7.2 (or later), you should be very careful about this change.
Important: Not all APIs support all the encodings listed above. For example, the PostgreSQL JDBC driver does not support MULE_INTERNAL, LATIN6, LATIN8, and LATIN10.
Here is an example of configuring PostgreSQL to use a Japanese encoding by default:
$ ./configure --enable-multibyte=EUC_JP
If the encoding system is omitted (./configure --enable-multibyte), SQL_ASCII is assumed.
initdb defines the default encoding for a PostgreSQL installation. For example:
$ initdb -E EUC_JP
sets the default encoding to EUC_JP (Extended Unix Code for Japanese). Note that you can use --encoding instead of -E if you prefer to type longer option strings. If no -E or --encoding option is given, the encoding specified at configure time is used.
You can create a database with a different encoding:
$ createdb -E EUC_KR korean
will create a database named korean with EUC_KR encoding. Another way to accomplish this is to use a SQL command:
CREATE DATABASE korean WITH ENCODING = 'EUC_KR';
The encoding for a database is stored in the encoding column of the pg_database system catalog. You can see it by using the -l option or the \l command of psql.
$ psql -l
        List of databases
   Database    |  Owner  |   Encoding
---------------+---------+---------------
 euc_cn        | t-ishii | EUC_CN
 euc_jp        | t-ishii | EUC_JP
 euc_kr        | t-ishii | EUC_KR
 euc_tw        | t-ishii | EUC_TW
 mule_internal | t-ishii | MULE_INTERNAL
 regression    | t-ishii | SQL_ASCII
 template1     | t-ishii | EUC_JP
 test          | t-ishii | EUC_JP
 unicode       | t-ishii | UNICODE
(9 rows)
PostgreSQL supports an automatic encoding translation between server and client for some encodings. The available combinations are listed in Table 5-2.
Table 5-2. Client/Server Character Set Encodings
| Server Encoding | Available Client Encodings |
|---|---|
| SQL_ASCII | SQL_ASCII, UNICODE, MULE_INTERNAL |
| EUC_JP | EUC_JP, SJIS, UNICODE, MULE_INTERNAL |
| EUC_CN | EUC_CN, UNICODE, MULE_INTERNAL |
| EUC_KR | EUC_KR, UNICODE, MULE_INTERNAL |
| EUC_TW | EUC_TW, BIG5, UNICODE, MULE_INTERNAL |
| LATIN1 | LATIN1, UNICODE, MULE_INTERNAL |
| LATIN2 | LATIN2, WIN1250, UNICODE, MULE_INTERNAL |
| LATIN3 | LATIN3, UNICODE, MULE_INTERNAL |
| LATIN4 | LATIN4, UNICODE, MULE_INTERNAL |
| LATIN5 | LATIN5, UNICODE, MULE_INTERNAL |
| LATIN6 | LATIN6, UNICODE, MULE_INTERNAL |
| LATIN7 | LATIN7, UNICODE, MULE_INTERNAL |
| LATIN8 | LATIN8, UNICODE, MULE_INTERNAL |
| LATIN9 | LATIN9, UNICODE, MULE_INTERNAL |
| LATIN10 | LATIN10, UNICODE, MULE_INTERNAL |
| ISO_8859_5 | ISO_8859_5, WIN, ALT, KOI8, UNICODE, MULE_INTERNAL |
| UNICODE | EUC_JP, SJIS, EUC_KR, EUC_CN, EUC_TW, BIG5, LATIN1 to LATIN10, ISO_8859_5, ISO_8859_6, ISO_8859_7, ISO_8859_8, WIN, ALT, KOI8 |
| MULE_INTERNAL | EUC_JP, SJIS, EUC_KR, EUC_CN, EUC_TW, BIG5, LATIN1 to LATIN5, WIN, ALT, WIN1250 |
| KOI8 | ISO_8859_5, WIN, ALT, KOI8, UNICODE, MULE_INTERNAL |
| WIN | ISO_8859_5, WIN, ALT, KOI8, UNICODE, MULE_INTERNAL |
| ALT | ISO_8859_5, WIN, ALT, KOI8, UNICODE, MULE_INTERNAL |
To enable the automatic encoding translation, you have to tell PostgreSQL the encoding you would like to use in the client. There are several ways to accomplish this.
Using the \encoding command in psql. \encoding allows you to change the client encoding on the fly. For example, to change the encoding to SJIS, type:

\encoding SJIS
Using libpq functions. \encoding actually calls PQsetClientEncoding() to do its job:

int PQsetClientEncoding(PGconn *conn, const char *encoding)
where conn is a connection to the server, and encoding is an encoding you want to use. If it successfully sets the encoding, it returns 0, otherwise -1. The current encoding for this connection can be shown by using:
int PQclientEncoding(const PGconn *conn)
Note that it returns the encoding ID, not a symbolic string such as EUC_JP. To convert an encoding ID to an encoding name, you can use:
char *pg_encoding_to_char(int encoding_id)
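The following sketch shows how these three calls fit together in a small libpq client. The connection string "dbname=test" and the target encoding SJIS are placeholders; connecting requires a running server, so treat this as an illustration rather than a ready-made program:

```c
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* "dbname=test" is a placeholder connection string */
    PGconn *conn = PQconnectdb("dbname=test");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Ask for SJIS on this connection; returns 0 on success, -1 on failure */
    if (PQsetClientEncoding(conn, "SJIS") == -1)
        fprintf(stderr, "could not select SJIS encoding\n");

    /* PQclientEncoding() returns a numeric encoding ID;
       pg_encoding_to_char() maps it back to a name such as "SJIS" */
    printf("current client encoding: %s\n",
           pg_encoding_to_char(PQclientEncoding(conn)));

    PQfinish(conn);
    return 0;
}
```

Compile with the libpq headers and library, e.g. cc -o example example.c -lpq (paths vary by installation).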
Using SET CLIENT_ENCODING TO. Setting the client encoding can be done with this SQL command:
SET CLIENT_ENCODING TO 'encoding';
Also you can use the SQL92 syntax SET NAMES for this purpose:
SET NAMES 'encoding';
To query the current client encoding:

SHOW CLIENT_ENCODING;
To return to the default encoding:

RESET CLIENT_ENCODING;
Using PGCLIENTENCODING. If environment variable PGCLIENTENCODING is defined in the client's environment, that client encoding is automatically selected when a connection to the server is made. (This can subsequently be overridden using any of the other methods mentioned above.)
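For example, under a Bourne-compatible shell (the database name "mydb" below is a placeholder):

```shell
# Request SJIS as the client encoding for every connection made
# from this shell; any of the methods above can still override it.
PGCLIENTENCODING=SJIS
export PGCLIENTENCODING

# Then start any client as usual, e.g.:
#   psql mydb
```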
Automatic encoding translation between Unicode and other encodings has been supported since PostgreSQL 7.1. In 7.1 it was not enabled by default; to enable this feature, run configure with the --enable-unicode-conversion option. Note that this also requires the --enable-multibyte option. From PostgreSQL 7.2 on, --enable-unicode-conversion is no longer necessary: the Unicode conversion functionality is enabled automatically whenever --enable-multibyte is specified.
Not all combinations are lossless. Suppose you choose EUC_JP for the server and LATIN1 for the client; some Japanese characters then cannot be translated into LATIN1. In this case, a letter that cannot be represented in the LATIN1 character set is transformed into its hexadecimal byte values in parentheses.
These are good sources to start learning about various kinds of encoding systems.
* Detailed explanations of EUC_JP, EUC_CN, EUC_KR, and EUC_TW appear in section 3.2.
* The web site of the Unicode Consortium.
* UTF-8 is defined here.
Dec 7, 2000
* Automatic encoding translation between Unicode and other encodings is implemented
* Changes above will appear in 7.1

May 20, 2000
* SJIS UDC (NEC selection IBM kanji) support contributed by Eiji Tokuya
* Changes above will appear in 7.0.1

Mar 22, 2000
* Add new libpq functions PQsetClientEncoding, PQclientEncoding
* ./configure --with-mb=EUC_JP now deprecated; use ./configure --enable-multibyte=EUC_JP instead
* Add SQL_ASCII regression test case
* Add SJIS User Defined Character (UDC) support
* All of above will appear in 7.0

July 11, 1999
* Add support for WIN1250 (Windows Czech) as a client encoding (contributed by Pavel Behal)
* Fix some compiler warnings (contributed by Tomoaki Nishiyama)

Mar 23, 1999
* Add support for KOI8 (KOI8-R), WIN (CP1251), ALT (CP866) (thanks to Oleg Broytmann for testing)
* Fix problem with MB and locale

Jan 26, 1999
* Add support for Big5 as a frontend encoding (you need to create a database with EUC_TW to use Big5)
* Add regression test case for EUC_TW (contributed by Jonah Kuo <email@example.com>)

Dec 15, 1998
* Bugs related to SQL_ASCII support fixed

Nov 5, 1998
* 6.4 release. In this version, pg_database has an "encoding" column that represents the database encoding

Jul 22, 1998
* Determine encoding at initdb/createdb time rather than at compile time
* Support for PGCLIENTENCODING when issuing COPY command
* Support for SQL92 syntax "SET NAMES"
* Support for LATIN2-5
* Add UNICODE regression test case
* New test suite for MB
* Clean up source files

Jun 5, 1998
* Add support for encoding translation between the backend and the frontend
* New command SET CLIENT_ENCODING etc. added
* Add support for LATIN1 character set
* Enhance 8-bit cleanliness

April 21, 1998 (some enhancements/fixes)
* character_length(), position(), substring() are now aware of multibyte characters
* Add octet_length()
* Add --with-mb option to configure
* New regression tests for EUC_KR (contributed by Soonmyung Hong)
* Add some test cases to the EUC_JP regression test
* Fix problem in regress/regress.sh in case of System V
* Fix toupper(), tolower() to handle 8-bit characters

Mar 25, 1998
* MB PL2 is incorporated into PostgreSQL 6.3.1

Mar 10, 1998 (PL2 released)
* Add regression test for EUC_JP, EUC_CN and MULE_INTERNAL
* Add an English document (this file)
* Fix problems concerning 8-bit single-byte characters

Mar 1, 1998
* PL1 released
The WIN1250 character set on Windows client platforms can be used with PostgreSQL with locale support enabled.
The following should be kept in mind:
Success depends on proper system locales. This has been tested with Red Hat 6.0 and Slackware 3.6, with the cs_CZ.iso8859-2 locale.
Never try to set the server's database encoding to WIN1250. Always use LATIN2 instead since there is no WIN1250 locale in Unix.
The WIN1250 encoding is usable only for Windows ODBC clients. The characters are recoded on the fly, to be displayed and stored back properly.
WIN1250 on Windows/ODBC
1. Compile PostgreSQL with locale enabled and the server-side encoding set to LATIN2.

2. Set up your installation. Do not forget to create the locale variables in your environment. For example (this may not be correct for your environment):

   export LC_ALL=cs_CZ.ISO8859-2

3. You have to start the server with the locales set!

4. Try it out with the Czech language, and have it sort on a query.

5. Install the ODBC driver for PostgreSQL on your Windows machine.

6. Set up your data source properly. Include this line in the Connect Settings field of your ODBC configuration dialog:

   SET CLIENT_ENCODING = 'WIN1250';

7. Now try it again, but in Windows with ODBC.