Data Definition
This chapter covers how one creates the database structures that
will hold one's data. In a relational database, the raw data is
stored in tables, so the majority of this chapter is devoted to
explaining how tables are created and modified and what features are
available to control what data is stored in the tables.
Subsequently, we discuss how tables can be organized into
schemas, and how privileges can be assigned to tables. Finally,
we will briefly look at other features that affect the data storage,
such as inheritance, views, functions, and triggers.
Table Basics
A table in a relational database is much like a table on paper: It
consists of rows and columns. The number and order of the columns
is fixed, and each column has a name. The number of rows is
variable — it reflects how much data is stored at a given moment.
SQL does not make any guarantees about the order of the rows in a
table. When a table is read, the rows will appear in an unspecified order,
unless sorting is explicitly requested (sorting is covered in the chapter on queries). Furthermore, SQL does not assign unique
identifiers to rows, so it is possible to have several completely
identical rows in a table. This is a consequence of the
mathematical model that underlies SQL but is usually not desirable.
Later in this chapter we will see how to deal with this issue.
Each column has a data type. The data type constrains the set of
possible values that can be assigned to a column and assigns
semantics to the data stored in the column so that it can be used
for computations. For instance, a column declared to be of a
numerical type will not accept arbitrary text strings, and the data
stored in such a column can be used for mathematical computations.
By contrast, a column declared to be of a character string type
will accept almost any kind of data, but such values do not lend themselves to
mathematical calculations, although other operations such as string
concatenation are available.
PostgreSQL includes a sizable set of
built-in data types that fit many applications. Users can also
define their own data types. Most built-in data types have obvious
names and semantics, so we defer a detailed explanation to the chapter on data types. Some of the frequently used data types are
integer for whole numbers, numeric for
possibly fractional numbers, text for character
strings, date for dates, time for
time-of-day values, and timestamp for values
containing both date and time.
To create a table, you use the aptly named CREATE TABLE command.
In this command you specify at least a name for the new table, the
names of the columns and the data type of each column. For
example:
CREATE TABLE my_first_table (
first_column text,
second_column integer
);
This creates a table named my_first_table with
two columns. The first column is named
first_column and has a data type of
text; the second column has the name
second_column and the type integer.
The table and column names follow the identifier syntax explained
in the chapter on SQL syntax. The type names are
usually also identifiers, but there are some exceptions. Note that the
column list is comma-separated and surrounded by parentheses.
Of course, the previous example was heavily contrived. Normally,
you would give names to your tables and columns that convey what
kind of data they store. So let's look at a more realistic
example:
CREATE TABLE products (
product_no integer,
name text,
price numeric
);
(The numeric type can store fractional components, as
would be typical of monetary amounts.)
When you create many interrelated tables it is wise to choose a
consistent naming pattern for the tables and columns. For
instance, there is a choice of using singular or plural nouns for
table names, both of which are favored by some theorist or other.
There is a limit on how many columns a table can contain.
Depending on the column types, it is between 250 and 1600.
However, defining a table with anywhere near this many columns is
highly unusual and often a questionable design.
If you no longer need a table, you can remove it using the DROP TABLE command.
For example:
DROP TABLE my_first_table;
DROP TABLE products;
Attempting to drop a table that does not exist is an error.
Nevertheless, it is common in SQL script files to unconditionally
try to drop each table before creating it, ignoring any error
messages, so that the script works whether or not the table exists.
(If you like, you can use the DROP TABLE IF EXISTS variant
to avoid the error messages, but this is not standard SQL.)
If you need to modify a table that already exists, see Modifying Tables later in this chapter.
With the tools discussed so far you can create fully functional
tables. The remainder of this chapter is concerned with adding
features to the table definition to ensure data integrity,
security, or convenience. If you are eager to fill your tables with
data now you can skip ahead to the chapter on data manipulation and read the
rest of this chapter later.
Default Values
A column can be assigned a default value. When a new row is
created and no values are specified for some of the columns, those
columns will be filled with their respective default values. A
data manipulation command can also request explicitly that a column
be set to its default value, without having to know what that value is.
(Data manipulation commands are covered in a later chapter.)
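For example, using the products table defined just below, an INSERT can write the key word DEFAULT in place of a value; a minimal sketch:
INSERT INTO products (product_no, name, price) VALUES (1, 'Cheese', DEFAULT);
Here price receives its declared default rather than an explicitly supplied value.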
If no default value is declared explicitly, the default value is the
null value. This usually makes sense because a null value can
be considered to represent unknown data.
In a table definition, default values are listed after the column
data type. For example:
CREATE TABLE products (
product_no integer,
name text,
price numeric DEFAULT 9.99
);
The default value can be an expression, which will be
evaluated whenever the default value is inserted
(not when the table is created). A common example
is for a timestamp column to have a default of CURRENT_TIMESTAMP,
so that it gets set to the time of row insertion. Another common
example is generating a serial number for each row.
In PostgreSQL this is typically done by
something like:
CREATE TABLE products (
product_no integer DEFAULT nextval('products_product_no_seq'),
...
);
where the nextval() function supplies successive values
from a sequence object (see the section on sequence manipulation functions). This arrangement is sufficiently common
that there's a special shorthand for it:
CREATE TABLE products (
product_no SERIAL,
...
);
The SERIAL shorthand is discussed further in the section on serial types.
Constraints
Data types are a way to limit the kind of data that can be stored
in a table. For many applications, however, the constraint they
provide is too coarse. For example, a column containing a product
price should probably only accept positive values. But there is no
standard data type that accepts only positive numbers. Another issue is
that you might want to constrain column data with respect to other
columns or rows. For example, in a table containing product
information, there should be only one row for each product number.
To that end, SQL allows you to define constraints on columns and
tables. Constraints give you as much control over the data in your
tables as you wish. If a user attempts to store data in a column
that would violate a constraint, an error is raised. This applies
even if the value came from the default value definition.
Check Constraints
A check constraint is the most generic constraint type. It allows
you to specify that the value in a certain column must satisfy a
Boolean (truth-value) expression. For instance, to require positive
product prices, you could use:
CREATE TABLE products (
product_no integer,
name text,
price numeric CHECK (price > 0)
);
As you see, the constraint definition comes after the data type,
just like default value definitions. Default values and
constraints can be listed in any order. A check constraint
consists of the key word CHECK followed by an
expression in parentheses. The check constraint expression should
involve the column thus constrained, otherwise the constraint
would not make much sense.
You can also give the constraint a separate name. This clarifies
error messages and allows you to refer to the constraint when you
need to change it. The syntax is:
CREATE TABLE products (
product_no integer,
name text,
price numeric CONSTRAINT positive_price CHECK (price > 0)
);
So, to specify a named constraint, use the key word
CONSTRAINT followed by an identifier followed
by the constraint definition. (If you don't specify a constraint
name in this way, the system chooses a name for you.)
A check constraint can also refer to several columns. Say you
store a regular price and a discounted price, and you want to
ensure that the discounted price is lower than the regular price:
CREATE TABLE products (
product_no integer,
name text,
price numeric CHECK (price > 0),
discounted_price numeric CHECK (discounted_price > 0),
CHECK (price > discounted_price)
);
The first two constraints should look familiar. The third one
uses a new syntax. It is not attached to a particular column,
instead it appears as a separate item in the comma-separated
column list. Column definitions and these constraint
definitions can be listed in mixed order.
We say that the first two constraints are column constraints, whereas the
third one is a table constraint because it is written separately
from any one column definition. Column constraints can also be
written as table constraints, while the reverse is not necessarily
possible, since a column constraint is supposed to refer to only the
column it is attached to. (PostgreSQL doesn't
enforce that rule, but you should follow it if you want your table
definitions to work with other database systems.) The above example could
also be written as:
CREATE TABLE products (
product_no integer,
name text,
price numeric,
CHECK (price > 0),
discounted_price numeric,
CHECK (discounted_price > 0),
CHECK (price > discounted_price)
);
or even:
CREATE TABLE products (
product_no integer,
name text,
price numeric CHECK (price > 0),
discounted_price numeric,
CHECK (discounted_price > 0 AND price > discounted_price)
);
It's a matter of taste.
Names can be assigned to table constraints in the same way as
column constraints:
CREATE TABLE products (
product_no integer,
name text,
price numeric,
CHECK (price > 0),
discounted_price numeric,
CHECK (discounted_price > 0),
CONSTRAINT valid_discount CHECK (price > discounted_price)
);
It should be noted that a check constraint is satisfied if the
check expression evaluates to true or the null value. Since most
expressions will evaluate to the null value if any operand is null,
they will not prevent null values in the constrained columns. To
ensure that a column does not contain null values, the not-null
constraint described in the next section can be used.
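For example, under the CHECK (price > 0) constraint from the products example above, the following insert is accepted, because the check expression evaluates to null rather than false:
INSERT INTO products (product_no, name, price) VALUES (1, 'Cheese', NULL); -- accepted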
Not-Null Constraints
A not-null constraint simply specifies that a column must not
assume the null value. A syntax example:
CREATE TABLE products (
product_no integer NOT NULL,
name text NOT NULL,
price numeric
);
A not-null constraint is always written as a column constraint. A
not-null constraint is functionally equivalent to creating a check
constraint CHECK (column_name IS NOT NULL), but in
PostgreSQL creating an explicit
not-null constraint is more efficient. The drawback is that you
cannot give explicit names to not-null constraints created this
way.
Of course, a column can have more than one constraint. Just write
the constraints one after another:
CREATE TABLE products (
product_no integer NOT NULL,
name text NOT NULL,
price numeric NOT NULL CHECK (price > 0)
);
The order doesn't matter. It does not necessarily determine in which
order the constraints are checked.
The NOT NULL constraint has an inverse: the
NULL constraint. This does not mean that the
column must be null, which would surely be useless. Instead, this
simply selects the default behavior that the column might be null.
The NULL constraint is not present in the SQL
standard and should not be used in portable applications. (It was
only added to PostgreSQL to be
compatible with some other database systems.) Some users, however,
like it because it makes it easy to toggle the constraint in a
script file. For example, you could start with:
CREATE TABLE products (
product_no integer NULL,
name text NULL,
price numeric NULL
);
and then insert the NOT key word where desired.
In most database designs the majority of columns should be marked
not null.
Unique Constraints
Unique constraints ensure that the data contained in a column or a
group of columns is unique with respect to all the rows in the
table. The syntax is:
CREATE TABLE products (
product_no integer UNIQUE,
name text,
price numeric
);
when written as a column constraint, and:
CREATE TABLE products (
product_no integer,
name text,
price numeric,
UNIQUE (product_no)
);
when written as a table constraint.
If a unique constraint refers to a group of columns, the columns
are listed separated by commas:
CREATE TABLE example (
a integer,
b integer,
c integer,
UNIQUE (a, c)
);
This specifies that the combination of values in the indicated columns
is unique across the whole table, though any one of the columns
need not be (and ordinarily isn't) unique.
You can assign your own name for a unique constraint, in the usual way:
CREATE TABLE products (
product_no integer CONSTRAINT must_be_different UNIQUE,
name text,
price numeric
);
Adding a unique constraint will automatically create a unique btree
index on the column or group of columns used in the constraint.
In general, a unique constraint is violated when there is more than
one row in the table where the values of all of the
columns included in the constraint are equal.
However, two null values are not considered equal in this
comparison. That means even in the presence of a
unique constraint it is possible to store duplicate
rows that contain a null value in at least one of the constrained
columns. This behavior conforms to the SQL standard, but we have
heard that other SQL databases might not follow this rule. So be
careful when developing applications that are intended to be
portable.
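To illustrate, with the single-column UNIQUE constraint on product_no shown above, both of the following rows are accepted in PostgreSQL, since the two null values are not considered equal:
INSERT INTO products (product_no, name) VALUES (NULL, 'Cheese');
INSERT INTO products (product_no, name) VALUES (NULL, 'Bread'); -- not a violation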
Primary Keys
Technically, a primary key constraint is simply a combination of a
unique constraint and a not-null constraint. So, the following
two table definitions accept the same data:
CREATE TABLE products (
product_no integer UNIQUE NOT NULL,
name text,
price numeric
);
CREATE TABLE products (
product_no integer PRIMARY KEY,
name text,
price numeric
);
Primary keys can also constrain more than one column; the syntax
is similar to unique constraints:
CREATE TABLE example (
a integer,
b integer,
c integer,
PRIMARY KEY (a, c)
);
A primary key indicates that a column or group of columns can be
used as a unique identifier for rows in the table. (This is a
direct consequence of the definition of a primary key. Note that
a unique constraint does not, by itself, provide a unique identifier
because it does not exclude null values.) This is useful both for
documentation purposes and for client applications. For example,
a GUI application that allows modifying row values probably needs
to know the primary key of a table to be able to identify rows
uniquely.
Adding a primary key will automatically create a unique btree index
on the column or group of columns used in the primary key.
A table can have at most one primary key. (There can be any number
of unique and not-null constraints, which are functionally the same
thing, but only one can be identified as the primary key.)
Relational database theory
dictates that every table must have a primary key. This rule is
not enforced by PostgreSQL, but it is
usually best to follow it.
Foreign Keys
A foreign key constraint specifies that the values in a column (or
a group of columns) must match the values appearing in some row
of another table.
We say this maintains the referential
integrity between two related tables.
Say you have the product table that we have used several times already:
CREATE TABLE products (
product_no integer PRIMARY KEY,
name text,
price numeric
);
Let's also assume you have a table storing orders of those
products. We want to ensure that the orders table only contains
orders of products that actually exist. So we define a foreign
key constraint in the orders table that references the products
table:
CREATE TABLE orders (
order_id integer PRIMARY KEY,
product_no integer REFERENCES products (product_no),
quantity integer
);
Now it is impossible to create orders with
product_no entries that do not appear in the
products table.
We say that in this situation the orders table is the
referencing table and the products table is
the referenced table. Similarly, there are
referencing and referenced columns.
You can also shorten the above command to:
CREATE TABLE orders (
order_id integer PRIMARY KEY,
product_no integer REFERENCES products,
quantity integer
);
because in absence of a column list the primary key of the
referenced table is used as the referenced column(s).
A foreign key can also constrain and reference a group of columns.
As usual, it then needs to be written in table constraint form.
Here is a contrived syntax example:
CREATE TABLE t1 (
a integer PRIMARY KEY,
b integer,
c integer,
FOREIGN KEY (b, c) REFERENCES other_table (c1, c2)
);
Of course, the number and type of the constrained columns need to
match the number and type of the referenced columns.
You can assign your own name for a foreign key constraint,
in the usual way.
A table can contain more than one foreign key constraint. This is
used to implement many-to-many relationships between tables. Say
you have tables about products and orders, but now you want to
allow one order to contain possibly many products (which the
structure above did not allow). You could use this table structure:
CREATE TABLE products (
product_no integer PRIMARY KEY,
name text,
price numeric
);
CREATE TABLE orders (
order_id integer PRIMARY KEY,
shipping_address text,
...
);
CREATE TABLE order_items (
product_no integer REFERENCES products,
order_id integer REFERENCES orders,
quantity integer,
PRIMARY KEY (product_no, order_id)
);
Notice that the primary key overlaps with the foreign keys in
the last table.
We know that the foreign keys disallow creation of orders that
do not relate to any products. But what if a product is removed
after an order is created that references it? SQL allows you to
handle that as well. Intuitively, we have a few options:
Disallow deleting a referenced product
Delete the orders as well
Something else?
To illustrate this, let's implement the following policy on the
many-to-many relationship example above: when someone wants to
remove a product that is still referenced by an order (via
order_items), we disallow it. If someone
removes an order, the order items are removed as well:
CREATE TABLE products (
product_no integer PRIMARY KEY,
name text,
price numeric
);
CREATE TABLE orders (
order_id integer PRIMARY KEY,
shipping_address text,
...
);
CREATE TABLE order_items (
product_no integer REFERENCES products ON DELETE RESTRICT,
order_id integer REFERENCES orders ON DELETE CASCADE,
quantity integer,
PRIMARY KEY (product_no, order_id)
);
Restricting and cascading deletes are the two most common options.
RESTRICT prevents deletion of a
referenced row. NO ACTION means that if any
referencing rows still exist when the constraint is checked, an error
is raised; this is the default behavior if you do not specify anything.
(The essential difference between these two choices is that
NO ACTION allows the check to be deferred until
later in the transaction, whereas RESTRICT does not.)
CASCADE specifies that when a referenced row is deleted,
row(s) referencing it should be automatically deleted as well.
There are two other options:
SET NULL and SET DEFAULT.
These cause the referencing columns to be set to nulls or default
values, respectively, when the referenced row is deleted.
Note that these do not excuse you from observing any constraints.
For example, if an action specifies SET DEFAULT
but the default value would not satisfy the foreign key, the
operation will fail.
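As an illustrative sketch (not part of the running example), a variant of order_items with a surrogate key could record deleted products as nulls instead of blocking the deletion; note that product_no must be taken out of the primary key so that it is allowed to become null, and item_id is a hypothetical column introduced for that purpose:
CREATE TABLE order_items (
    item_id integer PRIMARY KEY,   -- hypothetical surrogate key
    product_no integer REFERENCES products ON DELETE SET NULL,
    order_id integer REFERENCES orders ON DELETE CASCADE,
    quantity integer
);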
Analogous to ON DELETE there is also
ON UPDATE which is invoked when a referenced
column is changed (updated). The possible actions are the same.
Since a DELETE of a row from the referenced table
or an UPDATE of a referenced column will require
a scan of the referencing table for rows matching the old value, it
is often a good idea to index the referencing columns. Because this
is not always needed, and there are many choices available on how
to index, declaration of a foreign key constraint does not
automatically create an index on the referencing columns.
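For the order_items table above, whose primary key already covers product_no as its leading column, an index on the other referencing column might look like this (the index name is arbitrary):
CREATE INDEX order_items_order_id_idx ON order_items (order_id);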
More information about updating and deleting data is in the chapter on data manipulation.
Finally, we should mention that a foreign key must reference
columns that either are a primary key or form a unique constraint.
If the foreign key references a unique constraint, there are some
additional possibilities regarding how null values are matched.
These are explained in the reference documentation for CREATE TABLE.
Exclusion Constraints
Exclusion constraints ensure that if any two rows are compared on
the specified columns or expressions using the specified operators,
at least one of these operator comparisons will return false or null.
The syntax is:
CREATE TABLE circles (
c circle,
EXCLUDE USING gist (c WITH &&)
);
See also CREATE TABLE ... CONSTRAINT ... EXCLUDE for details.
Adding an exclusion constraint will automatically create an index
of the type specified in the constraint declaration.
System Columns
Every table has several system columns that are
implicitly defined by the system. Therefore, these names cannot be
used as names of user-defined columns. (Note that these
restrictions are separate from whether the name is a key word or
not; quoting a name will not allow you to escape these
restrictions.) You do not really need to be concerned about these
columns; just know they exist.
oid
The object identifier (object ID) of a row. This column is only
present if the table was created using WITH
OIDS, or if the default_with_oids configuration
variable was set at the time. This column is of type
oid (same name as the column); see the section on object identifier types for more information about the type.
tableoid
The OID of the table containing this row. This column is
particularly handy for queries that select from inheritance
hierarchies (see the section on inheritance below), since without it,
it's difficult to tell which individual table a row came from. The
tableoid can be joined against the
oid column of
pg_class to obtain the table name.
xmin
The identity (transaction ID) of the inserting transaction for
this row version. (A row version is an individual state of a
row; each update of a row creates a new row version for the same
logical row.)
cmin
The command identifier (starting at zero) within the inserting
transaction.
xmax
The identity (transaction ID) of the deleting transaction, or
zero for an undeleted row version. It is possible for this column to
be nonzero in a visible row version. That usually indicates that the
deleting transaction hasn't committed yet, or that an attempted
deletion was rolled back.
cmax
The command identifier within the deleting transaction, or zero.
ctid
The physical location of the row version within its table. Note that
although the ctid can be used to
locate the row version very quickly, a row's
ctid will change if it is
updated or moved by VACUUM FULL. Therefore
ctid is useless as a long-term row
identifier. The OID, or even better a user-defined serial
number, should be used to identify logical rows.
OIDs are 32-bit quantities and are assigned from a single
cluster-wide counter. In a large or long-lived database, it is
possible for the counter to wrap around. Hence, it is bad
practice to assume that OIDs are unique, unless you take steps to
ensure that this is the case. If you need to identify the rows in
a table, using a sequence generator is strongly recommended.
However, OIDs can be used as well, provided that a few additional
precautions are taken:
A unique constraint should be created on the OID column of each
table for which the OID will be used to identify rows. When such
a unique constraint (or unique index) exists, the system takes
care not to generate an OID matching an already-existing row.
(Of course, this is only possible if the table contains fewer
than 2^32 (4 billion) rows, and in practice the
table size had better be much less than that, or performance
might suffer.)
OIDs should never be assumed to be unique across tables; use
the combination of tableoid and row OID if you
need a database-wide identifier.
Of course, the tables in question must be created WITH
OIDS. As of PostgreSQL 8.1,
WITHOUT OIDS is the default.
Transaction identifiers are also 32-bit quantities. In a
long-lived database it is possible for transaction IDs to wrap
around. This is not a fatal problem given appropriate maintenance
procedures; see the chapter on routine database maintenance for details. It is
unwise, however, to depend on the uniqueness of transaction IDs
over the long term (more than one billion transactions).
Command identifiers are also 32-bit quantities. This creates a hard limit
of 2^32 (4 billion) SQL commands
within a single transaction. In practice this limit is not a
problem — note that the limit is on the number of
SQL commands, not the number of rows processed.
Also, as of PostgreSQL 8.3, only commands
that actually modify the database contents will consume a command
identifier.
Modifying Tables
When you create a table and you realize that you made a mistake, or
the requirements of the application change, you can drop the
table and create it again. But this is not a convenient option if
the table is already filled with data, or if the table is
referenced by other database objects (for instance a foreign key
constraint). Therefore PostgreSQL
provides a family of commands to make modifications to existing
tables. Note that this is conceptually distinct from altering
the data contained in the table: here we are interested in altering
the definition, or structure, of the table.
You can:
Add columns
Remove columns
Add constraints
Remove constraints
Change default values
Change column data types
Rename columns
Rename tables
All these actions are performed using the ALTER TABLE
command, whose reference page contains details beyond those given
here.
Adding a Column
To add a column, use a command like:
ALTER TABLE products ADD COLUMN description text;
The new column is initially filled with whatever default
value is given (null if you don't specify a DEFAULT clause).
You can also define constraints on the column at the same time,
using the usual syntax:
ALTER TABLE products ADD COLUMN description text CHECK (description <> '');
In fact all the options that can be applied to a column description
in CREATE TABLE can be used here. Keep in mind however
that the default value must satisfy the given constraints, or the
ADD will fail. Alternatively, you can add
constraints later (see below) after you've filled in the new column
correctly.
Adding a column with a default requires updating each row of the
table (to store the new column value). However, if no default is
specified, PostgreSQL is able to avoid
the physical update. So if you intend to fill the column with
mostly nondefault values, it's best to add the column with no default,
insert the correct values using UPDATE, and then add any
desired default as described below.
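A sketch of that sequence, using a placeholder value where an application would compute the real per-row data:
ALTER TABLE products ADD COLUMN description text;
UPDATE products SET description = 'no description yet';  -- fill in real values here
ALTER TABLE products ALTER COLUMN description SET DEFAULT 'none';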
Removing a Column
To remove a column, use a command like:
ALTER TABLE products DROP COLUMN description;
Whatever data was in the column disappears. Table constraints involving
the column are dropped, too. However, if the column is referenced by a
foreign key constraint of another table,
PostgreSQL will not silently drop that
constraint. You can authorize dropping everything that depends on
the column by adding CASCADE:
ALTER TABLE products DROP COLUMN description CASCADE;
See the section on dependency tracking for a description of the general
mechanism behind this.
Adding a Constraint
To add a constraint, the table constraint syntax is used. For example:
ALTER TABLE products ADD CHECK (name <> '');
ALTER TABLE products ADD CONSTRAINT some_name UNIQUE (product_no);
ALTER TABLE products ADD FOREIGN KEY (product_group_id) REFERENCES product_groups;
To add a not-null constraint, which cannot be written as a table
constraint, use this syntax:
ALTER TABLE products ALTER COLUMN product_no SET NOT NULL;
The constraint will be checked immediately, so the table data must
satisfy the constraint before it can be added.
Removing a Constraint
To remove a constraint you need to know its name. If you gave it
a name then that's easy. Otherwise the system assigned a
generated name, which you need to find out. The
psql command \d tablename can be helpful
here; other interfaces might also provide a way to inspect table
details. Then the command is:
ALTER TABLE products DROP CONSTRAINT some_name;
(If you are dealing with a generated constraint name like $2,
don't forget that you'll need to double-quote it to make it a valid
identifier.)
As with dropping a column, you need to add CASCADE if you
want to drop a constraint that something else depends on. An example
is that a foreign key constraint depends on a unique or primary key
constraint on the referenced column(s).
This works the same for all constraint types except not-null
constraints. To drop a not-null constraint use:
ALTER TABLE products ALTER COLUMN product_no DROP NOT NULL;
(Recall that not-null constraints do not have names.)
Changing a Column's Default Value
To set a new default for a column, use a command like:
ALTER TABLE products ALTER COLUMN price SET DEFAULT 7.77;
Note that this doesn't affect any existing rows in the table; it
just changes the default for future INSERT commands.
To remove any default value, use:
ALTER TABLE products ALTER COLUMN price DROP DEFAULT;
This is effectively the same as setting the default to null.
As a consequence, it is not an error
to drop a default where one hadn't been defined, because the
default is implicitly the null value.
Changing a Column's Data Type
To convert a column to a different data type, use a command like:
ALTER TABLE products ALTER COLUMN price TYPE numeric(10,2);
This will succeed only if each existing entry in the column can be
converted to the new type by an implicit cast. If a more complex
conversion is needed, you can add a USING clause that
specifies how to compute the new values from the old.
PostgreSQL will attempt to convert the column's
default value (if any) to the new type, as well as any constraints
that involve the column. But these conversions might fail, or might
produce surprising results. It's often best to drop any constraints
on the column before altering its type, and then add back suitably
modified constraints afterwards.
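For instance, a hypothetical migration that converts the price column to whole cents needs an explicit USING expression, since numeric values are not implicitly cast to integer:
ALTER TABLE products ALTER COLUMN price TYPE integer USING (price * 100)::integer;
The USING expression is evaluated for each existing row to produce the new column value.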
Renaming a Column
To rename a column:
ALTER TABLE products RENAME COLUMN product_no TO product_number;
Renaming a Table
To rename a table:
ALTER TABLE products RENAME TO items;
Privileges
When you create a database object, you become its owner. By
default, only the owner of an object can do anything with the
object. In order to allow other users to use it,
privileges must be granted. (However,
users that have the superuser attribute can always
access any object.)
There are several different privileges: SELECT,
INSERT, UPDATE, DELETE,
TRUNCATE, REFERENCES, TRIGGER,
CREATE, CONNECT, TEMPORARY,
EXECUTE, and USAGE.
The privileges applicable to a particular
object vary depending on the object's type (table, function, etc).
For complete information on the different types of privileges
supported by PostgreSQL, refer to the
GRANT reference
page. The following sections and chapters will also show you how
those privileges are used.
The right to modify or destroy an object is always the privilege of
the owner only.
To change the owner of a table, index, sequence, or view, use the
ALTER TABLE command. There are corresponding ALTER commands for
other object types.
To assign privileges, the GRANT command is
used. For example, if joe is an existing user, and
accounts is an existing table, the privilege to
update the table can be granted with:
GRANT UPDATE ON accounts TO joe;
Writing ALL in place of a specific privilege grants all
privileges that are relevant for the object type.
The special user name PUBLIC can
be used to grant a privilege to every user on the system. Also,
group roles can be set up to help manage privileges when
there are many users of a database — for details see
the chapter on database roles.
To revoke a privilege, use the fittingly named
REVOKE command:
REVOKE ALL ON accounts FROM PUBLIC;
The special privileges of the object owner (i.e., the right to do
DROP, GRANT, REVOKE, etc.)
are always implicit in being the owner,
and cannot be granted or revoked. But the object owner can choose
to revoke his own ordinary privileges, for example to make a
table read-only for himself as well as others.
Ordinarily, only the object's owner (or a superuser) can grant or
revoke privileges on an object. However, it is possible to grant a
privilege with grant option, which gives the recipient
the right to grant it in turn to others. If the grant option is
subsequently revoked then all who received the privilege from that
recipient (directly or through a chain of grants) will lose the
privilege. For details see the GRANT and REVOKE
reference pages.
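For example, with the accounts table and user joe from above, a grant with grant option and its later revocation could look like this:
GRANT UPDATE ON accounts TO joe WITH GRANT OPTION;
REVOKE GRANT OPTION FOR UPDATE ON accounts FROM joe CASCADE;
The CASCADE in the REVOKE removes the privilege from anyone who received it through joe.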
Schemas
A PostgreSQL database cluster
contains one or more named databases. Users and groups of users are
shared across the entire cluster, but no other data is shared across
databases. Any given client connection to the server can access
only the data in a single database, the one specified in the connection
request.
Users of a cluster do not necessarily have the privilege to access every
database in the cluster. Sharing of user names means that there
cannot be different users named, say, joe in two databases
in the same cluster; but the system can be configured to allow
joe access to only some of the databases.
A database contains one or more named schemas, which
in turn contain tables. Schemas also contain other kinds of named
objects, including data types, functions, and operators. The same
object name can be used in different schemas without conflict; for
example, both schema1 and myschema can
contain tables named mytable. Unlike databases,
schemas are not rigidly separated: a user can access objects in any
of the schemas in the database he is connected to, if he has
privileges to do so.
There are several reasons why one might want to use schemas:
To allow many users to use one database without interfering with
each other.
To organize database objects into logical groups to make them
more manageable.
Third-party applications can be put into separate schemas so
they do not collide with the names of other objects.
Schemas are analogous to directories at the operating system level,
except that schemas cannot be nested.
Creating a Schema
To create a schema, use the CREATE SCHEMA
command. Give the schema a name
of your choice. For example:
CREATE SCHEMA myschema;
To create or access objects in a schema, write a
qualified name consisting of the schema name and
table name separated by a dot:
schema.table
This works anywhere a table name is expected, including the table
modification commands and the data access commands discussed in
the following chapters.
(For brevity we will speak of tables only, but the same ideas apply
to other kinds of named objects, such as types and functions.)
Actually, the even more general syntax
database.schema.table
can be used too, but at present this is just for pro
forma compliance with the SQL standard. If you write a database name,
it must be the same as the database you are connected to.
So to create a table in the new schema, use:
CREATE TABLE myschema.mytable (
...
);
To drop a schema if it's empty (all objects in it have been
dropped), use:
DROP SCHEMA myschema;
To drop a schema including all contained objects, use:
DROP SCHEMA myschema CASCADE;
See the section on dependency tracking for a description of the general
mechanism behind this.
Often you will want to create a schema owned by someone else
(since this is one of the ways to restrict the activities of your
users to well-defined namespaces). The syntax for that is:
CREATE SCHEMA schemaname AUTHORIZATION username;
You can even omit the schema name, in which case the schema name
will be the same as the user name. See the usage patterns described below for how this can be useful.
Schema names beginning with pg_ are reserved for
system purposes and cannot be created by users.
The Public Schema
In the previous sections we created tables without specifying any
schema names. By default such tables (and other objects) are
automatically put into a schema named public. Every new
database contains such a schema. Thus, the following are equivalent:
CREATE TABLE products ( ... );
and:
CREATE TABLE public.products ( ... );
The Schema Search Path
Qualified names are tedious to write, and it's often best not to
wire a particular schema name into applications anyway. Therefore
tables are often referred to by unqualified names,
which consist of just the table name. The system determines which table
is meant by following a search path, which is a list
of schemas to look in. The first matching table in the search path
is taken to be the one wanted. If there is no match in the search
path, an error is reported, even if matching table names exist
in other schemas in the database.
The first schema named in the search path is called the current schema.
Aside from being the first schema searched, it is also the schema in
which new tables will be created if the CREATE TABLE
command does not specify a schema name.
To show the current search path, use the following command:
SHOW search_path;
In the default setup this returns:
search_path
--------------
"$user",public
The first element specifies that a schema with the same name as
the current user is to be searched. If no such schema exists,
the entry is ignored. The second element refers to the
public schema that we have seen already.
The first schema in the search path that exists is the default
location for creating new objects. That is the reason that by
default objects are created in the public schema. When objects
are referenced in any other context without schema qualification
(table modification, data modification, or query commands) the
search path is traversed until a matching object is found.
Therefore, in the default configuration, any unqualified access
again can only refer to the public schema.
To put our new schema in the path, we use:
SET search_path TO myschema,public;
(We omit the $user here because we have no
immediate need for it.) And then we can access the table without
schema qualification:
DROP TABLE mytable;
Also, since myschema is the first element in
the path, new objects would by default be created in it.
We could also have written:
SET search_path TO myschema;
Then we no longer have access to the public schema without
explicit qualification. There is nothing special about the public
schema except that it exists by default. It can be dropped, too.
See also the description of the search_path configuration parameter
for other ways to manipulate the schema search path.
The search path works in the same way for data type names, function names,
and operator names as it does for table names. Data type and function
names can be qualified in exactly the same way as table names. If you
need to write a qualified operator name in an expression, there is a
special provision: you must write
OPERATOR(schema.operator)
This is needed to avoid syntactic ambiguity. An example is:
SELECT 3 OPERATOR(pg_catalog.+) 4;
In practice one usually relies on the search path for operators,
so as not to have to write anything so ugly as that.
Schemas and Privileges
By default, users cannot access any objects in schemas they do not
own. To allow that, the owner of the schema must grant the
USAGE privilege on the schema. To allow users
to make use of the objects in the schema, additional privileges
might need to be granted, as appropriate for the object.
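For example, to let user joe read a table in myschema (reusing names from earlier examples), two grants are needed:
GRANT USAGE ON SCHEMA myschema TO joe;
GRANT SELECT ON myschema.mytable TO joe;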
A user can also be allowed to create objects in someone else's
schema. To allow that, the CREATE privilege on
the schema needs to be granted. Note that by default, everyone
has CREATE and USAGE privileges on
the schema
public. This allows all users that are able to
connect to a given database to create objects in its
public schema. If you do
not want to allow that, you can revoke that privilege:
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
(The first public is the schema, the second
public means every user. In the
first sense it is an identifier, in the second sense it is a
key word, hence the different capitalization; recall the
guidelines from the chapter on SQL syntax.)
The System Catalog Schema
In addition to public and user-created schemas, each
database contains a pg_catalog schema, which contains
the system tables and all the built-in data types, functions, and
operators. pg_catalog is always effectively part of
the search path. If it is not named explicitly in the path then
it is implicitly searched before searching the path's
schemas. This ensures that built-in names will always be
findable. However, you can explicitly place
pg_catalog at the end of your search path if you
prefer to have user-defined names override built-in names.
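For example, a search path that prefers user-defined names could be set like this:
SET search_path TO myschema, public, pg_catalog;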
In PostgreSQL versions before 7.3,
table names beginning with pg_ were reserved. This is
no longer true: you can create such a table name if you wish, in
any non-system schema. However, it's best to continue to avoid
such names, to ensure that you won't suffer a conflict if some
future version defines a system table named the same as your
table. (With the default search path, an unqualified reference to
your table name would then be resolved as the system table instead.)
System tables will continue to follow the convention of having
names beginning with pg_, so that they will not
conflict with unqualified user-table names so long as users avoid
the pg_ prefix.
Usage Patterns
Schemas can be used to organize your data in many ways. There are
a few usage patterns that are recommended and are easily supported by
the default configuration:
If you do not create any schemas then all users access the
public schema implicitly. This simulates the situation where
schemas are not available at all. This setup is mainly
recommended when there is only a single user or a few cooperating
users in a database. This setup also allows smooth transition
from the non-schema-aware world.
You can create a schema for each user with the same name as
that user. Recall that the default search path starts with
$user, which resolves to the user name.
Therefore, if each user has a separate schema, they access their
own schemas by default.
If you use this setup then you might also want to revoke access
to the public schema (or drop it altogether), so users are
truly constrained to their own schemas.
To install shared applications (tables to be used by everyone,
additional functions provided by third parties, etc.), put them
into separate schemas. Remember to grant appropriate
privileges to allow the other users to access them. Users can
then refer to these additional objects by qualifying the names
with a schema name, or they can put the additional schemas into
their search path, as they choose.
Portability
In the SQL standard, the notion of objects in the same schema
being owned by different users does not exist. Moreover, some
implementations do not allow you to create schemas that have a
different name than their owner. In fact, the concepts of schema
and user are nearly equivalent in a database system that
implements only the basic schema support specified in the
standard. Therefore, many users consider qualified names to
really consist of
username.tablename.
This is how PostgreSQL will effectively
behave if you create a per-user schema for every user.
Also, there is no concept of a public schema in the
SQL standard. For maximum conformance to the standard, you should
not use (perhaps even remove) the public schema.
Of course, some SQL database systems might not implement schemas
at all, or provide namespace support by allowing (possibly
limited) cross-database access. If you need to work with those
systems, then maximum portability would be achieved by not using
schemas at all.
Inheritance
PostgreSQL implements table inheritance,
which can be a useful tool for database designers. (SQL:1999 and
later define a type inheritance feature, which differs in many
respects from the features described here.)
Let's start with an example: suppose we are trying to build a data
model for cities. Each state has many cities, but only one
capital. We want to be able to quickly retrieve the capital city
for any particular state. This can be done by creating two tables,
one for state capitals and one for cities that are not
capitals. However, what happens when we want to ask for data about
a city, regardless of whether it is a capital or not? The
inheritance feature can help to resolve this problem. We define the
capitals table so that it inherits from
cities:
CREATE TABLE cities (
name text,
population float,
altitude int -- in feet
);
CREATE TABLE capitals (
state char(2)
) INHERITS (cities);
In this case, the capitals table inherits
all the columns of its parent table, cities. State
capitals also have an extra column, state, that shows
their state.
In PostgreSQL, a table can inherit from
zero or more other tables, and a query can reference either all
rows of a table or all rows of a table plus all of its descendant tables.
The latter behavior is the default.
For example, the following query finds the names of all cities,
including state capitals, that are located at an altitude over
500 feet:
SELECT name, altitude
FROM cities
WHERE altitude > 500;
Given the sample data from the PostgreSQL
tutorial, this returns:
name | altitude
-----------+----------
Las Vegas | 2174
Mariposa | 1953
Madison | 845
On the other hand, the following query finds all the cities that
are not state capitals and are situated at an altitude over 500 feet:
SELECT name, altitude
FROM ONLY cities
WHERE altitude > 500;
name | altitude
-----------+----------
Las Vegas | 2174
Mariposa | 1953
Here the ONLY keyword indicates that the query
should apply only to cities, and not any tables
below cities in the inheritance hierarchy. Many
of the commands that we have already discussed —
SELECT, UPDATE and
DELETE — support the
ONLY keyword.
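For instance, the following contrived command updates rows of cities itself but leaves capitals untouched:
UPDATE ONLY cities SET altitude = altitude + 1 WHERE name = 'Mariposa';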
In some cases you might wish to know which table a particular row
originated from. There is a system column called
tableoid in each table which can tell you the
originating table:
SELECT c.tableoid, c.name, c.altitude
FROM cities c
WHERE c.altitude > 500;
which returns:
tableoid | name | altitude
----------+-----------+----------
139793 | Las Vegas | 2174
139793 | Mariposa | 1953
139798 | Madison | 845
(If you try to reproduce this example, you will probably get
different numeric OIDs.) By doing a join with
pg_class you can see the actual table names:
SELECT p.relname, c.name, c.altitude
FROM cities c, pg_class p
WHERE c.altitude > 500 AND c.tableoid = p.oid;
which returns:
relname | name | altitude
----------+-----------+----------
cities | Las Vegas | 2174
cities | Mariposa | 1953
capitals | Madison | 845
Inheritance does not automatically propagate data from
INSERT or COPY commands to
other tables in the inheritance hierarchy. In our example, the
following INSERT statement will fail:
INSERT INTO cities (name, population, altitude, state)
VALUES ('New York', NULL, NULL, 'NY');
We might hope that the data would somehow be routed to the
capitals table, but this does not happen:
INSERT always inserts into exactly the table
specified. In some cases it is possible to redirect the insertion
using a rule (see the chapter on the rule system). However that does not
help for the above case because the cities table
does not contain the column state, and so the
command will be rejected before the rule can be applied.
All check constraints and not-null constraints on a parent table are
automatically inherited by its children. Other types of constraints
(unique, primary key, and foreign key constraints) are not inherited.
A table can inherit from more than one parent table, in which case it has
the union of the columns defined by the parent tables. Any columns
declared in the child table's definition are added to these. If the
same column name appears in multiple parent tables, or in both a parent
table and the child's definition, then these columns are merged
so that there is only one such column in the child table. To be merged,
columns must have the same data types, else an error is raised. The
merged column will have copies of all the check constraints coming from
any one of the column definitions it came from, and will be marked not-null
if any of them are.
Table inheritance is typically established when the child table is
created, using the INHERITS clause of the
CREATE TABLE statement.
Alternatively, a table which is already defined in a compatible way can
have a new parent relationship added, using the INHERIT
variant of ALTER TABLE.
To do this the new child table must already include columns with
the same names and types as the columns of the parent. It must also include
check constraints with the same names and check expressions as those of the
parent. Similarly an inheritance link can be removed from a child using the
NO INHERIT variant of ALTER TABLE.
Dynamically adding and removing inheritance links like this can be useful
when the inheritance relationship is being used for table
partitioning (see the section on partitioning below).
One convenient way to create a compatible table that will later be made
a new child is to use the LIKE clause in CREATE
TABLE. This creates a new table with the same columns as
the source table. If there are any CHECK
constraints defined on the source table, the INCLUDING
CONSTRAINTS option to LIKE should be
specified, as the new child must have constraints matching the parent
to be considered compatible.
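A sketch of that approach, using the cities table from the earlier example (cities_part is a hypothetical name):
CREATE TABLE cities_part (LIKE cities INCLUDING CONSTRAINTS);
ALTER TABLE cities_part INHERIT cities;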
A parent table cannot be dropped while any of its children remain. Neither
can columns or check constraints of child tables be dropped or altered
if they are inherited
from any parent tables. If you wish to remove a table and all of its
descendants, one easy way is to drop the parent table with the
CASCADE option.
ALTER TABLE will
propagate any changes in column data definitions and check
constraints down the inheritance hierarchy. Again, dropping
columns that are depended on by other tables is only possible when using
the CASCADE option. ALTER
TABLE follows the same rules for duplicate column merging
and rejection that apply during CREATE TABLE.
Note how table access permissions are handled. Querying a parent
table can automatically access data in child tables without further
access privilege checking. This preserves the appearance that the
data is (also) in the parent table. Accessing the child tables
directly is, however, not automatically allowed and would require
further privileges to be granted.
Caveats
Note that not all SQL commands are able to work on
inheritance hierarchies. Commands that are used for data querying,
data modification, or schema modification
(e.g., SELECT, UPDATE, DELETE,
most variants of ALTER TABLE, but
not INSERT and ALTER TABLE ...
RENAME) typically default to including child tables and
support the ONLY notation to exclude them.
Commands that do database maintenance and tuning
(e.g., REINDEX, VACUUM)
typically only work on individual, physical tables and do not
support recursing over inheritance hierarchies. The respective
behavior of each individual command is documented in its
reference page.
A serious limitation of the inheritance feature is that indexes (including
unique constraints) and foreign key constraints only apply to single
tables, not to their inheritance children. This is true on both the
referencing and referenced sides of a foreign key constraint. Thus,
in the terms of the above example:
If we declared cities.name to be
UNIQUE or a PRIMARY KEY, this would not stop the
capitals table from having rows with names duplicating
rows in cities. And those duplicate rows would by
default show up in queries from cities. In fact, by
default capitals would have no unique constraint at all,
and so could contain multiple rows with the same name.
You could add a unique constraint to capitals, but this
would not prevent duplication compared to cities.
Similarly, if we were to specify that
cities.name REFERENCES some
other table, this constraint would not automatically propagate to
capitals. In this case you could work around it by
manually adding the same REFERENCES constraint to
capitals.
Specifying that another table's column REFERENCES
cities(name) would allow the other table to contain city names, but
not capital names. There is no good workaround for this case.
These deficiencies will probably be fixed in some future release,
but in the meantime considerable care is needed in deciding whether
inheritance is useful for your application.
Deprecated
In releases of PostgreSQL prior to 7.1, the
default behavior was not to include child tables in queries. This was
found to be error prone and also in violation of the SQL
standard. You can get the pre-7.1 behavior by turning off the
sql_inheritance configuration
option.
Partitioning
PostgreSQL supports basic table
partitioning. This section describes why and how to implement
partitioning as part of your database design.
Overview
Partitioning refers to splitting what is logically one large table
into smaller physical pieces.
Partitioning can provide several benefits:
Query performance can be improved dramatically in certain situations,
particularly when most of the heavily accessed rows of the table are in a
single partition or a small number of partitions. The partitioning
substitutes for leading columns of indexes, reducing index size and
making it more likely that the heavily-used parts of the indexes
fit in memory.
When queries or updates access a large percentage of a single
partition, performance can be improved by taking advantage
of sequential scan of that partition instead of using an
index and random access reads scattered across the whole table.
Bulk loads and deletes can be accomplished by adding or removing
partitions, if that requirement is planned into the partitioning design.
ALTER TABLE is far faster than a bulk operation.
It also entirely avoids the VACUUM
overhead caused by a bulk DELETE.
Seldom-used data can be migrated to cheaper and slower storage media.
The benefits will normally be worthwhile only when a table would
otherwise be very large. The exact point at which a table will
benefit from partitioning depends on the application, although a
rule of thumb is that the size of the table should exceed the physical
memory of the database server.
Currently, PostgreSQL supports partitioning
via table inheritance. Each partition must be created as a child
table of a single parent table. The parent table itself is normally
empty; it exists just to represent the entire data set. You should be
familiar with inheritance (see the previous section) before
attempting to set up partitioning.
The following forms of partitioning can be implemented in
PostgreSQL:
Range Partitioning
The table is partitioned into ranges defined
by a key column or set of columns, with no overlap between
the ranges of values assigned to different partitions. For
example one might partition by date ranges, or by ranges of
identifiers for particular business objects.
List Partitioning
The table is partitioned by explicitly listing which key values
appear in each partition.
Implementing Partitioning
To set up a partitioned table, do the following:
Create the master table, from which all of the
partitions will inherit.
This table will contain no data. Do not define any check
constraints on this table, unless you intend them to
be applied equally to all partitions. There is no point
in defining any indexes or unique constraints on it, either.
Create several child tables that each inherit from
the master table. Normally, these tables will not add any columns
to the set inherited from the master.
We will refer to the child tables as partitions, though they
are in every way normal PostgreSQL tables.
Add table constraints to the partition tables to define the
allowed key values in each partition.
Typical examples would be:
CHECK ( x = 1 )
CHECK ( county IN ( 'Oxfordshire', 'Buckinghamshire', 'Warwickshire' ))
CHECK ( outletID >= 100 AND outletID < 200 )
Ensure that the constraints guarantee that there is no overlap
between the key values permitted in different partitions. A common
mistake is to set up range constraints like:
CHECK ( outletID BETWEEN 100 AND 200 )
CHECK ( outletID BETWEEN 200 AND 300 )
This is wrong since it is not clear which partition the key value
200 belongs in.
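Rewriting the constraints as half-open ranges, in the style shown
earlier, removes the ambiguity:
CHECK ( outletID >= 100 AND outletID < 200 )
CHECK ( outletID >= 200 AND outletID < 300 )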
Note that there is no difference in
syntax between range and list partitioning; those terms are
descriptive only.
For each partition, create an index on the key column(s),
as well as any other indexes you might want. (The key index is
not strictly necessary, but in most scenarios it is helpful.
If you intend the key values to be unique then you should
always create a unique or primary-key constraint for each
partition.)
Optionally, define a trigger or rule to redirect data inserted into
the master table to the appropriate partition.
Ensure that the constraint_exclusion
configuration parameter is not disabled in
postgresql.conf.
If it is, queries will not be optimized as desired.
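For example, you can inspect the current setting and, if
necessary, restore it for the session:
SHOW constraint_exclusion;
SET constraint_exclusion = partition;  -- the default setting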
For example, suppose we are constructing a database for a large
ice cream company. The company measures peak temperatures every
day as well as ice cream sales in each region. Conceptually,
we want a table like:
CREATE TABLE measurement (
    city_id         int not null,
    logdate         date not null,
    peaktemp        int,
    unitsales       int
);
We know that most queries will access just the last week's, month's or
quarter's data, since the main use of this table will be to prepare
online reports for management.
To reduce the amount of old data that needs to be stored, we
decide to keep only the most recent 3 years' worth of data. At the
beginning of each month we will remove the oldest month's data.
In this situation we can use partitioning to help us meet all of our
different requirements for the measurements table. Following the
steps outlined above, partitioning can be set up as follows:
The master table is the measurement table, declared
exactly as above.
Next we create one partition for each active month:
CREATE TABLE measurement_y2006m02 ( ) INHERITS (measurement);
CREATE TABLE measurement_y2006m03 ( ) INHERITS (measurement);
...
CREATE TABLE measurement_y2007m11 ( ) INHERITS (measurement);
CREATE TABLE measurement_y2007m12 ( ) INHERITS (measurement);
CREATE TABLE measurement_y2008m01 ( ) INHERITS (measurement);
Each of the partitions is a complete table in its own right,
but it inherits its definition from the
measurement table.
This solves one of our problems: deleting old data. Each
month, all we will need to do is perform a DROP
TABLE on the oldest child table and create a new
child table for the new month's data.
We must provide non-overlapping table constraints. Rather than
just creating the partition tables as above, the table creation
script should really be:
CREATE TABLE measurement_y2006m02 (
    CHECK ( logdate >= DATE '2006-02-01' AND logdate < DATE '2006-03-01' )
) INHERITS (measurement);
CREATE TABLE measurement_y2006m03 (
    CHECK ( logdate >= DATE '2006-03-01' AND logdate < DATE '2006-04-01' )
) INHERITS (measurement);
...
CREATE TABLE measurement_y2007m11 (
    CHECK ( logdate >= DATE '2007-11-01' AND logdate < DATE '2007-12-01' )
) INHERITS (measurement);
CREATE TABLE measurement_y2007m12 (
    CHECK ( logdate >= DATE '2007-12-01' AND logdate < DATE '2008-01-01' )
) INHERITS (measurement);
CREATE TABLE measurement_y2008m01 (
    CHECK ( logdate >= DATE '2008-01-01' AND logdate < DATE '2008-02-01' )
) INHERITS (measurement);
We probably need indexes on the key columns too:
CREATE INDEX measurement_y2006m02_logdate ON measurement_y2006m02 (logdate);
CREATE INDEX measurement_y2006m03_logdate ON measurement_y2006m03 (logdate);
...
CREATE INDEX measurement_y2007m11_logdate ON measurement_y2007m11 (logdate);
CREATE INDEX measurement_y2007m12_logdate ON measurement_y2007m12 (logdate);
CREATE INDEX measurement_y2008m01_logdate ON measurement_y2008m01 (logdate);
We choose not to add further indexes at this time.
We want our application to be able to say INSERT INTO
measurement ... and have the data be redirected into the
appropriate partition table. We can arrange that by attaching
a suitable trigger function to the master table.
If data will be added only to the latest partition, we can
use a very simple trigger function:
CREATE OR REPLACE FUNCTION measurement_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO measurement_y2008m01 VALUES (NEW.*);
    RETURN NULL;
END;
$$
LANGUAGE plpgsql;
After creating the function, we create a trigger which
calls the trigger function:
CREATE TRIGGER insert_measurement_trigger
    BEFORE INSERT ON measurement
    FOR EACH ROW EXECUTE PROCEDURE measurement_insert_trigger();
We must redefine the trigger function each month so that it always
points to the current partition. The trigger definition does
not need to be updated, however.
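For instance, when February 2008 arrives, the function body is
simply replaced so that it targets that month's partition
(measurement_y2008m02, created as shown below):
CREATE OR REPLACE FUNCTION measurement_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO measurement_y2008m02 VALUES (NEW.*);
    RETURN NULL;
END;
$$
LANGUAGE plpgsql;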
We might want to insert data and have the server automatically
locate the partition into which the row should be added. We
could do this with a more complex trigger function, for example:
CREATE OR REPLACE FUNCTION measurement_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
    IF ( NEW.logdate >= DATE '2006-02-01' AND
         NEW.logdate < DATE '2006-03-01' ) THEN
        INSERT INTO measurement_y2006m02 VALUES (NEW.*);
    ELSIF ( NEW.logdate >= DATE '2006-03-01' AND
            NEW.logdate < DATE '2006-04-01' ) THEN
        INSERT INTO measurement_y2006m03 VALUES (NEW.*);
    ...
    ELSIF ( NEW.logdate >= DATE '2008-01-01' AND
            NEW.logdate < DATE '2008-02-01' ) THEN
        INSERT INTO measurement_y2008m01 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'Date out of range. Fix the measurement_insert_trigger() function!';
    END IF;
    RETURN NULL;
END;
$$
LANGUAGE plpgsql;
The trigger definition is the same as before.
Note that each IF test must exactly match the
CHECK constraint for its partition.
While this function is more complex than the single-month case,
it doesn't need to be updated as often, since branches can be
added in advance of being needed.
In practice it might be best to check the newest partition first,
if most inserts go into that partition. For simplicity we have
shown the trigger's tests in the same order as in other parts
of this example.
As we can see, a complex partitioning scheme could require a
substantial amount of DDL. In the above example we would be
creating a new partition each month, so it might be wise to write a
script that generates the required DDL automatically.
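As a rough sketch of such a script, plain SQL can generate the
statements itself; this query merely prints one month's worth of
DDL per row, to be executed by hand or by a wrapper script (the
date range is illustrative):
SELECT 'CREATE TABLE measurement_y' || to_char(m, 'YYYY"m"MM')
    || ' ( CHECK ( logdate >= DATE ''' || to_char(m, 'YYYY-MM-DD')
    || ''' AND logdate < DATE ''' || to_char(m + interval '1 month', 'YYYY-MM-DD')
    || ''' )) INHERITS (measurement);'
FROM generate_series(DATE '2008-02-01', DATE '2008-12-01', interval '1 month') AS m;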
Managing Partitions
Normally the set of partitions established when initially
defining the table is not intended to remain static. It is
common to want to remove old partitions of data and periodically
add new partitions for new data. One of the most important
advantages of partitioning is precisely that it allows this
otherwise painful task to be executed nearly instantaneously by
manipulating the partition structure, rather than physically moving large
amounts of data around.
The simplest option for removing old data is to drop the partition
that is no longer necessary:
DROP TABLE measurement_y2006m02;
This can very quickly delete millions of records because it doesn't have
to individually delete every record.
Another option that is often preferable is to remove the partition from
the partitioned table but retain access to it as a table in its own
right:
ALTER TABLE measurement_y2006m02 NO INHERIT measurement;
This allows further operations to be performed on the data before
it is dropped. For example, this is often a useful time to back up
the data using COPY, pg_dump, or
similar tools. It might also be a useful time to aggregate data
into smaller formats, perform other data manipulations, or run
reports.
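For example, one might dump the detached partition to a flat file
before dropping it (the file path is purely illustrative, and
server-side COPY requires appropriate privileges):
COPY measurement_y2006m02 TO '/backup/measurement_y2006m02.csv' CSV;
DROP TABLE measurement_y2006m02;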
Similarly we can add a new partition to handle new data. We can create an
empty partition in the partitioned table just as the original partitions
were created above:
CREATE TABLE measurement_y2008m02 (
    CHECK ( logdate >= DATE '2008-02-01' AND logdate < DATE '2008-03-01' )
) INHERITS (measurement);
As an alternative, it is sometimes more convenient to create the
new table outside the partition structure, and make it a proper
partition later. This allows the data to be loaded, checked, and
transformed before it appears in the partitioned table:
CREATE TABLE measurement_y2008m02
    (LIKE measurement INCLUDING DEFAULTS INCLUDING CONSTRAINTS);
ALTER TABLE measurement_y2008m02 ADD CONSTRAINT y2008m02
    CHECK ( logdate >= DATE '2008-02-01' AND logdate < DATE '2008-03-01' );
\copy measurement_y2008m02 from 'measurement_y2008m02'
-- possibly some other data preparation work
ALTER TABLE measurement_y2008m02 INHERIT measurement;
Partitioning and Constraint Exclusion
Constraint exclusion is a query optimization technique
that improves performance for partitioned tables defined in the
fashion described above. As an example:
SET constraint_exclusion = on;
SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';
Without constraint exclusion, the above query would scan each of
the partitions of the measurement table. With constraint
exclusion enabled, the planner will examine the constraints of each
partition and try to prove that the partition need not
be scanned because it could not contain any rows meeting the query's
WHERE clause. When the planner can prove this, it
excludes the partition from the query plan.
You can use the EXPLAIN command to show the difference
between a plan with constraint_exclusion on and a plan
with it off. A typical unoptimized plan for this type of table setup is:
SET constraint_exclusion = off;
EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';
                                          QUERY PLAN
-----------------------------------------------------------------------------------------------
 Aggregate  (cost=158.66..158.68 rows=1 width=0)
   ->  Append  (cost=0.00..151.88 rows=2715 width=0)
         ->  Seq Scan on measurement  (cost=0.00..30.38 rows=543 width=0)
               Filter: (logdate >= '2008-01-01'::date)
         ->  Seq Scan on measurement_y2006m02 measurement  (cost=0.00..30.38 rows=543 width=0)
               Filter: (logdate >= '2008-01-01'::date)
         ->  Seq Scan on measurement_y2006m03 measurement  (cost=0.00..30.38 rows=543 width=0)
               Filter: (logdate >= '2008-01-01'::date)
...
         ->  Seq Scan on measurement_y2007m12 measurement  (cost=0.00..30.38 rows=543 width=0)
               Filter: (logdate >= '2008-01-01'::date)
         ->  Seq Scan on measurement_y2008m01 measurement  (cost=0.00..30.38 rows=543 width=0)
               Filter: (logdate >= '2008-01-01'::date)
Some or all of the partitions might use index scans instead of
full-table sequential scans, but the point here is that there
is no need to scan the older partitions at all to answer this query.
When we enable constraint exclusion, we get a significantly
cheaper plan that will deliver the same answer:
SET constraint_exclusion = on;
EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';
                                          QUERY PLAN
-----------------------------------------------------------------------------------------------
 Aggregate  (cost=63.47..63.48 rows=1 width=0)
   ->  Append  (cost=0.00..60.75 rows=1086 width=0)
         ->  Seq Scan on measurement  (cost=0.00..30.38 rows=543 width=0)
               Filter: (logdate >= '2008-01-01'::date)
         ->  Seq Scan on measurement_y2008m01 measurement  (cost=0.00..30.38 rows=543 width=0)
               Filter: (logdate >= '2008-01-01'::date)
Note that constraint exclusion is driven only by CHECK
constraints, not by the presence of indexes. Therefore it isn't
necessary to define indexes on the key columns. Whether an index
needs to be created for a given partition depends on whether you
expect that queries that scan the partition will generally scan
a large part of the partition or just a small part. An index will
be helpful in the latter case but not the former.
The default (and recommended) setting of
constraint_exclusion is actually neither
on nor off, but an intermediate setting
called partition, which causes the technique to be
applied only to queries that are likely to be working on partitioned
tables. The on setting causes the planner to examine
CHECK constraints in all queries, even simple ones that
are unlikely to benefit.
Alternative Partitioning Methods
A different approach to redirecting inserts into the appropriate
partition table is to set up rules, instead of a trigger, on the
master table. For example:
CREATE RULE measurement_insert_y2006m02 AS
ON INSERT TO measurement WHERE
    ( logdate >= DATE '2006-02-01' AND logdate < DATE '2006-03-01' )
DO INSTEAD
    INSERT INTO measurement_y2006m02 VALUES (NEW.*);
...
CREATE RULE measurement_insert_y2008m01 AS
ON INSERT TO measurement WHERE
    ( logdate >= DATE '2008-01-01' AND logdate < DATE '2008-02-01' )
DO INSTEAD
    INSERT INTO measurement_y2008m01 VALUES (NEW.*);
A rule has significantly more overhead than a trigger, but the overhead
is paid once per query rather than once per row, so this method might be
advantageous for bulk-insert situations. In most cases, however, the
trigger method will offer better performance.
Be aware that COPY ignores rules. If you want to
use COPY to insert data, you'll need to copy into the correct
partition table rather than into the master. COPY does fire
triggers, so you can use it normally if you use the trigger approach.
Another disadvantage of the rule approach is that there is no simple
way to force an error if the set of rules doesn't cover the insertion
date; the data will silently go into the master table instead.
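Since any misrouted rows are stored in the master table itself, a
periodic sanity check will reveal them:
SELECT count(*) FROM ONLY measurement;  -- should return 0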
Partitioning can also be arranged using a UNION ALL
view, instead of table inheritance. For example,
CREATE VIEW measurement AS
SELECT * FROM measurement_y2006m02
UNION ALL SELECT * FROM measurement_y2006m03
...
UNION ALL SELECT * FROM measurement_y2007m11
UNION ALL SELECT * FROM measurement_y2007m12
UNION ALL SELECT * FROM measurement_y2008m01;
However, the need to recreate the view adds an extra step to adding and
dropping individual partitions of the data set. In practice this
method has little to recommend it compared to using inheritance.
Caveats
The following caveats apply to partitioned tables:
There is no automatic way to verify that all of the
CHECK constraints are mutually
exclusive. It is safer to create code that generates
partitions and creates and/or modifies associated objects than
to write each by hand.
The schemes shown here assume that the partition key column(s)
of a row never change, or at least do not change enough to require
it to move to another partition. An UPDATE that attempts
to do that will fail because of the CHECK constraints.
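For example, an attempt to move a row into a different month fails
with a check-constraint violation (the exact constraint name
depends on how the constraint was declared):
UPDATE measurement SET logdate = DATE '2006-03-15'
    WHERE city_id = 1 AND logdate = DATE '2006-02-15';
-- ERROR:  new row for relation "measurement_y2006m02" violates check constraint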
If you need to handle such cases, you can put suitable update triggers
on the partition tables, but it makes management of the structure
much more complicated.
If you are using manual VACUUM or
ANALYZE commands, don't forget that
you need to run them on each partition individually. A command like:
ANALYZE measurement;
will only process the master table.
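Each partition has to be named explicitly instead:
ANALYZE measurement_y2008m01;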
The following caveats apply to constraint exclusion:
Constraint exclusion only works when the query's WHERE
clause contains constants. A parameterized query will not be
optimized, since the planner cannot know which partitions the
parameter value might select at run time. For the same reason,
stable functions such as CURRENT_DATE
must be avoided.
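For example, of the two queries below, only the first can benefit
from constraint exclusion, even though both may select the same
rows on a given day:
SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';  -- prunable
SELECT count(*) FROM measurement WHERE logdate >= CURRENT_DATE;       -- not prunable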
Keep the partitioning constraints simple, else the planner may not be
able to prove that partitions don't need to be visited. Use simple
equality conditions for list partitioning, or simple
range tests for range partitioning, as illustrated in the preceding
examples. A good rule of thumb is that partitioning constraints should
contain only comparisons of the partitioning column(s) to constants
using B-tree-indexable operators.
All constraints on all partitions of the master table are examined
during constraint exclusion, so large numbers of partitions are likely
to increase query planning time considerably. Partitioning using
these techniques will work well with up to perhaps a hundred partitions;
don't try to use many thousands of partitions.
Other Database Objects
Tables are the central objects in a relational database structure,
because they hold your data. But they are not the only objects
that exist in a database. Many other kinds of objects can be
created to make the use and management of the data more efficient
or convenient. They are not discussed in this chapter, but we give
you a list here so that you are aware of what is possible:
Views
Functions and operators
Data types and domains
Triggers and rewrite rules
Detailed information on
these topics appears later in this documentation.
Dependency Tracking
When you create complex database structures involving many tables
with foreign key constraints, views, triggers, functions, etc., you
implicitly create a net of dependencies between the objects.
For instance, a table with a foreign key constraint depends on the
table it references.
To ensure the integrity of the entire database structure,
PostgreSQL makes sure that you cannot
drop objects that other objects still depend on. For example,
attempting to drop the products table we had considered earlier, with the orders table depending on
it, would result in an error message such as this:
DROP TABLE products;
NOTICE: constraint orders_product_no_fkey on table orders depends on table products
ERROR: cannot drop table products because other objects depend on it
HINT: Use DROP ... CASCADE to drop the dependent objects too.
The error message contains a useful hint: if you do not want to
bother deleting all the dependent objects individually, you can run:
DROP TABLE products CASCADE;
and all the dependent objects will be removed. In this case, it
doesn't remove the orders table, it only removes the foreign key
constraint. (If you want to check what DROP ... CASCADE will do,
run DROP without CASCADE and read the NOTICE messages.)
All drop commands in PostgreSQL support
specifying CASCADE. Of course, the nature of
the possible dependencies varies with the type of the object. You
can also write RESTRICT instead of
CASCADE to get the default behavior, which is to
prevent the dropping of objects that other objects depend on.
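For example, these two commands behave identically in
PostgreSQL:
DROP TABLE products RESTRICT;
DROP TABLE products;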
According to the SQL standard, specifying either
RESTRICT or CASCADE is
required. No database system actually enforces that rule, but
whether the default behavior is RESTRICT or
CASCADE varies across systems.
Foreign key constraint dependencies and serial column dependencies
from PostgreSQL versions prior to 7.3
are not maintained or created during the
upgrade process. All other dependency types will be properly
created during an upgrade from a pre-7.3 database.