Server Setup and Operation
This chapter discusses how to set up and run the database server
and its interactions with the operating system.
The PostgreSQL User Account
As with any server daemon that is accessible to the outside world,
it is advisable to run PostgreSQL under a
separate user account. This user account should only own the data
that is managed by the server, and should not be shared with other
daemons. (For example, using the user nobody is a bad
idea.) It is not advisable to install executables owned by this
user because compromised systems could then modify their own
binaries.
To add a Unix user account to your system, look for a command
useradd or adduser. The user
name postgres is often used, and is assumed
throughout this book, but you can use another name if you like.
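On most Linux systems, for example, the account can be created with useradd; the flags shown below are illustrative (they follow the shadow-utils useradd) and may differ on your platform:

```shell
# Hypothetical invocation (run as root); adjust home directory and shell as needed.
#   useradd --system --home /usr/local/pgsql --shell /bin/bash postgres
# Afterwards, verify that the account exists:
id postgres 2>/dev/null || echo "postgres account not present"
```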
Creating a Database Cluster
Before you can do anything, you must initialize a database storage
area on disk. We call this a database cluster.
(SQL uses the term catalog cluster.) A
database cluster is a collection of databases that is managed by a
single instance of a running database server. After initialization, a
database cluster will contain a database named postgres,
which is meant as a default database for use by utilities, users and third
party applications. The database server itself does not require the
postgres database to exist, but many external utility
programs assume it exists. Another database created within each cluster
during initialization is called
template1. As the name suggests, this will be used
as a template for subsequently created databases; it should not be
used for actual work. (See the chapter on managing databases for
information about creating new databases within a cluster.)
In file system terms, a database cluster will be a single directory
under which all data will be stored. We call this the data
directory or data area. It is
completely up to you where you choose to store your data. There is no
default, although locations such as
/usr/local/pgsql/data or
/var/lib/pgsql/data are popular. To initialize a
database cluster, use the command initdb, which is
installed with PostgreSQL. The desired
file system location of your database cluster is indicated by the
-D option, for example:
$ initdb -D /usr/local/pgsql/data
Note that you must execute this command while logged into the
PostgreSQL user account, which is
described in the previous section.
As an alternative to the -D option, you can set
the environment variable PGDATA.
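For example, the variable can be exported once in the postgres account's shell profile; the path shown is illustrative:

```shell
# Illustrative: with PGDATA exported, initdb and postgres need no -D option.
export PGDATA=/usr/local/pgsql/data
# initdb          # would initialize $PGDATA
# postgres        # would start the server on $PGDATA
echo "$PGDATA"
```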
Alternatively, you can run initdb via
the pg_ctl program like so:
$ pg_ctl -D /usr/local/pgsql/data initdb
This may be more intuitive if you are
using pg_ctl for starting and stopping the
server (see below), so
that pg_ctl would be the sole command you use
for managing the database server instance.
initdb will attempt to create the directory you
specify if it does not already exist. It is likely that it will not
have the permission to do so (if you followed our advice and created
an unprivileged account). In that case you should create the
directory yourself (as root) and change the owner to be the
PostgreSQL user. Here is how this might
be done:
root# mkdir /usr/local/pgsql/data
root# chown postgres /usr/local/pgsql/data
root# su postgres
postgres$ initdb -D /usr/local/pgsql/data
initdb will refuse to run if the data directory
looks like it has already been initialized.
Because the data directory contains all the data stored in the
database, it is essential that it be secured from unauthorized
access. initdb therefore revokes access
permissions from everyone but the
PostgreSQL user.
However, while the directory contents are secure, the default
client authentication setup allows any local user to connect to the
database and even become the database superuser. If you do not
trust other local users, we recommend you use one of
initdb's -W, --pwprompt
or --pwfile options to assign a password to the
database superuser.
initdb also initializes the default
locale for the database cluster.
Normally, it will just take the locale settings in the environment
and apply them to the initialized database. It is possible to
specify a different locale for the database; more information about
that can be found in the chapter on localization. The default sort order used
within the particular database cluster is set by
initdb, and while you can create new databases using
different sort order, the order used in the template databases that initdb
creates cannot be changed without dropping and recreating them.
There is also a performance impact for using locales
other than C or POSIX. Therefore, it is
important to make this choice correctly the first time.
initdb also sets the default character set encoding
for the database cluster. Normally this should be chosen to match the
locale setting. For details see the chapter on character set support.
Network File Systems
Many installations create database clusters on network file systems.
Sometimes this is done directly via NFS, or by using a
Network Attached Storage (NAS) device that uses
NFS internally. PostgreSQL does nothing
special for NFS file systems, meaning it assumes
NFS behaves exactly like locally-connected drives
(DAS, Direct Attached Storage). If client and server
NFS implementations have non-standard semantics, this can
cause reliability problems.
Specifically, delayed (asynchronous) writes to the NFS
server can cause reliability problems; if possible, mount
NFS file systems synchronously (without caching) to avoid
this. Also, soft-mounting NFS is not recommended.
(Storage Area Networks (SAN) use a low-level
communication protocol rather than NFS.)
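For illustration, a synchronous hard mount might look like the following /etc/fstab entry; the server name, export path, and exact option set are hypothetical and should be checked against your NFS implementation's documentation:

```shell
# Illustrative /etc/fstab entry: hard-mounted, synchronous, attribute caching off.
# "nfs-server:/export/pgdata" is a placeholder for your NAS or NFS server.
#   nfs-server:/export/pgdata  /usr/local/pgsql/data  nfs  rw,hard,sync,noac  0 0
```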
Starting the Database Server
Before anyone can access the database, you must start the database
server. The database server program is called
postgres.
The postgres program must know where to
find the data it is supposed to use. This is done with the
-D option. Thus, the simplest way to start the
server is:
$ postgres -D /usr/local/pgsql/data
which will leave the server running in the foreground. This must be
done while logged into the PostgreSQL user
account. Without -D, the server will try to use
the data directory named by the environment variable PGDATA.
If that variable is not provided either, it will fail.
Normally it is better to start postgres in the
background. For this, use the usual Unix shell syntax:
$ postgres -D /usr/local/pgsql/data >logfile 2>&1 &
It is important to store the server's stdout and
stderr output somewhere, as shown above. It will help
for auditing purposes and to diagnose problems. (See the section on
error reporting and logging for a more thorough discussion of log
file handling.)
The postgres program also takes a number of other
command-line options. For more information, see the
postgres reference page
and the chapter on server configuration below.
This shell syntax can get tedious quickly. Therefore the wrapper
program
pg_ctl
is provided to simplify some tasks. For example:
pg_ctl start -l logfile
will start the server in the background and put the output into the
named log file. The -D option has the same meaning
here as for postgres. pg_ctl
is also capable of stopping the server.
Normally, you will want to start the database server when the
computer boots. Autostart scripts are operating-system-specific.
There are a few distributed with
PostgreSQL in the
contrib/start-scripts directory. Installing one will require
root privileges.
Different systems have different conventions for starting up daemons
at boot time. Many systems have a file
/etc/rc.local or
/etc/rc.d/rc.local. Others use
rc.d directories. Whatever you do, the server must be
run by the PostgreSQL user account
and not by root or any other user. Therefore you
probably should form your commands using su -c '...'
postgres. For example:
su -c 'pg_ctl start -D /usr/local/pgsql/data -l serverlog' postgres
Here are a few more operating-system-specific suggestions. (In each
case be sure to use the proper installation directory and user
name where we show generic values.)
For FreeBSD, look at the file
contrib/start-scripts/freebsd in the
PostgreSQL source distribution.
On OpenBSD, add the following lines
to the file /etc/rc.local:
if [ -x /usr/local/pgsql/bin/pg_ctl -a -x /usr/local/pgsql/bin/postgres ]; then
su - -c '/usr/local/pgsql/bin/pg_ctl start -l /var/postgresql/log -s' postgres
echo -n ' postgresql'
fi
On Linux systems either add
/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgsql/data
to /etc/rc.d/rc.local or look at the file
contrib/start-scripts/linux in the
PostgreSQL source distribution.
On NetBSD, either use the
FreeBSD or
Linux start scripts, depending on
preference.
On Solaris, create a file called
/etc/init.d/postgresql that contains
the following line:
su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgsql/data"
Then, create a symbolic link to it in /etc/rc3.d as
S99postgresql.
While the server is running, its
PID is stored in the file
postmaster.pid in the data directory. This is
used to prevent multiple server instances from
running in the same data directory and can also be used for
shutting down the server.
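The first line of postmaster.pid is the server's PID, so a shutdown script can read it directly; the sketch below simulates the file in a temporary directory rather than assuming a running server:

```shell
# Sketch: extract the PID from postmaster.pid and signal the server.
PGDATA=$(mktemp -d)                          # stand-in for the real data directory
printf '12345\n' > "$PGDATA/postmaster.pid"  # simulated file; the real one has more lines
PID=$(head -1 "$PGDATA/postmaster.pid")
echo "server PID is $PID"
# A real script would then request a smart shutdown with: kill -TERM "$PID"
```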
Server Start-up Failures
There are several common reasons the server might fail to
start. Check the server's log file, or start it by hand (without
redirecting standard output or standard error) and see what error
messages appear. Below we explain some of the most common error
messages in more detail.
LOG: could not bind IPv4 socket: Address already in use
HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
FATAL: could not create TCP/IP listen socket
This usually means just what it suggests: you tried to start
another server on the same port where one is already running.
However, if the kernel error message is not Address
already in use or some variant of that, there might
be a different problem. For example, trying to start a server
on a reserved port number might draw something like:
$ postgres -p 666
LOG: could not bind IPv4 socket: Permission denied
HINT: Is another postmaster already running on port 666? If not, wait a few seconds and retry.
FATAL: could not create TCP/IP listen socket
A message like:
FATAL: could not create shared memory segment: Invalid argument
DETAIL: Failed system call was shmget(key=5440001, size=4011376640, 03600).
probably means your kernel's limit on the size of shared memory is
smaller than the work area PostgreSQL
is trying to create (4011376640 bytes in this example). Or it could
mean that you do not have System-V-style shared memory support
configured into your kernel at all. As a temporary workaround, you
can try starting the server with a smaller-than-normal number of
buffers (shared_buffers). You will eventually want
to reconfigure your kernel to increase the allowed shared memory
size. You might also see this message when trying to start multiple
servers on the same machine, if their total space requested
exceeds the kernel limit.
An error like:
FATAL: could not create semaphores: No space left on device
DETAIL: Failed system call was semget(5440126, 17, 03600).
does not mean you've run out of disk
space. It means your kernel's limit on the number of System V semaphores is smaller than the number
PostgreSQL wants to create. As above,
you might be able to work around the problem by starting the
server with a reduced number of allowed connections
(max_connections), but you'll eventually want to
increase the kernel limit.
If you get an "illegal system call" error, it is likely that
shared memory or semaphores are not supported in your kernel at
all. In that case your only option is to reconfigure the kernel to
enable these features.
Details about configuring System V
IPC facilities are given in the section on managing kernel resources below.
Client Connection Problems
Although the error conditions possible on the client side are quite
varied and application-dependent, a few of them might be directly
related to how the server was started. Conditions other than
those shown below should be documented with the respective client
application.
psql: could not connect to server: Connection refused
Is the server running on host "server.joe.com" and accepting
TCP/IP connections on port 5432?
This is the generic "I couldn't find a server to talk
to" failure. It looks like the above when TCP/IP
communication is attempted. A common mistake is to forget to
configure the server to allow TCP/IP connections.
Alternatively, you'll get this when attempting Unix-domain socket
communication to a local server:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
The last line is useful in verifying that the client is trying to
connect to the right place. If there is in fact no server
running there, the kernel error message will typically be either
Connection refused or
No such file or directory, as
illustrated. (It is important to realize that
Connection refused in this context
does not mean that the server got your
connection request and rejected it. That case will produce a
different message, as described in the chapter on
client authentication.) Other error messages
such as Connection timed out might
indicate more fundamental problems, like lack of network
connectivity.
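When diagnosing the Unix-domain case, it can help to check for the socket file directly; /tmp and port 5432 are the defaults, but both can be changed in the server configuration:

```shell
# Sketch: look for the default Unix-domain socket of a local server.
SOCKET=/tmp/.s.PGSQL.5432
if [ -S "$SOCKET" ]; then
    echo "socket present at $SOCKET"
else
    echo "no socket at $SOCKET - is the server running locally?"
fi
```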
Managing Kernel Resources
A large PostgreSQL installation can quickly exhaust
various operating system resource limits. (On some systems, the
factory defaults are so low that you don't even need a really
"large" installation.) If you have encountered this kind of
problem, keep reading.
Shared Memory and Semaphores
Shared memory and semaphores are collectively referred to as
System V
IPC (together with message queues, which are not
relevant for PostgreSQL). Almost all modern
operating systems provide these features, but many of them don't have
them turned on or sufficiently sized by default, especially as
available RAM and the demands of database applications grow.
(On Windows,
PostgreSQL provides its own replacement
implementation of these facilities, so most of this section
can be disregarded.)
The complete lack of these facilities is usually manifested by an
"Illegal system call" error upon server start. In
that case there is no alternative but to reconfigure your
kernel. PostgreSQL won't work without them.
This situation is rare, however, among modern operating systems.
When PostgreSQL exceeds one of the various hard
IPC limits, the server will refuse to start and
should leave an instructive error message describing the problem
and what to do about it. (See also the section on server start-up
failures above.) The relevant kernel
parameters are named consistently across different systems; the table
below gives an overview. The methods to set
them, however, vary. Suggestions for some platforms are given below.
System V IPC parameters (name: description; reasonable values):

SHMMAX: Maximum size of shared memory segment (bytes);
    at least several megabytes (see text)
SHMMIN: Minimum size of shared memory segment (bytes);
    1
SHMALL: Total amount of shared memory available (bytes or pages);
    if bytes, same as SHMMAX; if pages, ceil(SHMMAX/PAGE_SIZE)
SHMSEG: Maximum number of shared memory segments per process;
    only 1 segment is needed, but the default is much higher
SHMMNI: Maximum number of shared memory segments system-wide;
    like SHMSEG plus room for other applications
SEMMNI: Maximum number of semaphore identifiers (i.e., sets);
    at least ceil((max_connections + autovacuum_max_workers) / 16)
SEMMNS: Maximum number of semaphores system-wide;
    ceil((max_connections + autovacuum_max_workers) / 16) * 17
    plus room for other applications
SEMMSL: Maximum number of semaphores per set;
    at least 17
SEMMAP: Number of entries in semaphore map;
    see text
SEMVMX: Maximum value of semaphore;
    at least 1000 (the default is often 32767; do not change unless necessary)
The most important
shared memory parameter is SHMMAX, the maximum size, in
bytes, of a shared memory segment. If you get an error message from
shmget like Invalid argument, it is
likely that this limit has been exceeded. The size of the required
shared memory segment varies depending on several
PostgreSQL configuration parameters.
(Any error message you might
get will include the exact size of the failed allocation request.)
You can, as a temporary solution, lower some of those settings to
avoid the failure. While it is possible to get
PostgreSQL to run with SHMMAX as small as
2 MB, you need considerably more for acceptable performance. Desirable
settings are in the hundreds of megabytes to a few gigabytes.
Some systems also have a limit on the total amount of shared memory in
the system (SHMALL). Make sure this is large enough
for PostgreSQL plus any other applications that
are using shared memory segments. Note that SHMALL
is measured in pages rather than bytes on many systems.
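On a pages-based system, a SHMALL value matching a given SHMMAX can be derived with the ceil(SHMMAX/PAGE_SIZE) formula; the numbers below are purely illustrative:

```shell
# Sketch: derive SHMALL (in pages) from a desired SHMMAX (in bytes).
SHMMAX=134217728                  # 128 MB, an illustrative target
PAGE_SIZE=4096                    # verify on your system with: getconf PAGE_SIZE
SHMALL=$(( (SHMMAX + PAGE_SIZE - 1) / PAGE_SIZE ))   # ceil(SHMMAX / PAGE_SIZE)
echo "SHMALL = $SHMALL pages"
```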
Less likely to cause problems is the minimum size for shared
memory segments (SHMMIN), which should be at most
approximately 500 kB for PostgreSQL (it is
usually just 1). The maximum number of segments system-wide
(SHMMNI) or per-process (SHMSEG) are unlikely
to cause a problem unless your system has them set to zero.
PostgreSQL uses one semaphore per allowed connection
(max_connections) and allowed autovacuum worker
process (autovacuum_max_workers), in sets of 16.
Each such set will
also contain a 17th semaphore which contains a magic
number, to detect collision with semaphore sets used by
other applications. The maximum number of semaphores in the system
is set by SEMMNS, which consequently must be at least
as high as max_connections plus
autovacuum_max_workers, plus one extra for each 16
allowed connections plus workers (see the formula in the table above). The parameter SEMMNI
determines the limit on the number of semaphore sets that can
exist on the system at one time. Hence this parameter must be at
least ceil((max_connections + autovacuum_max_workers) / 16).
Lowering the number
of allowed connections is a temporary workaround for failures,
which are usually confusingly worded No space
left on device, from the function semget.
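The two formulas above can be checked with a little shell arithmetic; max_connections=100 and autovacuum_max_workers=3 are illustrative settings, not recommendations:

```shell
# Sketch: minimum SEMMNI and SEMMNS for given connection settings.
max_connections=100
autovacuum_max_workers=3

total=$((max_connections + autovacuum_max_workers))
sets=$(( (total + 15) / 16 ))      # ceil(total / 16): semaphore sets needed (SEMMNI)
sems=$((sets * 17))                # 16 per set plus the 17th magic semaphore (SEMMNS)
echo "SEMMNI >= $sets, SEMMNS >= $sems"
```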
In some cases it might also be necessary to increase
SEMMAP to be at least on the order of
SEMMNS. This parameter defines the size of the semaphore
resource map, in which each contiguous block of available semaphores
needs an entry. When a semaphore set is freed it is either added to
an existing entry that is adjacent to the freed block or it is
registered under a new map entry. If the map is full, the freed
semaphores get lost (until reboot). Fragmentation of the semaphore
space could over time lead to fewer available semaphores than there
should be.
The SEMMSL parameter, which determines how many
semaphores can be in a set, must be at least 17 for
PostgreSQL.
Various other settings related to semaphore undo, such as
SEMMNU and SEMUME, do not affect
PostgreSQL.
AIX
At least as of version 5.1, it should not be necessary to do
any special configuration for such parameters as
SHMMAX, as it appears this is configured to
allow all memory to be used as shared memory. That is the
sort of configuration commonly used for other databases such
as DB/2. It might, however, be necessary to modify the global
ulimit information in
/etc/security/limits, as the default hard
limits for file sizes (fsize) and numbers of
files (nofiles) might be too low.
BSD/OS
Shared Memory
By default, only 4 MB of shared memory is supported. Keep in
mind that shared memory is not pageable; it is locked in RAM.
To increase the amount of shared memory supported by your
system, add something like the following to your kernel configuration
file:
options "SHMALL=8192"
options "SHMMAX=\(SHMALL*PAGE_SIZE\)"
SHMALL is measured in 4 kB pages, so a value of
1024 represents 4 MB of shared memory. Therefore the above increases
the maximum shared memory area to 32 MB.
For those running 4.3 or later, you will probably also need to increase
KERNEL_VIRTUAL_MB above the default 248.
Once all changes have been made, recompile the kernel, and reboot.
Semaphores
You will probably want to increase the number of semaphores
as well; the default system total of 60 will only allow about
50 PostgreSQL connections. Set the
values you want in your kernel configuration file, e.g.:
options "SEMMNI=40"
options "SEMMNS=240"
FreeBSD
The default settings are only suitable for small installations
(for example, default SHMMAX is 32
MB). Changes can be made via the sysctl or
loader interfaces. The following
parameters can be set using sysctl:
$ sysctl -w kern.ipc.shmall=32768
$ sysctl -w kern.ipc.shmmax=134217728
$ sysctl -w kern.ipc.semmap=256
To have these settings persist over reboots, modify
/etc/sysctl.conf.
The remaining semaphore settings are read-only as far as
sysctl is concerned, but can be changed
before boot using the loader prompt:
(loader) set kern.ipc.semmni=256
(loader) set kern.ipc.semmns=512
(loader) set kern.ipc.semmnu=256
Similarly these can be saved between reboots in
/boot/loader.conf.
You might also want to configure your kernel to lock shared
memory into RAM and prevent it from being paged out to swap.
This can be accomplished using the sysctl
setting kern.ipc.shm_use_phys.
If you run PostgreSQL in FreeBSD jails by enabling sysctl's
security.jail.sysvipc_allowed, postmasters
running in different jails should be run by different operating system
users. This improves security because it prevents non-root users
from interfering with shared memory or semaphores in different jails,
and it allows the PostgreSQL IPC cleanup code to function properly.
(In FreeBSD 6.0 and later the IPC cleanup code does not properly detect
processes in other jails, preventing the running of postmasters on the
same port in different jails.)
FreeBSD versions before 4.0 work like
NetBSD and
OpenBSD (see below).
NetBSD and OpenBSD
The options SYSVSHM and SYSVSEM need
to be enabled when the kernel is compiled. (They are by
default.) The maximum size of shared memory is determined by
the option SHMMAXPGS (in pages). The following
shows an example of how to set the various parameters on
NetBSD
(OpenBSD uses the keyword option instead):
options SYSVSHM
options SHMMAXPGS=4096
options SHMSEG=256
options SYSVSEM
options SEMMNI=256
options SEMMNS=512
options SEMMNU=256
options SEMMAP=256
You might also want to configure your kernel to lock shared
memory into RAM and prevent it from being paged out to swap.
This can be accomplished using the sysctl
setting kern.ipc.shm_use_phys.
HP-UX
The default settings tend to suffice for normal installations.
On HP-UX 10, the factory default for
SEMMNS is 128, which might be too low for larger
database sites.
IPC parameters can be set in the System
Administration Manager (SAM) under
Kernel
Configuration -> Configurable Parameters. Choose
Create A New Kernel when you're done.
Linux
The default maximum segment size is 32 MB, which is only adequate
for very small PostgreSQL
installations. The default maximum total size is 2097152
pages. A page is almost always 4096 bytes except in unusual
kernel configurations with huge pages
(use getconf PAGE_SIZE to verify). That
makes a default limit of 8 GB, which is often enough, but not
always.
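The 8 GB figure comes from multiplying the default page count by the usual page size, which is easy to verify:

```shell
# Sketch: the default Linux shared memory ceiling in bytes.
shmall_pages=2097152              # default maximum total size, in pages
page_size=4096                    # verify with: getconf PAGE_SIZE
echo "limit: $((shmall_pages * page_size)) bytes"
```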
The shared memory size settings can be changed via the
sysctl interface. For example, to allow 16 GB:
$ sysctl -w kernel.shmmax=17179869184
$ sysctl -w kernel.shmall=4194304
In addition these settings can be preserved between reboots in
the file /etc/sysctl.conf. Doing that is
highly recommended.
Ancient distributions might not have the sysctl program,
but equivalent changes can be made by manipulating the
/proc file system:
$ echo 17179869184 >/proc/sys/kernel/shmmax
$ echo 4194304 >/proc/sys/kernel/shmall
The remaining defaults are quite generously sized, and usually
do not require changes.
MacOS X
The recommended method for configuring shared memory in OS X
is to create a file named /etc/sysctl.conf,
containing variable assignments such as:
kern.sysv.shmmax=4194304
kern.sysv.shmmin=1
kern.sysv.shmmni=32
kern.sysv.shmseg=8
kern.sysv.shmall=1024
Note that in some OS X versions,
all five shared-memory parameters must be set in
/etc/sysctl.conf, else the values will be ignored.
Beware that recent releases of OS X ignore attempts to set
SHMMAX to a value that isn't an exact multiple of 4096.
SHMALL is measured in 4 kB pages on this platform.
In older OS X versions, you will need to reboot to have changes in the
shared memory parameters take effect. As of 10.5 it is possible to
change all but SHMMNI on the fly, using
sysctl. But it's still best to set up your preferred
values via /etc/sysctl.conf, so that the values will be
kept across reboots.
The file /etc/sysctl.conf is only honored in OS X
10.3.9 and later. If you are running a previous 10.3.x release,
you must edit the file /etc/rc
and change the values in the following commands:
sysctl -w kern.sysv.shmmax
sysctl -w kern.sysv.shmmin
sysctl -w kern.sysv.shmmni
sysctl -w kern.sysv.shmseg
sysctl -w kern.sysv.shmall
Note that
/etc/rc is usually overwritten by OS X system updates,
so you should expect to have to redo these edits after each update.
In OS X 10.2 and earlier, instead edit these commands in the file
/System/Library/StartupItems/SystemTuning/SystemTuning.
SCO OpenServer
In the default configuration, only 512 kB of shared memory per
segment is allowed. To increase the setting, first change to the
directory /etc/conf/cf.d. To display the current value of
SHMMAX, run:
./configure -y SHMMAX
To set a new value for SHMMAX, run:
./configure SHMMAX=value
where value is the new value you want to use
(in bytes). After setting SHMMAX, rebuild the kernel:
./link_unix
and reboot.
Solaris
At least in version 2.6, the default maximum size of a shared
memory segment is too low for PostgreSQL. The
relevant settings can be changed in /etc/system,
for example:
set shmsys:shminfo_shmmax=0x2000000
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmns=512
set semsys:seminfo_semmsl=32
You need to reboot for the changes to take effect.
See the operating system documentation for more
information on shared memory under
Solaris.
UnixWare
On UnixWare 7, the maximum size for shared
memory segments is only 512 kB in the default configuration.
To display the current value of SHMMAX, run:
/etc/conf/bin/idtune -g SHMMAX
which displays the current, default, minimum, and maximum
values. To set a new value for SHMMAX,
run:
/etc/conf/bin/idtune SHMMAX value
where value is the new value you want to use
(in bytes). After setting SHMMAX, rebuild the
kernel:
/etc/conf/bin/idbuild -B
and reboot.
Resource Limits
Unix-like operating systems enforce various kinds of resource limits
that might interfere with the operation of your
PostgreSQL server. Of particular
importance are limits on the number of processes per user, the
number of open files per process, and the amount of memory available
to each process. Each of these has a hard and a
soft limit. The soft limit is what actually counts,
but it can be changed by the user up to the hard limit. The hard
limit can only be changed by the root user. The system call
setrlimit is responsible for setting these
parameters. The shell's built-in command ulimit
(Bourne shells) or limit (csh) is
used to control the resource limits from the command line. On
BSD-derived systems the file /etc/login.conf
controls the various resource limits set during login. See the
operating system documentation for details. The relevant
parameters are maxproc,
openfiles, and datasize. For
example:
default:\
...
:datasize-cur=256M:\
:maxproc-cur=256:\
:openfiles-cur=256:\
...
(-cur is the soft limit. Append
-max to set the hard limit.)
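The current limits for a shell session can be inspected with ulimit; -S selects the soft limit and -H the hard limit:

```shell
# Sketch: show the soft and hard limits on open files for this shell.
echo "soft open-files limit: $(ulimit -Sn)"
echo "hard open-files limit: $(ulimit -Hn)"
```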
Kernels can also have system-wide limits on some resources.
On Linux, /proc/sys/fs/file-max determines the
maximum number of open files that the kernel will support. It can
be changed by writing a different number into the file or by
adding an assignment in /etc/sysctl.conf.
The maximum limit of files per process is fixed at the time the
kernel is compiled; see
/usr/src/linux/Documentation/proc.txt for
more information.
The PostgreSQL server uses one process
per connection so you should provide for at least as many processes
as allowed connections, in addition to what you need for the rest
of your system. This is usually not a problem but if you run
several servers on one machine things might get tight.
The factory default limit on open files is often set to
"socially friendly" values that allow many users to
coexist on a machine without using an inappropriate fraction of
the system resources. If you run many servers on a machine this
is perhaps what you want, but on dedicated servers you might want to
raise this limit.
On the other side of the coin, some systems allow individual
processes to open large numbers of files; if more than a few
processes do so then the system-wide limit can easily be exceeded.
If you find this happening, and you do not want to alter the
system-wide limit, you can set PostgreSQL's
max_files_per_process configuration parameter to
limit the consumption of open files.
Linux Memory Overcommit
In Linux 2.4 and later, the default virtual memory behavior is not
optimal for PostgreSQL. Because of the
way that the kernel implements memory overcommit, the kernel might
terminate the PostgreSQL server (the
master server process) if the memory demands of
another process cause the system to run out of virtual memory.
If this happens, you will see a kernel message that looks like
this (consult your system documentation and configuration on where
to look for such a message):
Out of Memory: Killed process 12345 (postgres).
This indicates that the postgres process
has been terminated due to memory pressure.
Although existing database connections will continue to function
normally, no new connections will be accepted. To recover,
PostgreSQL will need to be restarted.
One way to avoid this problem is to run
PostgreSQL on a machine where you can
be sure that other processes will not run the machine out of
memory. If memory is tight, increasing the swap space of the
operating system can help avoid the problem, because the
out-of-memory (OOM) killer is invoked only when physical memory and
swap space are exhausted.
On Linux 2.6 and later, it is possible to modify the
kernel's behavior so that it will not "overcommit" memory.
Although this setting will not prevent the OOM killer from being invoked
altogether, it will lower the chances significantly and will therefore
lead to more robust system behavior. This is done by selecting strict
overcommit mode via sysctl:
sysctl -w vm.overcommit_memory=2
or placing an equivalent entry in /etc/sysctl.conf.
You might also wish to modify the related setting
vm.overcommit_ratio. For details see the kernel documentation
file Documentation/vm/overcommit-accounting.
Another approach, which can be used with or without altering
vm.overcommit_memory, is to set the process-specific
oom_adj value for the postmaster process to -17,
thereby guaranteeing it will not be targeted by the OOM killer. The
simplest way to do this is to execute
echo -17 > /proc/self/oom_adj
in the postmaster's startup script just before invoking the postmaster.
Note that this action must be done as root, or it will have no effect;
so a root-owned startup script is the easiest place to do it. If you
do this, you may also wish to build PostgreSQL
with -DLINUX_OOM_ADJ=0 added to CFLAGS.
That will cause postmaster child processes to run with the normal
oom_adj value of zero, so that the OOM killer can still
target them at need.
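Put together, the relevant fragment of a root-owned startup script might look like this (the installation paths and the unprivileged account name are illustrative):

```shell
#!/bin/sh
# Hypothetical root-owned init-script excerpt.  The echo must run as root;
# the postmaster inherits the -17 value, exempting it from the OOM killer.
echo -17 > /proc/self/oom_adj

# Drop privileges and start the server.  If the binaries were built with
# -DLINUX_OOM_ADJ=0, backend children revert to the default score of zero.
su postgres -c '/usr/local/pgsql/bin/pg_ctl start -D /usr/local/pgsql/data -l /usr/local/pgsql/serverlog'
```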
Some vendors' Linux 2.4 kernels are reported to have early versions
of the 2.6 overcommit sysctl parameter. However, setting
vm.overcommit_memory to 2
on a 2.4 kernel that does not have the relevant code will make
things worse, not better. It is recommended that you inspect
the actual kernel source code (see the function
vm_enough_memory in the file mm/mmap.c)
to verify what is supported in your kernel before you try this in a 2.4
installation. The presence of the overcommit-accounting
documentation file should not be taken as evidence that the
feature is there. If in any doubt, consult a kernel expert or your
kernel vendor.
Shutting Down the Server
There are several ways to shut down the database server. You control
the type of shutdown by sending different signals to the master
postgres process.
SIGTERM
This is the Smart Shutdown mode.
After receiving SIGTERM, the server
disallows new connections, but lets existing sessions end their
work normally. It shuts down only after all of the sessions terminate.
If the server is in online backup mode, it additionally waits
until online backup mode is no longer active. While backup mode is
active, new connections will still be allowed, but only to superusers
(this exception allows a superuser to connect to terminate
online backup mode). If the server is in recovery when a smart
shutdown is requested, recovery and streaming replication will be
stopped only after all regular sessions have terminated.
SIGINT
This is the Fast Shutdown mode.
The server disallows new connections and sends all existing
server processes SIGTERM, which will cause them
to abort their current transactions and exit promptly. It then
waits for all server processes to exit and finally shuts down.
If the server is in online backup mode, backup mode will be
terminated, rendering the backup useless.
SIGQUIT
This is the Immediate Shutdown mode.
The master postgres process will send a
SIGQUIT to all child processes and exit
immediately, without properly shutting itself down. The child processes
likewise exit immediately upon receiving
SIGQUIT. This will lead to recovery (by
replaying the WAL log) upon next start-up. This is recommended
only in emergencies.
The pg_ctl program provides a convenient
interface for sending these signals to shut down the server.
Alternatively, you can send the signal directly using kill
on non-Windows systems.
The PID of the postgres process can be
found using the ps program, or from the first line of the file
postmaster.pid in the data directory. For
example, to do a fast shutdown:
$ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`
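The three shutdown modes correspond to signals as follows; choose one of the kill lines as appropriate (the data directory path is the same example as above, and the commands must be run as a user permitted to signal the postgres process):

```shell
PGDATA=/usr/local/pgsql/data              # example data directory
PID=$(head -1 "$PGDATA/postmaster.pid")   # first line is the postmaster PID

kill -TERM "$PID"   # smart shutdown: wait for sessions to disconnect
kill -INT  "$PID"   # fast shutdown: abort transactions, exit promptly
kill -QUIT "$PID"   # immediate shutdown: WAL recovery needed at next start
```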
It is best not to use SIGKILL to shut down
the server. Doing so will prevent the server from releasing
shared memory and semaphores, which might then have to be done
manually before a new server can be started. Furthermore,
SIGKILL kills the postgres
process without letting it relay the signal to its subprocesses,
so it will be necessary to kill the individual subprocesses by hand as
well.
To terminate an individual session while allowing other sessions to
continue, use pg_terminate_backend() or send a
SIGTERM signal to the child process associated with
the session.
Preventing Server Spoofing
While the server is running, it is not possible for a malicious user
to take the place of the normal database server. However, when the
server is down, it is possible for a local user to spoof the normal
server by starting their own server. The spoof server could read
passwords and queries sent by clients, but could not return any data,
since the PGDATA directory would still be protected by its
directory permissions. Spoofing is possible because any user can
start a database server; a client cannot identify an invalid server
unless it is specially configured.
The simplest way to prevent spoofing for local
connections is to use a Unix domain socket directory that has write permission only
for a trusted local user. This prevents a malicious user from creating
their own socket file in that directory. If you are concerned that
some applications might still reference /tmp for the
socket file and hence be vulnerable to spoofing, during operating system
startup create a symbolic link /tmp/.s.PGSQL.5432 that points
to the relocated socket file. You also might need to modify your
/tmp cleanup script to prevent removal of the symbolic link.
To prevent spoofing on TCP connections, the best solution is to use
SSL certificates and make sure that clients check the server's certificate.
To do that, the server
must be configured to accept only hostssl connections and have SSL
server.key (key) and
server.crt (certificate) files. The TCP client must connect using
sslmode=verify-ca or
verify-full and have the appropriate root certificate
file installed.
Encryption Options
PostgreSQL offers encryption at several
levels, and provides flexibility in protecting data from disclosure
due to database server theft, unscrupulous administrators, and
insecure networks. Encryption might also be required to secure
sensitive data such as medical records or financial transactions.
Password Storage Encryption
By default, database user passwords are stored as MD5 hashes, so
the administrator cannot determine the actual password assigned
to the user. If MD5 encryption is used for client authentication,
the unencrypted password is never even temporarily present on the
server, because the client MD5-encrypts it before sending it
across the network.
Encryption For Specific Columns
The contrib function library
pgcrypto
allows certain fields to be stored encrypted.
This is useful if only some of the data is sensitive.
The client supplies the decryption key and the data is decrypted
on the server and then sent to the client.
The decrypted data and the decryption key are present on the
server for a brief time while it is being decrypted and
communicated between the client and server. This presents a brief
moment where the data and keys can be intercepted by someone with
complete access to the database server, such as the system
administrator.
Data Partition Encryption
On Linux, encryption can be layered on top of a file system
using a loopback device. This allows an entire
file system partition to be encrypted on disk, and decrypted by the
operating system. On FreeBSD, the equivalent facility is called
GEOM Based Disk Encryption (gbde), and many
other operating systems support this functionality, including Windows.
This mechanism prevents unencrypted data from being read from the
drives if the drives or the entire computer is stolen. This does
not protect against attacks while the file system is mounted,
because when mounted, the operating system provides an unencrypted
view of the data. However, to mount the file system, you need some
way for the encryption key to be passed to the operating system,
and sometimes the key is stored somewhere on the host that mounts
the disk.
Encrypting Passwords Across A Network
The MD5 authentication method double-encrypts the
password on the client before sending it to the server. It first
MD5-encrypts it based on the user name, and then encrypts it
based on a random salt sent by the server when the database
connection was made. It is this double-encrypted value that is
sent over the network to the server. Double-encryption not only
prevents the password from being discovered, it also prevents
another connection from using the same encrypted password to
connect to the database server at a later time.
Encrypting Data Across A Network
SSL connections encrypt all data sent across the network: the
password, the queries, and the data returned. The
pg_hba.conf file allows administrators to specify
which hosts can use non-encrypted connections (host)
and which require SSL-encrypted connections
(hostssl). Also, clients can specify that they
connect to servers only via SSL. Stunnel or
SSH can also be used to encrypt transmissions.
SSL Host Authentication
It is possible for both the client and server to provide SSL
certificates to each other. It takes some extra configuration
on each side, but this provides stronger verification of identity
than the mere use of passwords. It prevents a computer from
pretending to be the server just long enough to read the password
sent by the client. It also helps prevent man-in-the-middle
attacks where a computer between the client and server pretends to
be the server and reads and passes all data between the client and
server.
Client-Side Encryption
If the system administrator for the server's machine cannot be trusted,
it is necessary
for the client to encrypt the data; this way, unencrypted data
never appears on the database server. Data is encrypted on the
client before being sent to the server, and database results have
to be decrypted on the client before being used.
Secure TCP/IP Connections with SSL
PostgreSQL has native support for using
SSL connections to encrypt client/server communications
for increased security. This requires that
OpenSSL is installed on both client and
server systems and that support in PostgreSQL is
enabled at build time.
With SSL support compiled in, the
PostgreSQL server can be started with
SSL enabled by setting the ssl parameter
to on in
postgresql.conf. The server will listen for both normal
and SSL connections on the same TCP port, and will negotiate
with any connecting client on whether to use SSL. By
default, this is at the client's option; see the
pg_hba.conf documentation for how to set up the server to require
use of SSL for some or all connections.
PostgreSQL reads the system-wide
OpenSSL configuration file. By default, this
file is named openssl.cnf and is located in the
directory reported by openssl version -d.
This default can be overridden by setting the environment variable
OPENSSL_CONF to the name of the desired configuration file.
OpenSSL supports a wide range of ciphers
and authentication algorithms, of varying strength. While a list of
ciphers can be specified in the OpenSSL
configuration file, you can specify ciphers specifically for use by
the database server by modifying the ssl_ciphers parameter in
postgresql.conf.
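An illustrative postgresql.conf excerpt enabling SSL with a restricted cipher list might look like this (the cipher string shown is an example only; verify the parameter names against your server version's documentation):

```
ssl = on                                  # accept SSL connections
ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'  # example OpenSSL cipher list
```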
It is possible to have authentication without encryption overhead by
using NULL-SHA or NULL-MD5 ciphers. However,
a man-in-the-middle could read and pass communications between client
and server. Also, encryption overhead is minimal compared to the
overhead of authentication. For these reasons NULL ciphers are not
recommended.
To start in SSL mode, the files server.crt
and server.key must exist in the server's data directory.
These files should contain the server certificate and private key,
respectively.
On Unix systems, the permissions on server.key must
disallow any access to world or group; achieve this by the command
chmod 0600 server.key.
If the private key is protected with a passphrase, the
server will prompt for the passphrase and will not start until it has
been entered.
In some cases, the server certificate might be signed by an
intermediate certificate authority, rather than one that is
directly trusted by clients. To use such a certificate, append the
certificate of the signing authority to the server.crt file,
then its parent authority's certificate, and so on up to a root
authority that is trusted by the clients. The root certificate should
be included in every case where server.crt contains more than
one certificate.
Using Client Certificates
To require the client to supply a trusted certificate, place
certificates of the certificate authorities (CAs)
you trust in the file root.crt in the data
directory, and set the clientcert parameter
to 1 on the appropriate hostssl line(s) in
pg_hba.conf.
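For example, an illustrative pg_hba.conf line requiring both SSL and a trusted client certificate for all remote connections (the address range and the md5 authentication method are examples only):

```
# TYPE   DATABASE  USER  ADDRESS     METHOD  OPTIONS
hostssl  all       all   0.0.0.0/0   md5     clientcert=1
```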
A certificate will then be requested from the client during
SSL connection startup. (See for a
description of how to set up certificates on the client.) The server will
verify that the client's certificate is signed by one of the trusted
certificate authorities. Certificate Revocation List (CRL) entries
are also checked if the file root.crl exists.
(See for diagrams showing SSL certificate usage.)
The clientcert option in pg_hba.conf is
available for all authentication methods, but only for rows specified as
hostssl. When clientcert is not specified
or is set to 0, the server will still verify presented client
certificates against root.crt if that file exists,
but it will not insist that a client certificate be presented.
Note that root.crt lists the top-level CAs that are
considered trusted for signing client certificates. In principle it need
not list the CA that signed the server's certificate, though in most cases
that CA would also be trusted for client certificates.
If you are setting up client certificates, you may wish to use
the cert authentication method, so that the certificates
control user authentication as well as providing connection security.
See for details.
SSL Server File Usage
The files server.key>, server.crt>,
root.crt, and root.crl
are only examined during server start; so you must restart
the server for changes in them to take effect.
SSL Server File Usage

File         Contents                                          Effect
server.crt   server certificate                                sent to client to indicate server's identity
server.key   server private key                                proves server certificate was sent by the owner; does not indicate certificate owner is trustworthy
root.crt     trusted certificate authorities                   checks that client certificate is signed by a trusted certificate authority
root.crl     certificates revoked by certificate authorities   client certificate must not be on this list
Creating a Self-Signed Certificate
To create a quick self-signed certificate for the server, use the
following OpenSSL command:
openssl req -new -text -out server.req
Fill out the information that openssl asks for. Make sure
you enter the local host name as Common Name; the challenge
password can be left blank. The program will generate a key that is
passphrase protected; it will not accept a passphrase that is less
than four characters long. To remove the passphrase (as you must if
you want automatic start-up of the server), run the commands:
openssl rsa -in privkey.pem -out server.key
rm privkey.pem
Enter the old passphrase to unlock the existing key. Now do:
openssl req -x509 -in server.req -text -key server.key -out server.crt
to turn the certificate into a self-signed certificate and to copy
the key and certificate to where the server will look for them.
Finally do:
chmod og-rwx server.key
because the server will reject the file if its permissions are more
liberal than this.
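The interactive steps above can also be collapsed into a single non-interactive command; -nodes writes the key without a passphrase, and the subject name and validity period shown are illustrative:

```shell
# Create a self-signed certificate and a passphrase-free key in one step
openssl req -new -x509 -days 365 -nodes -text \
  -out server.crt -keyout server.key \
  -subj "/CN=dbhost.yourdomain.com"

# The server refuses to load a key readable by group or world
chmod og-rwx server.key
```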
For more details on how to create your server private key and
certificate, refer to the OpenSSL documentation.
A self-signed certificate can be used for testing, but a certificate
signed by a certificate authority (CA) (either one of the
global CAs or a local one) should be used in production
so that clients can verify the server's identity. If all the clients
are local to the organization, using a local CA is
recommended.
Secure TCP/IP Connections with SSH Tunnels
It is possible to use SSH to encrypt the network
connection between clients and a
PostgreSQL server. Done properly, this
provides an adequately secure network connection, even for non-SSL-capable
clients.
First make sure that an SSH server is
running properly on the same machine as the
PostgreSQL server and that you can log in using
ssh as some user. Then you can establish a secure
tunnel with a command like this from the client machine:
ssh -L 63333:localhost:5432 joe@foo.com
The first number in the argument, 63333, is the
port number of your end of the tunnel; it can be any unused port.
(IANA reserves ports 49152 through 65535 for private use.) The
second number, 5432, is the remote end of the tunnel: the port
number your server is using. The name or IP address between the
port numbers is the host with the database server you are going to
connect to, as seen from the host you are logging in to, which
is foo.com in this example. In order to connect
to the database server using this tunnel, you connect to port 63333
on the local machine:
psql -h localhost -p 63333 postgres
To the database server it will then look as though you are really
user joe on host foo.com
connecting to localhost in that context, and it
will use whatever authentication procedure was configured for
connections from this user and host. Note that the server will not
think the connection is SSL-encrypted, since in fact it is not
encrypted between the
SSH server and the
PostgreSQL server. This should not pose any
extra security risk as long as they are on the same machine.
In order for the
tunnel setup to succeed you must be allowed to connect via
ssh as joe@foo.com, just
as if you had attempted to use ssh to create a
terminal session.
You could also have set up the port forwarding as
ssh -L 63333:foo.com:5432 joe@foo.com
but then the database server will see the connection as coming in
on its foo.com interface, which is not opened by
the default setting listen_addresses =
'localhost'. This is usually not what you want.
If you have to hop to the database server via some
login host, one possible setup could look like this:
ssh -L 63333:db.foo.com:5432 joe@shell.foo.com
Note that this way the connection
from shell.foo.com
to db.foo.com will not be encrypted by the SSH
tunnel.
SSH offers quite a few configuration possibilities when the network
is restricted in various ways. Please refer to the SSH
documentation for details.
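For unattended use, the tunnel from the first example can be run in the background: -N requests no remote command, and -f backgrounds ssh after authentication (host name and user are the same examples as above):

```shell
# Background tunnel: local port 63333 -> port 5432 on the server
ssh -f -N -L 63333:localhost:5432 joe@foo.com

# Applications then connect through the local end of the tunnel
psql -h localhost -p 63333 postgres
```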
Several other applications exist that can provide secure tunnels using
a procedure similar in concept to the one just described.