|
The additional packaging footprint of the OAuth Curl dependency, as well
as the existence of libcurl in the address space even if OAuth isn't
ever used by a client, has raised some concerns. Split off this
dependency into a separate loadable module called libpq-oauth.
When configured using --with-libcurl, libpq.so searches for this new
module via dlopen(). End users may choose not to install the libpq-oauth
module, in which case the default flow is disabled.
For static applications using libpq.a, the libpq-oauth staticlib is a
mandatory link-time dependency for --with-libcurl builds. libpq.pc has
been updated accordingly.
The default flow relies on some libpq internals. Some of these can be
safely duplicated (such as the SIGPIPE handlers), but others need to be
shared between libpq and libpq-oauth for thread-safety. To avoid
exporting these internals to all libpq clients forever, these
dependencies are instead injected from the libpq side via an
initialization function. This also lets libpq communicate the offsets of
PGconn struct members to libpq-oauth, so that we can function without
crashing if the module on the search path came from a different build of
Postgres. (A minor-version upgrade could swap the libpq-oauth module out
from under a long-running libpq client before it does its first load of
the OAuth flow.)
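For illustration only (not the actual libpq-oauth ABI), a minimal sketch of the dlopen()-based scheme described above; the module file name, init symbol, and injected-state struct are hypothetical placeholders:

    /* Hypothetical sketch; "libpq_oauth_init" and the state struct are
     * illustrative names, not the real private ABI. */
    #include <dlfcn.h>
    #include <stddef.h>

    struct oauth_injected_state
    {
        /* e.g. shared locks, SIGPIPE helpers, offsets of PGconn members */
        size_t      conn_errorMessage_offset;
    };

    typedef int (*oauth_init_fn) (const struct oauth_injected_state *state);

    static void *
    load_oauth_module(const struct oauth_injected_state *state)
    {
        /* stands in for libpq-oauth-<major>.so on the search path */
        void       *handle = dlopen("libpq-oauth-<major>.so",
                                    RTLD_NOW | RTLD_LOCAL);
        oauth_init_fn init;

        if (handle == NULL)
            return NULL;        /* module not installed: flow disabled */

        init = (oauth_init_fn) dlsym(handle, "libpq_oauth_init");
        if (init == NULL || !init(state))
        {
            dlclose(handle);
            return NULL;
        }
        return handle;
    }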
This ABI is considered "private". The module has no SONAME or version
symlinks, and it's named libpq-oauth-<major>.so to avoid mixing and
matching across Postgres versions. (Future improvements may promote this
"OAuth flow plugin" to a first-class concept, at which point we would
need a public API to replace this anyway.)
Additionally, NLS support for error messages in b3f0be788a was
incomplete, because the new error macros weren't being scanned by
xgettext. Fix that now.
Per request from Tom Lane and Bruce Momjian. Based on an initial patch
by Daniel Gustafsson, who also contributed docs changes. The "bare"
dlopen() concept came from Thomas Munro. Many people reviewed the design
and implementation; thank you!
Co-authored-by: Daniel Gustafsson <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
Reviewed-by: Christoph Berg <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Reviewed-by: Jelte Fennema-Nio <[email protected]>
Reviewed-by: Peter Eisentraut <[email protected]>
Reviewed-by: Wolfgang Walther <[email protected]>
Discussion: https://postgr.es/m/641687.1742360249%40sss.pgh.pa.us
|
|
This is necessary to be able to actually use the function on Windows;
bug introduced in commit cdb6b0fdb0b2face270406905d31f8f513b015cc.
Author: Jelte Fennema-Nio <[email protected]>
Discussion: https://postgr.es/m/CAGECzQTfc_O+HXqAo5_-xG4r3EFVsTefUeQzSvhEyyLDba-O9w@mail.gmail.com
|
|
This commit implements OAUTHBEARER, RFC 7628, and OAuth 2.0 Device
Authorization Grants, RFC 8628. In order to use this there is a
new pg_hba auth method called oauth. When speaking to an OAuth-
enabled server, it looks a bit like this:
$ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG
Device authorization is currently the only supported flow so the
OAuth issuer must support that in order for users to authenticate.
Third-party clients may however extend this and provide their own
flows. The built-in device authorization flow is currently not
supported on Windows.
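The same connection options shown above can be used from libpq directly; a minimal sketch (issuer, client id, and host are placeholders), with the device-authorization prompt printed during PQconnectdb():

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("host=example.org dbname=postgres "
                                   "oauth_issuer=https://oauth.example.org "
                                   "oauth_client_id=my-client-id");

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 0;
    }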
In order for validation to happen server side a new framework for
plugging in OAuth validation modules is added. As validation is
implementation specific, with no default specified in the standard,
PostgreSQL does not ship with one built-in. Each pg_hba entry can
specify a specific validator or be left blank for the validator
installed as default.
This adds a requirement on libcurl for the client-side support,
which is optional to build, but the server side has no additional
build requirements. In order to run the tests, Python is required,
as this adds an HTTPS server written in Python. The tests are gated
behind PG_TEST_EXTRA because they open ports.
This patch has been a multi-year project with many contributors
involved with reviews and in-depth discussions: Michael Paquier,
Heikki Linnakangas, Zhihong Yu, Mahendrakar Srinivasarao, Andrey
Chudnovsky and Stephen Frost, to name a few. While Jacob Champion
is the main author, there has been some hacking by others as well.
Daniel Gustafsson contributed the validation module and various bits
and pieces; Thomas Munro wrote the client side support for kqueue.
Author: Jacob Champion <[email protected]>
Co-authored-by: Daniel Gustafsson <[email protected]>
Co-authored-by: Thomas Munro <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Reviewed-by: Peter Eisentraut <[email protected]>
Reviewed-by: Antonin Houska <[email protected]>
Reviewed-by: Kashif Zeeshan <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
This commit adds one field to PGconn for the database service name (if
any), with PQservice() as the routine to retrieve it. Like the other
routines in this area, NULL is returned if the connection is NULL.
A follow-up patch will make use of this feature to be able to display
the service name in the psql prompt.
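A minimal usage sketch (not from the commit; "mydb" is a placeholder pg_service.conf entry):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("service=mydb");

        if (PQstatus(conn) == CONNECTION_OK)
            printf("connected via service \"%s\"\n", PQservice(conn));
        PQfinish(conn);
        return 0;
    }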
Author: Michael Banck
Reviewed-by: Greg Sabino Mullane
Discussion: https://postgr.es/m/[email protected]
|
|
Commit f5e4dedfa exposed libpq's internal function PQsocketPoll
without a lot of thought about whether that was an API we really
wanted to chisel in stone. The main problem with it is the use of
time_t to specify the timeout. While we do want an absolute time
so that a loop around PQsocketPoll doesn't have problems with
timeout slippage, time_t has only 1-second resolution. That's
already problematic for libpq's own internal usage --- for example,
pqConnectDBComplete has long had a kluge to treat "connect_timeout=1"
as 2 seconds so that it doesn't accidentally round to nearly zero.
And it's even less likely to be satisfactory for external callers.
Hence, let's change this while we still can.
The best idea seems to be to use an int64 count of microseconds since
the epoch --- basically the same thing as the backend's TimestampTz,
but let's use the standard Unix epoch (1970-01-01) since that's more
likely for clients to be easy to calculate. Millisecond resolution
would be plenty for foreseeable uses, but maybe the day will come that
we're glad we used microseconds.
Also, since time(2) isn't especially helpful for computing timeouts
defined this way, introduce a new function PQgetCurrentTimeUSec
to get the current time in this form.
Remove the hack in pqConnectDBComplete, so that "connect_timeout=1"
now means what you'd expect.
We can also remove the "#include <time.h>" that f5e4dedfa added to
libpq-fe.h, since there's no longer a need for time_t in that header.
It seems better for v17 not to enlarge libpq-fe.h's include footprint
from what it's historically been, anyway.
I also failed to resist the temptation to do some wordsmithing
on PQsocketPoll's documentation.
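A sketch of the intended usage pattern (not from the commit; the five-second timeout is arbitrary), computing an absolute microsecond deadline so retries don't suffer timeout slippage:

    #include <libpq-fe.h>

    /* Wait up to five seconds for the connection's socket to become
     * readable; returns >0 if readable, 0 on timeout, <0 on error. */
    static int
    wait_readable(PGconn *conn)
    {
        pg_usec_time_t deadline = PQgetCurrentTimeUSec() + 5 * 1000000;

        return PQsocketPoll(PQsocket(conn), 1, 0, deadline);
    }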
Patch by me, per complaint from Dominique Devienne.
Discussion: https://postgr.es/m/[email protected]
|
|
This patch generalizes libpq's existing single-row mode to allow
individual partial-result PGresults to contain up to N rows, rather
than always one row. This reduces malloc overhead compared to plain
single-row mode, and it is very useful for psql's FETCH_COUNT feature,
since otherwise we'd have to add code (and cycles) to either merge
single-row PGresults into a bigger one or teach psql's
results-printing logic to accept arrays of PGresults.
To avoid API breakage, PQsetSingleRowMode() remains the same, and we
add a new function PQsetChunkedRowsMode() to invoke the more general
case. Also, PGresults obtained the old way continue to carry the
PGRES_SINGLE_TUPLE status code, while if PQsetChunkedRowsMode() is
used then their status code is PGRES_TUPLES_CHUNK. The underlying
logic is the same either way, though.
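A usage sketch (not from the commit; query and chunk size are arbitrary), assuming the chunk size is passed as the second argument:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Fetch a large result in chunks of up to 1000 rows per PGresult. */
    static void
    fetch_in_chunks(PGconn *conn)
    {
        PGresult *res;

        if (!PQsendQuery(conn, "SELECT generate_series(1, 1000000)"))
            return;
        if (!PQsetChunkedRowsMode(conn, 1000))
            return;

        while ((res = PQgetResult(conn)) != NULL)
        {
            if (PQresultStatus(res) == PGRES_TUPLES_CHUNK)
                printf("chunk of %d rows\n", PQntuples(res));
            /* the final PGresult carries PGRES_TUPLES_OK (or an error) */
            PQclear(res);
        }
    }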
Daniel Vérité, reviewed by Laurenz Albe and myself (and whacked
around a bit by me, so any remaining bugs are my fault)
Discussion: https://postgr.es/m/CAKZiRmxsVTkO928CM+-ADvsMyePmU3L9DQCa9NwqjvLPcEe5QA@mail.gmail.com
|
|
This is useful when connecting to a database asynchronously via
PQconnectStart(), since it handles deciding between poll() and
select(), and some of the required boilerplate.
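For example, a non-blocking connection loop might look like this (a sketch, not from the commit; it uses the microsecond-deadline signature described earlier in this log, and the ten-second timeout is arbitrary):

    #include <libpq-fe.h>

    /* Drive a non-blocking connection attempt, waiting on the socket
     * with PQsocketPoll between PQconnectPoll calls. */
    static PGconn *
    connect_async(const char *conninfo)
    {
        PGconn *conn = PQconnectStart(conninfo);
        PostgresPollingStatusType st = PGRES_POLLING_WRITING;

        if (conn == NULL || PQstatus(conn) == CONNECTION_BAD)
            return conn;

        while (st != PGRES_POLLING_OK && st != PGRES_POLLING_FAILED)
        {
            pg_usec_time_t deadline = PQgetCurrentTimeUSec() + 10 * 1000000;
            int         forRead = (st == PGRES_POLLING_READING);

            if (PQsocketPoll(PQsocket(conn), forRead, !forRead, deadline) <= 0)
                break;          /* timeout or error */
            st = PQconnectPoll(conn);
        }
        return conn;
    }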
Tristan Partin, reviewed by Gurjeet Singh, Heikki Linnakangas, Jelte
Fennema-Nio, and me.
Discussion: http://postgr.es/m/[email protected]
|
|
The existing PQcancel API uses blocking IO, which makes PQcancel
impossible to use in an event loop based codebase without blocking the
event loop until the call returns. It also doesn't encrypt the
connection over which the cancel request is sent, even when the original
connection required encryption.
This commit adds a PQcancelConn struct and assorted functions, which
provide a better mechanism for sending cancel requests; in particular,
the encryption settings used in the original connection are also used in
the cancel connection. The main entry points are:
- PQcancelCreate creates the PQcancelConn based on the original
connection (but does not establish an actual connection).
- PQcancelStart can be used to initiate a non-blocking cancel request,
using encryption if the original connection did so. Such a request must
then be pumped using
- PQcancelPoll.
- PQcancelReset puts a PQcancelConn back into its initial state so that
it can be reused to send a new cancel request to the same connection.
- PQcancelBlocking is a simpler-to-use blocking API that still uses
encryption.
Additional functions are
- PQcancelStatus, which mimics PQstatus;
- PQcancelSocket, which mimics PQsocket;
- PQcancelErrorMessage, which mimics PQerrorMessage;
- PQcancelFinish, which mimics PQfinish.
A minimal usage sketch follows below.
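For instance, a blocking cancellation (a sketch, not from the commit; error handling trimmed):

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Cancel whatever is running on conn, using the blocking variant;
     * the cancel connection reuses the original connection's encryption. */
    static void
    cancel_query(PGconn *conn)
    {
        PGcancelConn *cancelConn = PQcancelCreate(conn);

        if (!PQcancelBlocking(cancelConn))
            fprintf(stderr, "cancel failed: %s",
                    PQcancelErrorMessage(cancelConn));
        PQcancelFinish(cancelConn);
    }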
Author: Jelte Fennema-Nio <[email protected]>
Reviewed-by: Denis Laxalde <[email protected]>
Discussion: https://postgr.es/m/AM5PR83MB0178D3B31CA1B6EC4A8ECC42F7529@AM5PR83MB0178.EURPRD83.prod.outlook.com
|
|
This new function is equivalent to PQpipelineSync(), except that it does
not flush anything to the server unless the output buffer's size
threshold is reached; the user must subsequently call PQflush()
instead.
Its purpose is to reduce the system call overhead of pipeline mode, by
giving applications more control over the timing of the flushes when
manipulating commands in pipeline mode.
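A sketch of the intended pattern (not from the commit; the queries are placeholders): queue several statements and a sync point without per-message flushing, then push everything with one PQflush.

    #include <libpq-fe.h>

    static int
    send_batch(PGconn *conn)
    {
        if (!PQenterPipelineMode(conn))
            return 0;
        if (!PQsendQueryParams(conn, "SELECT 1", 0, NULL, NULL, NULL, NULL, 0) ||
            !PQsendQueryParams(conn, "SELECT 2", 0, NULL, NULL, NULL, NULL, 0))
            return 0;
        if (!PQsendPipelineSync(conn))  /* unlike PQpipelineSync, no flush */
            return 0;
        return PQflush(conn) == 0;      /* now flush explicitly */
    }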
Author: Anton Kirilov
Reviewed-by: Jelte Fennema-Nio, Robert Haas, Álvaro Herrera, Denis
Laxalde, Michael Paquier
Discussion: https://postgr.es/m/CACV6eE5arHFZEA717=iKEa_OewpVFfWJOmsOdGrqqsr8CJVfWQ@mail.gmail.com
|
|
Essentially this moves the non-interactive part of psql's "\password"
command into an exported client function. The password is not sent to the
server in cleartext because it is "encrypted" (in the case of scram and md5
it is actually hashed, but we have called these encrypted passwords for a
long time now) on the client side. This is good because it ensures the
cleartext password is never known by the server, and therefore won't end up
in logs, pg_stat displays, etc.
In other words, it exists for the same reason as PQencryptPasswordConn(), but
is more convenient as it both builds and runs the "ALTER USER" command for
you. PQchangePassword() uses PQencryptPasswordConn() to do the password
encryption. PQencryptPasswordConn() is passed a NULL for the algorithm
argument, hence encryption is done according to the server's
password_encryption setting.
Also modify the psql client to use the new function. That provides a builtin
test case. Ultimately drivers built on top of libpq should expose this
function and its use should be generally encouraged over doing ALTER USER
directly for password changes.
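A usage sketch (not from the commit; role name and password are placeholders), with hashing done client-side according to the server's password_encryption setting:

    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    change_password(PGconn *conn)
    {
        PGresult *res = PQchangePassword(conn, "alice", "new-secret");

        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "password change failed: %s",
                    PQerrorMessage(conn));
        PQclear(res);
    }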
Author: Joe Conway
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/flat/b75955f7-e8cc-4bbd-817f-ef536bacbe93%40joeconway.com
|
|
The following routines are added to libpq:
PGresult *PQclosePrepared(PGconn *conn, const char *stmt);
PGresult *PQclosePortal(PGconn *conn, const char *portal);
int PQsendClosePrepared(PGconn *conn, const char *stmt);
int PQsendClosePortal(PGconn *conn, const char *portal);
The "send" routines are non-blocking versions of the two others.
Close messages are part of the protocol but did not have a libpq
implementation until now. Having these routines is useful, for instance,
with connection poolers, which can detect Close messages more easily
than DEALLOCATE queries.
The implementation takes advantage of the same infrastructure that the
Describe routines rely on for portals and statements. Some regression
tests are added in libpq_pipeline for the four new routines, closing
portals and statements already created by the tests.
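For example (a sketch, not from the commit; statement name and query are placeholders):

    #include <libpq-fe.h>

    /* Prepare a statement, then send a protocol-level Close message
     * instead of running DEALLOCATE. */
    static void
    prepare_and_close(PGconn *conn)
    {
        PGresult *res;

        res = PQprepare(conn, "get_one", "SELECT $1::int", 1, NULL);
        PQclear(res);

        res = PQclosePrepared(conn, "get_one");
        PQclear(res);
    }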
Author: Jelte Fennema
Reviewed-by: Jian He, Michael Paquier
Discussion: https://postgr.es/m/CAGECzQTb4xFAopAVokudB+L62Kt44mNAL4Z9zZ7UTrs1TRFvWA@mail.gmail.com
|
|
This reverts commit 3d03b24c3 (Revert Add support for Kerberos
credential delegation), which was committed on the grounds of concerns
about portability. On further review and discussion, it's clear that
we are better off explicitly requiring MIT Kerberos, as that appears to
be the only GSSAPI library currently under proper maintenance and
ongoing development. The API used for storing credentials was added to
MIT Kerberos over a decade ago, while for the other libraries, which
appear to be mainly based on Heimdal (itself explicitly a
re-implementation of MIT Kerberos), the API never made it into a
released version (even though it was added to the Heimdal git repo over
5 years ago).
This post-feature-freeze change was approved by the RMT.
Discussion: https://postgr.es/m/ZDDO6jaESKaBgej0%40tamriel.snowman.net
|
|
This reverts commit 3d4fa227bce4294ce1cc214b4a9d3b7caa3f0454.
Per discussion and the buildfarm, this depends on APIs that seem not to
be available on at least one platform (NetBSD). It should certainly be
possible to rework this to be optional on that platform if necessary,
but it is a bit late for that at this point.
Discussion: https://postgr.es/m/[email protected]
|
|
Support GSSAPI/Kerberos credentials being delegated to the server by a
client. With this, a user authenticating to PostgreSQL using Kerberos
(GSSAPI) credentials can choose to delegate their credentials to the
PostgreSQL server (which can choose to accept them, or not), allowing
the server to then use those delegated credentials to connect to
another service, such as postgres_fdw or dblink, or theoretically
any other service that can be authenticated using Kerberos.
Both postgres_fdw and dblink are changed to allow non-superuser
password-less connections but only when GSSAPI credentials have been
delegated to the server by the client and GSSAPI is used to
authenticate to the remote system.
Authors: Stephen Frost, Peifeng Qiu
Reviewed-By: David Christensen
Discussion: https://postgr.es/m/CO1PR05MB8023CC2CB575E0FAAD7DF4F8A8E29@CO1PR05MB8023.namprd05.prod.outlook.com
|
|
This new libpq function allows the application to send an 'H' message,
which instructs the server to flush its outgoing buffer.
This hasn't been needed so far because the Sync message already requests
a buffer flush; and I failed to realize that this was needed in pipeline
mode, because PQpipelineSync also causes the buffer to be flushed.
However, sometimes it is useful to request a flush without establishing
a synchronization point.
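For example, within an already-established pipeline (a sketch, not from the commit; the query is a placeholder):

    #include <libpq-fe.h>

    /* Ask the server to flush its output buffer so results can be
     * consumed without establishing a synchronization point. */
    static int
    flush_without_sync(PGconn *conn)
    {
        if (!PQsendQueryParams(conn, "SELECT now()",
                               0, NULL, NULL, NULL, NULL, 0))
            return 0;
        if (!PQsendFlushRequest(conn))
            return 0;
        return PQflush(conn) == 0;  /* push the 'H' message to the server */
    }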
Backpatch to 14, where pipeline mode was introduced in libpq.
Reported-by: Boris Kolpackov <[email protected]>
Author: Álvaro Herrera <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
We have a dozen PQset*() functions. PQresultSetInstanceData() and this
one were the only libpq setter functions with a different word order.
Adopt the majority word order.
Reviewed by Alvaro Herrera and Robert Haas, though this choice of name
was not unanimous.
Discussion: https://postgr.es/m/[email protected]
|
|
An incorrectly-encoded multibyte character near the end of a string
could cause various processing loops to run past the string's
terminating NUL, with results ranging from no detectable issue to
a program crash, depending on what happens to be in the following
memory.
This isn't an issue in the server, because we take care to verify
the encoding of strings before doing any interesting processing
on them. However, that lack of care leaked into client-side code
which shouldn't assume that anyone has validated the encoding of
its input.
Although this is certainly a bug worth fixing, the PG security team
elected not to regard it as a security issue, primarily because
any untrusted text should be sanitized by PQescapeLiteral or
the like before being incorporated into a SQL or psql command.
(If an app fails to do so, the same technique can be used to
cause SQL injection, with probably much more dire consequences
than a mere client-program crash.) Those functions were already
made proof against this class of problem, cf CVE-2006-2313.
To fix, invent PQmblenBounded() which is like PQmblen() except it
won't return more than the number of bytes remaining in the string.
In HEAD we can make this a new libpq function, as PQmblen() is.
It seems imprudent to change libpq's API in stable branches though,
so in the back branches define PQmblenBounded as a macro in the files
that need it. (Note that just changing PQmblen's behavior would not
be a good idea; notably, it would completely break the escaping
functions' defense against this exact problem. So we just want a
version for those callers that don't have any better way of handling
this issue.)
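A usage sketch (not from the commit): stepping through a possibly mis-encoded string without running past its terminating NUL.

    #include <libpq-fe.h>

    /* PQmblenBounded never reports more bytes than actually remain. */
    static int
    count_chars(const char *s, int encoding)
    {
        int         n = 0;

        for (const char *p = s; *p != '\0'; p += PQmblenBounded(p, encoding))
            n++;
        return n;
    }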
Per private report from houjingyi. Back-patch to all supported branches.
|
|
Transform the PQtrace output from its ancient (and mostly useless)
byte-level format to a logical-message-level format, making it much more
usable. This implementation allows the printing code to be written (as
it indeed was) by looking at the protocol documentation, which gives
more confidence that the three components (docs, trace code and actual
protocol code) actually match.
Author: 岩田 彩 (Aya Iwata) <[email protected]>
Reviewed-by: 綱川 貴之 (Takayuki Tsunakawa) <[email protected]>
Reviewed-by: Kirk Jamison <[email protected]>
Reviewed-by: Kyotaro Horiguchi <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Reviewed-by: 黒田 隼人 (Hayato Kuroda) <[email protected]>
Reviewed-by: "Nagaura, Ryohei" <[email protected]>
Reviewed-by: Ryo Matsumura <[email protected]>
Reviewed-by: Greg Nancarrow <[email protected]>
Reviewed-by: Jim Doty <[email protected]>
Reviewed-by: Álvaro Herrera <[email protected]>
Discussion: https://postgr.es/m/71E660EB361DF14299875B198D4CE5423DE3FBA4@g01jpexmbkw25
|
|
Pipeline mode in libpq lets an application avoid the Sync messages in
the FE/BE protocol that are implicit in the old libpq API after each
query. The application can then insert Sync at its leisure with a new
libpq function PQpipelineSync. This can lead to substantial reductions
in query latency.
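A minimal usage sketch (not from the commit; queries are placeholders and error handling is trimmed):

    #include <libpq-fe.h>

    /* Send two queries and one Sync, then read the results back; the
     * Sync both flushes the queue and delimits error recovery. */
    static void
    run_pipeline(PGconn *conn)
    {
        PGresult *res;

        PQenterPipelineMode(conn);
        PQsendQueryParams(conn, "SELECT 1", 0, NULL, NULL, NULL, NULL, 0);
        PQsendQueryParams(conn, "SELECT 2", 0, NULL, NULL, NULL, NULL, 0);
        PQpipelineSync(conn);

        for (;;)
        {
            res = PQgetResult(conn);
            if (res == NULL)
                continue;       /* end of one query's results */
            if (PQresultStatus(res) == PGRES_PIPELINE_SYNC)
            {
                PQclear(res);
                break;          /* all queued results consumed */
            }
            PQclear(res);
        }
        PQexitPipelineMode(conn);
    }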
Co-authored-by: Craig Ringer <[email protected]>
Co-authored-by: Matthieu Garrigues <[email protected]>
Co-authored-by: Álvaro Herrera <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
Reviewed-by: Aya Iwata <[email protected]>
Reviewed-by: Daniel Vérité <[email protected]>
Reviewed-by: David G. Johnston <[email protected]>
Reviewed-by: Justin Pryzby <[email protected]>
Reviewed-by: Kirk Jamison <[email protected]>
Reviewed-by: Michael Paquier <[email protected]>
Reviewed-by: Nikhil Sontakke <[email protected]>
Reviewed-by: Vaishnavi Prabakaran <[email protected]>
Reviewed-by: Zhihong Yu <[email protected]>
Discussion: https://postgr.es/m/CAMsr+YFUjJytRyV4J-16bEoiZyH=4nj+sQ7JP9ajwz=B4dMMZw@mail.gmail.com
Discussion: https://postgr.es/m/CAJkzx4T5E-2cQe3dtv2R78dYFvz+in8PY7A8MArvLhs_pg75gg@mail.gmail.com
|
|
libpq's exports.txt was overlooked in commit 36d108761, which the
buildfarm is quite unhappy about.
Also, I'd gathered that the plan included renaming PQgetSSLKeyPassHook
to PQgetSSLKeyPassHook_OpenSSL, but that didn't happen in the patch
as committed. I'm taking it on my own authority to do so now, since
the window before beta1 is closing fast.
|
|
This partially reverts commit 4dc6355210.
The information returned by the function can be obtained by calling
PQconninfo(), so the function is redundant.
|
|
This patch provides support for password-protected SSL client keys
in libpq, and for DER format keys, both encrypted and unencrypted.
There is a new connection parameter sslpassword, which is supplied to
the OpenSSL libraries via a callback function. The callback function can
also be set by an application by calling PQsetSSLKeyPassHook(). There is
also a function to retrieve the connection setting, PQsslpassword().
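The simplest use is via the connection string (a sketch, not from the commit; host, file paths and the passphrase are placeholders):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* sslpassword decrypts the protected client key */
        PGconn *conn = PQconnectdb("host=db.example.com dbname=postgres "
                                   "sslmode=verify-full "
                                   "sslcert=client.crt sslkey=client.key "
                                   "sslpassword=secret");

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 0;
    }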
Craig Ringer and Andrew Dunstan
Reviewed by: Greg Nancarrow
Discussion: https://postgr.es/m/[email protected]
|
|
This reverts commit f7ab80285. Per discussion, we can't remove an
exported symbol without a SONAME bump, which we don't want to do.
In particular that breaks usage of current libpq.so with pre-9.3
versions of psql etc, which need libpq to export pqsignal().
As noted in that commit message, exporting the symbol from libpgport.a
won't work reliably; but actually we don't want to export src/port's
implementation anyway. Any pre-9.3 client is going to be expecting the
definition that pqsignal() had before 9.3, which was that it didn't
set SA_RESTART for SIGALRM. Hence, put back pqsignal() in a separate
source file in src/interfaces/libpq, and give it the old semantics.
Back-patch to v12.
Discussion: https://postgr.es/m/[email protected]
|
|
On both the frontend and backend, prepare for GSSAPI encryption
support by moving common code for error handling into a separate file.
Fix a TODO for handling multiple status messages in the process.
Eliminate the OIDs, which have not been needed for some time.
Add frontend and backend encryption support functions. Keep the
context initiation for authentication-only separate on both the
frontend and backend in order to avoid concerns about changing the
requested flags to include encryption support.
In postmaster, pull GSSAPI authorization checking into a shared
function. Also share the initiator name between the encryption and
non-encryption codepaths.
For HBA, add "hostgssenc" and "hostnogssenc" entries that behave
similarly to their SSL counterparts. "hostgssenc" requires either
"gss", "trust", or "reject" for its authentication.
Similarly, add a "gssencmode" parameter to libpq. Supported values are
"disable", "require", and "prefer". Notably, negotiation will only be
attempted if credentials can be acquired. Move credential acquisition
into its own function to support this behavior.
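From the client side, usage is just a connection parameter (a sketch, not from the commit; the host is a placeholder):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* with gssencmode=require, the connection fails rather than fall
         * back to an unencrypted (or SSL) transport */
        PGconn *conn = PQconnectdb("host=db.example.com dbname=postgres "
                                   "gssencmode=require");

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 0;
    }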
Add a simple pg_stat_gssapi view similar to pg_stat_ssl, for monitoring
if GSSAPI authentication was used, what principal was used, and if
encryption is being used on the connection.
Finally, add documentation for everything new, and update existing
documentation on connection security.
Thanks to Michael Paquier for the Windows fixes.
Author: Robbie Harwood, with changes to the read/write functions by me.
Reviewed in various forms and at different times by: Michael Paquier,
Andres Freund, David Steele.
Discussion: https://www.postgresql.org/message-id/flat/[email protected]
|
|
When hostaddr is given, the actual IP address that psql is connected to
can be totally unexpected for the given host. The more verbose output
we now generate makes things clearer. Since the "host" and "hostaddr"
parts of the conninfo could come from different sources (say, one of
them is in the service specification or a URI-style conninfo and the
other is not), this is not as silly as it may first appear. This is
also definitely useful if the hostname resolves to multiple addresses.
Author: Fabien Coelho
Reviewed-by: Pavel Stehule, Arthur Zakirov
Discussion: https://postgr.es/m/alpine.DEB.2.21.1810261532380.27686@lancre
https://postgr.es/m/alpine.DEB.2.21.1808201323020.13832@lancre
|
|
Client applications should get this function, if they need it, from
libpgport.
The fact that it's exported from libpq is a hack left over from before
we set up libpgport. It's never been documented, and there's no good
reason for non-PG code to be calling it anyway, so hopefully this won't
cause any problems. Moreover, with the previous setup it was not real
clear whether our clients that use the function were getting it from
libpgport or libpq, so this might actually prevent problems.
The reason for changing it now is that in the wake of commit ea53100d5,
some linkers won't export the symbol, apparently because it's coming from
a .a library instead of a .o file. We could get around that by continuing
to symlink pqsignal.c into libpq as before; but unless somebody complains
very hard, I don't want to adopt such a kluge.
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/[email protected]
|
|
This number can be useful for application memory management, and the
overhead to track it seems pretty trivial.
Lars Kanis, reviewed by Pavel Stehule, some mods by me
Discussion: https://postgr.es/m/[email protected]
|
|
The new function supports creating SCRAM verifiers, in addition to md5
hashes. The algorithm is chosen based on password_encryption, by default.
This fixes the issue reported by Jeff Janes, that there was previously
no way to create a SCRAM verifier with "\password".
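The new function referred to here is PQencryptPasswordConn() (also mentioned elsewhere in this log); a usage sketch with placeholder role and password:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Passing NULL for the algorithm makes the choice follow the
     * server's password_encryption setting. */
    static void
    print_verifier(PGconn *conn)
    {
        char *encrypted = PQencryptPasswordConn(conn, "new-secret",
                                                "alice", NULL);

        if (encrypted != NULL)
        {
            printf("%s\n", encrypted);
            PQfreemem(encrypted);
        }
    }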
Michael Paquier and me
Discussion: https://www.postgresql.org/message-id/CAMkU%3D1wfBgFPbfAMYZQE78p%3DVhZX7nN86aWkp0QcCp%3D%2BKxZ%3Dbg%40mail.gmail.com
|
|
Often, upon getting an unexpected error in psql, one's first wish is that
the verbosity setting had been higher; for example, to be able to see the
schema-name field or the server code location info. Up to now the only way
has been to adjust the VERBOSITY variable and repeat the failing query.
That's a pain, and it doesn't work if the error isn't reproducible.
This commit adds support in libpq for regenerating the error message for
an existing error PGresult at any desired verbosity level. This is almost
just a matter of refactoring the existing code into a subroutine, but there
is one bit of possibly-needed information that was not getting put into
PGresults: the text of the last query sent to the server. We must add that
string to the contents of an error PGresult. But we only need to save it
if it might be used, which with the existing error-formatting code only
happens if there is a PG_DIAG_STATEMENT_POSITION error field, which is
probably pretty rare for errors in production situations. So really the
overhead when the feature isn't used should be negligible.
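Assuming the new entry point is PQresultVerboseErrorMessage() (the libpq function that serves this purpose; the name is not stated above), usage might look like:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Re-render an error PGresult at full verbosity after the fact. */
    static void
    print_verbose_error(const PGresult *res)
    {
        char *msg = PQresultVerboseErrorMessage(res, PQERRORS_VERBOSE,
                                                PQSHOW_CONTEXT_ALWAYS);

        if (msg != NULL)
        {
            fprintf(stderr, "%s", msg);
            PQfreemem(msg);
        }
    }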
Alex Shulgin, reviewed by Daniel Vérité, some improvements by me
|
|
Per discussion, the original name was a bit misleading, and
PQsslAttributeNames() seems more apropos. It's not quite too late to
change this in 9.5, so let's change it while we can.
Also, make sure that the pointer array is const, not only the pointed-to
strings.
Minor documentation wordsmithing while at it.
Lars Kanis, slight adjustments by me
|
|
Remove the code in plpgsql that suppressed the innermost line of CONTEXT
for messages emitted by RAISE commands. That was never more than a quick
backwards-compatibility hack, and it's pretty silly in cases where the
RAISE is nested in several levels of function. What's more, it violated
our design theory that verbosity of error reports should be controlled
on the client side not the server side.
To alleviate the resulting noise increase, introduce a feature in libpq
and psql whereby the CONTEXT field of messages can be suppressed, either
always or only for non-error messages. Printing CONTEXT only for errors
is now the default behavior of both.
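On the libpq side this is presumably controlled per connection with PQsetErrorContextVisibility() (the function name is assumed, not stated above); a sketch:

    #include <libpq-fe.h>

    /* Include CONTEXT only in error reports (the default), or suppress
     * it entirely with PQSHOW_CONTEXT_NEVER. */
    static void
    tune_context(PGconn *conn)
    {
        (void) PQsetErrorContextVisibility(conn, PQSHOW_CONTEXT_ERRORS);
    }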
The actual code changes here are pretty small, but the effects on the
regression test outputs are widespread. I had to edit some of the
alternative expected outputs by hand; hopefully the buildfarm will soon
find anything I fat-fingered.
In passing, fix up (again) the output line counts in psql's various
help displays. Add some commentary about how to verify them.
Pavel Stehule, reviewed by Petr Jelínek, Jeevan Chalke, and others
|
|
This makes it possible to query for things like the SSL version and cipher
used, without depending on OpenSSL functions or macros. That is a good
thing if we ever get another SSL implementation.
PQgetssl() still works, but it should be considered deprecated as it
only works with OpenSSL. In particular, PQsslInUse() should be used to
check if a connection uses SSL, because as soon as we have another
implementation, PQgetssl() will return NULL even if SSL is in use.
|
|
This reverts commit 9f80f4835a55a1cbffcda5d23a617917f3286c14. The
function returned the raw value of a connection parameter, a task served
by PQconninfo(). The next commit will reimplement the psql \conninfo
change that way. Back-patch to 9.4, where that commit first appeared.
|
|
There was a bug in psql's meta-command \conninfo. When an
IP address was specified in hostaddr and psql used it to create
a connection (i.e., psql -d "hostaddr=xxx"), \conninfo could not
display that address. This is because \conninfo got the connection
information only from PQhost() which could not return hostaddr.
This patch adds PQhostaddr(), and changes \conninfo so that it
can display not only the host name that PQhost() returns but also
the IP address which PQhostaddr() returns.
The bug has existed since 9.1, where \conninfo was introduced.
But it's too late to add a new libpq function to the released versions,
so no backpatch.
|
|
This allows a caller to get back the exact conninfo array that was
used to create a connection, including parameters read from the
environment.
In doing this, restructure how options are copied from the conninfo
to the actual connection.
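Assuming the new entry point is PQconninfo() (as referenced elsewhere in this log), a sketch dumping every parameter actually used for a connection, including values picked up from the environment:

    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    dump_conninfo(PGconn *conn)
    {
        PQconninfoOption *opts = PQconninfo(conn);

        if (opts != NULL)
        {
            /* the array is terminated by an entry with a NULL keyword */
            for (PQconninfoOption *o = opts; o->keyword != NULL; o++)
            {
                if (o->val != NULL)
                    printf("%s = %s\n", o->keyword, o->val);
            }
            PQconninfoFree(opts);
        }
    }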
Zoltan Boszormenyi and Magnus Hagander
|
|
Add an API for 64-bit large object access, allowing access to up to
4TB large objects (standard 8KB BLCKSZ case). For this purpose the new
libpq API functions lo_lseek64, lo_tell64 and lo_truncate64 are added,
along with corresponding new backend functions lo_lseek64, lo_tell64
and lo_truncate64. inv_api.c is changed to handle 64-bit offsets.
Patch contributed by Nozomi Anzai (backend side) and Yugo Nagata
(frontend side, docs, regression tests and example program). Reviewed
by Kohei Kaigai. Committed by Tatsuo Ishii with minor edits.
|
|
After taking awhile to digest the row-processor feature that was added to
libpq in commit 92785dac2ee7026948962cd61c4cd84a2d052772, we've concluded
it is over-complicated and too hard to use. Leave the core infrastructure
changes in place (that is, there's still a row processor function inside
libpq), but remove the exposed API pieces, and instead provide a "single
row" mode switch that causes PQgetResult to return one row at a time in
separate PGresult objects.
This approach incurs more overhead than proper use of a row processor
callback would, since construction of a PGresult per row adds extra cycles.
However, it is far easier to use and harder to break. The single-row mode
still affords applications the primary benefit that the row processor API
was meant to provide, namely not having to accumulate large result sets in
memory before processing them. Preliminary testing suggests that we can
probably buy back most of the extra cycles by micro-optimizing construction
of the extra results, but that task will be left for another day.
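Assuming the mode switch is PQsetSingleRowMode() (the name is not spelled out above), a usage sketch with a placeholder query:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Stream a large result one row at a time instead of materializing it. */
    static void
    stream_rows(PGconn *conn)
    {
        PGresult *res;

        if (!PQsendQuery(conn, "SELECT generate_series(1, 1000000)"))
            return;
        if (!PQsetSingleRowMode(conn))
            return;

        while ((res = PQgetResult(conn)) != NULL)
        {
            if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
                printf("%s\n", PQgetvalue(res, 0, 0));
            /* the terminating PGresult has PGRES_TUPLES_OK and zero rows */
            PQclear(res);
        }
    }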
Marko Kreen
|
|
Traditionally libpq has collected an entire query result before passing
it back to the application. That provides a simple and transactional API,
but it's pretty inefficient for large result sets. This patch allows the
application to process each row on-the-fly instead of accumulating the
rows into the PGresult. Error recovery becomes a bit more complex, but
often that tradeoff is well worth making.
Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
|
|
This function is like the PQserverVersion() function except
it returns the version of libpq, making it possible for a client
program or driver to determine which version of libpq is in
use at runtime, and not just at link time.
Suggested by Harald Armin Massa and several others.
|
|
Add a new libpq function that reports the server's status, including a
status where the server is running but refuses a postgres connection.
Have pg_ctl use this new function. This fixes the case where pg_ctl
reports that the server is not running (cannot connect) but in fact it
is running.
|
|
Add two new libpq connection functions, PQconnectdbParams and
PQconnectStartParams. These are analogous to PQconnectdb and PQconnectStart
respectively. They differ from the legacy functions in that they accept
two NULL-terminated arrays, keywords and values, rather than conninfo
strings. This avoids the need to build the conninfo string in cases
where it might be inconvenient to do so. Includes documentation.
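A usage sketch (not from the commit; the keyword/value pairs are placeholders):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* keyword/value arrays replace a hand-built conninfo string */
        const char *keywords[] = {"host", "dbname", "application_name", NULL};
        const char *values[]   = {"localhost", "postgres", "myapp", NULL};

        PGconn *conn = PQconnectdbParams(keywords, values, 0);

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 0;
    }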
Also modify psql to utilize PQconnectdbParams rather than PQsetdbLogin.
This allows the new config parameter application_name to be set, which
in turn is displayed in the pg_stat_activity view and included in CSV
log entries. This will also ensure both new functions get regularly
exercised.
Patch by Guillaume Lelarge with review and minor adjustments by
Joe Conway.
|
|
PQescapeLiteral is similar to PQescapeStringConn, but it relieves the
caller of the need to know how large the output buffer should be, and
it provides the appropriate quoting (in addition to escaping special
characters within the string). PQescapeIdentifier provides similar
functionality for escaping identifiers.
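A usage sketch (not from the commit; the "users" table and column are placeholders):

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    /* Safely embed untrusted text in a query; the result is already
     * quoted and must be released with PQfreemem. */
    static PGresult *
    lookup_user(PGconn *conn, const char *name)
    {
        char       *quoted = PQescapeLiteral(conn, name, strlen(name));
        char        query[256];
        PGresult   *res;

        if (quoted == NULL)
            return NULL;
        snprintf(query, sizeof(query),
                 "SELECT * FROM users WHERE name = %s", quoted);
        PQfreemem(quoted);
        res = PQexec(conn, query);
        return res;
    }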
Per recent discussion with Tom Lane.
|
|
Add PQinitOpenSSL(), so that applications can indicate they have already
initialized libcrypto but not OpenSSL (or perhaps vice versa, if that's possible).
Andrew Chernow, with minor editorialization by me.
|
|
Make dblink verify that a password is supplied in the
conninfo string *before* trying to connect to the remote server, not after.
As pointed out by Marko Kreen, in certain not-very-plausible situations
this could result in sending a password from the postgres user's .pgpass file,
or other places that non-superusers shouldn't have access to, to an
untrustworthy remote server. The cleanest fix seems to be to expose libpq's
conninfo-string-parsing code so that dblink can check for a password option
without duplicating the parsing logic.
Joe Conway, with a little cleanup by Tom Lane
|
|
Clarify the sequence of operations that libpq goes through while creating a PGresult.
Also, remove ill-considered "const" decoration on parameters passed to
event procedures.
|
|
Add an event system to libpq, whereby applications can register callback procedures that
enable them to manage private data associated with PGconns and PGresults.
Andrew Chernow and Merlin Moncure
|
|
Add lo_import_with_oid(), which is the same as lo_import()
except that the lob's oid can be specified.
|
|
Add a PQconnectionNeedsPassword function that tells the right thing for whether to
prompt for a password, and improve PQconnectionUsedPassword so that it checks
whether the password used by the connection was actually supplied as a
connection argument, instead of coming from the environment or a password file.
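A sketch of the intended prompt-and-retry check (not from the commit):

    #include <libpq-fe.h>

    /* Prompt only when the connection failed and the server wanted a
     * password that was not already provided by some other source. */
    static int
    needs_prompt(PGconn *conn)
    {
        return PQstatus(conn) == CONNECTION_BAD &&
               PQconnectionNeedsPassword(conn);
    }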
Per bug report from Mark Cave-Ayland and subsequent discussion.
|
|
The renumbering of encoding IDs done between 8.2 and 8.3 turns out to break 8.2
initdb and psql if they are run with an 8.3beta1 libpq.so. For the moment
we can rearrange the order of enum pg_enc to keep the same number for
everything except PG_JOHAB, which isn't a problem since there are no direct
references to it in the 8.2 programs anyway. (This does force initdb
unfortunately.)
Going forward, we want to fix things so that encoding IDs can be changed
without an ABI break, and this commit includes the changes needed to allow
libpq's encoding IDs to be treated as fully independent of the backend's.
The main issue is that libpq clients should not include pg_wchar.h or
otherwise assume they know the specific values of libpq's encoding IDs,
since they might encounter version skew between pg_wchar.h and the libpq.so
they are using. To fix, have libpq officially export functions needed for
encoding name<=>ID conversion and validity checking; it was doing this
anyway unofficially.
It's still the case that we can't renumber backend encoding IDs until the
next bump in libpq's major version number, since doing so will break the
8.2-era client programs. However the code is now prepared to avoid this
type of problem in future.
Note that initdb is no longer a libpq client: we just pull in the two
source files we need directly. The patch also fixes a few places that
were being sloppy about checking for an unrecognized encoding name.
|