| Age | Commit message | Author |
|
This patch generalizes the subscripting infrastructure so that any
data type can be subscripted, if it provides a handler function to
define what that means. Traditional variable-length (varlena) arrays
all use array_subscript_handler(), while the existing fixed-length
types that support subscripting use raw_array_subscript_handler().
It's expected that other types that want to use subscripting notation
will define their own handlers. (This patch provides no such new
features, though; it only lays the foundation for them.)
To do this, move the parser's semantic processing of subscripts
(including coercion to whatever data type is required) into a
method callback supplied by the handler. On the execution side,
replace the ExecEvalSubscriptingRef* layer of functions with direct
calls to callback-supplied execution routines. (Thus, essentially
no new run-time overhead should be caused by this patch. Indeed,
there is room to remove some overhead by supplying specialized
execution routines. This patch does a little bit in that line,
but more could be done.)
Additional work is required here and there to remove formerly
hard-wired assumptions about the result type, collation, etc
of a SubscriptingRef expression node; and to remove assumptions
that the subscript values must be integers.
One useful side-effect of this is that we now have a less squishy
mechanism for identifying whether a data type is a "true" array:
instead of wiring in weird rules about typlen, we can look to see
if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER. For this
to be bulletproof, we have to forbid user-defined types from using
that handler directly; but there seems no good reason for them to
do so.
This patch also removes assumptions that the number of subscripts
is limited to MAXDIM (6), or indeed has any hard-wired limit.
That limit still applies to types handled by array_subscript_handler
or raw_array_subscript_handler, but to discourage other dependencies
on this constant, I've moved it from c.h to utils/array.h.
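For illustration, the "true array" test described above boils down to a
catalog lookup along these lines (a sketch using the usual syscache idiom,
not code lifted from the patch):
    #include "postgres.h"
    #include "access/htup_details.h"
    #include "catalog/pg_type.h"
    #include "utils/fmgroids.h"
    #include "utils/syscache.h"

    static bool
    type_is_true_array(Oid typid)
    {
        HeapTuple   tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typid));
        bool        result;

        if (!HeapTupleIsValid(tup))
            elog(ERROR, "cache lookup failed for type %u", typid);
        /* a "true" varlena array uses the standard array handler */
        result = (((Form_pg_type) GETSTRUCT(tup))->typsubscript ==
                  F_ARRAY_SUBSCRIPT_HANDLER);
        ReleaseSysCache(tup);
        return result;
    }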
Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov,
Peter Eisentraut, Pavel Stehule
Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com
|
|
It was still using a scan of pg_depend instead of using the conindid
column that has been added since.
Since it is now just a catalog lookup wrapper and not related to
pg_depend, move from pg_depend.c to lsyscache.c.
Reviewed-by: Matthias van de Meent <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Reviewed-by: Michael Paquier <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/4688d55c-9a2e-9a5a-d166-5f24fe0bf8db%40enterprisedb.com
|
|
It is error prone (see 5da871bfa1b) and verbose to manually create function
types. Add a helper that can reference a function pointer type via
llvmjit_types.c and convert existing instances of manual creation.
Author: Andres Freund <[email protected]>
Reviewed-By: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
This GUC was always intended as a temporary solution to help with
finding 9.4-to-9.5 migration issues. Now that all pre-9.5 branches
are out of support, and 9.5 will be too before v14 is released,
it seems like it's okay to drop it. Doing so allows removal of
several hundred lines of poorly-tested code in parse_expr.c,
which have been a fertile source of bugs when people did use this.
Discussion: https://postgr.es/m/[email protected]
|
|
Oversight in 01469241b2.
Reported-by: Andres Freund
Discussion: https://postgr.es/m/[email protected]
|
|
Upstream OpenSSL has recommended against using its low-level hash routines
since 2000, and pgcrypto already switched to EVP as of 5ff4a67.
This takes advantage of the refactoring done in 87ae969 that has
introduced the allocation and free routines for cryptographic hashes.
Since 1.1.0, OpenSSL does not publish the contents of the cryptohash
contexts, forcing any consumers to rely on OpenSSL for all allocations.
Hence, the resource owner callback mechanism gains a new set of routines
to track and free cryptohash contexts when using OpenSSL, preventing any
risks of leaks in the backend. Nothing is needed in the frontend thanks
to the refactoring of 87ae969, and the resowner knowledge is isolated
into cryptohash_openssl.c.
Note that this also fixes a failure with SCRAM authentication when using
FIPS in OpenSSL, but as there have been few complaints about this
problem and as this causes an ABI breakage, no backpatch is done.
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson, Heikki Linnakangas
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/[email protected]
|
|
Formerly we only applied extended statistics to an OR clause as part
of the clauselist_selectivity() code path for an OR clause appearing
in an implicitly-ANDed list of clauses. This meant that it could only
use extended statistics if all sub-clauses of the OR clause were
covered by a single extended statistics object.
Instead, teach clause_selectivity() how to apply extended statistics
to an OR clause by handling its ORed list of sub-clauses in a similar
manner to an implicitly-ANDed list of sub-clauses, but with different
combination rules. This allows one or more extended statistics objects
to be used to estimate all or part of the list of sub-clauses. Any
remaining sub-clauses are then treated as if they are independent.
Additionally, to avoid double-application of extended statistics, this
introduces "extended" versions of clause_selectivity() and
clauselist_selectivity(), which include an option to ignore extended
statistics. This replaces the old clauselist_selectivity_simple()
function which failed to completely ignore extended statistics when
called from the extended statistics code.
A known limitation of the current infrastructure is that an AND clause
under an OR clause is not treated as compatible with extended
statistics (because we don't build RestrictInfos for such sub-AND
clauses). Thus, for example, "(a=1 AND b=1) OR (a=2 AND b=2)" will
currently be treated as two independent AND clauses (each of which may
be estimated using extended statistics), but extended statistics will
not currently be used to account for any possible overlap between
those clauses. Improving that is left as a task for the future.
Original patch by Tomas Vondra, with additional improvements by me.
Discussion: https://postgr.es/m/20200113230008.g67iyk4cs3xbnjju@development
|
|
This changes CLUSTER and REINDEX so that a parenthesized grammar becomes
possible for options, while unifying the grammar parsing rules for
option lists with the existing ones.
This is a follow-up of the work done in 873ea9e for VACUUM, ANALYZE and
EXPLAIN. This benefits REINDEX, which could gain backend-side filtering
of collation-sensitive indexes and a TABLESPACE option, while CLUSTER
would benefit from the latter.
Author: Alexey Kondratov, Justin Pryzby
Discussion: https://postgr.es/m/[email protected]
|
|
Commit 6b466bf5f2 allowed pg_stat_statements to track the number of
WAL records, full page images and bytes that each statement generated.
Similarly, this commit allows us to track the cluster-wide WAL statistics
counters.
New columns wal_records, wal_fpi and wal_bytes are added to the
pg_stat_wal view; they report the total number of WAL records,
full page images and bytes generated in the cluster, respectively.
Author: Masahiro Ikeda
Reviewed-by: Amit Kapila, Movead Li, Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/[email protected]
|
|
Two new routines to allocate a hash context and to free it are created,
as these become necessary for the goal behind this refactoring: switching
all the cryptohash implementations for OpenSSL to use EVP (for FIPS, and
also because upstream has not recommended the use of low-level cryptohash
functions for 20 years). Note that OpenSSL hides the internals of
cryptohash contexts since 1.1.0, so it is necessary to leave the
allocation to OpenSSL itself, explaining the need for those two new
routines. This part is going to require more work to properly track
hash contexts with resource owners, but that is not introduced here.
Still, this refactoring makes the move possible.
This reduces the number of routines for all SHA2 implementations from
twelve (SHA{224,256,384,512} with init, update and final calls) to five
(create, free, init, update and final calls) by incorporating the hash
type directly into the hash context data.
The new cryptohash routines are moved to a new file called cryptohash.c
for the fallback implementations, with the SHA2 specifics becoming
internal to src/common/. OpenSSL specifics are part of
cryptohash_openssl.c. This infrastructure is usable for more hash
types, like MD5 or HMAC.
Any code paths using the internal SHA2 routines are adapted to report
errors correctly, which accounts for most of the changes in this commit.
The areas most impacted are checksum manifests, libpq and SCRAM.
Note that e21cbb4 was a first attempt to switch SHA2 to EVP, but it
lacked the refactoring needed for libpq, as done here.
This patch has been tested on Linux and Windows, with and without
OpenSSL, and down to 1.0.1, the oldest version supported on HEAD.
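For reference, typical use of the new generic interface looks roughly like
this (a sketch only; exact signatures and error handling may differ from
the committed code):
    #include "postgres.h"
    #include "common/cryptohash.h"
    #include "common/sha2.h"

    static void
    sha256_bytes(const uint8 *data, size_t len, uint8 *digest)
    {
        /* the hash type now lives in the context itself */
        pg_cryptohash_ctx *ctx = pg_cryptohash_create(PG_SHA256);

        /* digest must have room for PG_SHA256_DIGEST_LENGTH bytes */
        if (ctx == NULL ||
            pg_cryptohash_init(ctx) < 0 ||
            pg_cryptohash_update(ctx, data, len) < 0 ||
            pg_cryptohash_final(ctx, digest) < 0)
            elog(ERROR, "could not compute SHA-256 digest");
        pg_cryptohash_free(ctx);
    }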
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/[email protected]
|
|
As it stood, expandTableLikeClause() re-did the same relation_openrv
call that transformTableLikeClause() had done. However there are
scenarios where this would not find the same table as expected.
We hold lock on the LIKE source table, so it can't be renamed or
dropped, but another table could appear before it in the search path.
This explains the odd behavior reported in bug #16758 when cloning a
table as a temp table of the same name. This case worked as expected
before commit 502898192 introduced the need to open the source table
twice, so we should fix it.
To make really sure we get the same table, let's re-open it by OID not
name. That requires adding an OID field to struct TableLikeClause,
which is a little nervous-making from an ABI standpoint, but as long
as it's at the end I don't think there's any serious risk.
Per bug #16758 from Marc Boeren. Like the previous patch,
back-patch to all supported branches.
Discussion: https://postgr.es/m/[email protected]
|
|
When truncating files by name, use truncate(2). Windows hasn't got it,
so keep our previous coding based on ftruncate(2) as a fallback.
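The pattern is roughly the following (an illustrative sketch, not the
actual file-access code):
    #include <fcntl.h>
    #include <unistd.h>

    static int
    truncate_by_name(const char *path)
    {
    #ifdef WIN32
        /* Windows has no truncate(2); emulate it with open + ftruncate */
        int         fd = open(path, O_RDWR, 0);
        int         ret;

        if (fd < 0)
            return -1;
        ret = ftruncate(fd, 0);
        close(fd);
        return ret;
    #else
        return truncate(path, 0);
    #endif
    }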
Discussion: https://postgr.es/m/16663-fe97ccf9932fc800%40postgresql.org
|
|
For debugging purposes, Path nodes are supposed to have outfuncs
support, but this was overlooked in the original incremental sort patch.
While at it, clean up a couple of other minor oversights, as well as the
bizarre choice of return type for create_incremental_sort_path().
(All the existing callers just cast it to "Path *" immediately, so
they don't care, but some future caller might care.)
outfuncs.c fix by Zhijie Hou, the rest by me
Discussion: https://postgr.es/m/324c4d81d8134117972a5b1f6cdf9560@G08CNEXMBPEKD05.g08.fujitsu.local
|
|
Author: Haiying Tang <[email protected]>
Discussion: https://postgr.es/m/48a0928ac94b497d9c40acf1de394c15@G08CNEXMBPEKD05.g08.fujitsu.local
|
|
Reverts 27838981be9d (some comments are kept). Per discussion, it does
not seem safe to relax the lock level used for this; in order for it to
be safe, there would have to be memory barriers between the point we set
the flag and the point we set the transaction Xid, which perhaps would
not be so bad; but there would also have to be barriers at the readers'
side, which from a performance perspective might be bad.
Now maybe this analysis is wrong and it *is* safe for some reason, but
proof of that is not trivial.
Discussion: https://postgr.es/m/[email protected]
|
|
In the various waiting phases of CREATE INDEX CONCURRENTLY (CIC) and
REINDEX CONCURRENTLY (RC), we wait for other processes to release their
snapshots; this is necessary in general for correctness. However,
processes doing CIC in other tables cannot possibly affect CIC or RC
done in "this" table, so we don't need to wait for those. This commit
adds a flag in MyProc->statusFlags to indicate that the current process
is doing CIC, so that other processes doing CIC or RC can ignore it when
waiting.
Note that this logic is only valid if the index does not access other
tables. For simplicity we avoid setting the flag if the index has a
column that's an expression, or has a WHERE predicate. (It is possible
to have expressional or partial indexes that do not access other tables,
but figuring that out would require more work.)
This flag could potentially also be set by processes doing REINDEX
CONCURRENTLY, so that they can be skipped as well; and used by VACUUM to
ignore processes in CIC or RC for the purposes of computing an Xmin.
That's left for future commits.
Author: Álvaro Herrera <[email protected]>
Author: Dimitry Dolgov <[email protected]>
Reviewed-by: Michael Paquier <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
Up to now, we sent a ParameterStatus message to the client immediately
upon any change in the active value of any GUC_REPORT variable. This
was only barely okay when the feature was designed; now that we have
things like function SET clauses, there are very plausible use-cases
where a GUC_REPORT variable might change many times within a query
--- and even end up back at its original value, perhaps. Fortunately
most of our GUC_REPORT variables are unlikely to be changed often;
but there are proposals in play to enlarge that set, or even make it
user-configurable.
Hence, let's fix things to not generate more than one ParameterStatus
message per variable per query, and to not send any message at all
unless the end-of-query value is different from what we last reported.
Discussion: https://postgr.es/m/[email protected]
|
|
This is a variant of normal Unix-domain sockets that doesn't use the
file system but instead a separate "abstract" namespace. At the user
interface, such sockets are represented by names starting with "@".
Supported on Linux and Windows right now.
Reviewed-by: Michael Paquier <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/[email protected]
|
|
The buildfarm animal walleye, running MinGW 8.1, has been having problems
ever since 697e1d02f and 913ec71d6 went in. This appears to be a bug in
the assembler, which was fixed in a later version.
For now, in order to get that animal running green again, let's just
define pg_attribute_cold and pg_attribute_hot to be empty macros on that
compiler. Hopefully, we can get the support of the owner of the animal to
upgrade to a less buggy compiler and revert this at a later date.
Discussion: https://postgr.es/m/[email protected]
|
|
currtid() and currtid2() are an undocumented pair of functions whose sole
known user is the Postgres ODBC driver; they retrieve the latest TID
version of a tuple given by the caller.
As used by Postgres ODBC, currtid() is a shortcut able to retrieve the
last TID loaded into a backend by passing an OID of 0 (magic value)
after a tuple insertion. This is removed in this commit, as it became
obsolete after the driver began using "RETURNING ctid" with inserts, a
clause supported since Postgres 8.2 (using RETURNING is better for
performance anyway as it reduces the number of round-trips to the
backend).
currtid2() is still used by the driver, so this remains around for now.
Note that this function is kept in its original shape for backward
compatibility reasons.
Per discussion with many people, including Andres Freund, Peter
Eisentraut, Álvaro Herrera, Hiroshi Inoue, Tom Lane and myself.
Bump catalog version.
Discussion: https://postgr.es/m/[email protected]
|
|
Previously this code assumed that all IndexScan nodes supported
mark/restore, which is not true since it depends on optional index AM
support functions. This could lead to errors about missing support
functions in rare edge cases of mergejoins with no sort keys, where an
unordered non-btree index scan was placed on the inner path without a
protecting Materialize node. (Normally, the fact that merge join
requires ordered input would avoid this error.)
Backpatch all the way since this bug is ancient.
Per report from Eugen Konkov on irc.
Discussion: https://postgr.es/m/[email protected]
|
|
1fa22a43a was a quick fix for a portability problem I introduced in
697e1d02f. 1fa22a43a adds a few more cases to the preprocessor logic than
I'd have liked. Andres Freund and Dagfinn Ilmari Mannsåker suggested a
better way to do this.
In passing, also adjust the only current usage of these macros so that the
macro comes before the function's return type in the declaration of the
function. This now matches what the definition of the function does.
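For illustration (hypothetical function, shown only for the attribute
placement):
    /* the attribute macro now precedes the return type, matching the
     * way the function definition is written */
    extern pg_attribute_cold void report_rare_failure(const char *detail);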
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/[email protected]
|
|
This has the advantage that the cost estimates for aggregates can count
the number of calls to transition and final functions correctly.
Bump catalog version, because views can contain Aggrefs.
Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/b2e3536b-1dbc-8303-c97e-89cb0b4a9a48%40iki.fi
|
|
While ereport() and elog() themselves are quite cheap when the
error message level is too low to be printed, some places need to do
substantial work before they can call those macros at all. To allow
optimizing away such setup work when nothing is to be printed, make
elog.c export a new function message_level_is_interesting(elevel)
that reports whether ereport/elog will do anything. Make use of that
in various places that had ad-hoc direct tests of log_min_messages etc.
Also teach ProcSleep to use it to avoid some work. (There may well
be other places that could usefully use this; I didn't search hard.)
Within elog.c, refactor a little bit to avoid having duplicate copies
of the policy-setting logic. When that code was written, we weren't
relying on the availability of inline functions; so it had some
duplications in the name of efficiency, which I got rid of.
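The intended usage pattern looks roughly like this (the string-building
helper is hypothetical):
    if (message_level_is_interesting(DEBUG2))
    {
        /* only build the expensive string if it will actually be emitted */
        char       *details = describe_lock_queue();    /* hypothetical helper */

        elog(DEBUG2, "lock queue: %s", details);
        pfree(details);
    }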
Alvaro Herrera and Tom Lane
Discussion: https://postgr.es/m/[email protected]
|
|
This should fix the breakages caused by 697e1d02f, which seems to break
the build for GCC version < 5.
It seems, according to the GCC manual, that __has_attribute is a "special
operator" and must be tested without any other conditions in the
preprocessor test.
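The resulting pattern looks roughly like this (a sketch; the real c.h
logic differs in detail):
    /* test __has_attribute on its own, then use it in a nested test */
    #ifdef __has_attribute
    #if __has_attribute(cold)
    #define pg_attribute_cold __attribute__((cold))
    #endif
    #endif
    #ifndef pg_attribute_cold
    #define pg_attribute_cold
    #endif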
Per recommendation from the GCC manual via Greg Nancarrow
Reported-by: Greg Nancarrow
Discussion: https://postgr.es/m/CAJcOf-euSu8fhC10v476o9dqnjqKysVs1_vRms-_fvajpZ3kFw@mail.gmail.com
|
|
Here we use a bit of preprocessor trickery to coax supporting compilers
into laying out their generated code so that the code that's in the same
branch as elog(ERROR)/ereport(ERROR) calls is moved away from the hot
path. Effectively, this reduces the size of the hot code meaning that it
can sit on fewer cache lines.
Performance improvements of 10-15% have been seen on highly CPU-bound
workloads using pgbench's TPC-B benchmark.
What's achieved here is very similar to putting the error condition inside
an unlikely() macro. For example:
    if (unlikely(x < 0))
        elog(ERROR, "invalid x value");
Now there's no need to make use of unlikely() here, as the common macro
used by elog and ereport will see that elevel is >= ERROR and use a
pg_attribute_cold-marked version of errstart().
When elevel < ERROR or if it cannot be determined to be constant, the
original behavior is maintained.
Author: David Rowley
Reviewed-by: Andres Freund, Peter Eisentraut
Discussion: https://postgr.es/m/CAApHDvrVpasrEzLL2er7p9iwZFZ%3DJj6WisePcFeunwfrV0js_A%40mail.gmail.com
|
|
For compilers that support __has_attribute and report support for the hot
and cold attributes via __has_attribute(hot) and __has_attribute(cold).
__has_attribute is supported on gcc >= 5, clang >= 2.9 and icc >= 17.
A followup commit will implement some usages of these macros.
Author: David Rowley
Reviewed-by: Andres Freund, Peter Eisentraut
Discussion: https://postgr.es/m/CAApHDvrVpasrEzLL2er7p9iwZFZ%3DJj6WisePcFeunwfrV0js_A%40mail.gmail.com
|
|
Justin Pryzby
Discussion: https://postgr.es/m/[email protected]
|
|
Historically these were called >^ and <^, but that is inconsistent
with the similar box, polygon, and circle operators, which are named
|>> and <<| respectively. Worse, the >^ and <^ names are used for
*not* strict above/below tests for the box type.
Hence, invent new operators following the more common naming. The
old operators remain available for now, and are still accepted by
the relevant index opclasses too. But there's a deprecation notice,
so maybe we can get rid of them someday.
Emre Hasegeli, reviewed by Pavel Borisov
Discussion: https://postgr.es/m/[email protected]
|
|
Missed these in the previous commit.
|
|
Copy.c has grown really large. Split it into more manageable parts:
- copy.c now contains only a few functions that are common to COPY FROM
and COPY TO.
- copyto.c contains code for COPY TO.
- copyfrom.c contains code for initializing COPY FROM, and inserting the
tuples to the correct table.
- copyfromparse.c contains code for reading from the client/file/program,
and parsing the input text/CSV/binary format into tuples.
All of these parts are fairly complicated, and fairly independent of each
other. There is a patch being discussed to implement parallel COPY FROM,
which will add a lot of new code to the COPY FROM path, and another patch
which would allow INSERTs to use the same multi-insert machinery as COPY
FROM, both of which will require refactoring that code. With those two
patches, there's going to be a lot of code churn in copy.c anyway, so now
seems like a good time to do this refactoring.
The CopyStateData struct is also split. All the formatting options, like
FORMAT, QUOTE, ESCAPE, are put in a new CopyFormatOption struct, which
is used by both COPY FROM and TO. Other state data are kept in separate
CopyFromStateData and CopyToStateData structs.
Reviewed-by: Soumyadeep Chakraborty, Erik Rijkers, Vignesh C, Andres Freund
Discussion: https://www.postgresql.org/message-id/8e15b560-f387-7acc-ac90-763986617bfb%40iki.fi
|
|
"FPeq(infinity, infinity)" returned false, on account of getting NaN
when it subtracts the two inputs. Fix that by adding a separate
check for exact equality.
FPle() and FPge() similarly got the wrong answer for two like-signed
infinities. In those cases, we can just rearrange the comparisons
to avoid potentially subtracting infinities.
While the sibling functions FPne() etc accidentally gave the right
answers even with the internal NaN results, it seems best to make
similar adjustments to them to avoid depending on this.
FPeq() has to be converted to an inline function to avoid double
evaluations of its arguments, and I did the same for the others
just for consistency.
In passing, make the handling of NaN cases in line_eq() and
point_eq_point() simpler and easier to reason about, and perhaps
faster.
This results in just one visible regression test change: slope()
now gives DBL_MAX for two inputs of (inf,1e300), which is consistent
with what it does for (1e300,inf), so that seems like a bug fix.
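A sketch of the FPeq() change described above (the committed geo_decls.h
code may differ in detail):
    static inline bool
    FPeq(float8 A, float8 B)
    {
        /* exact equality catches Inf == Inf without producing a NaN */
        return A == B || fabs(A - B) <= EPSILON;
    }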
Discussion: https://postgr.es/m/CAGf+fX70rWFOk5cd00uMfa__0yP+vtQg5ck7c2Onb-Yczp0URA@mail.gmail.com
|
|
Remove the ability to select the random number generator independently of
the SSL library. Instead, use the random number generator from the SSL
library (today only OpenSSL is supported) if one is configured. If no SSL
library is configured, use the platform default (which means use
CryptoAPI on Win32 and /dev/urandom on Linux).
This also restructures pg_strong_random.c to have three clearly separate
sections, one for each implementation, with two functions in each,
instead of a scattered set of ifdefs throughout the whole file.
Author: Daniel Gustafsson, Magnus Hagander, Michael Paquier
Discussion: https://postgr.es/m/[email protected]
|
|
This feature was added a long time ago, in 7c1e67bd5 and eb121ba2c,
but never documented in any user-facing way. (Documentation added
in 6126d3e70 was commented out almost immediately, in 8272fc3f7.)
That's because, while this syntax is defined by SQL:99, our
implementation is only vaguely related to the standard's semantics.
The standard appears to intend a run-time not parse-time test, and
it definitely intends that the test should understand subtype
relationships.
No one has stepped up to fix that in the intervening years, but
people keep coming across the code and asking why it's not documented.
Let's just get rid of it: if anyone ever wants to make it work per
spec, they can easily recover whatever parts of this code are still
of value from our git history.
If there's anyone out there who's actually using this despite its
undocumented status, they can switch to using pg_typeof() instead,
eg. "pg_typeof(something) = 'mytype'::regtype". That gives
essentially the same semantics as what our IS OF code did.
(We didn't have that function last time this was discussed, or
we would have ripped out IS OF then.)
Discussion: https://postgr.es/m/CAKFQuwZ2pTc-DSkOiTfjauqLYkNREeNZvWmeg12Q-_69D+sYZA@mail.gmail.com
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/[email protected]
|
|
Add hash functions for the record type as well as a hash operator
family and operator class for the record type. This enables all the
hash functionality for the record type such as hash-based plans for
UNION/INTERSECT/EXCEPT DISTINCT, recursive queries using UNION
DISTINCT, hash joins, and hash partitioning.
Reviewed-by: Tom Lane <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/38eccd35-4e2d-6767-1b3c-dada1eac3124%402ndquadrant.com
|
|
Provide a way for one process to continue the remaining phases of a
(previously) parallel computation alone. Later patches will use this to
extend Parallel Hash Join.
Author: Melanie Plageman <[email protected]>
Reviewed-by: Thomas Munro <[email protected]>
Discussion: https://postgr.es/m/CA%2BhUKG%2BA6ftXPz4oe92%2Bx8Er%2BxpGZqto70-Q_ERwRaSyA%3DafNg%40mail.gmail.com
|
|
We don't actually need a lock to set PGPROC->statusFlags itself; what we
do need is a shared lock on either XidGenLock or ProcArrayLock in order to
ensure MyProc->pgxactoff keeps still while we modify the mirror array in
ProcGlobal->statusFlags. Some places were using an exclusive lock for
that, which is excessive. Relax those to use shared lock only.
procarray.c has a couple of places with somewhat brittle assumptions
about PGPROC changes: ProcArrayEndTransaction uses only shared lock, so
it's permissible to change MyProc only. On the other hand,
ProcArrayEndTransactionInternal also changes other procs, so it must
hold exclusive lock. Add asserts to ensure those assumptions continue
to hold.
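The resulting pattern for setting a flag looks roughly like this (flag
value illustrative):
    LWLockAcquire(ProcArrayLock, LW_SHARED);
    MyProc->statusFlags |= PROC_IN_SAFE_IC;     /* illustrative flag */
    /* mirror the change; pgxactoff cannot move while we hold the shared lock */
    ProcGlobal->statusFlags[MyProc->pgxactoff] = MyProc->statusFlags;
    LWLockRelease(ProcArrayLock);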
Author: Álvaro Herrera <[email protected]>
Reviewed-by: Michael Paquier <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
Streamline handling of the various strategies that we have to avoid a
page split in nbtinsert.c. When it looks like a leaf page is about to
overflow, we now perform LP_DEAD item deletion and deduplication in one
central place. This greatly simplifies _bt_findinsertloc().
This has an independently useful consequence: nbtree no longer relies on
the BTP_HAS_GARBAGE page level flag/hint for anything important. We
still set and unset the flag in the same way as before, but it's no
longer treated as a gating condition when considering if we should check
for already-set LP_DEAD bits. This happens at the point where the page
looks like it might have to be split anyway, so simply checking the
LP_DEAD bits in passing is practically free. This avoids missing
LP_DEAD bits just because the page-level hint is unset, which is
probably reasonably common (e.g. it happens when VACUUM unsets the
page-level flag without actually removing index tuples whose LP_DEAD-bit
was set recently, after the VACUUM operation began but before it reached
the leaf page in question).
Note that this isn't a big behavioral change compared to PostgreSQL 13.
We were already checking for set LP_DEAD bits regardless of whether the
BTP_HAS_GARBAGE page level flag was set before we considered doing a
deduplication pass. This commit only goes slightly further by doing the
same check for all indexes, even indexes where deduplication won't be
performed.
We don't completely remove the BTP_HAS_GARBAGE flag. We still rely on
it as a gating condition with pg_upgrade'd indexes from before B-tree
version 4/PostgreSQL 12. That makes sense because we sometimes have to
make a choice among pages full of duplicates when inserting a tuple with
pre-version-4 indexes. It probably still pays to avoid accessing the
line pointer array of a page there, since it won't yet be clear whether
we'll insert onto the page in question at all, let alone split it as a
result.
Author: Peter Geoghegan <[email protected]>
Reviewed-By: Victor Yegorov <[email protected]>
Discussion: https://postgr.es/m/CAH2-Wz%3DYpc1PDdk8OVJDChGJBjT06%3DA0Mbv9HyTLCsOknGcUFg%40mail.gmail.com
|
|
In streaming mode, the transaction can be decoded in multiple streams and
those streams can be interleaved with streams of other transactions. So,
we can't remember the transaction's write status in the logical decoding
context because that might get changed due to some other transactions and
lead to wrong answers for 'skip-empty-xacts' option. We decided to keep
each transaction's write status in the ReorderBufferTxn to avoid
interleaved streams changing the status of some unrelated transactions.
Diagnosed-by: Amit Kapila
Author: Dilip Kumar
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CAA4eK1LR7=XNM_TLmpZMFuV8ZQpoxkem--NZJYf8YXmesbvwLA@mail.gmail.com
|
|
With more flags associated with a PGPROC entry that are not related to
vacuum (currently existing or planned), the name "statusFlags" describes
its purpose better.
(The same is done to the mirroring PROC_HDR->vacuumFlags.)
No functional changes in this commit.
This was suggested first by Hari Babu Kommi in [1] and then by Michael
Paquier at [2].
[1] https://postgr.es/m/CAJrrPGcsDC-oy1AhqH0JkXYa0Z2AgbuXzHPpByLoBGMxfOZMEQ@mail.gmail.com
[2] https://postgr.es/m/[email protected]
Author: Dmitry Dolgov <[email protected]>
Reviewed-by: Álvaro Herrera <[email protected]>
Discussion: https://postgr.es/m/20201116182446.qcg3o6szo2zookyr@localhost
|
|
This is mostly straightforward. However, we disallow replacing
constraint triggers or changing the is-constraint property; perhaps
that can be added later, but the complexity versus benefit tradeoff
doesn't look very good.
Also, no special thought is taken here for whether replacing an
existing trigger should result in changes to queued-but-not-fired
trigger actions. We just document that if you're surprised by the
results, too bad, don't do that. (Note that any such pending trigger
activity would have to be within the current session.)
Takamichi Osumi, reviewed at various times by Surafel Temesgen,
Peter Smith, and myself
Discussion: https://postgr.es/m/0DDF369B45A1B44B8A687ED43F06557C010BC362@G01JPEXMBYT03
|
|
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/[email protected]
|
|
Especially for the list API, such as lappend(), forgetting to assign the
return value is a common problem.
Reviewed-by: Michael Paquier <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/[email protected]
|
|
pg_nodiscard means the compiler should warn if the result of a
function call is ignored. The name "nodiscard" is chosen in alignment
with (possibly future) C and C++ standards. It maps to the GCC
attribute warn_unused_result.
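For example (illustrative declaration and call sites, not the actual
header changes):
    extern pg_nodiscard List *lappend(List *list, void *datum);

    mylist = lappend(mylist, item);     /* fine */
    lappend(mylist, item);              /* compiler warns: result ignored */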
Reviewed-by: Michael Paquier <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/[email protected]
|
|
Introduce TimestampDifferenceMilliseconds() to simplify callers
that would rather have the difference in milliseconds, instead of
the select()-oriented seconds-and-microseconds format. This gets
rid of at least one integer division per call, and it eliminates
some apparently-easy-to-mess-up arithmetic.
Two of these call sites were in fact wrong:
* pg_prewarm's autoprewarm_main() forgot to multiply the seconds
by 1000, thus ending up with a delay 1000X shorter than intended.
That doesn't quite make it a busy-wait, but close.
* postgres_fdw's pgfdw_get_cleanup_result() thought it needed to compute
microseconds not milliseconds, thus ending up with a delay 1000X longer
than intended. Somebody along the way had noticed this problem but
misdiagnosed the cause, and imposed an ad-hoc 60-second limit rather
than fixing the units. This was relatively harmless in context, because
we don't care that much about exactly how long this delay is; still,
it's wrong.
There are a few more callers of TimestampDifference() that don't
have a direct need for seconds-and-microseconds, but can't use
TimestampDifferenceMilliseconds() either because they do need
microsecond precision or because they might possibly deal with
intervals long enough to overflow 32-bit milliseconds. It might be
worth inventing another API to improve that, but that seems outside
the scope of this patch; so those callers are untouched here.
Given the fact that we are fixing some bugs, and the likelihood
that future patches might want to back-patch code that uses this
new API, back-patch to all supported branches.
Alexey Kondratov and Tom Lane
Discussion: https://postgr.es/m/[email protected]
|
|
Move the system catalog index declarations from catalog/indexing.h to
the respective parent tables' catalog/pg_*.h files. The original
reason for having it split was that the old genbki system produced the
output in the order of the catalog files it read, so all the indexing
stuff needed to come separately. But this is no longer the case, and
keeping it together makes more sense.
Reviewed-by: John Naylor <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/[email protected]
|
|
Move the system catalog toast table declarations from
catalog/toasting.h to the respective parent tables' catalog/pg_*.h
files. The original reason for having it split was that the old
genbki system produced the output in the order of the catalog files it
read, so all the toasting stuff needed to come separately. But this
is no longer the case, and keeping it together makes more sense.
Reviewed-by: John Naylor <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/[email protected]
|
|
Currently only OpenSSL requires this initialization, but in the future
other SSL implementations are likely to need it as well. Abstracting
this functionality out into a separate function makes this cleaner and
more clear, and also removes the dependency on OpenSSL headers from
fork_process.c.
OpenSSL is special in that we need to initialize this random number
generator even if we're not going to use it directly, until we drop
support for everything prior to OpenSSL 1.1.1. (And of course also if we
actually use it). All other implementations are left empty at this time,
but more are expected to be added in the future.
Author: Daniel Gustafsson <[email protected]>, Michael Paquier <[email protected]>
Reviewed-By: Magnus Hagander <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
float4_div and float8_div correctly produced zero for zero divided
by infinity, but threw an underflow error for nonzero finite values
divided by infinity. This seems wrong; at the very least it's
inconsistent with the behavior recently implemented for numeric
infinities. Remove the error and allow zero to be returned.
This patch also removes a useless isinf() test from the overflow
checks in these functions (non-Inf divided by Inf can't produce Inf).
Extracted from a larger patch; this seems significant outside the
context of geometric operators, so it deserves its own commit.
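Illustrative sketch of the behavior change, using the inline helpers from
utils/float.h:
    float8      r1 = float8_div(0.0, get_float8_infinity());   /* 0, as before */
    float8      r2 = float8_div(1.5, get_float8_infinity());   /* now 0, not an underflow error */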
Kyotaro Horiguchi
Discussion: https://postgr.es/m/CAGf+fX70rWFOk5cd00uMfa__0yP+vtQg5ck7c2Onb-Yczp0URA@mail.gmail.com
|
|
Convert array_append, array_prepend, array_cat, array_position,
array_positions, array_remove, array_replace, and width_bucket
to use anycompatiblearray. This is a simple extension of commit
5c292e6b9 to hit some other places where there's a pretty obvious
gain in usability from doing so.
Ideally we'd also modify other functions taking multiple old-style
polymorphic arguments. But most of the remainder are tied into one
or more operator classes, making any such change a much larger can of
worms than I desire to open right now.
Discussion: https://postgr.es/m/[email protected]
|