|
This new DDL command splits a single partition into several partitions. Just
like the ALTER TABLE ... MERGE PARTITIONS ... command, new partitions are
created using the createPartitionTable() function with the parent partition
as the template.
This commit comprises a quite naive implementation which works in a single
process and holds the ACCESS EXCLUSIVE LOCK on the parent table during all
the operations, including the tuple routing. This is why the new DDL command
can't be recommended for large partitioned tables under high load. However,
this implementation comes in handy in certain cases, even as it is. Also, it
could serve as a foundation for future implementations with less locking and
possibly parallelism.
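For illustration, a usage sketch (the table, partition names, and bounds are
hypothetical, not taken from this commit):
    ALTER TABLE measurement SPLIT PARTITION measurement_2024
      INTO (PARTITION measurement_2024_h1 FOR VALUES FROM ('2024-01-01') TO ('2024-07-01'),
            PARTITION measurement_2024_h2 FOR VALUES FROM ('2024-07-01') TO ('2025-01-01'));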
Discussion: https://postgr.es/m/c73a1746-0cd0-6bdd-6b23-3ae0b7c0c582%40postgrespro.ru
Author: Dmitry Koval <[email protected]>
Co-authored-by: Alexander Korotkov <[email protected]>
Co-authored-by: Tender Wang <[email protected]>
Co-authored-by: Richard Guo <[email protected]>
Co-authored-by: Dagfinn Ilmari Mannsaker <[email protected]>
Co-authored-by: Fujii Masao <[email protected]>
Co-authored-by: Jian He <[email protected]>
Reviewed-by: Matthias van de Meent <[email protected]>
Reviewed-by: Laurenz Albe <[email protected]>
Reviewed-by: Zhihong Yu <[email protected]>
Reviewed-by: Justin Pryzby <[email protected]>
Reviewed-by: Alvaro Herrera <[email protected]>
Reviewed-by: Robert Haas <[email protected]>
Reviewed-by: Stephane Tachoires <[email protected]>
Reviewed-by: Jian He <[email protected]>
Reviewed-by: Alexander Korotkov <[email protected]>
Reviewed-by: Pavel Borisov <[email protected]>
Reviewed-by: Masahiko Sawada <[email protected]>
Reviewed-by: Alexander Lakhin <[email protected]>
Reviewed-by: Kyotaro Horiguchi <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Reviewed-by: Noah Misch <[email protected]>
|
|
This new DDL command merges several partitions into a single partition of the
target table. The target partition is created using the new
createPartitionTable() function with the parent partition as the template.
This commit comprises a quite naive implementation which works in a single
process and holds the ACCESS EXCLUSIVE LOCK on the parent table during all
the operations, including the tuple routing. This is why this new DDL
command can't be recommended for large partitioned tables under a high load.
However, this implementation comes in handy in certain cases, even as it is.
Also, it could serve as a foundation for future implementations with less
locking and possibly parallelism.
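For illustration, a usage sketch (names are hypothetical):
    ALTER TABLE measurement
      MERGE PARTITIONS (measurement_2024_h1, measurement_2024_h2)
      INTO measurement_2024;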
Discussion: https://postgr.es/m/c73a1746-0cd0-6bdd-6b23-3ae0b7c0c582%40postgrespro.ru
Author: Dmitry Koval <[email protected]>
Co-authored-by: Alexander Korotkov <[email protected]>
Co-authored-by: Tender Wang <[email protected]>
Co-authored-by: Richard Guo <[email protected]>
Co-authored-by: Dagfinn Ilmari Mannsaker <[email protected]>
Co-authored-by: Fujii Masao <[email protected]>
Co-authored-by: Jian He <[email protected]>
Reviewed-by: Matthias van de Meent <[email protected]>
Reviewed-by: Laurenz Albe <[email protected]>
Reviewed-by: Zhihong Yu <[email protected]>
Reviewed-by: Justin Pryzby <[email protected]>
Reviewed-by: Alvaro Herrera <[email protected]>
Reviewed-by: Robert Haas <[email protected]>
Reviewed-by: Stephane Tachoires <[email protected]>
Reviewed-by: Jian He <[email protected]>
Reviewed-by: Alexander Korotkov <[email protected]>
Reviewed-by: Pavel Borisov <[email protected]>
Reviewed-by: Masahiko Sawada <[email protected]>
Reviewed-by: Alexander Lakhin <[email protected]>
Reviewed-by: Kyotaro Horiguchi <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Reviewed-by: Noah Misch <[email protected]>
|
|
index-killtuples test depends on the contrib modules btree_gin and
btree_gist, which would not be installed in a temporary installation
with an execution of the main isolation test suite like this one:
make -C src/test/isolation/ check
src/test/isolation/ should not depend on contrib/, and EXTRA_INSTALL has
no effect in this case as this test suite uses its own Makefile rules.
This commit moves index-killtuples into its new module, called "index",
whose name looks like the best fit there can be, as the test depends on more
than one index AM. btree_gin and btree_gist are now pulled into the
temporary installation with EXTRA_INSTALL. The test is renamed to
"killtuples", for simplicity.
Author: Nazir Bilal Yavuz <[email protected]>
Suggested-by: Andres Freund <[email protected]>
Suggested-by: Michael Paquier <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
When instrumenting a MERGE command containing both WHEN NOT MATCHED BY
SOURCE and WHEN NOT MATCHED BY TARGET actions using EXPLAIN ANALYZE, a
concurrent update of the target relation could lead to an Assert
failure in show_modifytable_info(). In a non-assert build, this would
lead to an incorrect value for "skipped" tuples in the EXPLAIN output,
rather than a crash.
This could happen if the concurrent update caused a matched row to no
longer match, in which case ExecMerge() treats the single originally
matched row as a pair of not matched rows, and potentially executes 2
not-matched actions for the single source row. This could then lead to
a state where the number of rows processed by the ModifyTable node
exceeds the number of rows produced by its source node, causing
"skipped_path" in show_modifytable_info() to be negative, triggering
the Assert.
Fix this in ExecMergeMatched() by incrementing the instrumentation
tuple count on the source node whenever a concurrent update of this
kind is detected, if both kinds of merge actions exist, so that the
number of source rows matches the number of actions potentially
executed, and the "skipped" tuple count is correct.
Back-patch to v17, where support for WHEN NOT MATCHED BY SOURCE
actions was introduced.
Bug: #19111
Reported-by: Dilip Kumar <[email protected]>
Author: Dean Rasheed <[email protected]>
Reviewed-by: Dilip Kumar <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 17
|
|
EvalPlanQualStart() failed to propagate es_partition_directory into
the child EState used for EPQ rechecks. When execution time partition
pruning ran during the EPQ scan, executor code dereferenced a NULL
partition directory and crashed.
Previously, propagating es_partition_directory into the EPQ EState was
unnecessary because CreatePartitionPruneState(), which sets it on
demand, also initialized the exec-pruning context. After commit
d47cbf474, CreatePartitionPruneState() now initializes only the init-
time pruning context, leaving exec-pruning context initialization to
ExecInitNode(). Since EvalPlanQualStart() runs only ExecInitNode() and
not CreatePartitionPruneState(), it can encounter a NULL
es_partition_directory. Other executor fields initialized during
CreatePartitionPruneState() are already copied into the child EState
thanks to commit 8741e48e5d, but es_partition_directory was missed.
Fix by borrowing the parent estate's es_partition_directory in
EvalPlanQualStart(), and by clearing that field in EvalPlanQualEnd()
so the parent remains responsible for freeing the directory.
Add an isolation test permutation that triggers EPQ with execution-
time partition pruning, the case that reproduces this crash.
Bug: #19078
Reported-by: Yuri Zamyatin <[email protected]>
Diagnosed-by: David Rowley <[email protected]>
Author: David Rowley <[email protected]>
Co-authored-by: Amit Langote <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 18
|
|
It is possible to call pg_stat_reset_single_function_counters() for a
single function, but the reset time was missing from the system view showing
its statistics. Like all the fields of pg_stat_user_functions, the GUC
track_functions needs to be enabled to show the statistics about
function executions.
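A minimal illustration of the reset call whose timestamp is now exposed (the
function name "myfunc" is illustrative):
    SET track_functions = 'all';   -- required for pg_stat_user_functions to accumulate data
    SELECT myfunc();
    SELECT pg_stat_reset_single_function_counters('myfunc'::regproc);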
Bump catalog version.
Bump PGSTAT_FILE_FORMAT_ID, as a result of the new field added to
PgStat_StatFuncEntry.
Author: Bertrand Drouvot <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
The functions test_stat_func() and test_stat_func2() had empty
function bodies, so that they took very little time to run. This made
it possible that on machines with relatively low timer resolution the
functions could return before the clock advanced, making the test fail
(as seen on buildfarm members fruitcrow and hamerkop).
To avoid that, pg_sleep for 10us during the functions. As far as we
can tell, all current hardware has clock resolution much less than
that. (The current implementation of pg_sleep will round it up to
1ms anyway, but someday that might get improved.)
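A sketch of the approach, assuming the test functions are plain SQL functions
(the body shown here is illustrative, not the exact test code):
    CREATE OR REPLACE FUNCTION test_stat_func() RETURNS void
      LANGUAGE sql AS $$ SELECT pg_sleep(0.00001) $$;  -- 10us, enough for the clock to advance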
Author: Michael Banck <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 15
|
|
Commit bb3ec16e14 moved partition pruning metadata into PlannedStmt.
At executor startup this metadata is used to initialize the EState
fields es_part_prune_infos, es_part_prune_states, and
es_part_prune_results. EvalPlanQualStart() failed to copy those
fields into the child EState, causing NULL dereference when Append
ran partition pruning during a recheck. This can occur with DELETE
or UPDATE on partitioned tables that use runtime pruning, e.g. with
generic plans.
Fix by copying all partition pruning state into the EPQ estate.
Add an isolation test that reproduces the crash with concurrent
UPDATE and DELETE on a partitioned table, where the DELETE session
hits the crash during its EPQ recheck after the UPDATE commits.
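A sketch of the two-session scenario exercised by the new isolation test
(table name and values are illustrative):
    -- session 2 uses a generic plan so the DELETE carries run-time pruning state
    SET plan_cache_mode = force_generic_plan;
    PREPARE del(int) AS DELETE FROM parted WHERE a = $1;
    -- session 1
    BEGIN;
    UPDATE parted SET a = 2 WHERE a = 1;
    -- session 2 (blocks on the row updated above)
    EXECUTE del(1);
    -- session 1
    COMMIT;
    -- session 2 now performs its EPQ recheck, which previously dereferenced NULL pruning state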
Bug: #19056
Reported-by: Fei Changhong <[email protected]>
Diagnosed-by: Fei Changhong <[email protected]>
Author: David Rowley <[email protected]>
Co-authored-by: Amit Langote <[email protected]>
Discussion: https://postgr.es/m/19056-a677cef9b54d76a0%40postgresql.org
|
|
The EvalPlanQual recheck for TID Range Scan wasn't rechecking that the TID
qual still passed after following update chains. This could result in tuples
being updated or deleted by plans using TID Range Scans where the ctid of
the new (updated) tuple no longer matches the clause of the scan. This isn't
desired behavior, and it isn't consistent with what would happen if the
chosen plan had used an Index or Seq Scan. It could also lead to
hard-to-predict behavior for scans that contain TID quals and other quals,
since the planner has freedom to choose TID Range or some other non-TID scan
method for such queries, and the chosen plan could change at any moment.
Here we fix this by properly implementing the recheck function for TID
Range Scans.
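A sketch of a query shape affected by this (table name and TID bounds are
illustrative):
    -- may be planned as a TID Range Scan; if a concurrent update moves a matching
    -- row to a ctid outside the range, the EPQ recheck must now reject it
    DELETE FROM t WHERE ctid >= '(0,1)' AND ctid < '(10,1)' AND flag;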
Backpatch to 14, where TID Range Scans were added.
Reported-by: Sophie Alpert <[email protected]>
Author: Sophie Alpert <[email protected]>
Author: David Rowley <[email protected]>
Reviewed-by: David Rowley <[email protected]>
Reviewed-by: Chao Li <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 14
|
|
The EvalPlanQual recheck for TID Scan wasn't rechecking that the TID qual
still passed after following update chains. This could result in tuples
being updated or deleted by plans using TID Scans where the ctid of the
new (updated) tuple no longer matches the clause of the scan. This isn't
desired behavior, and it isn't consistent with what would happen if the
chosen plan had used an Index or Seq Scan. It could also lead to
hard-to-predict behavior for scans that contain TID quals and other quals,
since the planner has freedom to choose TID or some other scan method for
such queries, and the chosen plan could change at any moment.
Here we fix this by properly implementing the recheck function for TID
Scans.
Backpatch to 13, the oldest supported version.
Reported-by: Sophie Alpert <[email protected]>
Author: Sophie Alpert <[email protected]>
Author: David Rowley <[email protected]>
Reviewed-by: David Rowley <[email protected]>
Reviewed-by: Chao Li <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 13
|
|
This commit adds an isolation test showing that temporal foreign keys do
not permit referential integrity violations under concurrency, like
fk-snapshot-2. You can show that the test fails by passing false for
detectNewRows to ri_PerformCheck in ri_restrict.
Author: Paul Jungwirth <[email protected]>
Reviewed-by: Rustam ALLAKOV <[email protected]>
Reviewed-by: Dean Rasheed <[email protected]>
Reviewed-by: Robert Haas <[email protected]>
Discussion: https://postgr.es/m/CA+renyUp=xja80rBaB6NpY3RRdi750y046x28bo_xg29zKY72Q@mail.gmail.com
|
|
This commit adds a missing isolation test for (non-PERIOD) foreign keys.
With REPEATABLE READ, one transaction can insert a referencing row while
another deletes the referenced row, and both see a valid state. But
after they have committed, the table violates referential integrity.
If the INSERT precedes the DELETE, we use a crosscheck snapshot to see
the just-added row, so that the DELETE can raise a foreign key error.
You can see the table violate referential integrity if you change
ri_restrict to pass false for detectNewRows to ri_PerformCheck.
A crosscheck snapshot is not needed when the DELETE comes first, because
the INSERT's trigger takes a FOR KEY SHARE lock that sees the row now
marked for deletion, waits for that transaction to commit, and raises a
serialization error. I (Paul) added a test for that too though.
We already have a similar test (in ri-triggers.spec) for SERIALIZABLE
snapshot isolation showing that you can implement foreign keys with just
pl/pgSQL, but that test does nothing to validate ri_triggers.c. We also
have tests (in fk-snapshot.spec) for other concurrency scenarios, but
not this one: we test concurrently deleting both the referencing and
referenced row, when the constraint activates a cascade/set null action.
But those tests don't exercise ri_restrict, and the consequence of
omitting a crosscheck comparison is different: a serialization failure,
not a referential integrity violation.
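A sketch of the crosscheck scenario under REPEATABLE READ (table and column
names are illustrative):
    -- session 1
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    INSERT INTO child (parent_id) VALUES (1);
    -- session 2 (blocks on the parent row locked FOR KEY SHARE by the INSERT's FK check)
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    DELETE FROM parent WHERE id = 1;
    -- session 1
    COMMIT;
    -- session 2: its snapshot cannot see the new child row, but the crosscheck
    -- snapshot can, so the DELETE raises a foreign key violation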
Author: Paul Jungwirth <[email protected]>
Reviewed-by: Rustam ALLAKOV <[email protected]>
Reviewed-by: Dean Rasheed <[email protected]>
Reviewed-by: Robert Haas <[email protected]>
Discussion: https://postgr.es/m/CA+renyUp=xja80rBaB6NpY3RRdi750y046x28bo_xg29zKY72Q@mail.gmail.com
|
|
When executing a MERGE UPDATE action, if there is more than one
concurrent update of the target row, the lock-and-retry code would
sometimes incorrectly identify the latest version of the target tuple,
leading to incorrect results.
This was caused by using the ctid field from the TM_FailureData
returned by table_tuple_lock() in a case where the result was TM_Ok,
which is unsafe because the TM_FailureData struct is not guaranteed to
be fully populated in that case. Instead, it should use the tupleid
passed to (and updated by) table_tuple_lock().
To reduce the chances of similar errors in the future, improve the
commentary for table_tuple_lock() and TM_FailureData to make it
clearer that table_tuple_lock() updates the tid passed to it, and most
fields of TM_FailureData should not be relied on in non-failure cases.
An exception to this is the "traversed" field, which is set in both
success and failure cases.
Reported-by: Dmitry <[email protected]>
Author: Yugo Nagata <[email protected]>
Reviewed-by: Dean Rasheed <[email protected]>
Reviewed-by: Chao Li <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 15
|
|
Replace 'committs' with 'commits'.
Author: Chao Li <[email protected]>
Reviewed-by: Ashutosh Bapat <[email protected]>
Discussion: https://postgr.es/m/CAEoWx2=BESkfXsZ9jQW+1NcGTazKuj2wEXsPm1_EpgzHs0BHDQ@mail.gmail.com
|
|
Commit d31bbfb6590 lost some test coverage, because the situation
being tested, a concurrent DROP, cannot happen anymore. Put the test
coverage back with a bit of a trick, by deleting directly from the
catalog table.
Co-authored-by: Noah Misch <[email protected]>
Reviewed-by: Heikki Linnakangas <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/[email protected]
|
|
This test was failing because MD5 computations are not supported in
these environments. This switches the test to rely on sha256() instead,
providing the same coverage while avoiding the failure.
Oversight in f57e214d1cbb. Per buildfarm members gecko, molamola,
shikra and froghopper.
Discussion: https://postgr.es/m/[email protected]
|
|
This test exercises the corner case in toast_save_datum() where CLUSTER
operations encounter duplicated TOAST references, reusing the existing
TOAST data instead of creating redundant copies.
During table rewrites like CLUSTER, both live and recently-dead versions
of a row may reference the same TOAST value. When copying the second or
later version of such a row, the system checks if a TOAST value already
exists in the new TOAST table using toastrel_valueid_exists(). If
found, toast_save_datum() sets data_todo = 0 so that redundant data is not
stored, ensuring only one copy of the TOAST value exists in the new
table.
The test relies on a combination of UPDATE, CLUSTER, and checks of the
TOAST values used before and after the relation rewrite, to make sure
that the same values are reused across the rewrite.
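A sketch of the pattern the test exercises (table and index names are
illustrative):
    UPDATE toasted SET flag = NOT flag;   -- new row version keeps pointing at the same TOAST value
    CLUSTER toasted USING toasted_pkey;   -- rewrite copies both versions; the TOAST value must be stored only once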
This is a continuation of 69f75d671475 to make sure that this corner
case keeps working should we mess with this area of the code.
Author: Nikhil Kumar Veldanda <[email protected]>
Discussion: https://postgr.es/m/CAFAfj_E+kw5P713S8_jZyVgQAGVFfzFiTUJPrgo-TTtJJoazQw@mail.gmail.com
|
|
Previously our tests did not exercise kill_prior_tuples for hash and gist. For
gist some related paths were reached, but gist's implementation seems to not
work if all the dead tuples are on one page (or something like that). The
coverage for other index types was rather incidental.
Thus add an explicit test ensuring kill_prior_tuples works at all.
Reviewed-by: Heikki Linnakangas <[email protected]>
Discussion: https://postgr.es/m/lxzj26ga6ippdeunz6kuncectr5gfuugmm2ry22qu6hcx6oid6@lzx3sjsqhmt6
|
|
If a MERGE inside a CTE attempts an UPDATE or DELETE on a table with
BEFORE ROW triggers, and a concurrent UPDATE or DELETE happens, the
merge code would fail (crashing in the case of an UPDATE action, and
potentially executing the wrong action for a DELETE action).
This is the same issue that 9321c79c86 attempted to fix, except now
for a MERGE inside a CTE. As noted in 9321c79c86, what needs to happen
is for the trigger code to exit early, returning the TM_Result and
TM_FailureData information to the merge code, if a concurrent
modification is detected, rather than attempting to do an EPQ
recheck. The merge code will then do its own rechecking, and rescan
the action list, potentially executing a different action in light of
the concurrent update. In particular, the trigger code must never call
ExecGetUpdateNewTuple() for MERGE, since that is bound to fail because
MERGE has its own per-action projection information.
Commit 9321c79c86 did this using estate->es_plannedstmt->commandType
in the trigger code to detect that a MERGE was being executed, which
is fine for a plain MERGE command, but does not work for a MERGE
inside a CTE. Fix by passing that information to the trigger code as
an additional parameter passed to ExecBRUpdateTriggers() and
ExecBRDeleteTriggers().
Back-patch as far as v17 only, since MERGE cannot appear inside a CTE
prior to that. Additionally, take care to preserve the trigger ABI in
v17 (though not in v18, which is still in beta).
Bug: #18986
Reported-by: Yaroslav Syrytsia <[email protected]>
Author: Dean Rasheed <[email protected]>
Reviewed-by: Michael Paquier <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 17
|
|
With debug_discard_caches = 1, the runtime of this test script
increased by about a factor of 10 after commit 0dca5d68d. That's
causing some of our buildfarm animals to fail with a timeout.
The reason for the increased time is that now we are re-planning
some intentionally-non-inlineable SQL functions on every execution,
where the previous coding held onto the original plans throughout
the outer query. The previous behavior was arguably quite buggy,
so I don't think 0dca5d68d deserves blame here. But we would
like this test script to not take so long.
To fix, instead of forcing a "parallel safe" label via a
non-inlineable SQL function, apply it directly to the advisory-lock
functions by making internal-language aliases for them. A small
problem is that the advisory-lock functions return void but this
test would really like them to return integer 1. I cheated here by
declaring the aliases as returning "int". That's perhaps undue
familiarity with the implementation of PG_RETURN_VOID(), but that
hasn't changed in twenty years and is unlikely to do so in the next
twenty. That gets us an integer 0 result, and then an inline-able
wrapper to convert that to an integer 1 allows the rest of the script
to remain unchanged.
For me, this reduces the runtime with debug_discard_caches = 1
by about 100x, making the test comfortably faster than before
instead of slower.
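A rough sketch of the trick, under the assumption that the internal C
function behind pg_advisory_lock(bigint) is named pg_advisory_lock_int8
(the alias names here are illustrative):
    CREATE FUNCTION advisory_lock_zero(bigint) RETURNS int
      LANGUAGE internal PARALLEL SAFE AS 'pg_advisory_lock_int8';
    CREATE FUNCTION take_lock(bigint) RETURNS int
      LANGUAGE sql PARALLEL SAFE
      AS $$ SELECT advisory_lock_zero($1) + 1 $$;  -- inline-able wrapper returning 1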
Discussion: https://postgr.es/m/[email protected]
|
|
In the previous commit, BitmapHeapScan's skip_fetch optimization was removed
because it was broken in ways that are not easily fixable. Add a test that
verifies we don't re-introduce this bug if somebody tries to re-add the
feature.
Only add the test to master for now, since it's possible it's not entirely
stable. That seems sufficient, as we're not going to re-introduce the feature
on the back branches. I did verify that the test passes on all branches. If
the test turns out to be unproblematic, we can backpatch it later, should we
feel a need to do so.
Discussion: https://postgr.es/m/CAEze2Wg3gXXZTr6_rwC+s4-o2ZVFB5F985uUSgJTsECx6AmGcQ@mail.gmail.com
|
|
This allows the RETURNING list of INSERT/UPDATE/DELETE/MERGE queries
to explicitly return old and new values by using the special aliases
"old" and "new", which are automatically added to the query (if not
already defined) while parsing its RETURNING list, allowing things
like:
RETURNING old.colname, new.colname, ...
RETURNING old.*, new.*
Additionally, a new syntax is supported, allowing the names "old" and
"new" to be changed to user-supplied alias names, e.g.:
RETURNING WITH (OLD AS o, NEW AS n) o.colname, n.colname, ...
This is useful when the names "old" and "new" are already defined,
such as inside trigger functions, allowing backwards compatibility to
be maintained -- the interpretation of any existing queries that
happen to already refer to relations called "old" or "new", or use
those as aliases for other relations, is not changed.
For an INSERT, old values will generally be NULL, and for a DELETE,
new values will generally be NULL, but that may change for an INSERT
with an ON CONFLICT ... DO UPDATE clause, or if a query rewrite rule
changes the command type. Therefore, we put no restrictions on the use
of old and new in any DML queries.
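For illustration (table and column names are hypothetical):
    UPDATE accounts SET balance = balance + 100 WHERE id = 1
      RETURNING old.balance AS old_balance, new.balance AS new_balance;
    -- with user-supplied aliases, e.g. inside a trigger function where old/new are taken
    UPDATE accounts SET balance = balance + 100 WHERE id = 1
      RETURNING WITH (OLD AS o, NEW AS n) o.balance, n.balance;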
Dean Rasheed, reviewed by Jian He and Jeff Davis.
Discussion: https://postgr.es/m/CAEZATCWx0J0-v=Qjc6gXzR=KtsdvAE7Ow=D=mu50AgOe+pvisQ@mail.gmail.com
|
|
The inplace update survives ROLLBACK. The inval didn't, so another
backend's DDL could then update the row without incorporating the
inplace update. In the test this fixes, a mix of CREATE INDEX and ALTER
TABLE resulted in a table with an index, yet relhasindex=f. That is a
source of index corruption. Back-patch to v12 (all supported versions).
The back branch versions don't change WAL, because those branches just
added end-of-recovery SIResetAll(). All branches change the ABI of
extern function PrepareToInvalidateCacheTuple(). No PGXN extension
calls that, and there's no apparent use case in extensions.
Reviewed by Nitin Motiani and (in earlier versions) Andres Freund.
Discussion: https://postgr.es/m/[email protected]
|
|
Stop computing a never-used value. This removes the read; the read had
no functional implications. Back-patch to v12, like commit
a07e03fd8fa7daf4d1356f7cb501ffe784ea6257.
Reported by Alexander Lakhin.
Discussion: https://postgr.es/m/[email protected]
|
|
The previous commit fixed some ways of losing an inplace update. It
remained possible to lose one when a backend working toward a
heap_update() copied a tuple into memory just before inplace update of
that tuple. In catalogs eligible for inplace update, use LOCKTAG_TUPLE
to govern admission to the steps of copying an old tuple, modifying it,
and issuing heap_update(). This includes MERGE commands. To avoid
changing most of the pg_class DDL, don't require LOCKTAG_TUPLE when
holding a relation lock sufficient to exclude inplace updaters.
Back-patch to v12 (all supported versions). In v13 and v12, "UPDATE
pg_class" or "UPDATE pg_database" can still lose an inplace update. The
v14+ UPDATE fix needs commit 86dc90056dfdbd9d1b891718d2e5614e3e432f35,
and it wasn't worth reimplementing that fix without such infrastructure.
Reviewed by Nitin Motiani and (in earlier versions) Heikki Linnakangas.
Discussion: https://postgr.es/m/[email protected]
|
|
As previously-added tests demonstrated, heap_inplace_update() could
instead update an unrelated tuple of the same catalog. It could lose
the update. Losing relhasindex=t was a source of index corruption.
Inplace-updating commands like VACUUM will now wait for heap_update()
commands like GRANT TABLE and GRANT DATABASE. That isn't ideal, but a
long-running GRANT already hurts VACUUM progress more just by keeping an
XID running. The VACUUM will behave like a DELETE or UPDATE waiting for
the uncommitted change.
For implementation details, start at the systable_inplace_update_begin()
header comment and README.tuplock. Back-patch to v12 (all supported
versions). In back branches, retain a deprecated heap_inplace_update(),
for extensions.
Reported by Smolkin Grigory. Reviewed by Nitin Motiani, (in earlier
versions) Heikki Linnakangas, and (in earlier versions) Alexander
Lakhin.
Discussion: https://postgr.es/m/CAMp+ueZQz3yDk7qg42hk6-9gxniYbp-=bG2mgqecErqR5gGGOA@mail.gmail.com
|
|
This commit reverts 1adf16b8fb, 87c21bb941, and subsequent fixes and
improvements including df64c81ca9, c99ef1811a, 9dfcac8e15, 885742b9f8,
842c9b2705, fcf80c5d5f, 96c7381c4c, f4fc7cb54b, 60ae37a8bc, 259c96fa8f,
449cdcd486, 3ca43dbbb6, 2a679ae94e, 3a82c689fd, fbd4321fd5, d53a4286d7,
c086896625, 4e5d6c4091, 04158e7fa3.
The reason for reverting is security issues related to repeatable name lookups
(CVE-2014-0062). Even though 04158e7fa3 solved part of the problem, there
are still remaining issues, which aren't feasible to even carefully analyze
before the RC deadline.
Reported-by: Noah Misch, Robert Haas
Discussion: https://postgr.es/m/20240808171351.a9.nmisch%40google.com
Backpatch-through: 17
|
|
Previously, when a path type was disabled by e.g. enable_seqscan=false,
we either avoided generating that path type in the first place, or
more commonly, we added a large constant, called disable_cost, to the
estimated startup cost of that path. This latter approach can distort
planning. For instance, an extremely expensive non-disabled path
could seem to be worse than a disabled path, especially if the full
cost of that path node need not be paid (e.g. due to a Limit).
Or, as in the regression test whose expected output changes with this
commit, the addition of disable_cost can make two paths that would
normally be distinguishable in cost seem to have fuzzily the same cost.
To fix that, we now count the number of disabled path nodes and
consider that a high-order component of both the startup cost and the
total cost. Hence, the path list is now sorted by disabled_nodes and
then by total_cost, instead of just by the latter, and likewise for
the partial path list. It is important that this number is a count
and not simply a Boolean; else, as soon as we're unable to respect
disabled path types in all portions of the path, we stop trying to
avoid them where we can.
Because the path list is now sorted by the number of disabled nodes,
the join prechecks must compute the count of disabled nodes during
the initial cost phase instead of postponing it to final cost time.
Counts of disabled nodes do not cross subquery levels; at present,
there is no reason for them to do so, since we do not postpone
path selection across subquery boundaries (see make_subplan).
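For illustration, assuming a simple table t; the point is the new ordering
rule, not the exact plan chosen:
    SET enable_seqscan = off;
    EXPLAIN SELECT * FROM t;  -- a path containing a disabled node now loses to any path
                              -- with fewer disabled nodes, regardless of total_cost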
Reviewed by Andres Freund, Heikki Linnakangas, and David Rowley.
Discussion: http://postgr.es/m/CA+TgmoZ_+MS+o6NeGK2xyBv-xM+w1AfFVuHE4f_aq6ekHv7YSQ@mail.gmail.com
|
|
This used to be part of CREATE OPERATOR CLASS and ALTER OPERATOR
FAMILY, but it has done nothing (except issue a NOTICE) since
PostgreSQL 8.4. Commit 30e7c175b81 removed support for dumping from
pre-9.2 servers, so this no longer serves any need.
This now removes it completely, and you'd get a normal parse error if
you used it.
Reviewed-by: Aleksander Alekseev <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/113ef2d2-3657-4353-be97-f28fceddbca1%40eisentraut.org
|
|
Per buildfarm members prion and trilobite. Back-patch to v12 (all
supported versions), like commit
0844b3968985447ed0a6937cfc8639e379da2fe6.
Strategy reviewed by Tom Lane.
Discussion: https://postgr.es/m/[email protected]
|
|
This covers both regular and inplace changes, since bugs arise at their
intersection. Where marked, these witness extant bugs. Back-patch to
v12 (all supported versions).
Reviewed (in an earlier version) by Robert Haas.
Discussion: https://postgr.es/m/[email protected]
|
|
This fixes various typos, duplicated words, and tiny bits of whitespace
mainly in code comments but also in docs.
Author: Daniel Gustafsson <[email protected]>
Author: Heikki Linnakangas <[email protected]>
Author: Alexander Lakhin <[email protected]>
Author: David Rowley <[email protected]>
Author: Nazir Bilal Yavuz <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
The fixes relate to comments, error messages, and corresponding expected output
of regression tests.
Discussion: https://postgr.es/m/CAMbWs49DDsknxyoycBqiE72VxzL_sYHF6zqL8dSeNehKPJhkKg%40mail.gmail.com
Discussion: https://postgr.es/m/86bfd241-a58c-479a-9a72-2c67a02becf8%40postgrespro.ru
Discussion: https://postgr.es/m/CAHewXNkGMPU50QG7V6Q60JGFORfo8LfYO1_GCkCa0VWbmB-fEw%40mail.gmail.com
Author: Richard Guo, Dmitry Koval, Tender Wang
|
|
This new DDL command splits a single partition into several partitions.
Just like the ALTER TABLE ... MERGE PARTITIONS ... command, new partitions
are created using the createPartitionTable() function with the parent
partition as the template.
This commit comprises a quite naive implementation which works in a single
process and holds the ACCESS EXCLUSIVE LOCK on the parent table during all
the operations, including the tuple routing. This is why this new DDL
command can't be recommended for large partitioned tables under a high load.
However, this implementation comes in handy in certain cases, even as it is.
Also, it could serve as a foundation for future implementations with less
locking and possibly parallelism.
Discussion: https://postgr.es/m/c73a1746-0cd0-6bdd-6b23-3ae0b7c0c582%40postgrespro.ru
Author: Dmitry Koval
Reviewed-by: Matthias van de Meent, Laurenz Albe, Zhihong Yu, Justin Pryzby
Reviewed-by: Alvaro Herrera, Robert Haas, Stephane Tachoires
|
|
This new DDL command merges several partitions into a single partition of
the target table. The target partition is created using the new
createPartitionTable() function with the parent partition as the template.
This commit comprises a quite naive implementation which works in a single
process and holds the ACCESS EXCLUSIVE LOCK on the parent table during all
the operations, including the tuple routing. This is why this new DDL
command can't be recommended for large partitioned tables under a high load.
However, this implementation comes in handy in certain cases, even as it is.
Also, it could serve as a foundation for future implementations with less
locking and possibly parallelism.
Discussion: https://postgr.es/m/c73a1746-0cd0-6bdd-6b23-3ae0b7c0c582%40postgrespro.ru
Author: Dmitry Koval
Reviewed-by: Matthias van de Meent, Laurenz Albe, Zhihong Yu, Justin Pryzby
Reviewed-by: Alvaro Herrera, Robert Haas, Stephane Tachoires
|
|
This allows MERGE commands to include WHEN NOT MATCHED BY SOURCE
actions, which operate on rows that exist in the target relation, but
not in the data source. These actions can execute UPDATE, DELETE, or
DO NOTHING sub-commands.
This is in contrast to already-supported WHEN NOT MATCHED actions,
which operate on rows that exist in the data source, but not in the
target relation. To make this distinction clearer, such actions may
now be written as WHEN NOT MATCHED BY TARGET.
Writing WHEN NOT MATCHED without specifying BY SOURCE or BY TARGET is
equivalent to writing WHEN NOT MATCHED BY TARGET.
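For illustration, a sketch combining all three kinds of actions (table and
column names are hypothetical):
    MERGE INTO target t
      USING source s ON t.id = s.id
      WHEN MATCHED THEN UPDATE SET val = s.val
      WHEN NOT MATCHED BY TARGET THEN INSERT (id, val) VALUES (s.id, s.val)
      WHEN NOT MATCHED BY SOURCE THEN DELETE;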
Dean Rasheed, reviewed by Alvaro Herrera, Ted Yu and Vik Fearing.
Discussion: https://postgr.es/m/CAEZATCWqnKGc57Y_JanUBHQXNKcXd7r=0R4NEZUVwP+syRkWbA@mail.gmail.com
|
|
Author: Andres Freund
Reviewed-by: Pavel Borisov
Discussion: https://www.postgresql.org/message-id/flat/CAPpHfdua-YFw3XTprfutzGp28xXLigFtzNbuFY8yPhqeq6X5kg%40mail.gmail.com
|
|
We don't determine the position at which a process waiting for a lock
should insert itself into the wait queue until we reach ProcSleep(),
and we may at that point discover that we must insert ourselves ahead
of everyone who wants a conflicting lock, in which case we obtain the
lock immediately. Up until now, a no-wait lock acquisition would fail
in such cases, erroneously claiming that the lock couldn't be obtained
immediately. Fix that by trying ProcSleep even in the no-wait case.
No back-patch for now, because I'm treating this as an improvement to
the existing no-wait feature. It could instead be argued that it's a
bug fix, on the theory that there should never be any case whatsoever
where no-wait fails to obtain a lock that would have been obtained
immediately without no-wait, but I'm reluctant to interpret the
semantics of no-wait that strictly.
Robert Haas and Jingxian Li
Discussion: http://postgr.es/m/CA+TgmobCH-kMXGVpb0BB-iNMdtcNkTvcZ4JBxDJows3kYM+GDg@mail.gmail.com
|
|
Roles with MAINTAIN on a relation may run VACUUM, ANALYZE, REINDEX,
REFRESH MATERIALIZED VIEW, CLUSTER, and LOCK TABLE on the relation.
Roles with privileges of pg_maintain may run those same commands on
all relations.
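For illustration (role and table names are hypothetical):
    GRANT MAINTAIN ON TABLE measurement TO reporting_role;  -- per-relation maintenance rights
    GRANT pg_maintain TO dba_role;                          -- maintenance rights on all relations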
This was previously committed for v16, but it was reverted in
commit 151c22deee due to concerns about search_path tricks that
could be used to escalate privileges to the table owner. Commits
2af07e2f74, 59825d1639, and c7ea3f4229 resolved these concerns by
restricting search_path when running maintenance commands.
Bumps catversion.
Reviewed-by: Jeff Davis
Discussion: https://postgr.es/m/20240305161235.GA3478007%40nathanxps13
|
|
When an UPDATE or DELETE action in MERGE returns TM_SelfModified,
there are 2 possible causes:
1). The target tuple was already updated or deleted by the current
command. This can happen if the target row joins to more than one
source row, and the SQL standard explicitly says that this must be
an error.
2). The target tuple was already updated or deleted by a later command
in the current transaction. This can happen if the tuple is
modified by a BEFORE trigger or a volatile function used in the
query, and should be an error for the same reason that it is in a
plain UPDATE or DELETE command.
In MERGE's primary error handling block, it failed to check for (2),
causing it to return a misleading error message in such cases.
In the secondary error handling block, following a concurrent update
from another session, it failed to check for (1), causing it to
silently ignore target rows joined to more than one source row,
instead of reporting an error.
Fix this, and add tests for both of these cases.
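A sketch of case (1), where one target row joins to two source rows (names
are illustrative):
    MERGE INTO target t
      USING (VALUES (1, 'a'), (1, 'b')) AS s(id, val) ON t.id = s.id
      WHEN MATCHED THEN UPDATE SET val = s.val;
    -- must fail, since the same target row would be affected twice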
Per report from Wenjiang Zhang. Back-patch to v15, where MERGE was
introduced.
Discussion: https://postgr.es/m/tencent_41DE0FF443FE14B94A5898D373792109E408%40qq.com
|
|
The new names are intended to match those in an upcoming patch that adds
a few GUCs to configure the SLRU buffer sizes.
Backwards compatibility concern: this changes the accepted names for
function pg_stat_slru_rest(). Since this function recognizes "any other
string" as a request to reset the entry for "other", this means that
calling it with the old names would silently reset "other" instead of
doing nothing or throwing an error.
Reviewed-by: Andrey M. Borodin <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
51efe38cb92f introduced a bunch of tests for idle_in_transaction_session_timeout,
transaction_timeout and statement_timeout. These tests were too flaky on some
slow buildfarm machines, so we plan to replace them with TAP tests using
injection points. This commit removes the flaky tests.
Discussion: https://postgr.es/m/CAAhFRxiQsRs2Eq5kCo9nXE3HTugsAAJdSQSmxncivebAxdmBjQ%40mail.gmail.com
Author: Andrey Borodin
|
|
Don't deal with transaction timeout in PostgresMain(). Instead, release
transaction timeout activated by StartTransaction() in
CommitTransaction()/AbortTransaction()/PrepareTransaction(). Deal with both
enabling and disabling transaction timeout in assign_transaction_timeout().
Also, remove potentially flaky timeouts-long isolation test, which has no
guarantees to pass on slow/busy machines.
Reported-by: Andres Freund
Discussion: https://postgr.es/m/20240215230856.pc6k57tqxt7fhldm%40awork3.anarazel.de
|
|
This commit adds a timeout that is expected to be used to prevent
long-running transactions. Any session whose transaction spans longer than
this timeout will be terminated.
However, this timeout is not applied to prepared transactions.
Only transactions with user connections are affected.
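For illustration (the timeout value is arbitrary):
    SET transaction_timeout = '5s';
    BEGIN;
    SELECT pg_sleep(10);   -- the session is terminated once the transaction exceeds 5 seconds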
Discussion: https://postgr.es/m/CAAhFRxiQsRs2Eq5kCo9nXE3HTugsAAJdSQSmxncivebAxdmBjQ%40mail.gmail.com
Author: Andrey Borodin <[email protected]>
Author: Japin Li <[email protected]>
Author: Junwang Zhao <[email protected]>
Reviewed-by: Nikolay Samokhvalov <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
Reviewed-by: Fujii Masao <[email protected]>
Reviewed-by: bt23nguyent <[email protected]>
Reviewed-by: Yuhang Qiu <[email protected]>
|
|
This fixes places where words derived from cancel were not using their
common en-US ugly^H^H^H^Hspelling.
Author: Jelte Fennema-Nio <[email protected]>
Reported-by: Thomas Munro <[email protected]>
Discussion: https://postgr.es/m/CA+hUKG+Lrq+ty6yWXF5572qNQ8KwxGwG5n4fsEcCUap685nWvQ@mail.gmail.com
|
|
Dagfinn Ilmari Mannsåker, reviewed by Shubham Khanna. Some subtractions
by me.
Discussion: http://postgr.es/m/[email protected]
|
|
If the tuple being updated is not visible to the crosscheck snapshot,
we return TM_Updated but the assertions would not hold in that case.
Move them to before the cross-check.
Fixes bug #17893. Backpatch to all supported versions.
Author: Alexander Lakhin
Backpatch-through: 12
Discussion: https://www.postgresql.org/message-id/17893-35847009eec517b5%40postgresql.org
|
|
Under some circumstances, concurrent MERGE operations could lead to
inconsistent results that varied according to the plan chosen. This was
caused by a lack of rowmarks on the source relation, which meant that
EvalPlanQual rechecking was not guaranteed to return the same source
tuples when re-running the join query.
Fix by ensuring that preprocess_rowmarks() sets up PlanRowMarks for
all non-target relations used in MERGE, in the same way that it does
for UPDATE and DELETE.
Per bug #18103. Back-patch to v15, where MERGE was introduced.
Dean Rasheed, reviewed by Richard Guo.
Discussion: https://postgr.es/m/18103-c4386baab8e355e3%40postgresql.org
|
|
The fix itself is fine, but the test revealed other problems related
to parallel query that are not easily fixable. Remove the test for
now to fix the buildfarm.
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 11
|
|
Renaming a role can affect the meaning of the special string $user, so
must cause search_path to be recalculated.
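A sketch of the interaction (the role name is illustrative):
    SET search_path = "$user", public;
    ALTER ROLE app_user RENAME TO app_user2;  -- search_path must be recomputed so that
                                              -- "$user" resolves to the new role name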
Discussion: https://postgr.es/m/[email protected]
Reviewed-by: Nathan Bossart, Michael Paquier
Backpatch-through: 11
|