A few places that access pg_replication_origin don't set up an
active snapshot before potentially accessing its TOAST table. However,
roname (the replication origin name) is the only varlena column, so
this is only a problem if the name requires out-of-line storage.
This commit removes its TOAST table to avoid needing to set up a
snapshot. It also places a limit on replication origin names so
that attempts to set long names will fail with a more user-friendly
error. The chosen limit of 512 bytes should be sufficient to
avoid "row is too big" errors independent of BLCKSZ, but it should
also be lenient enough for all reasonable use-cases.
Bumps catversion.
Reviewed-by: Michael Paquier <[email protected]>
Reviewed-by: Amit Kapila <[email protected]>
Reviewed-by: Euler Taveira <[email protected]>
Reviewed-by: Nisha Moond <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/ZvMSUPOqUU-VNADN%40nathan
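As an illustration of the new behavior (not part of the commit; the
exact error text is assumed, not quoted):
    -- requires superuser or replication privileges; 600 bytes exceeds the 512-byte limit
    SELECT pg_replication_origin_create(repeat('x', 600));
    -- ERROR:  replication origin name is too long   (wording approximate)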
|
|
pg_authid's only varlena column is rolpassword, which unfortunately
cannot be de-TOASTed during authentication because we haven't
selected a database yet and cannot read pg_class. With pg_authid's
TOAST table removed, attempts to set password hashes that require
out-of-line storage will fail with a "row is too big"
error instead. We may want to provide a more user-friendly error
in the future, but for now let's just remove the useless TOAST
table.
Bumps catversion.
Reported-by: Alexander Lakhin
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/89e8649c-eb74-db25-7945-6d6b23992394%40gmail.com
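A quick, illustrative way to confirm the change on a running cluster
(not part of the commit) is to check pg_class.reltoastrelid:
    SELECT relname, reltoastrelid
    FROM pg_class
    WHERE relname = 'pg_authid';
    -- reltoastrelid is now 0, i.e. no TOAST table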
|
|
This change allows pg_index rows to use out-of-line storage for the
"indexprs" and "indpred" columns, which enables use-cases such as
very large index expressions.
This system catalog was previously not given a TOAST table due to a
fear of circularity issues (see commit 96cdeae07f). Testing has
not revealed any such problems, and it seems unlikely that the
entries for system indexes could ever need out-of-line storage. In
any case, it is still early in the v18 development cycle, so
committing this now will hopefully increase the chances of finding
any unexpected problems prior to release.
Bumps catversion.
Reported-by: Jonathan Katz
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/b611015f-b423-458c-aa2d-be0e655cc1b4%40postgresql.org
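For illustration (not part of the commit), the presence of the new
TOAST table can be verified with:
    SELECT reltoastrelid <> 0 AS has_toast_table
    FROM pg_class
    WHERE oid = 'pg_index'::regclass;
    -- returns true on a cluster built from this commit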
|
|
Oversight in 534287403. We missed this up to now because the
core regression tests create no such entries (at least up to
this test), so the only way to see the failure is to do
"make installcheck" in an installation where some other DB
has such entries. I happened to do that just now ...
|
|
As of v14, pg_depend contains almost 7000 "pin" entries recording
the OIDs of built-in objects. This is a fair amount of bloat for
every database, and it adds time to pg_depend lookups as well as
initdb. We can get rid of all of those entries in favor of an OID
range check, i.e. "OIDs below FirstUnpinnedObjectId are pinned".
(template1 and the public schema are exceptions. Those exceptions
are now wired into IsPinnedObject() instead of initdb's code for
filling pg_depend, but it's the same amount of cruft either way.)
The contents of pg_shdepend are modified likewise.
Discussion: https://postgr.es/m/[email protected]
|
|
For those system catalogs that have unique indexes, add primary key
and unique constraints, using ALTER TABLE ... PRIMARY KEY/UNIQUE
USING INDEX.
This can be helpful for GUI tools that look for a primary key, and it
might in the future allow declaring foreign keys, which would be
useful for making schema diagrams.
The constraint creation statements are automatically created by
genbki.pl from DECLARE_UNIQUE_INDEX directives. To specify which one
of the available unique indexes is the primary key, use the new
directive DECLARE_UNIQUE_INDEX_PKEY instead. By convention, we
usually make a catalog's OID column its primary key, if it has one.
Reviewed-by: Tom Lane <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/[email protected]
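As an illustration (not part of the commit), the new constraints show
up in pg_constraint like any user-defined ones; constraint names
follow the underlying index names:
    SELECT conname, contype
    FROM pg_constraint
    WHERE conrelid = 'pg_class'::regclass
    ORDER BY contype;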
|
|
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
was already painful for the existing code, and the upcoming work
aiming to make table storage pluggable would have required expanding
and duplicating that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load a binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS; they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot of code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used); only oids assigned later, during initdb proper,
will be above FirstBootstrapObjectId. As the oid column is now a
normal column, the special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore, all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for,
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It would not technically be hard to hide the
oid column by default, but that would mean either carrying the
confusing behavior forward forever or causing breakage down the
line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to get this merged
now. It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/[email protected]
|
|
It has been project policy to create toast tables only for those catalogs
that might reasonably need one. Since this judgment call can change over
time, just create one for every catalog, as this can be useful when
creating rather-long entries in catalogs, with recent examples being in
the shape of policy expressions or custom-formatted SCRAM verifiers.
To prevent circular dependencies and to avoid adding complexity to VACUUM
FULL logic, exclude pg_class, pg_attribute, and pg_index. Also, to
prevent pg_upgrade from seeing a non-empty new cluster, exclude
pg_largeobject and pg_largeobject_metadata from the set as large object
data is handled as user data. Those relations have no reason to use a
toast table anyway.
Author: Joe Conway, John Naylor
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/[email protected]
|
|
For the moment, this just records which system catalogs currently
have toast tables. Future patches will possibly change that set.
from Tom Lane via Joe Conway
Discussion: https://www.postgresql.org/message-id/flat/[email protected]/
|
|
Fix thinko in commit 8be8510cf: it's okay to have dbid == 0 in normal
(non-pin) entries in pg_shdepend, because global objects such as
databases are entered that way. The test would pass so long as it
was run in a cluster containing no databases/tablespaces owned by,
or granted to, roles other than the bootstrap superuser. That's the
expected situation for "make check", but for "make installcheck", not
so much.
Reported by Ryan Murphy.
Discussion: https://postgr.es/m/CAHeEsBc6EQe0mxGBKDXAwJbntgfvoAd5MQC-5362SmC3Tng_6g@mail.gmail.com
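An illustrative query (not part of the commit) showing the legitimate
dbid = 0 entries the original check tripped over:
    SELECT classid::regclass, count(*)
    FROM pg_shdepend
    WHERE dbid = 0 AND deptype <> 'p'
    GROUP BY 1;
    -- databases and tablespaces owned by (or granted to) non-bootstrap roles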
|
|
It's essential that initdb.c's setup_depend() scan each system catalog
that could contain objects that need to have "p" (pin) entries in pg_depend
or pg_shdepend. Forgetting to add that, either when a catalog is first
invented or when it first acquires DATA() entries, is an obvious bug
hazard. We can detect such omissions at reasonable cost by probing every
OID-containing system catalog to see whether the lowest-numbered OID in it
is pinned. If so, the catalog must have been properly accounted for in
setup_depend(). If the lowest OID is above FirstNormalObjectId then the
catalog must have been empty at the end of initdb, so it doesn't matter.
There are a small number of catalogs whose first entry is made later in
initdb than setup_depend(), resulting in nonempty expected output of the
test, but these can be manually inspected to see that they are OK. Any
future mistake of this ilk will manifest as a new entry in the test's
output.
Since pg_conversion is already in the test's output, add it to the set of
catalogs scanned by setup_depend(). That has no effect today (hence, no
catversion bump here) but it will protect us if we ever do add pin-worthy
conversions.
This test is very much like the catalog sanity checks embodied in
opr_sanity.sql and type_sanity.sql, but testing pg_depend doesn't seem to
fit naturally into either of those scripts' charters. Hence, invent a new
test script misc_sanity.sql, which can be a home for this as well as tests
on any other catalogs we might want in future.
Discussion: https://postgr.es/m/[email protected]
|