path: root/src/include/utils/syscache.h
2016-01-02  Update copyright for 2016  (Bruce Momjian)
Backpatch certain files through 9.1
2015-07-25  Redesign tablesample method API, and do extensive code review.  (Tom Lane)
The original implementation of TABLESAMPLE modeled the tablesample method API on index access methods, which wasn't a good choice because, without specialized DDL commands, there's no way to build an extension that can implement a TSM. (Raw inserts into system catalogs are not an acceptable thing to do, because we can't undo them during DROP EXTENSION, nor will pg_upgrade behave sanely.) Instead adopt an API more like procedural language handlers or foreign data wrappers, wherein the only SQL-level support object needed is a single handler function identified by having a special return type. This lets us get rid of the supporting catalog altogether, so that no custom DDL support is needed for the feature.

Adjust the API so that it can support non-constant tablesample arguments (the original coding assumed we could evaluate the argument expressions at ExecInitSampleScan time, which is undesirable even if it weren't outright unsafe), and discourage sampling methods from looking at invisible tuples. Make sure that the BERNOULLI and SYSTEM methods are genuinely repeatable within and across queries, as required by the SQL standard, and deal more honestly with methods that can't support that requirement.

Make a full code-review pass over the tablesample additions, and fix assorted bugs, omissions, infelicities, and cosmetic issues (such as failure to put the added code stanzas in a consistent ordering). Improve EXPLAIN's output of tablesample plans, too.

Back-patch to 9.5 so that we don't have to support the original API in production.
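As an illustration of the handler-style API described here, below is a minimal sketch of a tablesample method handler built around the TsmRoutine callback struct from access/tsmapi.h. The method name my_tsm and its callbacks are hypothetical, and the callback bodies are omitted; this is a sketch of the shape, not a working method.

    /* Hypothetical tablesample method handler; callback bodies omitted. */
    #include "postgres.h"
    #include "fmgr.h"
    #include "access/tsmapi.h"
    #include "catalog/pg_type.h"
    #include "nodes/pg_list.h"

    PG_MODULE_MAGIC;

    static void my_samplescangetsamplesize(PlannerInfo *root,
                                           RelOptInfo *baserel,
                                           List *paramexprs,
                                           BlockNumber *pages,
                                           double *tuples);
    static void my_beginsamplescan(SampleScanState *node,
                                   Datum *params, int nparams,
                                   uint32 seed);
    static OffsetNumber my_nextsampletuple(SampleScanState *node,
                                           BlockNumber blockno,
                                           OffsetNumber maxoffset);

    PG_FUNCTION_INFO_V1(my_tsm_handler);

    Datum
    my_tsm_handler(PG_FUNCTION_ARGS)
    {
        TsmRoutine *tsm = makeNode(TsmRoutine);

        /* one float4 parameter, like BERNOULLI/SYSTEM's percentage */
        tsm->parameterTypes = list_make1_oid(FLOAT4OID);

        /* report repeatability honestly, per the SQL standard */
        tsm->repeatable_across_queries = true;
        tsm->repeatable_across_scans = true;

        tsm->SampleScanGetSampleSize = my_samplescangetsamplesize;
        tsm->InitSampleScan = NULL;     /* optional callback */
        tsm->BeginSampleScan = my_beginsamplescan;
        tsm->NextSampleBlock = NULL;    /* NULL => sample every block */
        tsm->NextSampleTuple = my_nextsampletuple;
        tsm->EndSampleScan = NULL;      /* optional callback */

        PG_RETURN_POINTER(tsm);
    }

The only SQL-level object this then needs is, roughly, CREATE FUNCTION my_tsm(internal) RETURNS tsm_handler: the special return type is what identifies it as a tablesample method handler, with no supporting catalog row required.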
2015-06-07  Use a safer method for determining whether relcache init file is stale.  (Tom Lane)
When we invalidate the relcache entry for a system catalog or index, we must also delete the relcache "init file" if the init file contains a copy of that rel's entry. The old way of doing this relied on a specially maintained list of the OIDs of relations present in the init file: we made the list either when reading the file in, or when writing the file out. The problem is that when writing the file out, we included only rels present in our local relcache, which might have already suffered some deletions due to relcache inval events. In such cases we correctly decided not to overwrite the real init file with incomplete data --- but we still used the incomplete initFileRelationIds list for the rest of the current session. This could result in wrong decisions about whether the session's own actions require deletion of the init file, potentially allowing an init file created by some other concurrent session to be left around even though it's been made stale.

Since we don't support changing the schema of a system catalog at runtime, the only likely scenario in which this would cause a problem in the field involves a "vacuum full" on a catalog concurrently with other activity, and even then it's far from easy to provoke. Remarkably, this has been broken since 2002 (in commit 786340441706ac1957a031f11ad1c2e5b6e18314), but we had never seen a reproducible test case until recently. If it did happen in the field, the symptoms would probably involve unexpected "cache lookup failed" errors to begin with, then "could not open file" failures after the next checkpoint, as all accesses to the affected catalog stopped working. Recovery would require manually removing the stale "pg_internal.init" file.

To fix, get rid of the initFileRelationIds list, and instead consult syscache.c's list of relations used in catalog caches to decide whether a relation is included in the init file. This should be a tad more efficient anyway, since we're replacing linear search of a list with ~100 entries with a binary search. It's a bit ugly that the init file contents are now so directly tied to the catalog caches, but in practice that won't make much difference.

Back-patch to all supported branches.
2015-05-15  TABLESAMPLE, SQL Standard and extensible  (Simon Riggs)
Add a TABLESAMPLE clause to SELECT statements that allows the user to specify random BERNOULLI sampling or block-level SYSTEM sampling. The implementation allows extensible sampling functions to be written using a standard API. The basic version follows the SQL standard exactly. Usable concrete use cases for the sampling API follow in later commits.

Petr Jelinek; reviewed by Michael Paquier and Simon Riggs
2015-04-29  Introduce replication progress tracking infrastructure.  (Andres Freund)
When implementing a replication solution on top of logical decoding, two related problems exist:

* How to safely keep track of replication progress
* How to change replication behavior based on the origin of a row, e.g. to avoid loops in bi-directional replication setups

The solution to these problems, as implemented here, consists of three parts:

1) 'replication origins', which identify nodes in a replication setup.
2) 'replication progress tracking', which remembers, for each replication origin, how far replay has progressed in an efficient and crash-safe manner.
3) The ability to filter out changes performed at the behest of a replication origin during logical decoding; this allows complex replication topologies, e.g. by filtering out all replayed changes.

Most of this could also be implemented in "userspace", e.g. by inserting additional rows containing origin information, but that ends up being much less efficient and more complicated. We don't want to require various replication solutions to reimplement this logic independently. The infrastructure is intended to be generic enough to be reusable.

This infrastructure also replaces the 'nodeid' infrastructure of commit timestamps. It is intended to provide all the former capabilities, except that there are only 2^16 different origins; but now they integrate with logical decoding. Additionally, more functionality is accessible via SQL. Since the commit timestamp infrastructure was also introduced in 9.5 (commit 73c986add), changing the API is not a problem.

For now the number of origins for which replication progress can be tracked simultaneously is determined by the max_replication_slots GUC. That GUC is not a perfect match for configuring this, but there doesn't seem to be sufficient reason to introduce a separate new one.

Bumps both catversion and WAL page magic.

Author: Andres Freund, with contributions from Petr Jelinek and Craig Ringer
Reviewed-By: Heikki Linnakangas, Petr Jelinek, Robert Haas, Steve Singer
Discussion: [email protected], [email protected], [email protected]
2015-04-26  Add transforms feature  (Peter Eisentraut)
This provides a mechanism for specifying conversions between SQL data types and procedural languages. As examples, there are transforms for hstore and ltree for PL/Perl and PL/Python.

Reviews by Pavel Stěhule and Andres Freund
2015-01-06  Update copyright for 2015  (Bruce Momjian)
Backpatch certain files through 9.0
2014-01-07  Update copyright for 2014  (Bruce Momjian)
Update all files in head, and files COPYRIGHT and legal.sgml in all back branches.
2013-07-02  Use an MVCC snapshot, rather than SnapshotNow, for catalog scans.  (Robert Haas)
SnapshotNow scans have the undesirable property that, in the face of concurrent updates, the scan can fail to see either the old or the new versions of the row. In many cases, we work around this by requiring DDL operations to hold AccessExclusiveLock on the object being modified; in some cases, the existing locking is inadequate and random failures occur as a result. This commit doesn't change anything related to locking, but will hopefully pave the way to allowing lock strength reductions in the future.

The major issue that has held us back from making this change in the past is that taking an MVCC snapshot is significantly more expensive than using a static special snapshot such as SnapshotNow. However, testing of various worst-case scenarios reveals that this problem is not severe except under fairly extreme workloads. To mitigate those problems, we avoid retaking the MVCC snapshot for each new scan; instead, we take a new snapshot only when invalidation messages have been processed. The catcache machinery already requires that invalidation messages be sent before releasing the related heavyweight lock; else other backends might rely on locally-cached data rather than scanning the catalog at all. Thus, making snapshot reuse dependent on the same guarantees shouldn't break anything that wasn't already subtly broken.

Patch by me. Review by Michael Paquier and Andres Freund.
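By way of illustration, here is a minimal sketch of a catalog scan in this regime, assuming the convention found on current branches that passing a NULL snapshot makes systable_beginscan take a fresh catalog MVCC snapshot; the function name is hypothetical and headers are shown as on recent branches.

    /* Hypothetical example: scan pg_class under a catalog MVCC snapshot. */
    #include "postgres.h"
    #include "access/genam.h"
    #include "access/htup_details.h"
    #include "catalog/indexing.h"
    #include "utils/rel.h"

    static void
    scan_pg_class_example(Relation pg_class_rel)
    {
        SysScanDesc scan;
        HeapTuple   tup;

        scan = systable_beginscan(pg_class_rel,
                                  ClassOidIndexId,
                                  true,     /* may use the index */
                                  NULL,     /* NULL => catalog MVCC snapshot */
                                  0, NULL); /* no scan keys: return all rows */

        while (HeapTupleIsValid(tup = systable_getnext(scan)))
        {
            /* every tuple seen here comes from one consistent snapshot */
        }
        systable_endscan(scan);
    }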
2013-01-01  Update copyrights for 2013  (Bruce Momjian)
Fully update git head, and update back branches in ./COPYRIGHT and legal.sgml files.
2012-08-28  remove catcache.h from syscache.h  (Alvaro Herrera)
Instead, place a forward struct declaration for struct catclist in syscache.h. This reduces header proliferation somewhat.
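The technique is the standard C forward-declaration trick; roughly (prototype abridged from that era), syscache.h can say the following instead of including catcache.h:

    /* forward declaration avoids #include "utils/catcache.h" here */
    struct catclist;

    extern struct catclist *SearchSysCacheList(int cacheId, int nkeys,
                                               Datum key1, Datum key2,
                                               Datum key3, Datum key4);

Callers that actually dereference the list still include catcache.h themselves; headers and callers that merely pass pointers around no longer need to.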
2012-07-18  Syntax support and documentation for event triggers.  (Robert Haas)
They don't actually do anything yet; that will get fixed in a follow-on commit. But this gets the basic infrastructure in place, including CREATE/ALTER/DROP EVENT TRIGGER; support for COMMENT, SECURITY LABEL, and ALTER EXTENSION .. ADD/DROP EVENT TRIGGER; pg_dump and psql support; and documentation for the anticipated initial feature set. Dimitri Fontaine, with review and a bunch of additional hacking by me. Thom Brown extensively reviewed earlier versions of this patch set, but there's not a whole lot of that code left in this commit, as it turns out.
2012-03-07  Expose an API for calculating catcache hash values.  (Tom Lane)
Now that cache invalidation callbacks get only a hash value, and not a tuple TID (per commits 632ae6829f7abda34e15082c91d9dfb3fc0f298b and b5282aa893e565b7844f8237462cb843438cdd5e), the only way they can restrict what they invalidate is to know what the hash values mean. setrefs.c was doing this via a hard-wired assumption but that seems pretty grotty, and it'll only get worse as more cases come up. So let's expose a calculation function that takes the same parameters as SearchSysCache. Per complaint from Marko Kreen.
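A minimal sketch of how a callback can use this, via the GetSysCacheHashValue1 wrapper from syscache.h and the convention that a zero hash value means "flush everything"; the cached-state handling and function names are hypothetical.

    #include "postgres.h"
    #include "utils/inval.h"
    #include "utils/syscache.h"

    /* hash of the single pg_proc entry our (hypothetical) module caches */
    static uint32 cached_proc_hash = 0;
    static bool   cache_valid = false;

    /* Invalidation callbacks receive a hash value, not a tuple TID. */
    static void
    proc_inval_callback(Datum arg, int cacheid, uint32 hashvalue)
    {
        /* hashvalue == 0 means the whole cache was flushed */
        if (hashvalue == 0 || hashvalue == cached_proc_hash)
            cache_valid = false;
    }

    static void
    init_proc_cache(void)
    {
        /* register once, e.g. at module load time */
        CacheRegisterSyscacheCallback(PROCOID, proc_inval_callback,
                                      (Datum) 0);
    }

    static void
    remember_function(Oid funcid)
    {
        /* same key rules as SearchSysCache1(PROCOID, ...) */
        cached_proc_hash = GetSysCacheHashValue1(PROCOID,
                                                 ObjectIdGetDatum(funcid));
        cache_valid = true;
    }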
2012-01-01  Update copyright notices for year 2012.  (Bruce Momjian)
2011-11-03  Support range data types.  (Heikki Linnakangas)
Selectivity estimation functions are missing for some range type operators, which is a TODO. Jeff Davis
2011-02-08  Per-column collation support  (Peter Eisentraut)
This adds collation support for columns and domains, a COLLATE clause to override it per expression, and B-tree index support.

Peter Eisentraut; reviewed by Pavel Stehule, Itagaki Takahiro, Robert Haas, Noah Misch
2011-01-02  Basic foreign table support.  (Robert Haas)
Foreign tables are a core component of SQL/MED. This commit does not provide a working SQL/MED infrastructure, because foreign tables cannot yet be queried. Support for foreign table scans will need to be added in a future patch. However, this patch creates the necessary system catalog structure, syntax support, and support for ancillary operations such as COMMENT and SECURITY LABEL. Shigeru Hanada, heavily revised by Robert Haas
2011-01-01  Stamp copyrights for year 2011.  (Bruce Momjian)
2010-09-20  Remove cvs keywords from all files.  (Magnus Hagander)
2010-02-14  Wrap calls to SearchSysCache and related functions using macros.  (Robert Haas)
The purpose of this change is to eliminate the need for every caller of SearchSysCache, SearchSysCacheCopy, SearchSysCacheExists, GetSysCacheOid, and SearchSysCacheList to know the maximum number of allowable keys for a syscache entry (currently 4). This will make it far easier to increase the maximum number of keys in a future release should we choose to do so, and it makes the code shorter, too. Design and review by Tom Lane.
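For example, the one-key wrapper lets a caller write the following instead of spelling out the unused key slots (a sketch; the function name is hypothetical and headers are shown as on recent branches, but SearchSysCache1, ReleaseSysCache, and PROCOID are the real syscache.h interfaces):

    #include "postgres.h"
    #include "access/htup_details.h"
    #include "utils/syscache.h"

    /* SearchSysCache1(id, k1) expands to SearchSysCache(id, k1, 0, 0, 0). */
    static void
    check_function_exists(Oid funcid)
    {
        HeapTuple   tup;

        tup = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcid));
        if (!HeapTupleIsValid(tup))
            elog(ERROR, "cache lookup failed for function %u", funcid);

        /* ... inspect GETSTRUCT(tup) as a Form_pg_proc here ... */

        ReleaseSysCache(tup);
    }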
2010-01-05  Support ALTER TABLESPACE name SET/RESET ( tablespace_options ).  (Robert Haas)
This patch only supports seq_page_cost and random_page_cost as parameters, but it provides the infrastructure to scalably support many more. In particular, we may want to add support for effective_io_concurrency, but I'm leaving that as future work for now. Thanks to Tom Lane for design help and Alvaro Herrera for the review.
2010-01-02  Update copyright for the year 2010.  (Bruce Momjian)
2009-12-29  Add the ability to store inheritance-tree statistics in pg_statistic,  (Tom Lane)
and teach ANALYZE to compute such stats for tables that have subclasses. Per my proposal of yesterday. autovacuum still needs to be taught about running ANALYZE on parent tables when their subclasses change, but the feature is useful even without that.
2009-10-05  Create an ALTER DEFAULT PRIVILEGES command, which allows users to adjust  (Tom Lane)
the privileges that will be applied to subsequently-created objects. Such adjustments are always per owning role, and can be restricted to objects created in particular schemas too. A notable benefit is that users can override the traditional default privilege settings, eg, the PUBLIC EXECUTE privilege traditionally granted by default for functions. Petr Jelinek
2009-01-01  Update copyright for 2009.  (Bruce Momjian)
2008-12-19  SQL/MED catalog manipulation facilities  (Peter Eisentraut)
This doesn't do any remote or external things yet, but it gives modules like plproxy and dblink a standardized and future-proof system for managing their connection information. Martin Pihlak and Peter Eisentraut
2008-05-07  Convert the list of syscache names from a series of #define's into an enum,  (Tom Lane)
to avoid the pain of manually renumbering them anytime we insert another name in alphabetical order. An excellent idea from Alex Hunsaker and NikhilS' inherited-constraints patch --- whether or not the rest of that gets in, this should. Dunno why we never thought of it before.
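The resulting pattern in syscache.h looks roughly like this (enumerators abridged); inserting a new name mid-list renumbers everything after it automatically, with no manual edits:

    enum SysCacheIdentifier
    {
        AGGFNOID = 0,
        AMNAME,
        AMOID,
        AMOPOPID,
        /* ... one enumerator per cache, kept in alphabetical order ... */
        AMOPSTRATEGY
    };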
2008-01-01  Update copyrights in source tree to 2008.  (Bruce Momjian)
2007-08-21  Tsearch2 functionality migrates to core. The bulk of this work is by  (Tom Lane)
Oleg Bartunov and Teodor Sigaev, but I did a lot of editorializing, so anything that's broken is probably my fault. Documentation is nonexistent as yet, but let's land the patch so we can get some portability testing done.
2007-04-02  Support enum data types. Along the way, use macros for the values of  (Tom Lane)
pg_type.typtype wherever practical. Tom Dunstan, with some kibitzing from Tom Lane.
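The macros in question are the TYPTYPE_* constants in pg_type.h, which replace bare character literals in typtype tests:

    /* pg_type.h: symbolic names for pg_type.typtype values */
    #define TYPTYPE_BASE        'b' /* ordinary scalar type */
    #define TYPTYPE_COMPOSITE   'c' /* composite, e.g. a table's rowtype */
    #define TYPTYPE_DOMAIN      'd' /* domain over another type */
    #define TYPTYPE_ENUM        'e' /* enumerated type */
    #define TYPTYPE_PSEUDO      'p' /* pseudo-type */

    /* so tests read (typform->typtype == TYPTYPE_ENUM) instead of == 'e' */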
2007-02-14  Fix up foreign-key mechanism so that there is a sound semantic basis for the  (Tom Lane)
equality checks it applies, instead of a random dependence on whatever operators might be named "=". The equality operators will now be selected from the opfamily of the unique index that the FK constraint depends on to enforce uniqueness of the referenced columns; therefore they are certain to be consistent with that index's notion of equality. Among other things this should fix the problem noted awhile back that pg_dump may fail for foreign-key constraints on user-defined types when the required operators aren't in the search path. This also means that the former warning condition about "foreign key constraint will require costly sequential scans" is gone: if the comparison condition isn't indexable then we'll reject the constraint entirely. All per past discussions.

Along the way, make the RI triggers look into pg_constraint for their information, instead of using pg_trigger.tgargs; and get rid of the always error-prone fixed-size string buffers in ri_triggers.c in favor of building up the RI queries in StringInfo buffers.

initdb forced due to columns added to pg_constraint and pg_trigger.
2007-01-05  Update CVS HEAD for 2007 copyright. Back branches are typically not  (Bruce Momjian)
back-stamped for this.
2006-12-23  Restructure operator classes to allow improved handling of cross-data-type  (Tom Lane)
cases. Operator classes now exist within "operator families". While most families are equivalent to a single class, related classes can be grouped into one family to represent the fact that they are semantically compatible. Cross-type operators are now naturally adjunct parts of a family, without having to wedge them into a particular opclass as we had done originally.

This commit restructures the catalogs and cleans up enough of the fallout so that everything still works at least as well as before, but most of the work needed to actually improve the planner's behavior will come later. Also, there are not yet CREATE/DROP/ALTER OPERATOR FAMILY commands; the only way to create a new family right now is to allow CREATE OPERATOR CLASS to make one by default. I owe some more documentation work, too. But that can all be done in smaller pieces once this infrastructure is in place.
2006-07-13  More include file adjustments.  (Bruce Momjian)
2006-07-11  Allow each C include file to compile on its own by including any needed  (Bruce Momjian)
header files.
2006-05-03  Create a syscache for pg_database-indexed-by-oid, and make use of it  (Tom Lane)
in various places that were previously doing ad hoc pg_database searches. This may speed up database-related privilege checks a little bit, but the main motivation is to eliminate the performance reason for having ReverifyMyDatabase do such a lot of stuff (viz, avoiding repeat scans of pg_database during backend startup). The locking reason for having that routine is about to go away, and it'd be good to have the option to break it up.
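A sketch of what such a lookup becomes with the new DATABASEOID cache, replacing a hand-rolled scan of pg_database; the function name is hypothetical, and for brevity it uses the one-key wrapper macro that arrived later (see the 2010-02-14 entry above).

    #include "postgres.h"
    #include "access/htup_details.h"
    #include "catalog/pg_database.h"
    #include "utils/syscache.h"

    /* Fetch one pg_database attribute via the DATABASEOID syscache. */
    static bool
    database_allows_connections(Oid dbid)
    {
        HeapTuple   tuple;
        bool        result;

        tuple = SearchSysCache1(DATABASEOID, ObjectIdGetDatum(dbid));
        if (!HeapTupleIsValid(tuple))
            elog(ERROR, "cache lookup failed for database %u", dbid);
        result = ((Form_pg_database) GETSTRUCT(tuple))->datallowconn;
        ReleaseSysCache(tuple);
        return result;
    }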
2006-03-05  Update copyright for 2006. Update scripts.  (Bruce Momjian)
2005-10-15  Standard pgindent run for 8.1.  (Bruce Momjian)
2005-06-28  Replace pg_shadow and pg_group by new role-capable catalogs pg_authid  (Tom Lane)
and pg_auth_members. There are still many loose ends to finish in this patch (no documentation, no regression tests, no pg_dump support for instance). But I'm going to commit it now anyway so that Alvaro can make some progress on shared dependencies. The catalog changes should be pretty much done.
2005-03-29  Convert oidvector and int2vector into variable-length arrays. This  (Tom Lane)
change saves a great deal of space in pg_proc and its primary index, and it eliminates the former requirement that INDEX_MAX_KEYS and FUNC_MAX_ARGS have the same value. INDEX_MAX_KEYS is still embedded in the on-disk representation (because it affects index tuple header size), but FUNC_MAX_ARGS is not. I believe it would now be possible to increase FUNC_MAX_ARGS at little cost, but haven't experimented yet. There are still a lot of vestigial references to FUNC_MAX_ARGS, which I will clean up in a separate pass. However, getting rid of it altogether would require changing the FunctionCallInfoData struct, and I'm not sure I want to buy into that.
2004-12-31  Tag appropriate files for rc3  (PostgreSQL Daemon)
Also performed an initial run through of upgrading our Copyright date to extend to 2005 ... first run here was very simple ... change everything where: grep 1996-2004 && the word 'Copyright' ... scanned through the generated list with 'less' first, and after, to make sure that I only picked up the right entries ...
2004-08-29  Update copyright to 2004.  (Bruce Momjian)
2003-11-29  make sure the $Id tags are converted to $PostgreSQL as well ...  (PostgreSQL Daemon)
2003-08-04  Update copyrights to 2003.  (Bruce Momjian)
2002-11-02  Fix permissions-checking bugs and namespace-search-path bugs in  (Tom Lane)
CONVERSION code. Still need to figure out what to do about inappropriate coding in parsing.
2002-09-04  pgindent run.  (Bruce Momjian)
2002-08-02  ALTER TABLE DROP COLUMN works. Patch by Christopher Kings-Lynne,  (Tom Lane)
code review by Tom Lane. Remaining issues: functions that take or return tuple types are likely to break if one drops (or adds!) a column in the table defining the type. Need to think about what to do here. Along the way: some code review for recent COPY changes; mark system columns attnotnull = true where appropriate, per discussion a month ago.
2002-07-25  Implement DROP CONVERSION  (Tatsuo Ishii)
Add regression test
2002-07-18  pg_cast table, and standards-compliant CREATE/DROP CAST commands, plus  (Peter Eisentraut)
extension to create binary compatible casts. Includes dependency tracking as well. pg_proc.proimplicit is now defunct, but will be removed in a separate commit. pg_dump provides a migration path from the previous scheme to declare casts. Dumping binary compatible casts is currently impossible, though.
2002-07-11  Add new CREATE CONVERSION/DROP CONVERSION command.  (Tatsuo Ishii)
This is the first cut toward the CREATE CONVERSION/DROP CONVERSION implementation. The commands can now add and remove tuples in the new pg_conversion system catalog, but that's all. More work is still needed to make them actually function. Documentation and regression tests also need work.