author     Tom Lane    2022-10-03 14:56:16 +0000
committer  Tom Lane    2022-10-03 14:56:16 +0000
commit     f4c7c410ee4a7baa06f51ebb8d5333c169691dd3 (patch)
tree       8b0811e2be7edf69c6e2216af085112335374b1b /src/test/regress/sql/aggregates.sql
parent     f60eb3f2827db292edf71bb7296fbdf5958ace3d (diff)
Revert "Optimize order of GROUP BY keys".
This reverts commit db0d67db2401eb6238ccc04c6407a4fd4f985832 and
several follow-on fixes. The idea of making a cost-based choice
of the order of the sorting columns is not fundamentally unsound,
but it requires cost information and data statistics that we don't
really have. For example, relying on procost to distinguish the
relative costs of different sort comparators is pretty pointless
so long as most such comparator functions are labeled with cost 1.0.
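For illustration only (a minimal catalog query, not part of the original tests; the listed functions are just representative btree comparators):

-- All of these typically report procost = 1, even though comparing collatable
-- text is far more expensive than comparing int4, so procost gives the model
-- essentially no signal to order sort columns by comparator cost.
SELECT proname, procost
FROM pg_proc
WHERE proname IN ('btint4cmp', 'btfloat8cmp', 'bttextcmp', 'btnamecmp');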
Moreover, estimating the number of comparisons done by Quicksort
requires more than just an estimate of the number of distinct values
in the input: you also need some idea of the sizes of the larger
groups, if you want an estimate that's good to better than a factor of
three or so. That's data that's often unknown or not very reliable.
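A minimal sketch of that point (hypothetical tables, for illustration only): both columns below have exactly 1000 distinct values, yet their group-size distributions, and hence the tie-breaking work a sort must do on any lower-order key, differ enormously.

CREATE TABLE uniform_t AS
  SELECT i % 1000 AS g FROM generate_series(1, 100000) i;    -- 1000 groups of 100 rows
CREATE TABLE skewed_t AS
  SELECT CASE WHEN i <= 99001 THEN 0 ELSE i - 99000 END AS g
  FROM generate_series(1, 100000) i;                          -- one group of 99001 rows, 999 singletons

-- Same ndistinct, radically different largest group:
SELECT 'uniform_t' AS tab, count(*) AS ndistinct, max(cnt) AS largest_group
FROM (SELECT g, count(*) AS cnt FROM uniform_t GROUP BY g) s
UNION ALL
SELECT 'skewed_t', count(*), max(cnt)
FROM (SELECT g, count(*) AS cnt FROM skewed_t GROUP BY g) s;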
Worse, to arrive at estimates of the number of calls made to the
lower-order-column comparison functions, the code needs to make
estimates of the numbers of distinct values of combinations of multiple columns,
which are necessarily even less trustworthy than per-column stats.
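A sketch of the multi-column problem (hypothetical table and statistics object, assuming stock planner behavior): without extended statistics the planner multiplies per-column ndistinct estimates, which can be off by orders of magnitude for correlated columns.

CREATE TABLE corr_t AS
  SELECT i % 100 AS a, i % 100 AS b FROM generate_series(1, 100000) i;
ANALYZE corr_t;
-- Estimated groups come out near 100 * 100 = 10000, though only ~100 (a, b)
-- combinations actually exist.
EXPLAIN SELECT count(*) FROM corr_t GROUP BY a, b;
CREATE STATISTICS corr_t_nd (ndistinct) ON a, b FROM corr_t;
ANALYZE corr_t;
-- With the ndistinct statistics the estimate drops to roughly the true ~100 groups.
EXPLAIN SELECT count(*) FROM corr_t GROUP BY a, b;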
Even if all the inputs are perfectly reliable, the cost algorithm
as-implemented cannot offer useful information about how to order
sorting columns beyond the point at which the average group size
is estimated to drop to 1.
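A minimal sketch of that limitation (hypothetical table, for illustration only): once the leading key is estimated unique, a comparison-count model predicts no comparisons on any later key, so it cannot prefer one ordering of the remaining keys over another.

CREATE TABLE keyorder_t AS
  SELECT i AS uniq, i % 10 AS low1, i % 7 AS low2
  FROM generate_series(1, 10000) i;
-- "uniq" alone already identifies every row, so GROUP BY uniq, low1, low2 and
-- GROUP BY uniq, low2, low1 are costed identically under such a model.
SELECT count(*) AS rows, count(DISTINCT uniq) AS groups FROM keyorder_t;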
Close inspection of the code added by db0d67db2 shows that there
are also multiple small bugs. These could have been fixed, but
there's not much point if we don't trust the estimates to be
accurate in principle.
Finally, the changes in cost_sort's behavior made for very large
changes (often a factor of 2 or so) in the cost estimates for all
sorting operations, not only those for multi-column GROUP BY.
That naturally changes plan choices in many situations, and there's
precious little evidence to show that the changes are for the better.
Given the above doubts about whether the new estimates are really
trustworthy, it's hard to summon much confidence that these changes
are better on the average.
Since we're hard up against the release deadline for v15, let's
revert these changes for now. We can always try again later.
Note: in v15, I left T_PathKeyInfo in place in nodes.h even though
it's unreferenced. Removing it would be an ABI break, and it seems
a bit late in the release cycle for that.
Discussion: https://postgr.es/m/TYAPR01MB586665EB5FB2C3807E893941F5579@TYAPR01MB5866.jpnprd01.prod.outlook.com
Diffstat (limited to 'src/test/regress/sql/aggregates.sql')
-rw-r--r--   src/test/regress/sql/aggregates.sql   99
1 file changed, 0 insertions, 99 deletions
diff --git a/src/test/regress/sql/aggregates.sql b/src/test/regress/sql/aggregates.sql
index 4540a06f454..a4c00ff7a9d 100644
--- a/src/test/regress/sql/aggregates.sql
+++ b/src/test/regress/sql/aggregates.sql
@@ -1068,105 +1068,6 @@ SELECT balk(hundred) FROM tenk1;
 
 ROLLBACK;
 
--- GROUP BY optimization by reorder columns
-
-SELECT
-  i AS id,
-  i/2 AS p,
-  format('%60s', i%2) AS v,
-  i/4 AS c,
-  i/8 AS d,
-  (random() * (10000/8))::int as e --the same as d but no correlation with p
-  INTO btg
-FROM
-  generate_series(1, 10000) i;
-
-VACUUM btg;
-ANALYZE btg;
-
--- GROUP BY optimization by reorder columns by frequency
-
-SET enable_hashagg=off;
-SET max_parallel_workers= 0;
-SET max_parallel_workers_per_gather = 0;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY p, v;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY v, p;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY v, p, c;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY v, p, c ORDER BY v, p, c;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY v, p, d, c;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY v, p, d, c ORDER BY v, p, d ,c;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY v, p, d, c ORDER BY p, v, d ,c;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY p, d, e;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY p, e, d;
-
-CREATE STATISTICS btg_dep ON d, e, p FROM btg;
-ANALYZE btg;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY p, d, e;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY p, e, d;
-
-
--- GROUP BY optimization by reorder columns by index scan
-
-CREATE INDEX ON btg(p, v);
-SET enable_seqscan=off;
-SET enable_bitmapscan=off;
-VACUUM btg;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY p, v;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY p, v ORDER BY p, v;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY v, p;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY v, p ORDER BY p, v;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY v, p, c;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY v, p, c ORDER BY p, v;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY v, c, p, d;
-
-EXPLAIN (COSTS off)
-SELECT count(*) FROM btg GROUP BY v, c, p, d ORDER BY p, v;
-
-DROP TABLE btg;
-
-RESET enable_hashagg;
-RESET max_parallel_workers;
-RESET max_parallel_workers_per_gather;
-RESET enable_seqscan;
-RESET enable_bitmapscan;
-
-
 -- Secondly test the case of a parallel aggregate combiner function
 -- returning NULL. For that use normal transition function, but a
 -- combiner function returning NULL.