path: root/src/backend/storage
author     Andres Freund   2025-03-17 22:51:33 +0000
committer  Andres Freund   2025-03-17 22:51:33 +0000
commit     da7226993fd4b73d8b40abb7167d124eada97f2e (patch)
tree       6dfb9949c552c6a6aa6c5511e77a2477ccb9641b /src/backend/storage
parent     02844012b304ba80d1c48d51f6fe10bb622490cc (diff)
aio: Add core asynchronous I/O infrastructure
The main motivations to use AIO in PostgreSQL are:

a) Reduce the time spent waiting for IO by issuing IO sufficiently early. In a
few places we have approximated this using posix_fadvise() based prefetching,
but that is fairly limited (no completion feedback, double the syscalls, only
works with buffered IO, only works on some OSs).

b) Allow the use of Direct-I/O (DIO). DIO can offload most of the work for IO
to hardware and thus increase throughput / decrease CPU utilization, as well
as reduce latency. While we have gained the ability to configure DIO in
d4e71df6, it is not yet usable for real-world workloads, as every IO is
executed synchronously.

For portability, the new AIO infrastructure allows AIO to be implemented using
different methods. The choice of the AIO method is controlled by the new
io_method GUC. As of this commit, the only implemented method is "sync",
i.e. AIO is not actually executed asynchronously. The "sync" method exists to
allow bypassing most of the new code initially. Subsequent commits will
introduce additional IO methods, including a cross-platform method implemented
using worker processes and a Linux-specific method using io_uring.

To allow different parts of postgres to use AIO, the core AIO infrastructure
does not need to know what kind of files it is operating on. The necessary
behavioral differences for different files are abstracted as "AIO Targets".
One example target would be smgr. For boring portability reasons, all targets
currently need to be added to an array in aio_target.c. This commit does not
implement any AIO targets, just the infrastructure for them. The smgr target
will be added in a later commit.

Completion (and other events) of IOs for one type of file (i.e. one AIO
target) need to be reacted to differently, based on the IO operation and the
callsite. This is made possible by callbacks that can be registered on IOs.
E.g. an smgr read into a local buffer does not need to update the
corresponding BufferDesc (as there is none), but a read into shared buffers
does. This commit does not contain any callbacks; they will be added in
subsequent commits.

For now the AIO infrastructure only understands READV and WRITEV operations,
but it is expected that more operations will be added, e.g. fsync/fdatasync,
flush_range and network operations like send/recv.

As of this commit, nothing uses the AIO infrastructure. Later commits will add
an smgr target, md.c and bufmgr.c callbacks and then finally use AIO for
read_stream.c IO, which, in one fell swoop, will convert all read stream users
to AIO. The goal is to use AIO in many more places. There are patches to use
AIO for checkpointer and bgwriter that are reasonably close to being ready.
There also are prototypes to use it for WAL, relation extension, backend
writes and many more. Those prototypes were important to ensure the design of
the AIO subsystem is not too limiting (e.g. WAL writes need to happen in
critical sections, which influenced a lot of the design).

A future commit will add an AIO README explaining the AIO architecture and how
to use the AIO subsystem. The README is added later, as it references details
only added in later commits.

Many, many more people than the folks named below have contributed with
feedback, work on semi-independent patches etc. E.g. various folks have
contributed patches to use the read stream infrastructure (added by Thomas in
b5a9b18cd0b) in more places. Similarly, a *lot* of folks have contributed to
the CI infrastructure, which I had started to work on to make adding AIO
feasible.

Some of the work by contributors has gone into the "v1" prototype of AIO,
which heavily influenced the current design of the AIO subsystem. None of the
code from that prototype directly survives, but without the prototype, the
current version of the AIO infrastructure would not exist.

Similarly, the reviewers below have not necessarily looked at the current
design or the whole infrastructure, but have provided very valuable input. I
am to blame for problems, not they.

Author: Andres Freund <[email protected]>
Co-authored-by: Thomas Munro <[email protected]>
Co-authored-by: Nazir Bilal Yavuz <[email protected]>
Co-authored-by: Melanie Plageman <[email protected]>
Reviewed-by: Heikki Linnakangas <[email protected]>
Reviewed-by: Noah Misch <[email protected]>
Reviewed-by: Jakub Wartak <[email protected]>
Reviewed-by: Melanie Plageman <[email protected]>
Reviewed-by: Robert Haas <[email protected]>
Reviewed-by: Dmitry Dolgov <[email protected]>
Reviewed-by: Antonin Houska <[email protected]>
Discussion: https://postgr.es/m/uvrtrknj4kdytuboidbhwclo4gxhswwcpgadptsjvjqcluzmah%40brqs62irg4dt
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/stj36ea6yyhoxtqkhpieia2z4krnam7qyetc57rfezgk4zgapf@gcnactj4z56m
Diffstat (limited to 'src/backend/storage')
-rw-r--r--  src/backend/storage/aio/Makefile        |    4
-rw-r--r--  src/backend/storage/aio/aio.c           | 1130
-rw-r--r--  src/backend/storage/aio/aio_callback.c  |  308
-rw-r--r--  src/backend/storage/aio/aio_init.c      |  198
-rw-r--r--  src/backend/storage/aio/aio_io.c        |  184
-rw-r--r--  src/backend/storage/aio/aio_target.c    |  114
-rw-r--r--  src/backend/storage/aio/meson.build     |    4
-rw-r--r--  src/backend/storage/aio/method_sync.c   |   47
8 files changed, 1989 insertions, 0 deletions
diff --git a/src/backend/storage/aio/Makefile b/src/backend/storage/aio/Makefile
index eaeaeeee8e3..89f821ea7e1 100644
--- a/src/backend/storage/aio/Makefile
+++ b/src/backend/storage/aio/Makefile
@@ -10,7 +10,11 @@ include $(top_builddir)/src/Makefile.global
OBJS = \
aio.o \
+ aio_callback.o \
aio_init.o \
+ aio_io.o \
+ aio_target.o \
+ method_sync.o \
read_stream.o
include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/storage/aio/aio.c b/src/backend/storage/aio/aio.c
index 828a94efdc3..4d5439c73fd 100644
--- a/src/backend/storage/aio/aio.c
+++ b/src/backend/storage/aio/aio.c
@@ -3,6 +3,28 @@
* aio.c
* AIO - Core Logic
*
+ * For documentation about how AIO works on a higher level, including a
+ * schematic example, see README.md.
+ *
+ *
+ * AIO is a complicated subsystem. To keep things navigable, it is split
+ * across a number of files:
+ *
+ * - method_*.c - different ways of executing AIO (e.g. worker process)
+ *
+ * - aio_target.c - IO on different kinds of targets
+ *
+ * - aio_io.c - method-independent code for specific IO ops (e.g. readv)
+ *
+ * - aio_callback.c - callbacks at IO operation lifecycle events
+ *
+ * - aio_init.c - per-server and per-backend initialization
+ *
+ * - aio.c - all other topics
+ *
+ * - read_stream.c - helper for reading buffered relation data
+ *
+ *
* Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
@@ -15,10 +37,28 @@
#include "postgres.h"
#include "lib/ilist.h"
+#include "miscadmin.h"
+#include "port/atomics.h"
#include "storage/aio.h"
+#include "storage/aio_internal.h"
#include "storage/aio_subsys.h"
#include "utils/guc.h"
#include "utils/guc_hooks.h"
+#include "utils/resowner.h"
+#include "utils/wait_event_types.h"
+
+#ifdef USE_INJECTION_POINTS
+#include "utils/injection_point.h"
+#endif
+
+
+static inline void pgaio_io_update_state(PgAioHandle *ioh, PgAioHandleState new_state);
+static void pgaio_io_reclaim(PgAioHandle *ioh);
+static void pgaio_io_resowner_register(PgAioHandle *ioh);
+static void pgaio_io_wait_for_free(void);
+static PgAioHandle *pgaio_io_from_wref(PgAioWaitRef *iow, uint64 *ref_generation);
+static const char *pgaio_io_state_get_name(PgAioHandleState s);
+static void pgaio_io_wait(PgAioHandle *ioh, uint64 ref_generation);
/* Options for io_method. */
@@ -31,17 +71,976 @@ const struct config_enum_entry io_method_options[] = {
int io_method = DEFAULT_IO_METHOD;
int io_max_concurrency = -1;
+/* global control for AIO */
+PgAioCtl *pgaio_ctl;
+
+/* current backend's per-backend state */
+PgAioBackend *pgaio_my_backend;
+
+
+static const IoMethodOps *const pgaio_method_ops_table[] = {
+ [IOMETHOD_SYNC] = &pgaio_sync_ops,
+};
+
+/* callbacks for the configured io_method, set by assign_io_method */
+const IoMethodOps *pgaio_method_ops;
/*
+ * Currently there's no infrastructure to pass arguments to injection points,
+ * so we instead set this up for the duration of the injection point
+ * invocation. See pgaio_io_call_inj().
+ */
+#ifdef USE_INJECTION_POINTS
+static PgAioHandle *pgaio_inj_cur_handle;
+#endif
+
+
+
+/* --------------------------------------------------------------------------------
+ * Public Functions related to PgAioHandle
+ * --------------------------------------------------------------------------------
+ */
+
+/*
+ * Acquire an AioHandle, waiting for IO completion if necessary.
+ *
+ * Each backend can only have one AIO handle that has been "handed out" to
+ * code, but not yet submitted or released. This restriction is necessary to
+ * ensure that it is possible for code to wait for an unused handle by waiting
+ * for in-flight IO to complete. There is a limited number of handles in each
+ * backend; if multiple handles could be handed out without being submitted,
+ * waiting for all in-flight IO to complete would not guarantee that handles
+ * free up.
+ *
+ * It is cheap to acquire an IO handle, unless all handles are in use. In that
+ * case this function waits for the oldest IO to complete. If that is not
+ * desirable, use pgaio_io_acquire_nb().
+ *
+ * If a handle was acquired but then does not turn out to be needed,
+ * e.g. because pgaio_io_acquire() is called before starting an IO in a
+ * critical section, the handle needs to be released with pgaio_io_release().
+ *
+ *
+ * To react to the completion of the IO as soon as it is known to have
+ * completed, callbacks can be registered with pgaio_io_register_callbacks().
+ *
+ * To actually execute IO using the returned handle, the pgaio_io_prep_*()
+ * family of functions is used. In many cases the pgaio_io_prep_*() call will
+ * not be done directly by code that acquired the handle, but by lower level
+ * code that gets passed the handle. E.g. if code in bufmgr.c wants to perform
+ * AIO, it typically will pass the handle to smgr.c, which will pass it on to
+ * md.c, on to fd.c, which then finally calls pgaio_io_prep_*(). This
+ * forwarding allows the various layers to react to the IO's completion by
+ * registering callbacks. These callbacks in turn can translate a lower
+ * layer's result into a result understandable by a higher layer.
+ *
+ * During pgaio_io_prep_*() the IO is staged (i.e. prepared for execution but
+ * not submitted to the kernel). Unless in batchmode
+ * (c.f. pgaio_enter_batchmode()), the IO will also get submitted for
+ * execution. Note that, whether in batchmode or not, the IO might even
+ * complete before the functions return.
+ *
+ * After pgaio_io_prep_*() the AioHandle is "consumed" and may not be
+ * referenced by the IO issuing code. To e.g. wait for IO, references to the
+ * IO can be established with pgaio_io_get_wref() *before* pgaio_io_prep_*()
+ * is called. pgaio_wref_wait() can be used to wait for the IO to complete.
+ *
+ *
+ * To know if the IO [partially] succeeded or failed, a PgAioReturn * can be
+ * passed to pgaio_io_acquire(). Once the issuing backend has called
+ * pgaio_wref_wait(), the PgAioReturn contains information about whether the
+ * operation succeeded and details about the first failure, if any. The error
+ * can be raised / logged with pgaio_result_report().
+ *
+ * The lifetime of the memory pointed to by *ret needs to be at least as long
+ * as the passed in resowner. If the resowner releases resources before the IO
+ * completes (typically due to an error), the reference to *ret will be
+ * cleared. In case of resowner cleanup *ret will not be updated with the
+ * results of the IO operation.
+ */
+PgAioHandle *
+pgaio_io_acquire(struct ResourceOwnerData *resowner, PgAioReturn *ret)
+{
+ PgAioHandle *h;
+
+ while (true)
+ {
+ h = pgaio_io_acquire_nb(resowner, ret);
+
+ if (h != NULL)
+ return h;
+
+ /*
+ * Evidently all handles by this backend are in use. Just wait for
+ * some to complete.
+ */
+ pgaio_io_wait_for_free();
+ }
+}
+
+/*
+ * Acquire an AioHandle, returning NULL if no handles are free.
+ *
+ * See pgaio_io_acquire(). The only difference is that this function will return
+ * NULL if there are no idle handles, instead of blocking.
+ */
+PgAioHandle *
+pgaio_io_acquire_nb(struct ResourceOwnerData *resowner, PgAioReturn *ret)
+{
+ if (pgaio_my_backend->num_staged_ios >= PGAIO_SUBMIT_BATCH_SIZE)
+ {
+ Assert(pgaio_my_backend->num_staged_ios == PGAIO_SUBMIT_BATCH_SIZE);
+ pgaio_submit_staged();
+ }
+
+ if (pgaio_my_backend->handed_out_io)
+ elog(ERROR, "API violation: Only one IO can be handed out");
+
+ if (!dclist_is_empty(&pgaio_my_backend->idle_ios))
+ {
+ dlist_node *ion = dclist_pop_head_node(&pgaio_my_backend->idle_ios);
+ PgAioHandle *ioh = dclist_container(PgAioHandle, node, ion);
+
+ Assert(ioh->state == PGAIO_HS_IDLE);
+ Assert(ioh->owner_procno == MyProcNumber);
+
+ pgaio_io_update_state(ioh, PGAIO_HS_HANDED_OUT);
+ pgaio_my_backend->handed_out_io = ioh;
+
+ if (resowner)
+ pgaio_io_resowner_register(ioh);
+
+ if (ret)
+ {
+ ioh->report_return = ret;
+ ret->result.status = ARS_UNKNOWN;
+ }
+
+ return ioh;
+ }
+
+ return NULL;
+}
+
+/*
+ * Release IO handle that turned out to not be required.
+ *
+ * See pgaio_io_acquire() for more details.
+ */
+void
+pgaio_io_release(PgAioHandle *ioh)
+{
+ if (ioh == pgaio_my_backend->handed_out_io)
+ {
+ Assert(ioh->state == PGAIO_HS_HANDED_OUT);
+ Assert(ioh->resowner);
+
+ pgaio_my_backend->handed_out_io = NULL;
+ pgaio_io_reclaim(ioh);
+ }
+ else
+ {
+ elog(ERROR, "release in unexpected state");
+ }
+}
+
+/*
* Release IO handle during resource owner cleanup.
*/
void
pgaio_io_release_resowner(dlist_node *ioh_node, bool on_error)
{
+ PgAioHandle *ioh = dlist_container(PgAioHandle, resowner_node, ioh_node);
+
+ Assert(ioh->resowner);
+
+ ResourceOwnerForgetAioHandle(ioh->resowner, &ioh->resowner_node);
+ ioh->resowner = NULL;
+
+ switch (ioh->state)
+ {
+ case PGAIO_HS_IDLE:
+ elog(ERROR, "unexpected");
+ break;
+ case PGAIO_HS_HANDED_OUT:
+ Assert(ioh == pgaio_my_backend->handed_out_io || pgaio_my_backend->handed_out_io == NULL);
+
+ if (ioh == pgaio_my_backend->handed_out_io)
+ {
+ pgaio_my_backend->handed_out_io = NULL;
+ if (!on_error)
+ elog(WARNING, "leaked AIO handle");
+ }
+
+ pgaio_io_reclaim(ioh);
+ break;
+ case PGAIO_HS_DEFINED:
+ case PGAIO_HS_STAGED:
+ if (!on_error)
+ elog(WARNING, "AIO handle was not submitted");
+ pgaio_submit_staged();
+ break;
+ case PGAIO_HS_SUBMITTED:
+ case PGAIO_HS_COMPLETED_IO:
+ case PGAIO_HS_COMPLETED_SHARED:
+ case PGAIO_HS_COMPLETED_LOCAL:
+ /* this is expected to happen */
+ break;
+ }
+
+ /*
+ * Need to unregister the reporting of the IO's result; the memory it's
+ * referencing has likely gone away.
+ */
+ if (ioh->report_return)
+ ioh->report_return = NULL;
+}
+
+/*
+ * Add a [set of] flags to the IO.
+ *
+ * Note that this combines the passed-in flags with the already set flags,
+ * rather than overwriting them. This is to allow multiple callsites to set
+ * flags.
+ */
+void
+pgaio_io_set_flag(PgAioHandle *ioh, PgAioHandleFlags flag)
+{
+ Assert(ioh->state == PGAIO_HS_HANDED_OUT);
+
+ ioh->flags |= flag;
+}
+
+/*
+ * Returns an ID uniquely identifying the IO handle. This is only really
+ * useful for logging, as handles are reused across multiple IOs.
+ */
+int
+pgaio_io_get_id(PgAioHandle *ioh)
+{
+ Assert(ioh >= pgaio_ctl->io_handles &&
+ ioh < (pgaio_ctl->io_handles + pgaio_ctl->io_handle_count));
+ return ioh - pgaio_ctl->io_handles;
+}
+
+/*
+ * Return the ProcNumber for the process that can use an IO handle. The
+ * mapping from IO handles to PGPROCs is static, therefore this even works
+ * when the corresponding PGPROC is not in use.
+ */
+ProcNumber
+pgaio_io_get_owner(PgAioHandle *ioh)
+{
+ return ioh->owner_procno;
+}
+
+/*
+ * Return a wait reference for the IO. Only wait references can be used to
+ * wait for an IO's completion, as handles themselves can be reused after
+ * completion. See also the comment above pgaio_io_acquire().
+ */
+void
+pgaio_io_get_wref(PgAioHandle *ioh, PgAioWaitRef *iow)
+{
+ Assert(ioh->state == PGAIO_HS_HANDED_OUT ||
+ ioh->state == PGAIO_HS_DEFINED ||
+ ioh->state == PGAIO_HS_STAGED);
+ Assert(ioh->generation != 0);
+
+ iow->aio_index = ioh - pgaio_ctl->io_handles;
+ iow->generation_upper = (uint32) (ioh->generation >> 32);
+ iow->generation_lower = (uint32) ioh->generation;
+}
+
+
+
+/* --------------------------------------------------------------------------------
+ * Internal Functions related to PgAioHandle
+ * --------------------------------------------------------------------------------
+ */
+
+static inline void
+pgaio_io_update_state(PgAioHandle *ioh, PgAioHandleState new_state)
+{
+ pgaio_debug_io(DEBUG5, ioh,
+ "updating state to %s",
+ pgaio_io_state_get_name(new_state));
+
+ /*
+ * Ensure the changes signified by the new state are visible before the
+ * new state becomes visible.
+ */
+ pg_write_barrier();
+
+ ioh->state = new_state;
+}
+
+static void
+pgaio_io_resowner_register(PgAioHandle *ioh)
+{
+ Assert(!ioh->resowner);
+ Assert(CurrentResourceOwner);
+
+ ResourceOwnerRememberAioHandle(CurrentResourceOwner, &ioh->resowner_node);
+ ioh->resowner = CurrentResourceOwner;
+}
+
+/*
+ * Stage IO for execution and, if appropriate, submit it immediately.
+ *
+ * Should only be called from pgaio_io_prep_*().
+ */
+void
+pgaio_io_stage(PgAioHandle *ioh, PgAioOp op)
+{
+ bool needs_synchronous;
+
+ Assert(ioh->state == PGAIO_HS_HANDED_OUT);
+ Assert(pgaio_my_backend->handed_out_io == ioh);
+ Assert(pgaio_io_has_target(ioh));
+
+ ioh->op = op;
+ ioh->result = 0;
+
+ pgaio_io_update_state(ioh, PGAIO_HS_DEFINED);
+
+ /* allow a new IO to be staged */
+ pgaio_my_backend->handed_out_io = NULL;
+
+ pgaio_io_call_stage(ioh);
+
+ pgaio_io_update_state(ioh, PGAIO_HS_STAGED);
+
+ /*
+ * Synchronous execution has to be executed, well, synchronously, so check
+ * that first.
+ */
+ needs_synchronous = pgaio_io_needs_synchronous_execution(ioh);
+
+ pgaio_debug_io(DEBUG3, ioh,
+ "prepared (synchronous: %d, in_batch: %d)",
+ needs_synchronous, pgaio_my_backend->in_batchmode);
+
+ if (!needs_synchronous)
+ {
+ pgaio_my_backend->staged_ios[pgaio_my_backend->num_staged_ios++] = ioh;
+ Assert(pgaio_my_backend->num_staged_ios <= PGAIO_SUBMIT_BATCH_SIZE);
+
+ /*
+ * Unless code explicitly opted into batching IOs, submit the IO
+ * immediately.
+ */
+ if (!pgaio_my_backend->in_batchmode)
+ pgaio_submit_staged();
+ }
+ else
+ {
+ pgaio_io_prepare_submit(ioh);
+ pgaio_io_perform_synchronously(ioh);
+ }
+}
+
+bool
+pgaio_io_needs_synchronous_execution(PgAioHandle *ioh)
+{
+ /*
+ * If the caller said to execute the IO synchronously, do so.
+ *
+ * XXX: We could optimize the logic when to execute synchronously by first
+ * checking if there are other IOs in flight and only synchronously
+ * executing if not. Unclear whether that'll be sufficiently common to be
+ * worth worrying about.
+ */
+ if (ioh->flags & PGAIO_HF_SYNCHRONOUS)
+ return true;
+
+ /* Check if the IO method requires synchronous execution of IO */
+ if (pgaio_method_ops->needs_synchronous_execution)
+ return pgaio_method_ops->needs_synchronous_execution(ioh);
+
+ return false;
+}
+
+/*
+ * Handle IO being processed by IO method.
+ *
+ * Should be called by IO methods / synchronous IO execution, just before the
+ * IO is performed.
+ */
+void
+pgaio_io_prepare_submit(PgAioHandle *ioh)
+{
+ pgaio_io_update_state(ioh, PGAIO_HS_SUBMITTED);
+
+ dclist_push_tail(&pgaio_my_backend->in_flight_ios, &ioh->node);
+}
+
+/*
+ * Handle IO getting completed by a method.
+ *
+ * Should be called by IO methods / synchronous IO execution, just after the
+ * IO has been performed.
+ *
+ * Expects to be called in a critical section. We expect IOs to be usable for
+ * WAL etc, which requires being able to execute completion callbacks in a
+ * critical section.
+ */
+void
+pgaio_io_process_completion(PgAioHandle *ioh, int result)
+{
+ Assert(ioh->state == PGAIO_HS_SUBMITTED);
+
+ Assert(CritSectionCount > 0);
+
+ ioh->result = result;
+
+ pgaio_io_update_state(ioh, PGAIO_HS_COMPLETED_IO);
+
+ pgaio_io_call_inj(ioh, "AIO_PROCESS_COMPLETION_BEFORE_SHARED");
+
+ pgaio_io_call_complete_shared(ioh);
+
+ pgaio_io_update_state(ioh, PGAIO_HS_COMPLETED_SHARED);
+
+ /* condition variable broadcast ensures state is visible before wakeup */
+ ConditionVariableBroadcast(&ioh->cv);
+
+ /* contains call to pgaio_io_call_complete_local() */
+ if (ioh->owner_procno == MyProcNumber)
+ pgaio_io_reclaim(ioh);
+}
+
+/*
+ * Has the IO completed and thus the IO handle been reused?
+ *
+ * This is useful when waiting for IO completion at a low level (e.g. in an IO
+ * method's ->wait_one() callback).
+ */
+bool
+pgaio_io_was_recycled(PgAioHandle *ioh, uint64 ref_generation, PgAioHandleState *state)
+{
+ *state = ioh->state;
+ pg_read_barrier();
+
+ return ioh->generation != ref_generation;
+}
+
+/*
+ * Wait for IO to complete. External code should never use this; outside of
+ * the AIO subsystem, waits are only allowed via pgaio_wref_wait().
+ */
+static void
+pgaio_io_wait(PgAioHandle *ioh, uint64 ref_generation)
+{
+ PgAioHandleState state;
+ bool am_owner;
+
+ am_owner = ioh->owner_procno == MyProcNumber;
+
+ if (pgaio_io_was_recycled(ioh, ref_generation, &state))
+ return;
+
+ if (am_owner)
+ {
+ if (state != PGAIO_HS_SUBMITTED
+ && state != PGAIO_HS_COMPLETED_IO
+ && state != PGAIO_HS_COMPLETED_SHARED
+ && state != PGAIO_HS_COMPLETED_LOCAL)
+ {
+ elog(PANIC, "waiting for own IO in wrong state: %d",
+ state);
+ }
+ }
+
+ while (true)
+ {
+ if (pgaio_io_was_recycled(ioh, ref_generation, &state))
+ return;
+
+ switch (state)
+ {
+ case PGAIO_HS_IDLE:
+ case PGAIO_HS_HANDED_OUT:
+ elog(ERROR, "IO in wrong state: %d", state);
+ break;
+
+ case PGAIO_HS_SUBMITTED:
+
+ /*
+ * If we need to wait via the IO method, do so now. Don't
+ * check via the IO method if the issuing backend is executing
+ * the IO synchronously.
+ */
+ if (pgaio_method_ops->wait_one && !(ioh->flags & PGAIO_HF_SYNCHRONOUS))
+ {
+ pgaio_method_ops->wait_one(ioh, ref_generation);
+ continue;
+ }
+ /* fallthrough */
+
+ /* waiting for owner to submit */
+ case PGAIO_HS_DEFINED:
+ case PGAIO_HS_STAGED:
+ /* waiting for reaper to complete */
+ /* fallthrough */
+ case PGAIO_HS_COMPLETED_IO:
+ /* shouldn't be able to hit this otherwise */
+ Assert(IsUnderPostmaster);
+ /* ensure we're going to get woken up */
+ ConditionVariablePrepareToSleep(&ioh->cv);
+
+ while (!pgaio_io_was_recycled(ioh, ref_generation, &state))
+ {
+ if (state == PGAIO_HS_COMPLETED_SHARED ||
+ state == PGAIO_HS_COMPLETED_LOCAL)
+ break;
+ ConditionVariableSleep(&ioh->cv, WAIT_EVENT_AIO_IO_COMPLETION);
+ }
+
+ ConditionVariableCancelSleep();
+ break;
+
+ case PGAIO_HS_COMPLETED_SHARED:
+ case PGAIO_HS_COMPLETED_LOCAL:
+ /* see above */
+ if (am_owner)
+ pgaio_io_reclaim(ioh);
+ return;
+ }
+ }
+}
+
+/*
+ * Make IO handle ready to be reused after IO has completed or after the
+ * handle has been released without being used.
+ */
+static void
+pgaio_io_reclaim(PgAioHandle *ioh)
+{
+ /* This is only ok if it's our IO */
+ Assert(ioh->owner_procno == MyProcNumber);
+ Assert(ioh->state != PGAIO_HS_IDLE);
+
+ /*
+	 * It's a bit ugly, but right now the easiest place to put the execution
+	 * of local completion callbacks is this function, as we need to execute
+	 * local callbacks just before reclaiming at multiple callsites.
+ */
+ if (ioh->state == PGAIO_HS_COMPLETED_SHARED)
+ {
+ pgaio_io_call_complete_local(ioh);
+ pgaio_io_update_state(ioh, PGAIO_HS_COMPLETED_LOCAL);
+ }
+
+ pgaio_debug_io(DEBUG4, ioh,
+ "reclaiming: distilled_result: (status %s, id %u, error_data %d), raw_result: %d",
+ pgaio_result_status_string(ioh->distilled_result.status),
+ ioh->distilled_result.id,
+ ioh->distilled_result.error_data,
+ ioh->result);
+
+ /* if the IO has been defined, we might need to do more work */
+ if (ioh->state != PGAIO_HS_HANDED_OUT)
+ {
+ dclist_delete_from(&pgaio_my_backend->in_flight_ios, &ioh->node);
+
+ if (ioh->report_return)
+ {
+ ioh->report_return->result = ioh->distilled_result;
+ ioh->report_return->target_data = ioh->target_data;
+ }
+ }
+
+ if (ioh->resowner)
+ {
+ ResourceOwnerForgetAioHandle(ioh->resowner, &ioh->resowner_node);
+ ioh->resowner = NULL;
+ }
+
+ Assert(!ioh->resowner);
+
+ ioh->op = PGAIO_OP_INVALID;
+ ioh->target = PGAIO_TID_INVALID;
+ ioh->flags = 0;
+ ioh->num_callbacks = 0;
+ ioh->handle_data_len = 0;
+ ioh->report_return = NULL;
+ ioh->result = 0;
+ ioh->distilled_result.status = ARS_UNKNOWN;
+
+ /* XXX: the barrier is probably superfluous */
+ pg_write_barrier();
+ ioh->generation++;
+
+ pgaio_io_update_state(ioh, PGAIO_HS_IDLE);
+
+ /*
+	 * We push the IO to the head of the idle IO list; that seems more cache
+	 * efficient in cases where only a few IOs are used.
+ */
+ dclist_push_head(&pgaio_my_backend->idle_ios, &ioh->node);
+}
+
+/*
+ * Wait for an IO handle to become usable.
+ *
+ * This only really is useful for pgaio_io_acquire().
+ */
+static void
+pgaio_io_wait_for_free(void)
+{
+ int reclaimed = 0;
+
+ pgaio_debug(DEBUG2, "waiting for self with %d pending",
+ pgaio_my_backend->num_staged_ios);
+
+ /*
+	 * First check if any of our IOs actually have completed - when using the
+	 * worker method, that'll often be the case. We could do so as part of
+	 * the loop below, but that'd potentially lead us to wait for IO
+	 * submitted earlier.
+ */
+ for (int i = 0; i < io_max_concurrency; i++)
+ {
+ PgAioHandle *ioh = &pgaio_ctl->io_handles[pgaio_my_backend->io_handle_off + i];
+
+ if (ioh->state == PGAIO_HS_COMPLETED_SHARED)
+ {
+ pgaio_io_reclaim(ioh);
+ reclaimed++;
+ }
+ }
+
+ if (reclaimed > 0)
+ return;
+
+ /*
+ * If we have any unsubmitted IOs, submit them now. We'll start waiting in
+ * a second, so it's better they're in flight. This also addresses the
+ * edge-case that all IOs are unsubmitted.
+ */
+ if (pgaio_my_backend->num_staged_ios > 0)
+ pgaio_submit_staged();
+
+ if (dclist_count(&pgaio_my_backend->in_flight_ios) == 0)
+ elog(ERROR, "no free IOs despite no in-flight IOs");
+
+ /*
+ * Wait for the oldest in-flight IO to complete.
+ *
+ * XXX: Reusing the general IO wait is suboptimal, we don't need to wait
+ * for that specific IO to complete, we just need *any* IO to complete.
+ */
+ {
+ PgAioHandle *ioh = dclist_head_element(PgAioHandle, node,
+ &pgaio_my_backend->in_flight_ios);
+
+ switch (ioh->state)
+ {
+ /* should not be in in-flight list */
+ case PGAIO_HS_IDLE:
+ case PGAIO_HS_DEFINED:
+ case PGAIO_HS_HANDED_OUT:
+ case PGAIO_HS_STAGED:
+ case PGAIO_HS_COMPLETED_LOCAL:
+ elog(ERROR, "shouldn't get here with io:%d in state %d",
+ pgaio_io_get_id(ioh), ioh->state);
+ break;
+
+ case PGAIO_HS_COMPLETED_IO:
+ case PGAIO_HS_SUBMITTED:
+ pgaio_debug_io(DEBUG2, ioh,
+ "waiting for free io with %d in flight",
+ dclist_count(&pgaio_my_backend->in_flight_ios));
+
+ /*
+ * In a more general case this would be racy, because the
+ * generation could increase after we read ioh->state above.
+ * But we are only looking at IOs by the current backend and
+ * the IO can only be recycled by this backend.
+ */
+ pgaio_io_wait(ioh, ioh->generation);
+ break;
+
+ case PGAIO_HS_COMPLETED_SHARED:
+ /* it's possible that another backend just finished this IO */
+ pgaio_io_reclaim(ioh);
+ break;
+ }
+
+ if (dclist_count(&pgaio_my_backend->idle_ios) == 0)
+ elog(PANIC, "no idle IO after waiting for IO to terminate");
+ return;
+ }
+}
+
+/*
+ * Internal - code outside of AIO should never need this and it'd be hard for
+ * such code to be safe.
+ */
+static PgAioHandle *
+pgaio_io_from_wref(PgAioWaitRef *iow, uint64 *ref_generation)
+{
+ PgAioHandle *ioh;
+
+ Assert(iow->aio_index < pgaio_ctl->io_handle_count);
+
+ ioh = &pgaio_ctl->io_handles[iow->aio_index];
+
+ *ref_generation = ((uint64) iow->generation_upper) << 32 |
+ iow->generation_lower;
+
+ Assert(*ref_generation != 0);
+
+ return ioh;
+}
+
+static const char *
+pgaio_io_state_get_name(PgAioHandleState s)
+{
+#define PGAIO_HS_TOSTR_CASE(sym) case PGAIO_HS_##sym: return #sym
+ switch (s)
+ {
+ PGAIO_HS_TOSTR_CASE(IDLE);
+ PGAIO_HS_TOSTR_CASE(HANDED_OUT);
+ PGAIO_HS_TOSTR_CASE(DEFINED);
+ PGAIO_HS_TOSTR_CASE(STAGED);
+ PGAIO_HS_TOSTR_CASE(SUBMITTED);
+ PGAIO_HS_TOSTR_CASE(COMPLETED_IO);
+ PGAIO_HS_TOSTR_CASE(COMPLETED_SHARED);
+ PGAIO_HS_TOSTR_CASE(COMPLETED_LOCAL);
+ }
+#undef PGAIO_HS_TOSTR_CASE
+
+ return NULL; /* silence compiler */
+}
+
+const char *
+pgaio_io_get_state_name(PgAioHandle *ioh)
+{
+ return pgaio_io_state_get_name(ioh->state);
+}
+
+const char *
+pgaio_result_status_string(PgAioResultStatus rs)
+{
+ switch (rs)
+ {
+ case ARS_UNKNOWN:
+ return "UNKNOWN";
+ case ARS_OK:
+ return "OK";
+ case ARS_PARTIAL:
+ return "PARTIAL";
+ case ARS_ERROR:
+ return "ERROR";
+ }
+
+ return NULL; /* silence compiler */
+}
+
+
+
+/* --------------------------------------------------------------------------------
+ * Functions primarily related to IO Wait References
+ * --------------------------------------------------------------------------------
+ */
+
+/*
+ * Mark a wait reference as invalid
+ */
+void
+pgaio_wref_clear(PgAioWaitRef *iow)
+{
+ iow->aio_index = PG_UINT32_MAX;
+}
+
+/* Is the wait reference valid? */
+bool
+pgaio_wref_valid(PgAioWaitRef *iow)
+{
+ return iow->aio_index != PG_UINT32_MAX;
+}
+
+/*
+ * Similar to pgaio_io_get_id(), just for wait references.
+ */
+int
+pgaio_wref_get_id(PgAioWaitRef *iow)
+{
+ Assert(pgaio_wref_valid(iow));
+ return iow->aio_index;
+}
+
+/*
+ * Wait for the IO to have completed. Can be called in any process, not just
+ * in the issuing backend.
+ */
+void
+pgaio_wref_wait(PgAioWaitRef *iow)
+{
+ uint64 ref_generation;
+ PgAioHandle *ioh;
+
+ ioh = pgaio_io_from_wref(iow, &ref_generation);
+
+ pgaio_io_wait(ioh, ref_generation);
+}
+
+/*
+ * Check if the referenced IO completed, without blocking.
+ */
+bool
+pgaio_wref_check_done(PgAioWaitRef *iow)
+{
+ uint64 ref_generation;
+ PgAioHandleState state;
+ bool am_owner;
+ PgAioHandle *ioh;
+
+ ioh = pgaio_io_from_wref(iow, &ref_generation);
+
+ if (pgaio_io_was_recycled(ioh, ref_generation, &state))
+ return true;
+
+ if (state == PGAIO_HS_IDLE)
+ return true;
+
+ am_owner = ioh->owner_procno == MyProcNumber;
+
+ if (state == PGAIO_HS_COMPLETED_SHARED ||
+ state == PGAIO_HS_COMPLETED_LOCAL)
+ {
+ if (am_owner)
+ pgaio_io_reclaim(ioh);
+ return true;
+ }
+
+ /*
+ * XXX: It likely would be worth checking in with the io method, to give
+ * the IO method a chance to check if there are completion events queued.
+ */
+
+ return false;
+}
+
+
+
+/* --------------------------------------------------------------------------------
+ * Actions on multiple IOs.
+ * --------------------------------------------------------------------------------
+ */
+
+/*
+ * Submit IOs in batches going forward.
+ *
+ * Submitting multiple IOs at once can be substantially faster than doing so
+ * one-by-one. At the same time, submitting multiple IOs at once requires more
+ * care to avoid deadlocks.
+ *
+ * Consider backend A staging an IO for buffer 1 and then trying to start IO
+ * on buffer 2, while backend B does the inverse. If A submitted the IO before
+ * moving on to buffer 2, this works just fine, B will wait for the IO to
+ * complete. But if batching were used, each backend will wait for IO that has
+ * not yet been submitted to complete, i.e. forever.
+ *
+ * End batch submission mode with pgaio_exit_batchmode(). (Throwing errors is
+ * allowed; error recovery will end the batch.)
+ *
+ * To avoid deadlocks, code needs to ensure that it will not wait for another
+ * backend while there is unsubmitted IO. E.g. by using conditional lock
+ * acquisition when acquiring buffer locks. To check if there currently are
+ * staged IOs, call pgaio_have_staged() and to submit all staged IOs call
+ * pgaio_submit_staged().
+ *
+ * It is not allowed to enter batchmode while already in batchmode. That is
+ * unlikely to ever be needed, as code needs to be explicitly aware of being
+ * called in batchmode to avoid the deadlock risks explained above.
+ *
+ * Note that IOs may get submitted before pgaio_exit_batchmode() is called,
+ * e.g. because too many IOs have been staged or because pgaio_submit_staged()
+ * was called.
+ */
+void
+pgaio_enter_batchmode(void)
+{
+ if (pgaio_my_backend->in_batchmode)
+ elog(ERROR, "starting batch while batch already in progress");
+ pgaio_my_backend->in_batchmode = true;
}
/*
+ * Stop submitting IOs in batches.
+ */
+void
+pgaio_exit_batchmode(void)
+{
+ Assert(pgaio_my_backend->in_batchmode);
+
+ pgaio_submit_staged();
+ pgaio_my_backend->in_batchmode = false;
+}
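The deadlock-avoidance rules described above translate into a usage pattern roughly like the following sketch (not taken from this commit; `ConditionalLockSomething`/`LockSomething` are hypothetical stand-ins for conditional vs. blocking lock acquisition):

```c
pgaio_enter_batchmode();

for (int i = 0; i < nblocks; i++)
{
	/* must not block on another backend while IOs are staged ... */
	if (!ConditionalLockSomething(i))
	{
		/* ... so submit staged IOs before any unconditional wait */
		if (pgaio_have_staged())
			pgaio_submit_staged();
		LockSomething(i);
	}

	/* stage one IO for block i here */
}

pgaio_exit_batchmode();		/* submits any remaining staged IOs */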
+
+/*
+ * Are there staged but unsubmitted IOs?
+ *
+ * See comment above pgaio_enter_batchmode() for why code may need to check if
+ * there is IO in that state.
+ */
+bool
+pgaio_have_staged(void)
+{
+ Assert(pgaio_my_backend->in_batchmode ||
+ pgaio_my_backend->num_staged_ios == 0);
+ return pgaio_my_backend->num_staged_ios > 0;
+}
+
+/*
+ * Submit all staged but not yet submitted IOs.
+ *
+ * Unless in batch mode, this never needs to be called, as IOs get submitted
+ * as soon as possible. While in batchmode pgaio_submit_staged() can be called
+ * before waiting on another backend, to avoid the risk of deadlocks. See
+ * pgaio_enter_batchmode().
+ */
+void
+pgaio_submit_staged(void)
+{
+ int total_submitted = 0;
+ int did_submit;
+
+ if (pgaio_my_backend->num_staged_ios == 0)
+ return;
+
+ START_CRIT_SECTION();
+
+ did_submit = pgaio_method_ops->submit(pgaio_my_backend->num_staged_ios,
+ pgaio_my_backend->staged_ios);
+
+ END_CRIT_SECTION();
+
+ total_submitted += did_submit;
+
+ Assert(total_submitted == did_submit);
+
+ pgaio_my_backend->num_staged_ios = 0;
+
+ pgaio_debug(DEBUG4,
+ "aio: submitted %d IOs",
+ total_submitted);
+}
+
+
+
+/* --------------------------------------------------------------------------------
+ * Other
+ * --------------------------------------------------------------------------------
+ */
+
+
+/*
* Perform AIO related cleanup after an error.
*
* This should be called early in the error recovery paths, as later steps may
@@ -50,6 +1049,22 @@ pgaio_io_release_resowner(dlist_node *ioh_node, bool on_error)
void
pgaio_error_cleanup(void)
{
+ /*
+ * It is possible that code errored out after pgaio_enter_batchmode() but
+ * before pgaio_exit_batchmode() was called. In that case we need to
+ * submit the IO now.
+ */
+ if (pgaio_my_backend->in_batchmode)
+ {
+ pgaio_my_backend->in_batchmode = false;
+
+ pgaio_submit_staged();
+ }
+
+ /*
+ * As we aren't in batchmode, there shouldn't be any unsubmitted IOs.
+ */
+ Assert(pgaio_my_backend->num_staged_ios == 0);
}
/*
@@ -62,11 +1077,86 @@ pgaio_error_cleanup(void)
void
AtEOXact_Aio(bool is_commit)
{
+ /*
+ * We should never be in batch mode at transactional boundaries. In case
+ * an error was thrown while in batch mode, pgaio_error_cleanup() should
+ * have exited batchmode.
+ *
+ * In case we are in batchmode somehow, make sure to submit all staged
+ * IOs, as other backends may need them to complete in order to continue.
+ */
+ if (pgaio_my_backend->in_batchmode)
+ {
+ pgaio_error_cleanup();
+ elog(WARNING, "open AIO batch at end of (sub-)transaction");
+ }
+
+ /*
+ * As we aren't in batchmode, there shouldn't be any unsubmitted IOs.
+ */
+ Assert(pgaio_my_backend->num_staged_ios == 0);
+}
+
+/*
+ * Before closing the fd, submit all staged but not yet submitted IOs using
+ * it; otherwise the IO would end up targeting something bogus.
+ */
+void
+pgaio_closing_fd(int fd)
+{
+ /*
+ * Might be called before AIO is initialized or in a subprocess that
+ * doesn't use AIO.
+ */
+ if (!pgaio_my_backend)
+ return;
+
+ /*
+ * For now just submit all staged IOs - we could be more selective, but
+ * it's probably not worth it.
+ */
+ pgaio_submit_staged();
+}
+
+/*
+ * Registered as before_shmem_exit() callback in pgaio_init_backend()
+ */
+void
+pgaio_shutdown(int code, Datum arg)
+{
+ Assert(pgaio_my_backend);
+ Assert(!pgaio_my_backend->handed_out_io);
+
+ /* first clean up resources as we would at a transaction boundary */
+ AtEOXact_Aio(code == 0);
+
+ /*
+ * Before exiting, make sure that all IOs are finished. That has two main
+ * purposes:
+ *
+ * - Some kernel-level AIO mechanisms don't deal well with the issuer of
+ * an AIO exiting before IO completed
+ *
+ * - It'd be confusing to see partially finished IOs in stats views etc
+ */
+ while (!dclist_is_empty(&pgaio_my_backend->in_flight_ios))
+ {
+ PgAioHandle *ioh = dclist_head_element(PgAioHandle, node, &pgaio_my_backend->in_flight_ios);
+
+ /* see comment in pgaio_io_wait_for_free() about raciness */
+ pgaio_io_wait(ioh, ioh->generation);
+ }
+
+ pgaio_my_backend = NULL;
}
void
assign_io_method(int newval, void *extra)
{
+ Assert(newval >= 0 && newval < lengthof(io_method_options));
+ Assert(pgaio_method_ops_table[newval] != NULL);
+
+ pgaio_method_ops = pgaio_method_ops_table[newval];
}
bool
@@ -88,3 +1178,43 @@ check_io_max_concurrency(int *newval, void **extra, GucSource source)
return true;
}
+
+
+
+/* --------------------------------------------------------------------------------
+ * Injection point support
+ * --------------------------------------------------------------------------------
+ */
+
+#ifdef USE_INJECTION_POINTS
+
+/*
+ * Call injection point with support for pgaio_inj_io_get().
+ */
+void
+pgaio_io_call_inj(PgAioHandle *ioh, const char *injection_point)
+{
+ pgaio_inj_cur_handle = ioh;
+
+ PG_TRY();
+ {
+ InjectionPointCached(injection_point);
+ }
+ PG_FINALLY();
+ {
+ pgaio_inj_cur_handle = NULL;
+ }
+ PG_END_TRY();
+}
+
+/*
+ * Return IO associated with injection point invocation. This is only needed
+ * as injection points currently don't support arguments.
+ */
+PgAioHandle *
+pgaio_inj_io_get(void)
+{
+ return pgaio_inj_cur_handle;
+}
+
+#endif
diff --git a/src/backend/storage/aio/aio_callback.c b/src/backend/storage/aio/aio_callback.c
new file mode 100644
index 00000000000..d5a2cca28f1
--- /dev/null
+++ b/src/backend/storage/aio/aio_callback.c
@@ -0,0 +1,308 @@
+/*-------------------------------------------------------------------------
+ *
+ * aio_callback.c
+ * AIO - Functionality related to callbacks that can be registered on IO
+ * Handles
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/backend/storage/aio/aio_callback.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "miscadmin.h"
+#include "storage/aio.h"
+#include "storage/aio_internal.h"
+
+
+/* just to have something to put into aio_handle_cbs */
+static const PgAioHandleCallbacks aio_invalid_cb = {0};
+
+typedef struct PgAioHandleCallbacksEntry
+{
+ const PgAioHandleCallbacks *const cb;
+ const char *const name;
+} PgAioHandleCallbacksEntry;
+
+/*
+ * Callback definition for the callbacks that can be registered on an IO
+ * handle. See PgAioHandleCallbackID's definition for an explanation for why
+ * callbacks are not identified by a pointer.
+ */
+static const PgAioHandleCallbacksEntry aio_handle_cbs[] = {
+#define CALLBACK_ENTRY(id, callback) [id] = {.cb = &callback, .name = #callback}
+ CALLBACK_ENTRY(PGAIO_HCB_INVALID, aio_invalid_cb),
+#undef CALLBACK_ENTRY
+};
+
+
+
+/* --------------------------------------------------------------------------------
+ * Public callback related functions operating on IO Handles
+ * --------------------------------------------------------------------------------
+ */
+
+/*
+ * Register callback for the IO handle.
+ *
+ * Only a limited number (PGAIO_HANDLE_MAX_CALLBACKS) of callbacks can be
+ * registered for each IO.
+ *
+ * Callbacks need to be registered before [indirectly] calling
+ * pgaio_io_prep_*(), as the IO may be executed immediately.
+ *
+ * A callback can be passed a small bit of data, e.g. to indicate whether to
+ * zero a buffer if it is invalid.
+ *
+ *
+ * Note that callbacks are executed in critical sections. This is necessary
+ * to be able to execute IO in critical sections (consider e.g. WAL
+ * logging). To perform AIO we first need to acquire a handle, which, if there
+ * are no free handles, requires waiting for IOs to complete and executing
+ * their completion callbacks.
+ *
+ * Callbacks may be executed in the issuing backend but also in another
+ * backend (because that backend is waiting for the IO) or in IO workers (if
+ * io_method=worker is used).
+ *
+ *
+ * See PgAioHandleCallbackID's definition for an explanation for why
+ * callbacks are not identified by a pointer.
+ */
+void
+pgaio_io_register_callbacks(PgAioHandle *ioh, PgAioHandleCallbackID cb_id,
+ uint8 cb_data)
+{
+ const PgAioHandleCallbacksEntry *ce;
+
+ if (cb_id >= lengthof(aio_handle_cbs))
+ elog(ERROR, "callback %d is out of range", cb_id);
+ ce = &aio_handle_cbs[cb_id];
+ if (ce->cb->complete_shared == NULL &&
+ ce->cb->complete_local == NULL)
+ elog(ERROR, "callback %d does not have a completion callback", cb_id);
+ if (ioh->num_callbacks >= PGAIO_HANDLE_MAX_CALLBACKS)
+ elog(PANIC, "too many callbacks, the max is %d",
+ PGAIO_HANDLE_MAX_CALLBACKS);
+ ioh->callbacks[ioh->num_callbacks] = cb_id;
+ ioh->callbacks_data[ioh->num_callbacks] = cb_data;
+
+ pgaio_debug_io(DEBUG3, ioh,
+ "adding cb #%d, id %d/%s",
+ ioh->num_callbacks + 1,
+ cb_id, ce->name);
+
+ ioh->num_callbacks++;
+}
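A provider of a callback (e.g. the smgr target added in a later commit) would define a `PgAioHandleCallbacks` instance, add it to `aio_handle_cbs[]` via `CALLBACK_ENTRY`, and register it at the IO's callsite. A sketch with hypothetical names (the `stage`/`complete_shared`/`report` members are the ones referenced throughout this file; everything `demo_*` is invented for illustration):

```c
/* Hypothetical callback definition; added to aio_handle_cbs[] above. */
static const PgAioHandleCallbacks aio_demo_buffer_readv_cb = {
	.stage = demo_buffer_readv_stage,				/* before submission */
	.complete_shared = demo_buffer_readv_complete,	/* may run in any backend */
	.report = demo_buffer_readv_report,				/* errors -> ereport */
};

/* at the callsite, before pgaio_io_prep_*() is [indirectly] called: */
pgaio_io_register_callbacks(ioh, PGAIO_HCB_DEMO_BUFFER_READV, 0);
```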
+
+/*
+ * Associate an array of data with the Handle. This is e.g. useful to
+ * transport knowledge about which buffers a multi-block IO affects to the
+ * completion callbacks.
+ *
+ * Right now this can be done only once for each IO, even though multiple
+ * callbacks can be registered. There aren't any known usecases requiring more
+ * and the required amount of shared memory does add up, so it doesn't seem
+ * worth multiplying memory usage by PGAIO_HANDLE_MAX_CALLBACKS.
+ */
+void
+pgaio_io_set_handle_data_64(PgAioHandle *ioh, uint64 *data, uint8 len)
+{
+ Assert(ioh->state == PGAIO_HS_HANDED_OUT);
+ Assert(ioh->handle_data_len == 0);
+ Assert(len <= PG_IOV_MAX);
+
+ for (int i = 0; i < len; i++)
+ pgaio_ctl->handle_data[ioh->iovec_off + i] = data[i];
+ ioh->handle_data_len = len;
+}
+
+/*
+ * Convenience version of pgaio_io_set_handle_data_64() that converts a 32bit
+ * array to a 64bit array. Without it, callers would end up having to
+ * open-code the equivalent conversion.
+ */
+void
+pgaio_io_set_handle_data_32(PgAioHandle *ioh, uint32 *data, uint8 len)
+{
+ Assert(ioh->state == PGAIO_HS_HANDED_OUT);
+ Assert(ioh->handle_data_len == 0);
+ Assert(len <= PG_IOV_MAX);
+
+ for (int i = 0; i < len; i++)
+ pgaio_ctl->handle_data[ioh->iovec_off + i] = data[i];
+ ioh->handle_data_len = len;
+}
+
+/*
+ * Return data set with pgaio_io_set_handle_data_*().
+ */
+uint64 *
+pgaio_io_get_handle_data(PgAioHandle *ioh, uint8 *len)
+{
+ Assert(ioh->handle_data_len > 0);
+
+ *len = ioh->handle_data_len;
+
+ return &pgaio_ctl->handle_data[ioh->iovec_off];
+}
+
+
+
+/* --------------------------------------------------------------------------------
+ * Public IO Result related functions
+ * --------------------------------------------------------------------------------
+ */
+
+void
+pgaio_result_report(PgAioResult result, const PgAioTargetData *target_data, int elevel)
+{
+ PgAioHandleCallbackID cb_id = result.id;
+ const PgAioHandleCallbacksEntry *ce = &aio_handle_cbs[cb_id];
+
+ Assert(result.status != ARS_UNKNOWN);
+ Assert(result.status != ARS_OK);
+
+ if (ce->cb->report == NULL)
+ elog(ERROR, "callback %d/%s does not have report callback",
+ result.id, ce->name);
+
+ ce->cb->report(result, target_data, elevel);
+}
+
+
+
+/* --------------------------------------------------------------------------------
+ * Internal callback related functions operating on IO Handles
+ * --------------------------------------------------------------------------------
+ */
+
+/*
+ * Internal function which invokes ->stage for all the registered callbacks.
+ */
+void
+pgaio_io_call_stage(PgAioHandle *ioh)
+{
+ Assert(ioh->target > PGAIO_TID_INVALID && ioh->target < PGAIO_TID_COUNT);
+ Assert(ioh->op > PGAIO_OP_INVALID && ioh->op < PGAIO_OP_COUNT);
+
+ for (int i = ioh->num_callbacks; i > 0; i--)
+ {
+ PgAioHandleCallbackID cb_id = ioh->callbacks[i - 1];
+ uint8 cb_data = ioh->callbacks_data[i - 1];
+ const PgAioHandleCallbacksEntry *ce = &aio_handle_cbs[cb_id];
+
+ if (!ce->cb->stage)
+ continue;
+
+ pgaio_debug_io(DEBUG3, ioh,
+ "calling cb #%d %d/%s->stage(%u)",
+ i, cb_id, ce->name, cb_data);
+ ce->cb->stage(ioh, cb_data);
+ }
+}
+
+/*
+ * Internal function which invokes ->complete_shared for all the registered
+ * callbacks.
+ */
+void
+pgaio_io_call_complete_shared(PgAioHandle *ioh)
+{
+ PgAioResult result;
+
+ START_CRIT_SECTION();
+
+ Assert(ioh->target > PGAIO_TID_INVALID && ioh->target < PGAIO_TID_COUNT);
+ Assert(ioh->op > PGAIO_OP_INVALID && ioh->op < PGAIO_OP_COUNT);
+
+ result.status = ARS_OK; /* low level IO is always considered OK */
+ result.result = ioh->result;
+ result.id = PGAIO_HCB_INVALID;
+ result.error_data = 0;
+
+ /*
+ * Call callbacks with the last registered (innermost) callback first.
+ * Each callback can modify the result forwarded to the next callback.
+ */
+ for (int i = ioh->num_callbacks; i > 0; i--)
+ {
+ PgAioHandleCallbackID cb_id = ioh->callbacks[i - 1];
+ uint8 cb_data = ioh->callbacks_data[i - 1];
+ const PgAioHandleCallbacksEntry *ce = &aio_handle_cbs[cb_id];
+
+ if (!ce->cb->complete_shared)
+ continue;
+
+ pgaio_debug_io(DEBUG4, ioh,
+ "calling cb #%d, id %d/%s->complete_shared(%u) with distilled result: (status %s, id %u, error_data %d, result %d)",
+ i, cb_id, ce->name,
+ cb_data,
+ pgaio_result_status_string(result.status),
+ result.id, result.error_data, result.result);
+ result = ce->cb->complete_shared(ioh, result, cb_data);
+ }
+
+ ioh->distilled_result = result;
+
+ pgaio_debug_io(DEBUG3, ioh,
+ "after shared completion: distilled result: (status %s, id %u, error_data: %d, result %d), raw_result: %d",
+ pgaio_result_status_string(result.status),
+ result.id, result.error_data, result.result,
+ ioh->result);
+
+ END_CRIT_SECTION();
+}
+
+/*
+ * Internal function which invokes ->complete_local for all the registered
+ * callbacks.
+ *
+ * XXX: It'd be nice to deduplicate with pgaio_io_call_complete_shared().
+ */
+void
+pgaio_io_call_complete_local(PgAioHandle *ioh)
+{
+ PgAioResult result;
+
+ START_CRIT_SECTION();
+
+ Assert(ioh->target > PGAIO_TID_INVALID && ioh->target < PGAIO_TID_COUNT);
+ Assert(ioh->op > PGAIO_OP_INVALID && ioh->op < PGAIO_OP_COUNT);
+
+ /* start with distilled result from shared callback */
+ result = ioh->distilled_result;
+
+ for (int i = ioh->num_callbacks; i > 0; i--)
+ {
+ PgAioHandleCallbackID cb_id = ioh->callbacks[i - 1];
+ uint8 cb_data = ioh->callbacks_data[i - 1];
+ const PgAioHandleCallbacksEntry *ce = &aio_handle_cbs[cb_id];
+
+ if (!ce->cb->complete_local)
+ continue;
+
+ pgaio_debug_io(DEBUG4, ioh,
+ "calling cb #%d, id %d/%s->complete_local(%u) with distilled result: status %s, id %u, error_data %d, result %d",
+ i, cb_id, ce->name, cb_data,
+ pgaio_result_status_string(result.status),
+ result.id, result.error_data, result.result);
+ result = ce->cb->complete_local(ioh, result, cb_data);
+ }
+
+ /*
+ * Note that we don't save the result in ioh->distilled_result, the local
+ * callback's result should not ever matter to other waiters.
+ */
+ pgaio_debug_io(DEBUG3, ioh,
+ "after local completion: distilled result: (status %s, id %u, error_data %d, result %d), raw_result: %d",
+ pgaio_result_status_string(result.status),
+ result.id, result.error_data, result.result,
+ ioh->result);
+
+ END_CRIT_SECTION();
+}
diff --git a/src/backend/storage/aio/aio_init.c b/src/backend/storage/aio/aio_init.c
index aeacc144149..6fe55510fae 100644
--- a/src/backend/storage/aio/aio_init.c
+++ b/src/backend/storage/aio/aio_init.c
@@ -14,24 +14,222 @@
#include "postgres.h"
+#include "miscadmin.h"
+#include "storage/aio.h"
+#include "storage/aio_internal.h"
#include "storage/aio_subsys.h"
+#include "storage/ipc.h"
+#include "storage/proc.h"
+#include "storage/shmem.h"
+#include "utils/guc.h"
+static Size
+AioCtlShmemSize(void)
+{
+ Size sz;
+
+ /* pgaio_ctl itself */
+ sz = offsetof(PgAioCtl, io_handles);
+
+ return sz;
+}
+
+static uint32
+AioProcs(void)
+{
+ return MaxBackends + NUM_AUXILIARY_PROCS;
+}
+
+static Size
+AioBackendShmemSize(void)
+{
+ return mul_size(AioProcs(), sizeof(PgAioBackend));
+}
+
+static Size
+AioHandleShmemSize(void)
+{
+ Size sz;
+
+ /* verify AioChooseMaxConcurrency() did its thing */
+ Assert(io_max_concurrency > 0);
+
+ /* io handles */
+ sz = mul_size(AioProcs(),
+ mul_size(io_max_concurrency, sizeof(PgAioHandle)));
+
+ return sz;
+}
+
+static Size
+AioHandleIOVShmemSize(void)
+{
+ /*
+ * Each IO handle can have a PG_IOV_MAX long iovec.
+ *
+ * XXX: Right now the amount of space available for each IO is PG_IOV_MAX.
+ * While it's tempting to use the io_combine_limit GUC, that's
+ * PGC_USERSET, so we can't allocate shared memory based on that.
+ */
+ return mul_size(sizeof(struct iovec),
+ mul_size(mul_size(PG_IOV_MAX, AioProcs()),
+ io_max_concurrency));
+}
+
+static Size
+AioHandleDataShmemSize(void)
+{
+ /* each buffer referenced by an iovec can have associated data */
+ return mul_size(sizeof(uint64),
+ mul_size(mul_size(PG_IOV_MAX, AioProcs()),
+ io_max_concurrency));
+}
+
+/*
+ * Choose a suitable value for io_max_concurrency.
+ *
+ * It's unlikely that we could have more IOs in flight than buffers that we
+ * would be allowed to pin.
+ *
+ * On the upper end, apply a cap too - just because shared_buffers is large,
+ * it doesn't make sense to have millions of buffers undergo IO concurrently.
+ */
+static int
+AioChooseMaxConcurrency(void)
+{
+ uint32 max_backends;
+ int max_proportional_pins;
+
+ /* Similar logic to LimitAdditionalPins() */
+ max_backends = MaxBackends + NUM_AUXILIARY_PROCS;
+ max_proportional_pins = NBuffers / max_backends;
+
+ max_proportional_pins = Max(max_proportional_pins, 1);
+
+ /* apply upper limit */
+ return Min(max_proportional_pins, 64);
+}
+
Size
AioShmemSize(void)
{
Size sz = 0;
+ /*
+ * We prefer to report this value's source as PGC_S_DYNAMIC_DEFAULT.
+ * However, if the DBA explicitly set io_max_concurrency = -1 in the
+ * config file, then PGC_S_DYNAMIC_DEFAULT will fail to override that and
+ * we must force the matter with PGC_S_OVERRIDE.
+ */
+ if (io_max_concurrency == -1)
+ {
+ char buf[32];
+
+ snprintf(buf, sizeof(buf), "%d", AioChooseMaxConcurrency());
+ SetConfigOption("io_max_concurrency", buf, PGC_POSTMASTER,
+ PGC_S_DYNAMIC_DEFAULT);
+ if (io_max_concurrency == -1) /* failed to apply it? */
+ SetConfigOption("io_max_concurrency", buf, PGC_POSTMASTER,
+ PGC_S_OVERRIDE);
+ }
+
+ sz = add_size(sz, AioCtlShmemSize());
+ sz = add_size(sz, AioBackendShmemSize());
+ sz = add_size(sz, AioHandleShmemSize());
+ sz = add_size(sz, AioHandleIOVShmemSize());
+ sz = add_size(sz, AioHandleDataShmemSize());
+
+ /* Reserve space for method specific resources. */
+ if (pgaio_method_ops->shmem_size)
+ sz = add_size(sz, pgaio_method_ops->shmem_size());
+
return sz;
}
void
AioShmemInit(void)
{
+ bool found;
+ uint32 io_handle_off = 0;
+ uint32 iovec_off = 0;
+ uint32 per_backend_iovecs = io_max_concurrency * PG_IOV_MAX;
+
+ pgaio_ctl = (PgAioCtl *)
+ ShmemInitStruct("AioCtl", AioCtlShmemSize(), &found);
+
+ if (found)
+ goto out;
+
+ memset(pgaio_ctl, 0, AioCtlShmemSize());
+
+ pgaio_ctl->io_handle_count = AioProcs() * io_max_concurrency;
+ pgaio_ctl->iovec_count = AioProcs() * per_backend_iovecs;
+
+ pgaio_ctl->backend_state = (PgAioBackend *)
+ ShmemInitStruct("AioBackend", AioBackendShmemSize(), &found);
+
+ pgaio_ctl->io_handles = (PgAioHandle *)
+ ShmemInitStruct("AioHandle", AioHandleShmemSize(), &found);
+
+ pgaio_ctl->iovecs = (struct iovec *)
+ ShmemInitStruct("AioHandleIOV", AioHandleIOVShmemSize(), &found);
+ pgaio_ctl->handle_data = (uint64 *)
+ ShmemInitStruct("AioHandleData", AioHandleDataShmemSize(), &found);
+
+ for (int procno = 0; procno < AioProcs(); procno++)
+ {
+ PgAioBackend *bs = &pgaio_ctl->backend_state[procno];
+
+ bs->io_handle_off = io_handle_off;
+ io_handle_off += io_max_concurrency;
+
+ dclist_init(&bs->idle_ios);
+ memset(bs->staged_ios, 0, sizeof(PgAioHandle *) * PGAIO_SUBMIT_BATCH_SIZE);
+ dclist_init(&bs->in_flight_ios);
+
+ /* initialize per-backend IOs */
+ for (int i = 0; i < io_max_concurrency; i++)
+ {
+ PgAioHandle *ioh = &pgaio_ctl->io_handles[bs->io_handle_off + i];
+
+ ioh->generation = 1;
+ ioh->owner_procno = procno;
+ ioh->iovec_off = iovec_off;
+ ioh->handle_data_len = 0;
+ ioh->report_return = NULL;
+ ioh->resowner = NULL;
+ ioh->num_callbacks = 0;
+ ioh->distilled_result.status = ARS_UNKNOWN;
+ ioh->flags = 0;
+
+ ConditionVariableInit(&ioh->cv);
+
+ dclist_push_tail(&bs->idle_ios, &ioh->node);
+ iovec_off += PG_IOV_MAX;
+ }
+ }
+
+out:
+ /* Initialize IO method specific resources. */
+ if (pgaio_method_ops->shmem_init)
+ pgaio_method_ops->shmem_init(!found);
}
void
pgaio_init_backend(void)
{
+ /* shouldn't be initialized twice */
+ Assert(!pgaio_my_backend);
+
+ if (MyProc == NULL || MyProcNumber >= AioProcs())
+ elog(ERROR, "aio requires a normal PGPROC");
+
+ pgaio_my_backend = &pgaio_ctl->backend_state[MyProcNumber];
+
+ if (pgaio_method_ops->init_backend)
+ pgaio_method_ops->init_backend();
+
+ before_shmem_exit(pgaio_shutdown, 0);
}
diff --git a/src/backend/storage/aio/aio_io.c b/src/backend/storage/aio/aio_io.c
new file mode 100644
index 00000000000..36d2c1f492d
--- /dev/null
+++ b/src/backend/storage/aio/aio_io.c
@@ -0,0 +1,184 @@
+/*-------------------------------------------------------------------------
+ *
+ * aio_io.c
+ * AIO - Low Level IO Handling
+ *
+ * Functions related to associating IO operations to IO Handles and IO-method
+ * independent support functions for actually performing IO.
+ *
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/backend/storage/aio/aio_io.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "miscadmin.h"
+#include "storage/aio.h"
+#include "storage/aio_internal.h"
+#include "storage/fd.h"
+#include "utils/wait_event.h"
+
+
+static void pgaio_io_before_prep(PgAioHandle *ioh);
+
+
+
+/* --------------------------------------------------------------------------------
+ * Public IO related functions operating on IO Handles
+ * --------------------------------------------------------------------------------
+ */
+
+/*
+ * Scatter/gather IO needs to associate an iovec with the Handle. To support
+ * worker mode this data needs to be in shared memory.
+ */
+int
+pgaio_io_get_iovec(PgAioHandle *ioh, struct iovec **iov)
+{
+ Assert(ioh->state == PGAIO_HS_HANDED_OUT);
+
+ *iov = &pgaio_ctl->iovecs[ioh->iovec_off];
+
+ return PG_IOV_MAX;
+}
+
+PgAioOp
+pgaio_io_get_op(PgAioHandle *ioh)
+{
+ return ioh->op;
+}
+
+PgAioOpData *
+pgaio_io_get_op_data(PgAioHandle *ioh)
+{
+ return &ioh->op_data;
+}
+
+
+
+/* --------------------------------------------------------------------------------
+ * "Preparation" routines for individual IO operations
+ *
+ * These are called by the code actually initiating an IO, to associate the IO
+ * specific data with an AIO handle.
+ *
+ * Each of the preparation routines first needs to call
+ * pgaio_io_before_prep(), then fill IO specific fields in the handle and
+ * finally call pgaio_io_stage().
+ * --------------------------------------------------------------------------------
+ */
+
+void
+pgaio_io_prep_readv(PgAioHandle *ioh,
+ int fd, int iovcnt, uint64 offset)
+{
+ pgaio_io_before_prep(ioh);
+
+ ioh->op_data.read.fd = fd;
+ ioh->op_data.read.offset = offset;
+ ioh->op_data.read.iov_length = iovcnt;
+
+ pgaio_io_stage(ioh, PGAIO_OP_READV);
+}
+
+void
+pgaio_io_prep_writev(PgAioHandle *ioh,
+ int fd, int iovcnt, uint64 offset)
+{
+ pgaio_io_before_prep(ioh);
+
+ ioh->op_data.write.fd = fd;
+ ioh->op_data.write.offset = offset;
+ ioh->op_data.write.iov_length = iovcnt;
+
+ pgaio_io_stage(ioh, PGAIO_OP_WRITEV);
+}
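Putting the pieces together, issuing a read through the new infrastructure looks roughly like the sketch below (assumptions: `pgaio_io_acquire()` from this commit's public API; the target id, callback id, `fd`, `offset` and `buffer_page` are placeholders for things only added or known at real callsites):

```c
/* Sketch only: target/callback ids shown here are added by later commits. */
PgAioHandle *ioh = pgaio_io_acquire(CurrentResourceOwner, NULL);
struct iovec *iov;

pgaio_io_set_target(ioh, PGAIO_TID_SMGR);	/* hypothetical target */
pgaio_io_register_callbacks(ioh, PGAIO_HCB_DEMO_READV, 0);

(void) pgaio_io_get_iovec(ioh, &iov);		/* iovec lives in shmem */
iov[0].iov_base = buffer_page;
iov[0].iov_len = BLCKSZ;

/* associates the operation with the handle and stages the IO */
pgaio_io_prep_readv(ioh, fd, 1, offset);
```

Note the ordering constraints from the comments above: target and callbacks must be set before the prep call, as the IO may be executed immediately.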
+
+
+
+/* --------------------------------------------------------------------------------
+ * Internal IO related functions operating on IO Handles
+ * --------------------------------------------------------------------------------
+ */
+
+/*
+ * Execute IO operation synchronously. This is implemented here, not in
+ * method_sync.c, because other IO methods also might use it / fall back to
+ * it.
+ */
+void
+pgaio_io_perform_synchronously(PgAioHandle *ioh)
+{
+ ssize_t result = 0;
+ struct iovec *iov = &pgaio_ctl->iovecs[ioh->iovec_off];
+
+ START_CRIT_SECTION();
+
+ /* Perform IO. */
+ switch (ioh->op)
+ {
+ case PGAIO_OP_READV:
+ pgstat_report_wait_start(WAIT_EVENT_DATA_FILE_READ);
+ result = pg_preadv(ioh->op_data.read.fd, iov,
+ ioh->op_data.read.iov_length,
+ ioh->op_data.read.offset);
+ pgstat_report_wait_end();
+ break;
+ case PGAIO_OP_WRITEV:
+ pgstat_report_wait_start(WAIT_EVENT_DATA_FILE_WRITE);
+ result = pg_pwritev(ioh->op_data.write.fd, iov,
+ ioh->op_data.write.iov_length,
+ ioh->op_data.write.offset);
+ pgstat_report_wait_end();
+ break;
+ case PGAIO_OP_INVALID:
+ elog(ERROR, "trying to execute invalid IO operation");
+ }
+
+ ioh->result = result < 0 ? -errno : result;
+
+ pgaio_io_process_completion(ioh, ioh->result);
+
+ END_CRIT_SECTION();
+}
+
+/*
+ * Helper function to be called by IO operation preparation functions, before
+ * any data in the handle is set. Mostly to centralize assertions.
+ */
+static void
+pgaio_io_before_prep(PgAioHandle *ioh)
+{
+ Assert(ioh->state == PGAIO_HS_HANDED_OUT);
+ Assert(pgaio_my_backend->handed_out_io == ioh);
+ Assert(pgaio_io_has_target(ioh));
+ Assert(ioh->op == PGAIO_OP_INVALID);
+}
+
+/*
+ * Could be made part of the public interface, but it's not clear there's
+ * really a use case for that.
+ */
+const char *
+pgaio_io_get_op_name(PgAioHandle *ioh)
+{
+ Assert(ioh->op >= 0 && ioh->op < PGAIO_OP_COUNT);
+
+ switch (ioh->op)
+ {
+ case PGAIO_OP_INVALID:
+ return "invalid";
+ case PGAIO_OP_READV:
+ return "read";
+ case PGAIO_OP_WRITEV:
+ return "write";
+ }
+
+ return NULL; /* silence compiler */
+}
diff --git a/src/backend/storage/aio/aio_target.c b/src/backend/storage/aio/aio_target.c
new file mode 100644
index 00000000000..b01406a6a52
--- /dev/null
+++ b/src/backend/storage/aio/aio_target.c
@@ -0,0 +1,114 @@
+/*-------------------------------------------------------------------------
+ *
+ * aio_target.c
+ * AIO - Functionality related to executing IO for different targets
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/backend/storage/aio/aio_target.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "storage/aio.h"
+#include "storage/aio_internal.h"
+
+
+/*
+ * Registry for entities that can be the target of AIO.
+ */
+static const PgAioTargetInfo *pgaio_target_info[] = {
+ [PGAIO_TID_INVALID] = &(PgAioTargetInfo) {
+ .name = "invalid",
+ },
+};
+
+
+
+/* --------------------------------------------------------------------------------
+ * Public target related functions operating on IO Handles
+ * --------------------------------------------------------------------------------
+ */
+
+bool
+pgaio_io_has_target(PgAioHandle *ioh)
+{
+ return ioh->target != PGAIO_TID_INVALID;
+}
+
+/*
+ * Return the name for the target associated with the IO. Mostly useful for
+ * debugging/logging.
+ */
+const char *
+pgaio_io_get_target_name(PgAioHandle *ioh)
+{
+ Assert(ioh->target >= 0 && ioh->target < PGAIO_TID_COUNT);
+
+ return pgaio_target_info[ioh->target]->name;
+}
+
+/*
+ * Assign a target to the IO.
+ *
+ * This has to be called exactly once before pgaio_io_prep_*() is called.
+ */
+void
+pgaio_io_set_target(PgAioHandle *ioh, PgAioTargetID targetid)
+{
+ Assert(ioh->state == PGAIO_HS_HANDED_OUT);
+ Assert(ioh->target == PGAIO_TID_INVALID);
+
+ ioh->target = targetid;
+}
+
+PgAioTargetData *
+pgaio_io_get_target_data(PgAioHandle *ioh)
+{
+ return &ioh->target_data;
+}
+
+/*
+ * Return a stringified description of the IO's target.
+ *
+ * The string is localized and allocated in the current memory context.
+ */
+char *
+pgaio_io_get_target_description(PgAioHandle *ioh)
+{
+ return pgaio_target_info[ioh->target]->describe_identity(&ioh->target_data);
+}
+
+
+
+/* --------------------------------------------------------------------------------
+ * Internal target related functions operating on IO Handles
+ * --------------------------------------------------------------------------------
+ */
+
+/*
+ * Internal: Check if pgaio_io_reopen() is available for the IO.
+ */
+bool
+pgaio_io_can_reopen(PgAioHandle *ioh)
+{
+ return pgaio_target_info[ioh->target]->reopen != NULL;
+}
+
+/*
+ * Internal: Before executing an IO outside of the context of the process the
+ * IO has been prepared in, the file descriptor has to be reopened - any FD
+ * referenced in the IO itself won't be valid in the separate process.
+ */
+void
+pgaio_io_reopen(PgAioHandle *ioh)
+{
+ Assert(ioh->target >= 0 && ioh->target < PGAIO_TID_COUNT);
+ Assert(ioh->op >= 0 && ioh->op < PGAIO_OP_COUNT);
+
+ pgaio_target_info[ioh->target]->reopen(ioh);
+}
diff --git a/src/backend/storage/aio/meson.build b/src/backend/storage/aio/meson.build
index c822fd4ddf7..2c26089d52e 100644
--- a/src/backend/storage/aio/meson.build
+++ b/src/backend/storage/aio/meson.build
@@ -2,6 +2,10 @@
backend_sources += files(
'aio.c',
+ 'aio_callback.c',
'aio_init.c',
+ 'aio_io.c',
+ 'aio_target.c',
+ 'method_sync.c',
'read_stream.c',
)
diff --git a/src/backend/storage/aio/method_sync.c b/src/backend/storage/aio/method_sync.c
new file mode 100644
index 00000000000..902c2428d41
--- /dev/null
+++ b/src/backend/storage/aio/method_sync.c
@@ -0,0 +1,47 @@
+/*-------------------------------------------------------------------------
+ *
+ * method_sync.c
+ * AIO - perform "AIO" by executing it synchronously
+ *
+ * This method is mainly to check if AIO use causes regressions. Other IO
+ * methods might also fall back to the synchronous method for functionality
+ * they cannot provide.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/backend/storage/aio/method_sync.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "storage/aio.h"
+#include "storage/aio_internal.h"
+
+static bool pgaio_sync_needs_synchronous_execution(PgAioHandle *ioh);
+static int pgaio_sync_submit(uint16 num_staged_ios, PgAioHandle **staged_ios);
+
+
+const IoMethodOps pgaio_sync_ops = {
+ .needs_synchronous_execution = pgaio_sync_needs_synchronous_execution,
+ .submit = pgaio_sync_submit,
+};
+
+
+
+static bool
+pgaio_sync_needs_synchronous_execution(PgAioHandle *ioh)
+{
+ return true;
+}
+
+static int
+pgaio_sync_submit(uint16 num_staged_ios, PgAioHandle **staged_ios)
+{
+ elog(ERROR, "IO should have been executed synchronously");
+
+ return 0;
+}