From: PG Bug reporting form <noreply(at)postgresql(dot)org>
To: pgsql-bugs(at)lists(dot)postgresql(dot)org
Cc: sergey(dot)belyashov(at)gmail(dot)com
Subject: BUG #18815: Logical replication worker Segmentation fault
Date: 2025-02-17 10:52:32
Message-ID: [email protected]
Lists: pgsql-bugs pgsql-hackers
The following bug has been logged on the website:
Bug reference: 18815
Logged by: Sergey Belyashov
Email address: sergey(dot)belyashov(at)gmail(dot)com
PostgreSQL version: 17.3
Operating system: Debian bookworm x86_64
Description:
Today I tried to upgrade my cluster from postgresql-16 to postgresql-17, and
it was successful until I restored some logical replication subscriptions.
When a subscription is activated and the first data arrives, the server logs:
2025-02-17 13:34:08.975 [98417] LOG: logical replication apply worker for
subscription "node4_closed_sessions_sub" has started
2025-02-17 13:34:11.213 [62583] LOG: background worker "logical
replication apply worker" (PID 98417) was terminated by signal 11:
Segmentation fault
2025-02-17 13:34:11.213 [62583] LOG: terminating any other active server
processes
2025-02-17 13:34:11.240 [62583] LOG: all server processes terminated;
reinitializing
2025-02-17 13:34:11.310 [98418] LOG: database system was interrupted; last
known up at 2025-02-17 13:22:08
and then restarts.
The kernel logged the following info:
[94740743.468001] postgres[98417]: segfault at 10 ip 0000562b2b74d69c sp
00007fff284a7320 error 4 in postgres[562b2b6bb000+595000]
[94740743.468173] Code: 1f 80 00 00 00 00 44 89 e0 48 8b 15 56 0b 82 00 f7
d0 48 98 4c 8b 3c c2 eb 99 0f 1f 40 00 55 48 89 e5 53 48 89
fb 48 83 ec 08 <8b> 7f 10 e8 4c b1 32 00 8b 7b 14 85 ff 75 15 48 89 df 48
8b 5d f8
After some investigation I found that the segfault is caused by one type of
subscription: subscriptions for huge partitioned tables on both the
publisher and the subscriber (replicated via the root table), created with
data_copy=false (the source table is updated by inserts and partition
detaches; it is huge and the data transfer is not compressed, so it may
take days). The segfault does not occur immediately after the subscription
is created, but when data arrives from the publisher. The subscriber then
restarts, recovers, runs the subscription again, hits the segfault, and
repeats until the subscription is disabled.
Subscriptions for small tables without partitions work fine.
The publisher server version makes no difference: publishers on both 16 and
17 cause the segfault on the subscriber (version 17.3).
PostgreSQL versions 12-16 worked for years without any segfault with the
same partitioned tables and publications/subscriptions.
postgresql-17=17.3-3.pgdg120+1 installed from the repository:
http://apt.postgresql.org/pub/repos/apt/ bookworm-pgdg main
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: sergey(dot)belyashov(at)gmail(dot)com
Cc: pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #18815: Logical replication worker Segmentation fault
Date: 2025-02-17 16:39:11
Message-ID: [email protected]
PG Bug reporting form <noreply(at)postgresql(dot)org> writes:
> After some investigation I found that the segfault is caused by one type of
> subscription: subscriptions for huge partitioned tables on both the
> publisher and the subscriber (replicated via the root table), created with
> data_copy=false (the source table is updated by inserts and partition
> detaches; it is huge and the data transfer is not compressed, so it may
> take days). The segfault does not occur immediately after the subscription
> is created, but when data arrives from the publisher. The subscriber then
> restarts, recovers, runs the subscription again, hits the segfault, and
> repeats until the subscription is disabled.
This is not enough information for anyone else to reproduce the
problem; it very likely depends on details that you haven't mentioned.
Can you create a reproducer case? I'm hoping for a script that sets
up the necessary tables and subscriptions and populates the tables
with enough dummy data to cause the failure.
Something that might be less work for you is to get a stack trace
from the crash:

https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend
However, I make no promises that we can isolate the cause from
just a stack trace. A reproducer would be much better.
regards, tom lane
From: Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #18815: Logical replication worker Segmentation fault
Date: 2025-02-17 19:17:05
Message-ID: CAOe0RDwbowSGv8qeuqyHzJndtwV5JyPEG4YwmaAvgquC=gjz+Q@mail.gmail.com
Hi,
I think a backtrace will help.
Core was generated by `postgres: 17/main: logical replication apply
worker for subscription 602051860'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00005635402c869c in brinRevmapTerminate (revmap=0x0)
at ./build/../src/backend/access/brin/brin_revmap.c:102
(gdb) backtrace
#0 0x00005635402c869c in brinRevmapTerminate (revmap=0x0)
at ./build/../src/backend/access/brin/brin_revmap.c:102
#1 0x00005635402bfddd in brininsertcleanup (index=<optimized out>,
indexInfo=<optimized out>)
at ./build/../src/backend/access/brin/brin.c:515
#2 0x0000563540479309 in ExecCloseIndices
(resultRelInfo=resultRelInfo(at)entry=0x563541cab8d0)
at ./build/../src/backend/executor/execIndexing.c:248
#3 0x000056354048067f in ExecCleanupTupleRouting
(mtstate=0x563541c6db58, proute=0x563541cab638)
at ./build/../src/backend/executor/execPartition.c:1270
#4 0x00005635405c89f7 in finish_edata (edata=0x563541ca0fa8)
at ./build/../src/backend/replication/logical/worker.c:718
#5 0x00005635405cc6c4 in apply_handle_insert (s=0x7f61d2a3a1d8)
at ./build/../src/backend/replication/logical/worker.c:2438
#6 apply_dispatch (s=s(at)entry=0x7ffd30d95a70) at
./build/../src/backend/replication/logical/worker.c:3296
#7 0x00005635405cdb7f in LogicalRepApplyLoop (last_received=106949425100872)
at ./build/../src/backend/replication/logical/worker.c:3587
#8 start_apply (origin_startpos=origin_startpos(at)entry=0)
at ./build/../src/backend/replication/logical/worker.c:4429
#9 0x00005635405ce11f in run_apply_worker () at
./build/../src/backend/replication/logical/worker.c:4550
#10 ApplyWorkerMain (main_arg=<optimized out>) at
./build/../src/backend/replication/logical/worker.c:4719
#11 0x0000563540594bf8 in BackgroundWorkerMain (startup_data=<optimized out>,
startup_data_len=<optimized out>) at
./build/../src/backend/postmaster/bgworker.c:848
#12 0x0000563540596daa in postmaster_child_launch
(child_type=child_type(at)entry=B_BG_WORKER,
startup_data=startup_data(at)entry=0x563541bc3618 "",
startup_data_len=startup_data_len(at)entry=1472,
client_sock=client_sock(at)entry=0x0) at
./build/../src/backend/postmaster/launch_backend.c:277
#13 0x0000563540598f88 in do_start_bgworker (rw=0x563541bc3618)
at ./build/../src/backend/postmaster/postmaster.c:4272
#14 maybe_start_bgworkers () at
./build/../src/backend/postmaster/postmaster.c:4503
#15 0x0000563540599fea in process_pm_pmsignal () at
./build/../src/backend/postmaster/postmaster.c:3776
#16 ServerLoop () at ./build/../src/backend/postmaster/postmaster.c:1669
#17 0x000056354059c7c8 in PostmasterMain (argc=argc(at)entry=5,
argv=argv(at)entry=0x563541b229c0)
at ./build/../src/backend/postmaster/postmaster.c:1374
#18 0x00005635402bf5b1 in main (argc=5, argv=0x563541b229c0) at
./build/../src/backend/main/main.c:199
The destination (subscriber) table has two timestamp columns, "started" and
"closed", with a BRIN index on each of them. The table is partitioned by
range on the "closed" column. Each partition is split into 6 subpartitions
by list (remainder of id).
Best regards,
Sergey Belyashov
Mon, 17 Feb 2025 at 19:39, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>:
>
> PG Bug reporting form <noreply(at)postgresql(dot)org> writes:
> > After some investigation I found that the segfault is caused by one type of
> > subscription: subscriptions for huge partitioned tables on both the
> > publisher and the subscriber (replicated via the root table), created with
> > data_copy=false (the source table is updated by inserts and partition
> > detaches; it is huge and the data transfer is not compressed, so it may
> > take days). The segfault does not occur immediately after the subscription
> > is created, but when data arrives from the publisher. The subscriber then
> > restarts, recovers, runs the subscription again, hits the segfault, and
> > repeats until the subscription is disabled.
>
> This is not enough information for anyone else to reproduce the
> problem; it very likely depends on details that you haven't mentioned.
> Can you create a reproducer case? I'm hoping for a script that sets
> up the necessary tables and subscriptions and populates the tables
> with enough dummy data to cause the failure.
>
> Something that might be less work for you is to get a stack trace
> from the crash:
>
> https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend
>
> However, I make no promises that we can isolate the cause from
> just a stack trace. A reproducer would be much better.
>
> regards, tom lane
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>
Cc: pgsql-bugs(at)lists(dot)postgresql(dot)org, Tomas Vondra <tomas(at)vondra(dot)me>
Subject: Re: BUG #18815: Logical replication worker Segmentation fault
Date: 2025-02-17 19:43:57
Message-ID: [email protected]
Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com> writes:
> I think backtrace will help.
> Core was generated by `postgres: 17/main: logical replication apply
> worker for subscription 602051860'.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0 0x00005635402c869c in brinRevmapTerminate (revmap=0x0)
> at ./build/../src/backend/access/brin/brin_revmap.c:102
> (gdb) backtrace
> #0 0x00005635402c869c in brinRevmapTerminate (revmap=0x0)
> at ./build/../src/backend/access/brin/brin_revmap.c:102
> #1 0x00005635402bfddd in brininsertcleanup (index=<optimized out>,
> indexInfo=<optimized out>)
> at ./build/../src/backend/access/brin/brin.c:515
> #2 0x0000563540479309 in ExecCloseIndices
> (resultRelInfo=resultRelInfo(at)entry=0x563541cab8d0)
> at ./build/../src/backend/executor/execIndexing.c:248
Thanks! It seems clear from that that the fault is basically in
brininsertcleanup(), which is trashing the BrinInsertState but
leaving indexInfo->ii_AmCache still pointing at it, so that
the next brininsert() will think it has a valid cache entry.
I suspect that the attached will fix it. What I don't understand
is why it's apparently so hard to trigger the crash, because it
looks to me like any two successive insert commands on the same
BRIN index should hit this.
BTW, I'm also a bit suspicious of the comment's claim that the
brinDesc doesn't need cleanup. That looks like a potential
memory leak.
regards, tom lane
Attachment: fix-brininsertcleanup.patch (text/x-diff, 824 bytes)
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Tomas Vondra <tomas(at)vondra(dot)me>
Cc: Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>, pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #18815: Logical replication worker Segmentation fault
Date: 2025-02-17 20:13:00
Message-ID: [email protected]
I wrote:
> I suspect that the attached will fix it. What I don't understand
> is why it's apparently so hard to trigger the crash, because it
> looks to me like any two successive insert commands on the same
> BRIN index should hit this.
Oh, wait: I was confusing ii_AmCache with rd_amcache in the index's
relcache entry. This coding would absolutely not work with rd_amcache
since that's persistent. It mostly works with the IndexInfo field
though, since an IndexInfo typically only survives per-query.
Evidently there's some path in logical replication that will re-use an
IndexInfo across multiple distinct insertion operations, and that's
what breaks it.
> BTW, I'm also a bit suspicious of the comment's claim that the
> brinDesc doesn't need cleanup. That looks like a potential
> memory leak.
This concern still stands.
regards, tom lane
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>
Cc: Tomas Vondra <tomas(at)vondra(dot)me>, pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #18815: Logical replication worker Segmentation fault
Date: 2025-02-17 20:57:30
Message-ID: [email protected]
Further to this ... I'd still really like to have a reproducer.
While brininsertcleanup is clearly being less robust than it should
be, I now suspect that there is another bug somewhere further down
the call stack. We're getting to this point via ExecCloseIndices,
and that should be paired with ExecOpenIndices, and that would have
created a fresh IndexInfo. So it looks a lot like some path in a
logrep worker is able to call ExecCloseIndices twice on the same
working data. That would probably lead to a "releasing a lock you
don't own" error if we weren't hitting this crash first.
regards, tom lane
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>
Cc: Tomas Vondra <tomas(at)vondra(dot)me>, pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #18815: Logical replication worker Segmentation fault
Date: 2025-02-17 23:37:56
Message-ID: [email protected]
I wrote:
> Further to this ... I'd still really like to have a reproducer.
> While brininsertcleanup is clearly being less robust than it should
> be, I now suspect that there is another bug somewhere further down
> the call stack. We're getting to this point via ExecCloseIndices,
> and that should be paired with ExecOpenIndices, and that would have
> created a fresh IndexInfo. So it looks a lot like some path in a
> logrep worker is able to call ExecCloseIndices twice on the same
> working data. That would probably lead to a "releasing a lock you
> don't own" error if we weren't hitting this crash first.
Hmm ... I tried modifying ExecCloseIndices to blow up if called
twice, as in the attached. This gets through core regression
just fine, but it blows up in three different subscription TAP
tests, all with a stack trace matching Sergey's:
#0 __GI_raise (sig=sig(at)entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007f064bfe3e65 in __GI_abort () at abort.c:79
#2 0x00000000009e9253 in ExceptionalCondition (
conditionName=conditionName(at)entry=0xb8717b "indexDescs[i] != NULL",
fileName=fileName(at)entry=0xb87139 "execIndexing.c",
lineNumber=lineNumber(at)entry=249) at assert.c:66
#3 0x00000000006f0b13 in ExecCloseIndices (
resultRelInfo=resultRelInfo(at)entry=0x2f11c18) at execIndexing.c:249
#4 0x00000000006f86d8 in ExecCleanupTupleRouting (mtstate=0x2ef92d8,
proute=0x2ef94e8) at execPartition.c:1273
#5 0x0000000000848cb6 in finish_edata (edata=0x2ef8f50) at worker.c:717
#6 0x000000000084d0a0 in apply_handle_insert (s=<optimized out>)
at worker.c:2460
#7 apply_dispatch (s=<optimized out>) at worker.c:3389
#8 0x000000000084e494 in LogicalRepApplyLoop (last_received=25066600)
at worker.c:3680
#9 start_apply (origin_startpos=0) at worker.c:4507
#10 0x000000000084e711 in run_apply_worker () at worker.c:4629
#11 ApplyWorkerMain (main_arg=<optimized out>) at worker.c:4798
#12 0x00000000008138f9 in BackgroundWorkerMain (startup_data=<optimized out>,
startup_data_len=<optimized out>) at bgworker.c:842
The problem seems to be that apply_handle_insert_internal does
ExecOpenIndices and then ExecCloseIndices, and then
ExecCleanupTupleRouting does ExecCloseIndices again, which nicely
explains why brininsertcleanup blows up if you happen to have a BRIN
index involved. What it doesn't explain is how come we don't see
other symptoms from the duplicate index_close calls, regardless of
index type. I'd have expected an assertion failure from
RelationDecrementReferenceCount, and/or an assertion failure for
nonzero rd_refcnt at transaction end, and/or a "you don't own a lock
of type X" gripe from LockRelease. We aren't getting any of those,
but why not, if this code is as broken as I think it is?
(On closer inspection, we seem to have about 99% broken relcache.c's
ability to notice rd_refcnt being nonzero at transaction end, but
the other two things should still be happening.)
regards, tom lane
Attachment: complain-about-duplicate-ExecCloseIndices.patch (text/x-diff, 678 bytes)
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Tomas Vondra <tomas(at)vondra(dot)me>
Cc: Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>, pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #18815: Logical replication worker Segmentation fault
Date: 2025-02-18 01:51:50
Message-ID: [email protected]
I wrote:
> The problem seems to be that apply_handle_insert_internal does
> ExecOpenIndices and then ExecCloseIndices, and then
> ExecCleanupTupleRouting does ExecCloseIndices again, which nicely
> explains why brininsertcleanup blows up if you happen to have a BRIN
> index involved. What it doesn't explain is how come we don't see
> other symptoms from the duplicate index_close calls, regardless of
> index type.
Hah, I see it: this bug is specific to the apply_handle_tuple_routing
code path. The ResultRelInfo we're dealing with is made by
ExecFindPartition, which does ExecOpenIndices on it. Then
apply_handle_tuple_routing calls apply_handle_insert_internal, which
does ExecOpenIndices *again* on the same ResultRelInfo. That gets a
second refcount and second lock on the index(es), and leaks all the
IndexInfo data structures made by the first call. When done,
apply_handle_insert_internal does ExecCloseIndices, releasing one
refcount and lock. Then, back in apply_handle_tuple_routing, we do
ExecCloseIndices again. So the refcounts and locks balance out, and
it very accidentally fails to crash, so long as nobody is expecting
the contents of the IndexInfos to match up.
The "Move the tuple into the new partition" path in
apply_handle_tuple_routing's UPDATE case is even squirrelier. That
does another ExecFindPartition and then apply_handle_insert_internal,
but there's no ExecCloseIndices call balancing the ExecFindPartition,
meaning it looks like it's leaking refcount and lock. Experimentation
shows that it's not, because the matching ExecCloseIndices is
eventually done by the ExecCleanupTupleRouting call in finish_edata
after returning to the outer apply_handle_update.
So if you ask me, this is impossibly convoluted and in need of a
significant rewrite. It's not okay to be opening/closing the
ResultRelInfo's indexes twice (not least because "speculative"
is different in the two open calls). And it's not okay to have
such radically different paths for cleaning up the index-opening
done by ExecFindPartition. It's even more not okay that there's
not one word of commentary about this complexity, strongly suggesting
that the original author didn't quite understand how it worked either.
regards, tom lane
From: Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Tomas Vondra <tomas(at)vondra(dot)me>, pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #18815: Logical replication worker Segmentation fault
Date: 2025-02-18 06:56:56
Message-ID: CAOe0RDwUeZduRUcD1N=BcAk5z3ANPpdyZtr4qNjiY6fPQu=sDw@mail.gmail.com
Hi,
Do I need to apply this patch for debugging purposes?
I plan to remove the BRIN indexes from the active partitions and start
replication; once the issue is fixed I will add the BRIN indexes back.
Best regards,
Sergey Belyashov
Tue, 18 Feb 2025 at 02:37, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>:
>
> I wrote:
> > Further to this ... I'd still really like to have a reproducer.
> > While brininsertcleanup is clearly being less robust than it should
> > be, I now suspect that there is another bug somewhere further down
> > the call stack. We're getting to this point via ExecCloseIndices,
> > and that should be paired with ExecOpenIndices, and that would have
> > created a fresh IndexInfo. So it looks a lot like some path in a
> > logrep worker is able to call ExecCloseIndices twice on the same
> > working data. That would probably lead to a "releasing a lock you
> > don't own" error if we weren't hitting this crash first.
>
> Hmm ... I tried modifying ExecCloseIndices to blow up if called
> twice, as in the attached. This gets through core regression
> just fine, but it blows up in three different subscription TAP
> tests, all with a stack trace matching Sergey's:
>
> #0 __GI_raise (sig=sig(at)entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
> #1 0x00007f064bfe3e65 in __GI_abort () at abort.c:79
> #2 0x00000000009e9253 in ExceptionalCondition (
> conditionName=conditionName(at)entry=0xb8717b "indexDescs[i] != NULL",
> fileName=fileName(at)entry=0xb87139 "execIndexing.c",
> lineNumber=lineNumber(at)entry=249) at assert.c:66
> #3 0x00000000006f0b13 in ExecCloseIndices (
> resultRelInfo=resultRelInfo(at)entry=0x2f11c18) at execIndexing.c:249
> #4 0x00000000006f86d8 in ExecCleanupTupleRouting (mtstate=0x2ef92d8,
> proute=0x2ef94e8) at execPartition.c:1273
> #5 0x0000000000848cb6 in finish_edata (edata=0x2ef8f50) at worker.c:717
> #6 0x000000000084d0a0 in apply_handle_insert (s=<optimized out>)
> at worker.c:2460
> #7 apply_dispatch (s=<optimized out>) at worker.c:3389
> #8 0x000000000084e494 in LogicalRepApplyLoop (last_received=25066600)
> at worker.c:3680
> #9 start_apply (origin_startpos=0) at worker.c:4507
> #10 0x000000000084e711 in run_apply_worker () at worker.c:4629
> #11 ApplyWorkerMain (main_arg=<optimized out>) at worker.c:4798
> #12 0x00000000008138f9 in BackgroundWorkerMain (startup_data=<optimized out>,
> startup_data_len=<optimized out>) at bgworker.c:842
>
> The problem seems to be that apply_handle_insert_internal does
> ExecOpenIndices and then ExecCloseIndices, and then
> ExecCleanupTupleRouting does ExecCloseIndices again, which nicely
> explains why brininsertcleanup blows up if you happen to have a BRIN
> index involved. What it doesn't explain is how come we don't see
> other symptoms from the duplicate index_close calls, regardless of
> index type. I'd have expected an assertion failure from
> RelationDecrementReferenceCount, and/or an assertion failure for
> nonzero rd_refcnt at transaction end, and/or a "you don't own a lock
> of type X" gripe from LockRelease. We aren't getting any of those,
> but why not, if this code is as broken as I think it is?
>
> (On closer inspection, we seem to have about 99% broken relcache.c's
> ability to notice rd_refcnt being nonzero at transaction end, but
> the other two things should still be happening.)
>
> regards, tom lane
>
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>
Cc: Tomas Vondra <tomas(at)vondra(dot)me>, pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #18815: Logical replication worker Segmentation fault
Date: 2025-02-18 16:54:45
Message-ID: [email protected]
Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com> writes:
> Do I need to apply this patch for debugging purposes?
I think we've debugged enough to understand what's happening.
Thanks for the report!
If you just want to get some work done, applying that patch
should be enough to prevent the crash, although I want to
change some other things too.
regards, tom lane
From: Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Tomas Vondra <tomas(at)vondra(dot)me>, pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #18815: Logical replication worker Segmentation fault
Date: 2025-02-18 17:41:55
Message-ID: CAOe0RDwamh5YQjEqoPsx2=3bBiuG8H9Trr_fkN1HgknWiMyX_w@mail.gmail.com
Thank you very much
Tue, 18 Feb 2025, 19:54 Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>:
> Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com> writes:
> > Do I need to apply this patch for debugging purposes?
>
> I think we've debugged enough to understand what's happening.
> Thanks for the report!
>
> If you just want to get some work done, applying that patch
> should be enough to prevent the crash, although I want to
> change some other things too.
>
> regards, tom lane
>
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)lists(dot)postgresql(dot)org
Cc: Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>, Tomas Vondra <tomas(at)vondra(dot)me>
Subject: Re: BUG #18815: Logical replication worker Segmentation fault
Date: 2025-02-18 20:38:34
Message-ID: [email protected]
[ redirecting to -hackers for patch discussion ]
Here's a completed set of patches for bug #18815 [1] (which,
briefly, is that the logrep worker opens/closes indexes
multiple times and thereby breaks brininsertcleanup).
0001 adds a subscriber-side BRIN index to 013_partition.pl, which
replicates the crash Sergey reported. I've got mixed feelings about
whether to actually commit this: it's certainly useful for testing
this fix, but without context it looks like a pretty random thing to
do. It could be justified perhaps on the grounds of testing
aminsertcleanup callbacks within replication, since BRIN is the
only core index type that has such a callback.
0002 fixes the crash by hardening brininsertcleanup against multiple
calls. I think that this is patching a symptom not the root cause,
but it still seems like good defensive programming.
0003 adds Asserts to ExecOpenIndices and ExecCloseIndices to check
that they're not called more than once per ResultRelInfo, and thereby
exposes what I consider the root cause: apply_handle_tuple_routing
opens the indexes twice and then closes them twice. This somewhat
accidentally leaves us with zero refcounts and zero locks on the
indexes, so in the absence of aminsertcleanup callbacks the only real
reason to complain is the duplicative construction of the IndexInfo
structures. But the double call of brininsertcleanup breaks the
existing coding of that function, and it might well break third-party
aminsertcleanup callbacks if there are any. So I think we should
institute a policy that this coding pattern is forbidden.
And finally, 0004 fixes worker.c to not do that. This turned out
simpler than I thought, because I was misled by believing that the
ExecOpenIndices/ExecCloseIndices calls in apply_handle_tuple_routing
itself do something useful. They don't, and should be nuked outright.
I don't intend 0003 for back-patch, but the rest of this has to go
back as far as the bug can be observed, which I didn't test yet.
To be clear, 0004 would fix the issue even without 0002, but
I still think we should back-patch both.
regards, tom lane
Attachments:
- v1-0001-Demonstrate-crash-in-brininsertcleanup-during-log.patch (text/x-diff, 1.3 KB)
- v1-0002-Harden-brininsertcleanup-against-repeat-calls.patch (text/x-diff, 1.7 KB)
- v1-0003-Assert-that-ExecOpenIndices-and-ExecCloseIndices-.patch (text/x-diff, 2.5 KB)
- v1-0004-Avoid-duplicate-ExecOpenIndices-ExecCloseIndices-.patch (text/x-diff, 4.1 KB)
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)lists(dot)postgresql(dot)org
Cc: Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>, Tomas Vondra <tomas(at)vondra(dot)me>
Subject: Re: BUG #18815: Logical replication worker Segmentation fault
Date: 2025-02-18 22:24:06
Message-ID: [email protected]
I wrote:
> I don't intend 0003 for back-patch, but the rest of this has to go
> back as far as the bug can be observed, which I didn't test yet.
Ah: this bug is new in v17, because neither brininsertcleanup nor
the whole aminsertcleanup infrastructure exist before that.
So that leaves us with a question of whether 0004 should be
back-patched any further than v17. There's some attraction
to doing so to keep the branches in sync; but there's already
quite a lot of cross-branch churn in worker.c, so on the whole
I'm inclined to just do v17.
regards, tom lane
From: Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org, Tomas Vondra <tomas(at)vondra(dot)me>
Subject: Re: BUG #18815: Logical replication worker Segmentation fault
Date: 2025-02-19 06:55:08
Message-ID: CAOe0RDxj6QCBHzKY_OGoLHykJ7oAYx2SD_dKs8_aeNBk9WTTXA@mail.gmail.com
Hi,
The 4th patch is not applicable to the current REL_17_STABLE branch.
There are a lot of differences from master in worker.c.
Best regards,
Sergey Belyashov
Wed, 19 Feb 2025 at 01:24, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>:
>
> I wrote:
> > I don't intend 0003 for back-patch, but the rest of this has to go
> > back as far as the bug can be observed, which I didn't test yet.
>
> Ah: this bug is new in v17, because neither brininsertcleanup nor
> the whole aminsertcleanup infrastructure exist before that.
>
> So that leaves us with a question of whether 0004 should be
> back-patched any further than v17. There's some attraction
> to doing so to keep the branches in sync; but there's already
> quite a lot of cross-branch churn in worker.c, so on the whole
> I'm inclined to just do v17.
>
> regards, tom lane
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org, Tomas Vondra <tomas(at)vondra(dot)me>
Subject: Re: BUG #18815: Logical replication worker Segmentation fault
Date: 2025-02-19 07:06:51
Message-ID: [email protected]
Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com> writes:
> The 4th patch is not applicable to the current REL_17_STABLE branch.
Yeah, I saw that, didn't work on fixing it yet. It looked like
the diffs were mostly about the ExecOpen/CloseIndices calls in
apply_handle_tuple_routing, which we already knew are completely
bogus. So I'm hopeful that just removing whichever of them are
in v17 will do the job --- but it'll require a full re-test
to be sure :-(
regards, tom lane
From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com> |
Cc: | pgsql-hackers(at)lists(dot)postgresql(dot)org, Tomas Vondra <tomas(at)vondra(dot)me> |
Subject: | Re: BUG #18815: Logical replication worker Segmentation fault |
Date: | 2025-02-19 21:55:52 |
Message-ID: | [email protected] |
Lists: | pgsql-bugs pgsql-hackers |
Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com> writes:
> The 4th patch is not applicable for the current REL_17_STABLE branch.
> There are a lot of differences from master in the worker.c.
If you want to try the committed v17 patch, see
https://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=788baa9a25ae83b576621166367c868d3329b4d4
Thanks again for the report!
regards, tom lane
From: | Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com> |
---|---|
To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | pgsql-hackers(at)lists(dot)postgresql(dot)org, Tomas Vondra <tomas(at)vondra(dot)me> |
Subject: | Re: BUG #18815: Logical replication worker Segmentation fault |
Date: | 2025-02-20 19:17:18 |
Message-ID: | CAOe0RDwp-1LKQp4Dy5xROX2m1jBjy_K_bodoNifkxam=6WB+hg@mail.gmail.com |
Lists: | pgsql-bugs pgsql-hackers |
I have applied the patch. Replication now works. Thank you.
Regards
On Thu, 20 Feb 2025 at 00:55, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
> Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com> writes:
> > The 4th patch is not applicable for the current REL_17_STABLE branch.
> > There are a lot of differences from master in the worker.c.
>
> If you want to try the committed v17 patch, see
>
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=788baa9a25ae83b576621166367c868d3329b4d4
>
> Thanks again for the report!
>
> regards, tom lane
From: | "Zhijie Hou (Fujitsu)" <houzj(dot)fnst(at)fujitsu(dot)com> |
---|---|
To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>, Tomas Vondra <tomas(at)vondra(dot)me>, Nisha Moond <nisha(dot)moond412(at)gmail(dot)com>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
Subject: | RE: BUG #18815: Logical replication worker Segmentation fault |
Date: | 2025-03-26 05:07:47 |
Message-ID: | TYAPR01MB57244ADA33DDA57119B9D26494A62@TYAPR01MB5724.jpnprd01.prod.outlook.com |
Lists: | pgsql-bugs pgsql-hackers |
On Wed, Feb 19, 2025 at 4:38 AM Tom Lane wrote:
Hi,
>
> Here's a completed set of patches for bug #18815 [1] (which, briefly,
> is that the logrep worker opens/closes indexes multiple times and
> thereby breaks brininsertcleanup).
>
> 0001 adds a subscriber-side BRIN index to 013_partition.pl, which
> replicates the crash Sergey reported. I've got mixed feelings about
> whether to actually commit this: it's certainly useful for testing
> this fix, but without context it looks like a pretty random thing to
> do. It could be justified perhaps on the grounds of testing
> aminsertcleanup callbacks within replication, since BRIN is the only core index type that has such a callback.
>
> 0002 fixes the crash by hardening brininsertcleanup against multiple
> calls. I think that this is patching a symptom not the root cause,
> but it still seems like good defensive programming.
>
> 0003 adds Asserts to ExecOpenIndices and ExecCloseIndices to check
> that they're not called more than once per ResultRelInfo, and thereby
> exposes what I consider the root cause: apply_handle_tuple_routing
> opens the indexes twice and then closes them twice. This somewhat
> accidentally leaves us with zero refcounts and zero locks on the
> indexes, so in the absence of aminsertcleanup callbacks the only real
> reason to complain is the duplicative construction of the IndexInfo
> structures. But the double call of brininsertcleanup breaks the
> existing coding of that function, and it might well break third-party
> aminsertcleanup callbacks if there are any. So I think we should
> institute a policy that this coding pattern is forbidden.
>
> And finally, 0004 fixes worker.c to not do that. This turned out
> simpler than I thought, because I was misled by believing that the
> ExecOpenIndices/ExecCloseIndices calls in apply_handle_tuple_routing
> itself do something useful. They don't, and should be nuked outright.
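The hazard described in 0002 and 0003 above — a non-idempotent cleanup callback being reached twice because the indexes were opened and closed twice — can be sketched in plain C. The struct and function names below are illustrative stand-ins, not PostgreSQL's actual definitions (the real code involves BrinInsertState and aminsertcleanup):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Illustrative stand-in for the per-index insert state that a cleanup
 * callback like brininsertcleanup tears down. */
typedef struct InsertState
{
    void   *cache;              /* resource built on first insert */
} InsertState;

/* Hardened cleanup in the spirit of 0002: a second call must become a
 * no-op instead of dereferencing already-freed state. */
static void
insert_cleanup(InsertState **state)
{
    if (*state == NULL)
        return;                 /* already cleaned up */
    free((*state)->cache);
    free(*state);
    *state = NULL;              /* make repeat calls harmless */
}
```

With this shape, an 0003-style Assert at the open/close call sites can still flag the caller's double-invocation bug in assert-enabled builds, while the hardened cleanup keeps production builds from crashing.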
My colleague Nisha reported off-list about a crash in logical replication that
occurs during unique constraint violations on leaf partitions. Upon
investigation, I confirmed that this crash started happening after commit
9ff6867.
The problem arises because unique key conflict detection relied on the
ExecOpenIndices call in apply_handle_insert_internal and
apply_handle_tuple_routing with the speculative parameter set to true to
construct necessary index information. However, these openings were redundant,
as indexes had already been opened during target partition searches via the
parent table (e.g., using ExecFindPartition). Hence, they were removed in
commit 9ff6867. Unfortunately, ExecFindPartition opens indexes without
constructing the needed index information for conflict detection, which leads
to the crash.
To fix it, I tried adding a detect_conflict flag to ModifyTableState, enabling
ExecFindPartition() to build the required index information internally in this
case; see the attached patch.
An alternative approach may be to delay index information initialization until
immediately before executing the actual update or insert. E.g., do that in
InitConflictIndexes(). This approach can also avoid unnecessary construction of
index information when the tuple to be updated is not found, or during a
cross-partition update where index information for the source partition is not
required. However, it introduces an additional location for index
initialization, potentially increasing maintenance efforts.
Best Regards,
Hou zj
Attachment | Content-Type | Size |
---|---|---|
v1-0001-Fix-crashes-in-logical-replication-during-unique-.patch | application/octet-stream | 5.6 KB |
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | "Zhijie Hou (Fujitsu)" <houzj(dot)fnst(at)fujitsu(dot)com> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>, Tomas Vondra <tomas(at)vondra(dot)me>, Nisha Moond <nisha(dot)moond412(at)gmail(dot)com>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
Subject: | Re: BUG #18815: Logical replication worker Segmentation fault |
Date: | 2025-04-07 09:37:08 |
Message-ID: | CAA4eK1LbzVVTXLfaNiUwy6Gf6YY7iuKEZFMGfEMmekBx9XrT7A@mail.gmail.com |
Lists: | pgsql-bugs pgsql-hackers |
On Wed, Mar 26, 2025 at 10:38 AM Zhijie Hou (Fujitsu)
<houzj(dot)fnst(at)fujitsu(dot)com> wrote:
>
> On Wed, Feb 19, 2025 at 4:38 AM Tom Lane wrote:
>
> My colleague Nisha reported off-list about a crash in logical replication that
> occurs during unique constraint violations on leaf partitions. Upon
> investigation, I confirmed that this crash started happening after commit
> 9ff6867.
>
Even though the commit 9ff68679b5 is backpatched to 17, the proposed
patch needs to be applied only for HEAD because the requirement for
additional conflict information is new to HEAD, right?
> The problem arises because unique key conflict detection relied on the
> ExecOpenIndices call in apply_handle_insert_internal and
> apply_handle_tuple_routing with the speculative parameter set to true to
> construct necessary index information. However, these openings were redundant,
> as indexes had already been opened during target partition searches via the
> parent table (e.g., using ExecFindPartition). Hence, they were removed in
> commit 9ff6867. Unfortunately, ExecFindPartition opens indexes without
> constructing the needed index information for conflict detection, which leads
> to the crash.
>
> To fix it, I tried to add a detect_conflict flag in ModifyTableState, enabling
> ExecFindPartition() to internally build the required index information in this
> case, like the attachment.
>
I have a question about the additional information built in
ExecOpenIndices(). We built the required information when the index is
not used for exclusion constraint whereas we set the specConflict flag
in ExecInsertIndexTuples() even for the exclusion constraints case.
The specConflict flag is used by the caller to report conflicts. So,
won't we need to build the required information in ExecOpenIndices()
even when the index is used for exclusion constraint? What is the
behavior of conflict reporting code in case of exclusion constraints?
> An alternative approach may be to delay index information initialization until
> immediately before executing the actual update or insert. E.g., do that in
> InitConflictIndexes().
>
Right, this is also worth considering. But let us first conclude on
the existing approach to build the required information in
ExecOpenIndices().
--
With Regards,
Amit Kapila.
From: | "Zhijie Hou (Fujitsu)" <houzj(dot)fnst(at)fujitsu(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>, Tomas Vondra <tomas(at)vondra(dot)me>, Nisha Moond <nisha(dot)moond412(at)gmail(dot)com>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
Subject: | RE: BUG #18815: Logical replication worker Segmentation fault |
Date: | 2025-04-07 09:58:39 |
Message-ID: | OS3PR01MB57181F30E6CE0B21387B331394AA2@OS3PR01MB5718.jpnprd01.prod.outlook.com |
Lists: | pgsql-bugs pgsql-hackers |
On Mon, Apr 7, 2025 at 5:37 PM Amit Kapila wrote:
>
> On Wed, Mar 26, 2025 at 10:38 AM Zhijie Hou (Fujitsu)
> <houzj(dot)fnst(at)fujitsu(dot)com> wrote:
> >
> > On Wed, Feb 19, 2025 at 4:38 AM Tom Lane wrote:
> >
> > My colleague Nisha reported off-list about a crash in logical replication that
> > occurs during unique constraint violations on leaf partitions. Upon
> > investigation, I confirmed that this crash started happening after commit
> > 9ff6867.
> >
>
> Even though the commit 9ff68679b5 is backpatched to 17, the proposed
> patch needs to be applied only for HEAD because the requirement for
> additional conflict information is new to HEAD, right?
Right.
>
> > The problem arises because unique key conflict detection relied on the
> > ExecOpenIndices call in apply_handle_insert_internal and
> > apply_handle_tuple_routing with the speculative parameter set to true to
> > construct necessary index information. However, these openings were
> redundant,
> > as indexes had already been opened during target partition searches via the
> > parent table (e.g., using ExecFindPartition). Hence, they were removed in
> > commit 9ff6867. Unfortunately, ExecFindPartition opens indexes without
> > constructing the needed index information for conflict detection, which leads
> > to the crash.
> >
> > To fix it, I tried to add a detect_conflict flag in ModifyTableState, enabling
> > ExecFindPartition() to internally build the required index information in this
> > case, like the attachment.
> >
>
> I have a question about the additional information built in
> ExecOpenIndices(). We built the required information when the index is
> not used for exclusion constraint whereas we set the specConflict flag
> in ExecInsertIndexTuples() even for the exclusion constraints case.
> The specConflict flag is used by the caller to report conflicts. So,
> won't we need to build the required information in ExecOpenIndices()
> even when the index is used for exclusion constraint?
I think the index information used for detecting exclusion conflicts is always
initialized in ExecOpenIndices -> BuildIndexInfo -> RelationGetExclusionInfo,
regardless of the input parameter.
> What is the
> behavior of conflict reporting code in case of exclusion constraints?
Under logical replication context, since we do not detect conflicts for exclusion
constraints, it would simply report the original constraint violation ERROR.
Best Regards,
Hou zj
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | "Zhijie Hou (Fujitsu)" <houzj(dot)fnst(at)fujitsu(dot)com> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>, Tomas Vondra <tomas(at)vondra(dot)me>, Nisha Moond <nisha(dot)moond412(at)gmail(dot)com>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
Subject: | Re: BUG #18815: Logical replication worker Segmentation fault |
Date: | 2025-04-08 05:59:53 |
Message-ID: | CAA4eK1KWhm8dcGb4OUHTbA+UhC=2+T-k+=pBTq1CjdWaiTeAbg@mail.gmail.com |
Lists: | pgsql-bugs pgsql-hackers |
On Mon, Apr 7, 2025 at 3:28 PM Zhijie Hou (Fujitsu)
<houzj(dot)fnst(at)fujitsu(dot)com> wrote:
>
> > What is the
> > behavior of conflict reporting code in case of exclusion constraints?
>
> Under logical replication context, since we do not detect conflicts for exclusion
> constraints, it would simply report the original constraint violation ERROR.
>
Fair enough. On considering it again, I find your idea of building
conflict-related information when it is actually required sounds
better, as it may also save us performance in some corner cases.
--
With Regards,
Amit Kapila.
From: | "Zhijie Hou (Fujitsu)" <houzj(dot)fnst(at)fujitsu(dot)com> |
---|---|
To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>, Tomas Vondra <tomas(at)vondra(dot)me>, Nisha Moond <nisha(dot)moond412(at)gmail(dot)com>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
Subject: | RE: BUG #18815: Logical replication worker Segmentation fault |
Date: | 2025-04-08 06:56:25 |
Message-ID: | OS3PR01MB57185F018C8F64689B44672B94B52@OS3PR01MB5718.jpnprd01.prod.outlook.com |
Lists: | pgsql-bugs pgsql-hackers |
On Tue, Apr 8, 2025 at 2:00 PM Amit Kapila wrote:
>
> On Mon, Apr 7, 2025 at 3:28 PM Zhijie Hou (Fujitsu) <houzj(dot)fnst(at)fujitsu(dot)com>
> wrote:
> >
> > > What is the
> > > behavior of conflict reporting code in case of exclusion constraints?
> >
> > Under logical replication context, since we do not detect conflicts
> > for exclusion constraints, it would simply report the original constraint
> violation ERROR.
> >
>
> Fair enough. On considering it again, I find your idea of building conflict-related
> information when it is actually required sounds better, as it may also save us
> performance in some corner cases.
Thanks for reviewing.
After studying this alternative further, I think it can be improved.
Specifically, we can delay index info initialization until the
FindConflictTuple() function, where the index info is actually used. This
minimizes the overhead of building index info when no conflict exists.
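The delay described above is a standard lazy-initialization pattern: pay the construction cost only on the first conflict check, so bulk apply paths that never hit a conflict check skip it entirely. The following C fragment is a rough sketch with made-up names (ConflictCheckState and build_index_info stand in for the real ResultRelInfo fields and BuildSpeculativeIndexInfo()):

```c
#include <stdbool.h>

/* Illustrative stand-in for the per-relation state; the real code keeps
 * IndexInfo structs inside the ResultRelInfo. */
typedef struct ConflictCheckState
{
    bool built;            /* has the index info been constructed yet? */
    int  build_count;      /* how many times we paid the build cost */
} ConflictCheckState;

/* Hypothetical stand-in for the expensive index-info construction. */
static void
build_index_info(ConflictCheckState *s)
{
    s->build_count++;
    s->built = true;
}

/* Sketch of FindConflictTuple(): build the index info lazily, on the
 * first call that actually needs it. */
static bool
find_conflict_tuple(ConflictCheckState *s)
{
    if (!s->built)
        build_index_info(s);
    /* ... actual conflict lookup elided ... */
    return false;           /* no conflict found in this sketch */
}
```

Repeated calls pay the build cost once, and a transaction that never reaches a conflict check pays it zero times — which matches the profile result below, where BuildSpeculativeIndexInfo() drops out entirely.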
I generated a profile on the HEAD to check the impact of
BuildSpeculativeIndexInfo() during bulk insert operations (without conflicts),
and the results were notable:
--14.68%--BuildSpeculativeIndexInfo
|--5.34%--IndexAmTranslateCompareType
| --5.05%--GetIndexAmRoutineByAmId
| |--2.26%--GetIndexAmRoutine
| |--1.68%--SearchSysCache1
| --0.52%--ReleaseSysCache
|--5.29%--get_opfamily_member
| |--4.06%--SearchSysCache4
| --0.50%--ReleaseSysCache
|
|--2.66%--get_opcode
| |--1.74%--SearchSysCache1
And this function disappeared after delaying the init to FindConflictTuple().
To further analyze performance, I measured the time taken to apply a
transaction that inserts 1,000,000 rows into a table with three unique indexes:
HEAD: 6267 ms
Improved alternative: 5593 ms
It shows about 11% performance improvement with the refined approach.
I am attaching the v2 patch, which implements this idea by postponing index
info initialization until FindConflictTuple(). I confirmed it passes the
regression and pgindent checks.
Best Regards,
Hou zj
Attachment | Content-Type | Size |
---|---|---|
v2-0001-Fix-crashes-in-logical-replication-during-unique-.patch | application/octet-stream | 7.0 KB |
From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | "Zhijie Hou (Fujitsu)" <houzj(dot)fnst(at)fujitsu(dot)com> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Sergey Belyashov <sergey(dot)belyashov(at)gmail(dot)com>, Tomas Vondra <tomas(at)vondra(dot)me>, Nisha Moond <nisha(dot)moond412(at)gmail(dot)com>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
Subject: | Re: BUG #18815: Logical replication worker Segmentation fault |
Date: | 2025-04-09 09:29:22 |
Message-ID: | CAA4eK1KTbQGVQcypVMnvo=69nf_PY2if6FLC6O+t99WqxeZ+cw@mail.gmail.com |
Lists: | pgsql-bugs pgsql-hackers |
On Tue, Apr 8, 2025 at 12:26 PM Zhijie Hou (Fujitsu)
<houzj(dot)fnst(at)fujitsu(dot)com> wrote:
>
>
> I am attaching the V2 patch which implements this idea by
> postponing index info initialization until the FindConflictTuple(). I confirmed
> It can pass regression and pgindent check.
>
Thanks, I pushed this patch yesterday.
--
With Regards,
Amit Kapila.