@@ -186,9 +186,10 @@
 
   <para>
    Each subscription will receive changes via one replication slot (see
-   <xref linkend="streaming-replication-slots"/>).  Additional temporary
-   replication slots may be required for the initial data synchronization
-   of pre-existing table data.
+   <xref linkend="streaming-replication-slots"/>).  Additional replication
+   slots may be required for the initial data synchronization of
+   pre-existing table data and those will be dropped at the end of data
+   synchronization.
  </para>
 
  <para>
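The per-subscription slot described above can be inspected on the publisher; a minimal sketch, assuming a hypothetical subscription named mysub whose slot kept the default name:

    -- On the publisher: the subscription's slot (named after the
    -- subscription by default) appears in pg_replication_slots.
    SELECT slot_name, plugin, slot_type, active
    FROM pg_replication_slots
    WHERE slot_name = 'mysub';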
@@ -248,13 +249,23 @@
 
    <para>
     As mentioned earlier, each (active) subscription receives changes from a
-    replication slot on the remote (publishing) side.  Normally, the remote
-    replication slot is created automatically when the subscription is created
-    using <command>CREATE SUBSCRIPTION</command> and it is dropped
-    automatically when the subscription is dropped using <command>DROP
-    SUBSCRIPTION</command>.  In some situations, however, it can be useful or
-    necessary to manipulate the subscription and the underlying replication
-    slot separately.  Here are some scenarios:
+    replication slot on the remote (publishing) side.
+   </para>
+   <para>
+    Additional table synchronization slots are normally transient, created
+    internally to perform initial table synchronization and dropped
+    automatically when they are no longer needed.  These table synchronization
+    slots have generated names: <quote><literal>pg_%u_sync_%u_%llu</literal></quote>
+    (parameters: subscription <parameter>oid</parameter>,
+    table <parameter>relid</parameter>, system identifier <parameter>sysid</parameter>).
+   </para>
+   <para>
+    Normally, the remote replication slot is created automatically when the
+    subscription is created using <command>CREATE SUBSCRIPTION</command> and it
+    is dropped automatically when the subscription is dropped using
+    <command>DROP SUBSCRIPTION</command>.  In some situations, however, it can
+    be useful or necessary to manipulate the subscription and the underlying
+    replication slot separately.  Here are some scenarios:
 
     <itemizedlist>
      <listitem>
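To make the generated naming concrete: while initial synchronization is running, slots of this form can be listed on the publisher. A sketch, with the LIKE pattern as a loose approximation of the pg_%u_sync_%u_%llu format:

    -- On the publisher: transient table synchronization slots follow
    -- the generated pattern, so they can be spotted by name while
    -- initial synchronization is in progress.
    SELECT slot_name, active, restart_lsn
    FROM pg_replication_slots
    WHERE slot_name LIKE 'pg\_%\_sync\_%';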
@@ -294,8 +305,9 @@
        using <command>ALTER SUBSCRIPTION</command> before attempting to drop
        the subscription.  If the remote database instance no longer exists, no
        further action is then necessary.  If, however, the remote database
-       instance is just unreachable, the replication slot should then be
-       dropped manually; otherwise it would continue to reserve WAL and might
+       instance is just unreachable, the replication slot (and any still
+       remaining table synchronization slots) should then be dropped
+       manually; otherwise they would continue to reserve WAL and might
        eventually cause the disk to fill up.  Such cases should be carefully
        investigated.
       </para>
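A sketch of the manual cleanup described above, assuming a hypothetical subscription named mysub whose remote slot kept the default name:

    -- On the subscriber: detach the subscription from its remote slot,
    -- then drop it; DROP SUBSCRIPTION will no longer try to reach the
    -- (unreachable) publisher.
    ALTER SUBSCRIPTION mysub DISABLE;
    ALTER SUBSCRIPTION mysub SET (slot_name = NONE);
    DROP SUBSCRIPTION mysub;

    -- On the publisher, once it is reachable again: drop the orphaned
    -- slot(s) manually so they stop reserving WAL.
    SELECT pg_drop_replication_slot('mysub');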
@@ -468,16 +480,19 @@
  <sect2 id="logical-replication-snapshot">
   <title>Initial Snapshot</title>
   <para>
-      The initial data in existing subscribed tables are snapshotted and
-      copied in a parallel instance of a special kind of apply process.
-      This process will create its own temporary replication slot and
-      copy the existing data. Once existing data is copied, the worker
-      enters synchronization mode, which ensures that the table is brought
-      up to a synchronized state with the main apply process by streaming
-      any changes that happened during the initial data copy using standard
-      logical replication. Once the synchronization is done, the control
-      of the replication of the table is given back to the main apply
-      process where the replication continues as normal.
+     The initial data in existing subscribed tables are snapshotted and
+     copied in a parallel instance of a special kind of apply process.
+     This process will create its own replication slot and copy the existing
+     data.  As soon as the copy is finished the table contents will become
+     visible to other backends.  Once existing data is copied, the worker
+     enters synchronization mode, which ensures that the table is brought
+     up to a synchronized state with the main apply process by streaming
+     any changes that happened during the initial data copy using standard
+     logical replication.  During this synchronization phase, the changes
+     are applied and committed in the same order as they happened on the
+     publisher.  Once the synchronization is done, the control of the
+     replication of the table is given back to the main apply process where
+     the replication continues as normal.
   </para>
  </sect2>
 </sect1>
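The per-table progress of this process can be watched from the subscriber side; a minimal sketch using the pg_subscription_rel catalog (the exact set of srsubstate values varies by server version):

    -- On the subscriber: per-table synchronization state.  srsubstate
    -- is 'i' (initialize), 'd' (data is being copied),
    -- 's' (synchronized), or 'r' (ready).
    SELECT srrelid::regclass AS relation, srsubstate
    FROM pg_subscription_rel;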