710 - Archive Server 9.7.0 Administration

Archive Server Administration

Learning Services course 710

July 2008

OPENTEXT
Great Minds Working Together
Impressum Learning Services course material
Revision 9.6q
Author: Learning Services
Date: July 2008

Copyright © 2008 IXOS SOFTWARE AG


All rights reserved, including those regarding reproduction, copying or other use or
communication of the contents of this document or parts thereof. No part of this
publication may be reproduced, transmitted to third parties, processed using electronic
retrieval systems, copied, distributed or used for public demonstration in any form without
the written consent of IXOS SOFTWARE AG. We reserve the right to update or modify
the contents. Any and all information that appears within illustrations of screenshots is
provided coincidentally to better demonstrate the functioning of the software. IXOS AG
hereby declares that this information reflects no statistics of nor has any validity for any
existing company.
© 2008 by Open Text Corporation. The copyright to these materials and any
accompanying software is owned, without reservation, by Open Text. These materials
and any accompanying software may not be copied in whole or part without the express,
written permission of Open Text. The information in this document is subject to change
without notice. All rights reserved. Printed in Canada.
Open Text Corporation is the owner of the trademarks Open Text, 'Great Minds Working
Together', and Livelink, among others. This list is not exhaustive. All other products or
company names are used for identification purposes only, and are trademarks of their
respective owners.

Trademarks SAP, R/3, SAPmail, SAPoffice, SAPscript, SAP Business Workflow, SAP ArchiveLink:
SAP AG
IXOS, IXtrain: IXOS SOFTWARE AG, München
UNIX: UNIX System Laboratories, Inc.
OSF, Motif, OSF/Motif: Open Software Foundation, Inc.
X Window System: Massachusetts Institute of Technology
PostScript: Adobe Systems, Inc.
FrameMaker: Frame Technology Corporation
ORACLE: ORACLE Corporation, California, USA
Microsoft, WINDOWS, EXCEL, NT: Microsoft Corporation
Intel, Intel Inside: Intel Corporation
Other product names have been used only to identify those products and may be
trademarks of their respective owners.

Table of Contents
1 Course Objectives and Contents
710 - Course objectives 1- 2
710 - Contents overview 1- 3
Where to go from here 1- 4

2 Archiving with Open Text / IXOS


Chapter overview 2- 2
Fundamental Aspects of Archiving 2- 3
Archive Server 2- 4
Storage technologies used by Archive Server 2- 5
Supported Storage Platforms 2- 6
Opticals - DVD, WORM, UDO 2- 7
Comparison of Optical Media Types 2- 8
CAS - Content Addressed Storage - Example: EMC Centera 2- 9
SAN - Storage Area Network - Example: Hitachi Data Retention Manager 2-10
NAS - Network Addressed Storage - Example: NetApp NearStore (SnapLock) 2-11
NAS - HSM - Hierarchical Storage Management 2-12
HD-WO - Fixed Content versus Optical Media 2-13
Tools for administering the Archive Server 2-14
Document Archiving 2-15
Data Archiving 2-16
Different kinds of document representation: CI, NCI 2-17
Leading application 2-18
Leading application 2-19

3 Resources for Archive Server Administrators


Overview 3- 2
Global Services 3- 3
Learning Services 3- 4
Standard Support 3- 5
Premier Support Program 3- 6
Premier Support Program - Service Catalogue Options 3- 7
Application Support Program 3- 8
Extended Hours Support 3- 9
Open Text Global Support Hotline 3-10
IXOS Expert Service Center (ESC) 3-11
OpenText Knowledge Center (KC) 3-12
Open Text Online Community 3-13
Solution Packages by Open Text Global Services 3-14
Overview Solution Packages 3-15

4 Document Processing by the Archive Server


Chapter overview 4- 2
Document flow in the Archive Server - Synopsis 4- 3
Document archival 4- 4
Writing documents to ISO media to DVD (or WORM) 4- 5
Writing documents to ISO media to hard-disk based Storage System (HD-WO) 4- 6
Writing documents to IXW media (WORM or UDO) 4- 7

Table of Contents 0-3


Free space on an IXW volume 4- 8
Writing documents to FS pool or VI pool (hard disk) 4- 9
Providing a document for read access 4-10

5 Document Structure on Storage Media


Chapter overview 5- 2
Inner structure of documents 5- 3
Directory structure on storage media 5- 4
Files of documents 5- 5
System attributes in the ATTRIB.ATR 5- 6
Tracing documents with dsClient (1) 5- 7
Tracing documents with dsClient (2): document component details 5- 8
Document structure in the cache 5- 9
Exercise: Examine a document on storage media 5-10

6 The IXW File system - WORM File system


Chapter overview 6- 2
IXW: The Open Text / IXOS file system for UDO and WORM media 6- 3
How IXW media are written 6- 4
WORM file system database 6- 5
IXW media finalization (1) 6- 6
IXW media finalization (2): Properties 6- 7
IXW media and the ISO 9660 file system standard 6- 8

7 Configuring Logical Archives


Chapter guide 7- 2
Definition: logical archives 7- 3
Why use more than one logical archive? 7- 4
Restrictions for the number of logical archives 7- 5
Chapter guide 7- 6
Inner structure of a logical archive 7- 7
Pool 7- 8
Storage Systems supporting FS pools 7- 9
Devices 7-10
Chapter guide 7-11
Create logical archive 7-12
Create ISO (write-at-once) pool (1): Pool name and type 7-13
Create ISO (write-at-once) pool (2): Write configuration 7-14
Create ISO (write-at-once) pool (3): Writing schedule, disk buffer 7-15
Use ISO pool with HD-WO (hard disk write-once) media (1): Pool name and type 7-16
Use ISO pool with HD-WO (hard disk write-once) media (2): Write configuration 7-17
Create IXW (write-incremental) pool 7-18
Create FS (single file) Pool 7-19
Create HDSK (write-through) pool 7-20
Set document processing options (1) 7-21
Set document processing options (2) 7-22
Set security options (for HTTP access) 7-23
Exercise: Create logical archive with media pool on Archive Server 7-24

8 Disk Buffer Configuration


Chapter guide 8- 2
Disk buffer fundamentals 8- 3

0-4 710

Disk buffer purging 8- 4


Chapter guide 8- 5
Disk buffer as temporary IXW media backup 8- 6
Ways of caching documents after writing to optical media 8- 7
Caching: in disk buffer or in cache? 8- 8
Chapter guide 8- 9
Purge configuration examples 8-10
Chapter guide 8-11
Sizing considerations (1) 8-12
Sizing considerations (2) 8-13
Chapter guide 8-14
Create disk buffer (1): Provide hard disk partition 8-15
Create disk buffer (2) 8-16
Assign disk buffer to media pool 8-17
One or more disk buffers? 8-18
Exercise: Create disk buffer 8-19

9 Document Processing Options


Chapter Overview 9- 2
Caching 9- 3
Compression 9- 4
BLOBs (1) - Properties 9- 5
BLOBs (2) - Tracing documents with dsClient 9- 6
Single Instance Archiving (1) - Properties 9- 7
Single Instance Archiving (2) - Tracing SIA components 9- 8
Encryption 9- 9
Timestamps (1) - Properties 9-10
Timestamps (2) - Signing a document 9-11
Timestamps (3) - Verifying a signed document 9-12
Timestamps (4) - Renewal of Timestamps 9-13
Timestamps (5) - ArchiSig Timestamps 9-14
Timestamps (6) - Delivery of Signed Documents 9-15
Deferred Archiving (1) - Concept 9-16
Deferred Archiving (2) - pool_DELAYED_ 9-17
Deferred Archiving (3) - start archiving 9-18

10 Document Lifecycle Management Overview


Document Lifecycle Management 10- 2
Value of Records & Documents 10- 3
Retention Management Layers 10- 4
Archive Server - Retention Handling 10- 5
Retention Handling - Exterior View 10- 6
Retention period 10- 7
Retention date 10- 8
Protected documents 10- 9
Example for Retention - Time based 10-10
Examples for Retention - Event & Time based 10-11
Deferred Archiving 10-12
Compliance Mode 10-13
Exercise: Retention Settings 10-14



11 Archive Server Architecture
Chapter Overview 11- 2
Archive Server Components 11- 3
Document Service (1): Tasks 11- 4
Document Service (2): Components 11- 5
Storage Manager (STORM) (if installed) 11- 6
Administration Server 11- 7
DocumentPipeline 11- 8
Volume Migration Server 11- 9
Monitor and Notification Servers 11-10
HTTP Interface Server 11-11
Storage Database OS 11-12
Product-specific Components 11-13
Archive Server Installation Directory Tree 11-14
Archive Server Software Packages (1) 11-15
Archive Server Software Packages (2) 11-16

12 Where to Find What


Chapter Guide 12- 2
Configuring the "storage dynamics" in the Archive Server Administration 12- 3
Maintaining Configuration Variables: The Server Configuration page 12- 4
Storage of Configuration Variables: Registry (Win), setup files (Unix) 12- 5
Structured Configuration Files 12- 6
Chapter Guide 12- 7
Installed Software 12- 8
Logfiles 12- 9
SCSI Device Files 12-10
Chapter Guide 12-11
Globally defined Storage locations 12-12
Storage locations defined per Archive / Pool / Medium 12-13
STORM's WORM Management Data 12-14
Oracle Database Files 12-15
MS SQL Server Database Files 12-16
Exercise: Find elements of the Archive Server installation 12-17

13 Archive Server Startup and Shutdown


Archive Server process layers 13- 2
Different types of spawner shutdown 13- 3
Startup and Shutdown on OS level (1): Windows 2003 13- 4
Startup and Shutdown on OS level (2): Unix Platforms 13- 5
Checking status after startup (1): OS database operation 13- 6
Checking status after startup (2): Spawner layer 13- 7
Spawner control in Archive Server Administration 13- 8
STORM caveats 13- 9
Exercise: Shut down and start up your Archive Server 13-10

14 Archive Server Monitoring


Chapter guide 14- 2
Parameters observed by the Archive Web Monitor 14- 3
Working with the Archive Web Monitor 14- 4
Features of Archive Web Monitor 14- 5


Review job messages and job protocol 14- 6


Change Protocol Settings 14- 7
Chapter guide 14- 8
Notifications: Active alerts upon certain events on the Archive Server 14- 9
Scriptable monitoring (1): ixmonTest 14-10
Scriptable monitoring (2): Scheduled jobs protocol 14-11

15 Handling Optical Archive Media


Possible states of optical archive media 15- 2
Regular tasks when using ISO media 15- 3
Regular tasks when using IXW media 15- 4
IXW media partition assignment to a pool 15- 5
IXW media naming scheme 15- 6
Automatic IXW media initialization 15- 7
IXW media finalization (1): Properties 15- 8
IXW media finalization (2): Usage 15- 9
When the jukebox(es) are filled up 15-10
Re-inserting offline media into the jukebox on demand 15-11

16 Media Migration
Chapter guide 16- 2
Introduction 16- 3
Volume Migration - Process 16- 4
Migration Server's work 16- 5
Additional considerations 16- 6
Chapter guide 16- 7
Preparation steps 16- 8
Plan migration for selected volumes 16- 9
Review migration progress of volumes 16-10
Pause Migration Job 16-11
After a migration project 16-12
Chapter guide 16-13
Verification after Migration (1) 16-14
Verification after Migration (2) 16-15
Verification after Migration (3) 16-16
Bulk migration of ISO images (1) 16-17
Bulk migration of ISO images (2) 16-18
Bulk migration of ISO images (3) 16-19
Bulk migration of remote ISO volumes (1) 16-20
Bulk migration of remote ISO volumes (2) 16-21
Run Migration per Pool 16-22
Exercise: Do media migration 16-23
Chapter guide 16-24
Document Migration - Feature 16-25
Document Migration - Details 16-26
Document Migration - Function Call 16-27

17 Export and Import of Storage Media


Possible states of optical archive media 17- 2
Exporting a medium: when documents' retention period has passed 17- 3
Single-instance archiving (SIA) and media export 17- 4
Steps for exporting a medium of a SIA-enabled archive 17- 5



Importing a medium 17- 6
Media Import & Index Reconstruction 17- 7
Exercise: Export / import a medium 17- 8

18 Consistency Checks for Storage Media and Database


Consistency checks: overview 18- 2
Check database against partition 18- 3
Using consistency check tools (1): Starting a utility 18- 4
Using consistency check tools (2): The messages window 18- 5
Check partition against database (1) 18- 6
Check partition against database (2) 18- 7
Check only partition 18- 8
Check document 18- 9
Compare backup IXW media 18-10
Exercise: Check and repair consistency between medium and database 18-11

19 Expanded Archive Server Installations


Chapter overview 19- 2
Local Backup of Media 19- 3
Single Archive Server with separate backup jukebox 19- 4
RemoteStandby 19- 5
Architecture of Remote Standby (1) 19- 6
Architecture of Remote Standby (2) 19- 7
Switch over to remote standby 19- 8
Scenarios for Remote Standby Server 19- 9
HotStandby 19-10
Hot Standby Server with two hubs 19-11
Hot Standby and Remote Standby Server 19-12
Features of a Hot Standby Server 19-13
Cache Server 19-14
Cache Server Scenario 19-15
Cache Server 19-16
Input Scenario and Local Cache Server 19-17
Remote Standby Server versus Cache Server 19-18
Summary 19-19

20 Remote Standby Configuration and Operating


Chapter guide 20- 2
Basic concept and benefits 20- 3
Proposal: central backup for multiple servers 20- 4
How replication is performed 20- 5
Chapter guide 20- 6
Globally enable remote replication on original server 20- 7
Make original and Remote Standby servers known to each other 20- 8
Configure replication of logical archive 20- 9
Configure disk buffer replication 20-10
Chapter guide 20-11
Replication status in the administration client 20-12
Initialize disk buffer partition replicate 20-13
Initialize IXW media replicate 20-14
Provide empty media for ISO media replication 20-15
Chapter guide 20-16


The replication job 20-17


Review status of replicates 20-18
Exercise: Configure and perform RemoteStandby replication 20-19
Chapter guide 20-20
EMC Centera: ISO Images - Remote standby 20-21
EMC Centera: Single file - Remote standby 20-22
HDS DRM / HP XP: ISO - Remote Standby 20-23
IBM DR550: single file & ISO - Remote standby 20-24
NetApp Filer: single file & ISO - Remote standby 20-25

21 Setting up an Administrator Workstation


Chapter overview 21- 2
Requirements for an administrator's workstation 21- 3
Installing the graphical administration tools 21- 4
Additional considerations 21- 5

22 Periodic Jobs
Chapter overview 22- 2
Tasks for jobs: synopsis (1) 22- 3
Tasks for jobs: synopsis (2) 22- 4
Running several jobs simultaneously 22- 5
Scheduling automatic daily jobs: Example 22- 6
Jobs administration 22- 7
Disable a Job 22- 8
Edit a job (1): scheduling 22- 9
Edit a job (2): conditional invocation 22-10
Additional recurring tasks not executable as jobs 22-11
Exercise: Schedule jobs appropriately 22-12

23 Configuring Audit Trails


Agenda 23- 2
Audit Trails - Overview 23- 3
Collect Audit Data 23- 4
Access Audit Information - On Documents & Administrative Infos 23- 5
Purge Audit Data - exportAudit Command 23- 6
Purge Audit Data - Periodic Job 23- 7
Access Audit Information - On Single Document (1) 23- 8
Access Audit Information - On Single Document (2) 23- 9
Deletion Holds 23-10
Exercise: Audit Trails 23-11
Exercise: Deletion Holds 23-12

24 Backing up the Archive Server


Chapter guide 24- 2
Which HD areas have to be mirrored (RAID 1 or RAID 5) 24- 3
Chapter guide 24- 4
ISO pool backup 24- 5
IXW pool backup 24- 6
Chapter guide 24- 7
Which HD areas have to be backed up 24- 8
FS & HDSK pool backup 24- 9
Disk buffer backup 24-10



Database backup 24-11
STORM files backup (1): What has to be backed up 24-12
STORM files backup (2): Backup methods 24-13
Software backup 24-14

25 Hard Disk Resource Maintenance


Chapter overview 25- 2
Disk buffer 25- 3
Add hard disk partition to disk buffer 25- 4
FS & HDSK pool 25- 5
DocumentService cache (1) 25- 6
DocumentService cache (2) 25- 7
Increase Cache Paths 25- 8
Databases: DS, WORM filesystem 25- 9
Other HD resources 25-10
Exercises 25-11

26 Accounting information
Motivation and Objective 26- 2
Involved steps 26- 3
Access logging by Archive Server 26- 4
Accounting data retrieval (1): Interactive 26- 5
Accounting data retrieval (2): Command line or script-based 26- 6
Using retrieved accounting data for billing 26- 7
Reorganization of "old" accounting data 26- 8
Exercise: Download accounting data for billing 26- 9

27 Statistics and Performance Monitoring


Chapter guide 27- 2
Read/write statistics for storage media 27- 3
Statistics about reading from cache / jukebox / hard disk 27- 4
DS statistics: technical aspects 27- 5
Chapter guide 27- 6
Statistics for jukeboxes, drives, media 27- 7
Structure of STORM statistics files 27- 8
Statistics file processing 27- 9
Chapter guide 27-10
Statistics interface to Windows Performance Monitor 27-11
Statistics add-on: IXOS-Insight 27-12

28 Logfiles and Loglevels


Chapter guide 28- 2
Log message structure (1): General 28- 3
Log message structure (2): STORM trace file 28- 4
Log message interrelations (1): Within a log file 28- 5
Log message interrelations (2): Across log files 28- 6
Chapter guide 28- 7
Static vs. dynamic loglevel settings 28- 8
Log switches (1) 28- 9
Log switches (2) 28-10
Log switches (3) 28-11
STORM loglevels 28-12


Chapter guide 28-13


Size limitations for logfiles (general) 28-14
Size limitations for STORM log and trace files 28-15
Chapter guide 28-16
Relevant logfiles for Archive Server Operations 28-17
Access to log file via Perl script 28-18
Logging: Further sources of information 28-19

29 Summary of troubleshooting tasks on the Archive Server


Chapter overview 29- 2
Avoiding problems 29- 3
Symptom examination (1): Error state ("red bulb") in Server Monitor 29- 4
Symptom examination (2): "Dead" services in spawncmd status 29- 5
Symptom examination (3): Red bulb in Archive Server job protocol 29- 6
Symptom examination (4): DocumentPipeline errors 29- 7
Further Information 29- 8
Contacting Customer Support 29- 9

Appendix A Exercise Worksheets



Exercise 1: Storing and retrieving a sample document A- 3
Exercise 2: Using the Expert Service Center A- 4
Exercise 3: Writing stored documents to WORM A- 5
Exercise 4: Examine a document on storage media A- 6
Exercise 5: Finalizing a WORM A- 7
Exercise 6: Create logical archive on Archive Server A- 8
Exercise 7: Create disk buffer A-10
Exercise 8: Store and examine a document in a BLOB A-11
Exercise 9: Store and examine a single-instance archived document A-12
Exercise 10: Retention Settings A-13
Exercise 11: Archive Server components A-14
Exercise 12: Check operating condition and restart Archive Server cleanly A-15
Exercise 13: Archive Server monitoring A-16
Exercise 14: WORM operating A-17
Exercise 15: Media migration A-18
Exercise 16: Media export and import A-19
Exercise 17: Check consistency of storage medium and database A-20
Exercise 18: Remote Standby configuration and operating A-21
Exercise 19: Scheduling periodic jobs A-22
Exercise 20: Audit Trails A-23
Exercise 21: Deletion Holds A-25
Exercise 22: Online backup of WORM filesystem database A-26
Exercise 23: Add partition to disk buffer A-27
Exercise 24: Move DocumentPipeline directory A-28

Appendix B Archive Server - Command Line Tools


0. General Remarks B- 1
1. dsClient B- 1
2. spawncmd B- 3
3. dpctrl B- 3
4. ixmonTest / ixmontst (Windows) B- 4
5. cdadm B- 4

Appendix C Glossary



1 Course Objectives and Contents

Course Objectives and Contents 1-1


710 - Course objectives

• Gain a basic understanding of the Archive Server
  - Role within the Open Text / IXOS Solutions
  - Functionality and architecture

• Be able to administer an Archive Server efficiently and reliably
  This includes being able to
  - configure the Archive Server according to the enterprise's needs
  - operate the server reliably
  - detect and solve common problems yourself

Course Objectives and Contents Slide 2

710 - Contents overview

Course Objectives and Contents Slide 3

Course Objectives and Contents 1-3


Where to go from here

See http://opentext.com/training for more information on the Learning Services offering.

[Diagram: course path from course 710, Archive Server Administration (5 days), to Archive Server Installation (4 days) and on to application-specific customizing courses]

Course Objectives and Contents Slide 4

Starting from the present course 710, Archive Server Administration, Open Text Learning
Services offers a variety of courses for different educational needs.
Since course 710 covers only the server part of a Livelink Enterprise Archive, all
administrators are recommended to complete their administration skills with the
application-specific counterpart. For this purpose, Learning Services offers separate
administration courses for all Livelink Enterprise Archive products; all of
them require having attended course 710 before.
715 Archive Server Administration Advanced is not needed for administering a
"normal" Archive system. The target group of this course is administrators of large Archive
installations, especially in outsourcing centers; there they learn how to automate
administrative tasks, integrate the Archive Server more tightly into a computing center
infrastructure, and do advanced troubleshooting.
Some of the leading systems, e. g. SAP, require additional customizing or configuration
in order to integrate optical document storage into their "main" functionality. Learning
Services offers appropriate, product-specific customizing courses for building up the
necessary skills.
For specialized interest, further courses and workshops are available. For full
information including scheduled course dates, see http://www.opentext.com/training


2 Archiving with Open Text / IXOS


A general introduction

Archiving with Open Text / IXOS 2-1


Chapter overview

• Archiving: Fundamental properties
• Introduction to the Archive Server
• Types of document/data archiving
• The role of the "leading application"

Archiving with Open Text / IXOS Slide 2


Fundamental Aspects of Archiving



Certain business scenarios require that documents & data ...

• can no longer be changed or manipulated
  - i. e. optical media
  - i. e. write-once mode on hard-disk based storage system
  - or proof by use of time stamps that content has not been modified

• can be retrieved as needed
  - It is stored safely (i. e. protected against becoming lost or changed)
    throughout its retention period
  - It can be found easily (in the leading application)
  - It is available for prompt retrieval and display

• For a scanned document:
  - Ensure it is a legally correct facsimile of the original document

Archiving with Open Text / IXOS Slide 3

Traditionally, optical media like DVD or WORM have been used to ensure that documents are
no longer changed or manipulated. Modern storage systems can usually be switched to "write-
once mode" to ensure that documents cannot be changed even though hard disk is used as
the final storage medium. While non-manipulation of documents is desirable, disposition
management (removing documents after their retention period has expired) should also be
considered.
Besides the technical components, the right organisational implementation and its
documentation are important for ensuring legally compliant archiving according to the
specific legislation.

Archiving with Open Text / IXOS 2-3


Archive Server

Archiving with Open Text / IXOS Slide 4


Storage technologies used by Archive Server

• CAS storage systems (i. e. EMC Centera)
  - "virtual jukebox" writing ISO images
  - ≥ 9.6: also storing single files

• SAN/NAS storage systems (i. e. NetApp, HDS, STK ...)
  - depending on system, ISO images/single files

• Hard disk partitions or HSM storage systems
  - For short-term archiving (e. g. employment applications)

• Jukeboxes with optical media (i. e. DVD, WORM, UDO)
  - Guarantees against corruption, changes, or loss of documents
  - Inexpensive media for large quantities of data

Archiving with Open Text / IXOS Slide 5

About the media type abbreviations:

• CAS = Content Addressed Storage
• SAN = Storage Area Network
• NAS = Network Addressed Storage
• UDO = Ultra Density Optical

Abbreviations and related storage technologies will be covered in the next slides in more
detail.

Archiving with Open Text / IXOS 2-5


Supported Storage Platforms
[Diagram: supported storage platforms and their retention period support]

Archiving with Open Text /IXOS Slide 6

For details on supported operating systems and interfaces, view the storage platform release
notes.

WO Feature
The write-once feature allows i. e. hard-disk based storage systems to store documents and
data on the hard disk similar to e.g. a DVD-R. After the initial write process, the data is stored
as "read-only" and may only be modified or deleted after a certain retention period.


Opticals - DVD, WORM, UDO



Archive Server
• Support of various jukebox vendors
• Support of different media:
  - WORM, UDO
    - Single file
    - ISO image (WORM media only)
  - DVD
    - ISO image
• Advantage
  - Support of offline media
  - Non erasable, robust
  - Tamper proof
• Drawback
  - Nearline media
  - Cache areas required for fast access

[Diagram: Archive Server connected via SCSI or fibre channel to optical jukeboxes: DVD, WORM, UDO]

Archiving with Open Text / IXOS Slide 7

Archiving with Open Text / IXOS 2-7



Comparison of Optical Media Types


• UDO is similar to standard WORMs
  - same size, significantly higher storage capacity

• CD writing discontinued with Archive Server ≥ 9.6

Archiving with Open Text / IXOS Slide 8

About the media type abbreviations:

• WORM = Write Once, Read Multiple
• CD-R = Compact Disk Recordable (for Archive Server ≥ 9.6 read-only)
• DVD-R = Digital Versatile Disk Recordable; ≥ eCONserver 5.0
• UDO = Ultra Density Optical; ≥ eCONserver version 6.0 D or Patch SV55-109

In general, a CD-R or DVD-R must be written at one time. It cannot be modified thereafter.
Backup CD/DVD copies are identical in every way to the original medium and should be stored
in a separate location.

Discontinuation of CD writing (≥ 9.6)
With Archive Server ≥ 9.6, support of CDs is switched to read-only mode (latest migration
possibility) with the next Archive Server version. CDs will not be supported in further Archive
Server versions.

WORM disks can only be written once, but the disk itself can be written incrementally.
MO disks ("magneto-optical") are the re-writable counterparts of WORMs. Since they do not
fulfill the legal requirement of unalterability of stored data, IXOS does not support the use of
MOs as storage media.
As opposed to magnetic storage media, particularly tapes, optical disks offer excellent long-
term safety of data. When archiving documents, the fact that data cannot be modified is also
an important advantage to fulfill legal requirements.
UDO is a fairly new medium, similar to standard WORMs, also in size, but with a significantly
higher storage capacity. See also http://www.udo.com
If desired, non-optical media (like hard disk partitions) can be used as alternative storage
technology. This serves mainly these purposes:

• Documents can be deleted easily; this may be necessary under certain legal
  circumstances, e. g. when storing job application documents.
• Access to documents stored on hard disks is always very fast and does not impose any
  workload on the optical storage devices. Situations exist where this is an important
  requirement, e. g. when using documents with overlay forms.


CAS - Content Addressed Storage - Example: EMC Centera
Archive Server
• Support of WO (write-once) feature
• Support of retention periods
  - Retention period on single document or
  - Retention period per virtual jukebox
• Network connection (IP)
• Advantage:
  - ISO image: easy backup to optical media
  - Flexible partition size
• Drawback
  - Centera SDK used
  - Can't be used as disk subsystem
  - No file system available

[Diagram: Archive Server connected via IP to CentraStar; single documents stored in virtual jukeboxes]

Archiving with Open Text / IXOS Slide 9

Centera SDK is provided by OpenText.

Archiving with Open Text / IXOS 2-9



SAN - Storage Area Network - Example: Hitachi Data Retention Manager

Archive Server
• Support of WO (write-once) feature
• Support of retention periods
  - Retention period per virtual jukebox
• Storage LDEVs integrated as one or more virtual jukeboxes
  - Data stored as ISO image
• Fiber Channel connection
• Advantage:
  - Easy backup to optical media
  - Can be used as disk subsystem
  - Deletion tool available by HDS
• Drawback
  - API necessary

[Diagram: Archive Server connected via Fibre Channel; ISO images stored in virtual jukeboxes]

Archiving with Open Text / IXOS Slide 10

API is provided by HDS.


NAS - Network Addressed Storage - Example: NetApp NearStore (SnapLock)

• Integration into Document Service
• SnapLock enables write-once feature on NearStore filers
  - Support of fixed content for single documents
  - Support of ISO images
• Treated as hard disk partition
  - Provide large hard disk storage by attaching multiple NetApp volumes
• Advantage
  - No special API necessary
  - Can be used together with optical and other storage media

[Diagram: single files and ISO files stored on NetApp volumes]

Archiving with Open Text / IXOS Slide 11

Archiving with Open Text / IXOS 2-11



NAS - HSM - Hierarchical Storage Management

• Container files (ISO image)
  - STORM API - HSM device
  - Support of virtual jukeboxes
  - Retention periods in ISO image

• Single documents
  - DS through disk buffer

• Hint:
  - Release to tape not supported;
    a copy on hard disk is always required

Archiving with Open Text / IXOS Slide 12

HSM is policy-based management of file backup and archiving in a way that uses storage
devices economically and without the user needing to be aware of when files are being
retrieved from backup storage media.
Although HSM can be implemented on a standalone system, it is more frequently used in the
distributed network of an enterprise. The hierarchy represents different types of storage media,
such as redundant array of independent disk systems, optical storage, or tape, each type
representing a different level of cost and speed of retrieval when access is needed.
For example, as a file ages in an archive, it can be automatically moved to a slower but less
expensive form of storage. Using an HSM product, an administrator can establish and state
guidelines for how often different kinds of files are to be copied to a backup storage device.
Once the guideline has been set up, the HSM software manages everything automatically.
HSM adds to archiving and file protection for disaster recovery the capability to manage
storage devices efficiently, especially in large-scale user environments where storage costs
can mount rapidly.
An administrator can set high and low thresholds for hard disk capacity that HSM software will
use to decide when to migrate older or less-frequently used files to another medium. Certain
file types, such as executable files (programs), can be excluded from those to be migrated.
When used with the Archive Server, these thresholds should be set appropriately so that files
stay on hard disk.
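A threshold-driven migration policy of the kind described above can be sketched in a few lines. This is purely illustrative: the watermark values, the file tuples, and the excluded extensions are assumptions for this example and are not taken from any HSM product's API.

```python
import os

# Hypothetical thresholds: start migrating when disk usage exceeds
# HIGH_WATERMARK, stop once it drops below LOW_WATERMARK.
HIGH_WATERMARK = 0.90
LOW_WATERMARK = 0.70
EXCLUDED_EXTENSIONS = {".exe", ".dll"}  # file types never migrated

def usage_ratio(used_bytes, capacity_bytes):
    """Fraction of the disk capacity currently in use."""
    return used_bytes / capacity_bytes

def select_migration_candidates(files, used_bytes, capacity_bytes):
    """Pick least-recently-used files until usage falls below the low
    watermark. `files` is a list of (name, size, last_access) tuples."""
    if usage_ratio(used_bytes, capacity_bytes) < HIGH_WATERMARK:
        return []  # nothing to do yet
    candidates = []
    # Oldest (least recently accessed) files are migrated first.
    for name, size, last_access in sorted(files, key=lambda f: f[2]):
        if os.path.splitext(name)[1] in EXCLUDED_EXTENSIONS:
            continue
        candidates.append(name)
        used_bytes -= size
        if usage_ratio(used_bytes, capacity_bytes) < LOW_WATERMARK:
            break
    return candidates
```

For an Archive Server connection, the watermarks would simply be set high enough that archived files are never selected for migration.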


HD-WO - Fixed Content versus Optical Media

• Magnetic storage media:
  - Various storage vendors
  - High volume storage
  - Sometimes already existing as file servers (NetApp) or backup system (HSM)
  - Improved technique for fast access
  - Additional security by software technique
  - Can be destroyed by magnetic fields or water leakage

• Optical media:
  - WORM, DVD, UDO
  - Written and read by a laser
  - Data stability
  - Tamper proof (no manipulation)
  - Non-erasable, robust
  - Medium access time (if volume in drive)
  - DVD: widely used in consumer market
  - Dust (media), mechanical wear-out (jukeboxes)


Incremental write for DVD-R

• Not supported

Support of DVD-RAM as Soft-WORM (also known as DVD-WO)

• Many blocks are wasted when writing FCBs.
• Incompatibilities in read drives have been found.
• DVD-RAM has not been standardized by the DVD Forum.

DVD-RAM in MO mode

• Project solution
• Must not be used!



Tools for administering the Archive Server

• Graphical administration tools
  - Archive Server Administration
    - Configuring the server
    - Handling jukeboxes and optical media
  - Archive Server Monitor 9.5.0
  - Livelink ECM - Document Pipeline Info
    - Monitoring and troubleshooting the Document Pipeline

• Web-based administration interface
  - Monitoring the server
  - Handling accounting data

• Command line tools
  - Gathering storage statistics
  - Administering the system without a graphical screen
  - Low-level system handling for troubleshooting



The graphical administration tools are focused on "everyday" server handling.


Nevertheless, most system configuration tasks can be accomplished with them as well.
These tools are normally installed on the administrator's own workstation computer,
enabling him to administer the Archive Server remotely.
The new web-based administration interface is the basis for a future shift from dedicated
client software to web-based server access techniques. Currently, only new
administrative functions are implemented that way. As an exception, the IXOS
WebMonitor is an equivalent, client-less alternative to the traditional IXOS-eCONserver
Monitor.
Command line tools on the Archive Server grant the administrator "deeper" access to the
server than the graphical tools provide. This is normally reserved for special occasions
like advanced troubleshooting or - for example - specific data migration tasks.


Document Archiving

[Diagram: A scanning client scans paper documents and transfers the image files to the
Archive Server. The server of the leading application (e.g. SAP R/3) archives directly
via a specific interface. For retrieval, documents are downloaded and displayed on the client.]

Regarding the archival of documents, three basic variants can be distinguished.


Paper documents are scanned first; this is normally done using the scanning application
Livelink Enterprise Scan. The resulting image files are then transferred to the Archive
Server for archival.
Machine-generated documents can be archived directly from the generating server to the
Archive Server via a specialized interface - provided that such an interface exists for
that application system. The "classical" example for this type of archival is archiving
from SAP R/3 (outgoing documents and print lists).
Document files from external sources are archived via the so-called batch input interface
of the Archive Server. This is the most generalized way of archiving: the documents
may originate from any leading application - no special integration interface is
necessary. Moreover, scanned images of paper documents can be archived that way
as well as machine-generated documents.

Document retrieval is carried out the same way, no matter how the documents have entered
the archive (as detailed above): The retrieval client component downloads the document from
the Archive Server to the user's workstation, then the document is displayed in a suitable
viewing application (for most frequently used document formats, this is the Livelink Archive
Windows Viewer).



Data Archiving

[Diagram: User workstations access the server of the leading application
(e.g. SAP R/3 or MS Exchange), which in turn interacts with the Archive Server.]

Compared to document archiving (see previous page), data archiving implies different roles
of the leading system and the Archive Server.
A typical server-based application produces and/or stores large amounts of electronic data.
However, the available storage space for that application data is limited by the server
hardware. In many cases, the server is not even able to keep all the data it is supposed to
due to business requirements (e.g. a certain legally enforced data retention period). In such
a situation, selected application data can be moved from the application server to the
Archive Server; this is referred to as data archiving.
The application server can then access the archived data again for various kinds of
processing; this includes:
Display of archived data items (in non-changeable mode)
Reloading archived data into the server's own storage space
From the point of view of the leading application, the Archive Server is therefore a mere (safe
and huge) storage backend. As a consequence, the system users (and their computers) have
no direct relation or connection to the Archive Server; they only "see" the leading application's
server that interacts with the Archive Server behind the scenes.
The following IXOS products use the Archive Server in the data archiving manner:
Livelink Integration for SAP Solutions
Livelink E-mail Archiving for MS Exchange
Livelink E-mail Archiving for Lotus Notes


Different kinds of document representation: CI, NCI

• Non-coded information - NCI
  - Documents stored in non-text formats, e.g. image data
  - Machine does not know the wording of the document
  - Automatic indexing not possible
    - However, OCR may be used for this
  - Examples of NCI documents:
    - All scanned documents
    - Incoming faxes

• Coded information - CI
  - Machine-generated, machine-readable documents
  - Automatic indexing possible
    - Depending on application context
  - Examples of CI documents:
    - Print lists, voucher lists
    - Outgoing documents from SAP R/3
    - Office documents
    - E-mail messages

Non-coded information (also referred to as NCI):


Contents of a document captured as an image of the original paper document; this
occurs when paper documents are scanned
Archived as an image
Retrieved and displayed as an electronic facsimile of the scanned-in document
Contents cannot be interpreted by a computer (unless optical character recognition,
OCR, is performed). Consequently, TIFF documents cannot be 'searched' for
keywords
It may be desirable to have applications convert documents to TIFF format before
archiving. This protects against having their content modified afterwards and assures
that the document can be accessed and read in the future (regardless of what radical
changes are made to the software with which the document was originally produced)

Coded information (also referred to as CI):


Computer-generated documents
Machine-readable
Archived in its original form
Can be searched for indices (similar to keywords)
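The practical consequence of the CI/NCI distinction for indexing can be sketched as follows. This is illustrative only: the format sets and the function are invented for this example and belong to no Open Text product.

```python
# Formats assumed for illustration: image formats carry NCI,
# text-based formats carry CI.
NCI_FORMATS = {"tiff", "jpeg", "fax"}
CI_FORMATS = {"txt", "print-list", "email"}

def indexing_strategy(doc_format):
    """Return how index attributes can be obtained for a document."""
    fmt = doc_format.lower()
    if fmt in CI_FORMATS:
        # Coded information: the machine can read the wording,
        # so attributes can be extracted automatically.
        return "automatic"
    if fmt in NCI_FORMATS:
        # Non-coded information: only an image is stored; OCR or
        # manual keying is needed to obtain index attributes.
        return "ocr-or-manual"
    return "unknown"
```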



Leading application

The leading application ...

• controls how documents are used
• stores references to archived documents for retrieval
  - Expressed as document IDs
• maintains the relationship between documents and business-related "objects", e.g.
  - business occurrences like purchase orders, deliveries, ...
  - attribute sets within a document index
• uses a communication interface to the Archive Server (and/or the Archive Clients)

For making up an optical archiving solution, it is not sufficient just to store documents or
document images on storage media. In order to make the documents serve some business
purpose, they must be made available for retrieval by one of these methods:
Maintaining attributes for each document by which document users can search for
specific documents of interest. Such attributes can include:
- Date of origin
- Document number
- Customer number
- Document type: order, invoice, correspondence, ...
- ... and many more
Linking documents to some kind of "object" maintained by another business data
system. For example, an invoice document may correspond to an invoice booking in the
SAP database. A SAP user can search for and retrieve the invoice booking, then
retrieve the corresponding document by activating a link to it that is stored as part of the
booking data.
Since the choice of how to make documents retrievable fundamentally decides how
documents are used in business, the system performing this task is called the leading
application. It may or may not be part of the optical archiving system itself; Livelink for
Electronic Archiving (former IXOS-eCONtext for Applications) and Livelink for SAP
Solutions (former IXOS-eCONtext for SAP) are two opposite examples of this.
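The bookkeeping role of a leading application can be reduced to a minimal sketch. Everything here is an illustrative assumption: real leading applications such as SAP keep these links in their own database tables, and the class and method names are invented.

```python
# Minimal sketch of a leading application's link table: each business
# object keeps the document ID under which the Archive Server stores
# the corresponding document, plus searchable attributes.

class LeadingApplication:
    def __init__(self):
        # business object key -> (archive document ID, attributes)
        self._links = {}

    def link_document(self, business_key, doc_id, attributes):
        """Remember which archived document belongs to a business object."""
        self._links[business_key] = (doc_id, attributes)

    def find_doc_id(self, business_key):
        """Resolve a business object to the document ID that a client
        would then request from the Archive Server."""
        doc_id, _ = self._links[business_key]
        return doc_id

    def search(self, **criteria):
        """Find business objects whose attributes match all criteria."""
        return [key for key, (_, attrs) in self._links.items()
                if all(attrs.get(k) == v for k, v in criteria.items())]
```

A user searching by attributes (customer number, document type, ...) thus never addresses the Archive Server directly; the leading application resolves the hit to a document ID first.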


Leading application

[Diagram: A user requests a document from the leading application, which in turn
requests document ID aaahx4c... from the Archive Server. The slide repeats the
bullet points of the previous slide.]



3 Resources for Archive Server Administrators


• Global Services
• Learning Services
• Customer & Premier Support
• Web Resources (ESC & KC)
• Open Text Online Community
• Solution Packages


Global Services

Objectives:

• Helping customers to fully exploit the potential of Open Text solutions, i.e.
  - Document and Data Archiving
  - Workflow Integration
  - Web-based Portals
  - Migration and parallel legacy and SAP systems
  - Existing archive migration

• Implementing specialized archiving solutions on customers' demand
  - Individual archiving projects
  - Solution packages

Solution packages are implementations for specialized archiving requirements. They are not
part of the standard products but can be added to them in order to expand their functionality.
Some solution packages are ready-to-run, others require a certain amount of consulting to be
established at a customer.

Cooperation with Global Services is initiated via your local sales representative; please contact
them for further details.



Learning Services

Course portfolio examples:

• Consultant courses
  - 601 ECM Fundamentals
  - 775 SAP Legal Compliance

• Administrator courses
  - 710 Archive Server Administration
  - 715 Archive Server Advanced Administration

• Installation course
  - 720 Archive Server Installation

• Workshops
  - 725 Scanning Documents
  - SAP-CST-PL Customizing SAP Print Lists for Archiving

• Customer-specific, on-site courses

See http://www.opentext.com/training for the full portfolio and course schedule.

Open Text's training center, Learning Services, offers a wide variety of courses for different
Open Text and IXOS products and target groups; examples are given above.
Please check our webpage to contact your local Open Text training registrar:
http://www.opentext.com/training/contacts.html


Standard Support

Software Maintenance Program (Standard Support)

• Support Services
  - Phone, Web, E-mail
  - Only for standard products

• Software Updates

• Customer Care Program
  - e-Newsletters
  - LiveLinkUp Webinar Series
  - Champion Toolkit

• Open Text Online Accounts
  - Knowledge Center
  - Open Text Communities
  - Customer Self-Service


Premier Support Program

• Dedicated Open Text Contact
  - Program Manager
  - Technical Specialist

• Site Inventory & Health Check

• Knowledge of Customer Infrastructure & Process

• On-site Support

• Many Optional Services

For more information, contact
[email protected] (North America)
[email protected] (EMEA)

The Premier Support Program is an optional support service that is offered in addition to the
Software Maintenance Program.
It provides you with a level of support that brings together highly experienced Technical
Specialists who will work with your in-house Service Management teams to assist with these
challenges and further the achievement of your deployment goals.
Benefits:
- Optimized Customer Support Processes
- Improved Understanding of Open Text Software
- Improved Risk Management
- Improved Strategic Planning
- Improved Issue Support
- Proactive Services
All of the services delivered by the Technical Services team are developed and delivered
within the ITIL framework. All members of the Technical Services team are ITIL certified.

Program Manager
The Program Manager is your single point of contact within Open Text Customer Support,
responsible for the relationship and all communication between your Service Management
Team and Open Text Customer Support/Development. They are also responsible for the
management of the delivery of the program to which you subscribe.

Technical Specialist
A Technical Specialist is responsible for working with the Program Manager and your Service
Management Team to manage the technical scope of the program to which you subscribe.
Their responsibilities depend on the Service Catalog options selected.


Premier Support Program - Service Catalogue Options

Various Service Catalogue options can be delivered as part of your
Premier Support Program, i.e.:

• On-Site Support
• After-Hours Standby Support
• Critical On-Site Support
• Production Support
• Technical Administration
• Performance Check
• Health Check
• Capacity Planning
• Monitoring
• Security Audit
• Backup/Recovery and Failover Management
• SLA Consulting

See also http://opentext.com/services/premier-support.html

On-site Support: A Technical Specialist is scheduled to travel to your site to provide


troubleshooting or configuration assistance.
After-Hours Standby Support: A Technical Specialist is scheduled for after-hours or weekend
standby assistance to provide a safety net during major changes.
Critical On-Site Support: A Technical Specialist is made available to go on-site to your location
the next business day to provide critical issue troubleshooting and resolution.
Production Support: A Technical Specialist assists during the final stage of an installation,
upgrade or functional expansion of your environment.
Technical Administration: A Technical Specialist provides daily technical administration of your
Open Text application on a full-time on-site basis. You may also be able to receive off-site or
part-time administration if remote access is available to the application.
Performance Check: The diagnosis of existing performance issues and the proactive
identification of potential performance bottlenecks before they negatively impact end-users'
experiences with the Open Text software system.
Health Check: The proactive identification of potential problems in the configuration and usage
of the Open Text software environment.
Capacity Planning: The continuous recording, statistical analysis and then prediction of the
future system's usage of technical resources (hardware, software, and network).
Monitoring: The real time monitoring of your system so that issues that arise can be verified
and corrected quickly.
Security Audit: The proactive identification of potential security problems in the configuration
and usage of the basic Open Text software environment.
Backup/Recovery and Failover Management: The definition, implementation, and follow-on
support of a Backup/Recovery and Failover strategy.
SLA Consulting: Open Text can support you with the definition and implementation of a
Service Level Agreement (SLA).
See also http://opentext.com/services/premier-support.html

Application Support Program

• Application Support
  - In addition to Premier Support
  - Support for specific applications

• Support for Customizing
  - Hotline support
  - Critical on-site support

• Knowledge of Customizing
  - Code & Configuration


Extended Hours Support

• 24x7 Support for Critical Issues
• 24x5 Support for all Critical, Serious and Normal Issues
• "Follow the Sun" Support for Serious and Critical Issues


Open Text Global Support Hotline

• Support Center - North America
• Support Center - Germany
• Support Center - UK
• Support Center - APAC (Australia & Japan)

For more information, hours and phone numbers, see
http://support.opentext.com

Standard support is provided during business hours (see http://support.opentext.com
for details). Additionally, 24x7 support is available.


IXOS Expert Service Center (ESC)

• Portal for information on i.e. the Archive Server

• Access via: https://esc.ixos.com/

• Find there:
  - Manuals
  - Release Notes
  - Installation/upgrade guides
  - Patches
  - Troubleshooting help
    - Notes on specific problem issues
    - Troubleshooting guides

[Screenshot: ESC portal start page, describing the Livelink Enterprise Archive Server
(LEA), formerly known as the Enterprise Content Repository (ECR), as the heart of
the ECM Suite.]

• For newer product releases and all other products,
  see http://knowledge.opentext.com (Knowledge Center)

Historically, a lot of information, especially on the Archive Server, can be found in the ESC. A
migration of the ESC content to the Open Text Knowledge Center is planned.

To get your own personal account, send an email to [email protected] with the
following information:

"Request for New Support Account"

First Name:
Last Name:
Email Address:
Phone Number:
Company Name:
Additional Information (if known):
Name of a co-worker who has KC access:
End User Code or Site ID:
Product Line(s):


OpenText Knowledge Center (KC)

• Your portal for all Open Text related product information

• Access via: http://knowledge.opentext.com

Select a product family below to locate your product, and then proceed to the product family page where
you will find the latest downloads, patches, documentation, and more for your Open Text product.



Open Text Online Community

• Access via: http://communities.opentext.com

[Screenshot: Open Text Online Communities portal start page.]

Open Text Online (OTO) is a business environment that serves your need to get the most out
of your Open Text products. The communities allow you to learn about best practices, ask
questions in forums and allow customers to share their experience.


Solution Packages by Open Text Global Services

• Expand functionality of Open Text products

• Developed and offered by Open Text Global Services

• Need to be purchased separately by the customer
  - Not included in maintenance contract

• Maintenance/guarantee provided by Global Services


Overview Solution Packages

• Push documents to the Archive Cache Server according to defined rules

• Push documents to a local cache

• Auto-initialize hard disk volumes and control growth

• Clean up ISO images on EMC Centera

• Check optical media in jukeboxes

• Comparison of two Archive Servers
  (DB and binary comparison of documents)

• Verification of already archived documents concerning
  completeness and consistency

Contact your Global Services Consultant for more information on solution packages that you
are interested in.


4 Document Processing by the Archive Server

How the Archive Server manages storage and retrieval of documents


Chapter overview

How the Archive Server handles documents during ...

• archiving
• writing to media
• retrieving

Document flow in the Archive Server - Synopsis

[Diagram: Archival → Writing to storage media → Retrieval]

The chart above gives a complete overview of the possible paths a document may take as it is
processed by the Archive Server. The illustration also reveals that the whole "life cycle" of a
document is composed of three stages:
The archival of the document from its source to either the disk buffer or a hard disk pool. In
some situations, documents pass the DocumentPipeline before they enter the Archive
Server's core component, the DocumentService.
An important aspect of this is that a document is defined to be archived while it is still
held in the disk buffer, actually before it is stored on optical media or buffered hard disk. While
this can be interpreted as a potential safety gap (data is less safe in a hard-disk based buffer
than on optical disks), it is a mandatory precondition for many archiving applications requiring
access to documents immediately after their storage: the disk buffer provides this feature.
Writing the document from the disk buffer to an optical medium or a buffered hard disk
(= FS pool, available since Archive Server 9.6).
Exception: documents that have been stored in a write-through hard disk pool (= HDSK pool).
Retrieving the document from its current storage location to a client.
The chart also names the Archive Server components that perform the involved tasks:
The DocumentPipeline preprocesses certain documents before they are stored.
The DocumentService manages the buffering, optical storage, and retrieval of documents.
The storage database DS is used by the DocumentService for storing technical attributes of
stored documents; they are needed to keep track of the current state of a document and to find it
upon retrieval requests.
The StorageManager (also called STORM) manages optical media in jukeboxes and provides
write and read access to storage systems.
Details about all three document processing stages are explained on the following pages.


Document archival

The chart above illustrates the steps the Archive Server takes
when it receives a document for archival:
(A) The document is stored as a file (or a set of files) in the DocumentPipeline directory.
This does, however, not apply to all documents. Depending on the leading application
and the used storage scenario, a document may as well bypass the DocumentPipeline
and directly enter the DocumentService, where step (C) is performed.
(B) The DocumentPipeline preprocesses the document: a sequence of document tools
(also called DocTools) accesses the document one after the other and performs various
tasks. The exact sequence of steps depends again on the type of document and the
storage scenario. Examples of preprocessing actions include:
- Extracting attributes from the document's contents
- Storing retrieved attributes in an index database of the leading application
- Adding information (example: a page index for a print list)
- Converting the document (example: collecting multiple scanned document pages
into a single multi-page image file)
(C) After the DocumentPipeline has finished its work - or when it has been bypassed -
the document is handed over to the DocumentService. Depending on the archive
configuration, the document is stored in one of two places:
- If the document shall later be written to an optical medium, it is stored in a disk
buffer.
- If it shall be stored on a hard disk permanently, it is directly written to the
destination hard disk pool.
(D) The DocumentService stores status information about the received document in the
storage database; this includes the newly allocated document ID and the chosen
storage location.


Writing documents to ISO media (DVD or WORM)

The chart above illustrates the steps involved in writing documents from the disk buffer to ISO
media on DVD. (Optionally, an ISO image can also be written to a WORM medium.) This is
organized as a periodic job; whenever the job is invoked (usually once a night), it performs the
following steps:
(A) It checks the disk buffer for the amount of collected data. If it is too little to fill a medium,
nothing happens; the job finishes, waiting for its next invocation. Otherwise it continues
with the next step.
(B) As a preparation for burning, the ISO image is created in the burn buffer.
This "image" is a single large file containing the complete file system layout for the
target medium; its content is the complete set of documents selected for burning.
To optimize read performance for the target medium, the document files are sorted by
their size (large files first) before the ISO image is assembled; for this, the ISO tree
structure is created in the burn buffer.
(C) A medium is inserted into the jukebox's writer drive and the ISO image is written to it.
Immediately afterwards, the medium is checked for writing errors ("verified") by reading
it completely and comparing it with the ISO image in the burn buffer. Should writing
faults be detected, the medium is marked as "bad" and a further attempt to burn the ISO
image on another medium is made. (After the third unsuccessful attempt, the job
assumes that the writer drive is damaged, stops operation, and terminates with an error
status.)
(D) If thus configured, a second medium - i.e. the backup - is burned from the same ISO
image and verified. Original and backup are completely identical; no distinction is
possible or necessary.
(E) Depending on the configuration, either (or none) of these actions is taken:
- The copied documents are deleted from the disk buffer.
- The copied documents are moved from the disk buffer to the cache (so that they
remain accessible fast).
(F) The storage database DS is updated to reflect the new status and location of the
processed documents.
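The control flow of this write job can be sketched as follows. This is an illustrative sketch: the helper callbacks `burn` and `verify` and the data shapes are assumptions; only the size check, the large-files-first ordering, and the three-attempt rule are taken from the description above.

```python
MAX_BURN_ATTEMPTS = 3

def run_iso_write_job(buffer_files, medium_capacity, burn, verify,
                      make_backup=False):
    """Sketch of the periodic ISO write job. `buffer_files` is a list of
    (name, size); `burn` writes an image to a fresh medium and returns
    it; `verify` re-reads the medium and compares it with the image."""
    # (A) Enough data collected to fill a medium?
    if sum(size for _, size in buffer_files) < medium_capacity:
        return None  # too little data; wait for the next invocation
    # (B) Build the ISO image; large files first for read performance.
    ordered = sorted(buffer_files, key=lambda f: f[1], reverse=True)
    iso_image = [name for name, _ in ordered]
    # (C) Burn and verify, giving up after three failed attempts.
    for attempt in range(MAX_BURN_ATTEMPTS):
        medium = burn(iso_image)
        if verify(medium, iso_image):
            break  # burned medium matches the image in the burn buffer
        # otherwise the medium is marked "bad"; retry on a fresh one
    else:
        raise RuntimeError("writer drive assumed damaged")
    # (D) Optionally burn an identical backup medium.
    if make_backup:
        backup = burn(iso_image)
        verify(backup, iso_image)
    return iso_image
```

Steps (E) and (F), emptying or caching the disk buffer and updating the DS database, would follow the successful verification and are omitted here.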


Writing documents to ISO media on a hard-disk based storage system (HD-WO)

The chart above illustrates the steps involved in writing documents from the disk buffer to ISO
media on Le. a hard-disk based storage system. This is organized as a periodic job; whenever
the job is invoked (usually once a night), it performs the following steps:
(A) It checks the disk buffer for the amount of collected data. If it is too little to fill a medium,
nothing happens; the job finishes, waiting for its next invocation. Otherwise it continues
with the next step.
(B) As a preparation for ISO media burning, the ISO image is created in the burn buffer.
This "image" is a single large file containing the complete file system layout for the
target media; its contents is the complete set of documents selected for burning.
To optimize read performance for the target media, the document files are sorted by
their size (large files first) before the ISO image is assembled; for this, the ISO tree
structure is created in the burn buffer.
(C) The ISO image is transferred to the storage system (conventions may vary depending
on storage system).
Immediately afterwards, the medium is checked for writing errors ("verified") by reading
it completely and comparing it with the ISO image in the burn buffer. Should writing
faults be detected, the medium is marked as "bad" and a further attempt to burn the ISO
image on another medium is made. (After the third unsuccessful attempt, the job stops
operation, and terminates with an error status.)
Theoretically a second medium - i. e. the backup - can be written from the same ISO
image and verified. However, using backup usually won't apply to storage systems
using HD-WO method. Storage systems normally have their own backup mechanisms.
(D) Depending on the configuration, either (or none) of these actions is taken:
- The copied documents are deleted from the disk buffer.
- The copied documents are moved from the disk buffer to the cache (so that they
remain quickly accessible)
Generally, storage systems have better access times than DVD or WORM
jukeboxes. Therefore, caching the disk buffer in a cache partition on the
Archive Server is usually less critical.
(E) The storage database DS is updated to reflect the new status and location of the
processed documents.
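Steps (A) through (E) above can be summarized in a small sketch. This is purely illustrative pseudologic in Python, not an Archive Server interface; the capacity constant, the `storage` object and its methods are all assumptions made for the example.

```python
# Hypothetical sketch of the nightly ISO write job (steps A-E above).
# MEDIA_CAPACITY and the "storage"/"database" objects are illustrative
# assumptions, not actual Archive Server interfaces.

MEDIA_CAPACITY = 4_700_000_000  # e.g. one DVD, in bytes (assumed)
MAX_BURN_ATTEMPTS = 3           # per the text: job stops after the 3rd failure

def iso_write_job(disk_buffer, storage, database):
    # (A) Has enough data been collected to fill a medium?
    total = sum(size for _, size in disk_buffer)
    if total < MEDIA_CAPACITY:
        return "waiting"          # job finishes, waits for next invocation

    # (B) Build the ISO image in the burn buffer, large files first
    selection = sorted(disk_buffer, key=lambda d: d[1], reverse=True)
    iso_image = [doc for doc, _ in selection]

    # (C) Transfer the image to the medium and verify it by reading back
    for _attempt in range(MAX_BURN_ATTEMPTS):
        medium = storage.burn(iso_image)
        if storage.read_back(medium) == iso_image:
            break
        storage.mark_bad(medium)  # writing faults: try another medium
    else:
        return "error"            # third attempt failed: terminate with error

    # (D) Here: delete copied documents from the disk buffer
    #     (moving them to the cache instead is the other configurable option)
    disk_buffer.clear()

    # (E) Update the storage database DS with the new location
    for doc in iso_image:
        database[doc] = medium
    return "done"
```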
4-6 710
QPENTEXT

Writing documents to IXW media OPEN TEXT


(WORM or UDO)

Document Processing by the Archive Server Slide 7

The chart above illustrates the steps involved in writing documents from the disk buffer to IXW media.
As opposed to ISO media writing (detailed on the previous page), there are three separate periodic
jobs involved. They are invoked by the job schedule independently from each other, but the
corresponding actions on a specific document are always carried out in the order explained here.
(1) IXW write job. The write job copies document files from the disk buffer to the target IXW media
one by one. (Unlike writing to ISO media, no ISO image preparation is involved.) This involves
the following subtasks for each written file:
- The file is copied temporarily to another hard disk area.
- The file is copied from there to the IXW media.
- The file is read back from the IXW media and compared with the temporarily stored file
instance in order to ensure no writing errors have occurred. (If writing has failed, another
attempt is made to write the file.)
- The WORM filesystem database is updated so that it now knows the written file.
- The storage database DS is informed that the file is now resident on the IXW media.
The write job is usually scheduled to run rather often, e.g. every 30 minutes.
(2) Backup job. Since writing to IXW media is comparatively slow, the IXW media write job never
copies documents to the IXW media backup itself; this task is left to the backup job that is
normally executed once a night. This job copies newly written data from all IXW media to their
corresponding backups; it does this on filesystem block level rather than on file level, which
makes the process faster. After this, original and backup IXW media have identical contents.
(3) Purge buffer job. This job is not really part of IXW media writing, but it must be considered here
since no other instance deletes written documents from the disk buffer. The purge job scans the
disk buffer for documents and deletes those that are already written to optical media. It
may, however, decide to keep even such "old" documents depending on given purging rules; for
example, a rule may dictate that documents have to be retained for a certain number of days.
When a document has been found that is subject to deletion according to the purging rules, the
following steps are performed:
1. It is checked that the document is really present on the optical medium it is said to be, to
prevent deleting documents that are not really stored anywhere else (e. g. due to a
medium damage that has happened in the meantime).
2. Optionally (depending on purging rules), the document is copied to the cache.
3. The document is deleted from the disk buffer.
4. The storage database DS is informed about the deletion.
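The per-document decision of the purge buffer job (retention rule, then steps 1-4) can be sketched as follows. This is an assumed, simplified model: `retention_days`, the dict-based "media", and the `cache_on_purge` flag are illustrative names, not real configuration keys.

```python
# Illustrative sketch of the purge buffer job's per-document logic
# (steps 1-4 above). All names and structures are assumptions.

def purge_buffer(disk_buffer, worm, cache, database, retention_days=0, today=0):
    for doc in list(disk_buffer):
        # Purging rule: retain "young" documents for a number of days
        if today - doc["written_on"] < retention_days:
            continue
        # 1. Verify the document is really present on the optical medium
        if doc["id"] not in worm:
            continue      # never delete a document that exists nowhere else
        # 2. Optionally (depending on purging rules) copy it to the cache
        if doc.get("cache_on_purge"):
            cache[doc["id"]] = worm[doc["id"]]
        # 3. Delete the document from the disk buffer
        disk_buffer.remove(doc)
        # 4. Inform the storage database DS about the deletion
        database[doc["id"]] = "purged"
```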

Document Processing by the Archive Server 4-7


Free space on a IXW volume
QPENTEXT

II IXW media are written incrementally


II An IXW partition is considered full even before the physical limit
is reached
- 2% of space is left empty by default (Archive Server ≤ 9.5: 10%)
- Remaining space is reserved for later use, i.e. for changes to notes/
annotations of documents resident on the same volume
- Helps to keep all components of a document together on a single volume
- Avoids collecting a retrieved document from multiple volumes

Physical storage media capacity

Document Processing by the Archive Server Slide 8

The percentage of space to be left empty is set globally during Archive Server installation.
2% is just a default suggestion of the installation routine since Archive Server ≥ 9.6.
In Archive Server ≤ 9.5, this value was 10%.

If this global setting is unsuitable for your purpose, it can be altered at any time:
Administration Client → Server Configuration:
Administration Server (ADMS)
└ Default Values for Pools
Unix: /usr/ixos-archive/config/setup/ADMS.Setup, parameter
ADMS_WM_PART_PERCENT_FREE
Windows: Registry: HKEY_LOCAL_MACHINE/SOFTWARE/IXOS/IXOS_ARCHIVE/
ADMS/ADMS_WM_PART_PERCENT_FREE
Once altered, the change becomes effective after the next Archive Server restart. However,
the change affects only media that are initialized henceforth. The space reservation for
media already in use can be altered using the dsClient utility; see ESC document for details:
https://esc.ixos.com/0914004631-652
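The "leave N% empty" rule above amounts to a simple threshold check. A minimal sketch, using the 2% current default and the 10% pre-9.6 default from the text; the function itself is illustrative, not an Archive Server API:

```python
# Minimal illustration of the "leave N% empty" rule for IXW partitions.
# percent_free defaults to the 2% mentioned in the text.

def partition_is_full(capacity_bytes, used_bytes, percent_free=2):
    reserved = capacity_bytes * percent_free / 100.0
    # The partition counts as "full" once only the reserved space
    # (kept for later note/annotation versions) remains.
    return used_bytes >= capacity_bytes - reserved
```

For example, with a 1000-byte toy partition and the old 10% default, the partition is already "full" at 900 bytes used.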

4-8 710
QPENTEXT

Writing documents to FS pool or VI pool


(hard disk)

Document Processing by the Archive Server Slide 9

The chart above illustrates the steps involved in writing documents from the disk buffer to an FS pool.
Both HDSK and FS pools write to hard disk as final destination. As opposed to the HDSK pool, however,
the FS pool utilizes a disk buffer and has its own write job. This provides certain advantages, esp.
when the FS pool is used in combination with certain storage systems, e.g. NetApp filers.
The FS pool is available starting with Archive Server ≥ 9.6 and replaces the HDSK pool. HDSK pools can
usually be migrated to FS pools fairly easily, and it is recommended that HDSK pools are only used
for test purposes in the future.
The periodic jobs involved in writing to FS pools are the write job and the purge job. They are
invoked by the job schedule independently from each other, but the corresponding actions on a
specific document are always carried out in the order explained here.
(1) FS write job. The write job copies document files from the disk buffer to the target FS pool. The
storage database DS is informed that the file is now resident in the disk location designated for the
FS pool. The write job is usually scheduled to run rather often, e.g. every 30 minutes.
(2) Purge buffer job. This job is not really part of the writing, but it must be considered here since
no other instance deletes written documents from the disk buffer. The purge job scans the disk
buffer for documents and deletes those that are already written to their final location. It may,
however, decide to keep even such "old" documents depending on given purging rules; for
example, a rule may dictate that documents have to be retained for a certain number of days.
When a document has been found that is subject to deletion according to the purging rules, the
following steps are performed:
1. It is checked that the document is really present in the final location it is said to be, to
prevent deleting documents that are not really stored anywhere else (e.g. due to
damage that has happened in the meantime).
2. Optionally (depending on purging rules), the document is copied to the cache.
3. The document is deleted from the disk buffer.
4. The storage database DS is informed about the deletion.

Document Processing by the Archive Server 4-9


OPEN TEXT

Providing a document for read access


OPEN TEXT

Document Processing by the Archive Server Slide 10

The chart above illustrates how the Archive Server provides a document for
retrieval by a client. Since a document may be resident in one (or more than one at the
same time) of several locations, a reasonable, well-defined order of precedence is obeyed
for accessing the document:
1. First the storage database is queried whether the document is available either in a disk
buffer or in a hard disk pool. In either case, it is taken from there and transmitted to the
client.
2. If the document is not present in either a disk buffer or a hard disk pool (HDSK or
FS), it is checked whether the document is present in the cache. If so, it is taken from
there and transmitted to the client.
3. Only if the document cannot be taken from any hard disk location (cases 1 and 2) it is
read from an optical medium; this is the least attractive situation because reading from
a jukebox is much slower than from hard disk.
Before the document is actually transmitted to the client, it is first copied to the cache
so that subsequent read requests can be fulfilled from there. As a matter of optimization
for very large documents (like print lists), a document is cached in fragments of 64 kB
size; only those parts of the document are read, cached, and transmitted that are
actually requested by the client. As the user browses through the document in the
Archive Windows Viewer, the client automatically requests the desired parts from the
server, step by step.
If the client application requesting the document is not able to load the document
fragment-wise, i. e. it insists on receiving the complete document immediately, then the
cache will receive the whole document as well.
When the cache becomes full, it flushes old documents as needed to make room for
newly requested ones (FIFO or LRU mechanism); unlike for the disk buffer, no
periodic job is needed for cache reorganization.
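The three-step precedence above (disk buffer / hard disk pool, then cache, then optical medium with caching on the way) can be sketched compactly. The dict-like containers are stand-ins for the real storage locations, assumed purely for illustration:

```python
# Sketch of the read precedence described above. All container arguments
# are assumed dict-like stand-ins for the real storage locations.

def read_document(doc_id, disk_buffer, hd_pool, cache, optical):
    # 1. Disk buffer or hard disk pool: fastest, serve directly
    for location in (disk_buffer, hd_pool):
        if doc_id in location:
            return location[doc_id], "disk"
    # 2. Cache hit: also served from hard disk
    if doc_id in cache:
        return cache[doc_id], "cache"
    # 3. Last resort: read from the optical medium, caching it on the way
    data = optical[doc_id]
    cache[doc_id] = data      # subsequent reads then fall under case 2
    return data, "optical"
```

Note how the second read of the same document is answered from the cache, which is exactly the behavior the text describes.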

4-10 710
QPENTEXT

OPEN TEXT

5 Document Structure on Storage Media


How the Archive Server arranges stored documents

Document Structure on Storage Media 5-1


Chapter overview
OPEN TEXT

11II Document structure
- Documents and components
- Directories and files

II Tracing documents with dsClient

II Cache structure

Document Structure on Storage Media Slide 2

5-2 710
QPENTEXT

Inner structure of documents

[Figure: a document and its components]

III Documents are identified by
unique document IDs (also
known as document string)

III Documents are composed of


components:
Document body, e. g.
collection of scanned images
single multi-page image
single PDF, DOC, ... file
"Helper" components
(e. g. indexes into the document)
User comments
* Notes
* Annotations
* OLE annotations
Documents
(type examples)

Document Structure on Storage Media Slide 3

Every document stored on an Archive Server is composed of components.


These can be classified into these categories:
The "document body" itself, e. g. a single PDF file.
Regarding scanned documents, the body may in turn consist of multiple components, i.e. the
sequence of scanned document pages, each of which is stored as a separate image file. (Binding
all scanned pages together into a single multi-page file is also possible; this is done by the
scanning application, not by the Archive Server.)
Additional "helper" components of the document. Examples:
- Collections of application-related attribute sets (for retrieval database recovery)
- Page index for SAP print list (for efficient page-wise viewing)
- Attribute search index for SAP print list (for efficient table search)
These components are not normally visible as part of the document; they are used by the system
components behind the scenes for certain purposes.
Comments that users create and store along with documents. These include:
- Notes
- Annotations, which in turn are subdivided into:
* "Normal" annotations, i.e. vector-graphic objects. All such annotations of a
document are collected in a single annotation component.
* OLE annotations, i.e. OLE objects embedded into the document. Since these
normally contain quite much data, the Document Service
stores every OLE annotation as a separate component.
Notes and annotations are alterable (notes: extensible only) by the users. To manage this on
write-once media, document components have a version attribute. Every time a user adds a
note, all notes together are stored once again as the notes component with the next available
version number; every new version supersedes all older versions. If physically possible, older
versions are even deleted as a new one is stored in order to regain storage space. The same
applies to annotations (except for OLE annotations).

Document Structure on Storage Media 5-3


OPEN TEXT

Directory structure on storage media


OPEN TEXT

[Figure: example directory tree with three 1-byte hex directory levels
(e.g. 43, 61, 53) and document directories (e.g. 00000065 ... 00000068)]

II "Distribution" layers
- Avoid storing too many items in a single directory
(limitation or performance loss on operating system level)
- "Random" structure (no implied meaning)

II Same structure on all storage media
- Disk buffer
- Hard disk pool
- Optical disks
Exceptions:
- Cache (but similar)
- Document Pipeline

II 3 "distribution" layers: directory names are 1-byte hex numbers
II 4th layer: directory = document; names are 4-byte hex numbers

II Path to document is retained throughout the processing flow
- Relative to medium root directory

Document Structure on Storage Media Slide 4
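The four-layer path scheme above can be sketched as a small helper. Note the caveat: the slide calls the distribution layers "random", so deriving them from the document ID number, as done here, is purely an assumption to make the example deterministic; the separator and real assignment are the server's business.

```python
# Hedged sketch of the four-layer document path: three "distribution"
# directories with 1-byte hex names, then a document directory with a
# 4-byte (8-digit) hex name. Deriving the layers from doc_id_no is an
# illustrative assumption, not the server's actual algorithm.

def document_path(doc_id_no):
    layers = [(doc_id_no >> shift) & 0xFF for shift in (16, 8, 0)]
    doc_dir = "%08X" % doc_id_no          # 4th layer: directory = document
    return "/".join("%02X" % b for b in layers) + "/" + doc_dir
```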

5-4 710
QPENTEXT

Files of documents

II Files = document components
- Number and types depend on
document type and scenario
- Prefix .rd:
file marked as read-only
- Suffix ;n: component version number
* For revisable components like
notes, annotations

II Additional "administrative" files:
- ATTRIB.ATR
* Accompanies every document
* Contains all technical document
attributes (e.g. document ID)
* Needed for database imports
- job.<ARCHIVE>_<POOL>
* Only in disk buffer
* Indicates that document (or part of
it) is not yet written to target media
* Needed for disk buffer recovery

Document Structure on Storage Media Slide 5

Every document component is stored as a file in the document's directory (discussed on the
previous page).

Before copying documents to final media, the write job marks components as read-only with
the prefix .rd. This ensures that the files are not changed anymore, e.g. in the disk buffer.

Document Structure on Storage Media 5-5


System attributes in the ATTRIB.ATR
OPEN TEXT

.A :.A=archive-id Archive ID, name of the archive


.D :.D=creationdate Creation date of the document
.L :.L=doc-id Doc ID, name of the document
.R :.R=retention Retention of document
.TS :.TS=timestamp Date and time at which ATTRIB.ATR was written
.Y :.Y=docType Document type within the DS
.C compsname :.C= number Status of compression and encryption
.CRC32 compsname :.CRC32=number Checksum of a component
.CL-SIG compsname :.CL-SIG=signature Cryptographic checksum and signature with the
date of the component
.D compsname :.D=creationdate Creation date of the component
.F Compsname: .F=flags Flags for each component
.L Compsname: .L=compname Compsname-compname assignment
.P Compsname: .P=protvers Protocol version of the component
.T Compname: .T=type (Mime-)Type of the component
.SIG Compsname: .SIG=signature Cryptographic checksum and signature with the
date of the component

Document Structure on Storage Media Slide 6

.C is the sum of the following values:

• 0x01: already compressed (binary 0001)
• 0x02: don't compress (binary 0010)
• 0x04: already encrypted (binary 0100)
• 0x08: don't encrypt (binary 1000)
The typical value .C=10 means: sum = binary 1010, i.e. "don't compress" + "don't encrypt"
see also https://esc.ixos.com/0941678195-841
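Because .C is a sum of bit flags, it can be decoded mechanically. A small decoder, using only the flag values listed above (the function itself is illustrative, not part of any Archive Server tooling):

```python
# Decoder for the .C attribute described above: the value is the sum of
# the listed bit flags.

C_FLAGS = {
    0x01: "already compressed",
    0x02: "don't compress",
    0x04: "already encrypted",
    0x08: "don't encrypt",
}

def decode_c(value):
    # Return the names of all flags whose bit is set in the value
    return [name for bit, name in sorted(C_FLAGS.items()) if value & bit]
```

Applied to the typical value from the text, `decode_c(10)` yields the "don't compress" and "don't encrypt" flags.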

5-6 710
QPENTEXT

Tracing documents with dsClient (1)


QPENTEXT

PROMPT> dsClient localhost dsadmin [password]     (Identify by document ID)


_> dinfo aaajclfhitjt32mqp3fmus5qglvzk
docIdNo=240 path=88\95\85\00000053 noOfComps=4     (relative document path,
creationDate=...                                    identical on all media)
modificationDate=Mon Jul 28 12:42:05 2008
status=0x10( DB ), archive=W1
rights=3f ( R W C N D A )

component type volume version

1.pg image/tiff Diskbuf1 ...     (this component is stored twice:
                                  in disk buffer and on WORM)
Document Structure on Storage Media Slide 7

The Document Service command line tool dsClient is useful for
retrieving complete information about a certain stored document. Starting from the document
ID - that you must know in advance - you can use the dinfo command to inform yourself
about:
The logical archive the document is stored in
Time of archiving and of last modification (i.e. entry of notes or annotations)
Components belonging to the document
Current storage location, composed of
- the path (valid for the whole document, on all storage media)
- name of the current storage media (for each component separately)
To really know the storage location(s) of a document component in the disk buffer, you have
to go one step further: The logical media names given in dinfo's component list have to be
mapped to true storage locations in the Document Service's filesystem.
This is done with the volInfo command as illustrated above. The BaseDir attribute of the
named volume together with the document path form the complete storage path.

Calling dsClient, you may enter the user's password directly on the command line, as
illustrated above; however, this is somewhat insecure because the password is then visible
both on the command console display and in the computer's process list. It is also possible
(and more secure) to omit the password on the command line, in which case dsClient will
prompt you for the password upon startup; the typed password is not displayed on the screen
then.
To exit from dsClient, use the end command.
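Combining the volume's BaseDir with the relative document path, as described above, is straightforward string work. A minimal sketch (path separator handling is simplified; the backslash-separated form follows the dinfo output example, and the base directory used below is hypothetical):

```python
# Sketch: compose a component's full storage location from a volume's
# BaseDir (from volInfo) and the relative document path (from dinfo).
# Separator handling is simplified for illustration.

def storage_location(base_dir, doc_path, component_file):
    parts = doc_path.split("\\")          # e.g. "88\\95\\85\\00000053"
    return "/".join([base_dir.rstrip("/")] + parts + [component_file])
```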

Document Structure on Storage Media 5-7


Tracing documents with dsClient (2):
OPEN TEXT
document component details

view of connection between the component


names in the ATTRIB.ATR:

3F28EOCO.002:.L=anno.ixos;1

Document Structure on Storage Media Slide 8

Normally, the logical name of a document component and the name of the corresponding file
are identical. However, this does not hold for component names longer than 8+3 characters:
To be conformant with the older ISO 9660 filesystem standard for CDs, the Document Service
uses artificial 8.3 file names for such components.
In such a situation, it is not always obvious which document component corresponds to which
stored file. To retrieve the mapping, use dsClient's cinfo command as illustrated above.

5-8 710
QPENTEXT

Document structure in the cache


OPEN TEXT

Same 4 directory levels


as on storage media

Directories represent
document components

Files are cached in


chunks of 64 kB, as
requested by client

Document Structure on Storage Media Slide 9

The directory and file structure in the Document Service's cache is slightly
different from the one used on the storage media:
The three-layer directory structure down to the document directories is the same;
documents retain their specific path even in the cache.
Document components - normally stored as files - are represented as directories in
the cache. The name of such a component directory equals the name of the component
file on the storage media.
Within a component directory, the component contents is stored as a set of enumerated
files: Each of those files contains a chunk of the component file with size 64 kB (except
for the last one, which may be smaller). This structure enables chunk-wise caching of
large documents - only those fragments of a document are cached which are actually
requested by a client. This speeds up caching and prevents huge documents from
flushing many smaller documents from the cache at once.
However, a document is always cached entirely (but still as a set of chunk files) in these
situations:
When it is cached as part of the media write or buffer purge action
When it is requested by a leading application that does not support chunk-wise
document retrieval
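The 64 kB chunk addressing above boils down to integer arithmetic: which enumerated chunk files cover a requested byte range. A sketch, with the zero-padded chunk file naming assumed for illustration (the text only says the files are "enumerated"):

```python
# Sketch of 64 kB chunk addressing in the cache: which enumerated chunk
# files cover a requested byte range of a component. The zero-padded
# naming scheme is an illustrative assumption.

CHUNK = 64 * 1024  # chunk size from the text

def chunks_for_range(offset, length):
    first = offset // CHUNK
    last = (offset + length - 1) // CHUNK
    return ["%08d" % i for i in range(first, last + 1)]
```

A request that straddles a chunk boundary (e.g. 2 bytes starting at offset 65535) touches two chunk files, which is exactly why only the requested fragments need to be fetched and cached.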

Document Structure on Storage Media 5-9


OPEN TEXT

Exercise: Examine a document on storage


OPEN TEXT
media

II Retrieve document ID of sample


document
II Find its storage location using
dsClient

II Examine the document files


found in the disk buffer
II Add a note to the document and
examine the result
- in dsClient
- in disk buffer partition

Document Structure on Storage Media Slide 10

5-10 710
QPENTEXT

OPENTEXT

6 The IXW File system - WORM File system

How the Document Structure on Storage Media


manages data on IXW media

The IXW File system - WORM File system 6-1


OPEN TEXT

III IXW file system structure

Iii IXW file system database

III IXW finalization

The IXW File system - WORM File system Slide 2

6-2 710
OPEN TEXT

IXW: The Open Text/IXOS file system for UDO
and WORM media

iii Open Text/IXOS uses a proprietary file system for UDO and
WORM media
- No industry standard available

iii Requirements:
- Incremental writing
- Space efficiency
- Robustness against "bad blocks"
- Recoverability from write errors
Fast read access

II The Open Text/IXOS solution:


- File structure information in separate WORM file system database

The IXW File system - WORM File system Slide 3

The IXW File system - WORM File system 6-3


How IXW media are written
OPENTEXT

[Figure: layout of an IXW volume - fixed division (volume header, FCB area)
and variable division (data area)]

The IXW File system - WORM File system Slide 4

From the point of view of the file system structure, an IXW media can be regarded as a sequence of
blocks with fixed size (normally 1, 2, or 4 kB). The illustration above shows how the Archive Server
manages the data written to an IXW media. The storage space of every IXW media is subdivided into
these areas:
Free space is left at the beginning of the IXW media. The exact amount differs, depending on
the used STORM version; it is never more than 1 MB. (For more details, see
https://esc.ixos.com/0984066998-363)
Attention! This is available from version 4.1.
The volume header (also called VCB area) contains information about the volume itself, such
as the volume label and the time of initialization. The information is packed into a volume
control block (VCB). Since the status of the WORM may change later (e.g. due to a promotion
of a backup to an original), additional VCBs can be appended later on, each one superseding the
previous one. The space reserved for VCBs is 16 kB.
The structure (or FCB) area contains the file control blocks (FCBs). Every FCB contains status
information about a written data file, including a pointer to the location of the file on the IXW
media itself.
The last area on the IXW media contains the application data, i.e. the actual document files.
Between the FCB area and the data area, a certain amount of free space is left for later storing
the finalization data (see later in this chapter).
Whenever a data file is to be written to an IXW media, these items are actually stored:
The FCB with file attributes and the pointer to the file storage location (block number).
The file itself, appended to already written data at the end of the IXW media.
An inode is created in the WORM filesystem database. This inode basically mirrors the file
attributes already stored in the corresponding FCB.
While it does not add information to the FCBs, the WORM filesystem database is essential for fast
access to IXW media data. The FCBs are always stored in chronological order of writing and therefore
cannot be searched efficiently.
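The incremental write scheme just described (one FCB per file, data appended at the end) can be modeled with a few lines. This is a toy model under stated assumptions: the block size, the dict-based FCB, and the class itself are illustrative, not the on-media binary format.

```python
# Toy model of incremental IXW writing: each stored file gets an FCB
# (attributes plus a block pointer) and the data is appended at the end
# of the used area. Structures are illustrative, not the on-media format.

BLOCK = 2048  # bytes; real media use 1, 2 or 4 kB blocks

class IxwVolume:
    def __init__(self):
        self.fcbs = []        # FCB area, in chronological write order
        self.next_block = 0   # start of free space in the data area

    def write_file(self, name, data):
        fcb = {"name": name, "size": len(data), "block": self.next_block}
        self.fcbs.append(fcb)                       # store the FCB
        self.next_block += -(-len(data) // BLOCK)   # append data, round up
        return fcb   # the inode in the WORM FS database mirrors this FCB
```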

6-4 710
OPEN TEXT

WORM file system database


OPEN TEXT

[Figure: WORM file system database files with an example lookup]

II Mirror WORM file system structure on hard disks for fast access
II Also called the inode cache
II Maps file paths to file attributes and location
- File path, e.g. FM001/1S/97/00012D/DATA.;1
- File information: creation date, location (block offset) on WORM, size
II Stored as files (not in RDBMS)
- Indexes into inode data
- The structure data itself
II Fixed size, determined during Archive Server installation
- Size and filling level displayed in Archive Server Monitor
or with command line tool: cdadm read fs%worm%cache
II Size and location of files configured in STORM
config file server.cfg

The role and significance of the WORM file system database (= inode cache) are the same
as those of the well-known FAT (file allocation table) of MS-DOS file systems: It takes a file path as
input and maps it onto the file's physical storage location (necessary to read the file) as well as
other status information.
The WORM file system database is no database from the software point of view; no RDBMS
is used to store the information. Instead, the data is stored in a variable number of different
files (as illustrated above) that STORM uses as a database in a logical sense.
STORM's central configuration file, <IXOS_ROOT>/config/storm/server.cfg,
determines the number, sizes, and location of the data files. The information is coded in the
ixworm section that looks like this:
ixworm {
  numInodes {100000}      (max. total number of inodes)
  ixwhashdir {
    files {file1 file2}   (list of configured files of this type)
    file1 {
      path {W:/hashdir1}  (path of file1)
      size {25}           (max. size of file1 in MB)
    }
  }
  ixwhashfile { }         (same structure as ixwhashdir)
  ixwhashname { }
  ixwinodes { }
}

The total number of inodes is defined at the time the Archive Server is configured. If it is later
necessary to increase this value, please contact Archive Server Support.

The IXW File system - WORM File system 6-5


IXW media finalization (1)
OPEN TEXT

WORM filesystem database


(inode cache)

Finalization =
moving structure information
to the IXW media

The IXW File system - WORM File system Slide 6

When an IXW media has been filled up with document files, it may become finalized. The chart
above illustrates how this is done:
1. The complete inode data describing the IXW media contents is copied from the WORM
filesystem database to the IXW media itself. The resulting ISO structure is a complete,
searchable structure description of the IXW media contents. It is written into the
remaining free space between the FCB and data areas; this space is kept free explicitly
for exactly that purpose.
2. The inodes of the IXW media volume are deleted from the WORM filesystem database.
3. A primary volume descriptor (PVD) is written at the beginning of the WORM volume,
pointing to the block where the ISO structure can be entered for searching.
The ISO structure and the PVD turn the IXW media into a read-only ISO medium which can be
accessed efficiently without the WORM FS database.
See next page for a discussion of further consequences of finalization.
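The three finalization steps can be expressed against the toy structures used in this chapter. The dict-based volume and cache, and the path-prefix convention, are illustrative assumptions; the real ISO structure and PVD are binary on-media formats:

```python
# Sketch of the three finalization steps: copy the volume's inode data
# onto the medium as the ISO structure, drop those inodes from the WORM
# FS database, then write the PVD. Structures are illustrative only.

def finalize(volume, inode_cache):
    # 1. Copy the volume's inodes onto the medium as the ISO structure
    volume["iso_structure"] = {
        p: i for p, i in inode_cache.items() if p.startswith(volume["name"])
    }
    # 2. Delete those inodes from the WORM filesystem database
    for path in list(volume["iso_structure"]):
        del inode_cache[path]
    # 3. Write the PVD pointing at the ISO structure; medium is now a
    #    read-only ISO medium, accessible without the WORM FS database
    volume["pvd"] = "points-to-iso-structure"
    volume["read_only"] = True
```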

6-6 710
OPEN TEXT

IXW media finalization (2): Properties


OPENTEXT

9 Finalized IXW media accessible without WORM FS database


- Objective: keep WORM filesystem database small

9 IXW media becomes read-only ISO 9660 filesystem


9 Backup IXW media is finalized automatically
- As soon as finalized original is backed up
- After this: no more distinction between original and backup

9 Finalization can be done during normal server operation


- Does not block other access to the required resources (media and drive)

III Finalization may fail in certain cases


- For example, if bad blocks in the reserved space prevent writing
- Old media initialized with ixwd can generally not be finalized
(migrate to new IXW medias first)
In these cases:
- WORM filesystem database must keep structure data "forever"
- Nevertheless, IXW partition remains read-only

The IXW File system - WORM File system Slide 7

For customers with Unix-based Archive Server installations upgraded from an original release
≤ 3.5, there is an important restriction: All WORM partitions that were initialized by IXOS's old
jukebox service ixwd cannot be finalized at all. In order to benefit from finalization, those
WORMs must first be copied to new ones, which can then be finalized afterwards.
The reason is the missing PVD (primary volume descriptor) field on those WORMs (see
also the picture on slide 6 of this chapter).

The IXW File system - WORM File system 6-7


IXW media and the ISO 9660 filesystem
standard

II Finalization converts an IXW media to an ISO 9660 filesystem

II ISO 9660 limitation: no more than 65'000 directories
II If a finalized IXW media contains more than 65'000 directories:
- WORM becomes nearly, but not fully, ISO 9660 compliant
- No problem for file access by Archive Server

II Archive Server obeys this ISO limitation when writing to IXW
media
- Leads to 100% ISO 9660-compliant WORM media
- WORMs may not become full (esp. when storing mainly very small files)
- Default setting
- To have the Archive Server fill up IXW media even with small files,
configure it to ignore the limitation
• See ESC

The IXW File system - WORM File system Slide 8

The fact that - by default - an Archive Server stops adding directories (= documents) to an
IXW media partition when the 65'000 directories limit for ISO 9660 media is reached may lead
to significant space waste on modern, large IXW media - especially where mainly small
documents are stored. You should therefore consider switching this limit off; there is no
negative impact concerning the Archive Server's own use of media exceeding this limit.
The option for obeying the ISO directory number limit can be maintained in the Server
Configuration page of the Archive Server Administration, branch:
Storage Manager (STORM)
└ Configuration STORM (file server.cfg)
└ WORM Filesystem, entry Accept also non-ISO9660 format.

Find more information in the related ESC article:

https://esc.ixos.com/0993467663-736

6-8 710
OPEN TEXT

QPENTEXT

7 Configuring Logical Archives


Separating documents physically

Configuring Logical Archives 7-1


Chapter guide
OPEN TEXT

Iii Logical archive fundamentals
- Definition
- Pros and cons for multiple logical archives

Iii Technical background


- Pools
- Devices
Iii Configuration
- Archives
- Pools
Document processing options

Configuring Logical Archives Slide 2

7-2 710
QPENTEXT

Definition: logical archives

III A subdivision of an Archive Server's storage resources


- Every storage medium is assigned uniquely to one logical archive
- Every stored document resides in one specific logical archive

III Visible to the leading application


- Leading application controls distribution of documents to logical archives

III Used to handle classes of documents differently, e. g.


- different retention periods
- different requirements for fast retrievability

III Characterized by a set of technical properties


- Media type
(DVD, WORM, UDO, HD, HD-WO)
- Writing schedule
- Caching behavior
- Document processing options
- Security settings

Configuring Logical Archives Slide 3

Configuring Logical Archives 7-3


Why use more than one logical archive?
OPEN TEXT


Configuring Logical Archives Slide 4

For maintaining background forms using the Livelink for SAP or Livelink for UAC add-on
Forms Management, a dedicated logical hard disk archive is strongly recommended.
Furthermore, it is recommended to use separate logical archives for document archiving and
for data archiving purposes.

7-4 710
OPEN TEXT

Restrictions for the number
of logical archives

III Archive IDs must be unique


- Each logical archive ID must occur on only one Archive Server in your
Open Text/IXOS storage environment
- Leading application may impose similar restriction
Example SAP: One logical archive to be used by a single SAP R/3 only

III On one Archive Server: Avoid having too many


- For ISO (= write-at-once) storage media: Hard disk space for disk buffer
must be provided proportional to the number of logical archives
- The incoming archive traffic must be sufficient to fill an ISO media
(DVD, WORM or HD-WO) in time
- The needed number of optical media must fit into your jukebox
(esp. UDO and WORM)

Configuring Logical Archives Slide 5

Configuring Logical Archives 7-5



Chapter guide
OPEN TEXT

III Logical archive fundamentals


- Definition
- Pros and cons for multiple logical archives

III Technical background


- Pools
- Devices

III Configuration
- Archives
- Pools
- Document processing options

Configuring Logical Archives Slide 6

7-6 710

Inner structure of a logical archive

[Diagram: leading application and Archive Server, both defining the same logical archive]
Configuring Logical Archives Slide 7

Each logical archive must be defined on both the leading application and the Archive Server
with the same name ('A1' is just an example in the chart above); this is the foundation for
the storage dynamics controlled by the leading application and performed by the Archive
Server.
On the Archive Server, a logical archive normally has a single media pool; in practice, it is
therefore not necessary to strictly distinguish between the archive and its pool. However,
certain exceptions exist where a logical archive may have more than one pool:
- If certain components of documents - specifically comments added by users, i. e.
  notes and annotations - shall be stored on different media than the original
  documents. The pools must then have different application type properties.
  In practice, the most useful combination is:
  - Storing original documents and notes on optical media
  - Storing annotations on hard disks
  This setup saves space on optical media. Since annotations are alterable at any time, it
  is normally not necessary to store them on read-only media.
- If a media migration shall be performed for that logical archive. One pool must then
  have application type "Migration".
- When using the pooling feature in former versions of Email Archiving for MS Exchange/
  Lotus Notes.

Configuring Logical Archives 7-7



■ A named group of storage partitions of the same media type

■ The following pool types are possible:
- ISO (write-at-once) (formerly: CD pool)
- IXW (write-incremental) (formerly: WORM pool)
- FS (single file) (buffered hard disk)
- VI (single file) (Single File Vendor Interface, with EMC Centera)
- HDSK (write-through) (recommended only for test purposes)

■ ISO, IXW, FS and VI pools require a hard disk buffer

■ Each ISO, IXW, FS and VI pool has its own media write job, including
- job schedule: "when and how often to run?"
- job settings (caching, backup, ...): "what to do exactly during media writing?"

Configuring Logical Archives Slide 8

Starting with Archive Server ≥ 9.6, pool types FS and VI are offered.

Pool type VI - Single File Vendor Interface


VI supports single files (instead of ISO images) for archiving on a storage system. Currently,
only EMC Centera is supported.

Pool type FS - Single File Filesystem


Both HDSK and FS pools use a hard disk device. Unlike HDSK pools, however, FS pools use
the diskbuffer and have a designated write job. With Archive Server ≥ 9.6, FS pools should be
used instead of HDSK pools. HDSK pools should only be used for test purposes.
FS pools can be used along with local hard disks or with suitable storage systems (e. g. HDS,
NetApp, ...).

7-8 710

Storage Systems supporting FS pools

■ Libraries to hide storage-specific settings of the WORM feature and retention periods
■ Library name corresponds to the volume description file
■ DS and STORM use the same libraries

Provide volume description file: archive.vdf

Configuring Logical Archives Slide 9

Writing on hard disk using libhdsk supports compliance features (unlike HDSK pools).

Configuring Logical Archives 7-9



■ Physical "containers" for storage media

■ Supported device types
- WORM/UDO jukeboxes (IXW)
- DVD/WORM jukeboxes (ISO)
- "Virtual jukeboxes" (HD-WO)
- Hard disk drives (FS, HDSK)

■ Storage systems use
- for writing ISO images: "virtual jukeboxes" (HD-WO)
- for writing single files: hard disk files (FS or VI)

■ Hard disk devices are used for
- Disk buffers
- Hard disk pools

■ Devices are handled via the Archive Server Administration

Configuring Logical Archives Slide 10

Depending on the storage system, documents are either written as single files or ISO images.
ISO images are written using "virtual jukeboxes" in "hard-disk write-once mode" (HD-WO).
These are handled similarly to ISO pools.
Single files are usually written using hard disk drives in write-through mode, directly from the OS.

7-10 710

Chapter guide
■ Logical archive fundamentals
- Definition
- Pros and cons for multiple logical archives

■ Technical background
- Pools
- Devices

■ Configuration
- Archives
- Pools
- Document processing options

Configuring Logical Archives Slide 11

Configuring Logical Archives 7-11



Create logical archive



Configuring Logical Archives Slide 12

Creating a logical archive in the Archive Server Administration is fairly easy: Invoke the
Create Archive dialog as illustrated above and supply the logical archive's name and,
optionally, a description.
Reminder: For a logical archive for SAP, always use two-character, uppercase, alphanumeric
names (a restriction of the SAP ArchiveLink interface).

7-12 710

Create ISO (write-at-once) pool (1): Pool name and type

(next page)

Configuring Logical Archives Slide 13

Having created a logical archive (as shown on the previous page), the next configuration step
is creating a media pool. For this, right-click on the logical archive name in the Archive Server
Administration (as illustrated above) and choose Create Pool from the context menu. You
will then be guided through several dialogs where you have to enter the following pool
attributes:
(1) The pool name (which does not need to be unique among the logical archives), pool
type (= type of media that the pool shall use: ISO, IXW, FS, VI or HDSK), and
application type which here means "document component type" (i. e. notes,
annotations, and OLE annotations). If you do not intend to separate components of
those types from other components to different media, choose "Default" - i. e. archive
all document components together into this pool.

Configuring Logical Archives 7-13


Create ISO (write-at-once) pool (2): Write configuration

(next page)

Fill in these fields only for asynchronous backup, i. e. if the backup shall be created
- in a second jukebox
- at a later point of time

Configuring Logical Archives Slide 14

Step (2) (illustrated above) queries details of how data shall be transferred from disk buffer to optical
media. For a standard ISO pool configuration, specify the following items:
Backup: Do not select.
Allowed Media Type: Choose here the type of optical disks that you intend to use for this pool:
CD-R, DVD-R, or WORM.
Partition Name Pattern: Determines how newly burned disks will be labeled. "$(...)" are
placeholders for changeable values; $(SEQ) (= sequential number) must always be present.
You can check the effect of your pattern with the Test Pattern button.
Number of Partitions: For each ISO volume, that many identical pieces are created. For test data,
choose '1'; for production use, '2' (original plus backup) should be sufficient.
Minimum amount ...: If less than that amount of data is queued in this pool for writing, no disk will
be written; more archived data is waited for instead.
Original jukebox: Select the jukebox where optical disks for this pool shall be burned.
For burning backup disks in a separate jukebox, some fields have to be filled in differently:
Delete from Diskbuffer: Do not select for production use; the disk buffer must be used as a
temporary backup in this case.
Backup: Select this option.
Number of Drives: Tells the backup job - which copies original disks to their backups - how many
jukebox drives it is allowed to occupy simultaneously. This may speed up the backup process,
provided that enough drives are available. Minimum is '1'.
Number of Partitions: Choose '1' (only the original).
Number of Backups: Normally '1'.
Backup jukebox: Select the jukebox where the backup disks shall be burned.
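To illustrate the placeholder mechanism, here is a small sketch of how a pattern such as "A1_$(SEQ)" expands into partition names. This is illustration only, not the Archive Server's actual implementation; only the $(SEQ) placeholder is documented above, and the function name is invented.

```python
import re

def expand_pattern(pattern, seq, values=None):
    """Expand $(NAME) placeholders in a partition name pattern.

    $(SEQ) is replaced by the sequential partition number; any other
    placeholder is looked up in the optional 'values' dictionary.
    """
    values = dict(values or {}, SEQ=str(seq))

    def repl(match):
        key = match.group(1)
        if key not in values:
            raise ValueError("unknown placeholder $(%s)" % key)
        return values[key]

    return re.sub(r"\$\((\w+)\)", repl, pattern)

# A pattern like "A1_$(SEQ)" yields partition names A1_1, A1_2, ...
print(expand_pattern("A1_$(SEQ)", 1))  # A1_1
```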

In Archive Server ≤ 9.5, the following options were additionally available:


Copy to cache: Along with writing a document to optical disk, a copy is placed in the server's read
cache. Effective only together with the "Delete from Diskbuffer" option.
Delete from Diskbuffer: If deselected, documents are held in the disk buffer even after having been
written to optical disk

7-14 710

Create ISO (write-at-once) pool (3): Writing schedule, disk buffer

Configuring Logical Archives Slide 15

(3) Writing data from disk buffer to optical disks is a periodic job that is to be scheduled
here. First assign the job a name (the illustration shows a convention), then specify the
job period. The illustration shows a reasonable choice for an ISO pool: once per night.
Note: This job will later be visible and maintainable in the Archive Server
Administration's Jobs tab.
(4) For optical media pools, a disk buffer has to be chosen for collecting documents prior to
writing them to optical disks. See chapter Disk Buffer Configuration for reasonable
configurations.

Configuring Logical Archives 7-15



Use ISO pool with HD-WO (hard disk write-once) media (1): Pool name and type

■ HD-WO pools also write ISO images
■ Operation method for hard-disk-based storage systems (EMC Centera, HDS, NetApp, ...)

Configuring Logical Archives Slide 16

7-16 710

Use ISO pool with HD-WO (hard disk write-once) media (2): Write configuration

■ Example for a storage system writing ISO images in HD-WO mode (e. g. EMC Centera, HDS, NetApp)
■ Further settings necessary
- See installation guides
- Advisable to involve Open Text Consulting for implementation

Configuring Logical Archives Slide 17

Allowed Media Type


Choose the media type HD-WO and append the maximum size of an ISO image (in MB),
separated by a colon (e. g. HD-WO:1000).

Minimum Amount of Data


Enter a value less than the maximum size of an ISO image that you enter in the Allowed Media
Type field.
Original Jukebox
Choose the device name that you have configured in the server.cfg file.
Number of Partitions
Enter 1.
Backup Jukebox
Choose the device name that you have configured in the server.cfg file.
Number of Backups
Enter 1.

Note:
Backup of documents stored on hard disk based storage systems (e. g. EMC Centera, HDS)
can be handled by the storage system itself and has to be configured appropriately.
In such a case you have to set the value for the number of backups to zero and leave the entry
for the backup jukebox vacant.
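As a sketch of the colon syntax described above (media type, then the maximum ISO image size in MB), a hypothetical helper could split a value such as HD-WO:1000 like this; parse_media_type is not a product function, just an illustration of the format:

```python
def parse_media_type(spec):
    """Split an allowed-media-type spec such as 'HD-WO:1000' into
    the media type and the optional maximum ISO image size in MB."""
    media, sep, size = spec.partition(":")
    return media, int(size) if sep else None

assert parse_media_type("HD-WO:1000") == ("HD-WO", 1000)
assert parse_media_type("DVD-R") == ("DVD-R", None)
```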

Configuring Logical Archives 7-17



Create IXW (write-incremental) pool


Start in Archive Server Administration, logical archives section

Same way as for ISO pool:


Schedule write job
• Naming proposal: Write_WORM_A4
• Typical schedule: Hourly

Configuring Logical Archives Slide 18

Creating and setting up an IXW pool is different from an ISO pool; this corresponds to the
differences in media writing techniques (ISO: one-time, synchronous backup; IXW: incremental,
asynchronous backup). These are the attribute differences:
(1) The pool type must be "Write Incremental (IXW)".
(2) IXW write configuration does not refer to a minimum amount of data to be written. Only
the following parameters are to be specified:
Backup: If selected, IXW volumes of this pool will be backed up automatically;
always select for pools containing production data.
(Using IXOS-ARCHIVE ≤ 4.2, there is an additional option in the WORM write configuration:
"Delete from disk buffer after copy". Never select this option for a pool for production data! You
always need the disk buffer as a temporary backup between writing a document to the original
WORM volume and duplicating it to the backup WORM. For test data, however, this is not
necessary.)
Auto Initialization is a recommended option. This way, new WORMs don't need to be initialized
and assigned manually. Auto Initialize also takes care of initializing the backup WORM.
(3) Since there is no need to wait for a certain amount of archived data to fill a volume
completely, IXW media writing can be scheduled much more often than ISO media
writing. The shortest period that can be specified is every five minutes.
(4) Like an ISO pool, an IXW pool needs a disk buffer for collecting documents prior to
writing them to optical disks. See chapter Disk Buffer Configuration for reasonable
configurations.

7-18 710

Create FS (single file) Pool

Start in Archive Server Administration, logical archives section

Same way as for ISO pool:


Schedule write job
• Naming proposal: Write_HD_A4
• Typical schedule: Hourly

Configuring Logical Archives Slide 19

Creating and setting up an FS (single file) pool is different from an ISO pool; this corresponds
to the differences in media writing techniques (ISO: one-time, synchronous backup; FS:
incremental writing, asynchronous backup). These are the attribute differences:
(1) The pool type must be "Single File (FS)".
(2) FS write configuration does not refer to a minimum amount of data to be written.
(3) Since there is no need to wait for a certain amount of archived data to fill a volume
completely, FS writing can be scheduled much more often than ISO media writing. The
shortest period that can be specified is every five minutes.
(4) Like an ISO pool, a Single File (FS) pool needs a disk buffer for collecting documents
prior to writing them to final storage. See chapter Disk Buffer Configuration for
reasonable configurations.

Configuring Logical Archives 7-19



Create HDSK (write-through) pool


Preparation step:
• Provide hard disk partition on operating system level
• Do not make partition too large
- Absolute limit: 1 TB (for Archive Server ≤ 9.5)
- If more total space required: use several smaller partitions instead of a single large one
- Additional partitions can be assigned later

Configuring Logical Archives Slide 20

To complete the picture of media pool setup, here is how to create and set up a hard disk pool (e. g. for
testing purposes or for overlay forms):
(1) As a preparation step, you first have to provide a hard disk partition on operating system level.
On a Unix-based Archive Server, make sure the root directory of the file system is owned by the
user/group that the Archive Server is operated as (e. g. ixosadm/ixossys) and has
permissions 770.
(2) The pool type must be "Write Thru (HDSK)".
(3) No job for writing to optical disks and no disk buffer are involved. A hard disk pool directly and
finally stores documents on hard disk volume(s); therefore, you have to assign the prepared hard
disk partition to the pool directly.
Specify the following:
Partition name: A (preferably meaningful) logical name for this volume; must be unique
throughout all volume names (including IXW media) of this Archive Server. The Archive
Server will henceforth maintain the volume by this name.
Mount path: The root directory of the partition's file system. On Windows NT, this should be
a drive specification (including a backslash); on Unix platforms, it is the directory where
the partition is mounted; on Windows 2000, it can be either of both, depending on how
the partition is hooked into the file system.
If, on a Windows-based Archive Server, you want to use a network share instead of a
local hard disk drive, see ESC article https://esc.ixos.com/1072860397-483
about how to do that exactly.
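On Unix, the preparation step described above could be scripted roughly as follows. This is a sketch only; ixosadm/ixossys are the example service accounts from the text, and changing ownership to them requires root privileges.

```python
import os
import shutil

def prepare_partition_root(path, user=None, group=None):
    """Create the root directory of a hard disk partition's file system
    and restrict it to the Archive Server service account (mode 770)."""
    os.makedirs(path, exist_ok=True)
    if user or group:
        # e. g. user="ixosadm", group="ixossys"; needs root privileges
        shutil.chown(path, user=user, group=group)
    os.chmod(path, 0o770)  # rwx for owner and group, nothing for others
    return oct(os.stat(path).st_mode & 0o777)

# prepare_partition_root("/archive/hd_pool_A4", "ixosadm", "ixossys")
```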
The recommendation not to make the hard disk partition too large is due to the fact that some
administrative actions (like consistency checks) require examining the whole partition contents. The
more documents are stored there, the longer such a scan will take. If, moreover, a partition is full of very
small documents, the total number of files is very high; this may lead to unacceptably long execution
times of those actions. To prevent this type of problem, rather use multiple partitions of moderate size
instead of a single large partition. If you store rather large documents only (like SAP R/3 data archiving
files), the partition may be made larger as well; where mainly small documents are stored, the partition
size should be smaller (using BLOBs, however, reduces the number of stored files of small documents).
If you choose to divide the total storage space of the pool into more than one partition, you have to
attach all but the first partition to the pool after the pool has been created; see chapter Hard Disk
Resource Maintenance for more information.

Since the availability of FS (single file) pools, the HDSK (write-through) pool type is only
recommended for test purposes. Whenever possible, use FS pools instead.

7-20 710

Set document processing options (1)



■ Caching of documents upon reading from storage system
- Should always be activated, except if storing huge documents that are expected to be
retrieved only seldom

■ Compression to save space on storage media
- Often recommended

Configuring Logical Archives Slide 21

All settings discussed on this and the following pages are to be made per logical archive, i. e.
you may configure the Archive Server to treat documents in separate logical archives
differently.

Configuring Logical Archives 7-21


Set document processing options (2)

■ BLOBs to optimize storage of small files
- Recommended where mainly small files are stored

■ Single instance to store identical files only once
- Recommended especially for email solutions

■ ArchiSig Timestamps to enable validating document authenticity
- Use only if explicitly demanded
- Requires further configuration steps (timestamp server setup)

■ Deferred Archiving to hold back documents in the Diskbuffer
- When the retention period cannot be set by the leading application during archiving
- Leading application triggers writing to final storage
- Suitable for e. g. workflow scenarios

Configuring Logical Archives Slide 22

ArchiSig Timestamps and Deferred Archiving are features introduced with Archive Server 9.6.
See the following pages for more details.

7-22 710

Set security options (for HTTP access)

" Signature: Allow or deny unsigned document access


Protection requires leading application to "sign" access requests with SecKeys
- To be specified separately for access types: read, create, update, delete
- Commonly used settings:
M Read, create, update, delete ~ full protection against unauthorized access
< Delete ~ all access (except deletion) freely allowed

" Enforce SSL usage for communication


- Requires a customer-specific certificate on the Archive Server
- Test certificate is preinstalled by IXOS

" Control Document Deletion

Meaning of the Use SSL options:


Use (y): Clients must use SSL.
Don't use (n): Archive Server forces clients not to use SSL.
May use (m): Archive Server accepts requests both using and not using SSL; every client
decides individually, dependent on client-side configuration settings
See the Archive Server Administration Guide for details about how to install a customer-
provided SSL certificate.

Document deletion options:


Allowed: Regular behavior.
Causes error: When a user or an application tries to delete a document, an error
message is returned.
Ignored: Attempts to delete a document are ignored without an error message.

Configuring Logical Archives 7-23



Exercise: Create logical archive with media pool on Archive Server

" Create logical archive


" Create an ISO, IXW, FS and
Harddisk pool
- Specify writing parameters
- Schedule write job

" Set processing options


" Set security options
- i. e. require signature for Delete
only

Configuring Logical Archives Slide 24

7-24 710


8 Disk Buffer Configuration


Setups for intermediate document storage

Disk Buffer Configuration 8-1


■ Disk buffer fundamentals

■ Additional roles of the disk buffer
- Backup
- Caching

■ Configuration examples

■ Sizing

■ Creating, assigning to pools

Disk Buffer Configuration Slide 2

8-2 710

Disk buffer fundamentals

■ Stores documents safely until they are written to final storage

■ May retain documents even after writing to optical disks
- For faster access than from optical disk → caching
- For data loss protection → temporary backup

■ Documents have to be purged periodically

■ Must be equipped with one or several hard disk partitions

■ Required by optical media pools, "virtual jukeboxes" & buffered
hard disk pools (ISO, IXW, HD-WO, FS, VI)

Disk Buffer Configuration Slide 3

Disk Buffer Configuration 8-3



Disk buffer purging


■ Ways how documents can be deleted from the disk buffer:
- By the ISO write job, immediately after writing
- By the buffer's own purge job

■ Buffer purging properties:
- Removes "old" documents only, i. e. those that have already been written to optical media
- Job should be run once a day, preferably during times of low system activity
- Selects documents for purging according to given rules
- May move documents to the cache instead of simply deleting them


Disk Buffer Configuration Slide 4

Each disk buffer possesses a periodic purge buffer job that, when invoked, searches the buffer
for "old documents" (i. e. documents that have already been written to optical disk) and
removes them according to certain criteria:
• If a percentage of free buffer space for newly archived documents ("Required avail.
space") is specified, the job removes so many documents (oldest first) that the required
space amount is freed. If, however, the required space amount cannot be freed because
the buffer is populated with too many not-yet-written documents, the job frees as much
space as possible by removing old documents.
• If a retention period for old documents ("Clear archived documents older than ... ") is
specified, documents older than that period are removed - even if the claimed
percentage of space is already free.
Moreover, you can choose to copy a document to the read cache immediately before removing
it from the disk buffer ("Cache before purging"). This way, the document's fast availability is
continued by the cache after its removal from the disk buffer.
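The selection rules above can be sketched in Python. The data structures and the function name are invented for illustration; the real purge job additionally verifies every document against the storage database and the target medium before deleting anything.

```python
import time

def select_for_purge(docs, buffer_size, required_free_pct=0.0,
                     max_age_days=None, now=None):
    """Pick already-written ("old") documents to purge, oldest first.

    docs: list of dicts with 'size' (bytes), 'archived_at' (epoch
    seconds) and 'written' (True once the document is on final media).
    Documents older than max_age_days are always selected; beyond
    that, the oldest written documents are selected until
    required_free_pct of the buffer would be free.
    """
    now = time.time() if now is None else now
    used = sum(d["size"] for d in docs)
    free_target = buffer_size * required_free_pct / 100.0
    need = max(0.0, free_target - (buffer_size - used))

    selected = []
    for doc in sorted(docs, key=lambda d: d["archived_at"]):
        if not doc["written"]:
            continue  # never purge documents not yet on final media
        too_old = (max_age_days is not None and
                   now - doc["archived_at"] > max_age_days * 86400)
        if need > 0 or too_old:
            selected.append(doc)
            need -= doc["size"]
    return selected
```

Note that documents not yet written to final media are never candidates, which mirrors the "old documents only" rule above.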

The buffer purge job causes considerable hard disk, CPU, and database load on the Archive
Server:
1. It searches the disk buffer volumes for documents matching the purging selection
criteria.
2. For each selected document, it checks in the storage database and on the optical target
medium that the document is really stored there (in order not to lose documents in case
of an inconsistency between database and media).
3. If everything is okay, it deletes the document's files on the disk buffer volume.
For this reason, it should preferably be scheduled to run when there is not much other activity
on the Archive Server, e. g. during the night and not simultaneously with the local backup job
(making IXW media backups).

8-4 710

Chapter guide
■ Disk buffer fundamentals

■ Additional roles of the disk buffer
- Backup
- Caching

■ Configuration examples

■ Sizing

■ Creating, assigning to pools

Disk Buffer Configuration Slide 5

Disk Buffer Configuration 8-5



Disk buffer as temporary IXW media backup



■ Disk buffer must keep documents until backed up on backup IXW media
- To prevent data loss if the original IXW medium gets lost before being backed up

■ Configuration to prevent "too early" purging:
- Purge only after a "safety period"
  - E. g. a week
  - Grants enough time to react if the WORM backup fails
- Make the purge job respect the WORM backup
  - Allows purging as soon as one backup is made, even if multiple (local or remote)
    IXW media backups are configured
Disk Buffer Configuration Slide 6

The explanations given above refer to IXW media. Using ISO media (CD, DVD, HD-WO,
WORM), the disk buffer normally does not play the role of a temporary backup since the ISO
media backup is written immediately along with the original ("synchronous" backup); the buffer
purge configuration has no influence on data safety here.
Nevertheless: If an ISO media backup shall be made
• in a second jukebox or
• at a later point of time,
the step sequence of writing to original, backing up, and purging from disk buffer is exactly the
same as for IXW media ("asynchronous" backup). In these cases, disk buffer purging should
follow the same rules as for IXW media, as explained above.

Making the buffer purging respect the backup


As of Archive Server version 5.5, purging the buffer can be made dependent on the existence
of a document's backup on a backup IXW media; should the purge job encounter a document
that has not yet been backed up, it does not delete it from the buffer. This is the safest way to
operate IXW media backup and buffer purging.
To enable this function:
1. In the Archive Server Administration, tab Jobs, right-click the purge job of the
disk buffer in question and choose Edit from the pop-up menu.
2. In the Edit Job dialog, append "-b" to the disk buffer name in entry field
Arguments.
3. Confirm the dialog.

8-6 710

Ways of caching documents


after writing to optical media

■ In disk buffer
- Documents are kept in the disk buffer for a longer period, purged later by the buffer
purge job (see next slide ...)

■ In cache
- Documents are moved from disk buffer to cache as soon as possible
(either by media write job or by buffer purge job)

■ Not at all
- Documents are deleted from disk buffer as soon as possible
(either by media write job or by buffer purge job)
- For storage scenarios that do not require fast retrievability of "fresh" documents

Disk Buffer Configuration Slide 7

The Archive Server offers the above-mentioned methods to treat documents after they have
been written to their final storage location on optical media.
The decision whether or not to cache documents at all should be based on how the documents
are used: Immediately after they have been archived, will they be retrieved frequently by the
users? This will be the case, for example, for the following archiving scenarios:
Early archiving (= store for entry later) with workflow in SAP
Late indexing in UniversalArchive (IXOS-eCONtext for Applications)
Scenarios where documents are archived after users have finished working with them do not
strictly need this type of caching. This applies, for example, for the following archiving
scenarios:
Late archiving with barcode in SAP
All kinds of data archiving (from SAP, MS Exchange, Lotus Notes)
If you have decided to cache documents, there is still the choice where to keep the documents
for that purpose: either in the disk buffer or in the cache. See the next page for a discussion
about the pros and cons of both possibilities.

Disk Buffer Configuration 8-7



Caching: in disk buffer or in cache?



Disk Buffer Configuration Slide 8

The table above reveals the relevant properties of caching documents either in the disk buffer
or in the cache.
As a conclusion, caching in the disk buffer is the more stable but also more expensive solution;
caching in the cache is cheaper but has drawbacks in certain situations.

8-8 710

Chapter guide
■ Disk buffer fundamentals

■ Additional roles of the disk buffer
- Backup
- Caching

■ Configuration examples

■ Sizing

■ Creating, assigning to pools

Disk Buffer Configuration Slide 9

Disk Buffer Configuration 8-9



Purge configuration examples


■ IXW pool, no caching: purge job with parameter "-b"

■ IXW pool, caching in cache:

■ Caching in disk buffer:
- Purging by filling rate
  - If disk buffer too small, documents are purged earlier than desired
- Purging by retention period
  - Setting without filling rate not recommended
  - If disk buffer too small, it may become filled up
    (→ no archiving of new documents possible any longer)
- Purging by both
  - Compromise between availability, retention, and purging workload
Disk Buffer Configuration Slide 10

The slide above illustrates examples of how the rules for disk buffer purging in various situations
- discussed in detail on the previous pages - can be realized in terms of buffer purging
options.
Some of these example configurations - particularly those referring to IXW media on an
Archive Server ≤ 5.0 - contain an "available space" requirement of 0%:
This ensures that documents younger than 7 days are never deleted; this is reasonable
since this constraint is established for the sake of data loss protection.
On the other hand, in the extreme case of the disk buffer being filled up with "younger"
documents, the purge buffer job will not delete anything; i. e., with this setting, the disk
buffer may possibly become completely full if it is too small to hold 7 days' archiving
data. Therefore, using the 0% setting, it is the administrator's duty to keep an eye on
the disk buffer filling rate (e. g. by means of the Archive Server Monitor) and to enlarge
its disk space if it becomes too small.
Using IXW media on an Archive Server ≥ 5.5, you should always enable the "respect
WORM backup" property for buffer purging; see page Disk buffer as temporary WORM backup
(earlier in this chapter) for details.

8-10 710

Chapter guide

■ Disk buffer fundamentals

■ Additional roles of the disk buffer
- Backup
- Caching

■ Configuration examples

■ Sizing

■ Creating, assigning to pools

Disk Buffer Configuration Slide 11

Disk Buffer Configuration 8-11



Sizing considerations (1)



■ For ISO media (write-at-once)
- Buffer must hold data to fill up one partition per assigned pool
  - Using DVDs: 4.7 GB
  - Using WORMs: 4.6 GB
  - Using HD-WO: variable up to maximum size (e. g. 1 GB with HD-WO:1000)
  - (Using CDs: 640 MB)
- Add amount of data typically archived between two ISO write job invocations
  - In case the last write job has missed just a little amount of data for burning a disk
- Sizing example:

Disk Buffer Configuration Slide 12

Certain storage systems support writing ISO images to a "virtual jukebox" in write-once mode
(HD-WO).
Consider the limits of the maximum file size supported by the storage system.
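The ISO sizing rule above (one full media partition per assigned pool, plus the data typically archived between two write job runs) can be expressed as a simple calculation. The function name is illustrative only; the usage line uses the example DVD partition size of 4.7 GB from the slide.

```python
def iso_buffer_min_size_gb(partition_size_gb, pool_count,
                           archived_between_jobs_gb):
    """Minimum disk buffer size for ISO (write-at-once) pools:
    one full media partition per assigned pool, plus the data that
    typically arrives between two write job invocations."""
    return partition_size_gb * pool_count + archived_between_jobs_gb

# Two DVD pools (4.7 GB each) plus ~3 GB archived between write jobs
size = iso_buffer_min_size_gb(4.7, 2, 3.0)  # about 12.4 GB
```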

8-12 710

Sizing considerations (2)


■ For IXW media (incremental writing)
- Disk buffer serves as temporary backup. Typical retention period:
  until next purge job execution, usually 1 day
- Min. size = amount of data typically archived into assigned pools within
  the backup period

■ For FS (single file) and VI pools
- Depends on frequency of write job
- Less critical than optical media, since final storage is also hard disk

■ For all storage systems
- Add generous reserve to be prepared for media writing problems,
  e. g. jukebox damage, storage system failure

■ If disk buffer is used as cache
- Add total amount of data to be cached
  (average archiving rate x retention period)

Disk Buffer Configuration Slide 13

In case of media writing problems, no documents can be removed from the disk buffer; as a
consequence, documents queue up in the disk buffer as long as media writing is interrupted.
As soon as the disk buffer is filled up, archiving new documents is no longer possible.
In order to bridge the media writing downtime, it is reasonable to equip the disk buffer with
considerable space reserve. The bigger this reserve, the longer archiving can continue.

Disk Buffer Configuration 8-13


■ Disk buffer fundamentals

■ Additional roles of the disk buffer
- Backup
- Caching

■ Configuration examples

■ Sizing

■ Creating, assigning to pools

Disk Buffer Configuration Slide 14


Create disk buffer (1): Provide hard disk partition

- Provide hard disk partition (Unix: a file system) on operating
  system level
- To be used exclusively by the disk buffer
  - Otherwise, free space calculations will fail
- Must be read/writable by the Archive Server processes
- Do not make partition too large
  - Too many documents on a partition may lead to performance problems
    - During disk buffer purging
    - During volume consistency checks
  - If more total space is required:
    use several smaller partitions instead of a single large one
    - One partition can be assigned during disk buffer creation
    - Additional partitions can be assigned later
  - For Archive Server ≤ 9.5 there is an absolute built-in limit:
    1 TB per partition

Disk Buffer Configuration Slide 15

To create a disk buffer, you first have to provide a hard disk partition on operating system
level; see the previous slides about how large this partition should be. On a Unix-based
Archive Server, make sure the root directory of the file system is owned by the user/group that
the Archive Server is operated as (e.g. ixosadm/ixossys) and has permissions 770.
The recommendation not to make the hard disk partition too large is due to the fact that
some administrative actions (like disk buffer purging or consistency checks) require examining
the whole partition contents. The more documents are stored there, the longer such a scan will
take. If, moreover, a partition is full of very small documents, the total number of files is very
high; this may lead to unacceptably long execution times of those actions. To prevent this type
of problem, use multiple partitions of moderate size rather than a single large partition. If
you store rather large documents only (like SAP data archiving files), the partition may be
made larger as well; where mainly small documents are stored, the partition size should be
smaller (using BLOBs, however, reduces the number of stored files of small documents).
If you choose to divide the total buffer space into more than one partition, you have to attach
all but the first partition to the disk buffer after the disk buffer has been created; see chapter
Hard Disk Resource Maintenance for more information.




Create disk buffer (2)



Disk Buffer Configuration Slide 16

Once you have prepared a hard disk partition for exclusive use by the disk buffer (see previous
page), you can create the disk buffer by invoking the Archive Server Administration's Create
Buffer function as illustrated above. You will then be guided through a sequence of dialogs
where you make the following entries:
1. Specify the disk buffer's name (unique among all names of disk buffers on this Archive
   Server) and the Buffer purge configuration attributes as discussed earlier in this
   chapter.
2. Assign the prepared hard disk partition to the disk buffer by specifying the following:
   Partition name: An (Archive Server-internal) logical name for the partition; must be
   unique throughout all volume names (including IXW media) of this server. The
   Archive Server will henceforth maintain the volume by this name.
   Mount path: The root directory of the partition's file system. On Unix platforms, it is
   the directory where the partition is mounted; on Windows, it may be a mounted
   partition or a drive specification.
   If, on a Windows-based Archive Server, you want to use a network share instead
   of a local hard disk drive, see ESC article
   https://esc.ixos.com/1072860397-483 about how to do that exactly.
3. Purging the buffer is a periodic job that is scheduled here. First assign a name to the
   job (the illustration shows a convention), then specify the job period. The illustration
   shows a reasonable choice: once per night.
Notes:
- This job will later be visible and maintainable in the Archive Server
  Administration's Jobs tab.
- Using IXW media on an Archive Server, you should edit the job to make buffer
  purging dependent on WORM backup; see page Disk buffer as temporary
  WORM backup (earlier in this chapter) for how to do this.


Assign disk buffer to media pool



Disk Buffer Configuration Slide 17

A disk buffer must be chosen for each ISO or IXW pool when the pool is created. To
assign a different disk buffer later, invoke the Edit Pool Configuration
dialog as illustrated above and select one of the defined disk buffers from the list.
Whenever you assign a disk buffer to a pool, always consider that the settings for the media
write job and the settings for the buffer purge job interact with each other; a reasonable
writing and caching setup must involve both setting groups. Refer to the Purge configuration
examples pages (earlier in this chapter) for more information.




One or more disk buffers?



Archive Server

Disk Buffer Configuration Slide 18

The term "disk buffer" should not be confused with a hard disk partition for buffering data.
Instead, a disk buffer is a logical construct of the Archive Server, with certain properties,
to which one or more hard disk partitions are assigned.
1. If you have several logical archives with ISO or IXW pools, you may use a single disk
buffer ("MyBuffer1" in the above illustration) for them all.
2. To enlarge disk buffer capacity, an additional hard disk partition ("Disk2") may be
   assigned to that buffer. Thus you normally need only one disk buffer, even when
   employing multiple hard disk volumes for buffering.
3. Alternatively, you may use a second disk buffer ("MyBuffer2") with its attached hard disk
partition ("Disk3").
Using multiple disk buffers is recommended only for situations where there is a real
requirement for this, e.g. if different disk buffer configurations have to be used at the same
time for different kinds of archived data. Refer to page Purge configuration examples (1)
(earlier in this chapter) for more information.
However, as the number of logical archives increases, you will possibly run short of drive
letters to use on Windows NT.
Note: Due to Archive Server processing internals, it is preferable to enlarge disk
buffer space by extending the assigned hard disk partition (wherever possible) rather than
simply attaching a second partition.
Attention: Assigning the same hard disk partition to different disk buffers, or to a disk
buffer and a cache at the same time, leads to severe problems on the Archive Server!


Exercise: Create disk buffer

- Create disk buffer
- Assign to media pool
- Archive sample document

Disk Buffer Configuration Slide 19




9 Document Processing Options


Variants of how the Archive Server treats stored
documents




Chapter Overview

- Caching
- Compression
- BLOBs
- Single instance archiving
- Encryption
- ArchiSig Timestamps
- Deferred Archiving
- Retention Settings

Document Processing Options Slide 2

This chapter introduces further possibilities to process documents, in addition to the "main"
document flow aspects discussed previously.
These document processing functions are not generally active on an Archive Server; instead,
they can be switched on or off individually for each logical archive (as illustrated above).
In addition to their activation, some of the functions have further configuration parameters;
these can only be set globally on the Archive Server. Details are given on the following pages.


Caching

- Caching on Logical Archive level sets read caching
  - Cache when documents are displayed

- Different from caching on Disk buffer level
  - Cache when documents are purged
    after the write process

Document Processing Options Slide 3

When setting the cache option on logical archive level, be aware that this applies only to read
caching. That means that when documents are requested by an application for display,
the displayed component(s) will be cached in the appropriate cache partition. When users
request the component again, it can be retrieved quickly from the cache partition.

This is different from the caching setting within the Disk buffer purge job. There, the document
can be moved from the disk buffer to the cache partition after it has been written to media.



Compression

- Shrinks size of document files on storage media, increases
  space efficiency
- Performed by media write job immediately before writing
  - In HDSK pools: by dedicated job (command Compress_HDSK)
- zlib algorithm used
- Affects certain file formats only: ASCII, ALF, OTF, PDF, BLOB
  - List can be extended in Archive Server configuration
  - Other file formats are already compressed: FAX, JPEG, SAP data archiving
    format
- Decompression takes place immediately after reading from
  optical disk
- Activation always recommendable

Document Processing Options Slide 4

The list of file formats to be compressed - mentioned above - can be maintained in the
Server Configuration page of the Archive Server Administration, branch Document Service →
Component settings → Compression.

BLOB = Binary Large Object

In the file system examined by the write jobs, compressed data files are marked with the
prefix: rd.<data file-name>
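The behaviour described above can be reproduced with any zlib implementation. This Python sketch (illustrative only, not Archive Server code) shows why text-like formats are worth compressing while already-compressed formats are skipped:

```python
import os
import zlib

# Text-like content (think ASCII, ALF, OTF) is highly redundant ...
text = b"invoice line item " * 500
compressed_text = zlib.compress(text)

# ... while random bytes stand in for already-compressed data (FAX, JPEG):
random_like = os.urandom(len(text))
compressed_random = zlib.compress(random_like)

print(len(text), "->", len(compressed_text))           # shrinks dramatically
print(len(random_like), "->", len(compressed_random))  # hardly shrinks at all
```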


BLOBs (1) - Properties

- Containers for many small archived files
  - On document components level
  - Store any kind of components, including notes and annotations
- Effect: Storing few large files instead of many small files
- Benefit: Optimizing storage of small files
  - Minimize space fragmentation on storage media
  - Reduce overhead of file system structure information
  - Speed up WORM writing
- Document component is appended to a BLOB as soon as it enters the disk buffer

- Configuration parameters (defined once per Archive Server)
  - Max. total size (default: 4 MB)
  - Max. element size (default: 40 kB)
  - Max. no. of elements (default: 1000)
- Recommendable especially for high archiving traffic with small document files
- Drawback: Documents stay in the disk buffer unwritten as long as the BLOB is still open
  - Using IXW media, this is probably longer than waiting for the next IXW write job execution

Document Processing Options Slide 5

The sizing parameters for BLOBs - mentioned above - can be maintained in the Server
Configuration page of the Archive Server Administration, branch Document Service →
Component settings → Blobs.

A useful command line tool to work with BLOBs is "ixblob". You find this tool in the folder
"<ixos-root>\bin".

A description of "ixblob" can be found in the ESC:

https://esc.ixos.com/104868984-567

ixblob -p blob
ixblob -t [vms] blob [file ...]

blob: path\name of a blob file

-v : verbose
-t : list the content
-p : print BLOB properties
-m : sorted by modification time
-s : sorted by size

The open BLOBs can be seen in the directory <ixos-root>\var\ds\temp\BLOBlog
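The container idea can be modelled in a few lines. This is a simplified illustration of the three limits from the slide, not the DocumentService implementation:

```python
# Simplified model of an open BLOB and its limits (defaults from the slide).
MAX_TOTAL = 4 * 1024 * 1024   # max. total size: 4 MB
MAX_ELEMENT = 40 * 1024       # max. element size: 40 kB
MAX_ELEMENTS = 1000           # max. no. of elements

class Blob:
    def __init__(self):
        self.sizes = []
        self.total = 0

    def accepts(self, size: int) -> bool:
        """A component goes into the open BLOB only if all limits still hold."""
        return (size <= MAX_ELEMENT
                and self.total + size <= MAX_TOTAL
                and len(self.sizes) < MAX_ELEMENTS)

    def append(self, size: int) -> bool:
        if not self.accepts(size):
            return False          # limit exceeded: stored individually instead
        self.sizes.append(size)
        self.total += size
        return True

blob = Blob()
print(blob.append(10 * 1024))   # → True: small component is swallowed
print(blob.append(100 * 1024))  # → False: exceeds max. element size
```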



BLOBs (2) - Tracing documents with dsClient

[Screenshot callouts: components in BLOBs cannot be found here; if the whole
document is in a BLOB, this directory does not even exist]

Document Processing Options Slide 6

If BLOBs are activated, document components can be "swallowed" by a BLOB, i. e. they are
not stored on storage media as they would be without BLOBs. The chart above explains how
to trace a document component's storage location in this situation:
In the dinfo output of dsClient (see also chapter Document Structure on Storage Media),
the fact that a document component is buried in a BLOB can be recognized by the volume
attribute value BLOB.... You can then proceed by applying dinfo to the BLOB itself, as
illustrated above; this leads you to where the BLOB is actually stored on your server.

Note: Querying the DocumentService for a BLOB this way only works if your dsClient
session is running on the Archive Server itself!

To close open BLOBs, stop and restart the spawner process "dsaux", e.g. type on the
command line:
spawncmd stop dsaux
spawncmd start dsaux


Single Instance Archiving (1) - Properties

- SIA = storing a file only once, even if the same file is sent to
  the archive multiple times
  - Identification of identical files by SHA1 fingerprint (160 bits)
- Result
  - Single instance of stored file → the SIA target
  - Multiple references to stored file → the SIA sources
    - Stored as stub files with zero length in BLOBs
- Certain files are excluded
  - As they will never be identical
- Special care is to be taken when
  exporting media from the archive
  - Wrong proceeding may result in
    "dead" references
- Benefit:
  - Space saving on storage media
  - Main target scenarios:
    - archiving of e-mail attachments
    - archiving of files
  - Useful in conjunction with:
    - Email Archiving
    - File System Archiving
- Drawback:
  - Induces system overhead
  - Increases both processing work and
    database volume
  - Useful only if there are at least 2 references to a file
  - Should not generally be enabled
Document Processing Options Slide 7

Since the SHA1 hash value - used as fingerprint for an archived file - is 160 bits
wide, the probability of erroneous identification of two different files is 2^-160.
A reference count is maintained for a SIA target: The target component is deleted only
after the last reference to it has been deleted.
SIA sources (= references to the really stored files) are maintained in the storage
database. However, in order to make the "normal" database import/export mechanics of
the Archive Server work for SIA, they are also stored on the actual storage media as
stub files:
- With zero length (do not allocate storage space)
- Always in BLOBs, even if BLOBs are not activated explicitly in the given context
  (to avoid storage overhead for the empty files)
- Never accessed for reading, except during a database export/import
Certain files can be excluded from the SIA mechanism - i.e. they will always be
stored individually - if there is no probability that they are identical; this depends on the
application context. For example, every e-mail archived from MS Exchange will
contain a component file called REFERENCES; such a file will never match any other
one, thus there is no need to apply SIA to it.
On the Archive Server, excluding files from SIA can be configured:
- According to MIME type (for storing via HTTP)
- According to IXOS component type (for storing via RPC)
- According to component (= file) name
  - Default: INFO.TXT, REFERENCES
    (these files are never identical in MS Exchange)
This configuration can be maintained in the Server Configuration page of the Archive
Server Administration, branch Document Service → Component settings → Single
Instance Archiving.
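The fingerprint mechanism is easy to demonstrate with Python's hashlib (illustrative only; the Archive Server computes the fingerprint internally):

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """SIA identifies identical files by their 160-bit SHA1 digest."""
    return hashlib.sha1(content).hexdigest()

# The same attachment sent with two different e-mails:
attachment_a = b"%PDF-1.4 quarterly report attachment"
attachment_b = b"%PDF-1.4 quarterly report attachment"
other_file = b"completely different content"

# Identical content yields an identical fingerprint, so the file is stored
# only once (the SIA target); further stores only create references (sources).
print(fingerprint(attachment_a) == fingerprint(attachment_b))  # → True
print(fingerprint(attachment_a) == fingerprint(other_file))    # → False
```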



Single Instance Archiving (2) - Tracing SIA components

Document Processing Options Slide 8

If single instance archiving is activated, document components may be just references
("sources") to the actual files stored at a different location ("targets"). The chart above explains
how to trace a document component's storage location in this situation:
In the dinfo output of dsClient (see also chapter Document Structure on Storage Media),
the fact that components of a certain document may be just SIA references can be recognized
by the document's status flag SIASOURCE.
To clarify the exact status of a certain component, use the cinfo command for querying
details about this component. If the pathName attribute has a suffix (BLOB... ->
_SIA_...), single instance archiving has indeed been applied to this component. The true
storage location of the component file (the SIA "target") is displayed as the first part of the
pathName attribute.

Encryption

- Makes documents unreadable outside the Archive Server
  - Ensures document privacy in case of storage media theft

- Effective on storage media only
  - Not in disk buffer, cache, network
  - For protecting privacy during client/server transmission: Use HTTPS

- Performed by write job immediately before writing
  - In HDSK pools: by dedicated job (command Compress_HDSK)

- AES Rijndael algorithm used

- Encryption key must be created and maintained on Archive
  Server
  - See Archive Server Administration Guide for details

- Recommendation: Do not activate unless required explicitly

Document Processing Options Slide 9

To set encryption use the Archive Server Administration Client.

To export or import the encryption key(s) use the command line tool "recIO". You find this tool
in the folder "<archive-root>\bin" .



Timestamps (1) - Properties

- Documents are digitally signed (together with a timestamp) after
  archiving
- Enables validating document authenticity at any time later
- Possible timestamp sources
  - External timestamp service
    - timeproof TSS80 or compatibles
    - AuthentiDate on a LunaSP box (Archive Server ≥ 9.6)
    - AuthentiDate via the internet (Archive Server ≥ 9.6)
  - Livelink Timestamp Server (uses computer clock)
    - not recommended for legal purposes
- Details:
  - Archive Server Time Stamp Service - Administration Guide
  - Archive Server - Administration Guide

Document Processing Options Slide 10

Find the mentioned Archive Server manuals in the ESC:


https://esc.ixos.com/1084264247-891


Timestamps (2) - Signing a document


[Figure: signing a document - the document's hash value is sent to the timestamping
system, which combines it with a timestamp and attributes, hashes this triple, and
signs the triple's hash value with its private key; the signature is attached]

Document Processing Options Slide 11

The chart above illustrates the steps involved in digitally signing a stored document:
• As soon as the document has entered the DocumentService (i. e. it is stored in the disk
buffer or in a HDSK pool), a hash value is calculated for the document.
The DocumentService sends the hash value to the timestamp service. This service may
be the local timestamp server itself or an external timestamp service provider.
The timestamp service forms a triple from the document's hash value, the timestamp of
the current time, and additional attributes, and creates a hash value for this triple.
• The timestamp service creates a digital signature from the triple's hash value, using its
private key, and adds this signature to the triple.
• The complete quadruple - including the signature - is sent back to the
DocumentService and stored as an additional component of the document.



Timestamps (3) - Verifying a signed document

[Figure: verifying a signed document - the viewer recalculates the document's hash
value, compares it with the stored hash value, and checks the signature using the
timestamp service's public key]

Document Processing Options Slide 12

The chart above illustrates the steps involved in verifying a digitally signed document. The
whole proceeding starts as soon as a user, currently viewing the document, requests
verification in the Archive Windows Viewer:
1. The Archive Windows Viewer calculates the hash value for the displayed document.
2. It retrieves the "signature quadruple" from the Archive Server and extracts the originally
stored hash value.
3. It compares the current and the originally stored hash value: If they differ, the document (or the
signature quadruple) has been manipulated in the meantime.
4. Next, the validity of the signature quadruple must be verified. For this, the client
calculates the hash value for the document's stored hash value, the timestamp, and the
additional attributes.
5. The included signature is decrypted using the timestamp service's public key, resulting
in the triple's original hash value.
6. The two hash values of the triple - original and current - are compared. If they differ,
the quadruple has been manipulated in the meantime; otherwise the document
authenticity is verified.
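The two procedures can be condensed into one sketch. For readability it uses an HMAC with a shared key as a stand-in for the timestamp service's private/public key pair; a real timestamp service uses asymmetric signatures, and the hash algorithms and field names below are illustrative assumptions:

```python
import hashlib
import hmac
import json

# Stand-in for the timestamp service's key pair (assumption: a real service
# signs with its private key, verification uses the public key).
SERVICE_KEY = b"demo-only shared secret"

def sign(document: bytes, timestamp: str, attributes: dict) -> dict:
    """Service side: hash the (doc hash, timestamp, attributes) triple and
    sign the triple's hash value."""
    doc_hash = hashlib.sha256(document).hexdigest()
    triple = json.dumps([doc_hash, timestamp, attributes], sort_keys=True)
    triple_hash = hashlib.sha256(triple.encode()).hexdigest()
    signature = hmac.new(SERVICE_KEY, triple_hash.encode(), hashlib.sha256).hexdigest()
    return {"doc_hash": doc_hash, "timestamp": timestamp,
            "attributes": attributes, "signature": signature}

def verify(document: bytes, quad: dict) -> bool:
    """Viewer side: recompute the document hash, compare it with the stored
    one, then check the signature over the triple."""
    if hashlib.sha256(document).hexdigest() != quad["doc_hash"]:
        return False  # document (or quadruple) was manipulated
    triple = json.dumps([quad["doc_hash"], quad["timestamp"],
                         quad["attributes"]], sort_keys=True)
    triple_hash = hashlib.sha256(triple.encode()).hexdigest()
    expected = hmac.new(SERVICE_KEY, triple_hash.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, quad["signature"])

doc = b"archived invoice"
quad = sign(doc, "2008-07-01T12:00:00Z", {"archive": "A1"})
print(verify(doc, quad))                  # → True
print(verify(b"tampered invoice", quad))  # → False
```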


Timestamps (4) - Renewal of Timestamps

- Electronic signatures & timestamps rely on cryptographic
  algorithms
- Cryptographic algorithms and keys lose their security
  qualification in the course of time
- Availability and verifiability of certificates is limited
  - e.g. in Germany, often only ca. 5 years

- Conclusion:
  - Electronically signed or timestamped documents can lose their evidence
    in the course of time!
  - e.g. in Germany, defined by the Bundesanzeiger

- Solution:
  - Use ArchiSig timestamps to renew electronic signatures & timestamps
    - Even if the original signature would not be valid anymore, renewal with ArchiSig can
      "refresh" the validity of the signature
    - Not only archive timestamps, but also the validity of personal signatures on
      documents can be prolonged this way
Document Processing Options Slide 13

Timestamps can become insecure after a certain time (e.g. 5 years) due to the following
reasons:
- Key length
- Algorithm
- Public key method
- Certificate becomes invalid

To avoid this, the following needs to be done:
- Re-hash the time stamps & build a new tree out of the hashed time stamps
- Get a time stamp for the new tree
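The two renewal steps amount to building a hash tree over the existing timestamps and obtaining one fresh timestamp for its root. A minimal sketch of the tree construction (not the actual ArchiSig data format):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Re-hash the leaves, then pairwise-hash one level at a time until a
    single root remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Build a new tree out of the (re-hashed) old timestamps; one new timestamp
# on this root then refreshes the evidence of all documents at once.
old_timestamps = [b"ts-doc-1", b"ts-doc-2", b"ts-doc-3"]
root = merkle_root(old_timestamps)
print(root.hex())  # single value to send to the timestamp service
```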



Timestamps (5) - ArchiSig Timestamps
- Recommended to use ArchiSig Timestamps
  - Instead of old Timestamps
  - Old Timestamps can still be activated
    additionally for compatibility reasons

- Not possible to turn off the ArchiSig flag!

- ArchiSig Timestamps are more efficient
  - Many documents can be signed with one
    single timestamp
  - Allow signature renewal
  - Trustworthiness & evidence of
    documents/signatures secured over a long
    time

Document Processing Options Slide 14


Timestamps (6) - Delivery of Signed Documents

- Strict
  - If the timestamp is not valid or does not
    exist, the document is not delivered.

- Relaxed
  - If the timestamp is not valid, the
    administrator is informed and the
    document is delivered

- None
  - No timestamp verification

Document Processing Options Slide 15

In a Relaxed timestamp verification scenario, you can set notifications to be informed about
invalid timestamp requests.



Deferred Archiving (1) - Concept

Scenario:
- Retention period cannot be set by the leading application during archiving
- Different document types with different retention periods
  - in one logical archive
  - Retention to be decided later

How it works:
- Documents stored in Disk Buffer
- Not written to storage subsystem (yet)
- When retention is known:
  - Using the http API,
    - set retention period &
    - create DS job entry

[Figure: the leading application first stores the document with retention=EVENT and
deferred archiving; later it sets the retention period and plans the DS job that moves
the document to storage]
Document Processing Options Slide 16


Deferred Archiving (2) - pool _DELAYED_



Document Processing Options Slide 17

When archiving with the deferred archiving option, documents are stored in the Disk buffer and
"parked" in the pool _DELAYED_. Once the appropriate command is sent from the leading
application, documents are moved to the correct pool for further processing to the storage
system.




Deferred Archiving (3) - start archiving



Document Processing Options Slide 18


10 Document Lifecycle Management


Introduction to the concept of DLM

Audit Trails are a new feature introduced with Archive Server 9.6




Document Lifecycle Management



- DLM in Archive Server is
  - Retention Handling
  - Storage reorganization
  - Deferred archiving
  - Deletion hold
  - Audit trails

- DLM in Archive Server is not
  - HSM implementation
  - Copying content between or within archives

- DLM & retention settings should comply with company policies
  - From an administrative perspective, only make settings that are coordinated
    with the business requirements and policies within the company
Document Lifecycle Management Overview Slide 2

It is important to understand the intention of DLM, since DLM is often mixed up with
HSM (Hierarchical Storage Management):
The idea of HSM is to have content quickly available when it is new or often used. In this case
the content is stored on a local disk; otherwise it is displaced to a slow medium like tape, where
storage is much cheaper than on hard disks. Some HSM implementations provide multi-level
displacement. But: Since the HSM mostly simulates a simple file system for an application, it is
always the HSM server which decides about displacement of content by its own heuristics.
The intention of DLM is that the application at least classifies the content to determine the
lifecycle. When dealing with business documents, only an application knows about requirements
concerning availability of content or legal guidelines. For this purpose, Archive Server (starting
with version 9.6) provides these mechanisms for DLM: deferred archiving (which could be
treated as a first step in the direction of "application controlled HSM"), retention handling,
storage reorganization (which is automatically needed when storing content with retention in
container files, to be covered later), and audit trails, which is an important monitoring
instrument in regulated scenarios.


Value of Records & Documents


- Records/documents are a valuable asset of a company
- Documents can become a potential liability for a company
- At the same time documents can become important for
  legal reasons
- Keeping documents beyond their official retention is often
  not desired by companies

[Figure: after a document is archived, its value for business declines over time,
while its value for legal purposes persists throughout the retention period]
Document Lifecycle Management Overview Slide 3



Retention Management Layers

- Livelink ECM - Enterprise Server (Records Management)
  - Handles application specific requirements
  - Knows the semantics of documents
  - Controls Retention Management
  - Triggers Retention Handling
  - Repository for all document related metadata

- Livelink ECM - Archive Server
  - Offers Retention Handling (retention classes, ...)
  - Reacts only to requests from leading applications
  - Exclusively controls the storage systems
  - Handles media backup, media migration

- Different types of storage with/without retention periods
  - Opticals (DVD, WORM, UDO)
  - Hard disk based Storage Systems (CAS, NAS, SAN, ...)
Document Lifecycle Management Overview Slide 4

Livelink Enterprise Server ("Livelink") can provide a Records Management module. It is
intended to support & trigger the retention handling of Archive Server from the Livelink
Enterprise Server.

DLM & retention settings should comply with company policies. From administrative
perspective, only make settings that are coordinated with the business requirements and
policies within the company.


Archive Server - Retention Handling


- Retention Handling
  - is provided by Archive Server
  - is triggered by leading application

- Leading application:
  - sets retention period
  - sets retention event
  - purges or destroys content

- Archive Server
  - assures that content can't be deleted
    within retention period
  - does not automatically purge content
  - all actions are monitored in audit
    trails

Document Lifecycle Management Overview Slide 5



Retention Handling - Exterior View
Document lifecycle (leading application view):

createDoc
retention period=365 days

- Protected: document cannot be modified / deleted
- Expired: document may be modified or deleted, but is not deleted
  automatically
- Retention date: time when the document protection ends

Document Lifecycle Management Overview Slide 6

Notes:
- Leading application is using the http API.
- Retention period is specified while document is being created.
- Retention date = creation date + retention period.
- Protection: no delete / modify.
- No dedicated action is taken at the time of expiration.
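The note "retention date = creation date + retention period" can be written out directly (dates below are hypothetical):

```python
from datetime import datetime, timedelta

def retention_date(creation: datetime, retention_days: int) -> datetime:
    # Retention date = creation date + retention period (in days).
    return creation + timedelta(days=retention_days)

def is_protected(now: datetime, ret_date: datetime) -> bool:
    # Protected until the retention date; afterwards the document is expired
    # (but NOT deleted automatically).
    return now < ret_date

created = datetime(2008, 7, 1)
ret = retention_date(created, 365)
print(ret)                                       # → 2009-07-01 00:00:00
print(is_protected(datetime(2009, 6, 30), ret))  # → True: still protected
print(is_protected(datetime(2009, 7, 2), ret))   # → False: expired
```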


Retention period
Retention period:
- Parameter for creation of a document or component
  - extension of create call
- May be set by the leading application
- If missing, the archive default value will apply
- Allowed values:
  - x > 0 (interpreted as number of days)
  - Special values: NONE, INFINITE, EVENT
  - Default value is No Retention

Document Lifecycle Management Overview Slide 7



Retention date

Retention date:
- attribute of the document
- set during the creation of the document (or the creation of the first
  component):
  retention date = creation date + retention period
- cannot be changed later
  - change only possible in certain scenarios
    - event-based retention, delayed classification, volume migration
- Stored in DS table ds_doc and in attribute file ATTRIB.ATR

Document Lifecycle Management Overview Slide 8

Retention Date Details:

- 4 byte unsigned integer
  - Range: 1970/01/01 00:00:01 to 2106/02/07 06:28:06 (UTC)
- Special values for NONE, INFINITE, EVENT
- Retention date will be truncated to the maximum possible value if the given retention
  period is too large
- Stored in DS table ds_doc and in attribute file ATTRIB.ATR
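The quoted range matches 4-byte unsigned Unix timestamps; notably, the stated maximum is 9 seconds short of the arithmetic limit 2^32 - 1 (2106-02-07 06:28:15 UTC), which would be consistent with the top values being reserved for the special markers. A small check (this reading is an interpretation, not official documentation):

```python
from datetime import datetime, timezone

# 2106-02-07 06:28:06 UTC corresponds to 2**32 - 10 seconds since the epoch.
MAX_RETENTION_TS = 2**32 - 10

print(datetime.fromtimestamp(MAX_RETENTION_TS, tz=timezone.utc))
# → 2106-02-07 06:28:06+00:00

def truncate(ts: int) -> int:
    """Retention dates beyond the representable maximum are truncated."""
    return min(ts, MAX_RETENTION_TS)

print(truncate(MAX_RETENTION_TS + 1_000_000) == MAX_RETENTION_TS)  # → True
```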


Protected documents

Retention protection:

Action                          Protected*    Expired
Delete document                 not allowed   allowed
Delete component                not allowed   allowed
Update component                not allowed   allowed
Update notes / annotations**    allowed       allowed
View document                   allowed       allowed

* in non-compliance mode the DS administrator may modify/delete documents locally
  (dsClient)

Document Lifecycle Management Overview Slide 9

When using Archive Server 9.6.1 with an installed patch EA096-078 or EA096-055, it is not
possible to delete a document when it has a retention protection. Even administrators using
command line tools are no longer allowed to delete the documents (independent of
compliance mode).
However, earlier and later Archive Server versions (e.g. version 9.6.1 with Patch 087 or version
9.7.1) return to the behaviour described in the slide.



Example for Retention - Time based

- Example: Invoices need to be stored for 10 years due to tax laws

[Figure: Stage 1 - the document is created in the Archive Server and the retention
period of 10 years starts; Stage 2 - the retention period expires]
Document Ufecycle Management Overview Slide 10

A retention period of 3650 days is not the exact value for 10 years but is used just as a simple
example. In real scenarios, you would also have to take into account such effects as
leap years.


Examples for Retention - Event & Time based

- Example: According to industry-specific regulations,
  product documentation needs to be kept 5 years after the
  product has been phased-out

[Figure: Stage 1 - the document is created in the Archive Server with event-based
retention; Stage 2 - when the product is phased out, the retention period of 5 years
starts; Stage 3 - the retention period expires]
Document Lifecycle Management Overview Slide 11



Deferred Archiving
- One Step archiving
  - The "classical way", long term storage
  - Using fixed retention period

- Two Step archiving: Deferred Archiving
  - If a leading application changes the content very often and has to prevent
    the content from being written to the backend storage.
  - If storage parameters (e.g. retention period) cannot be specified during
    creation in an early archiving scenario.

Document Lifecycle Management Overview Slide 12

With DLM there are new storage scenarios. Of course, you can simply archive content without
a controlled lifecycle. On the other hand, there is the deferred archiving feature, which allows
two step archiving. This mode is interesting in two cases. The first is a scenario where the
application deals with working copies which are changed often or sometimes deleted, and the
application (e.g. Tep) has to ensure that no content is written onto a read-only medium. This
has two aspects: the content would waste disk space, and the content could compromise
someone.
The other case is interesting when the application does not have enough information about the
document at creation time but has to provide a retention period, which definitely is a
creation parameter. In this case the application sets the retention to "event based" and uses
deferred archiving to specify the retention in a further step. Another scenario belongs
to this case: When archiving from SAP there is no way to add a retention parameter to the
ArchiveLink URL, so retention has to be set in a further step.
10-12 710

Compliance Mode

■ Turning on Compliance Mode
  - enables Audit Trail
  - nobody may delete archived documents (with enabled retention settings)

■ Without Compliance Mode (and with enabled retention settings)
  - leading applications may not delete archived documents
  - Archive Server administrators may delete archived documents

■ Recommended to enable Compliance Mode when using retention settings

Keep in mind that once set, Compliance Mode cannot be turned off!

Document Lifecycle Management Overview Slide 13

When using retention features for legal purposes, it is usually advisable to turn on Compliance Mode. Be aware that once you turn on Compliance Mode, you are not able to turn it off! Turn it on only when you are sure that you want this feature.

When using Archive Server 9.6.1 with patch EA096-078 or EA096-055 installed, it is not possible to delete a document that has a retention protection. Even administrators using command line tools are not allowed to delete such documents any longer (independent of compliance mode).
However, earlier and later Archive Server versions (e.g. version 9.6.1 with Patch 087 or version 9.7.1) revert to the behaviour described in the slide.

Document Lifecycle Management Overview 10-13


Exercise: Retention Settings

■ Retention period
  - Set retention period to 2 days
  - Archive a document
  - Check document info in dsClient

■ Try to delete document with retention setting

■ Repeat steps with Compliance Mode

Document Lifecycle Management Overview Slide 14

10-14 710

11 Archive Server Architecture


Main components of the Archive Server

Archive Server Architecture 11-1


Chapter Overview

■ Functional components of Archive Server
■ Software installation directory structure
■ Software packages

Archive Server Architecture Slide 2

11-2 710

Archive Server Components

Archive Server Architecture Slide 3

The Archive Server is made up of the components shown above. Separate server processes
for administration and monitoring contribute to the Archive Server's modular architecture.
The central part of the Archive Server is the DocumentService; it stores and provides
documents and their components.
Depending on what media are being used, documents are stored on hard disk, WORM, CD (≤ Archive Server 9.5), UDO (≥ Archive Server 5.5) or DVD partitions.
The WORM, UDO, CD and DVD partitions are handled by a separate sub-server called
STORM ("Storage Manager").
The Archive Server storage database, called DS (DocumentService Database), holds the
information about the archived documents and where they are stored.
The functions of the different components are explained on the following pages.

Only specific storage systems are supported for NAS/SAN connection or as "virtual jukebox"
writing ISO images.

Archive Server Architecture 11-3


Document Service (1): Tasks

■ Archive documents
  - Allocate document IDs
  - Buffer documents on hard disk
  - Maintain BLOBs and single-instance archiving
  - Queue requests for writing to optical disk
  - Sign documents with time stamps (optional)
  - Store document administration data in database

■ Write documents to optical media
  - Compress/encrypt document files
  - Prepare disk image for ISO media burning
  - Synchronize IXW media backup volumes asynchronously

■ Provide documents for reading
  - Keep frequently accessed documents in read cache
  - Minimize optical disk changes by reordering near-simultaneous access requests
  - Inflate/decrypt documents if needed
  - Perform client-driven text search in print lists

Archive Server Architecture Slide 4

The DocumentService, performing all document management-related tasks, is the core of the
Archive Server. Its functionality is so extensive that the above enumeration can only give a
summary of the most important aspects.

11-4 710

Document Service (2): Components



" Service processes


- Write component (WC)
" Archives documents
- Read component (RC)
• Handles document retrieval requests
- OS document pipeline (several services)
Maintains read cache
Prepares docu ments for transfer to client
- Additional helper services (bksrvr, dsaux, dsPerf, dssched)

II Programs invoked as jobs - examples:


- dsCD(. exe), dsWORM(. exe) ~ write documents to media in ISQ·, IXW pools
- dsHdsk(. exe), dsGS(. exe) ~ write documents to media in Single File pools
- bkupDS(. exe) ~ performs "asynchronous" media backup
- dSHdskRm(. exe) ~ removes expired data from disk buffer

II Interaction tools
- Archive Server Administration (various aspects)
- Command line tool dsClient
- Further, specialized command line tools

Archive Server Architecture Slide 5

Clients of the Archive Server access the two main services, write component (WC) and read
component (RC), separately through RPC or HTTP calls. When documents are transferred
through these protocols, they are split into chunks of 64 kByte each.
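The 64 kByte chunking happens inside the RPC/HTTP transfer layer, but the arithmetic can be illustrated with a plain split on a sample file (purely a sketch; the file name and size below are invented, and 64 KiB = 65536 bytes is assumed as the chunk size):

```shell
# Split a 200,000-byte sample file into 64 KiB chunks, as the transfer layer does.
tmp=$(mktemp -d)
head -c 200000 /dev/zero > "$tmp/component.bin"
split -b 65536 "$tmp/component.bin" "$tmp/chunk_"
ls "$tmp"/chunk_* | wc -l   # 4 chunks: three full 64 KiB chunks plus one remainder
```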
The write component allocates new document IDs either automatically when archiving a new document, or in advance if the document ID is needed during creation of a document and before it can be transferred to the archive (the early archiving scenario).
When archiving documents, a pool name must be specified which can be determined from the administration server, given the logical archive ID and the component type. All documents are stored on hard disk first, where they are immediately available for retrieval. When archiving into an IXW or ISO pool, a write request to the specific type of media is created at the same time.
The read component (up to four instances can run simultaneously) returns a list of component names when given the document ID, and then delivers the requested components to the client.
Read document components can automatically be cached, as there is a good chance that a document will be requested again in the near future. Apart from this mechanism, cache requests can be sent to the read component when a document will be retrieved soon. That way, the DocumentService can reorder requests for uncached documents to minimize the number of disk change operations and make best use of the available drives.
Apart from the programs mentioned above, the DocumentService provides several utility
programs for performing special administration tasks, especially troubleshooting. dsClient is
the most interesting one of them; useful applications of it are described in different chapters of
this course material, and a summary is given in appendix Archive Server Command Line
Tools.
The programs which are run as jobs - as mentioned above - are not normally invoked
manually (although this is possible). Instead, this is accomplished by the
AdministrationServer's job scheduler; see page AdministrationServer later on in this chapter.

Archive Server Architecture 11-5


Storage Manager (STORM)
(if installed)

■ Tasks
  - Manage media in jukeboxes
    • Maintain jukebox inventory
    • Control media movements by jukebox robot
  - Communicate with storage systems
  - Provide access to media contents
    • Communicate via SCSI with disk drives
    • Make media accessible for Document Service via NFS protocol
    • Burn ISO images onto empty media
    • Maintain proprietary file system structure on WORM
    • Mirror structure data of not-yet-finalized WORM media into files on hard disk

■ Consists of services:
  - jbd(.exe) ("Jukebox daemon")
  - jsd(.exe) (communication helper for finalization requests)

■ Interaction tools
  - Jukebox handling tools in the Archive Server Administration
  - Command line tool cdadm

Archive Server Architecture Slide 6

Documents and document components may be stored on hard disk, WORM, CD, UDO or
DVD. Writing to and reading from hard disk media is handled directly by the Document
Service.
Read and write access to CD, DVD, UDO and WORM as well as managing the jukebox inventory are handled by the jukebox server STORM ("Storage Manager"). It is possible to install Archive Server without STORM if only hard disk media are used.
STORM has to use a proprietary WORM file system type since there is no industry standard
available. That way, IXOS developers took the chance to invent a file system optimized for the
purpose of high-performance, high-security archiving.
Depending on the specific storage system used, communication between the Archive Server
and the storage system may be realized differently.


11-6 710

Administration Server

■ Tasks
  - Keep Archive Server configuration information
    • Everything maintainable in the Livelink Enterprise Archive Server Administration
    • Keep track of structure changes on related Archive Servers
  - Deliver configuration information on request to Livelink clients
    • Livelink Archive Windows Client configuration profiles
    • Archive Modes for Enterprise Scan
    • IXOS eCONtext for Applications (UniversalArchive) GUI configuration ("ALI")
  - Execute scheduled jobs
  - Provide administrative access to other server components
    • Via Archive Server Administration

■ Consists of single service: admsrv(.exe)

■ Interaction tool: Archive Server Administration

Archive Server Architecture Slide 7

While the DocumentService manages everything concerning documents - i. e. the data kept
by the Archive Server -, the AdministrationServer deals with the Archive Server itself: its
customized structure, users, server relationships, media devices, and timely execution of
regular tasks.
Moreover, the AdministrationServer is contacted by clients to retrieve certain types of
configuration information relevant for them. For example, the scanning application Livelink
EnterpriseScan downloads the so-called Archiving Modes (containing definitions of allowed
storage scenarios) upon startup.
The AdministrationServer consists of a single server process called admSrv (Unix) or admsrv.exe (Windows) and uses the storage database (see later in this chapter) for safely storing the configuration data it maintains.

Archive Server Architecture 11-7


DocumentPipeline

■ Preprocesses certain (not all) documents before passing them on to the DocumentService
■ Stores documents in the DocumentPipeline directory during processing
■ Exact configuration depends on used IXOS product
■ Consists of services:
  - Document tools ("DocTools")
    • Each performs a single processing task
  - Document pipeliner dp(.exe)
    • Coordinates processing work

■ Interaction tools
  - DocumentPipeline Info
    • Controlling and monitoring GUI
  - Command line tool dpctrl
Archive Server Architecture Slide 8

Some kinds of documents require certain preprocessing steps before they are ready to be archived. An example for this: in order to enable an archive user displaying a retrieved SAP R/3 print list to quickly jump to a specific print list page, a page index - mapping page numbers to byte offsets in the print list file - is stored together with the original list; creation of this page index is one DocumentPipeline processing step.
Examples for DocTools (for various storage scenarios) are:
page_idx   creates print list page index
doctods    passes document over to DocumentService
cfbx       sends confirmation message to SAP R/3
DBinsert   stores document attributes in retrieval database

11-8 710

Volume Migration Server



■ Enqueues documents on old media for being copied to new media (= media migration)
  - Actual copying is done by write job of media pool

■ Consists of single service: volmig(.exe)

■ Interaction tools
  - Utilities in Archive Server Administration
  - Command line tool vmclient

Archive Server Architecture Slide 9

See chapter Media Migration for more information about the Volume Migration server.

Archive Server Architecture 11-9


Monitor and Notification Servers

■ Monitor server
  - Polls status information from other server components
    • Process status
    • Available storage space
    • Processing faults
  - Consists of services:
    • ixmonSvc / ixmonsvc.exe (master process)
    • ixmonClnt / ixmoncln.exe (monitor agents, normally 3)
  - Status display tools
    • Archive Server Monitor (≤ 9.5.0)
    • Archive Server WebMonitor
    • Command line tool ixmonTest (Unix), ixmontst.exe (Windows)

■ Notification server
  - Raises alert upon system events, e.g.
    • Error conditions
    • CD, DVD, UDO has been finished
  - Alert types: e-mail, log file message, script execution, SNMP trap
  - Consists of single service notifSrvr.exe (Unix: jre process)
  - Configuration tool: Notifications in Archive Server Administration

Archive Server Architecture Slide 10

The monitor server gathers information about the status of relevant processes, filesystem,
database sizes, and available resources. The Archive Server Monitor client can then retrieve
and display this information.
Individual, so-called monitor agents acquire the data by accessing the monitored resources via RPC, SQL queries or operating system calls.

The notification server supplements the monitoring functionality by actively raising alerts in case of certain system events. This has to be configured within the Archive Server Administration; see chapter Monitoring the Archive Server for more information.

11-10 710

HTTP Interface Server



■ Access via http://<archiveserver>:4060/

■ Tasks
  - Web-based administration interface for
    • Initial Archive Server configuration (right after Archive Server installation)
    • Monitoring (→ WebMonitor)
    • Accounting data management
  - HTTP communication mediator for
    • AdministrationServer
    • Customer-specific project implementations
    • Not for DocumentService
      - DS has a separate, built-in HTTP interface (ports 8080, 8090)
  - Delivery of Archive Java Viewer

■ Consists of services:
  - Tomcat servlet engine (Windows: tomcat.exe, Unix: jre process)
  - loglimiter(.exe) (helper for truncating Apache logfiles; see ESC)
  - purgefiles(.exe) (helper for cleaning up Tomcat logfiles)

Archive Server Architecture Slide 11

In Archive Server ≤ 9.5, the Apache HTTP server is also installed. This is no longer needed with Archive Server ≥ 9.6.

The HTTP interface mentioned above was newly introduced in Archive Server 5.0. HTTP access to stored documents, however, has already been possible before: the DocumentService's built-in HTTP interface (listening on ports 8080 and 8090) was introduced in version 3.1.
In addition to the interface address mentioned above, SSL-based communication is also
possible via:
https://<archiveserver>:4061/

Find the mentioned ESC article The loglimiter binary as:


https://esc.ixos.com/l071753340-241

Archive Server Architecture 11-11


Storage Database DS

■ Based on Oracle or MS SQL Server

■ Data stored in the database
  - By Document Service
    • "Technical" document attributes
    • Media attributes
  - By Administration Server
    • Configuration data of logical archives and pools
    • Job data: schedule, parameters, execution protocol
    • Relationships to other Archive Server and SAP systems
    • Other configuration settings

■ DBMS-specific interaction tools
  - Oracle: sqlplus
  - MS SQL Server: SQL Server Manager, isql/w

Archive Server Architecture Slide 12

Data stored in the DS database by the DocumentService (mentioned above) includes:


Document-related data:
  - Access path (directory hierarchy) on storage media
  - Number, type, and file names of document components
  - Up to three different storage locations of a document component, e.g. in the disk buffer and on WORM
  - Additional attributes, e.g. date of last change (i.e. change/addition of notes or annotations)
  - Write requests for documents not yet written to optical media
Storage media-related data (for each storage volume):
  - Label
  - Type (HD, ISO, WORM)
  - Access path
  - Capacity / free space
  - Status (e.g.: full, write-locked, offline)
  - Assignment to media pool/disk buffer
In case of DS database loss, the document and media-related data can be recovered
completely from the storage media.

11-12 710

Product-specific Components

Some archive-based OT products require further Archive Server components:

■ Database table spaces DMS / UMS
  - needed e.g. for PDMS
  - can be optionally installed on Archive Server DBMS
  - DMS provides the Context Server DB for PDMS
    • contains metadata for PDMS solutions
  - UMS provides the User Management DB for PDMS
    • contains user management specific information

■ Email Composer
  - needed e.g. for Email Archiving for MS Exchange
  - Single Instance Archiving requires that email body & attachments are split
  - Email Composer puts email body & attachments back together for display

Archive Server Architecture Slide 13

Depending on the product(s) that an Archive Server serves, additional components have to be
installed on the server.
In addition to the components mentioned above, some products comprise interface
components to their leading application that are normally installed on a different machine but
may optionally be operated on the Archive Server as well. This applies, for example, to the
Email Connector service of Email Archiving.

The PDMS product is using the Context Server which might also result in specific components
on the Archive Server.

Those product-specific server components are discussed in the product-related administration


courses of the Learning Services course portfolio.

Archive Server Architecture 11-13


Archive Server Installation Directory Tree

Name of root directory


may be different,
defined on installation

Archive Server Architecture Slide 14

The illustrated directory structure is common to all Archive Server installations with a release ≥ 4.0 (the structure of former versions is significantly different). The directory layer directly below the installation root is completely displayed above, whereas the chart shows only an excerpt of the deeper layer(s).
Some annotations on directories:
bin contains all executables needed by the Archive Server system itself as well as by the administrator (e.g. command line tools). For this reason, it is best to have this directory included in the shell command search path.
pkg contains the Archive Server software as copied from the installation medium. The software is subdivided into packages (to be discussed on the following pages), each of which is represented here as a separate subdirectory (e.g. ADMS, DS, SCSI, MONS). Parts of the material contained herein are copied to other parts of the directory tree during installation, e.g. binaries to bin, start/stop scripts to rc, and config file templates to config.
config contains all kinds of configuration information (for Windows: apart from the data held in the Registry). This comprises package-specific configuration (e.g. package STORM) as well as general setup settings (e.g. subdirectories dpconfig, monitor, servtab).
prj contains Archive Server extensions developed for customer-specific archiving projects. On a standard installation, it is empty.
opt contains third-party software that comes with and is needed by the Archive Server, e.g. the perl scripting engine.
var is the only directory whose contents are changed by the Archive Server system itself. The most prominent subdirectory is log, holding all Archive Server system log files; other subdirectories are used by Archive Server components for internal purposes.
w3 contains files - HTML pages and CGI scripts - that constitute the Archive Server's HTTP interface.

In addition, Archive Server ≥ 9.6 also contains a folder webapps that contains java libraries and xml files.

11-14 710

Archive Server Software Packages (1)



■ BASE → Spawner service program
■ DS → Document Service
■ ADMS → Administration Server
■ MONS → Monitor Server
■ NOTS → Notification Server
■ DBORAS, DBMSQS → DS database interface (Oracle, MS SQL)
■ STORM → Storage Manager
■ DP → general Document Pipeline elements
  - Document pipeliner
  - DocTools common to all pipeline variants

Archive Server Architecture Slide 15

All delivered Archive Server software is bound into packages, most (but not all) of which
comprise one complete functional component of the Archive Server or a client; this applies, for
example, to packages DS, ADMS, and STORM. On the other hand, the DocumentPipeline is
made up of several packages, depending on the installed Archive Server product and the
specific storage scenarios that the pipeline is expected to perform.
Each installed package resides in its own subdirectory of the pkg directory. However, parts of
the package contents is moved into other locations, e. g. configuration elements go into the
config directory structure.
The above enumeration lists only the most interesting Archive Server packages. There are many more, some of which are employed only at extended server installations, e.g. if machine-generated documents are to be stored via the so-called COLD DocumentPipeline. The list here represents some of the most important packages.

Archive Server Architecture 11-15



Archive Server Software Packages (2)



■ SCSI → IXOS SCSI driver genscsi
  - Used for all relevant devices:
    • Jukeboxes (drives, robots)

■ Administration tools
  - ADMC → Archive Server Administration
  - MONC → Archive Server Monitor
  - DPIN → Document Pipeline Info

■ Client packages (not part of the Archive Server)
  - CWIN → Common client components
  - SCAN → Enterprise Scan
  - VIEW → Archive Windows Viewer

Archive Server Architecture Slide 16

The SCSI package provides a standard interface to all SCSI devices and contains all hardware
specific and operating system specific software. It provides an interface to the optical disk
drives (used by WORM, UDO and DVD).
While the Unix versions utilize the operating system's device drivers and support all available
SCSI interfaces, the Windows driver interfaces directly with the hardware and requires certain
supported SCSI controllers.
Vendor provided SCSI drivers for Windows (ASPI, CAM, or similar) are neither required nor
supported by Open Text. However, access of hard disks is done using the operating system
driver and the choice of SCSI controllers for hard disk access is not affected.

11-16 710

12 Where to Find What


Locations of configuration and data on the Livelink
Enterprise Archive Server Architecture

Where to Find What 12-1



Chapter Guide

■ Ways to access configuration information

■ Finding specific parts of the server installation
  - Installed software
  - Log files
  - SCSI device files

■ Locations for stored documents and data
  - Buffered/archived documents
  - Cached documents
  - Database files

Where to Find What Slide 2

12-2 710

Configuring the "storage dynamics" in the Archive Server Administration

■ Available during Archive Server uptime
■ Configuration stored in database DS
■ See manual Archive Server Administration Guide about usage details

Print or save configuration report - useful for:
  • Your own documentation
  • Correspondence with IXOS technical support

Where to Find What Slide 3

Most aspects of the Archive Server behavior - i.e. the "storage dynamics" mentioned in the slide title - can be configured in the tabs of the Archive Server Administration. Prominent examples include:
• Logical archives
• Media pools
• Disk buffers
• Job scheduling
To access these configuration items, no special tool is necessary: the graphical Archive Server Administration is the dedicated tool for this purpose.

To print or save (as a file) a report of your Archive Server configuration for documentation
purposes, follow these steps:
1. In the Archive Server Administration, click the "printer" button (as indicated above).
2. In the Print report dialog, choose which elements of the configuration shall be
documented. For a complete documentation, choose everything except Job protocol
and Alerts.
3. To print the report immediately, click button Print.
To save the report in a file, click button Preview. In the following Print preview ...
window, use the Save As (= "floppy disk") button to invoke the save function.

Find the mentioned manual Archive Server Administration Guide in ESC as: Products ECM Suite / Archive Server / Product Documentation:
https://esc.ixos.com/l084264247-891

Where to Find What 12-3



Maintaining Configuration Variables: The Server Configuration Page

■ Full access to
  - configuration variables
  - most dynamic log level settings

■ Available during Archive Server uptime

■ Changes are recorded
  - Can be undone later

■ See admin manual Configuration Parameters about usage details

■ Important options:
  - Display undefined values
  - Display runtime values

[Screenshot: configuration tree with entries such as [P] Administration Client (ADMC), [P] Administration Server (ADMS), [P] General Installation Variables, [P] Log file configuration, [P] Jobs and Alerts, [P] Default Values for Archives, [P] Default Values for Pools, [P] Security settings, [P] Notification server messages, [P] Configuration for COLD Pipeline (COLD), [P] Configuration for EXDB Pipeline (EXDB), [P] Configuration for FORM Pipeline (FORM), [P] Configuration for XSL Pipeline (XMLP), [P] DocTools for R/3 communication (R3LK)]

Where to Find What Slide 4

• This possibility of accessing most parts of the Archive Server configuration remotely was introduced in version 5.0.
• For the sake of later undoing, all configuration changes are recorded in files in the <IXOS_ROOT>/var/cfgbak directory. Undoing is then performed from the Server Configuration dialog, menu File → Load configuration saved on ...
• A number of XML files defines how the presentation of configuration variables is arranged (structure and descriptive texts). If, for example, an installed project requires administrative access to non-standard configuration information, an expert can change or add to this presentation setup. The files are held in the directory <IXOS_ROOT>/config/xml/*.xml.
• The display options mentioned above govern details about what is actually displayed in the configuration structure. You can set them in the View menu.
• Some configuration settings may be altered while the system is running but without storing them permanently; such settings will be lost when the system is restarted the next time. Specifically, this applies to dynamic log level settings (see chapter Logfiles and Loglevels for more information). For those parameters whose stored value differs from the currently effective value, the Display runtime values option decides which one of the values will be displayed.
• About the meaning of the item prefixes in the structure display:
  [P] Persistent; value is saved in Registry / setup file, but service(s) need to be restarted to activate it
  [T] Temporary; value is sent to service and accepted without restart, but after restart of the service the former (persistent) value is used
  [B] Both; value is effective immediately and permanently
• Find the mentioned administration manual Archive Server Configuration Parameters in ESC as:
  https://esc.ixos.com/l106224732-381

12-4 710

Storage of Configuration Variables: Registry (Win), Setup Files (Unix)

■ Available also during downtime of Archive Server services

■ Changes are not logged
  - Use with care!

■ See admin manual Configuration Parameters about available parameters and their meaning

Unix: configuration file /usr/ixos-archive/config/setup/[...].Setup

Where to Find What Slide 5

Most parts of the "basic" Archive Server configuration (i.e. not related to the actual arrangement of logical archives, pools, disk buffers, etc.) are stored as lists of configuration variables:
On a Windows-based Archive Server: in the Windows Registry, branch HKEY_LOCAL_MACHINE\SOFTWARE\IXOS\IXOS_ARCHIVE.
Use the Registry Editor (command regedit) to view and maintain the configuration.
On a Unix-based Archive Server: in configuration files /usr/ixos-archive/config/setup/<comp>.Setup, where <comp> is the IXOS-eCONserver component configured by the file.
Any text editor (e.g. vi) can be used to edit the configuration.
No matter which OS platform your Archive Server is based on, the total set of its configuration variables is subdivided according to the elements of the Archive Server architecture (see chapter Archive Server Architecture). Examples are:
• DS     DocumentService
• ADMS   AdministrationService
• R3LK   SAP R/3 communication DocTools of the DocumentPipeline
A special configuration item is COMMON, containing variables used by more than one specific Archive Server component.
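On a Unix server, a quick way to check a single variable is to grep the component's setup file. The sketch below uses a mock file; the variable name DS_CACHE_PATH appears later in this chapter, but its value here is invented:

```shell
# Mock a DS.Setup file and extract one variable from it.
mock=$(mktemp)
printf 'DS_CACHE_PATH=/usr/ixos-archive/var/cache\nLOG_LEVEL=3\n' > "$mock"
# On a real server you would query the actual file instead:
#   grep '^DS_CACHE_PATH' /usr/ixos-archive/config/setup/DS.Setup
grep '^DS_CACHE_PATH' "$mock" | cut -d= -f2
```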

Where to Find What 12-5


Structured Configuration Files

■ Base path:
  - Win: <IXOS_ROOT>\config
  - Unix: /usr/ixos-archive/config

■ Contain structured information not easily representable by the Windows Registry

■ Normally no need to edit them directly
  - Set up automatically during Archive Server installation
  - Maintained implicitly via Archive Server Administration

■ Examples:

Where to Find What Slide 6

The configuration components mentioned above hold the following types of configuration information:
Spawner configuration: Which processes to start in which order?
Monitor service configuration: Which processes to observe? What parameters to display?
DocumentPipeline configuration: Which DocTools to chain into which pipeline threads?
Scripted routines: e.g. for periodic jobs in ADMS, it is possible to run your own scripts
STORM configuration:
  - Configuration parameters (e.g. loglevels)
  - WORM filesystem data files
  - Connected storage systems
The directory placeholder <IXOS_ROOT> - mentioned above - shall be replaced by the path defined as the root directory of the Archive Server software installation tree (see page Installed Software later in this chapter).

12-6 710

Chapter Guide

■ Ways to access configuration information

■ Finding specific parts of the server installation
  - Installed software
  - Logfiles
  - SCSI device files

■ Locations for stored documents and data
  - Buffered/archived documents
  - Cached documents
  - Database files

Where to Find What Slide 7

Where to Find What 12-7



Installed Software

■ Archive Server and clients
  - Win: See Registry entry HKLM\SOFTWARE\IXOS\IXOS_ARCHIVE\IXOS_ARCHIVE_ROOT
  - Unix: /usr/ixos-archive (= symbolic link to true installation tree)

■ Database system
  - Oracle
    • Win: See Registry entry HKLM\SOFTWARE\ORACLE\ALL_HOMES\IDO\PATH
    • Unix: retrieve with su - oracle -c "echo $ORACLE_HOME"
  - MS SQL Server
    • See Registry entry HKLM\SOFTWARE\MICROSOFT\MSSQLServer\Setup\SQLPath

Where to Find What Slide 8

The above mentioned scheme for determining the Archive Server software location holds for
all types of an Archive Sever system installation: server and clients.
Configuration locations for the mentioned items:
Database system Oracle:
Server Configuration: Document Service (DS) → Directories → Cache path (...)
Registry (Win): ...\DS\DS_CACHE_PATH
Setup file (Unix): DS.Setup, variable DS_CACHE_PATH

12-8 710


■ Archive Server logfiles


- Logfile directory
• Win: <IXOS_ROOT>\var\log
• Unix: /usr/ixos-archive/var/log
- Logfile names: mostly equal or similar to program names.
Examples:
• Administration service: admsrv → admSrv.log
• Read component: dsrc1 → RC1.log
• STORM: jbd → jbd.log

■ Database system


- Oracle alert log
• Win: %ORACLE_HOME%\rdbms\trace\alert_DS.log
• Unix: $ORACLE_HOME/rdbms/log/alert_DS.log
- MS SQL Server log
• Directory: <SQL_SERVER_ROOT>\LOG
Files: ERRORLOG current log file
ERRORLOG.1 to ERRORLOG.6 former log files

Where to Find What Slide 9

The directory placeholder <IXOS_ROOT> shall be replaced by the path defined as the
root directory of the Archive Server software installation tree (see earlier in this
chapter).
The basename of STORM log files, jbd, is derived from STORM's secondary name
Jukebox Daemon.
The Unix environment variable ORACLE_HOME is usually only defined for user oracle;
to retrieve the alert log file as mentioned above, you should switch to this user ID first
(using the 'su - oracle' command).
The directory placeholder <SQL_SERVER_ROOT> shall be replaced by the path
defined as the root directory of the MS SQL Server software installation tree (see earlier
in this chapter).
The former SQL Server log files are numbered according to their age, i.e.
ERRORLOG.1 is the most recent, ERRORLOG.6 the oldest one. As soon as a new log is
to be created, the oldest one is deleted, the numbering of the remaining ones is shifted
by one, and ERRORLOG is renamed to ERRORLOG.1.
The HTTP-based logfile access tool (only Archive Server ≤ 9.5)
http://<archiveserver>:4060/cgi-bin/tools/log.pl
provides some additional useful possibilities:
- List only the tail of a file
- Filter for log message types (error, warning, information, ...)
- Filter for arbitrary search string
It is possible to use the Perl scripts from an older version, but check your security
environment!

Where to Find What 12-9


SCSI Device Files

■ Used for accessing SCSI devices


- Single media drives (≤ 5.0)
- Jukeboxes: drives, disk change robot
- Scanners (Client version ≤ 5.1)

■ Win: \\.\p?b?t?,?
  p? = SCSI host adaptor ("port") no.
  b? = bus number of adaptor (always '0')
  t? = SCSI ID
  ,? = LUN (logical unit no.)

■ Unix: /dev/iXOS_SCSI?/?
  first ? = SCSI host adaptor no.
  second ? = SCSI ID

■ Query with IXOS tool scsidevs:


D:\>scsidevs
scsi \\.\p0b0t0,0 is TOSHIBA's CD-drive "CD-ROM XM-6302B"
scsi \\.\p1b0t0,0 is SEAGATE's Disk "ST39140W"

Where to Find What Slide 10

The command line tool scsidevs, delivered as part of the Archive Server, is very handy for
getting information about connected SCSI devices. The SCSI device file specification it
displays can be entered exactly the same way in the STORM (storage manager) configuration
files for storage systems.
You can retrieve more detailed information about a certain SCSI device with the command
scsidevs -full scsi <scsi_address>
e.g. scsidevs -full scsi \\.\p0b0t0,0
where <scsi address> is the device's SCSI address as described above.
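In scripts, the device file specifications can be pulled out of a scsidevs listing with standard text tools. Since scsidevs itself only exists on an Archive Server, the sketch below runs on a captured copy of the sample output shown above; on a real server you would pipe the scsidevs output in directly:

```shell
# Extract the SCSI device file specification (second column) from a
# captured scsidevs listing. The sample output is taken from the
# slide above; the parsing rule (lines starting with "scsi") is an
# assumption about the output format.
sample_output='scsi \\.\p0b0t0,0 is TOSHIBA'\''s CD-drive "CD-ROM XM-6302B"
scsi \\.\p1b0t0,0 is SEAGATE'\''s Disk "ST39140W"'

addresses=$(printf '%s\n' "$sample_output" | awk '$1 == "scsi" { print $2 }')
printf '%s\n' "$addresses"
```

The extracted addresses can then be fed to scsidevs -full or inquiry, or entered into the STORM configuration files.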

To view the SCSI device properties use:


inquiry <scsi_address>
e.g. inquiry \\.\p0b0t0,0

12-10 710

Chapter Guide

■ Ways to access configuration information

■ Finding specific parts of the server installation


- Installed software
- Log files
- SCSI device files

■ Locations for stored documents and data


- Buffered/archived documents
- Cached documents
- Database files

Where to Find What Slide 11

Where to Find What 12-11


Globally defined Storage Locations

■ Cache partitions
- Server Configuration
- Registry / Setup file

■ DocumentPipeline directory ("DPDIR")
- Server Configuration
- Registry / Setup file

■ Batch import directories for COLD, for EXDB
(only for certain IXOS products)
- Server Configuration
- Registry / Setup file
- May be overridden by project-specific enqueue job arguments

■ ISO burn buffer
- Server Configuration
- Registry / Setup file

Where to Find What Slide 12

The hard disk storage locations mentioned above are defined globally per Archive Server, i.e. there is
- for example - no separate DocumentPipeline directory for each logical archive. Usually, those
locations are specified at Archive Server installation time and are rarely changed afterwards.
Configuration locations for the mentioned items:
Cache partitions
Server Configuration: Document Service (DS) → Directories → Cache path (...)
Registry (Win): ...\DS\DS_CACHE_PATH
Setup file (Unix): DS.Setup, variable DS_CACHE_PATH
DocumentPipeline directory
Server Configuration: General Archive Server settings (COMMON) → DP settings
→ Directory for Temporary Storage of Documents
Registry (Win): ...\COMMON\DPDIR
Setup file (Unix): COMMON.Setup, variable DPDIR
COLD batch import directory (present for some IXOS products only)
Server Configuration: Configuration for CODB Pipeline (CODB) → General
Install. Variables → External Directory for CODB pipeline
Registry (Win): ...\CODB\DATA_DIR
Setup file (Unix): CODB.Setup, variable DATA_DIR
EXDB batch import directory (present for some IXOS products only)
Server Configuration: Configuration for EXDB Pipeline (EXDB) → General
Install. Variables → External Directory for EXDB pipeline
Registry (Win): ...\EXDB\EXT_DIR
Setup file (Unix): EXDB.Setup, variable EXT_DIR
ISO burn buffer
Server Configuration: Document Service (DS) → Media configuration
→ ISO settings → Directory where cd/iso trees are built
and → Directory where cd/iso images are built
Registry (Win): ...\DS\CDDIR, ...\DS\CDIMG
Setup file (Unix): DS.Setup, variables CDDIR and CDIMG

12-12 710

Storage locations defined per Archive / Pool / Medium

R/3 System configuration:
(Barcode archiving only)

Where to Find What Slide 13

The chart above illustrates how to find data storage locations which are specified individually
per logical archive, media pool, or disk buffer; retrieval is done in the Archive Server
Administration, tab Servers.
To find out the hard disk partition where a disk buffer volume resides:
1. Click the disk buffer in question under Buffers in the left-hand display area
2. See the name of the disk buffer volume in question in the Name column of the
Partitions list (right-hand display area)
3. Click the HardDisk icon under Devices in the left-hand display area
4. Look up the retrieved buffer volume name in the Partitions list (right-hand display
area) and see the corresponding Mount Path entry
To find out the disk partition where a hard disk pool volume resides:
1. Click the hard disk pool of the logical archive in question in the left-hand display area
2. See the name of the volume in question in the Name column of the Partitions list
(right-hand display area)
3. Look up the drive letter or mount path in the Devices tab as described for disk buffer
volumes (above)

Where to Find What 12-13



STORM's WORM Management Data



■ WORM file system database


- Consisting of:
• Hash files
• Inode files

Server Configuration
Config file server.cfg

■ Temporary storage for WORM writing


Server Configuration
Config file server.cfg

■ STORM backup directory/-ies


Server Configuration
Config file server.cfg

Where to Find What Slide 14

Configuration locations for the mentioned items:


WORM filesystem database
Server Configuration: Storage Manager (STORM) → WORM Filesystem
→ (all subentries) → Path and file name of ...
Config file .../config/storm/server.cfg:
section ixworm, parameters ixw.../file.../path;
see all these entries for the names and locations of all involved
files
Temporary storage for WORM writing
Server Configuration: Storage Manager (STORM) → WORM Filesystem
→ Data File Path
Config file .../config/storm/server.cfg:
section ixworm, parameter DataFilePath
STORM backup directory/-ies
Server Configuration: Storage Manager (STORM) → Backup STORM control files
→ table column Path to Backup Destination
Config file .../config/storm/server.cfg:
section backup, parameters dest*/path
If relative paths are used in the server.cfg file, these are meant relative to the path where the
server.cfg itself resides.

For explanations about the meaning of the mentioned storage locations, see other course material
chapters:
WORM filesystem database → The WORM Filesystem
Temp. storage for WORM writing → Document Processing by the Livelink Enterprise
Archive Server
STORM backup directory/-ies → Backing up the Archive Server

12-14 710

Oracle Database Files



■ Data files


• Query database: select name from v$datafile;

■ Control files


• Query database: show parameter control_files;
• Config file initECR.ora

■ Redo log files


• Query database: select member from v$logfile;

■ Archive redo log files
• Query database (as system): archive log list
• Config file initECR.ora

Sample sqlplus session:

$ sqlplus ecr/ecr@ecr_myarchivesrv
SQL> select name from v$datafile;

NAME
/ixos/oradata/ECR/dat/ds2_system.dbf
/ixos/oradata/ECR/dat/ds2_data1.dbf
/ixos/oradata/ECR/idx/ds2_index1.dbf

SQL>

Where to Find What Slide 15

In Archive Server ≥ 9.6, there is a new database instance ECR that contains all tables related to the
Archive Server, including the DS storage tables.
In Archive Server ≤ 9.5, there is no ECR database instance. Instead, the DS has its own database
instance called DS. Logon to the DS database instance with the following command:
sqlplus 'ecr/ecr@doc_econserver'
The sources for the information mentioned above are the database itself and the database instance
configuration file initECR.ora.
To query the database for the mentioned information, do the following on a command line on your
Archive Server:
1. (Unix platforms only:) Become the database user by typing: su - oracle
(If the Oracle system is run under a different user name, take that one instead of oracle.)
2. Connect to the database with the command:
sqlplus <dbuser>/<password>@ecr_<servername>
Replace the <...> placeholders by the values used in your environment. Using the standard
database user name and password, this would look like:
sqlplus ecr/ecr@ecr_<servername>
(The query for the archive redo logfiles cannot be done with the "normal" database user ixds; for
this, you have to connect to the database as internal.)
3. Perform the query needed for retrieving the desired information (see slide above).
4. When done, exit sqlplus with the exit command.
initDS.ora is the configuration file of the database instance. By default, it can be found in subdirectory
config/oracle of the Archive Server installation directory. (On a non-standard Archive Server
installation, look up the path and name of this file in the Server Configuration, branch Oracle Server
Database (DBORAS) → Server Database Parameters ... → Settings of the Server database → Settings
of DB parameters, entry Parameter file for DB.)
On an Archive Server ≤ 4.2, the file is located in the Oracle installation directory (see page Installed
software earlier in this chapter), subdirectory Database (Windows) or dbs (Unix), respectively.
In the initDS.ora file, the configuration items mentioned above can be found:
Archive redo log files: Parameter log_archive_dest
Control files: Parameter control_files
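Since the init<SID>.ora file is plain text, both parameters can also be extracted in a script, for example when collecting the file list for a backup. The parameter file fragment below is a constructed sample (the actual paths on your server will differ):

```shell
# Pull log_archive_dest and control_files out of an init<SID>.ora
# file. The file content here is an invented fragment.
ora_file=$(mktemp)
cat > "$ora_file" <<'EOF'
# initECR.ora (sample fragment)
control_files = (/ixos/oradata/ECR/ctl/control01.ctl, /ixos/oradata/ECR/ctl/control02.ctl)
log_archive_dest = /ixos/oradata/ECR/arch
EOF

ora_param() {   # usage: ora_param FILE PARAMETER
    # print the value after "PARAMETER =" (whitespace around '=' optional)
    sed -n "s/^$2[[:space:]]*=[[:space:]]*//p" "$1"
}

ora_param "$ora_file" log_archive_dest
ora_param "$ora_file" control_files
```

Note that this reads only simple one-line assignments; multi-line parameter values would need more parsing.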

Where to Find What 12-15



MS SQL Server Database Files



■ Query for database files (sample isql/w session):

exec sp_helpfile

- Issued within isql or isql/w


- Using database user ecr
• Archive Server ≤ 9.5: ixds

■ Relevant for database ds


and for system databases:
- master
- msdb
Files normally located in <SQL_DATA_ROOT>\DATA

Where to Find What Slide 16

To query for data files using the SQL Server query GUI (illustrated above):
1. Start the SQL Server Query Analyzer ("isql/w") by choosing the Start menu item
Programs → Microsoft SQL Server 7.0 → Query Analyzer.
2. Log on to the Archive Server's database system as database user ecr (whose default
password is ecr).
3. Select the database to query within text field DB.
4. Issue the query in the main window area and press F5 to execute it.
If you prefer to use the command line query tool isql instead:
1. Open a command prompt window and enter the <SQL_SERVER_ROOT>\BINN
directory.
2. Perform the following session:
D:\MSSQL\BINN> isql -S <servername> /Uecr /P<ecr-passwd>
1> use <database name>
2> exec sp_helpfile
3> go
<database name> should be one of the database names mentioned above.
3. Exit from isql with the exit command.
For database backup purposes, it is important to get hold of the files belonging to all databases
mentioned above.
(The directory placeholder <SQL_DATA_ROOT> shall be replaced by the path defined as the
root directory for MS SQL Server data; this can be found as Windows registry entry
HKLM\SOFTWARE\MICROSOFT\MSSQLServer\Setup\SQLDataRoot and is by default
equal to the SQL Server installation root discussed earlier in this chapter.)

12-16 710

Exercise: Find elements of the Archive Server installation
■ Make yourself familiar with the
Archive Server installation on
your classroom computer
■ Find various locations of
document/data storage
- ISO burn buffer
- WORM file system database
- Disk buffer volume(s)
- Database files

Where to Find What Slide 17

Where to Find What 12-17


12-18 710


13 Archive Server Startup and Shutdown


Starting and stopping Archive Server safely

Archive Server Startup and Shutdown 13-1



Archive Server process layers



Startup order: 1. Database instance ECR, 2. Spawner (and its child processes)
Shutdown order: 1. Spawner (and its child processes), 2. Database instance ECR
Archive Server Startup and Shutdown Slide 2

The Archive Server installation is subdivided into two layers of processes which are started up
and shut down separately. However, this does not mean that they may be started up and shut
down independently from each other; the spawner layer depends on the availability of the
underlying database layer.
When the whole machine is booted or shut down, it is ensured that all Archive Server
processes are started/stopped in the proper order. When performing a manual startup or
shutdown - e. g. for backup or maintenance reasons - or when developing your own
startup/shutdown scripts, however, you have to obey the startup/shutdown order displayed
above.
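The ordering rule can be captured in a pair of wrapper functions for your own maintenance scripts. The actual start/stop commands are stubbed out with echo below (on a real server they would be the platform-specific service or rc-script calls shown later in this chapter):

```shell
# Enforce the layer order: database up before the spawner layer,
# spawner layer down before the database. Real commands are stubbed
# with echo for illustration.
start_database() { echo "starting database instance ECR"; }
start_spawner()  { echo "starting spawner layer"; }
stop_database()  { echo "stopping database instance ECR"; }
stop_spawner()   { echo "stopping spawner layer"; }

archive_startup() {
    start_database && start_spawner      # database layer must come up first
}

archive_shutdown() {
    stop_spawner && stop_database        # spawner layer must go down first
}

archive_startup
archive_shutdown
```

Chaining with `&&` makes sure the second layer is only touched if the first step succeeded.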

13-2 710

Different types of spawner shutdown



"Full" shutdown
• Spawner and all its children are terminated
• All maintenance work possible
• No process status information available
• How-to
  - spawncmd exit
  - Stop spawner service by OS means
• Requires "full" startup afterwards
  - How-to: Start spawner service by OS means

"Stopall" shutdown
• All "active" processes are terminated
• Spawner itself is kept alive
• Virtually all maintenance work possible
  - Very few reconfigurations require full spawner shutdown
• Process status information still available
  - Useful for checking shutdown success
• How-to
  - spawncmd stopall
  - Archive Server Administration: Menu File → Spawner → Stop Archive Processes
• Requires "startall" startup afterwards
  - How-to: spawncmd startall
  - No "full" startup - spawner itself is still running!

Archive Server Startup and Shutdown Slide 3

Knowing the difference between the two variants of shutting down the spawner is essential for
being able to use the different ways of startup and shutdown, explained in this chapter.
Generally, the "stopall" shutdown method is the more "gentle" one; it keeps the spawner itself
running, yielding the considerable advantage to provide process status information during the
shutdown period. This is useful because, especially on Unix platforms, many spawner-
controlled processes terminate asynchronously, i. e. they are still alive when the shutdown
command (whichever you have chosen) returns. Being able to check when all processes have
terminated (using spawncmd status, see later in this chapter) is therefore an essential tool
for setting up your own maintenance tools, e. g. scripts for offline server backup.
For this reason, you should prefer using the "stopall" shutdown - combined with the "startall"
startup - wherever possible.

Archive Server Startup and Shutdown 13-3


Startup and Shutdown on OS level (1):
Windows 2000/2003
■ Command line
- Startup:
net start OracleServiceECR or net start mssqlserver
net start Oracle<ORA HOME>TNSListener
net start "IXOS Spawner"

- Shutdown:
net stop "IXOS Spawner"
net stop Oracle<ORA_HOME>TNSListener (only Oracle database)
net stop OracleServiceECR or net stop mssqlserver

■ Using the Windows Services panel
- Start/stop in the order displayed above

■ Using MS SQL Server instead of Oracle:
- Service MSSQLServer

Instead of the "long" service names "IXOS Jukebox Daemon" and "IXOS Spawner", you can
also use the short names "jbd" and "spawner", respectively.

The <ORA_HOME> placeholder mentioned above represents the Oracle Home name of your
Oracle DBMS installation; it has been defined during the Oracle installation routine. In case
you are unsure what the actual value of this parameter is, simply open the Windows services
list; you will easily recognize the name by having a look at the Oracle services mentioned
there.

Stopping the OracleServiceECR without explicitly shutting down the database instance before
causes no problems but leads to annoying warning messages in the Oracle trace file. In order
to keep your trace files clean, you may insert the following command in an archive shutdown
script right before the "net stop OracleServiceECR" command (given in cmd syntax; be sure to
enter it as one single line):
(echo connect internal/<passwd> && echo shutdown immediate) | svrmgrl

13-4 710

Startup and Shutdown on OS level (2):
Unix Platforms

■ Central start/stop script for all process layers


Location: AIX: /etc/rc.ixos start (or stop)
HP-UX: /sbin/rc3.d/S910ixos start (or stop)
Solaris: /etc/rc3.d/S910ixos start (or stop)
Linux: /etc/init.d/ixos start (or stop)

■ Separate start/stop scripts for each process layer


Location: /usr/ixos-archive/rc
- Scripts
• Spawner layer: S18BASE
• ECR database layer: S15MORA_ECR
• ... plus additional scripts if add-ons are installed

■ Script usage
- Startup: <scriptname> start
- Shutdown: <scriptname> stop

Archive Server Startup and Shutdown Slide 5

The S18BASE stop call is functionally equivalent to spawncmd exit.


(spawncmd is the command line frontend to the spawner service; to be found in: /usr/ixos-
archive/bin)

After having stopped all spawner-controlled processes (with whatever command), you must
wait for the termination of all of them before starting them again - otherwise some of them
won't come up properly. Termination of all processes usually takes about a minute.
In this respect, it is more useful to use spawncmd stopall and spawncmd startall
instead of S18BASE stop/start for shutting down and restarting the spawner-controlled
process layer because then it is still possible to query the processes' status (with spawncmd
status) during the shutdown period.

Check whether certain components are running:


• Oracle instance: ps -ef | grep oracle
• Storage Manager (STORM): ps -ef | grep jbd
• Spawner-controlled processes: ps -ef | grep daemon or
spawncmd status (if applicable)

Details about how to use spawncmd status are presented later in this chapter.
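When putting such ps checks into scripts, note a classic pitfall: `ps -ef | grep jbd` also matches the grep process itself, because its own command line contains "jbd". Wrapping one character in a bracket expression avoids that. The sketch below demonstrates the trick on a canned ps listing, since the real process table differs per machine:

```shell
# On a live system you would run:  ps -ef | grep '[j]bd'
# The regex [j]bd matches the jbd process line, but not the grep
# command itself, whose command line contains the literal text '[j]bd'
# (where 'j' is followed by ']', not by 'bd').
ps_sample='root   245    1  0 12:01 ?  00:00:03 /usr/ixos-archive/bin/jbd
root   311  300  0 12:05 ?  00:00:00 grep [j]bd'

matches=$(printf '%s\n' "$ps_sample" | grep '[j]bd')
printf '%s\n' "$matches"
```

Only the actual jbd process line survives the filter, which makes the check usable as an exit-status test in scripts.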

Archive Server Startup and Shutdown 13-5



Checking status after startup (1):
DS database operation

■ Execute command DBTest in command line shell

# DBTest

trying to connect

■ "connected" → Everything okay,
database running and accessible
■ Some error message → Database not accessible;
investigate this

Archive Server Startup and Shutdown Slide 6

DBTest is a database connection testing tool provided as part of the Archive Server
installation. DBTest tries to access the DS database exactly the same way the "productive"
Archive Server components do, e.g. document service and administration service.
If you have problems starting up the archive system but DBTest succeeds, you can be sure
that it is not the database that is causing the problem. Reversely, if this database test does not
finish with the "connected" message, you can be sure that the spawner-driven archive
processes will not come up properly.

Archive Server ≤ 4.2: Use dsConTest 3 instead of DBTest, which does exactly the same
test.
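In your own startup scripts, the DBTest result can be evaluated by scanning its output for the "connected" message. DBTest only exists on an Archive Server, so it is mocked as a function below; whether the real tool also sets a meaningful exit code is not stated here, which is why this sketch relies on the message text:

```shell
# Wrapper around DBTest: succeed only if its output contains the
# line "connected". DBTest is replaced by a mock function here;
# on a real server the binary from the Archive Server installation
# would be called instead.
DBTest() {
    echo "trying to connect"
    echo "connected"
}

db_is_up() {
    DBTest 2>&1 | grep -q '^connected$'
}

if db_is_up; then
    echo "database running and accessible"
else
    echo "database not accessible - investigate" >&2
fi
```

Such a guard can be placed before starting the spawner layer, so the spawner-driven processes are only launched once the database is reachable.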

13-6 710

Checking status after startup (2):
Spawner layer
■ Execute command spawncmd status in command line shell

c:\> spawncmd status

program-id  sta  pid  start time           stop time
...
doctods     R    336  07/24/2002 13:47:36
dsrc1       R    353  07/24/2002 13:47:38
dswc        R         07/24/2002 13:47:38
jbd         S    345  07/24/2002 13:47:55
rfc546      R    397  07/24/2002 15:10:56
...

Annotations in the screenshot:
• Terminated irregularly (exit code ≠ 0)
• Log file name similar to program-id
• These are allowed to be terminated with exit code = 0

■ If more than the allowed ones are terminated
→ Something's wrong; consult their log files
Archive Server Startup and Shutdown Slide 7

The best way to check whether the archive system startup was successful is the spawncmd
status command. The information presented in the status column ("sta") of the resulting
process list means:
S Process is currently starting up
R Process is running
T Process is terminated
In a sane operational state, all archive system processes listed in spawncmd status have to
be running - except for the ones marked in the chart above. (However, not all of them may be
present, depending on your Archive Server release and operating system type; if you do not
see one of them at all, this is no problem.) If any of the other programs is marked as
terminated, something irregular has happened to it. To investigate this, you will have to have a
look in the corresponding log file. Each of the listed programs writes to a log file whose name
is similar, yet not always exactly equal, to the displayed program name. Some important
examples:
admsrv → admSrv.log
dsrc1 → RC1.log
dswc → WC.log
As mentioned earlier in this chapter, the spawncmd status check is also possible after a
shutdown has been made using spawncmd stopall. In this situation, use the check to make
sure all processes are listed with status T before going on (making a backup, starting up again,
or taking other actions).
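The "wait until everything shows status T" step after spawncmd stopall can be scripted by polling the status list until no process is left in state S or R. In the sketch below, spawncmd is mocked with canned output so the example is self-contained; the column layout (state in the second field, one header line) is an assumption taken from the sample listing above:

```shell
# Poll spawncmd status until all processes show state T.
# spawncmd is mocked here; on a real server the function definition
# would simply be removed so that the real binary is called.
spawncmd() {   # mock of 'spawncmd status' after a completed stopall
    printf '%s\n' \
        'program-id sta pid  start time          stop time' \
        'doctods    T   336  07/24/2002 13:47:36 07/24/2002 19:21:47' \
        'dsrc1      T   353  07/24/2002 13:47:38 07/24/2002 19:21:48'
}

all_terminated() {
    # skip the header line; fail if any process is not in state T
    spawncmd status | awk 'NR > 1 && $2 != "T" { bad = 1 } END { exit bad }'
}

until all_terminated; do
    sleep 5        # still shutting down; check again
done
echo "all spawner-controlled processes terminated"
```

A loop like this is the safe point to continue with an offline backup or a restart.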
On a scanning station with EnterpriseScan installation, a subset of the server processes is
installed and must be running as well. There, stockist is the only program that is allowed to
be terminated during normal operation.
IXOS-ARCHIVE ≤ 4.2: One additional process, named checkscsi, is always allowed to be
terminated; it is okay even if its exit code is 1. (Its purpose is to check whether the versions of
the IXOS generic SCSI driver and the operating system match, which is no longer necessary
on the eCONserver 5.0.)

Archive Server Startup and Shutdown 13-7


Spawner control in Archive Server
Administration

Shutdown:
(Only possible from
within admin session)

Startup:

Archive Server Startup and Shutdown Slide 8

As of Archive Server 5.0, it is possible to control and check the spawner operation status also
remotely, using the Archive Server Administration (as illustrated above). The following actions
are possible:
Stop Archive Processes: Performs a "stopall" spawner shutdown, similar to spawncmd
stopall. Note that, in addition to the spawner itself, a few of its child processes
(constituting a HTTP interface for remote administrative access) are kept alive in this
case; this is necessary for the remote spawner actions discussed here to be enabled
even during a shutdown period. Nevertheless: Most maintenance work on the Archive
Server is still possible during this type of shutdown.
Start: Starts up the spawner child processes again. This is possible only if the spawner has
been shut down before using the "Stop Archive Processes" action!
Status: Displays a list of spawner-controlled processes and their operational status,
equivalent to spawncmd status. This is possible only if the Archive Server is running
or the spawner has been shut down before using the "Stop Archive Processes" action!
Exit: Performs a "full" spawner shutdown. After this, no remote startup or status check is
possible; a "full" startup has to be made instead.
These features of the Administration Client are a matter of convenience; the same actions are
still possible using the spawncmd tool on the command line directly.

13-8 710

STORM caveats
OPENTEXT

■ Stopping the spawner implies stopping STORM


■ STORM needs some time to finish startup or shutdown properly
- Upon startup, WORM filesystem database must be opened
- Upon shutdown, media drives must be unloaded
- Normally finished within a minute
⇒ Starting/stopping without waiting until the previous stop/start
is completed may cause STORM to hang!
■ To be on the safe side, obey some rules:
- Check for completion of startup/shutdown before going on
• Preferably use spawncmd stopall, spawncmd startall
instead of starting/stopping the spawner service
• This way, spawncmd status works even after shutdown
- Before simply restarting the spawner in case of problems,
consider other methods of troubleshooting

Archive Server Startup and Shutdown Slide 9

Archive Server Startup and Shutdown 13-9


Exercise: Shut down and start up your Archive Server

■ Shut down and start up again the


processes of your Archive
Server
- Try different ways to do this

■ During this, monitor the


processes' operational status

Archive Server Startup and Shutdown Slide 10

13-10 710


14 Archive Server Monitoring


Detecting problems in time

Archive Server Monitoring 14-1


Chapter guide

■ Fundamental monitoring tasks and tools

- Archive Web Monitor


- Job protocol

■ Advanced monitoring methods


- Notifications
- Scriptable monitoring tools
- Integration into external system management tools

Archive Server Monitoring Slide 2

14-2 710

Parameters observed by the Archive Web Monitor

■ Operational status of
system processes
(running/terminated)
- Document Service
- Storage Manager
- Document Pipeline

■ Storage space
- Media pools
- Hard disk buffers
- Document pipeline
- Database

■ Document Pipeline
processing errors
■ Requests for reading from
unavailable volumes

Archive Server Monitoring Slide 3

The Archive Web Monitor supports the timely recognition of problematic conditions within the
Livelink Enterprise system. It not only monitors the Document Service but also the Document
Pipelines on the Archive Server and on scanning workstations.
The state of the observed parameters is visualized in a three-level scheme: normal, warning,
error. In addition, each parameter can be accessed in the detail view.
Multiple Archive Servers may be monitored at the same time. This also includes the scanning
hosts (with Livelink Enterprise Scan installed and used) - the Archive Web Monitor then
reveals processing errors within the Document Pipelines of the scanning station.
As of Archive Server 9.5, it is necessary to install the Monitor Service on the Livelink
Enterprise Scan client and also on the Document Pipeline servers to monitor these stations
with the Archive Server Web Monitor.

Archive Server Monitoring 14-3



Working with the Archive Web Monitor



Host is okay

Collapsed host structure display masks an error item;
the error notice is propagated to the structure root

Archive Server Monitoring Slide 4

Warning or error notices of a single resource are propagated to a resource group and the host
item, so that they become visible even if a group is collapsed in the tree view. That way, error
conditions can be recognized at a glance. Moreover, it is possible to watch the overall state of
multiple Archive Server and Livelink Enterprise Scan hosts at the same time without wasting
display space: Just keep all host trees collapsed - as soon as a host icon indicates a warning
or an error, open the structure and follow the indication.
The Archive Server Monitor is not permanently connected to the Archive Server's (or scanning
client's) monitor service; instead, it polls status information to be displayed at regular intervals
(default: two minutes). The status bar message "Disconnected" therefore is the normal
operational state (not to be misinterpreted as a communication problem).

14-4 710

Features of Archive Web Monitor



■ Access via
- http://<archiveserver>:4060
- https://<archiveserver>:4061

■ "Zero" client installation


- Can be used instantly on any PC
in your network
- See Release Notes for
browser restrictions

■ Displays same information as


Archive Server Monitor
- Monitoring multiple hosts
simultaneously also possible

■ Monitoring client options can be


stored as URL arguments in
browser bookmark (for example
see note part of this page)

Archive Server Monitoring Slide 5

Find the newest revision of the mentioned Archive Server Release Notes in the ESC, starting
from folder:
https://esc.ixos.com/l0B0543304-759

Example URL for a browser bookmark:

<webservice>://<servername>:<port>
<webservice> := http  -> <port> := 4060   http://<servername>:4060
<webservice> := https -> <port> := 4061   https://<servername>:4061

http://<servername>:4060/w3monc/index.html?host=<servername2>:<port>
    &host=<servername3>:<port>
    &iconType=<iconType>
    &refreshInterval=<number>

http://vmlecr96:4060/w3monc/index.html?iconType=Faces&refreshInterval
=10&host=vmlsvrfc00:4060&host=muc02681:4061

iconType=Faces → use Faces as icon type
refreshInterval=10 → change refresh interval to 10 sec.
host=vmlsvrfc00:4060 → connect also to a second server named 'vmlsvrfc00'
over http port 4060/tcp
host=muc02681:4061 → and connect also to a third server named 'muc02681'
over https port 4061/tcp

Archive Server Monitoring 14-5


Review job messages and job protocol

Archive Server Monitoring Slide 6

The only important status information not provided by the Archive Server monitoring tools is
the execution results of the periodic jobs run by the Archive Server scheduler. To monitor
those results (in order to detect any problems), open the Job Protocol window of the
Archive Server Administration as illustrated above. Like in the Archive Server Monitor display,
error conditions are indicated by red bulbs in the protocol list.
In order to get further information about what has gone wrong, you may choose a job entry
from the list and click the Messages button. This opens a further window displaying log
messages which the selected job invocation has written.

<ixos-root>\var\log\messages\*.*

14-6 710

Change Protocol Settings

■ Open Server Configuration in Administration Client
  - Administration Server (ADMS)
    - Jobs and Alerts

Archive Server Monitoring Slide 7

Archive Server Monitoring 14-7


Chapter guide

■ Fundamental monitoring tasks and tools
  - Archive Server Monitor
  - Job protocol

■ Advanced monitoring methods
  - Notifications
  - Scriptable monitoring tools
  - Integration into external system management tools

Archive Server Monitoring Slide 8

14-8 710

Notifications: Active alerts upon certain events on the Archive Server

Archive Server raises an alert in case of certain events

Available alert types
• E-mail, SMS
• Alert message in Archive Server Administration
• Message written to log file
• Tcl script execution
• Sending SNMP trap

Archive Server Monitoring Slide 9

The Archive Server is able to actively raise alerts in case of specified events. Such events
include the completion of an ISO medium and different kinds of errors or interruptions; a
reasonable set of event types is already predefined.
Setting up a notification for a certain event is done as follows:
1. Within the Notifications tab in the Archive Server Administration, create a
notification as illustrated above. Parameters to be specified include the alert type, the
period when this notification shall be active, and specific parameters for the various alert
types (e. g. the recipient in case of an e-mail alert, or the file name in case of writing a log
message).
2. Select the event that you intend to assign the new notification to, then right-click on the
notification item; from the appearing context menu, choose Assign to Event.
Alerts of type "Admin Client Alert" can be displayed by clicking the "exclamation mark" button
on the Archive Server Administration's tool bar.
A notification's configuration may contain placeholders, like $HOST or $MSGTEXT (visible
within the e-mail subject above), which will be replaced by current values at each notification
invocation. A complete, detailed description of available placeholders can be found in the
Archive Server Administration Guide.
Assigning a notification to the predefined event Any Message from Monitor Server expands
the functionality of the Archive Server Monitor (discussed earlier in this chapter): Whenever a
monitoring bulb turns from green into yellow or red (or from yellow into red), this event is
raised and thus a notification sent. With this configured, it is no longer necessary to check the
Archive Server Monitor periodically to be informed about problematic situations. (The only
exception is a complete server breakdown; in this case, no notification can be sent by the
server itself of course.)
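As an illustration, an e-mail notification subject using the two placeholders mentioned above could be configured like this (the wording is invented here, not a predefined template):

```
Archive alert on $HOST: $MSGTEXT
```

At invocation time, $HOST and $MSGTEXT are replaced by the server name and the message text of the triggering event.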

Archive Server Monitoring 14-9


Scriptable monitoring (1): ixmonTest

[Screenshot: ixmonTest output showing the monitored component tree, e.g.
BackupServer (Component: bksrvr, Status: Active, Details: Ok) and DS Pools]

Archive Server Monitoring Slide 10

ixmonTest (on Windows: ixmonTst) is the command line equivalent to the Archive Server Monitor.
As illustrated above, it outputs the requested status information in textual form, ready for being analyzed
by external text processing tools (like grep and perl). You can use it to implement your own monitoring
routines, for example to raise notifications in whatever situation may be important for you.
Some hints on using ixmonTest:
The utility is installed on the Archive Server only, not - together with the Archive Server Monitor
- on the administrator workstation. However, you can use it via the network; for this, call it as
ixmonTest -h <host> .... (This way, you can monitor several Archive Servers centrally
from a single server.)
The monitored status items are arranged as an enumerated list. With the ixmonTest arguments
walk <start> <end>, you select a certain range from this list. However, for a full monitoring
the only reasonable choice is retrieving the whole status list.
On the other hand, the status list has a variable length; it depends on the installation and
configuration of your server. The best way to determine the exact end number is calling
ixmonTest manually and trying different numbers until you see "empty" items at the list end.
An item's status (okay, warning, or error) is expressed as a numerical value: 0 means okay,
everything else indicates a problem. (Do not rely on the warning and error values to always be
the ones shown above; they may vary depending on the type of problem.)
For a selection of what is interesting for your notification routine, you can refer to the name
attribute included in the output status list.
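The selection by name and status value can be sketched against fabricated sample output. Note that the attribute layout (name="...", value="...") is assumed from the notes above; a real ixmonTest installation may format its output differently:

```shell
# Fabricated ixmonTest-style output (format assumed, not from a real server).
sample='name="bksrvr"      value="0"
name="DocService"  value="2"'

# value="0" means okay; any other value indicates a problem.
problems=$(echo "$sample" | grep 'value="[^0"]')
echo "$problems"
```

Only the DocService line survives the filter, since its value attribute is non-zero.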
The following is a simplistic but, nevertheless, complete example shell script that uses ixmonTest; it
sends an e-mail whenever it finds a monitored item with a (non-empty) status other than zero:
# first check that we don't miss any status list items
if ! ixmonTest walk 300 300 | grep -q 'name=""'; then
    echo "Too many status items!" | mail -s Problem admin@example.com
fi
# now examine the monitoring output for non-okay items
if ixmonTest walk 1 300 | grep -q 'value="[^0"]'; then
    echo "Problem on eCONserver!" | mail -s Problem admin@example.com
fi
This script is meant to be executed as a cron job (every five minutes, for example) on the operating
system. Feel free to amend the script to an arbitrary level of complexity to exactly suit your needs.
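For instance, assuming the script has been saved as an executable file (the path below is purely illustrative), the five-minute schedule could be set up with a crontab entry like:

```
*/5 * * * * /opt/ixos/scripts/check_econ_monitor.sh
```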

14-10 710

Scriptable monitoring (2): Scheduled job protocol

■ Protocol entries to be queried from storage database:

Archive Server Monitoring Slide 11

For scripted monitoring, it is also necessary to retrieve status information about success or
failure of scheduled jobs on the Archive Server (see also page Review job messages and job
protocol earlier in this chapter). This information is stored in the storage database DS and can
therefore be retrieved by a suitable database query; see the above illustration for details.
Here is a simplistic but, nevertheless, complete example shell script - assuming Oracle as
database platform - that sends an e-mail whenever it finds a protocol entry indicating a job
failure:
if echo "select 'num_errors', count(*)
         from ADM_JOB_PROTOCOL
         where STAT <> 'INFO';" \
   | sqlplus -S ecr/ecr | grep -q 'num_errors.*[1-9]'
then
    echo "Failed jobs found!" | mail -s Problem admin@example.com
fi
(On a Unix-based Archive Server, this script has to be executed under the user account that
the database is executed as; it is normally named oracle.)
Again, this script is most useful if executed periodically on operating system level.
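The grep test used in the script above can be verified in isolation against simulated sqlplus output (the column layout shown here is an assumption; compare it with your database's actual output):

```shell
# Simulated sqlplus result lines for the num_errors query.
with_errors='num_errors          3'
without_errors='num_errors          0'

check() {
    if echo "$1" | grep -q 'num_errors.*[1-9]'; then
        echo "Failed jobs found!"
    else
        echo "No failed jobs."
    fi
}

check "$with_errors"      # prints: Failed jobs found!
check "$without_errors"   # prints: No failed jobs.
```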

Archive Server Monitoring 14-11


14-12 710

15 Handling Optical Archive Media


Daily operating tasks

Handling Optical Archive Media 15-1


Possible states of optical archive media

[Chart: possible states of optical archive media, including "Exported from
database" (not part of this chapter)]

Handling Optical Archive Media Slide 2

The chart above gives an overview of the different ways the Archive Server may regard optical
media.
Note that "being present in a jukebox (or single drive) device" does not mean the same as
"known to the storage database"; any combination of these two state properties is possible.
Media may move from one state to another. Some of these state transitions may happen
without manual interference, others must be done by operating personnel. Those
transitions relevant for operating are labeled above:
A Filling an empty ISO medium with data and setting it to online, read-only, is done
automatically by a periodic ISO write job.
B Each new IXW media partition must be initialized for writing and reading. This can be
done either manually by the operator or automatically by the IXW write job.
C Likewise, backup IXW media have to be initialized.
D After an IXW media partition has been filled up to the desired amount, it can be finalized,
which sets it to a permanent read-only state.
E As soon as an IXW medium has been filled up with data and its backup has been
synchronized, the backup has to be taken out of the device and stored safely
away.
F When the jukebox becomes full, the operator can make room for new media by taking
"old" ones out and storing them at a safe place.
G In case a Livelink client requests to read a document from a taken-out medium, it is
the operator's duty to re-insert that particular medium into the jukebox.
The following pages present details about how to perform each of the mentioned transitions.

15-2 710

Regular tasks when using ISO Medias

• Insert blank ISO media into jukebox
• Monitor server for newly burned disks
  - In job protocol
  - Receiving notifications
  - In partition lists of pools or jukeboxes
• When a burn job has finished:
  (Using double-sided DVDs: wait until both sides are completed)
  - Take original and backup disk out of the jukebox
  - Label them with their partition name
    (name is automatically assigned by burn job)
  - Re-insert original disk into jukebox
  - Store backup disk safely away (e.g. in a safe)

Handling Optical Archive Media Slide 3

To provide empty ISO media to the Archive Server, you simply insert them into the jukebox.
Whenever a disk (or a set of disks: original and backup) is to be burned, the Archive Server
will automatically choose an empty one, assign a name to it, and attach it to the corresponding
ISO pool.

Handling Optical Archive Media 15-3


Regular tasks when using IXW Medias

■ Provide empty IXW media
  - Insert blank IXW media
  - Initialize both sides
  - Repeat for backup IXW media
  - Assign them to the designated pool (→ next slide)

■ Remove full backup IXW media from the jukebox, store at a safe place
  (check when initializing backup IXW media)

Unlike ISO media volumes, IXW media partitions must be initialized and assigned to an IXW
pool before documents can be written to them. Initializing an IXW media partition basically
includes giving it a name (for a reasonable naming convention, see page IXW media naming
scheme later in this chapter); the above chart illustrates how this is accomplished using the
Archive Server Administration.
Notes:
Since IXW media comprise two partitions, each of them must be initialized separately.
An IXW media backup partition must be initialized with the same name as the original
volume; the archive system recognizes the relation between them by the name
correspondence.
See later in this chapter how IXW media can be initialized and assigned to a pool
automatically.
Backup IXW media have to be kept in the jukebox as long as data is still written to the original
medium (since the backup is synchronized incrementally). The point of time when to remove
the backup IXW medium can be recognized by the status display for the original (as illustrated
above): 'F' means full, i.e. no more data will be written to it. Then it is time to store away the
backup IXW medium.

15-4 710

IXW media partition assignment to a pool

■ Priority = order in which partitions of a pool are filled with data
  - Smallest priority value = filled first
■ IXW media jukebox must flip disk in order to access reverse side
■ Set priorities so that two sides of the same disk are not adjacent
  (not numerically consecutive)
  → Avoids too much disk flipping in jukebox

[Chart: filling order across two disks - first disk side A: 1, side B: 3;
second disk side A: 2, side B: 4]

Handling Optical Archive Media Slide 5

Since IXW disks are double-sided but IXW drives can access only one side at a time, the
jukebox robot must turn the disk whenever the reverse side shall be accessed. For this reason,
it would be inefficient to begin filling the second side of an IXW medium immediately after the
first side has been finished: Read requests for recently archived documents would be directed
to the first side whereas write requests for newly arrived documents would require the second
side - this would result in very frequent disk flipping.
If the jukebox has enough drives, it is better to distribute the filling order evenly to two (or
more) disks as illustrated above. That way, it is possible that the two disk sides currently under
frequent access (the one just finished and the one just begun) stay in different drives for a
longer period, allowing fast access to all currently prominent documents.
To accomplish this method, you always have to initialize and assign two IXW media at the
same time (plus two backup IXW media), giving four different partitions altogether.

Handling Optical Archive Media 15-5


IXW media naming scheme

■ Whole IXW medium must be identified uniquely
■ Two sides must be distinguished
■ Affiliation to logical archive should be recognizable
■ Sequential number should have fixed number of digits
■ Proposal:
  - IXW medium: <archive_id>_<seq_no>
  - Side A: <archive_id>_<seq_no>A
  - Side B: <archive_id>_<seq_no>B
  Examples: FC0001A, FC0001B

Handling Optical Archive Media Slide 6

An Archive Server does not raise any constraints about IXW media partition names (except for
a length limitation to 32 characters) - therefore, you should set up a naming convention at the
very beginning of Archive Server usage; the above chart gives a reasonable proposal.
Labeling the IXW media is done the following way: The IXW medium as a whole is labeled
physically by writing the name on the case; the IXW media sides - i. e. partitions - are only
labeled by assigning the side names electronically during initialization. It does not matter which
side is initialized as 'A' or as 'B'; the jukebox is capable of detecting this automatically.
Notes:
Including a date in the name does not make very much sense - you have to assign the
name before the IXW media usage begins, i. e. at a time when you do not yet know
when the IXW media filling will be finished. Even if you intend to interpret the date as
the ending date of filling the previous IXW medium: You will initialize the IXW medium
before the previous one is completely filled, not being able to precisely anticipate when
it will have been finished.
It is best to use a fixed number of digits for the sequence number (four will always be
sufficient); this makes it easy to order IXW media lists numerically in the Archive Server
Administration display.

15-6 710

Automatic IXW media initialization

■ Initialization and pool assignment can be done automatically
  - Including backup IXW media, if backup option enabled
  - See ESC about setting the sequence counter

■ Proceeding
  - IXW media write job checks availability of assigned "empty" IXW media
    after invocation
    • "Empty" = filled to less than 5% (changeable)
  - If not enough assigned "empty" IXW media are found, new ones are
    initialized and assigned to the pool
  - Then writing from disk buffer to IXW media is started

Handling Optical Archive Media Slide 7

The naming pattern for automatic IXW media initialization may contain certain placeholders
which are replaced by actual values in the instance of WORM initialization. These
placeholders include:
$(ARCHIVE)      Logical archive name
$(POOL)         Pool name
$(PREF)         Name prefix as defined in Archive Server configuration (default: "IXOS")
$(SEQ)          Sequence number (mandatory)
$(YEAR), $(MONTH), $(MDAY), $(HOUR), $(MIN), $(SEC)
                Date and time variables
$ENV(varname)   Value of environment variable varname
(The parentheses around the placeholder names are not strictly necessary, but you will almost
always need them to separate placeholder names from other name pattern elements properly.)
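For example, a pattern built from these placeholders might expand as follows (archive name, prefix and counter value are invented for illustration; padding of the sequence number depends on the server configuration):

```
$(ARCHIVE)_$(SEQ)      ->  e.g.  FC_0017
$(PREF)$(YEAR)_$(SEQ)  ->  e.g.  IXOS2008_0017
```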

When you activate the automatic initialization the first time for a certain pool, it will count the
initialized IXW volumes beginning at 0; this is undesirable if you have already got manually
initialized IXW media in that pool. You can explicitly set the sequence counter to a defined
starting number in order to continue the numbering of the already present IXW media; see
ESC article Check / Set the sequence number of the next IXW media to be burned
(https://esc.ixos.com/1077278356-781) for details.

Handling Optical Archive Media 15-7


IXW media finalization (1): Properties

■ When an IXW media partition is filled up, it may be finalized
  - Filesystem structure data is moved from STORM's filesystem database onto
    the WORM partition itself
    → Keeps WORM filesystem database small
  - IXW medium becomes a read-only ISO filesystem
  - Backup IXW medium is finalized automatically
    • As soon as the finalized original is backed up
  - Finalization may fail in rare cases
    → WORM filesystem database must keep structure data "forever"
    → Nevertheless, IXW partition remains read-only

Handling Optical Archive Media Slide 8

For customers with Unix-based Archive Server installations upgraded from an original release
≤ 3.5, there is an important restriction: All WORM partitions that were initialized by IXOS's old
jukebox service ixwd cannot be finalized at all. In order to benefit from finalization, those
WORMs must first be copied to new ones which then can be finalized afterwards.

15-8 710

IXW media finalization (2): Usage

■ Manual execution
  - Per partition
  - For all IXW media of a certain pool, selected by
    → filling rate
    → date of last writing
■ Automatic execution
  - As part of IXW write job
  - Rules: filling rate, date of last writing

Handling Optical Archive Media Slide 9

The possibility to finalize all partitions of a certain pool manually is useful in just one specific
situation: After you have done an Archive Server upgrade from a pre-5.0 version to 5.0 or 5.5,
you may have a vast number of WORMs that can now be finalized. Although finalizing "old"
WORMs does not influence the safety of the stored data, you should make use of this
possibility in order to discharge a considerable amount of WORM management data from the
WORM filesystem database.
The choice between the other two variants of doing the finalization - automatic or manual -
is rather a matter of operating preference; both lead to the same result.

Handling Optical Archive Media 15-9


When the jukebox(es) are filled up

■ Full media have to be taken out of the jukebox and stored at a safe place
  → Media are offline

■ For this, choose media containing documents not often needed
  - E. g. oldest first
  - E. g. least-recently-accessed first
    • Retrieve dates of last read access for all media with: cdadm survey -v +oL

■ Documents remain known to the archive database, can be retrieved again
  after re-inserting corresponding volumes into jukebox (→ next slide)

■ Trying to retrieve an offline document, the user gets the message:
  "Volume ... is offline."

Handling Optical Archive Media Slide 10

There are two stages of removing documents from Archive Server:

• Just taking media out of the jukeboxes to make room for new ones, still keeping
documents on those media archived but not retrievable online (see above and next
page)
• Permanently removing media containing expired documents (see second-next page)

The cdadm survey -v +oL command (available as of IXOS-eCONserver 5.0) delivers a list
of all archive volumes and their dates of last read access. Unfortunately, it expresses the dates
as the number of seconds since the Unix epoch, which is hardly human-readable. However,
you may filter the list through the following Perl script in order to obtain a readable form:

while ( <> ) {
    if ( /^(\w+[ \t]+)(\d+)/ ) { print $1 . (scalar localtime $2) . "\n"; }
    else { print; }
}

(This is easily possible even on a Windows-based Archive Server since Perl is included in
every Archive Server installation.)

15-10 710
Re-inserting offline media into the jukebox on demand

Archive Server Monitor indicates request(s) for offline volumes

Handling Optical Archive Media Slide 11

As soon as the jukebox(es) have been filled up with used media, older media must be taken
out in order to make room for new empty ones; such media no longer present in a jukebox are
called unavailable or offline. Afterwards it may happen that some Archive Server user requests
a document from an offline volume. The user then gets a message that this document is
currently offline. The Archive Server Monitor, in turn, displays a warning notice at item
"DocService" → "unavail".
In such a situation, it is the operator's duty to re-insert the requested disk into the jukebox. To
learn which volume(s) are affected: Within the Devices section of the Archive Server
Administration, open the Unavailable Partitions window as illustrated above; this
window reveals the volume names in question.


15-12 710

16 Media Migration
Migration of optical media in Archive Server

Media Migration 16-1



Chapter guide

■ Media migration: general aspects
■ Doing the migration
  - Involved steps
■ Additional features
■ Document migration

Media Migration Slide 2

16-2 710

■ Fundamental idea
  1. Copy (all or selected) documents from old to new media
  2. Remove old media from Archive Server

■ Motivation
  - Aging of media → data may volatilise
  - Aging of technology → compatible drives may not be available forever
  - Storage migration → migrate to storage system with "virtual jukebox"
  - Expiration of data → after given retention period
  - Re-organisation → apply new features to old documents:
    compression, encryption

■ Archive Server tool: volume migration
  - Enables migration on document component level
  - More powerful and flexible than migration on file or filesystem block
    ("physical") level

Media Migration Slide 3

Media Migration 16-3


Volume Migration - Process

[Chart: the administrator creates migration jobs; document components 1-3 are
copied from the old medium to the new medium; the Check migration status
utility reports progress; finally the administrator removes the old medium]

Media Migration Slide 4

Volume migration is organized per medium (= "volume" - but the two sides of a UDO, WORM or DVD
count separately!). The whole migration process for a single medium is composed of the following
stages:
Creation of migration jobs
1. The administrator starts the migration utility (in the Archive Server administration) and specifies a
selection of media and documents to be migrated.
2. The utility creates one "migration job" per selected medium (stored in the DS database).
Enqueuing document components for migration
3. The Migration Server is triggered by a periodic Archive Server job.
4. The Migration Server reads the migration jobs, i. e. volumes that are queued for migration.
5. It reads from the DS database which document components are stored on the selected volumes
and therefore are to be migrated.
6. It enqueues each found document component in the normal "writing queue" that is also used for
managing the writing process from the disk buffers to optical media.
Copying document components to new media
7. The media write job of the migration target pool is started and reads the "writing queue".
8. It copies the document components from the source media to new target media.
9. It updates the database to reflect which components have been copied.
Updating status of migration jobs
10. The Migration Server is triggered the next time by the corresponding Archive Server job.
11. The Migration Server checks which document components have been copied in the meantime.
12. When all document components of a migration source medium are found to be copied to new
media, that medium's migration job is marked as "finished".
Finishing the migration
13. The administrator invokes the Check migration status utility in the administration client.
14. The utility reads the migration jobs and displays the status for each volume.
15. The administrator removes media whose migration is finished from the system (exports from DS
database and removes media from storage system).

16-4 710
Migration Server's work

■ Migration Server
  - Service process controlled by spawner (volmig.exe)
  - Operation controlled by periodic job command Migrate_Volumes

■ Checks/updates status of previously begun volume migrations
  - Checks for enqueued components: already written to destination media?
  - Checks for overdue components (not written within a week)
  - Purges work items from finished volume migration jobs

■ Processes new/pending volume migration jobs
  - Gathers components from volumes according to administrator's selection
  - Enqueues components for writing to destination pool
  - Stops enqueuing when given limit is reached
    • Default: 10 GB per migration run
    • Remaining components are deferred to next Migration Server invocation
  - Writes migration logfile to var/log/migration/ for each medium

Media Migration Slide 5

The items mentioned above reveal that the Migration Server's tasks are more elaborate than
the previous page suggests; to be mentioned here are the check for overdue components and
the limited amount of data processed at one migration run.
Correspondingly, the Migration Server has two important configuration options:
- Max. amount of data to be enqueued per migration run (default: 10 GB)
- Max. period of time after which enqueued, not-yet-written components are
  considered overdue (default: 7 days)
These parameters can be maintained in the Server Configuration page of the Archive Server
Administration, branch Volume Migration → Volume Migration Configuration.

Media Migration 16-5


Additional considerations

■ Migration of all media types possible (Archive Server ≥ 9.6)
  - Space requirements when migrating to DVD
    • ISO image and directory tree must fit into ISO burn buffer
    • No space in disk buffer required
  - Migration to WORM
    • Direct copy without temporary buffer
  - Migration WORM → WORM
    • Two drives required
  - Hard-disk-based pools (i.e. FS pool) as target
  - Bulk migration of ISO images

■ Option to encrypt / compress data during migration
  - Migrating compressed data to a logical archive without compression
    will not uncompress the data

■ Retention settings can be applied during migration

■ Remote migration for ISO and IXW as source possible

■ Job manipulation options

Media Migration Slide 6

In Archive Server ≤ 9.5, migration is restricted to optical media types only.

If you want to learn more about Volume Migration options, see Technical Information on
Volume Migration in ESC:
https://esc.ixos.com/1140188069-686

Old OR3 structures used in early DS versions are not supported.

16-6 710

Chapter guide

■ Media migration: general aspects
■ Doing the migration
  - Involved steps
■ Additional features
■ Document migration

Media Migration Slide 7

Media Migration 16-7


Preparation steps

■ Schedule periodic job for triggering Migration Server
  - Command Migrate_Volumes
  - Preferably schedule to run during periods of low system workload,
    e. g. during the night

■ Create migration target pool
  - In same logical archive as source media reside
  - Choose target media type (not HDSK!)
  - Application type: Migration
  - Schedule and configure write job properly

(To be done once per "migration project", at least once per logical archive)

Media Migration Slide 8

The job for running the Migrate_Volumes command is not present on an Archive Server by
default, i. e. you have to create it before you can start migration projects. To do so:
1. In the Archive Server Administration, tab Jobs, right-click the jobs list and choose Add
from the pop-up menu.
2. In the Create Job dialog, enter a name for the job. This name can be chosen freely, but
should preferably be meaningful.
3. Choose the job command Migrate_Volumes from the list, as illustrated above.
4. Schedule the job as desired. Normally, it is most reasonable to have the job run once a
night.
It might also be a good idea to concatenate this migration job and the write job of the
migration target pool; this way, both jobs can do their work without wasting any time,
giving you more flexibility to schedule other jobs (see also chapter Periodic Jobs).
For details about creating a media pool as the migration target, see chapter Configuring
Logical Archives. The only specialty in this situation is the application type Migration, as
illustrated above.

16-8 710
Plan migration for selected volumes

[Screenshot: "Migrate components on volume" dialog, with optional Retention
Settings and optional "Export after migration" arguments]

Make sure that media planned to be migrated are online!

Media Migration Slide 9

In order to plan migration of certain storage media, invoke the Migrate components on
volume utility from the Utilities menu in the Archive Server Administration. Specify the
following:
Source volume: Names of the storage volumes to be migrated. Multiple volumes can
be selected by including regular expressions:
- Ranges in square brackets. Multiple ranges <from>-<to> and single values
can be concatenated with commas as separators.
This will mostly be used to select ranges of media sequence numbers.
Caution: Make sure you pad numbers with differently many digits with leading
zeroes to equal length! (Not: [1-123], but: [001-123])
- The wildcard character '*'.
Make sure you select volumes from a single logical archive only! This is mostly ensured
by the logical archive name as part of the volume names, specified here literally (not by
wildcard).
• Target archive & pool that will contain the newly written media.
Migrate only components that were archived ... : You may optionally select
document components to be migrated according to their time of archiving. This is useful
if, by the opportunity of migration, you intend to get rid of expired documents on the
source media.
Do not migrate components ... newer versions ... : This is always reasonable
because newer component versions always supersede older versions; those older
versions are therefore never needed any longer.
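Putting these selection rules together, source-volume specifications might look like the following (archive name and numbers are invented; double-check the exact range syntax before running a real migration):

```
A1_[0001-0023]A         A-sides with sequence numbers 0001 through 0023
A1_[0001-0005,0010]B    B-sides 0001-0005 plus 0010
A1_*                    every volume of logical archive A1
```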
The Migration Server will access the selected media for reading status information of the
stored documents during the document enqueuing stage already (see also next page); for this
reason, it is mandatory to have the media online (known to DS database and available in
storage system) as soon as their migration is planned the way described above.
Archive Server ≥ 9.6 additionally allows setting
- Retention Settings, i. e. no. of days
- Additional arguments, i. e. -e for export after migration

Media Migration 16-9


Review migration progress of volumes

■ Output of ADMC's Volume migration status utility:

Status of volume migration:

ID    Source volume    Not older   Ccount   Smallest   Created
      State  Destination pool   Not yngr.  Flags  Largest  Finished

      A1_0017A         98756    140     2006-02-10
      A5_iso                    88400   2006-02-11
      A1_0020A         81922    112     2006-02-10
Wait  A5_iso           97655    72
225   A1_0021A         78221    155     2006-02-10
Copy  A5_iso                    98116
227   A1_0023A         77285    145     2006-02-10
New   A5_iso                    98279

• Status Fin → Migration of this volume successfully finished;
  volume can be exported from archive
• Detailed enqueuing log in var/log/migration/<volname>.log
  - Includes listing of all enqueued components

Media Migration Slide 10

To retrieve the migration status listing shown above, choose Volume Migration Status
from the Utili ties menu in the Archive Server Administration. As the first step, selection
options will be displayed, offering you to restrict the view to new, in-work, and/or finished
migration items. Here, choosing no option at all is equivalent to choosing everything for
display. \

Status New: A migration job for the volume has been created (using the Migrate
components on volume utility), but the Migration Server has not yet begun to enqueue
the document components stored on it.
Statuses in progress: The Migration Server has enqueued all components from the
source volume, but not all components have been written to their destination media yet.
Status Fin: All enqueued document components of the medium have been written to
destination media (i.e. are listed in table ds_comp). The Migration Server has therefore
purged the corresponding component entries from table vmig_work.
Status Canceled: In Archive Server ≥ 9.6, it is possible to interrupt or cancel the
migration process.
Status Error: Volume migration encountered a problem during the migration process.

You can check these tables in sqlplus, e.g.:


sqlplus ecr/ecr@ecr_<servername>
-> select table_name from user_tables;
-> select * from vmig_jobs;
-> select * from vmig_work;

16-10 710

Pause Migration Job



■ Utility Pause Migration Job
  - vmclient pauseJob <jobID>
  - NEW, PREP, COPY → PAUS

■ Utility Continue Migration Job
  - vmclient continueJob <jobID>
  - PAUS, ERR → previous mode
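The transitions above can be pictured as a small state machine: jobs in NEW, PREP, or COPY can be paused, and continuing returns a job to the mode it was in before. A minimal illustrative sketch (the class and method names are invented for illustration, not part of the vmclient tool):

```python
# Illustrative sketch of the pause/continue transitions described above.
PAUSABLE = {"NEW", "PREP", "COPY"}

class MigrationJob:
    def __init__(self, job_id, state="NEW"):
        self.job_id = job_id
        self.state = state
        self._previous = None  # mode to resume into after PAUS/ERR

    def pause(self):
        # vmclient pauseJob <jobID>: NEW, PREP, COPY -> PAUS
        if self.state in PAUSABLE:
            self._previous = self.state
            self.state = "PAUS"

    def continue_(self):
        # vmclient continueJob <jobID>: PAUS, ERR -> previous mode
        if self.state in ("PAUS", "ERR") and self._previous:
            self.state = self._previous
            self._previous = None

job = MigrationJob(225, state="COPY")
job.pause()      # COPY -> PAUS
job.continue_()  # PAUS -> back to COPY
```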

Media Migration Slide 11

Media Migration 16-11


After a migration project

■ When all "old" media are migrated:

  Make migration target pool the default pool (optional)
  - New documents are then added to the former migration pool
    on "modern" media

Media Migration Slide 12

16-12 710

Chapter guide

■ Media migration: general aspects

■ Doing the migration
  - Involved steps

■ Additional features
■ Document migration

Media Migration Slide 13

Media Migration 16-13



Verification after Migration (1)



■ Volume Migration can verify
  - Timestamps
  - Checksums
  - Compare content

■ BLOBs checked as one component
  - no individual check of contained components

■ Individual checksum types can be disabled

■ Verification mode adjustable until migration job has finished
■ Migration job remains in status WAIT until all components have
  been verified

Media Migration Slide 14

The following checksums are possible:

- CRC32
- CLSIG (typ. SHA-1)
- SIG (message digest from timestamp)
- DIG2 (SHA-1)
- DIG3 (MD5, disabled per default)
- DIG4 (RipeMD160)
- DIG5 (SHA-256)
- DIG6 (SHA-512)
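Most of these types are standard digest algorithms. As a quick sketch of what computing such digests over a component's content looks like, here is a small Python example; the mapping of the slide's labels (DIG2, DIG5, ...) to hashlib algorithms follows the list above, and the function name is invented for illustration:

```python
import hashlib
import zlib

def component_checksums(content: bytes) -> dict:
    """Compute the digest families listed above for one component's content."""
    digests = {
        "CRC32": format(zlib.crc32(content) & 0xFFFFFFFF, "08x"),
        "DIG2 (SHA-1)": hashlib.sha1(content).hexdigest(),
        "DIG3 (MD5)": hashlib.md5(content).hexdigest(),
        "DIG5 (SHA-256)": hashlib.sha256(content).hexdigest(),
        "DIG6 (SHA-512)": hashlib.sha512(content).hexdigest(),
    }
    # RipeMD160 is not always built into hashlib's backend, so guard it.
    if "ripemd160" in hashlib.algorithms_available:
        digests["DIG4 (RipeMD160)"] = hashlib.new("ripemd160", content).hexdigest()
    return digests

sums = component_checksums(b"sample component data")
```

Verification after migration then amounts to recomputing the enabled digests on the destination copy and comparing them with the stored values.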

16-14 710

Verification after Migration (2)



■ Server Configuration

  [Screenshot: Server Configuration tree, with the package Volume Migration (VMIG) selected. Listed packages include:]
  - Administration Server (ADMS)
  - Administration Servlets (V113ADMIN)
  - DBORA Database only, no Schema (Tables)
  - DBORAS schema
  - Document Service (DS)
  - ECR Web Monitor (V113MONC)
  - General Archive Server settings (COMMON)
  - Generalized Store
  - INSTALL01 package
  - INSTBASE package
  - Key Store backup/restore tool (RCIO)
  - Log file configuration
  - Monitor Server (MONS)
  - Notification Server (NOTS)
  - Notification Server Servlet (V113NOTS)
  - Perl interpreter (PERL)
  - Spawner, DP and base DocTools (BASE)
  - Storage Manager (STORM)
  - Timestamp Server (TSTP)
  - Volume Migration (VMIG)
  - Variables
  - Mapping of NFSSERVER names

Media Migration Slide 15

Media Migration 16-15


Verification after Migration (3)

■ Checks can be combined
  - Timestamp verification
  - Checksum verification
  - Binary comparison

Media Migration Slide 16

More options are available when using the command line tool vmclient.

16-16 710

Bulk migration of ISO images (1)



■ Fast migration = bulk migration

■ All kinds of ISO images can be migrated
  - CD, DVD
  - Centera
  - WORM
    • If written by dsCD only!
    • No finalized IXW media
  - HSM

■ Faster than classic migration

■ No document date filter
■ Volume name is retained

Media Migration Slide 17

Media Migration 16-17


Bulk migration of ISO images (2)

■ Simple backup medium

■ Cloned images are transparent to DS
  - Impossible to decide where to read data from
  - Impossible to perform verification on a document basis
  - Whole ISO image has self-contained checksum
    • verified automatically

■ All document attributes remain unchanged

Media Migration Slide 18

16-18 710
Bulk migration of ISO images (3)

■ Fast Migration of ISO Volume

■ Source volume

■ Target: archive or pool

Media Migration Slide 19

Media Migration 16-19



Bulk migration of remote ISO volumes (1)

■ Fast Migration of remote ISO volume

■ DB connection
■ Source volume
■ Target archive/pool
■ Retention settings
■ Verification mode
■ Options

16-20 710

Bulk migration of remote ISO volumes (2)



■ Source and target distinguishable
  - Verification supported on document basis
  - Change of archive/retention supported
  - Deleted documents don't reappear

■ "dumb" mode uses dsTools importCD
  - No change of archive/retention
  - Deleted documents will reappear

Media Migration Slide 21

Media Migration 16-21



Run Migration per Pool

■ Optional parameter for job migrate_volumes:
  -p <pool>

■ One instance of the migrate_volumes job per pool possible

■ Higher priority for a pool by scheduling its migrate_volumes job more often

■ Jobs cannot run in parallel

Media Migration Slide 22


16-22 710

Exercise: Do media migration



■ Prepare media migration
  - Create migration job
  - Create destination media pool

■ Enqueue medium for migration
  - Check status

■ Run migration job
  - Check status again

■ Run destination pool's write job
  - Check status again

■ Run migration job again
  - Check status

Media Migration Slide 23

Media Migration 16-23



Chapter guide

■ Media migration: general aspects

■ Doing the migration
  - Involved steps

■ Additional features

■ Document migration

Media Migration Slide 24

16-24 710

Document Migration - Feature



Migrate Document

■ Migration of a specific document into another pool / logical archive

■ Allows changing logical archive & retention in the DS database
  immediately
■ Logical archive in ATTRIB.ATR is not changed
■ After successfully copying the document to the new pool
  - Document can be purged from the old pool

Media Migration Slide 25

Single File FS is the successor to the HD write-through pool. Unlike hard-disk write-through,
Single File FS uses a disk buffer.

Media Migration 16-25



Document Migration - Details



■ Scenario is referred to as Deferred Classification
  - Leading application writes documents of different types to a temporary
    archive
  - Leading application later sends a special command (migrate) to classify each
    document according to its type
    • Set the retention date to its final value
    • Change the logical archive to the final target archive
  - The write jobs of the target archives copy the documents to the final
    storage system(s)

■ Do not use BLOBs!

■ Do not use Single Instance Archiving (SIA)!

Media Migration Slide 26

16-26 710

Document Migration - Function Call



■ Function call via dsClient or http API

■ Example via http:

  http://<servername>:8080/archive?migrate
  &contRep=<source-archive-id>&docId=<doc-id>
  &pVersion=0045&targetContRep=<target-archive-id>
  &retention=<days>

■ Example using the dsh tool:

  migrate -a <source-archive-id> -d <doc-id>
  -t <target-archive-id> -T <days>

■ Example via dsClient:

  docMigr <DocID> <Volume> <LogicalArchive>
  <TargetPool> 1 <days> 0

Media Migration Slide 27

This call migrates the specified document to the specified logical archive or pool. If no pool is
specified, the default pool of the target logical archive is used. If neither a pool nor a target
logical archive is specified, the default pool of the mandatory logical archive is used. If a
volume is specified, only the components of this volume are migrated. The migration of all
component versions can be forced (otherwise only the latest will be migrated). If no retention
parameter is specified, the default retention value of the specified archive is used.

dsh tool / http:

  migrate -a <archive> -d <docid> [-v <vol_name>] [-P <pool_name>]
      [-t <target_archive>] [-T <days>|infinite|event|none] [-c all] [-f 1]

  -v <volume>: only components of this volume will be migrated.
  -T <retention>: set retention period (days|infinite|event)
  -c all: force the migration of all component versions.
  -f: write job entries for each component (always).

dsClient:

  docMigr doc vol archId pool onlyMax reten flags

  vol: source volume
  archId: source archive
  pool: target pool
  onlyMax: 1 = only newest, 0 = all
  reten: usual values (days|infinite|event)
  flags: 1 = overwrite data and generate additional dsjob entry
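For scripted use of the HTTP variant shown above, the call can be assembled with standard URL encoding. A sketch, assuming made-up placeholder values for the server name, archive IDs and document ID:

```python
from urllib.parse import urlencode

def build_migrate_url(server, source_archive, doc_id, target_archive, days):
    """Assemble the HTTP migrate call described above.

    All identifiers passed in are hypothetical examples; substitute your
    own server name and archive/document IDs.
    """
    params = {
        "contRep": source_archive,       # source archive ID
        "docId": doc_id,                 # document to migrate
        "pVersion": "0045",              # protocol version from the slide
        "targetContRep": target_archive, # target archive ID
        "retention": days,               # retention in days
    }
    return f"http://{server}:8080/archive?migrate&{urlencode(params)}"

url = build_migrate_url("archsrv", "A1", "aaa123", "A5", 3650)
```

The resulting URL has the same shape as the template on the slide, with the parameters percent-encoded where necessary.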

Media Migration 16-27


16-28 710

17 Export and Import of Storage Media


Keeping your Archive Server clean of outdated documents

Export and Import of Storage Media 17-1


Possible states of optical archive media

Export and Import of Storage Media Slide 2

Serving as a continuation of the chapter Handling Optical Archive Media, this chapter discusses
the media state transitions illustrated above:
A  As soon as the retention period of all documents on a medium has passed, the medium
   and its contents can be exported from the storage database; the Livelink Enterprise
   Archive Server then "forgets" about those documents.
B  However, exporting a medium is not one-way. An exported medium can be re-imported
   into the server again, or imported into a different Livelink Enterprise Archive Server,
   which then possesses the media contents. This is especially useful in situations where
   you want to move all your stored contents to new server hardware.
The following pages present details about how to perform each of the mentioned transitions.
The following pages present details about how to perform each of the mentioned transitions.

On the command line:

  C:\> dsTools export <volume name>
  C:\> dsClient localhost dsadmin ""
  -> volInfo [<volume name>]
  -> delVolume <volume name>
  -> end
  C:\> cdadm delete <volume name>
  C:\> sqlplus ecr/ecr@ecr_<servername>
  -> select table_name from user_tables;
  -> select * from ds_volid;
  -> exit

17-2 710
Exporting a medium: when documents' retention period has passed

■ Document administration data is removed ("exported") from
  storage database
  - Medium must be online in jukebox!
  - If medium unavailable (e.g. destroyed):
    use option Export from DB

■ For unfinalized IXW media only:
  Export WORM filesystem data as well
  → See Administration Guide

■ This way, Livelink Enterprise Archive Server forgets about
  those documents

■ Documents are no longer retrievable

■ Nevertheless: do not throw away exported media

Export and Import of Storage Media Slide 3

In order to remove media containing expired documents from the Livelink Enterprise Archive
Server permanently, invoke the Export Partition dialog of the Livelink Enterprise Archive
Server Administration as illustrated above. Set the checkmarks in the dialog box as shown
above and confirm with OK. A message window will then appear, showing the progress of the
database export procedure as well as possible error messages.
Exporting an unfinalized IXW medium from the Livelink Enterprise Archive Server, removing its
filesystem administration data from STORM's database, is a separate action. For details, see
the Livelink Enterprise Archive Server Administration Guide, section Exporting non-finalized
IXW partitions; currently this is section 6.3.3.3.
Notes:
• Media should be exported according to the time of last writing. This way, you
  make sure that all documents stored on them really have expired.
• About the export options in the Export Partition(s) dialog (illustrated above):
  Export from DB: Without this option (i.e. in the standard case), the export tool
  scans the medium in question for documents to be exported, then deletes from
  the DS database all data about exactly the found documents. In case of
  inconsistencies between database and medium, this prevents erroneous deletion
  of "wrong" documents.
  With this option enabled, the medium is not touched; instead, the information about
  which documents are to be exported is taken from the database itself. If the
  database and the medium are consistent with each other, the result is the same in
  both cases.
  Conclusion: Not using this option is the safer variant. Use it only in emergency
  cases, i.e. if the medium is no longer accessible (lost or destroyed).
  Export Partition Name: Using this option, the database "forgets" about the medium
  itself along with the documents stored on it. This option should always be
  enabled (otherwise re-importing the medium later would cause trouble).
• For the sake of data loss protection, never dispose of any archiving volumes!

Export and Import of Storage Media 17-3


Single-instance archiving (SIA) and media export

■ Exporting old media may leave "dead" SIA references behind

■ Export media of SIA-enabled archives with dsTools
  - Standard Livelink Enterprise Archive Server command line tool
  - dsTools listRefs <volume_name>
    • produces a list of volumes and components pointing to the volume
  - Exporting media with dsTools [options] export <volume_name>
    • Default behaviour: does not export a target component
      if references are still pointing to it
    • Option -f ("force"): exports a target component
      even if sources are pointing to it
    • Options -v -v: produces a list of volumes or components
      pointing to the current volume

Export and Import of Storage Media Slide 4

Since single-instance archiving introduces dependencies between documents that may be
stored on different storage media, tools dealing with export and import of documents must take
this into account.
The situation is special because a SIA reference to a document may be created long after the
document itself has been stored; the reference will then probably be stored on a newer
medium than the document. On the other hand, "old" media are normally exported in
chronological order, i.e. the medium containing the originally stored document would be
exported before the reference (on the newer medium) would vanish from the archive.
Against this background, the administrator's challenge is: how to avoid such dead references
during export of expired media?
This is possible using the Livelink Enterprise Archive Server command line tool dsTools
instead of the export facility embedded in the Livelink Enterprise Archive Server
Administration. The dsTools features mentioned above enable the administrator to handle
SIA references across storage media reliably.
See the next page for a description of how to use dsTools for exporting media containing SIA
targets properly.

17-4 710
Steps for exporting a medium of a SIA-enabled archive

1. Check whether SIA references point to this medium
   - dsTools listRefs <volume_name>

If so:

2. Export medium, preserving referenced SIA target documents
   - dsTools export <volume_name>

3. Migrate medium to new media
   - Using the Volume Migration tool (→ chapter Media Migration)
   - Only the preserved SIA targets will be copied

4. Export medium again, forcing complete export
   - dsTools -f export <volume_name>
   - Keeping SIA targets is continued on the media created in step 3

Export and Import of Storage Media Slide 5

Handling the export of media containing SIA targets involves the dsTools tool introduced
on the previous page, as well as the media migration facility discussed in the chapter Media
Migration. The step sequence described above utilizes the fact that the media migration tool
copies only those documents to new media which are known to the DS database. Using
dsTools to "forget" all other documents in advance (step 2) leads to the behaviour desired
here: only those documents are preserved on new media that are still referenced by SIA
"sources", which means that they may be requested for access in the future.
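The default vs. forced export logic can be sketched in a few lines. This is a simplified model for illustration only: `refs` maps a SIA target component to the components still referencing it, and all names are invented:

```python
def export_volume(components, refs, force=False):
    """Simplified model of dsTools export on a SIA-enabled volume.

    By default a target component is kept if references still point to it;
    with -f ("force") it is exported regardless.
    """
    exported, kept = [], []
    for comp in components:
        if refs.get(comp) and not force:
            kept.append(comp)      # still referenced by a SIA source
        else:
            exported.append(comp)  # safe to forget in the DS database
    return exported, kept

refs = {"doc_b": ["ref_on_newer_medium"]}  # doc_b is a referenced SIA target
# Step 2: default run keeps the referenced target for later migration.
exported, kept = export_volume(["doc_a", "doc_b"], refs)
# Step 4: forced run exports everything once the target has been migrated.
forced, _ = export_volume(["doc_a", "doc_b"], refs, force=True)
```

The kept components are exactly those the subsequent volume migration (step 3) would copy to new media before the forced export.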

Export and Import of Storage Media 17-5


Importing a medium

■ Makes medium and its contents known to DS database again
  - No difference to before export

■ Usefulness:
  - Moving stored data to another server
  - Re-importing data that has been erroneously exported

■ Medium must be in jukebox

[Screenshot callouts: Select the menu item corresponding to the medium type.
For unusual situations, e.g. moving documents between logical archives.]

Export and Import of Storage Media Slide 6

Media, once their contents have been exported from the storage database, can be re-imported
again (for example, if they have been exported erroneously). For this, right-click the medium in
the jukebox contents list in the Livelink Enterprise Archive Server Administration, as
illustrated above, and use the appropriate Import ... Partition(s) item of the
Utilities context menu.
For a normal medium import, the default options of the import dialog can be used.

The Import ... Partition(s) windows are GUI versions of the command line tool dsTools.
To see the additional arguments, start dsTools on the command line; all parameters are
then listed.
-q          speeds up the import by not determining component length from the compression
            header
-t <days>   only imports documents newer than <days> days; speeds up recovery from a
            DB crash where the latest DB backup is less than <days> days old
17-6 710

Media Import & Index Reconstruction



■ Media import with Archive Server ≥ 9.6
  - reconstruction of index in DS prevented
  - remembers previously deleted media
  - remembers previously deleted documents
    • tracked in table ds_deleted
  - import of deleted documents denied

■ Media import with Archive Server ≤ 9.5
  - allows reconstruction of DS index
Export and Import of Storage Media Slide 7

In Archive Server versions up to 9.5, when importing media with documents that had
previously been deleted, those documents would be in the system again.
With 9.6, the DS database remembers documents that have been deleted before. This information
is stored in table ds_deleted. If you try to import media containing such deleted documents,
the import of those documents is prevented.
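The behaviour change can be sketched schematically: with a record of previously deleted document IDs, the import simply skips those entries. This is a toy model, not the actual server code:

```python
def import_medium(doc_ids_on_medium, ds_deleted):
    """Schematic model of the >= 9.6 import behaviour.

    Document IDs recorded as previously deleted (table ds_deleted in the
    real system) are never re-imported from a medium.
    """
    imported = [d for d in doc_ids_on_medium if d not in ds_deleted]
    skipped = [d for d in doc_ids_on_medium if d in ds_deleted]
    return imported, skipped

ds_deleted = {"doc_2"}  # previously deleted document
imported, skipped = import_medium(["doc_1", "doc_2", "doc_3"], ds_deleted)
```

Under ≤ 9.5 there was no such record, so the equivalent model would import all three documents, including the deleted one.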

Export and Import of Storage Media 17-7


Exercise: Export / import a medium

■ Export a medium from the DS database

■ Check the status of a sample document stored on the medium
  - With dsClient
  - Trying to retrieve the document

■ Re-import the medium

■ Check the document's status again

Export and Import of Storage Media Slide 8

17-8 710

18 Consistency Checks for Storage Media and Database

Detecting problems between storage media
and storage database

Consistency Checks for Storage Media and Database 18-1



Consistency checks: overview



■ Check database against partition
■ Check partition against database
■ Check only partition
■ Check document
■ Compare backup IXW media

Consistency Checks for Storage Media and Database Slide 2

18-2 710

Check database against partition



■ Are all documents known to the database really stored on the volume?
  - Detects documents missing on storage medium

■ Usefulness
  - After restoring an original WORM from the backup
  - When suspecting damage of a storage medium

■ Medium to be checked must be online

■ Possible reactions on inconsistency
  - Report error, but do not try to repair (check only)
  - Try to recover missing file from other storage volume
    (e.g. from disk buffer to IXW media)
    • Media to copy from must be online
  - Delete "dead" reference from database ("export component")
    → May lead to document loss! Get help from Open Text Support if in doubt.

"Everything stored correctly
on this medium?"

Consistency Checks for Storage Media and Database Slide 3

The "export component" repair option is somewhat dangerous: Depending on the exact type of
inconsistency, you may lose references to documents that are still stored somewhere in the
archive. But even if there is no more instance of the document within the archive, recovering
the document from external sources (if that is applicable) may require the reference
information in the database to still exist.
Therefore: Use this repair option only if you are sure that you do not need the missing
documents any longer! If in doubt, rather contact Open Text Support for help.
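Conceptually, this check compares the set of components the database expects on the volume against what the medium actually holds. A sketch with sets of component IDs (all names invented for illustration):

```python
def check_db_against_partition(db_components, medium_components,
                               repair=False, recover_from=None):
    """Conceptual model of "check database against partition".

    Every component the DS database locates on this volume must actually
    exist on the medium. In repair mode, a missing component is either
    recovered from another online volume (e.g. the disk buffer) or would
    need the risky "export component" step that drops the reference.
    """
    missing = sorted(db_components - medium_components)
    recovered, dead_refs = [], []
    if repair:
        for comp in missing:
            if recover_from and comp in recover_from:
                recovered.append(comp)   # copy back from the other volume
            else:
                dead_refs.append(comp)   # candidate for "export component"
    return missing, recovered, dead_refs

db = {"c1", "c2", "c3"}        # components the database expects
medium = {"c1", "c3"}          # components actually on the medium
missing, recovered, dead = check_db_against_partition(
    db, medium, repair=True, recover_from={"c2"})
```

The check-only mode corresponds to `repair=False`: the missing list is reported but nothing is changed.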

Consistency Checks for Storage Media and Database 18-3



Using consistency check tools (1): Starting a utility

[Screenshot callouts:]
- Start via context menu of storage medium
- Start via Utilities menu
- Parameter entry for utility (example)
Consistency Checks for Storage Media and Database Slide 4

18-4 710

Using consistency check tools (2): The messages window

[Screenshot: The messages window opens when the utility is started]

Consistency Checks for Storage Media and Database Slide 5

Consistency Checks for Storage Media and Database 18-5



Check partition against database (1)



■ Are all documents stored on the volume really known to the database?
  - Detects lost document references in database

■ Usefulness
  - Database recovery
  - When suspecting problems with the database contents

■ Medium to be checked against must be online

■ Possible reactions on inconsistency
  - Report error, but do not try to repair (check only)
  - Recreate missing reference in database ("import document")

"All documents stored on this medium
also referenced by the database?"

Consistency Checks for Storage Media and Database Slide 6

18-6 710

Check partition against database (2)



[Screenshot callouts:]
- Start via context menu of storage medium
- Parameter entry for utility (example)
Consistency Checks for Storage Media and Database Slide 7

Consistency Checks for Storage Media and Database 18-7



Check only partition



■ Is the document structure on the volume consistent?
  - Detects corrupted documents on medium

■ Usefulness
  - When suspecting any kind of problem with a storage medium

■ Medium to be checked must be online

■ Error reporting only - no repair options available

"All documents on this medium
have consistent structure?"

Consistency Checks for Storage Media and Database Slide 8

18-8 710

Check document

■ Is the document stored correctly on media, as known by the database?
  - Detects "lost" storage locations of a single document

■ Usefulness
  - Analyzing trouble accessing a specific document

■ Media carrying components of the document must be online

■ Possible reactions on inconsistency
  - Report error, but do not try to repair (check only)
  - Repair inconsistency:
    • If missing file is still stored on another medium: recover it from there
    • Otherwise: delete the "dead" reference from database
      → May lead to document loss! Get help from Open Text Support if in doubt.

"This document stored correctly
on all referenced media?"

Consistency Checks for Storage Media and Database Slide 9

The repair option of this check utility is somewhat dangerous: if a document component is
missing on the referenced storage volume and it is not known to be stored on any other
volume, the utility will delete this "dead" reference to the missing component. Depending on
the exact type of inconsistency, this may cause the database to "forget" document components
that are still stored somewhere in the archive. But even if there is no more instance of a
missing component within the archive, recovering the component from external sources (if that
is applicable) may require the reference information in the database to still exist.
Therefore: use the repair option only if you are sure that you do not need the missing
document components any longer! If in doubt, rather contact Open Text Support for help.

Consistency Checks for Storage Media and Database 18-9



Compare backup IXW media



■ Are WORM backups consistent with the original?
  - Detects corrupt IXW backups

■ Usefulness
  - When suspecting corruption of WORM backups

■ Original and backup IXW media must be online

■ Error reporting only - no repair options available

"Are backup(s) consistent
to this original IXW medium?"

[Figure: original medium compared against its backup(s)]

Consistency Checks for Storage Media and Database Slide 10

18-10 710

Exercise: Check and repair consistency between medium and database

■ Check IXW or ISO media against database
  - Without repairing (reporting only)
  - Examine results

■ If necessary, run check again with repair option
  - Examine results

■ Run check again
  - Examine results; no errors should occur now

Consistency Checks for Storage Media and Database Slide 11

Consistency Checks for Storage Media and Database 18-11



18-12 710

19 Expanded Archive Server Installations


Improving data safety and performance by
increasing redundancy

Expanded Archive Server Installations 19-1


Chapter overview

Possible major configurations:

■ Archive Server - basic configuration
  - Optical disks as originals plus backups in same jukebox (= standard)
  - Backup copies in a separate jukebox

■ RemoteStandby
  - Remotely replicated archives and buffers

■ HotStandby
  - Automatic failover system

■ CacheServer
  - Separate server minimizing network load for read & write access
Expanded Archive Server Installations Slide 2

19-2 710
OPEN TEXT

Local Backup of Media



■ Backup of media by Archive Server

■ Backup in same storage device or another device

■ Copies of ISO images are treated as one logical volume
  - Load balancing during read
  - Read failover if one medium not available

[Figure: Livelink Enterprise Archive Server with Copy of A and Copy of B;
identical copies of an ISO image form one logical volume]
Expanded Archive Server Installations Slide 3

Expanded Archive Server Installations 19-3



Single Archive Server with separate backup jukebox

[Figure: Archive Server with RAID storage, original jukebox, and a second
jukebox for backup copies; a fire protection wall between them is possible]

Expanded Archive Server Installations Slide 4

An Archive Server with one jukebox and backup copies of the media is the standard minimum
configuration. RAID 1 or 5 is used for the Archive Server's hard disk space.
This scenario assures that all data archived on optical disks is stored on duplicate partitions.
As the duplicate partitions are produced in the same physical jukebox where the originals
reside, the duplicates should be removed to a safe place for maximum security.
To improve protection against hardware failure and natural disaster you can create backup
copies in a separate jukebox.

19-4 710

RemoteStandby
[Figure: Archive Server replicating over a WAN to a RemoteStandby server]

■ Periodic data replication
  - On storage media level
  - Configurable per logical archive / disk buffer

■ Local read access
  - Minimizes network load
  - Load distribution on involved servers

■ "Full read access" in case of failover
  - With certain scenario restrictions
Expanded Archive Server Installations Slide 5

This configuration supports remote replication. With it you can replicate archives and pools. In
the RemoteStandby scenario, a fully functional remote Archive Server replicates the archived
data of an original Archive Server over great distances via a WAN connection. The
configuration is implemented from the RemoteStandby server (a maximum of three
RemoteStandby servers can be configured). The RemoteStandby server asynchronously
replicates the archives and hard disk buffers of the desired original server. The replication
interval is specified on the RemoteStandby server; replication is performed as a "synchronize"
function from the RemoteStandby.
A RemoteStandby server provides read access to its replicated archives. Should anything
happen to the original server, all archived documents present at the time of the last replicate
synchronization can be retrieved from the RemoteStandby server.
The configuration "original archive - RemoteStandby archive" may be a reciprocal one: an
Archive Server may be an original server as well as a RemoteStandby server for a second
original server. This configuration provides two major advantages. First, you exploit the
available hardware by giving it double duty. Second, access to a document retrieved from a
local replicated archive is much faster than retrieving the identical document from the original
server thousands of miles away.

Expanded Archive Server Installations 19-5


Architecture of Remote Standby (1)

Logical archives are replicated: disk buffers and media

[Diagram: storage systems of the original Archive Server replicated to the
storage systems of the Remote Standby server]

■ Fully replicated archive server
■ Performance balancing possible
■ Full read access in case of failover (original server not available)

Note:
■ No synchronous write! Difference between original and backup possible
■ One RSB: same media are required, but not same devices
  Exception: HD-WO to DVD, WORM

Expanded Archive Server Installations Slide 6
Expanded Archive Server Installations Slide 6

19-6 710

Architecture of Remote Standby (2)



Expanded Archive Server Installations Slide 7

Expanded Archive Server Installations 19-7


Switch over to remote standby ...

■ User waits for response
■ Wait up to 120 seconds before switch over
■ Up to 3 servers can be configured
■ Server priorities can be defined

Remote Standby Server


Expanded Archive Server Installations Slide 8

The next three slides describe the process of retrieving a document in case of a failure of a
piece of hardware.
We assume that there are three Archive Servers in the company. The clients connect by
default to server one.
Server priorities:
In a remote standby configuration, documents can be requested from both the original
server and the remote standby servers. You use this command to define the sequence
in which the servers are accessed for each replicated archive. It is usually quickest and
most efficient to access the closest server.
It does not matter on which server you specify the server access sequence; the setting
affects all the known servers.
19-8 710

Scenarios for Remote Standby Server



■ Only read access - archiving not possible
  - Scanned documents can be buffered in local pipelines
    for a while
  - Supported by Livelink Enterprise API, CacheServer, Viewer,
    Livelink for Microsoft Exchange/Lotus Notes Edition

■ Useful scenarios
  - Early & late archiving with barcode
  - COLD
  - DocuLink
  - eCONtext for Applications (UniversalArchive)

■ Not useful / not available:
  - SAP Data Archiving
  - PDMS (workaround: use CacheServer)
Expanded Archive Server installations Slide 9

Expanded Archive Server Installations 19-9



[Figure: active Archive Server and dormant HotStandby server sharing
RAID 1 disk storage; original jukebox and backup jukebox, with a fire
protection wall possible between them]

■ Automatic failover system

■ Symmetric cluster via fiber optic connection
  - Same topology for all platforms, Windows and Unix
Expanded Archive Server Installations Slide 10

A HotStandby server is the key component of this configuration. The Archive Server high
availability system guards against loss of time as well as against loss of data. This
scenario provides a fully functional second Archive Server capable of taking over operations
automatically if the original server should fail for any reason. The HotStandby server monitors
the original server; in the case of system failure, the HotStandby takes over automatically. In
Archive Server, this is referred to as an automatic failover system.
In the automatic failover configuration, two Archive Servers access the same RAID-protected
hard disk partitions, although not at the same time. The HotStandby server is connected to one
or more jukeboxes containing backups of the original archived documents. This is
implemented by backup jobs that run regularly between the two. The hard disk buffer and
pools are shared, and they are protected with RAID 1.
If the original server should fail, the HotStandby starts automatically, working with the data
stored on the commonly accessed hard disk partitions. By means of a fire protection wall, the
automatic failover scenario can protect against the threat of fire.
Distances of up to several kilometers between the cluster nodes are possible.

19-10 710

Hot Standby Server with two hubs

[Diagram: Server1 (active) and Server2 (dormant), each connected via
SCSI / fiber channel to a shared disk array; each server has its own
jukebox]
Expanded Archive Server Installations Slide 11

Expanded Archive Server Installations 19-11


Hot Standby and Remote Standby Server

- Replicate disk buffer and replicate partitions over the network
- Crosswise backup of media/partitions

Expanded Archive Server Installations Slide 12

19-12 710

Features of a Hot Standby Server



- Full system availability after approx. 5 minutes
- Clients that did not access the Archive Server during the downtime
  will not notice the server swap
- Read/write access in case of failover
  - No mirroring of storage systems
- Recommended for critical scenarios
  - Availability > 99 %
  - High access rates
  - Workflow scenarios
Expanded Archive Server Installations Slide 13

Expanded Archive Server Installations 19-13


Cache Server

[Diagram: Archive Server connected over a WAN to a CacheServer in the
local area network; only the first read access of a document is
transferred over the WAN]

- Acts as a proxy server
- Caches transferred documents on hard disk
  - No optical storage equipment
  - No storage database
- Reduces WAN load for read & write requests
  - Write-through cache for archiving actions
  - Requires HTTP communication for archiving
- m Archive Servers : n CacheServers possible
- Not suitable for data recovery purposes

Expanded Archive Server Installations Slide 14

The CacheServer keeps a copy of every document that has been viewed in your local
network. This prevents the WAN link from becoming a performance bottleneck for further
read accesses. When a document is archived, the CacheServer transmits the document to
the connected Archive Server immediately ("write-through") and keeps a copy in its local
cache.

In addition to enhancing read request performance, the write-back cache feature
(Archive Server ≥ 9.6.1) can also reduce WAN load for write requests: writing to the
Archive Server is delayed and can be performed, e.g., during the night when less load is
expected.
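The write-through behavior described above can be sketched as follows. This is an illustrative Python sketch, not the CacheServer's actual implementation; the dict-backed `backend` merely stands in for the remote Archive Server.

```python
class WriteThroughCache:
    """Sketch of write-through caching: every archive (write) request is
    forwarded to the backend immediately, and a local copy is kept so
    later read requests are served without crossing the WAN."""

    def __init__(self, backend):
        self.backend = backend   # stands in for the remote Archive Server
        self.cache = {}          # local hard-disk cache (a dict in this sketch)

    def archive(self, doc_id, content):
        self.backend[doc_id] = content   # write through immediately
        self.cache[doc_id] = content     # keep a local copy

    def read(self, doc_id):
        if doc_id in self.cache:         # cache hit: no WAN transfer needed
            return self.cache[doc_id]
        content = self.backend[doc_id]   # cache miss: fetch once over the WAN
        self.cache[doc_id] = content     # cache for subsequent reads
        return content
```

The write-back variant mentioned above would differ only in `archive`: the forwarding step would be deferred to a scheduled flush instead of happening synchronously.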

19-14 710
Cache Server Scenario


Central Archive Servers & Storage Devices


Logical Archives: EU, Japan, US

Logical Archive: Japan


CacheServer

Europe

Expanded Archive Server installations Slide 15

Expanded Archive Server Installations 19-15



Cache Server

- Decentralized data storage (copy of stored data)
- Separate hardware
  - Just a big file server
  - No storage devices needed
  - No database needed
- Write-through cache or scheduled write-back
- Read once
  - Document validity is checked on a new read request
- Documents are stored in hard disk cache (FIFO or LRU)
- Transparent for clients
  - Supported for Windows and Java Viewer
- Project-specific prefetching possible

Expanded Archive Server Installations Slide 16

Some specific Cache Server aspects:

- Timestamp verification is done by the Archive Server.
- The Cache Server cannot be used in a stand-alone mode when the Archive Server
  cannot be reached.
- If a document is out of date, only the outdated component (not the whole document)
  is sent anew.
- The Cache Server supports the FIFO (first in, first out) and LRU (least recently
  used, since Archive Server ≥ 9.6.1) caching strategies.
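The difference between the two eviction strategies can be shown with a toy cache. This Python sketch is not CacheServer code; it only demonstrates how FIFO ignores read accesses while LRU refreshes an entry's position on every read.

```python
from collections import OrderedDict

class MiniCache:
    """Toy cache illustrating the two eviction strategies: FIFO drops the
    oldest insertion regardless of reads; LRU drops the entry that was
    accessed least recently (reads refresh an entry's position)."""

    def __init__(self, capacity, strategy="FIFO"):
        self.capacity = capacity
        self.strategy = strategy
        self.items = OrderedDict()   # front of the dict is the eviction candidate

    def put(self, key, value):
        if key not in self.items and len(self.items) >= self.capacity:
            self.items.popitem(last=False)       # evict the front entry
        self.items[key] = value

    def get(self, key):
        value = self.items[key]
        if self.strategy == "LRU":
            self.items.move_to_end(key)          # reads refresh LRU order only
        return value
```

With FIFO a recently read document can still be evicted first; with LRU the same read would protect it.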

19-16 710

Input Scenario and Local Cache Server



Local sites:
- Scanning
- Retrieval mainly of local documents
- Documents are fetched from the local CacheServer
- No central document access necessary for reading (only DocumentInfo)
- Central access to index data

Central Archive Server with index database:
- IT infrastructure
- Optical devices
- Administration
- Backup
Expanded Archive Server Installations Slide 17

Expanded Archive Server Installations 19-17


Remote Standby Server versus Cache Server

Remote Standby Server:
- Security & availability
- Copy of entire logical archive guaranteed
- Additional storage devices for the Remote Standby server necessary
- High (read-only) access rates
- Temporary differences between original and backup possible
- Two replicates possible
- Less performance during write

Cache Server:
- Volatile document storage
- Documents can be swapped out (FIFO or LRU)
- Suited for many small local sites (< 100 users)
- No storage devices in local site / no administration / no database
- Less net traffic when using local scanning with subsequent read access
- Less investments

Expanded Archive Server Installations Slide 18

19-18 710

Expanded Archive Server Installations Slide 19

This overview shows the main advantages of the different solutions. No solution alone can
protect you against every potential problem. Each Archive Server customer has to choose an
optimal solution according to risks, cost, and main concerns.

Expanded Archive Server Installations 19-19



19-20 710

20 Remote Standby Configuration and Operating


Steps for remote storage replication

Remote Standby Configuration and Operating 20-1



Chapter guide

- Introduction
- Configuring replication
  - Servers
  - Logical archives
  - Disk buffers
- Providing media replicates on Remote Standby server
- Executing and reviewing the replication
- Replication with Storage Systems: Examples

Remote Standby Configuration and Operating Slide 2

20-2 710
Basic concept and benefits

[Diagram: original Archive Server replicating over a WAN to a Remote
Standby Archive Server; local storage protected with RAID 1]

- Periodic data replication
  - On storage media level
  - Configurable per logical archive / disk buffer
- Local read access
  - Minimizes network load
  - Load distribution on involved servers
- "Full read access" in case of failover
  - With certain scenario restrictions

Remote Standby Configuration and Operating Slide 3

This configuration supports remote replication. With it you can replicate archives and pools. In
the RemoteStandby scenario, a fully functional, remote Archive Server is capable of replicating
the archived data of an original Archive Server over significantly great distances by virtue of a
WAN connection. The configuration is implemented from the RemoteStandby server (a
maximum of three RemoteStandby servers can be configured). The RemoteStandby server
asynchronously replicates the archives and hard disk buffers of the desired original server.
The replication interval is specified on the RemoteStandby server. It is performed as a
"synchronize" function from the RemoteStandby.
A RemoteStandby server provides read-access to its replicated archives. Should anything
happen to the original server, all archived documents present at the time of the last "replicate
synchronization" can be retrieved from the RemoteStandby server.
The configuration "original archive - RemoteStandby archive" may be a reciprocal one. An
Archive Server may be an original server as well as a RemoteStandby server for a second
original server. This configuration provides two major advantages. First, you exploit the
hardware available to you by giving it double-duty. Second, access to a document retrieved
from a local replicate archive is much faster than retrieving the identical document from the
original server thousands of miles away.

Remote Standby Configuration and Operating 20-3


Proposal: central backup for multiple servers

Local Archive Server 1

Local Archive Server 2

Local Archive Server n

[Diagram: local Archive Servers 1..n connected via LAN / fast WAN to a
central Remote Standby server with SCSI-attached storage]

Remote Standby Configuration and Operating Slide 4

One style of RemoteStandby operation is the one illustrated above: Having multiple original
servers, possibly geographically distributed, plus one central RemoteStandby server backing
up all of them.

20-4 710

How replication is performed

Remote Standby Configuration and Operating Slide 5

For RemoteStandby replication, the following distinction must be considered:


- It is configured on the level of disk buffers and logical archives.
- It is performed on the level of storage volumes.

In addition to setting up the replication configuration for disk buffers and logical archives, the
server administrator or operator has to perform the following steps:
- For each replicated disk buffer, replicates of all assigned original buffer volumes (i.e.
  hard disk partitions) must be provided and initialized on the RemoteStandby server.
- For all original IXW media used by replicated archives, replicate IXW media must be
  provided and initialized on the RemoteStandby server (this task may be automated).
  This is an ongoing task, since new IXW media will be allocated by the original server
  regularly.
- ISO media, however, do not need to be initialized explicitly on the RemoteStandby
  server; it is sufficient to provide enough blank media there. The replication job will take
  an arbitrary blank medium and fill it whenever a new medium has been written on the
  original server.
The remaining media operation tasks - labelling, storing backups away, setting offline and
online as needed - are the same for replicates as for media on the original server; see
chapter Media Operating for more information.

Remote Standby Configuration and Operating 20-5


Chapter guide
- Introduction
- Configuring replication
  - Servers
  - Logical archives
  - Disk buffers
- Providing media replicates on Remote Standby server
- Executing and reviewing the replication
- Replication with Storage Systems: Examples

Remote Standby Configuration and Operating Slide 6

20-6 710
Globally enable remote replication on original server

On original server:

[Screenshot: Server Configuration tree with entries Administration
Client (ADMC), Administration Server (ADMS), Document Pipeline (DP),
Document Service (DS), Accounting and Statistics, and Cache
configuration; the backup option is highlighted]
Remote Standby Configuration and Operating Slide 7

The first configuration step is to enable the backup option - illustrated above - on the
original server (unless it is already enabled, especially if the server has already been involved
in a Remote Standby setup). This option makes the server record all changes to hard disk
volumes (of disk buffers or hard disk pools (HDSK. FS, VI» for promoting them to the Remote
Standby server later.
For checking/setting the option, the Server Configuration page of the Archive Server
Administration can be used (as shown above); see chapter Where to Find What for more
information about this.

Remote Standby Configuration and Operating 20-7


Make original and Remote Standby servers known to each other

On Remote Standby server:

Remote Standby Configuration and Operating Slide 8

Before replication of logical archives and disk buffers can be configured, both involved
Archive Servers must be made known to each other, as illustrated above, in the Archive
Server Administration.
When making the Remote Standby server known to the original server, make sure to enable
the Allow replication option; otherwise the original server will deny sharing its data
with the Remote Standby server.

20-8 710

Configure replication of logical archive


[Screenshot: configure backup properties of replicated pools]

Remote Standby Configuration and Operating Slide 9

As soon as original and Remote Standby servers know each other (see previous page),
configuring the replication of a logical archive is fairly easy:
1. Within the Archive Server Administration, connect to the Remote Standby server.
2. Within tab Servers, structure item Known Servers, navigate to the desired logical
archive on the original server.
3. Right-click the archive, choose Replicate from the context menu, and confirm the "do
you really want ... " dialog (not shown above).
4. A dialog Edit Replicate ... Pool will be displayed, asking you to configure the
properties of the pool replicate. These properties are only a subset of a "normal" media
pool's properties; they are just those related to asynchronous media backup - here
they will be applied to remote replication. Configure these properties as desired; see
chapter Configuring Logical Archives for a general discussion about their meaning.
If the original logical archive possesses more than one media pool, this step will be
repeated for each further pool.
These steps tell the Remote Standby server to perform remote replication for the chosen
logical archive. However, before the replication can be performed, appropriate storage media
have to be provided on the Remote Standby server; see section Providing media replicates on
Remote Standby server later in this chapter for more information.

Remote Standby Configuration and Operating 20-9


Configure disk buffer replication

Result: the replicate may have a different name (to avoid naming conflicts)

Remote Standby Configuration and Operating Slide 10

For a complete remote replication, it is necessary to replicate all original disk buffers as well, in
order to grasp also those documents that have not yet been written to optical media when
replication starts.
To configure replication for a disk buffer:
1. Within the Archive Server Administration, connect to the Remote Standby server.
2. Within tab Servers, structure item Known Servers, navigate to the desired disk
buffer on the original server.
3. Right-click the disk buffer and choose Replicate from the context menu.
4. In the Replicate buffer dialog, enter a name for the disk buffer replicate. This can
be the original disk buffer name - unless the Remote Standby server itself has already
a disk buffer with the same name; in this situation, a different name must be specified.
These steps tell the Remote Standby server to perform remote replication for the chosen disk
buffer. However, before the replication can be performed, appropriate hard disk volumes have
to be assigned to the buffer replicate; see section Providing media replicates on Remote
Standby server later in this chapter for more information.

20-10 710
Chapter guide

- Introduction
- Configuring replication
  - Servers
  - Logical archives
  - Disk buffers
- Providing media replicates on Remote Standby server
- Executing and reviewing the replication
- Replication with Storage Systems: Examples

Remote Standby Configuration and Operating Slide 11

Remote Standby Configuration and Operating 20-11


Replication status in the administration client

[Screenshot: server tree with Archives, Cache Partitions, Buffers
(Buffer1, Buffer2), and Devices]

The chart above reveals how the replication configuration and status is reflected in the Archive
Server Administration (when you are connected to the Remote Standby server):
- For each replicated logical archive and disk buffer, you can see the name of the server
  hosting the "original" archive or buffer.
- Additionally, for each replicated disk buffer you see the name it has on its original
  server. This is necessary because the original buffer and its replicate may have
  different names (to avoid naming conflicts).
- For each storage medium assigned to the original archive or disk buffer, you see the
  state of its replicate. The most important information here is whether a replicate
  already exists or not. If a replicate is missing, the administrator has to provide an
  appropriate medium for this purpose; the exact way depends on the type of medium:
  - Missing IXW and hard disk media have to be initialized explicitly before
    replication can be carried out.
  - Missing ISO media do not need to be initialized; it is sufficient to supply empty
    media in the jukebox of the Remote Standby server.
Details about handling media replicates are given on the following pages.

20-12 710

Initialize disk buffer partition replicate



Remote Standby Configuration and Operating Slide 13

If a hard disk partition replicate is marked as "missing" on the Remote Standby server, a
suitable hard disk partition has to be provided for that purpose. Firstly, create such a partition
on operating system level. Make sure its capacity is at least the same as the original partition,
otherwise not all data held in the original can be replicated later!
To dedicate the created partition to the purpose of replication, connect to the Remote Standby
server with the Archive Server Administration and follow the steps illustrated above:
1. In tab Servers, choose Devices > HardDisk in the left-hand structure display.
2. Right-click the empty space in the right-hand Partitions list and choose Create
   from the pop-up menu.
3. In the Create HardDisk Partition dialog, choose option Create as
   replicated partition.
4. Click button Select Partition.
5. In the Select Replicated Partition dialog, select the name of the original
   partition you are preparing the replicate for, then confirm.
6. Specify the Mount path of the hard disk partition you have prepared on operating
   system level and confirm.
You can then review the replicate status by selecting the disk buffer replicate in the
Servers structure. The assigned partition should now appear with type "replicate".
These steps have to be performed for all hard disk partitions of replicated disk buffers and
hard disk pools.
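The capacity requirement above (each replicate partition at least as large as its original) can be checked mechanically. The following Python helper is illustrative only; the partition names and sizes in bytes would have to come from your own inventory or operating system tools:

```python
def undersized_replicates(originals: dict, replicates: dict) -> list:
    """Return the names of replicate partitions that are smaller than
    their originals (all sizes in bytes). Such replicates cannot hold
    all data once the original partition fills up. A partition missing
    from the replicates mapping counts as size 0, so it is reported too."""
    return [name for name, size in originals.items()
            if replicates.get(name, 0) < size]
```

Running this before the first synchronization surfaces sizing mistakes early, instead of discovering them when a replication run fails.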

Remote Standby Configuration and Operating 20-13


Initialize IXW media replicate

Remote Standby Configuration and Operating Slide 14

For each IXW medium added to the pool of a replicated archive, a corresponding IXW
medium replicate has to be initialized on the Remote Standby server. You recognize the
necessity of this action by the status "missing".
To initialize an empty IXW medium on the Remote Standby server for replication, connect
to the Remote Standby server with the Archive Server Administration and follow the steps
illustrated above:
1. In tab Servers, choose Devices, then click the name of the IXW jukebox that
   shall contain the IXW medium replicate. Right-click an IXW media partition in the
   right-hand Partitions list and choose Init from the pop-up menu.
   Note: When choosing an IXW medium for replication, make sure it has the same block
   size and capacity as the corresponding original - otherwise replication will not
   work! (This is no issue as long as the same type of IXW media is used on all
   involved servers.)
2. In the Initialize Jukebox Partition dialog, choose option Create as
   replicated partition.
3. Click button Select Partition.
4. In the Select Replicated Partition dialog, select the name of the original WORM
   you are preparing the replicate for, then confirm.
5. Confirm the Initialize Jukebox Partition dialog. You can then review the
   replicate status by selecting the pool of the archive replicate in the Servers
   structure. The initialized IXW medium should now appear with type "replicate".
These steps have to be performed for every new IXW media volume of replicated logical
archives with IXW media pools.

20-14 710

Provide empty media for ISO media replication

- No initialization of replicates necessary
- Provide blank media in jukebox
  - Available blanks must match originals in type and capacity

Remote Standby Configuration and Operating Slide 15

For each newly burned ISO medium of a replicated archive, the replication job wants to create
a corresponding replicate on the Remote Standby server. You recognize this "waiting" state by
the status "missing".
As opposed to hard disk and WORM volumes - discussed on the previous pages - ISO
media replicates do not need to be initialized in advance. Instead, the replication job picks an
available blank medium from the jukebox and performs the steps of initializing and assigning
implicitly during the writing process.
It is the administrator's task to always provide suitable blank media in all Remote Standby
jukeboxes. This includes the condition that replicate media must have the same type and
capacity as the originals used. Moreover, consistent usage of either single- or double-sided
DVDs for both originals and replicates is strongly recommended (although not strictly
necessary).

Remote Standby Configuration and Operating 20-15



Chapter guide
- Introduction
- Configuring replication
  - Servers
  - Logical archives
  - Disk buffers
- Providing media replicates on Remote Standby server
- Executing and reviewing the replication
- Replication with Storage Systems: Examples

Remote Standby Configuration and Operating Slide 16

20-16 710

The replication job


[Screenshot: job list showing the Synchronize_Replicates job and an
ISO write job, each with its schedule]

- Schedule the job to run, e.g., once a day
- Preferably let it run in periods of low system workload
  - E.g. during the night
- Consider impact on network load during replication
  - Especially when replicating DVDs
- Check for success or failure regularly

Remote Standby Configuration and Operating Slide 17

The replication job named Synchronize_Replicates is a predefined job on every Archive
Server. It must be executed on the Remote Standby server to perform the replication.
For successful replication, it is crucial to review the results of this job regularly for either
success or failure. This is done the same way as for all periodic jobs: In the Jobs tab of the
Archive Server Administration (shown above), right-click the jobs list and choose Protocol
from the pop-up menu. Then examine the displayed list of protocol items for job terminations
marked with ERROR or ABORT. If such an error item is present, right-click it and select
Messages from the pop-up menu to examine the logging output of the selected job invocation.
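A recurring review like this can also be scripted against an exported list of protocol entries. The line format used here (timestamp | job name | status) is purely hypothetical and serves only to illustrate filtering for ERROR and ABORT terminations; the authoritative view remains the Jobs tab of the Archive Server Administration.

```python
def failed_runs(protocol_lines):
    """Filter job protocol entries for failed terminations. Each input
    line is assumed to hold three pipe-separated fields:
    timestamp | job name | status. Returns (timestamp, job, status)
    tuples for every entry marked ERROR or ABORT."""
    failures = []
    for line in protocol_lines:
        timestamp, job, status = (field.strip() for field in line.split("|"))
        if status in ("ERROR", "ABORT"):
            failures.append((timestamp, job, status))
    return failures
```

Each returned entry would then be examined in detail via the job's Messages output, as described above.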

Remote Standby Configuration and Operating 20-17


Review status of replicates

On Remote Standby server / on original server:

- Similar for replicated ISO media and disk buffer partitions
- Info: where is the replicate located?

Remote Standby Configuration and Operating Slide 18

The current status of media replicates can be reviewed in the Archive Server Administration as
illustrated above. The most important information is the point of time when a replicate medium
has last been accessed for writing ("Last Backup/Replication").
In addition, on the Archive Server hosting the original archive or disk buffer, you see the name
of the Remote Standby server holding the displayed replicate. This is important if an archive or
disk buffer is replicated to more than one Remote Standby server; in this situation, you have
the complete overview which of the different replicates are kept where and have been
synchronized with the original when.
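When an archive is replicated to more than one Remote Standby server, the "Last Backup/Replication" timestamps can be checked for staleness. This Python sketch is illustrative only; the mapping of medium names to last synchronization times is assumed to be collected by the administrator from the displays described above.

```python
from datetime import datetime, timedelta

def stale_replicates(last_sync: dict, now: datetime,
                     max_age: timedelta) -> list:
    """Flag replicate media whose last synchronization lies further in
    the past than max_age. last_sync maps medium name -> datetime of
    the last successful replication. Returns the sorted names of all
    media that are overdue for synchronization."""
    return sorted(name for name, synced in last_sync.items()
                  if now - synced > max_age)
```

A daily replication schedule with a two-day threshold, for example, would flag any medium that missed two consecutive runs.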

20-18 710

Exercise: Configure and perform RemoteStandby replication

" Configure remote replication for


a logical archive
- Cooperate with your classroom
neighbor: One has the original, one
has the Remote Standby server

'" Configure replication for a disk


buffer (optional)
10 Supply media replicates where
necessary
" Execute replication job
- Examine result

Remote Standby Configuration and Operating Slide 19

Remote Standby Configuration and Operating 20-19



Chapter guide
- Introduction
- Configuring replication
  - Servers
  - Logical archives
  - Disk buffers
- Providing media replicates on Remote Standby server
- Executing and reviewing the replication
- Replication with Storage Systems: Examples

Remote Standby Configuration and Operating Slide 20

20-20 710

EMC Centera: ISO images - Remote Standby

[Diagram: original Archive Server with Centera storage replicating to
a Remote Standby Archive Server with either Centera or optical
storage]

- Replication either on Centera or optical storage

Remote Standby Configuration and Operating Slide 21

- Replication either Centera -> Centera or Centera -> Optical
- Replication by Archive Server only

Remote Standby Configuration and Operating 20-21


EMC Centera: Single file - Remote Standby

[Diagram: original Centera-backed Archive Server replicating to a
Remote Standby Centera-backed Archive Server; logical archives are
replicated: disk buffers and content]

- Replication by Archive Server only
- Single file replication only Centera -> Centera

Remote Standby Configuration and Operating Slide 22

20-22 710

HDS DRU / HP XP: ISO - Remote Standby

[Diagram: original Archive Server on HDS DRU / HP XP replicating to a
Remote Standby Archive Server on HDS DRU / HP XP; logical archives
are replicated: disk buffers and media]

- Replication by Archive Server only
- ISO images are replicated
- ISO images can be stored on the RSB on other media with pool type
  Write-at-once (ISO)

Remote Standby Configuration and Operating Slide 23

HDS DRU / HP XP - LUN Security XP Extension

Remote Standby Configuration and Operating 20-23


IBM DR550: Single file & ISO - Remote Standby

[Diagram: original DR550-backed Archive Server replicating to a
Remote Standby DR550-backed Archive Server; logical archives are
replicated: disk buffers and media]

- Replication by Archive Server only
- Single documents & ISO images are replicated
- ISO images can be stored on the RSB on other media with pool type
  Write-at-once (ISO)

Remote Standby Configuration and Operating Slide 24

20-24 710

NetApp Filer: Single file & ISO - Remote Standby

[Diagram: local system (Archive Server with filer as hard disk)
replicating to a Remote Standby system (Archive Server with filer as
hard disk)]

- Replication by Archive Server only
- Single documents & ISO images are replicated
- ISO images can be stored on the RSB on other media with pool type
  Write-at-once (ISO)

Remote Standby Configuration and Operating Slide 25

Remote Standby Configuration and Operating 20-25


20-26 710

21 Setting up an Administrator Workstation


Prerequisites and tools

Setting up an Administrator Workstation 21-1


Chapter overview

- Admin workstation requirements
- Installing the Archive Server administration tools
- Additional considerations

Setting up an Administrator Workstation Slide 2

21-2 710

Requirements for an administrator's workstation

" Graphical Archive Server administration tools installed


- Supported platforms:
~ Windows
, AIX, HP-UX, Solaris, Linux
For details, see Archive Server Release Notes in ESC

" Remote access to Archive Server


- At least: File access __
• For viewing logfiles and configuration
- Good: Shell access (telnet, ssh, ...)
• For using command line tools
Ideal: Access to graphical console or screen
• Windows: Remote Desktop or 3rd-party tools (e. g. Dame Ware, pcAnywhere)
~ Unix: X server
" Archive Server admin tools may be used on the Archive Server,
no installation on administrator workstation necessary

Setting up an Administrator Workstation Slide 3

The three graphical administration tools should be installed on the computer the Archive
Server administrator uses. This makes it possible to administer the Archive Server remotely.
However, if the admin workstation has graphical remote access to the Archive Server, you
may prefer to use the administration tools on the Archive Server directly; in this case, you can
omit installing the admin tools on your own workstation computer.

Archive Server Release Notes in ESC:


https://esc.ixos.com/10B0543304-759

Setting up an Administrator Workstation 21-3


Installing the graphical administration tools

Installing on Archive Server or stand-alone client:
- Install from Archive Server Installation CD
- Setup Administration Client and other requested components
  (i.e. DocPipeline Info)

Installing on Enterprise Scan client (≥ 9.5):
- Special version of Administration Client on Enterprise Scan client CD
- Install from Enterprise Scan Installation CD

Never mix components from server/client CDs with each other!

Setting up an Administrator Workstation Slide 4

For installations using Archive Server ≥ 5.x, components of the Archive Server and the
clients (Viewer & Scan clients) should never be mixed on one machine.
Therefore, never install client components on the Archive Server.

If you want to use the administration client on an Enterprise Scan machine, install it
from the Enterprise Scan client CD (≥ 9.5).

See also the reference to the patch for Enterprise Scan 5.1 on the next slide.

21-4 710
QPENTEXT

Additional considerations

- Release dependency
  - Admin client release ≥ Archive Server release
  - Important for administering Archive Servers with different releases
- Installation on Archive Server
  - Always recommended (for local use)
  - If missing: can be post-installed without problems
    - Precondition: installed Archive Server and admin tools must have the
      same version
- Installation on a scanning station with IXOS Enterprise Scan 5.1
  - No installation with standard setup tool possible
  - Use patch SV55-007 instead
    - Contains a separate admin tool setup for this purpose

Setting up an Administrator Workstation Slide 5

Setting up an Administrator Workstation 21-5


21-6 710

22 Periodic Jobs
Organizing recurring tasks on the Archive Server

Periodic Jobs 22-1


Chapter overview

- Tasks for jobs
- Rules for running jobs simultaneously
- Maintaining jobs in the administration client

Periodic Jobs Slide 2

22-2 710

Tasks for jobs: synopsis (1)



Periodic Jobs Slide 3

This and the following page give a complete list of tasks that are normally done as jobs
on the Archive Server. (It is possible, however, to implement jobs for further
administrative tasks, but this is beyond the Archive Server standard; the topic is
discussed in the course 715 Archive Server Advanced Admin.)

Archive Server jobs are created and set up at different points of time:
- All jobs related to media pools / disk buffers are created (and also deleted) along
  with the pool/buffer they are linked to; there is no need to self-define jobs for
  these purposes.
- Some jobs fulfilling "global" administrative tasks (i.e. not related to specific
  configuration objects, such as pools and buffers) are already part of the initial
  Archive Server setup; this applies to all above-mentioned jobs with a given standard
  job name.
- Some other "global" jobs are not set up at server installation time; you have to
  create them yourself once you need their functionality. Concerning the list of jobs
  given above, this applies to the "start media migration" task.
The typical schedule entries in the table above are meant as very general suggestions, valid
only unless special preconditions apply. The administrator is responsible to deviate from these
rules of thumb wherever necessary. Example: If the average amount of stored data received
daily into a specific ISO media archive exceeds the capacity of one ISO media, the ISO write
job has to be scheduled to run more than once a day - otherwise documents would
continuously queue up in the disk buffer.
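The rule of thumb in the example above can be turned into a quick back-of-the-envelope calculation. This Python sketch is an illustration only (not an Archive Server tool); it computes the minimum number of daily ISO write job runs from the average daily data volume and the capacity of one medium:

```python
import math

def iso_write_runs_per_day(daily_bytes: int, media_capacity_bytes: int) -> int:
    """If the data archived per day exceeds one ISO medium's capacity,
    the write job must run more than once a day so documents do not
    queue up in the disk buffer. Returns the minimum number of daily
    runs (at least one, even on quiet archives)."""
    return max(1, math.ceil(daily_bytes / media_capacity_bytes))
```

For example, with roughly 12 GB archived daily onto single-layer DVDs of about 4.7 GB, the job would need at least three runs per day.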
The resource-critical column in the table above indicates which jobs have to be scheduled
with special care: they allocate/consume certain resources (like media drives or database
activity) to fulfill their tasks, thus they should be scheduled in a cooperative way so
that they do not lock each other out. (More information is given later in this chapter.)

Periodic Jobs 22-3


Tasks for jobs: synopsis (2)

• Update information about related ("known") Archive Servers
• STORM files backup
• Reorganize STORM statistics
• Reorganize accounting data
• Remove old job protocol entries
• Remove expired ACMC alerts
• Clean up old admin audit entries

Periodic Jobs Slide 4

All jobs mentioned above under "global, server-related" are already part of the initial Archive
Server setup. The task remaining for the administrator is to schedule them appropriately as
part of the overall job scheduling concept.
See the previous page for the meaning of the typical schedule and resource-critical
columns.

Audit Trails are available with Archive Server ≥ 9.6. Old administrative audit entries need to be
cleaned up regularly. See the chapter on Configuring Audit Trails for details.

22-4 710

Running several jobs simultaneously


• Generally, multiple jobs can be executed at the same time
• Schedule jobs so that they do not interfere with each other
  - Make sure enough resources are available for parallel execution
  - Coordinate jobs and periodic server downtimes (e.g. for offline backups)
• Important bottlenecks:
  - Simultaneous ISO media writing
    • Burn buffer large enough for keeping multiple ISO images?
    • Enough writer drives available?
    • Enough bandwidth on SCSI connection to writer drives to avoid buffer underrun?
  - Simultaneous IXW media writing for several pools
    • Enough drives available, also for simultaneous reading from other IXW media?
  - General
    • Enough CPU capacity, especially for jobs causing heavy database activity
      (writing ISO media, purging disk buffers)?
• Specifically, do not run simultaneously:
  - IXW media write jobs / local backup job / save STORM files job
  - Local backup job / disk buffer purge jobs (≥ 5.5: use job concatenation instead)
• Further reading in the ESC article mentioned below

Periodic Jobs Slide 5

Find the mentioned ESC article, entitled Scheduling jobs in Archive Administration, at:
https://esc.ixos.com/1008339174-156
It gives further information about the resource consumption of different types of jobs as well as
"best practice" examples for different scheduling concepts.

Periodic Jobs 22-5


Scheduling automatic daily jobs: Example

[Timeline illustration: 12 AM - 6 AM - 12 PM - 6 PM - 12 AM]

Periodic Jobs Slide 6

In the illustration above, the Archive Server configuration comprises two logical archives: A1
and A2.
The shown job schedule is characterized by the following considerations:
• During business hours - i. e. when new documents are expected to arrive on the
Archive Server - the Write_WORM jobs for the two IXW media pools run very
frequently (e. g., each half hour) in order to store documents securely on IXW media
as soon as possible.
• The two other main tasks to be executed while the Archive Server is running, backing
up WORM data and purging the disk buffer, are arranged in a way that they do not
interfere with each other or with the IXW media write processes.
• The offline database backup must also be done while no other jobs are scheduled
because all Archive Server processes have to be shut down for this.
• Some other scheduled jobs, like refreshing configuration information about other
Archive Servers and cleaning up expired job protocol entries, can be done concurrently
to other jobs without any problems.
However, the above scheme is just an example. Every Archive Server administrator is
responsible for finding a solution that suits the individual situation at his own company.
Moreover, the job schedule and coordination have to be checked and possibly changed
whenever the Archive Server configuration is changed. For example, when a new logical
archive with an ISO pool is introduced, another Write_CD job - which may take between one
and two hours of execution time, depending on the speed of the ISO media writer drive - must
be integrated into the existing schedule.

22-6 710
Jobs administration

Job is queued for running as soon as resources are available

Periodic Jobs Slide 7

The Jobs property sheet of the Archive Server Administration shows the list of all defined jobs
on the Archive Server. The table yields the following information:
• First (unnamed) column: If the job icon is grayed out, the job is currently disabled, i. e.
  ignored by the scheduler. The column may also display a further icon if the job is currently
  being executed, queued to run at the next possible time, or stopping after an explicit stop
  command.
• Name of the job.
• Command to be executed as a job. (It is not normally useful to edit this.)
• Month, Day, Hour, etc.: The job's schedule, if specified. A job may also be configured to run
  immediately after a certain other job has finished; this is displayed in the Job dependencies
  list of the dialog.
Right-clicking the jobs list opens the pop-up menu shown above, offering functions for job
configuration and operating:
• Add a job to the list. (This is not necessary for media write and buffer purge jobs; these are
  created automatically along with the object they belong to.)
• Edit the job's command name, arguments, and - most important for configuration
  maintenance - the job schedule.
• Remove a job from the list. (This is not normally needed. If you want a job to no longer be
  executed, rather disable it; see below.)
• Enable or disable a job. When disabled, the job will no longer be executed by the scheduler;
  this may be useful for certain troubleshooting situations.
• Messages opens a window where you can view log messages of a currently running job.
  This is mainly useful for troubleshooting and not normally used during regular operation.
• Now lets you invoke a job manually.
• Stop interrupts a currently running job. Not all jobs allow this; e. g. ISO write jobs cannot be
  interrupted.
• Protocol opens a window showing a list of job invocation log entries, revealing success or
  failure of each job run. For more information, see chapter Monitoring the Archive Server.
• Stop/Start Scheduler completely switches execution of scheduled jobs off/on.

Periodic Jobs 22-7


• All jobs in the Job Administration can be disabled
• Scheduling will not trigger the job
• Examples
  - Job is triggered by scripts instead
  - Redundant jobs not needed in scenario
  - Temporarily disable a job for a certain period
• Can be enabled at a later time

Periodic Jobs Slide 8

Administrators who use their own scripts to trigger certain jobs need to be aware of the
regular job scheduling in the Job Administration. Disabling jobs can prevent collisions with
scripts running the specific tasks.
Disabled jobs can be enabled again anytime.

22-8 710

Edit a job (1): scheduling


• Command, arguments: Do not alter for "standard" jobs
• Invoke job by time
  - Multiple selection of time values possible
  - Shortest interval: 1 minute
• "Time Limit":
  - If the job is being executed at the specified point of time, it will be interrupted
    ("emergency brake")
  - Some jobs will refuse to stop
  - Does not prevent a scheduled start after the given time!

Periodic Jobs Slide 9

Choosing Edit in the context menu of the Archive Server Administration's Jobs page opens
the Edit Job dialog of the selected job, as illustrated above.

The job properties Command and Arguments should not normally be edited since their default
values are always appropriate. Exceptions include:
• Adding option -b to the arguments of a disk buffer purge job makes the job purge
  documents only after they have been saved on backup IXW media; see chapter Disk
  Buffer Configuration.
• Application-specific jobs - mainly those starting batch import of documents - may
  honor certain arguments (project-dependent).

The job time limit - a feature introduced in version 5.0 - can be used to make sure that
certain jobs are finished at a defined point of time during a day. For example, you can force a
disk buffer purge job to terminate in time before the database is shut down for an offline
backup.
Note that most, but not all types of jobs honor this time-driven interruption: All activities dealing
with burning ISO media (DVD, WORM write jobs, local backup job - if applied to ISO pools)
will simply keep running even if they receive an interrupt request.

Periodic Jobs 22-9


Edit a job (2): conditional invocation

• Invoke job upon certain event
  - On Administration Server startup ("autostart" spirit)
  - Triggered by finishing of another job → job concatenation
    • Optional condition: specific return code of previous job (= successful exit?)
• "Time Frame":
  - Job will be invoked only during the given period, e. g. only during the night
  - Running job will not be interrupted on exceeding the period!
• Example concatenation:
  1. Backup job for IXW media
  2. Purge disk buffer job

Periodic Jobs Slide 10

Job concatenation - a feature introduced in version 5.5 - eases the coordination of
resource-critical jobs. By simply chaining certain jobs together, you can be sure that the
second one never starts before the first one has finished.
Examples of reasonable job concatenations include:
• All ISO write jobs one after the other
• All IXW write jobs one after the other
• Local backup job, then disk buffer purge job(s)

22-10 710
Additional recurring tasks not executable as jobs

• Various Archive Server backup tasks
  - Disk buffer / hard disk pools
  - Database
  - Exception: STORM files online backup
• Archive database log files backup (Oracle)

Periodic Jobs Slide 11

The tasks mentioned above are indeed tasks to be performed regularly, but they cannot be
accomplished by the Archive Server's built-in scheduler. (The scheduler needs a running
database for operation and therefore cannot invoke an offline database backup.) However,
they are mentioned here because they must be coordinated with the other system jobs - for
example, an offline database backup requires all Archive Server processes to be stopped; no
other periodic jobs can be executed during this downtime at all.

Periodic Jobs 22-11



Exercise: Schedule jobs appropriately

• Examine job scheduling on your classroom server
  - Are jobs scheduled reasonably and cooperatively?
• Improve scheduling where necessary
  - Make use of job concatenation where appropriate

Periodic Jobs Slide 12

22-12 710

23 Configuring Audit Trails

Configuring Audit Trails 23-1



• Audit Trails
  - Administrative Information
  - Deleted Documents
  - Single Document (getDocumentHistory)
• Purge Audit Data
• Deletion Holds

Configuring Audit Trails Slide 2


23-2 710
Audit Trails - Overview

• Audit of document content lifecycle
• Audit is enforced for compliant logical archives
  - Typical actions to be audited: create, copy, migrate, time stamp, delete ...
    • <Date> <Time>
    • <log.Archive> <documentId> <componentId>
    • <Volume-1> <Volume-2>
• Audit of administrative changes
  - always turned on
• Access to audit information
  - by a tool (for reports) or
  - by a document-based http call to display document information in a leading
    application

Configuring Audit Trails Slide 3

The idea of an audit is that all activities and changes in the system are tracked and that audit
information can be provided for legal purposes or for documentation.

Configuring Audit Trails 23-3


Collect Audit Data

• Collect Audit Data on Documents (Storage)
  - Collecting audit data for the document content lifecycle:
    • must be switched on per logical archive
    • by activating the Compliance Mode (irreversible)
  - Each event related to document content is recorded
  - Stored to a separate database table ds_audit
• Collect Audit Data on Administrative Changes (Admin)
  - always active
  - Each administrative change is recorded
    • same set of events as for the old accounting feature
  - Stored to a separate database table adm_audit
  - Manual use of command line utilities (dsTools, cdadm, etc.) not recorded!

Configuring Audit Trails Slide 4

Recorded operations are
- related to state changes of the document,
- related to state changes of the components,
- data relevant from the point of view of security,
- concerning the storage.
The retention date is not recorded. Some events may occur even in read-only scenarios
(Remote Standby).

Each event related to document content is recorded:
• CREATE_DOC, UPDATE_ATTR, DOC_SET_EVENT, DOC_MIGRATED, DOC_DELETED
• CREATE_COMP, TIMESTAMPED, COMP_DELETED, COMP_DESTROYED
• TIMESTAMP_VERIFIED, DOC_SECURITY, COMP_DELETE_FAILED, TIMESTAMP_VERIF_FAILED
• COMP_MOVED, COMP_COPIED, COMP_PURGED

Each event is stored to the separate database table ds_audit with the fields:
• EVENT, TSTAMP
• ARCHIVEID, DOCIDSTR, COMPONENT, VERSION
• VOLID1, VOLID2
• CLNTADDR

23-4 710
Access Audit Information - On Documents & Administrative Infos

• Retrieve document audit information:
  - command line utility exportAudit with option -S
  - writes output to a file (in $IXOS_SRV_ROOT/var/audit)
• Retrieve administrative audit information:
  - command line utility exportAudit with option -A
  - writes output to a file (in $IXOS_SRV_ROOT/var/audit)

Configuring Audit Trails Slide 5

Storage options (-S)


- output format: csv, printlist
- show only events concerning deleted documents
- restrict to time frame

Admin options (-A)


- output format: csv only
- restrict to time frame

Configuring Audit Trails 23-5


Purge Audit Data - exportAudit Command

• Remove audit data from the database
• Command line utility exportAudit with additional option -x
  - with option -S or -A
  - writes output to a file (in $IXOS_SRV_ROOT/var/audit)
  - removes the listed events from the database
• Options: as before
• Removing audit data for existing documents is not possible
  - options -x and -a are mutually exclusive

Configuring Audit Trails Slide 6

Purge in this context means the removal of outdated audit entries from the database. This is
necessary to keep the database at a reasonable size.
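As a summary, the exportAudit calls described in this chapter can be sketched as follows. The sketch uses only the -S, -A and -x options named above and prints the commands instead of executing them (a dry run); further options such as output format and time frame exist but are not shown here.

```shell
# Dry run: print the exportAudit calls described in this chapter.
# When actually executed, output files go to $IXOS_SRV_ROOT/var/audit.
echo "exportAudit -S"       # export document (storage) audit events
echo "exportAudit -A"       # export administrative audit events
echo "exportAudit -A -x"    # export admin events and remove them from the database
```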

23-6 710

Purge Audit Data - Periodic Job



• Schedule job SYS_CLEANUP_ADMAUDIT to clean old ADM audit entries periodically
  - command Audit_Sweeper
  - no Storage audit entries (-S)
• Server Configuration: ADMS > Auditing
  - Configure the maximum age of ADM audit entries (-A) to keep

Configuring Audit Trails Slide 7

The ADMS job deletes entries that have reached a certain age. This maximum age can
be configured.

Configuring Audit Trails 23-7


Access Audit Information - On Single Document (1)

Access audit information for a single document:
• Dedicated command getDocumentHistory
• Available via http API
• Can be called by the leading application

Configuring Audit Trails Slide 8

23-8 710
Access Audit Information - On Single Document (2)

• Retrieve the history of a single document via the http API command getDocumentHistory
  - Via http or the dsh tool
  - http://<ArchiveServer>:8080/archive?getDocumentHistory&contRep=<LogicalArchive>&docId=<DocID>&pVersion=0045

Configuring Audit Trails Slide 9

You can access the http API either via http call or using the dsh Tool.
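As a sketch, the URL above can be assembled and inspected from the shell; the server name, logical archive and DocID below are placeholder values, and a generic http client such as curl could then fetch the history:

```shell
# Placeholder values — substitute your own server, logical archive, and DocID.
server="archivehost"
archive="HO"
docid="aaaaf4u1dap"
url="http://${server}:8080/archive?getDocumentHistory&contRep=${archive}&docId=${docid}&pVersion=0045"
echo "$url"
# curl "$url"   # would issue the actual getDocumentHistory request
```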

Configuring Audit Trails 23-9



Deletion Holds

In case of litigation it might be required to disable deletion.
• Global Deletion Hold
  - Valid for the complete Archive Server
  - Activated by switching to maintenance mode
  - Delete attempts are rejected or ignored
  - Is deactivated during server restart!
• Logical Archive Deletion Hold
  - Valid for the logical archive only
  - Archive security flag: "Document deletion"
  - Delete attempts are rejected or ignored
  - Persistent (survives server restart)

Configuring Audit Trails Slide 10

Global Deletion Hold is usually turned off after a restart (temporary). It can be configured
to be persistent in
Server Configuration: DS > System settings > Default runlevel

23-10 710

Exercise: Audit Trails


• Get document history for a single document
• Access audit info with exportAudit
• Purge audit info with exportAudit
• Purge audit data as a scheduled job

Configuring Audit Trails Slide 11

You can also get the document history using the dsh tool:
- Log on to the dsh tool:
  dsh -h <servername>
- Get the document history (e. g. logical archive "HO" and DocID "aaaaf4u1dap..."):
  getDocumentHistory -a <archive-id> -d <doc-id>

Configuring Audit Trails 23-11



Exercise: Deletion Holds


• Put a global deletion hold on your Archive Server
• Try deleting a document and verify the results
• Prevent document deletion for a logical archive
• Try deleting a document in this logical archive and verify the results

Configuring Audit Trails Slide 12

23-12 710

24 Backing up the Archive Server


Operative data loss protection

Backing up the Archive Server 24-1


Chapter guide

• Hard disk mirroring
• Backing up optical media pools: ISO, IXW
• Backing up hard disk areas of the Archive Server

Backing up the Archive Server Slide 2

24-2 710

Which HD areas have to be mirrored (RAID 1 or RAID 5)

Not necessary:
• Contains copies of already saved data only
• HD crash does not affect system availability

Backing up the Archive Server Slide 3

The basic HD protection rule is: All HD partitions that may hold the only instance of a
document must be protected against data loss by mirroring or RAID.
Suitable techniques to avoid data loss in the event of a hard disk crash include:
• RAID 1 (= one-to-one mirroring)
• RAID 5 (= striping with parity)
IXOS supports both with equal preference.

Notes on mirroring specific Archive Server storage locations:


• The document-related data in the DS database can be entirely restored from the
  storage media. However, with a large archive this can be an extremely time-consuming
  process as all optical disks must be read in again, so the database should regularly be
  backed up to tape and/or included in the disk mirroring arrangement.
• Mirroring the cache is advised to avoid reduced performance after a loss of the cache,
  when all requested documents must be read from optical media again.

Backing up the Archive Server 24-3


Chapter guide

• Hard disk mirroring
• Backing up optical media pools: ISO, IXW
• Backing up hard disk areas of the Archive Server

Backing up the Archive Server Slide 4

24-4 710

• Backup disk is created automatically along with each original disk
  - Select 2 copies within ISO pool configuration
• For each new backup disk:
  1. Remove from jukebox
  2. Write label on it
  3. Store at a safe place

Backing up the Archive Server Slide 5

Backing up the Archive Server 24-5



• Configure IXW pool: Select creation of backup IXW media
• Schedule local backup job to run daily
• Provide backup IXW media for new original IXW media
Backing up the Archive Server Slide 6

Unlike backup ISO media, which can be removed from the jukebox immediately after they
have been created, backup IXW media must reside in the jukebox as long as their original
counterpart is being written to - because the backup IXW medium is synchronized with the
original incrementally. As soon as the original has been filled completely and its backup has
been synchronized a last time, the backup can be removed and stored at a safe place; see
chapter Handling Optical Archive Media for more information.

Using Archive Server ≥ 4.2, there is an additional option in the WORM write configuration:
"Delete from disk buffer after copy". Never select this option for a pool holding production data!
You always need the disk buffer as a temporary backup between writing a document to the
original WORM volume and duplicating it to the backup WORM. (For test data, however, this
is not necessary.)

24-6 710
Chapter guide

• Hard disk mirroring
• Backing up optical media pools: ISO, IXW
• Backing up hard disk areas of the Archive Server

Backing up the Archive Server Slide 7

Backing up the Archive Server 24-7


Which HD areas have to be backed up

Not useful:
• Data stored here for a very short period only, or
• contains copies of already saved data only

Backing up the Archive Server Slide 8

Regularly backing up the diverse hard disk areas used by the Archive Server is a necessary precondition
for data recovery after a hard disk crash. However, such a recovery may serve different purposes:
• Disk buffer, hard disk pool: A crash can lead to loss of original documents here, therefore backups are
  mandatory for data loss protection.
• DS database, WORM filesystem database: Their contents can be restored from the storage media
  containing the actual documents. However, with a large archive this can be an extremely time-consuming
  process as all optical disks must be read in again. Backing up these databases helps to
  restore the system much faster.
  In addition to document management data, the DS database contains information about the
  Archive Server configuration (logical archives, pools, jobs, etc.). This part of the database cannot
  be recovered without a database backup! As a consequence of a total database loss, you would
  have to recreate your server configuration manually (which is, of course, far less harmful than a
  loss of archived documents).
  The WORM filesystem database is present on an Archive Server with WORM media only; on
  other installations, there is nothing to be backed up here.
• Software installation (operating system, database system, Livelink Enterprise Archive Server):
  backing up these items helps to recover the whole system rapidly after a crash of the system disk.
  Attention: The STORM configuration files are located within the Archive Server software
  installation! (See also page: STORM files backup.)
• Cache: No data loss can happen here since the cache contains only documents that are already saved
  on optical disks. Nevertheless: After a loss of the cache, its whole contents have to be reloaded from the
  optical disks upon corresponding retrieval requests; during that period, the server's retrieval performance
  would be considerably degraded. Backing up the cache therefore helps to retain good system
  performance across a cache loss.
• Burn buffer, temporary storage for WORM writing: If one of these becomes lost, only write jobs for
  the corresponding optical media are disturbed; the users do not even notice such a problem. After
  mounting a new hard disk, these write processes can immediately start working anew; no data recovery
  - and therefore no backup - is necessary.
• DocumentPipeline: Documents normally pass the DocumentPipeline in a very short time (seconds or
  minutes); moreover, the DocumentPipeline can be backed up in offline state only. As a consequence, a
  tape backup would never find any data to be backed up and can thus be omitted.

24-8 710

FS & HDSK pool backup



• Offline backup
  - Using common backup tools (TSM, Data Protector, Legato etc.)
  - Archive Server services must be shut down
• Online backup
  - For uninterrupted Archive Server operation
  - Backup tool must not lock files
  - Pool must comprise two or more partitions. Procedure:
    1. Write-lock first partition
    2. Back up first partition
    3. Unlock first partition
    4. Repeat for all remaining partitions
  - Ensure that no "compress hard disk" job is executed during the backup operation
  - Does not save a consistent state
• Incremental backup (= only new/changed files) recommended

Backing up the Archive Server Slide 9

Setting partitions to write-locked status can be done manually in the Archive Server
Administration. However, for an automatic backup procedure, a scriptable way to do this is
necessary. The Archive Server command line tool dsClient (available on every Archive Server)
can be used in a Unix shell the following way:
    dsClient localhost dsadmin <dsadmin-password> <<EOT
    chgVolS <volume_name> wrlock
    end
    EOT
volume_name here is the logical name of the partition, as assigned and visible in the Archive
Server Administration. For unlocking the partition, replace wrlock by zero.
For a Windows batch script, you cannot use the <<EOT construct; instead, write the chgVolS
and end commands to a file and invoke dsClient this way:
    dsClient localhost dsadmin <dsadmin-password> < filename
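The per-partition lock/backup/unlock cycle can then be scripted around these calls. The sketch below only prints the command sequence (a dry run): the partition names, the DSADMIN_PW variable, and backup_tool are placeholders for your environment, and the unlock mode "0" follows the note above ("replace wrlock by zero").

```shell
# Dry run: emit the online-backup cycle for each buffer/pool partition.
# backup_tool stands for your site's backup software.
for vol in BUFFER_01 BUFFER_02; do
  cat <<EOT
dsClient localhost dsadmin \$DSADMIN_PW
chgVolS $vol wrlock
end
backup_tool $vol
dsClient localhost dsadmin \$DSADMIN_PW
chgVolS $vol 0
end
EOT
done
```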

Backing up the Archive Server 24-9


Disk buffer backup

• Necessary to protect data that cannot be written to optical disks for more than a day
  - For ISO pools with low archiving traffic
  - In case of media write interruptions, e.g.:
    • broken network connection to storage system
    • jukebox damage
    • storage system failure
• Procedure: Same as for hard disk pools
  - Cyclic locking of several partitions for backup
• During online backup, ensure that no periodic jobs are executed on the disk buffer:
  - ISO or IXW write
  - Purge buffer

Backing up the Archive Server Slide 10

24-10 710

Database backup
OPEN TEXT

• Operate database in archive log mode
  - All database changes are recorded in archive log files
  - Archive log files are necessary to recover the difference between the last full
    (offline) backup and the status at database failure
• Back up database files in offline (shutdown) state:
  - Data files (of all tablespaces)
  - Redo logs, transaction logs
  - Control files
• Back up archive log files at least once a day
  - Keep archive log files together with the full (offline) backup that they start at
• For uninterrupted Archive Server operation: Online backup
  - Done during normal operation, less performance
Backing up the Archive Server Slide 11

See also Administration Guide chapter on Backup and Recovery.

Backing up the Archive Server 24-11



STORM files backup (1): What has to be backed up

• WORM file system database
  - Keeps low-level structure data of IXW volumes
  - Can be recovered from IXW media in case of loss
  - Recovery may take very long (up to several hours or even days, depending on the
    number of un-finalized IXW media)
  - Backup strongly recommended to protect against long downtime
• STORM configuration and runtime files
  - Found in <IXOS_ROOT>/config/storm
  - Backed up implicitly when making a software installation backup

Attention: If you use online backup of the file system of your software installation, you
may run into an error (see also the note part of this page).

Backing up the Archive Server Slide 12

Attention: The STORM configuration files are located within the Archive Server software
installation! They must never be part of an online backup.

If you use online backup for the software installation, the following files have to be excluded
from the online backup:
• config/storm/*
• all parts of the WORM file system (section ixworm of server.cfg), including the
  DataFilePath defined in section ixworm of server.cfg

The job Save_Storm_Files will take care of a valid online backup of the STORM files.

Please read also in the ESC:
https://esc.ixos.com/1134724127-818
"Storm terminates and reports 'cannot sync. memory map' - 'cannot sync. journal file'"
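The exclusion list above can be sketched with a generic tool such as tar; tar merely stands in for your site's backup software, and the directory layout below is a throwaway stand-in for the real software installation:

```shell
# Sketch: exclude the STORM files from an online file-system backup.
# /tmp/ixos_demo is a throwaway stand-in for the installation directory.
cd /tmp
mkdir -p ixos_demo/config/storm ixos_demo/config/other
touch ixos_demo/config/storm/jbd.cfg ixos_demo/config/other/keep.cfg
tar --exclude='ixos_demo/config/storm' -cf sw_backup.tar ixos_demo
if tar -tf sw_backup.tar | grep -q storm; then
  echo "storm included"
else
  echo "storm excluded"
fi
```

A real backup tool will offer its own exclude mechanism; the point is only that config/storm (and the WORM file system paths) stay out of the online backup.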

24-12 710

STORM files backup (2): Backup methods
• Offline backup
  - When Archive Server (spawner, STORM) is shut down:
    jbdbackup -configRoot=E:/IXOS-ARCHIVE/config/storm (for example)
• Online backup
  - STORM is kept running
  - IXW media are read-only during backup
  - All other system operations continue without restrictions
  - Accomplished with the Save_Storm_Files job in the Archive Server Administration
• Both methods create a copy of the STORM files on the local hard disk
  - Backup destination configured in STORM's server.cfg file
• After the backup is made: Store backup files away (e. g. on tape)

Backing up the Archive Server Slide 13

Both backup methods mentioned above - the jbdbackup command line utility and the
Save_Storm_Files job in the Archive Server Administration - create a copy of all relevant
STORM files on the server's hard disks. The destination of this backup copy is specified in
STORM's configuration file server.cfg (see chapter Where to Find What). Therein you will
find a section like:
    backup {
        list { dest1 }
        backuproot {
            dest1 {
                path { V:/jbd_backup }
                size { 1024 }
            }
        }
    }
Here, V:/jbd_backup is the backup destination directory (which must actually exist when
the backup is started; it will not be created automatically!). Multiple directories may be
specified instead of a single one in order to spread the backup copy over several hard disk
volumes; this way, capacity problems can be avoided in case the WORM database is very
large.
After the backup copy has been made, it is your task to store the backup safely away,
preferably on tape.

Using IXOS-ARCHIVE ≤ 4.2, a true online backup of the STORM files is not possible. Instead,
you can perform a "dirty" online backup: STORM is brought down while all other eCONserver
processes are kept running (that is why it is called "dirty" here), then an offline backup is made,
and finally STORM is started again. Due to STORM being shut down during the backup, no
access to optical disks is possible at all in the meantime.
Exactly this procedure is performed by the Save_Storm_Files job on an Archive Server ≤ 4.2.

Backing up the Archive Server 24-13



Software backup

• Back up the software installation
  - Once after finishing installation and setup
  - After each configuration change
• Attention: The STORM configuration files are located within the Archive Server
  software installation! They must never be part of an online backup of the software
  installation. (See also page: STORM files backup.)
• Back up:
  - Operating system
  - Archive Server software and configuration
  - Database system software and configuration
Backing up the Archive Server Slide 14

24-14 710

25 Hard Disk Resource Maintenance


Adding hard disk space to Archive Server resources
as needed

Hard Disk Resource Maintenance 25-1


Chapter overview

Enlarging hard disk space for:
• Disk buffer
• FS/HDSK pool
• Cache
• DS database
• WORM FS database
• DocumentPipeline
• Burn buffer
• "Temp" space for IXW writing

Hard Disk Resource Maintenance Slide 2

25-2 710

• Disk buffer needs more space when archiving traffic increases
  - Higher average archiving rate
  - Higher archiving volume peaks
  - More logical archives (with ISO media pools) using the same buffer
• Ways to provide more space for the disk buffer:
  - Enlarge current hard disk partition (if the operating system permits this)
  - Attach additional partition (→ next page)
• Do not make partitions too large
  - Too many documents on a partition may lead to performance problems
    • During disk buffer purging
    • During volume consistency checks
  - For Archive Server ≤ 9.5, absolute built-in limit: 1 TB per partition

Hard Disk Resource Maintenance Slide 3

Detecting disk space shortage of disk buffers is fairly easy: The Archive Server Monitor
shows a warning state if the free space of a buffer is less than 30% of total space. (This
threshold percentage may be altered if it is unsuitable.)
The recommendation not to use too large hard disk partitions is due to the fact that some
administrative actions (like disk buffer purging or consistency checks) require examining the
whole partition contents. The more documents are stored there, the longer such a scan will
take. If, moreover, a partition is full of very small documents, the total number of files is very
high; this may lead to unacceptably long execution times of those actions. To prevent this type
of problem, use multiple partitions of moderate size instead of a single large partition. If
you store rather large documents only (like SAP data archiving files), the partitions may be
made larger as well; where mainly small documents are stored, the partition sizes should be
smaller (using BLOBs, however, reduces the number of stored files of small documents).

Hard Disk Resource Maintenance 25-3


Hard Disk Resource Maintenance Slide 4

To add an additional hard disk partition to a hard disk pool or buffer, you first have to provide a hard disk
partition on operating system level. On a Unix-based Archive Server, make sure the root directory of the
file system is owned by the user/group that the Archive Server is operated as (e. g.
ixosadm/ixossys) and has permissions 770.
Once the disk partition is prepared, continue with the steps illustrated above:
1. Make the new partition known to the Archive Server by invoking the "Create Hard Disk Partition"
dialog as illustrated above. (The term "Create ..." is actually misleading here; you cannot create
a partition from within the Archive Server Administration.) Specify the following:
Partition name: An (Archive Server -internal) logical name for the partition; must be unique
throughout all volume names (including IXW media) of this Archive Server. The Archive
Server will henceforth maintain the partition by this name.
Create as replicated partition: If selected, the partition shall serve as the replicate of a
partition on another Archive Server. Only for RemoteStandby server configurations; for a
normal pool or buffer partition, do not select this option.
Mount path: The root directory of the partition's file system. On Windows NT, this should be
a drive specification (including a backslash); on Unix platforms, it is the directory where
the partition is mounted; from Windows 2000 on, it can be either of the two, depending on
how the partition is hooked into the file system.
If, on a Windows-based Archive Server, you want to use a network share instead of a
local hard disk drive, see ESC article https://esc.ixos.com/1072860397-483
for details on how to do that.
2. Assign the prepared hard disk partition to the disk buffer by means of the "Attach Partition to
Buffer" dialog as illustrated above.
In case of a hard disk pool instead of a disk buffer: Within Archive Server Administration's logical
archives list, select the hard disk pool that the partition shall be added to; then invoke the "Attach
Partition to Buffer" dialog as illustrated above.

25-4 710

FS & HDSK pool

iii FS & HDSK pools need more space when total amount of stored
data is about to exceed assigned disk space

iii Providing more space for FS & HDSK pool:


same as for disk buffer
- Enlarge current hard disk partition (if operating system allows this)
- Attach additional partition (→ previous page)
- Do not make partitions too large

Hard Disk Resource Maintenance Slide 5

Detecting disk space shortage of hard disk pools is fairly easy: The Archive Server Monitor
shows a warning state if the free space of a pool is less than 30% of total space. (This
threshold percentage may be altered if it is unsuitable.)

The assumption for an FS pool is that it uses local hard disks only. Sizing an FS pool that works
with a sophisticated storage system may differ.

Hard Disk Resource Maintenance 25-5


DocumentService cache (1)

iii Cache needs more space when documents are deleted from
cache too early
III Ways to provide more cache space:
- Enlarge current hard disk partition (if operating system permits this)
- Assign additional partition:
1. Provide a partition for exclusive use by the cache
   Partition will become filled up completely
2. Add drive letter or mount point to cache volume list
3. Change will be effective after next Archive Server restart
   ≤ 9.6.0 only: Current cache contents will be discarded!

III Do not make partitions too large


Use several partitions of moderate size
- Helps troubleshooting consistency problems

Hard Disk Resource Maintenance Slide 6

The recommendation not to use too large hard disk partitions bears an advantage in
situations where the cache index - due to whatever reason - has become damaged. Such a
cache index problem is normally restricted to a single cache partition, and a common solution
is deleting all contents of this cache volume. If the cache consisted of just one single large
partition, you would lose the whole cache contents by this action (which does not mean real
data loss; only the cache would have to be filled again by subsequent document read
requests, during which your server's performance would be impaired); a cache consisting of
several smaller partitions would only lose a small part of its total contents.

In Archive Server ≤ 9.6.0, when adding an additional hard disk to the local cache, all contents of
the current cache are discarded.
In Archive Server ≥ 9.6.1, this problem no longer exists due to the new caching technology used:
additional hard disks can be added to the local cache without losing previous content.

25-6 710
Hard Disk Resource Maintenance

Hard Disk Resource Maintenance 25-7


Increase Cache Paths

Hard Disk Resource Maintenance Slide 8

25-8 710
Databases: DS, WORM filesystem

iii DS database
- See Archive Server Monitor for filling rate
- If too small: Enlarge with database tools

iii WORM filesystem database


- See Archive Server Monitor for filling rate
- If too small:
  Try to reduce data by finalizing IXW media; if that does not help:
  Call IXOS Support for enlarging

Hard Disk Resource Maintenance Slide 9

WORM filesystem database


Unfortunately, there is no easy way to enlarge this resource. Nevertheless, it is possible to
recreate it from scratch and to reimport all IXW media into it. Since this is a very time-
consuming activity during which IXW media access is not possible, a special, quite
complicated workaround should be applied. Please contact IXOS Support for help in this
situation.
On the other hand, the WORM filesystem should nowadays no longer fill up, as
full IXW media can be finalized, keeping the WORM filesystem's filling rate at a rather
constant level.

Hard Disk Resource Maintenance 25-9


Other HD resources

II For these resources, no special enlargement methods exist


II Enlarge current HD partition online, if possible
II Enlarge "offline" otherwise:
1. Shut down Archive Server
2. Save content away (e.g. on tape)
3. (Re)create larger partition
4. If new partition has different mount point / drive letter: adjust Archive Server configuration
5. Restore previous content to new partition
6. Start Archive Server

Hard Disk Resource Maintenance Slide 10

25-10 710
OPEN TEXT

III Add second partition to disk buffer
- Initialize new partition
- Assign to disk buffer

Advanced exercise:
III Enlarge DocumentPipeline directory
- Move DocumentPipeline directory to larger partition
  - Take care to save contents across the change
  - Adjust DPDIR setting in Archive Server configuration
  - Pay attention to configured and implicit directory structure
  - Continue processing of saved processing items

Hard Disk Resource Maintenance Slide 11

Hard Disk Resource Maintenance 25-11


25-12 710

26 Accounting information
Billing your customers based on Archive Server usage

The Accounting feature is not available anymore with Archive Server 9.6.
While log files on usage are still generated, scripts are not provided anymore to evaluate the
usage information for billing purposes.

Accounting information 26-1


Motivation and Objective

Archive Server
operated by
service provider

Customer B

Accounting data

Accounting information Slide 2

The accounting feature of the Archive Server is dedicated to application service providers
(ASPs) operating an Archive Server for multiple customers. It measures various quantities of
the server usage, such as:
Number of access requests
Number of requested documents
Number of active users (estimated)
Amount of transmitted data (= traffic)
These quantities can be retrieved on a per-customer basis; that way, it is possible to invoice
customers for their Archive Server usage based on their individual server resource
consumption.

26-2 710
Archive Server logs usage quantities

Administrator retrieves accounting data

Invoices are created based on accounting data

Accounting information Slide 3

The whole process of collecting and evaluating accounting data comprises the four steps
illustrated above. Details on each step are presented on the following pages.

Accounting information 26-3


Access logging by Archive Server

1
III Archive Server logs all access traffic, with certain restrictions:
- HTTP requests only (no RPC, RFC)
Requires activation for some leading systems and Archive Clients
- Only requests answered with HTTP_OK
Requests resulting in errors are not logged
- Only when switched on

IPI Accounting and statistics


(PI Backup configuration
(PI Cache configuration

III Data is stored in files:


<IXOS_ROOT>/var/acc/<date>_<comp>.acc
- One file per day and per component (RC, WC, (ADMS ≤ 9.5))
- Path may be altered (e. g. to point to a separate file system)

Accounting information Slide 4

To open the Server Configuration dialog for maintaining the properties of accounting data
collection (illustrated above):
Within the Archive Server Administration, choose menu item File → Server Configuration.
In the structure display on the left-hand side, choose Document Service (DS), then
click the '+' sign next to that entry.
Choose entry Accounting and Statistics.
You can then maintain the accounting configuration as desired. After that, choose menu item
File → Save changes. Your changes will become effective after the next server restart.
See the administration manual Archive Server Configuration Parameters in ESC for further
details about using the Server configuration dialog.

Note: If you do not intend to make use of the accounting functionality, you should disable it
completely (it is enabled by default) as described above! However, deactivating the
accounting also disables IXOS's Windows Performance Monitor interface (for Archive Server ≤
9.5, see chapter Archive Server Statistics and Performance Monitoring).

With Archive Server 9.6, while there are no scripts available for evaluation & billing, the log
files in <IXOS_ROOT>/var/acc are still generated.

26-4 710
Accounting data retrieval (1): Interactive

Requires logon as authorized user (e. g. dsadmin)

View on-screen ...
... or download as CSV file

Accounting information Slide 5

Before using the collected accounting data for billing, it must first be downloaded from the
Archive Server. This is always done via the Archive Server administrative HTTP interface,
either interactively or script-based. The illustration above shows the steps of the interactive
download:
1. Open a web browser and visit http://<archiveserver>:4060
2. Select Accounting.
3. A logon dialog will be displayed; log on to the Archive Server.
(The user chosen here does not have to be dsadmin; however, it must be given the
View accounting information (ac_view) privilege in the Archive Server Administration;
see the Archive Server Administration Guide for details.)
4. In the following screen, select the date range you are interested in as well as the
download form: View as HTML for on-screen display or Download as CSV. The latter
is needed for subsequent data processing by a financial calculation tool.
5. Click Go. A table with the selected range of accounting data items will then be displayed
in the browser or downloaded as a CSV file.

Accounting information 26-5


Accounting data retrieval (2): Command line or script-based

III Retrieval generally possible with any HTTP client application
III Retrieval URL (example):
http://archiveserver:4060/cgi-bin/acc/runacc.pl?select=last_month&format=csv

III Examples
- With MS Excel add-on directly from Archive Server into Excel
Detailed description in Archive Server Administration Guide
- With arbitrary HTTP client tool, e. g. curl

Accounting information Slide 6

For routine operation, you will probably not want to download the accounting data interactively
for each accounting period. Nevertheless, non-interactive download methods still have to use
the Archive Server HTTP interface via port 4060. You may choose yourself which of the
available scriptable HTTP clients you prefer to accomplish this task.
curl is a freely available command line HTTP client. Here is an example command for
downloading accounting data from an Archive Server and storing it as a local file (to be
entered as a single line only):
curl -u dsadmin:<password> -o ixos_acc.csv
"http://archiveserver:4060/cgi-bin/acc/runacc.pl?select=last_month&format=csv"

The user used for access authentication does not have to be dsadmin; however, it must be
given the View accounting information (ac_view) privilege in the Archive Server Administration
(see the Archive Server Administration Guide for details).

26-6 710
Using retrieved accounting data for billing
III! Customer analyzes accounting information
using preferred calculation tool
- MS Excel, MS Access, ...

III! Logged accounting data useful for that purpose:


Parameter      Example            Meaning                        Useful for
RequestTime    156                Milliseconds needed to         Billing by
                                  serve request                  server workload
ContentServer  DA                 Addressed                      Grouping the billing by
                                  logical archive                logical archives
                                                                 and, finally, customers
UserId         149.235.50.215.2   User name, if known;           Billing by number of
               0030116.15.03.18   cookie ID otherwise            active users
ContentLength  45372              Number of transferred          Billing by
                                  bytes                          traffic volume

Accounting information Slide 7

The table above mentions those pieces of accounting data that are useful for billing.
Additionally, the following items are logged for each request:
TimeStamp - when did request take place?
JobNumber - classification of requests
ClientAddress -IP address of client or intermediate proxy server
ApplicationId - name of IXOS (or other) application that sent the request
NumComponents - number of transmitted (send or received) document components;
one of 0, 1, or 2
DocumentId - ID of requested document (for document-related requests only)
ComponentId - name of transmitted component (for data transmission requests only)

Accounting information 26-7


Reorganization of "old" accounting data
iii Accounting data directory must be cleaned up regularly
II Done by periodic job Organize_Accounting_Data
- Schedule job to run once after each accounting period

iii Possibilities
- Keep: useful only if you want to manage old files yourself
- Delete: useful if you keep downloaded CSV files somewhere
- Store in a logical archive: useful in all other cases

[PI Accounting and statistics


[PI Backup configuration
[PI Cache configuration
[PI Compatibilityto old (pre 2.
[PI Component settings
[PI Directories
[PI Document settings
[PI HTTP settings

Accounting information Slide 8

Unless collecting accounting data is disabled (see earlier in this chapter), the directory where
the data files are stored will normally be filled with huge amounts of accounting logging quite
fast. Deleting or moving away those files which have already been used for billing is therefore
a mandatory regular task of Archive Server operating.
However, this task can be automated by the predefined periodic Archive Server job
Organize_Accounting_Data which can be configured in the Server Configuration dialog of the
Archive Server Administration as explained above.
Notes about the Pool for the accounting data parameter:
It is not explicitly set and therefore not displayed by default. To have it displayed,
choose menu item View → Display undefined values.
- It expresses the storage destination used if the reorganization method "archive
into given pool" is chosen.
It must follow the syntax <archive_ID>_<pool_name>; in the example
displayed above, A4 is the logical archive ID and the archive's pool is named
WORM.

Accounting data files which have been stored in a media pool (preferably on optical media)
can later be restored to their original location using the command line tool dsAccTool -r.

26-8 710
Exercise: Download accounting data for billing

III Download accounting data from your Archive Server
- As CSV file

III Import data into MS Excel

III Calculate sum of transmitted data volume (= traffic)

Accounting information Slide 9

This exercise can only be performed with an Archive Server ≤ 9.5.
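If no spreadsheet tool is at hand, the traffic sum can also be computed with a short script. A sketch, assuming the downloaded CSV has a header row containing a ContentLength column as listed in the parameter table of this chapter; the delimiter and column set of your actual export may differ:

```python
import csv
import io

def total_traffic(csv_text, column="ContentLength"):
    """Sum the transferred bytes (= traffic) over all logged requests."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(int(row[column]) for row in reader if row.get(column))

# Constructed sample data for illustration
sample = "TimeStamp,ContentServer,ContentLength\n1,DA,45372\n2,DA,628\n"
print(total_traffic(sample))  # 46000
```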

Accounting information 26-9


26-10 710

27 Statistics and Performance Monitoring


Various kinds of Archive Server usage information

Statistics and Performance Monitoring 27-1


Chapter guide

III DocumentService statistics

III STORM statistics

iii Performance monitoring

Statistics and Performance Monitoring Slide 2

27-2 710
Read/write statistics for storage media

" Counters for read and write requests per medium


II Retrieve with dsClient:
stat RC volume (read access)
stat WC volume (write access)

" Useful information


- Which media are no longer accessed by users?
- Decide which media to remove from jukebox when space for new media is required

Statistics and Performance Monitoring Slide 3

Please use the stat command carefully and validate its results.

Statistics and Performance Monitoring 27-3


Statistics about reading from cache / jukebox / hard disk

II Accumulated counters for access to cache / opt. media / disk buffer


II Useful information
- Is caching configured appropriately for fast document access?
- Is the cache large enough?

II Retrieve with dsClient:


stat RC read

Amount of data
retrieved from ...
... cache

... disk buffer / HDSK pool

... jukebox

Statistics and Performance Monitoring Slide 4

The comparison between the data amount read from optical media vs. from hard disk
resources (disk buffer, HDSK pool, cache) indicates whether the server uses these resources
efficiently. The general rule is: The "direct reads" amount should be low compared to "cache
reads" plus "non-cacheable reads".
The absolute numbers and ratio, however, are not very useful for a judgment; they are too
dependent on how the Archive Server is used (leading applications and storage/retrieval
scenarios). You can though perform a long-term observation to see whether the ratio changes
over the time. If the relative amount of "direct reads" increases, you should consider improving
the caching setup. Possibilities include:
Enlarge the cache.
If documents are cached in the disk buffer (see chapter Disk Buffer Configuration),
extend the buffer retention period in the buffer purge configuration.
If caching after media writing or caching before buffer purging are deactivated, activate
the appropriate option.
If caching is deactivated as config option for a logical archive, activate it.
The decision what to do depends on the exact Archive Server configuration and usage context;
profound knowledge of this context is needed to make a well-founded decision.
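The rule of thumb above lends itself to a simple metric for long-term observation. A sketch, assuming you extract the three counters from the stat RC read output yourself; the function name is illustrative only:

```python
def direct_read_ratio(cache_reads, buffer_reads, direct_reads):
    """Fraction of data served directly from optical media.

    A value that rises over time suggests the caching setup
    should be improved (larger cache, longer buffer retention, ...)."""
    total = cache_reads + buffer_reads + direct_reads
    return direct_reads / total if total else 0.0

# Example: 800 MB from cache, 150 MB from disk buffer, 50 MB from jukebox
print(round(direct_read_ratio(800, 150, 50), 2))  # 0.05
```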

27-4 710
DS statistics: technical aspects

III How the DS keeps the statistics


- Values are kept during runtime
- Values are dropped at shutdown, reset at startup
- Can be reset manually at any time

III Recommended use


- Retrieve a statistics report regularly, e. g. daily
- Retrieve report directly before regular maintenance shutdown
- Make your own evaluation using your preferred tools
Because statistics are reset during each restart, this tool is not recommended
for capturing a complete and reliable statistical evaluation.
Contact OT Global Services for such solutions.

II! Reset statistics


In dsClient:
stat RC readclear
stat RC volumeclear
stat WC volumeclear
- Moreover, statistics are reset at each Archive Server restart
Statistics and Performance Monitoring Slide 5

The explanations given above apply to the types of statistics discussed on the previous pages.
Generally, the DocumentService maintains a lot of other statistics as well; those explained
here are the ones really useful in normal administrative practice.

Statistics and Performance Monitoring 27-5


Chapter guide

iii DocumentService statistics

iii STORM statistics

iii Performance monitoring

Statistics and Performance Monitoring Slide 6

27-6 710
Statistics for jukeboxes, drives, media

111 STORM writes statistics information to a file

III Usefulness:
- Are enough drives available for efficient media access?
- Keep track of hardware wearout, especially of jukebox robots
• Anticipate need for hardware maintenance ("early watch")

III .ini-like statistics file format


- Perl script available to convert statistics.txt
to a well-readable HTML file;
can be customized for customer needs

III Interpretation of statistics information


is not covered by IXOS support contracts

Statistics and Performance Monitoring Slide 7

Statistics and Performance Monitoring 27-7


Structure of STORM statistics files

No. of access requests while No. of access requests while


medium was not available in drive medium was available in drive

Statistics and Performance Monitoring Slide 8

Meaning of the figures listed in the statistics file (after the '=' sign, from left to right):
Changer (robot) information:
1. Online time of jukebox (seconds)
2. Number of disk moves
3. Number of failed moves recovered by STORM
4. Number of disk inserts
5. Number of disk ejects
Drive information:
6. Online time (seconds)
7. Data volume written (MB)
8. Data volume read (MB)
Volume (medium) information:
9. Checksum of volume ID (before the '=' sign)
10. Volume name (High Sierra)
11. Volume ID
12. Volume creation time (Unix timestamp)
13. Online time of medium (seconds)
14. No. of access requests while medium was not available in drive (expressed as
number of NFS data blocks of 8 kB)
15. No. of access requests while medium was available in drive (expressed as
number of NFS data blocks of 8 kB)
16. Data volume written (MB)
17. Data volume read (MB)
See the Archive Server Administration Guide for a complete statistics file documentation.
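A single volume entry can be split into the fields listed above. A sketch, assuming whitespace-separated values after the '=' sign in the order given by items 10-17; the sample line is constructed for illustration, so verify the exact layout against your statistics.txt and the Administration Guide:

```python
# Field names follow the numbered list above (items 10-17)
FIELDS = ("volume_name", "volume_id", "created", "online_seconds",
          "requests_unavailable", "requests_available",
          "written_mb", "read_mb")

def parse_volume_line(line):
    """Split one volume entry of statistics.txt into named fields.

    The part before '=' is the checksum of the volume ID (item 9)."""
    checksum, _, rest = line.partition("=")
    return {"checksum": checksum.strip(), **dict(zip(FIELDS, rest.split()))}

# Constructed sample, not taken from a real file
sample = "4711=VOL_A4_0001 A4_0001 1047888469 86400 12 340 650 120"
info = parse_volume_line(sample)
print(info["read_mb"])  # 120
```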

27-8 710
Statistics file processing

STORM writes jbd_stat.log every 10 minutes (by default);
at startup, it renames and collects the files as jbd_stat.<timestamp>
(e.g. jbd_stat.1047888469, jbd_stat.1047894530, jbd_stat.1047895276),
which are finally accumulated into statistics.txt

ADMS job Compress_Storm_Statistics
accumulates data into a single file

III Reset statistics:


- Delete collected file(s) jbd_stat_<timestamp>
- Delete accumulated file statistics.txt
- Edit files manually, set specific values to zero (no built-in reset function yet)

Statistics and Performance Monitoring Slide 9

Per default, all STORM statistics files are written and collected in directory
<IXOS_ROOT>/var/stats.
Configuration parameters for STORM statistics collecting can be maintained in
ADMC's Server Configuration page, branch Storage Manager → Parameters for
STORM Statistics.
Since some of the configuration variables are not explicitly set by default, choose menu
item View → Display undefined values to get the full view on all parameters.

Statistics and Performance Monitoring 27-9


Chapter guide

III! DocumentService statistics

iii STORM statistics

III Performance monitoring

Statistics and Performance Monitoring Slide 10

27-10 710
Statistics interface to Windows Performance Monitor

III Performance topics to be observed


- Number of access requests
- Number of transferred data packages
- Average processing time per request
- For read and write access separately

III Usefulness: Keep Archive Server running smoothly


- Detect peaks of processing load
- Detect processing bottlenecks

III Based on accounting data collection


- Effective only
for HTTP communication
when accounting functionality is activated

III Unix-based Archive Server can be observed remotely


- Connector interface has to be installed
on the Windows machine running PerfMon
- Included in IXOS product CD

Further reading in ESC:
Performance measurement
Statistics and Performance Monitoring Slide 11

Find the mentioned ESC article Performance measurement of retrieving documents:
considerations as:
https://esc.ixos.com/1013528663-844

Statistics and Performance Monitoring 27-11


Statistics add-on: IXOS-Insight

" Collects and reports statistical information that reveals


- current syste":l usage, workload, load peaks, ' ..
- usage trends, growth rates

" Objectives:
- Know how the system can be optimized according to today's requirements
- Prepare for the future by prOViding needed resources in time

.. Components
- Archive Server add-on for data collection and reporting +
on-site visit for installation I introduction
- Result analysis consulting (optional)

" Availability
- As solution package
- on Unix & Windows
- Detailed information in ESC

Statistics and Performance Monitoring Slide 12

IXOS-Insight is an Archive Server add-on that systematically collects all available statistics
data, stores it for later analysis, and conditions it for convenient viewing. Its added value -
compared to simply using the statistics tools presented earlier in this chapter - is a
synoptic, coherent view on all relevant information. This eases deducing measures to
optimize your server for current and future requirements.
IXOS-Insight is not part of the Archive Server standard distribution; it has to be ordered and
installed separately.

Find the mentioned ESC folder with further information as:


https://esc.ixos.com/esc/cgi-bin/index.cgi/1039086864-532

27-12 710

28 Logfiles and Loglevels


Managing and using Archive Server logging output

Logfiles and Loglevels 28-1


iii Structure and relationship of log messages

iii Working with loglevels

II Size limitations for logfiles

III Relevant logfiles

Logfiles and Loglevels Slide 2

28-2 710

Log message structure (1): General

Line-wrapped appearance
in text editor
alid argument: cannot open

Message type according to log switch name
- Relevant types for troubleshooting: IMP INF WRN ERR FTL SEC

Date and time of log entry

Log entry origin (module, function name, source file name, line number)
- Relevant for IXOS developers only

Log message text

Logfiles and Loglevels Slide 3

The chart above explains an example of a typical log message. The structure is the same for
most Archive Server log files; exceptions are detailed later in this chapter.

Logfiles and Loglevels 28-3


Log message structure (2): STORM trace file

2002/07/11 11:45:10:250 @ 00 07 "scsi" "open.c" 425 scsi_open(\\.\p4b0t4):
open(\\.\p4b0t4,0) failed
2002/07/11 11:45:10:250 @ 00 07 "sched" "sch_subdWrkThread.c" 1134
eq_init(\\.\p4b0t4,0): cannot open drive
2002/07/11 11:45:10:875 @ 00 07 "sched" "sch_common.c" 736 stopping picker of WORM
Annotations (from left to right):
- Date and time of log entry
- Type of log entry
- Request number (may be empty)
  Helps filtering long log files for messages belonging to a certain incident
- STORM component issuing the message
- Source file, line number
  For IXOS-internal debugging only
- Log message text


Logfiles and Loglevels Slide 4

The request number mentioned above is assigned by STORM arbitrarily to each client request
that cannot be fulfilled immediately. For example: If all disk drives are occupied, the next
incoming data read request is queued for later processing and is assigned such an internal
request number.
Since STORM handles multiple pending requests in a parallel manner, log messages of
concurrent requests occur interwoven in the log file. Nevertheless, it is easy to filter out all
messages belonging to a certain request by searching log lines for the request number.
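Filtering a trace file by request number can be scripted. A sketch, assuming the request number is the first token after the '@' marker; the sample lines are constructed for illustration and the real column layout of your trace may differ:

```python
def messages_for_request(lines, request_no):
    """Collect all trace lines that carry the given request number."""
    hits = []
    for line in lines:
        _, sep, rest = line.partition("@")
        # keep the line only if the first token after '@' matches
        if sep and rest.split()[:1] == [request_no]:
            hits.append(line)
    return hits

# Constructed sample trace lines
trace = [
    '2002/07/11 11:45:10:250 @ 07 "scsi" "open.c" 425 open failed',
    '2002/07/11 11:45:10:250 @ 07 "sched" "sch_common.c" 736 cannot open drive',
    '2002/07/11 11:45:10:875 @ 12 "sched" "sch_common.c" 901 request queued',
]
print(len(messages_for_request(trace, "07")))  # 2
```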

28-4 710
Log message interrelations (1): Within a log file

III Often an earlier message tells why a later error has occurred

III Example:

14:29:40.109

Messages with identical time label usually belong to the same incident

Logfiles and Loglevels Slide 5

Often an operational problem is due to some malfunction or misconfiguration that takes effect
earlier within the processing sequence of an operation. This is reflected within the log files:
The final error message tells what kind of operation could not be fulfilled - but, in order to find
the true reason for the failure, you have to scan through the preceding messages, too; one of
them may reveal the decisive information for diagnosing the problem.
The log messages' time label can provide valuable help in this respect. In many cases, you
may restrict your search for relevant information to the range of messages with a time
statement (nearly) identical to the time of the error message in question; those messages
reveal the operation history that took place immediately before the failure occurred.

The example above is taken from the log file of the DocumentPipeline tool Prepdoc whose
task it was to reserve a document ID for a document being processed in the pipeline. The
DocumentService rejected this request as being unauthorized - the security options of the
target logical archive required signatures for all kinds of access requests, which the DocTool
was not configured to deliver.
Logfiles and Loglevels 28-5


Log message interrelations (2): Across logfiles
III A system component may fail due to a previous failure of
another component
III Example:
jbd_trace.log:
2001/03/28 17:06:04:497 @ 00 Cannot read at x0
2001/03/28 17:06:04:497 @ 00 scheduler - possibleFault Drive

RC1.log:
NfsClnt::stat NfsClnt.cxx-693 ...

IXClient.log
(not on the Archive Server)

Often an error becomes visible at some system component but has been caused by a different
one. The example above shows how the jukebox server STORM fails to read a document from
a CD, probably because the CD is damaged (top). The DocumentService's read component -
which has requested this CD reading operation from STORM - writes a message about the
failure in its own log file (middle). Finally, the Archive Client which originated the retrieval
request is informed about the reading failure and writes a log entry about not being able to
retrieve the document to the client-side log file (bottom).
For troubleshooting purposes, you examine the log files the opposite direction: You begin with
the one nearest to the error occurrence (the client log file in the above example) and proceed
to the one(s) of the underlying components. In this case of synoptic log file analysis, it is
essential that you pay attention to the log messages' time labels in order to track their causal
relationship.
Common "causal connections" between messages of different log files include:
Document storage from Enterprise Scan:
doctods.log (on scanning client) - WC.log
Document storage via DocumentPipeline:
doctods.log (on Archive Server) - WC.log
Document retrieval:
IXClient.log (on retrieval client) - RC1.log - dscache1/2.log - jbd_trace.log
ISO media burning:
admsrv.log - dsCD.log - jbd_trace.log
IXW media writing:
admsrv.log - dsWORM.log - jbd_trace.log
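To support this kind of synoptic analysis, entries from several of these log files can be merged chronologically. The sketch below is a hypothetical helper, not an IXOS tool: the timestamp format is assumed from the slide example ("2001/03/28 17:06:04:497"), and real log layouts vary per component and version.

```python
import re
from datetime import datetime
from pathlib import Path

# Timestamp format assumed from the slide example; milliseconds are ignored.
TS = re.compile(r"(\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2})")

def entries_near(logfiles, around, window_s=60):
    """Collect (timestamp, filename, line) tuples within window_s seconds
    of the moment the error was observed, sorted chronologically."""
    hits = []
    for name in logfiles:
        for line in Path(name).read_text(errors="replace").splitlines():
            m = TS.search(line)
            if not m:
                continue
            ts = datetime.strptime(m.group(1), "%Y/%m/%d %H:%M:%S")
            if abs((ts - around).total_seconds()) <= window_s:
                hits.append((ts, name, line))
    return sorted(hits)
```

Called e.g. with `entries_near(["IXClient.log", "RC1.log", "jbd_trace.log"], around=...)`, this lines up client-side and server-side messages so their causal order becomes visible.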


Chapter guide

■ Structure and relationship of log messages
■ Working with loglevels
■ Size limitations for logfiles
■ Relevant logfiles

Logfiles and Loglevels Slide 7



Static vs. dynamic loglevel settings

Logfiles and Loglevels Slide 8


Log switches (1)

■ Determine what kind of information is written to logfiles
■ To be set per function component (DS, ADMS, BASE, ...)
■ Dynamic setting possible for some components

Example: Log settings for DocumentService
[screenshot: Server Configuration - log settings per Archive Server component; either "Configuration of log files" or "file configuration"]

Logfiles and Loglevels Slide 9

Every functional component of the Archive Server (as discussed in chapter Archive Server
Architecture) has its own set of log switches, enabling you to control the amount and focus of
logging output quite precisely. (See next page about the available log switches and their
meaning.)
The preferred tool for viewing and changing log settings is the Server Configuration page of
the Archive Server Administration (illustrated above). Each folder containing log settings is
located underneath the Archive Server component the settings belong to.
Setting log switches dynamically, however, is possible in the Server Configuration page for
only a subset of the server components (including DocumentService and
AdministrationService). For some of the components, command line tools are available for
viewing and setting log switches dynamically:
DocumentService's read and write components (RC1-4, WC): dsClient
Document pipeline DocTools: dpctrl
See appendix Archive Server Command Line Tools for details.



Log switches (2)

■ Log switches relevant for troubleshooting:
■ Some log switches are always active:
- LOG_ERROR   Errors
- LOG_FATAL   Errors leading to process termination
- LOG_SECU    Security violations
■ Other log switches are relevant for IXOS developers only

Logfiles and Loglevels Slide 10


Log switches (3)

■ Server Configuration includes one central folder with subfolders for all components

Logfiles and Loglevels Slide 11



STORM loglevels

■ Individual numeric loglevels for STORM components
- Possible values: 0, ..., 4
■ Static and/or dynamic setting possible in Server Configuration
■ Use in cooperation with IXOS Support

Logfiles and Loglevels Slide 12

STORM's loglevels, as set in the Server Configuration page of the Archive Server
Administration (illustrated above), can also be accessed in the following ways:
Static log settings are stored in STORM's configuration file:
Win: <IXOS_ROOT>\config\storm\server.cfg
Unix: /usr/ixos-archive/config/storm/server.cfg
Entries: loglevels { <component> { <log_level> } }
Dynamic log settings can be set and queried with the command line tool cdadm; see
appendix Archive Server Command Line Tools for details.
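As an illustration of this entry format, a loglevels block in server.cfg might look like the following. This is a hedged sketch only: the component names and values here are hypothetical examples, not taken from an actual installation.

```
loglevels {
    scheduler { 2 }
    presentation { 0 }
}
```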


Chapter guide

■ Structure and relationship of log messages
■ Working with loglevels
■ Size limitations for logfiles
■ Relevant logfiles

Logfiles and Loglevels Slide 13




Size limitations for logfiles (general)

■ When logfile reaches size limit:
- <filename>.log → <filename>.old
- Old <filename>.old is dropped
- New <filename>.log is created and written into

■ Configure size limit component-wise:
- Global settings are effective for components without individual settings

■ Spawner logfile spawner.log:
- Dropped and recreated at every spawner startup

Logfiles and Loglevels Slide 14
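The rotation described above can be sketched in a few lines of Python. This is a hypothetical illustration of the scheme, not the actual server code:

```python
import os

def rotate_if_needed(logfile, size_limit):
    """Sketch of the <name>.log -> <name>.old rotation described above."""
    if not os.path.exists(logfile) or os.path.getsize(logfile) < size_limit:
        return False
    old = os.path.splitext(logfile)[0] + ".old"
    if os.path.exists(old):
        os.remove(old)          # the previous .old file is dropped
    os.rename(logfile, old)     # the current log becomes <name>.old
    open(logfile, "w").close()  # a new, empty <name>.log is created
    return True
```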


Size limitations for STORM log and trace files

■ When file reaches size limit and at every STORM startup:
- <filename>.log → <filename>.000 → .001 → ... → .0xx
- Old <filename>.0xx is dropped (maximum for xx: 99)
- New <filename>.log is created and written into

■ Applies to files:
- jbd.log - "standard" logfile with unified logging format
- jbd_trace.log - trace file, more detailed than standard logfile
- jbd_lwords.log - "last words", debugging aid when STORM has crashed

■ Configure size limits:
[screenshot: Server Configuration tree - Storage Manager (STORM), Installation Variables, Configuration STORM (file server.cfg), Parameters Sizing STORM Server, Parameter SCSI report, Parameters jbd scheduler, Parameters jbd presentation, Parameters ISO9660 Finalization]

Logfiles and Loglevels Slide 15
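The numbered rotation above can be sketched as follows. This is a hypothetical illustration of the scheme (current log becomes .000, older files shift up, and the oldest numbered file is dropped at index 99), not the actual STORM implementation:

```python
import os

def storm_rotate(logfile, max_index=99):
    """Sketch of STORM's rotation: .log -> .000 -> .001 -> ... -> .0xx."""
    if not os.path.exists(logfile):
        return
    base = os.path.splitext(logfile)[0]
    oldest = f"{base}.{max_index:03d}"
    if os.path.exists(oldest):
        os.remove(oldest)                      # drop the oldest numbered file
    for i in range(max_index - 1, -1, -1):     # shift .0xx -> .0(xx+1)
        src = f"{base}.{i:03d}"
        if os.path.exists(src):
            os.rename(src, f"{base}.{i + 1:03d}")
    os.rename(logfile, f"{base}.000")          # current log becomes .000
    open(logfile, "w").close()                 # new, empty .log
```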



Chapter guide

■ Structure and relationship of log messages
■ Working with loglevels
■ Size limitations for logfiles
■ Relevant logfiles

Logfiles and Loglevels Slide 16


Relevant logfiles for Archive Server operations

■ Burning ISO images / writing to IXW media
- dsCD.log / dsWORM.log
- jbd_trace.log

■ Backing up IXW media
- bkupDS.log
- bkupSrvr.log
- bkWorm.log
- jbd_trace.log

■ Finalizing IXW media
- dsFinalize.log
- jsd.log
- jbd_trace.log

■ Purging disk buffer
- dsHdskRm.log

■ Document retrieval
- IXClient.log (on retrieval client)
- RC1.log
- dscache1/2.log
- jbd_trace.log

■ Write to VI pool (Single File - Centera)
- dsGs.log

■ Write to FS pool (buffered hard disk)
- dsHdsk.log

Logfiles and Loglevels Slide 17

For problems with storing documents, check also WC.log.

The VI pool is used for writing single files to an EMC Centera storage system.
The FS pool is the successor to the HDSK pool and supports the disk buffer.

Using Archive Server 4.x, there are some differences concerning the jukebox server logfiles:
Using WORMs on a Unix server which was upgraded from a pre-4.0 IXOS-ARCHIVE
version, jukebox server ixwd is used instead of STORM. Its logfile is named ixwd.log.
All other cases: There is no STORM trace file yet; use STORM's logfile jbd.log
instead.



Access to log files via Perl script

■ Enter one of the following URLs in a browser:


http://<ArchiveServer>:4060/cgi-bin/tools/log.pl

or
https://<ArchiveServer>:4061/cgi-bin/tools/log.pl

Logfiles and Loglevels Slide 18


Logging:
Further sources of information

■ Survey of all logging possibilities
- Generating and Accessing Logging Information for Archive Server in ESC
- For older IXOS-ARCHIVE releases

■ Complete description and explanation of log settings
- Archive Server Administration Guide, chapter Consulting the log files

■ Collections of error messages and troubleshooting help
- ESC folder Log files and error messages

Logfiles and Loglevels Slide 19

Archive Server Troubleshooting Tools in ESC (≤ 4.x):

https://esc.ixos.com/0935134824-235

Find the Archive Server Administration Guide in the ESC:

https://esc.ixos.com/1084264247-891

The ESC folder Log files and error messages can be accessed as:
https://esc.ixos.com/0976524753-322

For higher versions, look also in the ESC.




29 Summary of troubleshooting tasks on the Archive Server

... before it is too late!

Summary of troubleshooting tasks on the Archive Server 29-1


Chapter overview

■ Avoiding problems

■ Examining error symptoms
- Finding proper log files for further information

■ Contacting IXOS Support

Summary of troubleshooting tasks on the Archive Server Slide 2


Avoiding problems

■ It is still a better strategy to avoid problems than to solve them
■ Some general hints
- Make backups of the Archive Server - including its database - regularly
- Test restoring of backups
- Monitor the Archive Server
- Test whether it is possible to restart the Archive Server without problems
- Follow recommended Archive Server upgrades (see ESC)
- Install essential patches (see ESC)
- Verify compatibility of used products & components (see ESC)
- Train your IT staff
- Train your scanning personnel (if applicable)
- Hardware and software service contracts can help

Summary of troubleshooting tasks on the Archive Server Slide 3

Release and Phase Out Dates for archive-based products in ESC:

https://esc.ixos.com/1088598538-150

Find Archive Server Patches in the ESC:

https://esc.ixos.com/1090501134-570
Patches that apply to your server platform (for server patches) and IXOS release and that are
marked as "recommended" or "strongly recommended" should be installed.

Product Compatibility Matrix in ESC:

https://esc.ixos.com/1127140577-928



Symptom examination (1):
Error state ("red bulb") in Server Monitor

Categories of error conditions:

■ Disk/database space shortage
→ Examine whether
* disk is too small for normal operation, or
* data has queued up irregularly

■ Program terminated
- Message: "Can't call this RPC server"
→ Consult program's log file;
log file name is equal or similar to program name

■ Item in DocumentPipeline error queue
→ Examine DocumentPipeline using DocumentPipeline Info

See also ESC section Archive Monitor Diagnosis

[screenshot: Server Monitor tree with branches such as Storage Manager, DocService, DS Pools, Tablespace, DP Error Queues, RFC Server(s)]

Summary of troubleshooting tasks on the Archive Server Slide 4

This and the following pages explain how to investigate error states by deducing possible error
causes from the visible error symptoms. Since it is impossible to list all imaginable malfunction
reasons, the explanations concentrate on the first investigation step from the error symptom to a
resource for more detailed error information. This more detailed resource will mostly be some
log file; the IXOS log files usually give decisive hints about where to look for the true error
cause, so this should be sufficient in the context of this course.

Depending on the category of the error indicated in the Archive Server Monitor, there are
different ways of examining possible causes, as explained above.
Concerning the correspondence of program names and log file names, see the following page.
The only exception lies in the "Storage Manager" branch of the monitor tree: STORM's log file
is named jbd.log, and the trace file jbd_trace.log should also be examined for
troubleshooting.
For an explanation about examining DocumentPipeline error items, see page Symptom
examination (4): DocumentPipeline errors later in this chapter.

The articles in ESC section Archive Monitor Diagnosis give further explanations about possible
error causes and actions for problem solving. Find the section at:
https://esc.ixos.com/0951320904-700


Symptom examination (2):
"Dead" services in spawncmd status

■ On the Archive Server, 5 (on Unix: 3) terminated processes in the spawncmd status list are the normal state
■ If more than those are terminated → consult their log files
- Terminated irregularly (exit code ≠ 0)
- Log file name similar to program-id

C:\> spawncmd status
[screenshot: spawncmd status listing with columns program-id, sta, pid, start time, stop time; a few highlighted programs are allowed to be terminated with exit code 0]
Summary of troubleshooting tasks on the Archive Server Slide 5

In a sane operational state, all Archive Server processes listed in spawncmd status have to
be running - except for the ones marked in the chart above. If any of the other programs is
marked as terminated ('T' in column "sta"), something irregular has happened to them. To
investigate this, you will have to have a look in the corresponding log file. Each of the listed
programs writes to a log file whose name is similar, yet not always exactly equal, to the
displayed program name. Some important examples:
admsrv → admSrv.log
dsrcl → RC1.log
dswc → WC.log
On a scanning station with an IXOS-EnterpriseScan installation, a subset of the Archive Server
processes is installed and must be running as well. There, stockist is the only program that
is allowed to be terminated during normal operation.

IXOS-ARCHIVE ≤ 4.2: One additional process, named checkscsi, is always allowed to be
terminated; it is okay even if its exit code is 1. (Its purpose is to check whether the versions of
the IXOS generic SCSI driver and the operating system match, which is no longer necessary
starting with IXOS-eCONserver 5.0.)
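The check described above can be sketched as a small parser. This is hypothetical: the column layout is assumed from the slide's example output, and the list of programs allowed to be terminated must be supplied by the administrator.

```python
def terminated_programs(status_output, allowed=()):
    """Return program-ids marked 'T' (terminated) that are not expected to be.

    Assumes a whitespace-separated listing with columns:
    program-id  sta  pid  start time  stop time
    """
    suspicious = []
    for line in status_output.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1] == "T" and fields[0] not in allowed:
            suspicious.append(fields[0])
    return suspicious
```

Feeding it the captured output of `spawncmd status` (e.g. with `allowed=("stockist",)` on a scanning station) yields the list of processes whose log files deserve a closer look.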




Symptom examination (3):
Red bulb in Archive Server job protocol

Mostly some diagnostic note already here

Summary of troubleshooting tasks on the Archive Server Slide 6

The Job Protocol window (shown above) of the Archive Server Administration indicates
unsuccessfully finished periodic jobs by red bulbs in the leftmost column. There are three
possibilities to gather information about possible causes:
The protocol item itself mostly yields some brief note about what has gone wrong.
Messages about missing empty media (as in the example above) should already be
sufficient as diagnosis.
You may click on the protocol row in question and then click the Messages button. This
will open a window (shown above, bottom) displaying all log messages of the chosen
job run.
In case you need to examine messages of earlier job runs, you will have to consult the
corresponding log file. The log file's name can be deduced from the protocol entry: It is
mostly equal to the job's program name mentioned in the protocol window's "Message"
column.


Symptom examination (4):
DocumentPipeline errors

Document log:
c:/IXOS/dirs/DPDIR/muc.../m/00000009.00000000

Document protocol:
2000/09/27 16:15:15 [doctods] ERROR:
dscCpComp(archive_id='RD',pool='',name='im',
type='ASCII_NOTE',file='c:/IXOS/dirs/
DPDIR/muc00536/m/00000008.00000000/IM',appl_type='notice') failed

Summary of troubleshooting tasks on the Archive Server Slide 7

The DocumentPipeline Info shows the processing status of documents being processed. If
some document is being held in an error queue (see illustration above), there are two
possibilities to gather information about possible causes:

Within the DocumentPipeline Info window, click on the row containing the document in
question ("Archive document" in the example above); the window's status bar then
displays the name of the DocTool at which the error has occurred ("doctods" in the
example above). You may then consult the log file with exactly that name (with .log
appended); it will reveal meaningful messages about the error(s) in question.
A way to get information directly within the DocumentPipeline Info, yet often less
informative:
1. Right-click the row containing the document in question. From the context menu,
choose Documents → Show.
2. The first time you do this within a DocumentPipeline Info session, you will be
prompted to log on to the DocumentPipeline as an Archive Server administrator.
3. Beneath the chosen DocumentPipeline row, a sub-list is displayed containing all
documents currently being kept at this processing step. Right-click on one of the
documents in question and choose Protocol from the context menu; this will
open a window showing just that portion of the DocTool log file concerning the
chosen document (shown above, top right).



Further Information

■ Find information about known problems in the Expert Service Center (ESC)
- https://esc.ixos.com

■ Further information in the Knowledge Center (KC)
- esp. on products integrated with Livelink
- https://knowledge.opentext.com

Summary of troubleshooting tasks on the Archive Server Slide 8

For information about contacting the IXOS Support Centers, see:


http://www.ixos.com/home/services/ser-support


Contacting Customer Support



If you need support ...
... the Support Hotline will always help you!

■ Contact Customer Support (Hotline)
- Global Support Centers
* Germany
* UK
* Americas
* Asia & Australia
* Japan

Summary of troubleshooting tasks on the Archive Server Slide 9

For information about contacting the IXOS Support Centers, see:


http://www.ixos.com/home/services/ser-support


