710 - Archive Server 9.7.0 Administration
July 2008
OPENTEXT
Great Minds Working Together
Impressum Learning Services course material
Revision 9.6q
Author: Learning Services
Date: July 2008
Trademarks SAP, R/3, SAPmail, SAPoffice, SAPscript, SAP Business Workflow, SAP ArchiveLink:
SAPAG
IXOS, IXtrain: IXOS SOFTWARE AG, München
UNIX: UNIX System Laboratories, Inc.
OSF, Motif, OSF/Motif: Open Software Foundation, Inc.
X Window System: Massachusetts Institute of Technology
PostScript: Adobe Systems, Inc.
FrameMaker: Frame Technology Corporation
ORACLE: ORACLE Corporation, California, USA
Microsoft, WINDOWS, EXCEL, NT: Microsoft Corporation
Intel, Intel Inside: Intel Corporation
Other product names have been used only to identify those products and may be
trademarks of their respective owners.
Table of Contents
1 Course Objectives and Contents
710 - Course objectives 1- 2
710 - Contents overview 1- 3
Where to go from here 1- 4
0-4 710
0-6 710
16 Media Migration
Chapter guide 16- 2
Introduction 16- 3
Volume Migration - Process 16- 4
Migration Server's work 16- 5
Additional considerations 16- 6
Chapter guide 16- 7
Preparation steps 16- 8
Plan migration for selected volumes 16- 9
Review migration progress of volumes 16-10
Pause Migration Job 16-11
After a migration project 16-12
Chapter guide 16-13
Verification after Migration (1) 16-14
Verification after Migration (2) 16-15
Verification after Migration (3) 16-16
Bulk migration of ISO images (1) 16-17
Bulk migration of ISO images (2) 16-18
Bulk migration of ISO images (3) 16-19
Bulk migration of remote ISO volumes (1) 16-20
Bulk migration of remote ISO volumes (2) 16-21
Run Migration per Pool 16-22
Exercise: Do media migration 16-23
Chapter guide 16-24
Document Migration - Feature 16-25
Document Migration - Details 16-26
Document Migration - Function Call 16-27
0-8 710
22 Periodic Jobs
Chapter overview 22- 2
Tasks for jobs: synopsis (1) 22- 3
Tasks for jobs: synopsis (2) 22- 4
Running several jobs simultaneously 22- 5
Scheduling automatic daily jobs: Example 22- 6
Jobs administration 22- 7
Disable a Job 22- 8
Edit a job (1): scheduling 22- 9
Edit a job (2): conditional invocation 22-10
Additional recurring tasks not executable as jobs 22-11
Exercise: Schedule jobs appropriately 22-12
26 Accounting information
Motivation and Objective 26- 2
Involved steps 26- 3
Access logging by Archive Server 26- 4
Accounting data retrieval (1): Interactive 26- 5
Accounting data retrieval (2): Command line or script-based 26- 6
Using retrieved accounting data for billing 26- 7
Reorganization of "old" accounting data 26- 8
Exercise: Download accounting data for billing 26- 9
0-10 710
Appendix C Glossary
0-12 710
1-2 710
710 - Contents overview
See http://opentext.com/training for more information on the Learning Services offering.
[Slide diagram: course paths - 710 Archive Server Administration (5 days), Archive Server
Installation (4 days), and application-specific customizing courses.]
Course Objectives and Contents Slide 4
Starting from the present course 710, Archive Server Administration, Open Text Learning
Services offers a variety of courses for different education needs.
Since course 710 covers only the server part of a Livelink Enterprise Archive, all
administrators are recommended to complete their administration skills with the
application-specific counterpart. For this purpose, Learning Services offers
administration courses for all Livelink Enterprise Archive products separately; all of
them require having attended course 710 before.
715 Archive Server Administration Advanced is not needed for administering a
"normal" Archive system. The target group of this course is administrators of large Archive
installations, especially in outsourcing centers; here they learn how to automate
administrative tasks, integrate the Archive Server into a computing center infrastructure
more tightly, and do advanced troubleshooting.
Some of the leading systems, e. g. SAP, require additional customizing or configuration
in order to integrate optical document storage into their "main" functionality. Learning
Services offers appropriate, product-specific customizing courses for building up the
necessary skills.
For specialized interest, further courses and workshops are available. For full
information including scheduled course dates, see http://www.opentext.com/training
1-4 710
2-2 710
Traditionally, optical media like DVD or WORM have been used to ensure that documents are
no longer changed or manipulated. Modern storage systems can usually be switched to "write-
once mode" to ensure that documents cannot be changed even though hard-disk is used as
final storage media. While non-manipulation of documents is desirable, disposition
management (removing documents after their retention period has expired) should also be
considered.
Besides technical components, the right organisational implementation and its documentation
are important for ensuring legally compliant archiving according to the specific lawmakers.
2-4 710
Retention Periods
For details on supported operating systems and interface view the storage platform release
notes.
WO Feature
The write-once feature allows i. e. hard-disk based storage systems to store documents and
data on the hard-disk similar to e. g. a DVD-R. After the initial write process, the data is stored
as "read-only" and may only be modified or deleted after a certain retention period.
2-6 710
Optical jukeboxes: DVD, WORM, UDO
Connection: SCSI or fibre channel
■ Advantage
- Support of offline media
- Non-erasable, robust
- Tamper-proof
■ Drawback
- Nearline media
- Cache areas required for fast access
Archiving with Open Text / IXOS Slide 7
2-8 710
Single documents; IP connection
■ Advantage:
- ISO image: easy backup to optical media
- Flexible partition size
■ Drawback
- Centera SDK used
- Can't be used as disk subsystem
- No file system available
Virtual Jukebox
■ Advantage:
- Easy backup to optical media
■ Drawback
- API necessary
Archiving with Open Text / IXOS Slide 10
2-10 710
NAS / HSM: ISO files
■ Advantage
- No special API necessary
- Can be used together with optical and other storage media
Hierarchical Storage Management
■ Single documents
- DS through disk buffer
■ Hint:
- Release to tape not supported;
a copy on hard disk is always required
HSM is policy-based management of file backup and archiving in a way that uses storage
devices economically and without the user needing to be aware of when files are being
retrieved from backup storage media.
Although HSM can be implemented on a standalone system, it is more frequently used in the
distributed network of an enterprise. The hierarchy represents different types of storage media,
such as redundant array of independent disk systems, optical storage, or tape, each type
representing a different level of cost and speed of retrieval when access is needed.
For example, as a file ages in an archive, it can be automatically moved to a slower but less
expensive form of storage. Using an HSM product, an administrator can establish and state
guidelines for how often different kinds of files are to be copied to a backup storage device.
Once the guideline has been set up, the HSM software manages everything automatically.
HSM adds to archiving and file protection for disaster recovery the capability to manage
storage devices efficiently, especially in large-scale user environments where storage costs
can mount rapidly.
An administrator can set high and low thresholds for hard disk capacity that HSM software will
use to decide when to migrate older or less-frequently used files to another medium. Certain
file types, such as executable files (programs), can be excluded from those to be migrated. For
connection with the Archive Server, these thresholds should be set appropriately so that files
stay on hard disk.
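The watermark behaviour described above can be sketched in a few lines of Python. All
thresholds, field names, and the function itself are illustrative assumptions, not part of any
real HSM product:

```python
# A minimal sketch of high/low watermark migration. Names, thresholds,
# and fields are illustrative assumptions, not a real HSM API.

def hsm_migrate(files, disk_used_pct, high=90, low=70, exclude_exts=(".exe",)):
    """Return the names of files an HSM could migrate to slower media."""
    # Nothing happens until usage exceeds the high watermark.
    if disk_used_pct <= high:
        return []
    migrated = []
    # Migrate least-recently-accessed files first; excluded file
    # types (e.g. executables) always stay on hard disk.
    for f in sorted(files, key=lambda f: f["last_access"]):
        if f["name"].endswith(exclude_exts):
            continue
        migrated.append(f["name"])
        disk_used_pct -= f["size_pct"]
        # Stop as soon as usage falls below the low watermark.
        if disk_used_pct <= low:
            break
    return migrated
```

For the Archive Server connection mentioned above, the thresholds would be set so high that
archive files effectively stay on hard disk.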
2-12 710
2-14 710
Document Archiving
[Slide: Document archiving - a scanning client archives scanned image files to the Archive
Server; alternatively, the server of the leading application (e. g. SAP R/3) archives directly
via a specific interface. For retrieval, documents are downloaded and displayed on the client.]
Document retrieval is carried out the same way, no matter how the documents have entered
the archive (as detailed above): The retrieval client component downloads the document from
the Archive Server to the user's workstation, then the document is displayed in a suitable
viewing application (for most frequently used document formats, this is the Livelink Archive
Windows Viewer).
[Slide: Data archiving - user workstations access the leading application's server, which in
turn interacts with the Archive Server.]
Compared to document archiving (see previous page), data archiving implies different roles
of the leading system and the Archive Server.
A typical server-based application produces and/or stores large amounts of electronic data.
However, the available storage space for that application data is limited by the server
hardware. In many cases, the server is not even able to keep all data it is supposed to due to
business requirements (e. g. a certain legally enforced data retention period). In such a
situation, selected application data can be sourced out from the application server to the
Archive Server; this is referred to as data archiving.
The application server can then access the archived data again for various kinds of
processing; this includes:
Display of archived data items (in non-changeable mode)
Reloading archived data into the server's own storage space
From the point of view of the leading application, the Archive Server is therefore a mere (safe
and huge) storage backend. As a consequence, the system users (and their computers) have
no direct relation or connection to the Archive Server; they only "see" the leading application's
server that interacts with the Archive Server behind the scenes.
The following IXOS products make use of the Archive Server the data archiving way:
Livelink Integration for SAP Solutions
Livelink E-mail Archiving for MS Exchange
Livelink E-mail Archiving for Lotus Notes
2-16 710
For making up an optical archiving solution, it is not sufficient just to store documents or
document images on storage media. In order to make the documents serve some business
purpose, they must be made available for retrieval by one of these methods:
Maintaining attributes for each document by which document users can search for
specific documents of interest. Such attributes can include:
- Date of origin
- Document number
- Customer number
- Document type: order, invoice, correspondence, ...
- ... and many more
Linking documents to some kind of "object" maintained by another business data
system. For example, an invoice document may correspond to an invoice booking in the
SAP database. A SAP user can search for and retrieve the invoice booking, then
retrieve the corresponding document by activating a link to it that is stored as part of the
booking data.
Since the choice of how to make documents retrievable fundamentally decides how
documents are used in business, the system performing this task is called the leading
application. It may or may not be part of the optical archiving system itself; Livelink for
Electronic Archiving (former IXOS-eCONtext for Applications) and Livelink for SAP
Solutions (former IXOS-eCONtext for SAP) are two opposite examples of this.
2-18 710
Leading application
[Slide: The user requests a document from the leading application, which requests it from the
Archive Server by its document ID (e. g. aaahx4c...).]
2-20 710
3-2 710
Global Services
Objectives:
■ Helping customers to fully exploit the potential of
OpenText solutions, i.e.
- Document and Data Archiving
- Workflow Integration
- Web-based Portals
- Migration and parallel legacy and SAP systems
- Existing archive migration
Solution packages are implementations for specialized archiving requirements. They are not
part of the standard products but can be added to them in order to expand their functionality.
Some solution packages are ready-to-run, others require a certain amount of consulting to be
established at a customer.
Cooperation with Global Services is initiated via your local sales representative; please contact
them for further details.
■ Workshops
- 725 Scanning Documents
- SAP-CST-PL Customizing SAP Print Lists for Archiving
OpenText's training center Learning Services offers a wide variety of courses for different
OpenText and IXOS products and target groups; examples are given above.
Please check our webpage to see how to contact your local Open Text training registrar:
http://www.opentext.com/training/contacts.html
3-4 710
Standard Support
Software Maintenance
Program (Standard Support)
■ Support Services
- Phone, Web, E-mail
- Only for Standard Products
■ Software Updates
■ Customer Care Program
- e-Newsletters
- LiveLinkUp Webinar Series
- Champion Toolkit
The Premier Support Program is an optional support service that is offered in addition to the
Software Maintenance Program.
It provides you with a level of support that brings together highly experienced Technical
Specialists who will work with your in-house Service Management teams to assist with these
challenges and further the achievement of your deployment goals.
Benefits:
- Optimized Customer Support Processes
- Improved Understanding of Open Text Software
- Improved Risk Management
- Improved Strategic Planning
- Improved Issue Support
- Proactive Services
All of the services delivered by the Technical Services team are developed and delivered
within the ITIL framework. All members of the Technical Services team are ITIL certified.
Program Manager
The Program Manager is your single point of contact within Open Text Customer Support,
responsible for the relationship and all communication between your Service Management
Team and Open Text Customer Support/Development. They are also responsible for the
management of the delivery of the program to which you subscribe.
Technical Specialist
A Technical Specialist is responsible for working with the Program Manager and your Service
Management Team to manage the technical scope of the program to which you subscribe.
Their responsibilities depend on the Service Catalog options selected.
3-6 710
■ Application Support
- In addition to Premier Support
- Support for specific applications
■ Knowledge of Customizing
- Code & Configuration
3-8 710
3-10 710
■ Find there:
- Manuals
- Release Notes
- Installation/upgrade guides
- Patches
- Troubleshooting help
• Notes on specific problem issues
• Troubleshooting guides
Here you find all information about the current generation of the former IXOS ECM portfolio.
The heart of the ECM Suite is the Livelink Enterprise Archive Server (LEA), formerly known
as the Enterprise Content Repository (ECR), which greatly facilitates the integration and
operation of Document Management, Web Content Management, Business Process
Management, ERP, CRM, Groupware Management and Archiving. The top channel ECM Suite on
the left provides general information about the ECM Suite and the rebranding of former IXOS
products into the Livelink scheme. For more information about the different ECM components,
please click on the respective channels in the navigation bar on the left. A complete mapping
table including the former IXOS product names is available here.
Historically, a lot of information esp. on the Archive Server can be found in the ESC. A
migration of the ESC content to the Open Text Knowledge Center is planned.
To get your own personal account, send an email to [email protected] with the
following information:
■ Your portal for all Open Text related product information
■ Access via: http://knowledge.opentext.com
Select a product family below to locate your product, and then proceed to the product family page where
you will find the latest downloads, patches, documentation, and more for your Open Text product.
To get your own personal account, send an email to [email protected] with the
following information:
3-12 710
Open Text Online (OTO) is a business environment that serves your need to get the most out
of your Open Text products. The communities allow you to learn about best practices, ask
questions in forums and allow customers to share their experience.
Solution Packages by
Open Text Global Services
3-14 710
Contact your Global Services Consultant for more information on solution packages that you
are interested in.
3-16 710
4-2 710
[Slide: Document life cycle - Archival, Writing to storage media, Retrieval.]
The chart above gives a complete overview of the possible paths a document may take as it is
processed by the Archive Server. The illustration also reveals that the
whole "life cycle" of a document is composed of three stages:
The archival of the document from its source to either the disk buffer or a hard disk pool. In
some situations, documents pass the DocumentPipeline before they enter the Archive
Server's core component, the DocumentService.
An important aspect of this is that a document is defined to be archived already while it is still
held in the disk buffer, actually before it is stored on optical media or buffered disk. While this can
be interpreted as a potential safety gap (data is less safe in a hard-disk based buffer than on
optical disks), it is a mandatory precondition for many archiving applications requiring access to
documents immediately after their storage: The disk buffer provides this feature.
Writing the document from the disk buffer to an optical medium or a buffered hard disk
(= FS pool, available since Archive Server ≥ 9.6).
Exception: Documents that have been stored in a write-through hard disk (= HDSK pool).
Retrieving the document from its current storage location to a client.
The chart also already names the Archive Server components that perform
the involved tasks:
The DocumentPipeline preprocesses certain documents before they are stored.
The DocumentService manages the buffering, optical storage, and retrieval of documents.
The storage database DS is used by the DocumentService for storing technical attributes of
stored documents; they are needed to keep track of the current state of a document and to find it
upon retrieval requests.
The StorageManager (also called STORM) manages optical media in jukeboxes and provides
write and read access to storage systems.
Details about all three mentioned document processing stages are explained on the following pages.
Document archival
The chart above illustrates the steps the Archive Server takes
when it receives a document for archival:
(A) The document is stored as a file (or a set of files) in the DocumentPipeline directory.
This does, however, not apply to all documents. Depending on the leading application
and the used storage scenario, a document may as well bypass the DocumentPipeline
and directly enter the DocumentService where step (C) is performed.
(B) The DocumentPipeline preprocesses the document: A sequence of document tools
(also called DocTools) accesses the document one after the other and performs various
tasks. The exact sequence of steps depends again on the type of document and the
storage scenario. Examples of preprocessing actions include:
Extracting attributes from the document's contents
- Storing retrieved attributes in an index database of the leading application
- Adding information (example: a page index for a print list)
- Converting the document (example: collecting multiple scanned document pages
into a single multi-page image file)
(C) After the DocumentPipeline has finished its work - or when it has been bypassed -
the document is then handed over to the DocumentService. Depending on the archive
configuration, the document is stored in one of two places:
- If the document shall later be written to an optical medium, it is stored in a disk
buffer.
- If it shall be stored on a hard disk permanently, it is directly written to the
destination hard disk pool.
(D) The DocumentService stores status information about the received document in the
storage database; this includes the newly allocated document ID and the chosen
storage location.
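The archival steps (A) to (D) can be summarized in a short Python sketch; all function and
field names here are illustrative assumptions, not the real DocumentService API:

```python
# Hypothetical sketch of archival steps (A)-(D); names are illustrative.

def run_pipeline_doctools(doc):
    # (B) DocTools preprocess the document one after the other,
    #     e.g. extracting attributes or converting scanned pages.
    doc = dict(doc)
    doc["preprocessed"] = True
    return doc

def archive_document(doc, uses_pipeline, write_to_optical, storage_db):
    # (A) Depending on the scenario, the document enters the
    #     DocumentPipeline or bypasses it.
    if uses_pipeline:
        doc = run_pipeline_doctools(doc)
    # (C) The DocumentService stores it in a disk buffer (if it shall
    #     later go to optical media) or directly in a hard disk pool.
    location = "disk_buffer" if write_to_optical else "hard_disk_pool"
    # (D) Status information, including the newly allocated document ID
    #     and the chosen location, goes to the storage database DS.
    doc_id = len(storage_db) + 1
    storage_db[doc_id] = {"doc": doc, "location": location}
    return doc_id
```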
4-4 710
The chart above illustrates the steps involved in writing documents from the disk buffer to ISO
media such as DVD. (Optionally, an ISO image can also be written to a WORM medium.) This is
organized as a periodic job; whenever the job is invoked (usually once a night), it performs the
following steps:
(A) It checks the disk buffer for the amount of collected data. If it is too little to fill a medium,
nothing happens; the job finishes, waiting for its next invocation. Otherwise it continues
with the next step.
(B) As a preparation for ISO media burning, the ISO image is created in the burn buffer.
This "image" is a single large file containing the complete file system layout for the
target media; its contents is the complete set of documents selected for burning.
To optimize read performance for the target media, the document files are sorted by
their size (large files first) before the ISO image is assembled; for this, the ISO tree
structure is created in the burn buffer.
(C) A medium is inserted into the jukebox's writer drive and the ISO image is written to it.
Immediately afterwards, the medium is checked for writing errors ("verified") by reading
it completely and comparing it with the ISO image in the burn buffer. Should writing
faults be detected, the medium is marked as "bad" and a further attempt to burn the ISO
image on another medium is made. (After the third unsuccessful attempt, the job
assumes that the writer drive is damaged, stops operation, and terminates with an error
status.)
(D) If thus configured, a second medium - i. e. the backup - is burned from the same ISO
image and verified. Original and backup are completely identical; no distinction is
possible and necessary.
(E) Depending on the configuration, either (or none) of these actions is taken:
- The copied documents are deleted from the disk buffer.
- The copied documents are moved from the disk buffer to the cache (so that they
remain quickly accessible).
(F) The storage database DS is updated to reflect the new status and location of the
processed documents.
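The control flow of this job can be sketched as follows; the capacity threshold, retry limit,
and all names are illustrative assumptions, not Archive Server internals:

```python
# Hypothetical sketch of the nightly ISO write job (steps (A)-(F) above).

MEDIUM_CAPACITY = 4_700_000_000   # e.g. a single-layer DVD, in bytes
MAX_BURN_ATTEMPTS = 3

def iso_write_job(buffer_bytes, burn_and_verify, make_backup):
    # (A) Too little data collected: do nothing until the next invocation.
    if buffer_bytes < MEDIUM_CAPACITY:
        return "waiting"
    # (B) Build the ISO image in the burn buffer (large files first).
    # (C) Burn and verify; a bad medium triggers another attempt, and
    #     after the third failure the writer drive is assumed damaged.
    for _ in range(MAX_BURN_ATTEMPTS):
        if burn_and_verify():
            break
    else:
        return "error: writer drive assumed damaged"
    # (D) Optionally burn an identical backup medium from the same image.
    if make_backup and not burn_and_verify():
        return "error: backup failed"
    # (E)/(F) Purge or cache the buffered documents and update DS.
    return "done"
```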
The chart above illustrates the steps involved in writing documents from the disk buffer as ISO
images to e. g. a hard-disk based storage system. This is organized as a periodic job; whenever
the job is invoked (usually once a night), it performs the following steps:
(A) It checks the disk buffer for the amount of collected data. If it is too little to fill a medium,
nothing happens; the job finishes, waiting for its next invocation. Otherwise it continues
with the next step.
(B) As a preparation for ISO media burning, the ISO image is created in the burn buffer.
This "image" is a single large file containing the complete file system layout for the
target media; its contents is the complete set of documents selected for burning.
To optimize read performance for the target media, the document files are sorted by
their size (large files first) before the ISO image is assembled; for this, the ISO tree
structure is created in the burn buffer.
(C) The ISO image is transferred to the storage system (conventions may vary depending
on storage system).
Immediately afterwards, the medium is checked for writing errors ("verified") by reading
it completely and comparing it with the ISO image in the burn buffer. Should writing
faults be detected, the medium is marked as "bad" and a further attempt to burn the ISO
image on another medium is made. (After the third unsuccessful attempt, the job stops
operation, and terminates with an error status.)
Theoretically a second medium - i. e. the backup - can be written from the same ISO
image and verified. However, using backup usually won't apply to storage systems
using HD-WO method. Storage systems normally have their own backup mechanisms.
(D) Depending on the configuration, either (or none) of these actions is taken:
- The copied documents are deleted from the disk buffer.
- The copied documents are moved from the disk buffer to the cache (so that they
remain quickly accessible).
Generally, storage systems have better access times than DVD or WORM
jukeboxes. Therefore, usually caching the disk buffer in a cache partition on the
Archive Server is less critical.
(E) The storage database DS is updated to reflect the new status and location of the
processed documents.
4-6 710
The chart above illustrates the steps involved in writing documents from the disk buffer to WORM media.
As opposed to ISO media writing (detailed on the previous page), there are three separate periodic
jobs involved. They are invoked by the job schedule independently from each other, but the
corresponding actions on a specific document are always carried out in the order explained here.
(1) WORM write job. The write job copies document files from the disk buffer to the target WORM media
one by one. (Unlike writing to ISO media, no ISO image preparation is involved.) This involves
the following subtasks for each written file:
- The file is copied temporarily to another hard disk area.
- The file is copied from there to the WORM medium.
- The file is read back from the WORM medium and compared with the temporarily stored file
instance in order to ensure no writing errors have occurred. (If writing has failed, another
attempt is made to write the file.)
- The WORM filesystem database is updated so that it now knows the written file.
- The storage database DS is informed that the file is now resident on the WORM medium.
The write job is usually scheduled to run rather often, e. g. every 30 minutes.
(2) Backup job. Since writing to WORM media is comparatively slow, the WORM write job never
copies documents to the WORM backup medium itself; this task is left to the backup job that is
normally executed once a night. This job copies newly written data from all WORM media to their
corresponding backups; it does this on filesystem block level rather than on file level, which
makes the process faster. After this, original and backup WORM media have identical contents.
(3) Purge buffer job. This job is not really part of WORM media writing, but it must be considered here
since no other instance deletes written documents from the disk buffer. The purge job scans the
disk buffer for documents and deletes those that are already written to optical media. It
may, however, decide to keep even such "old" documents depending on given purging rules; for
example, a rule may dictate that documents have to be retained for a certain number of days.
When a document has been found that is subject to deletion according to the purging rules, the
following steps are performed:
1. It is checked that the document is really present on the optical medium it is said to be on, to
prevent deleting documents that are not really stored anywhere else (e. g. due to a
medium damage that has happened in the meantime).
2. Optionally (depending on purging rules), the document is copied to the cache.
3. The document is deleted from the disk buffer.
4. The storage database DS is informed about the deletion.
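The per-document purge decision (steps 1 to 4, plus the purging rules) can be sketched like
this; the rule and field names are illustrative assumptions:

```python
# Hypothetical sketch of the purge buffer job's decision for one document.

def purge_decision(doc, min_age_days, copy_to_cache, today, medium_has_copy):
    # A purging rule may require documents to stay buffered for some days.
    age_days = (today - doc["buffered_since"]).days
    if age_days < min_age_days:
        return "kept (retention rule)"
    # 1. Verify the document really exists on its storage medium, so the
    #    only remaining copy is never deleted.
    if not medium_has_copy(doc):
        return "kept (not verified on medium)"
    # 2. Optionally copy the document to the cache first.
    if copy_to_cache:
        doc["cached"] = True
    # 3./4. Delete from the disk buffer and record the deletion in DS.
    doc["in_buffer"] = False
    return "purged"
```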
The percentage of space to be left empty is set globally during Archive Server installation.
2% is just a default suggestion of the installation routine since Archive Server ≥ 9.6.
In Archive Server ≤ 9.5, this value was 10%.
If this global setting is unsuitable for your purpose, it can be altered at any time:
Administration Client → Server Configuration:
Administration Server (ADMS)
└ Default Values for Pools
Unix: /usr/ixos-archive/config/setup/ADMS.Setup, parameter
ADMS_WM_PART_PERCENT_FREE
Windows: Registry: HKEY_LOCAL_MACHINE/SOFTWARE/IXOS/IXOS_ARCHIVE/
ADMS/ADMS_WM_PART_PERCENT_FREE
Once altered, the change becomes effective after the next Archive Server restart. However,
the change affects only media that are initialized henceforth. The space reservation for
media already in use can be altered using the dsClient utility; see ESC document for details:
https://esc.ixos.com/0914004631-652
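The effect of the setting can be illustrated with a small calculation, assuming the default
values stated above (2% since 9.6, 10% up to 9.5); the function itself is only a sketch, not
part of any Archive Server tool:

```python
# Illustration of the ADMS_WM_PART_PERCENT_FREE reservation: the given
# percentage of each medium is kept empty.

def usable_capacity(medium_bytes, percent_free=2):
    reserved = medium_bytes * percent_free // 100
    return medium_bytes - reserved
```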
4-8 710
The chart above illustrates the steps involved in writing documents from the disk buffer to an FS pool.
Both HDSK and FS pools write to hard disk as final destination. As opposed to the HDSK pool, however,
the FS pool utilizes a disk buffer and has its own write job. This provides certain advantages, esp.
when the FS pool is used in combination with certain storage systems like e. g. NetApp filers.
The FS pool is available starting with Archive Server ≥ 9.6 and replaces the HDSK pool. HDSK pools can
usually be migrated to FS pools fairly easily, and it is recommended that HDSK pools are only used
for test purposes in the future.
The periodic jobs involved in writing to FS pools are the write job and the purge job. They are
invoked by the job schedule independently from each other, but the corresponding actions on a
specific document are always carried out in the order explained here.
(1) FS write job. The write job copies document files from the disk buffer to the target FS pool. The
storage database DS is informed that the file is now resident on the disk location designated to the
FS pool. The write job is usually scheduled to run rather often, e. g. every 30 minutes.
(2) Purge buffer job. This job is not really part of the writing, but it must be considered here since
no other instance deletes written documents from the disk buffer. The purge job scans the disk
buffer for documents and deletes those that are already written to their final location. It may,
however, decide to keep even such "old" documents depending on given purging rules; for
example, a rule may dictate that documents have to be retained for a certain number of days.
When a document has been found that is subject to deletion according to the purging rules, the
following steps are performed:
1. It is checked that the document is really present at the final location it is said to be, to
prevent deleting documents that are not really stored anywhere else (e. g. due to a
medium damage that has happened in the meantime).
2. Optionally (depending on purging rules), the document is copied to the cache.
3. The document is deleted from the disk buffer.
4. The storage database DS is informed about the deletion.
The chart above illustrates how the Archive Server proceeds to provide a document for
retrieval by a client. Since a document may be resident in one (or more than one at the
same time) of several locations, a reasonable, well-defined order of precedence is obeyed
for accessing the document:
1. First the storage database is queried whether the document is available either in a disk
buffer or in a hard disk pool. In either case, it is taken from there and transmitted to the
client.
2. If the document is not present in either a disk buffer or a hard disk pool (HDSK or
FS), it is checked whether the document is present in the cache. If so, it is taken from
there and transmitted to the client.
3. Only if the document cannot be taken from any hard disk location (cases 1 and 2) is it
read from an optical medium; this is the least attractive situation because reading from
a jukebox is much slower than from hard disk.
Before the document is actually transmitted to the client, it is first copied to the cache
so that subsequent read requests can be fulfilled from there. As a matter of optimization
for very large documents (like print lists), a document is cached in fragments of 64 kB
size; only those parts of the document are read, cached, and transmitted that are
actually requested by the client. As the user browses through the document in the
Archive Windows Viewer, the client automatically requests the desired parts from the
server, step by step.
If the client application requesting the document is not able to load the document
fragment-wise, i. e. it insists on receiving the complete document immediately, then the
cache will receive the whole document as well.
When the cache becomes full, it flushes old documents as needed to make room for
newly requested ones (FIFO or LRU mechanism); unlike for the disk buffer, no
periodic job is needed for cache reorganization.
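The order of precedence described above can be sketched as a small lookup routine; this is a minimal, illustrative model (the dictionaries standing in for the disk buffer, cache, and jukebox are assumptions, not real interfaces).

```python
def retrieve(doc_id, disk_locations, cache, jukebox):
    """Return (data, source), honoring the documented order of precedence."""
    # 1. Disk buffer or hard disk pool (HDSK/FS): fastest, serve directly.
    if doc_id in disk_locations:
        return disk_locations[doc_id], "disk"
    # 2. Read cache: also hard disk speed.
    if doc_id in cache:
        return cache[doc_id], "cache"
    # 3. Optical medium: slowest; copy into the cache on the way out so
    #    that subsequent read requests can be served from the cache.
    data = jukebox[doc_id]
    cache[doc_id] = data
    return data, "jukebox"
```

Note how a first jukebox read populates the cache, so a repeated request for the same document is served from the cache.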
4-10 710
QPENTEXT
■ Document structure
- Documents and components
- Directories and files
■ Cache structure
■ Documents are identified by unique document IDs (also known as the document string)
(Figure: a document and its components)
Files of documents
Every document component is stored as a file in the document's directory (discussed on the
previous page).
Before copying documents to final media, the write job marks components as read-only with
the prefix .rd. This ensures that the files are not changed anymore, e. g. in the disk buffer.
The command line tool dsClient is useful for
retrieving complete information about a certain stored document. Starting from the document
ID - which you must know in advance - you can use the dinfo command to inform yourself
about:
- The logical archive the document is stored in
- Time of archiving and of last modification (i. e. entry of notes or annotations)
- Components belonging to the document
- Current storage location, composed of
  - the path (valid for the whole document, on all storage media)
  - the name of the current storage medium (for each component separately)
To really know the storage location(s) of a document component in the disk buffer, you have
to go one step further: the logical media names given in dinfo's component list have to be
mapped to true storage locations in the file system.
This is done with the volInfo command as illustrated above. The BaseDir attribute of the
named volume together with the document path forms the complete storage path.
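The composition of the full storage path can be sketched in one line; the example values below (base directory, document path, component file name) are made up for illustration, not real dsClient output.

```python
import posixpath

def storage_path(base_dir, doc_path, component_file):
    """Join the volume's BaseDir with the document path and component file,
    as described for the dinfo/volInfo output."""
    return posixpath.join(base_dir, doc_path, component_file)
```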
Calling dsClient, you may enter the user's password directly on the command line, as
illustrated above; however, this is somewhat insecure because the password is then visible
both on the command console display and in the computer's process list. It is also possible
(and more secure) to omit the password on the command line, in which case dsClient will
prompt you for the password upon startup; the typed password is not displayed on the screen
then.
To exit from dsClient, use the end command.
3F28EOCO.002:.L=anno.ixos;1
Normally, the logical name of a document component and the name of the corresponding file
are identical. However, this does not hold for component names longer than 8+3 characters:
to be conformant with the older ISO 9660 filesystem standard for CDs, the DocumentService
uses artificial 8.3 file names for such components.
In such a situation, it is not always obvious which document component corresponds to which
stored file. To retrieve the mapping, use dsClient's cinfo command as illustrated above.
Directories represent
document components
The directory and file structure in the Archive Server's cache is slightly
different from the one used on the storage media:
The three-layer directory structure down to the document directories is the same;
documents retain their specific path even in the cache.
Document components - normally stored as files - are represented as directories in
the cache. The name of such a component directory equals the name of the component
file on the storage media.
Within a component directory, the component's contents are stored as a set of enumerated
files: each of those files contains a chunk of the component file of 64 kB size (except
for the last one, which may be smaller). This structure enables chunk-wise caching of
large documents - only those fragments of a document are cached which are actually
requested by a client. This speeds up caching and prevents huge documents from
flushing many smaller documents from the cache at once.
However, a document is always cached entirely (but still as a set of chunk files) in these
situations:
When it is cached as part of the media write or buffer purge action
When it is requested by a leading application that does not support chunk-wise
document retrieval
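The 64 kB chunk layout described above can be sketched with two helper functions; the four-digit chunk file names are an assumption for illustration (the source only says the files are enumerated).

```python
CHUNK = 64 * 1024  # chunk size in the cache, per the text

def chunk_names(component_size):
    """Enumerated chunk files needed to cache a component of this size."""
    n = max(1, -(-component_size // CHUNK))   # ceiling division
    return [f"{i:04d}" for i in range(n)]

def bytes_in_chunk(component_size, index):
    """Size of one chunk file; only the last one may be smaller than 64 kB."""
    start = index * CHUNK
    return max(0, min(CHUNK, component_size - start))
```

For a 200 kB component this yields three full 64 kB chunks plus one final 8 kB chunk, which is why only the requested fragments of a large document need to be read and cached.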
■ Open Text / IXOS uses a proprietary file system for UDO and WORM media
- No industry standard available
■ Requirements:
- Incremental writing
- Space efficiency
- Robustness against "bad blocks"
- Recoverability from write errors
- Fast read access
(Figure: layout of an IXW volume, subdivided into a fixed division and a variable division)
From the point of view of the file system structure, an IXW medium can be regarded as a sequence of
blocks with fixed size (normally 1, 2, or 4 kB). The illustration above shows how the Archive Server
manages the data written to an IXW medium. The storage space of every IXW medium is subdivided into
these areas:
Free space is left at the beginning of the IXW medium. The exact amount differs, depending on
the used STORM version; it is never more than 1 MB. (For more details, see
https://esc.ixos.com/0984066998-363)
Attention: this is available from version 4.1 on.
The volume header (also called VCB area) contains information about the volume itself, such
as the volume label and the time of initialization. The information is packed into a volume
control block (VCB). Since the status of the WORM may change later (e. g. due to a promotion
of a backup to an original), additional VCBs can be appended later on, each one superseding the
previous one. The space reserved for VCBs is 16 kB.
The structure (or FCB) area contains the file control blocks (FCBs). Every FCB contains status
information about a written data file, including a pointer to the location of the file on the IXW
medium itself.
The last area on the IXW medium contains the application data, i. e. the actual document files.
Between the FCB area and the data area, a certain amount of free space is left for later storing
the finalization data (see later in this chapter).
Whenever a data file is to be written to an IXW medium, these items are actually stored:
The FCB with file attributes and the pointer to the file storage location (block number).
The file itself, appended to already written data at the end of the IXW medium.
An inode is created in the WORM filesystem database. This inode basically mirrors the file
attributes already stored in the corresponding FCB.
While it does not add information to the FCBs, the WORM filesystem database is essential for fast
access to IXW media data. The FCBs are always stored in chronological order of writing and therefore
cannot be searched efficiently.
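The contrast between the two lookup structures can be sketched as follows; both data structures here are illustrative stand-ins, not the real STORM on-disk formats.

```python
def find_by_fcb_scan(fcbs, path):
    """FCBs on the medium: chronological list, so lookup is a linear scan."""
    for fcb in fcbs:
        if fcb["path"] == path:
            return fcb["block"]
    return None

def find_by_inode_cache(inode_cache, path):
    """Inode cache (WORM filesystem database): maps a path straight to the
    block number, like a FAT maps a path to a physical location."""
    return inode_cache.get(path)
```

Both return the same block number; the point is that the inode cache answers in one lookup while the FCB list must be walked entry by entry.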
The role and significance of the WORM file system database (= inode cache) are the same
as those of the well-known FAT (file allocation table) of MS-DOS file systems: it takes a file path as
input and maps it onto the file's physical storage location (necessary to read the file) as well as
other status information.
The WORM file system database is not a database from the software point of view; no RDBMS
is used to store the information. Instead, the data is stored in a variable number of different
files (as illustrated above) that STORM uses as a database in a logical sense.
STORM's central configuration file, <IXOS_ROOT>/config/storm/server.cfg,
determines the number, sizes, and location of the data files. The information is coded in the
ixworm section that looks like this:
ixworm {
    numInodes {100000}          (max. total number of inodes)
    ixwhashdir {
        files {file1 file2}     (list of configured files of this type)
        file1 {
            path {W:/hashdir1}  (path of file1)
            size {25}           (max. size of file1 in MB)
        }
    }
}
The total number of inodes is defined at the time of the configuration of the archive
server. If it is later necessary to increase this value, please contact Archive Server Support.
Finalization = moving structure information to the IXW media
(Figure: VCB, FCB, and data areas during finalization)
When an IXW medium has been filled up with document files, it may become finalized. The chart
above illustrates how this is done:
1. The complete inode data describing the IXW medium's contents is copied from the WORM
filesystem database to the IXW medium itself. The resulting ISO structure is a complete,
searchable structure description of the IXW medium's contents. It is written into the
remaining free space between the FCB and data areas; this space is kept free explicitly
for exactly that purpose.
2. The inodes of the IXW media volume are deleted from the WORM filesystem database.
3. A primary volume descriptor (PVD) is written at the beginning of the WORM volume,
pointing to the block where the ISO structure can be entered for searching.
The ISO structure and the PVD turn the IXW medium into a read-only ISO medium which can be
accessed efficiently without the WORM FS database.
See next page for a discussion of further consequences of finalization.
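The three finalization steps can be modeled as state changes on a toy volume record; this is a hedged sketch, and the dictionary keys and the function name are invented for the example.

```python
def finalize(volume, worm_fs_db):
    """Apply the three finalization steps described above."""
    # 1. Copy the volume's inode data into the reserved gap between the
    #    FCB area and the data area, forming the searchable ISO structure.
    volume["iso_structure"] = dict(worm_fs_db[volume["name"]])
    # 2. Drop the volume's inodes from the WORM filesystem database.
    del worm_fs_db[volume["name"]]
    # 3. Write the primary volume descriptor (PVD) pointing at the block
    #    where the ISO structure starts, turning the medium into a
    #    read-only ISO volume.
    volume["pvd"] = {"iso_structure_block": volume["iso_block"]}
    return volume
```

After the call, the volume carries its own searchable structure and the WORM FS database no longer holds its inodes, which is exactly why a finalized medium can be read without that database.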
For customers with Unix-based Archive Server installations upgraded from an original release
≤ 3.5, there is an important restriction: all WORM partitions that were initialized by IXOS's old
jukebox service ixwd cannot be finalized at all. In order to benefit from finalization, those
WORMs must first be copied to new ones, which can then be finalized.
The reason lies in the missing PVD (primary volume descriptor) field on these WORMs (see
also the picture on slide 6 of this chapter).
The fact that - by default - an Archive Server stops adding directories (= documents) to an
IXW media partition when the 65,000-directory limit for ISO 9660 media is reached may lead
to significant space waste on modern, large IXW media - especially where mainly small
documents are stored. You should therefore consider switching this limit off; there is no
negative impact concerning the Archive Server's own use of media exceeding this limit.
The option for obeying the ISO directory number limit can be maintained on the Server
Configuration page of the Archive Server Administration, branch:
Storage Manager (STORM)
→ Configuration STORM (file server.cfg)
→ WORM Filesystem, entry Accept also non-ISO9660 format.
■ Logical archive fundamentals
- Definition
- Pros and cons for multiple logical archives
For maintaining background forms using the Livelink for SAP or Livelink for UAC add-on
Forms Management, a dedicated logical hard disk archive is strongly recommended.
Furthermore, it is recommended to use separate logical archives for document archiving and
for data archiving purposes.
Chapter guide
■ Configuration
- Archives
- Pools
- Document processing options
Each logical archive must be defined on both the leading application and the Archive Server
with the same name ('A1' is just an example in the chart above); this is the foundation for
the storage dynamics controlled by the leading application and performed by the Archive
Server.
On the Archive Server, a logical archive normally has a single media pool; in practice, it is
therefore not necessary to strictly distinguish between the archive and its pool. However,
certain exceptions exist where a logical archive may have more than one pool:
If certain components of documents - specifically comments added by users, i. e.
notes and annotations - shall be stored on different media than the original
documents. The pools must then have different application type properties.
In practice, the most useful combination is:
- Storing original documents and notes on optical media
- Storing annotations on hard disks
This setup saves space on optical media. Since annotations are alterable at any time, it
is normally not necessary to store them on read-only media.
If a media migration shall be performed for that logical archive. One pool must then
have application type "Migration".
Using the pooling feature in former versions of Email Archiving for MS Exchange/ Lotus
Notes.
■ Libraries hide storage-specific settings of the WORM feature and retention periods.
■ The library name corresponds to the volume description file.
■ DS and STORM use the same libraries.
Writing to hard disk using libhdsk supports compliance features (unlike HDSK pools).
Depending on the storage system, documents are either written as single files or as ISO images.
ISO images are written using "virtual jukeboxes" in "hard-disk write-once mode" (HD-WO).
These are handled similarly to ISO pools.
Single files are usually written using hard-disk drives in write-through mode, directly from the DS.
Chapter guide
■ Configuration
- Archives
- Pools
- Document processing options
Creating a logical archive in the Archive Server Administration is fairly easy: Invoke the
Create Archive dialog as illustrated above and supply the logical archive's name and -
optionally - a description.
Reminder: For a logical archive for SAP, always use two-letter, uppercase, alphanumeric
names (a restriction of the SAP ArchiveLink interface).
Having created a logical archive (as shown on the previous page), the next configuration step
is creating a media pool. For this, right-click on the logical archive name in the Archive Server
Administration (as illustrated above) and choose Create Pool from the context menu. You
will then be guided through several dialogs where you have to enter the following pool
attributes:
1. The pool name (which does not need to be unique among the logical archives), the pool
type (= type of media that the pool shall use: ISO, IXW, FS, VI or HDSK), and the
application type, which here means "document component type" (i. e. notes,
annotations, and OLE annotations). If you do not intend to separate components of
those types from other components onto different media, choose "Default" - i. e. archive
all document components together into this pool.
Step 2 (illustrated above) queries details of how data shall be transferred from the disk buffer to optical
media. For a standard ISO pool configuration, specify the following items:
Backup: Do not select.
Allowed Media Type: Choose here the type of optical disks that you intend to use for this pool: CD-
R, DVD-R, or WORM.
Partition Name Pattern: Determines how newly burned disks will be labeled. "$(...)" are
placeholders for changeable values; $(SEQ) (= sequential number) must always be present.
You can check the effect of your pattern with the Test Pattern button.
Number of Partitions: For each ISO volume, that many identical pieces are created. For test data,
choose '1'; for production use, '2' (original plus backup) should be sufficient.
Minimum amount ...: If less than that amount of data is queued in this pool for writing, no disk will
be written; more archived data is waited for instead.
Original jukebox: Select the jukebox where optical disks for this pool shall be burned.
For burning backup disks in a separate jukebox, some fields have to be filled in differently:
Delete from Diskbuffer: Do not select for production use; the disk buffer must be used as a
temporary backup in this case.
Backup: Select this option.
Number of Drives: Tells the backup job - which copies original disks to their backups - how many
jukebox drives it is allowed to occupy simultaneously. This may speed up the backup process,
provided that enough drives are available. Minimum is '1'.
Number of Partitions: Choose '1' (only the original).
Number of Backups: Normally '1'.
Backup jukebox: Select the jukebox where the backup disks shall be burned.
3. Writing data from the disk buffer to optical disks is a periodic job that is to be scheduled
here. First assign the job a name (the illustration shows a convention), then specify the
job period. The illustration shows a reasonable choice for an ISO pool: once per night.
Note: This job will later be visible and maintainable in the Archive Server
Administration's Jobs tab.
4. For optical media pools, a disk buffer has to be chosen for collecting documents prior to
writing them to optical disks. See chapter Disk Buffer Configuration for reasonable
configurations.
■ Example for storage systems writing ISO images in HD-WO mode (i. e. EMC Centera, HDS, NetApp)
■ Further settings necessary
- See installation guides
- Advisable to involve Open Text Consulting for implementation
Note:
Backup of documents stored on hard disk based storage systems (e. g. EMC Centera, HDS)
can be handled by the storage system itself and has to be configured appropriately.
In such a case you have to set the value for the number of backups to zero and leave the entry
for the backup jukebox vacant.
Start in Archive
Server Admin.,
logical archives
section
Creating and setting up an IXW pool is different from an ISO pool; this corresponds to the
differences in media writing techniques (ISO: one-time, synchronous backup; IXW: incremental,
asynchronous backup). These are the attribute differences:
1. The pool type must be "Write Incremental (IXW)".
2. The IXW write configuration does not refer to a minimum amount of data to be written. Only
the following parameters are to be specified:
Backup: If selected, IXW volumes of this pool will be backed up automatically;
always select for pools containing production data.
(As of IXOS-ARCHIVE 4.2, there is an additional option in the WORM write configuration:
"Delete from disk buffer after copy". Never select this option for a pool for production data! You
always need the disk buffer as a temporary backup between writing a document to the original
WORM volume and duplicating it to the backup WORM. For test data, however, this is not
necessary.)
Auto Initialization is a recommended option. This way, new WORMs don't need to be initialized
and assigned manually. Auto Initialize also takes care of initializing the backup WORM.
3. Since there is no need to wait for a certain amount of archived data to fill a volume
completely, IXW media writing can be scheduled much more often than ISO media
writing. The shortest period that can be specified is every five minutes.
4. Like an ISO pool, an IXW pool needs a disk buffer for collecting documents prior to
writing them to optical disks. See chapter Disk Buffer Configuration for reasonable
configurations.
Start in Archive
Server Admin.,
logical archives
section
Creating and setting up an FS (single file) pool is different from an ISO pool; this corresponds
to the differences in media writing techniques (ISO: one-time, synchronous backup; FS:
incremental, asynchronous backup). These are the attribute differences:
1. The pool type must be "Single File (FS)".
2. The HD write configuration does not refer to a minimum amount of data to be written.
3. Since there is no need to wait for a certain amount of archived data to fill a volume
completely, HD writing can be scheduled much more often than ISO media writing. The
shortest period that can be specified is every five minutes.
4. Like an ISO pool, a Single File (FS) pool needs a disk buffer for collecting documents
prior to writing them to optical disks. See chapter Disk Buffer Configuration for
reasonable configurations.
Preparation step:
- Provide a hard disk partition on operating system level
- Do not make the partition too large
  * Absolute limit: 1 TB (for Archive Server ≤ 9.5)
  * If more total space is required: use several smaller partitions instead of a single large one
  * Additional partitions can be assigned later
To complete the picture of media pool setup, here is how to create and set up a hard disk pool (e. g. for
testing purposes or for overlay forms):
As a preparation step, you first have to provide a hard disk partition on operating system level.
On a Unix-based Archive Server, make sure the root directory of the file system is owned by the
user/group that the Archive Server is operated as (e. g. ixosadm/ixossys) and has
permissions 770.
1. The pool type must be "Write Thru (HDSK)".
2. No job for writing to optical disks and no disk buffer are involved. A hard disk pool directly and
finally stores documents on hard disk volume(s); therefore, you have to assign the prepared hard
disk partition to the pool directly.
Specify the following:
Partition name: A (preferably meaningful) logical name for this volume; must be unique
throughout all volume names (including IXW media) of this Archive Server. The Archive
Server will henceforth maintain the volume by this name.
Mount path: The root directory of the partition's file system. On Windows NT, this should be
a drive specification (including a backslash); on Unix platforms, it is the directory where
the partition is mounted; on Windows 2000, it can be either of both, depending on how
the partition is hooked into the file system.
If, on a Windows-based Archive Server, you want to use a network share instead of a
local hard disk drive, see ESC article https://esc.ixos.com/1072860397-483
about how to do that exactly.
The recommendation not to make the hard disk partition too large is due to the fact that some
administrative actions (like consistency checks) require examining the whole partition contents. The
more documents are stored there, the longer such a scan will take. If, moreover, a partition is full of very
small documents, the total number of files is very high; this may lead to unacceptably long execution
times of those actions. To prevent this type of problem, rather use multiple partitions of moderate size
instead of a single large partition. If you store rather large documents only (like SAP R/3 data archiving
files), the partition may be made larger as well; where mainly small documents are stored, the partition
size should be smaller (using BLOBs, however, reduces the number of stored files of small documents).
If you choose to divide the total storage space of the pool into more than one partition, you have to
attach all but the first partition to the pool after the pool has been created; see chapter Hard Disk
Resource Maintenance for more information.
Since FS (single file) pools are available, HDSK (write-through) pools are only recommended for
test purposes. Whenever possible, use FS pools instead.
All settings discussed on this and the following pages are to be made per logical archive, i. e.
you may configure the Archive Server to treat documents in separate logical archives
differently.
ArchiSig Timestamps and Deferred Archiving are features introduced with Archive Server 9.6.
See the following pages for more details.
7-22 710
QPENTEXT
7-24 710
QPENTEXT
OPEN TEXT
■ Sizing
■ Stores documents safely until they are written to final storage
Each disk buffer possesses a periodic purge buffer job that, when invoked, searches the buffer
for "old documents" (i. e. documents that have already been written to optical disk) and
removes them according to certain criteria:
• If a percentage of free buffer space for newly archived documents ("Required avail.
space") is specified, the job removes so many documents (oldest first) that the required
space amount is freed. If, however, the required space amount cannot be freed because
the buffer is populated with too many not-yet-written documents, the job frees as much
space as possible by removing old documents.
• If a retention period for old documents ("Clear archived documents older than ... ") is
specified, documents older than that period are removed - even if the claimed
percentage of space is already free.
Moreover, you can choose to copy a document to the read cache immediately before removing
it from the disk buffer ("Cache before purging"). This way, the document's fast availability is
continued by the cache after its removal from the disk buffer.
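The two purging criteria above can be sketched as a selection routine; sizes are modeled as percentages of the buffer and all names are illustrative, not Archive Server internals.

```python
def select_for_purge(written_docs, free_pct, required_pct, max_age_days):
    """Pick already-written documents to remove, oldest first:
    - free space until the required percentage is reached, and
    - additionally drop anything older than the retention period."""
    to_purge, freed_pct = [], 0.0
    for doc in sorted(written_docs, key=lambda d: d["age_days"], reverse=True):
        need_space = free_pct + freed_pct < required_pct
        too_old = max_age_days is not None and doc["age_days"] > max_age_days
        if need_space or too_old:
            to_purge.append(doc["id"])
            freed_pct += doc["size_pct"]
    return to_purge
```

Note that a document past the retention period is selected even when the required free-space percentage has already been reached, matching the "even if the claimed percentage of space is already free" rule.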
The buffer purge job causes considerable hard disk, CPU, and storage system load on the
Archive Server:
1. It searches the disk buffer volumes for documents matching the purging selection
criteria.
2. For each selected document, it checks in the storage database and on the optical target
medium that the document is really stored there (in order not to lose documents in case
of an inconsistency between database and media).
3. If everything is okay, it deletes the document's files on the disk buffer volume.
For this reason, it should preferably be scheduled to run when there is not much other activity
on the Archive Server, e. g. during the night and not simultaneously with the local backup job
(making IXW media backups).
Chapter guide
■ Disk buffer fundamentals
■ Sizing
■ Configuration to prevent "too early" purging:
- Purge only after a "safety period"
  * E. g. a week
  * Grants enough time to react if the WORM backup fails
- Make the purge job respect the WORM backup (buffer properties)
  * Allows purging as soon as one backup is made, even if multiple (local or remote)
    IXW media backups are configured
The explanations given above refer to IXW media. Using ISO media (CD, DVD, HD-WO,
WORM), the disk buffer does not normally play the role of a temporary backup since the ISO
media backup is written immediately along with the original ("synchronous" backup); the buffer
purge configuration has no influence on data safety here.
Nevertheless: if an ISO media backup shall be made
• in a second jukebox or
• at a later point of time,
the step sequence of writing to the original, backing up, and purging from the disk buffer is exactly the
same as for IXW media ("asynchronous" backup). In these cases, disk buffer purging should
follow the same rules as for IXW media, as explained above.
■ In disk buffer
- Documents are kept in the disk buffer for a longer period,
  purged later by the buffer purge job
  See next slide ...
■ In cache
- Documents are moved from the disk buffer to the cache as soon as possible
  (either by the media write job or by the buffer purge job)
■ Not at all
- Documents are deleted from the disk buffer as soon as possible
  (either by the media write job or by the buffer purge job)
- For storage scenarios that do not require fast retrievability of "fresh" documents
The Archive Server offers the above-mentioned methods to treat documents after they have
been written to their final storage location on optical media.
The decision whether or not to cache documents at all should be based on how the documents
are used: Immediately after they have been archived, will they be retrieved frequently by the
users? This will be the case, for example, for the following archiving scenarios:
Early archiving (= storing for later entry) with workflow in SAP
Late indexing in UniversalArchive (IXOS-eCONtext for Applications)
Scenarios where documents are archived after users have finished working with them do not
strictly need this type of caching. This applies, for example, for the following archiving
scenarios:
Late archiving with barcode in SAP
All kinds of data archiving (from SAP, MS Exchange, Lotus Notes)
If you have decided to cache documents, there is still the choice where to keep the documents
for that purpose: either in the disk buffer or in the cache. See the next page for a discussion
about the pros and cons of both possibilities.
The table above reveals the relevant properties of caching documents either in the disk buffer
or in the cache.
As a conclusion, caching in the disk buffer is the more stable but also more expensive solution;
caching in the cache is cheaper but has drawbacks in certain situations.
Chapter guide
■ Configuration examples
■ Sizing
■ IXW pool, no caching
■ IXW pool, caching in cache
- Purging by both
  * Compromise between availability, retention, and purging workload
The slide above illustrates examples how the rules for disk buffer purging in various situations
- discussed in detail on the previous pages - can be realized in terms of buffer purging
options.
Some of these example configurations - particularly those referring to IXW media on an
Archive Server ≤ 5.0 - contain an "available space" requirement of 0%:
This ensures that documents younger than 7 days are never deleted; this is reasonable
since this constraint is established for the sake of data loss protection.
On the other hand, in the extreme case of the disk buffer being filled up with "younger"
documents, the purge buffer job will not delete anything; i. e. with this setting, the disk
buffer may possibly become completely full if it is too small to hold 7 days' archiving
data. Therefore, using the 0% setting, it is the administrator's duty to keep an eye on
the disk buffer filling rate (e. g. by means of the Archive Server Monitor) and to enlarge
its disk space if it becomes too small.
Using IXW media on an Archive Server ≥ 5.5, you should always enable the "respect
WORM backup" property for buffer purging; see page Disk buffer as temporary WORM backup
(earlier in this chapter) for details.
Chapter guide
■ Sizing
- Add the amount of data typically archived between two ISO write job invocations
  * In case the last write job has missed just a small amount of data for burning a disk
- Sizing example:
Certain storage systems support writing ISO images to a "virtual jukebox" in write-once mode
(HD-WO).
Consider limits of maximum file size supported by storage system.
In case of media writing problems, no documents can be removed from the disk buffer; as a
consequence, documents queue up in the disk buffer as long as media writing is interrupted.
As soon as the disk buffer is filled up, archiving new documents is no longer possible.
In order to bridge the media writing downtime, it is reasonable to equip the disk buffer with a
considerable space reserve. The bigger this reserve, the longer archiving can be continued.
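The sizing logic above reduces to simple arithmetic: the buffer must hold the safety period's worth of data, one write-job interval's worth, plus a reserve for media writing downtime. The formula and the example figures below are an illustrative sketch, not an official sizing rule.

```python
def buffer_size_gb(daily_gb, safety_days, write_interval_days, downtime_days):
    """Rough disk buffer size: daily archiving volume times the number of
    days' worth of data the buffer must be able to hold."""
    return daily_gb * (safety_days + write_interval_days + downtime_days)
```

For example, 2 GB archived per day with a 7-day safety period, a 1-day write interval, and a 5-day downtime reserve suggests a buffer of 26 GB.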
■ Configuration examples
■ Sizing
To create a disk buffer, you first have to provide a hard disk partition on operating system
level; see the previous slides about how large this partition should be. On a Unix-based
Archive Server, make sure the root directory of the file system is owned by the user/group that
the Archive Server is operated as (e. g. ixosadm/ixossys) and has permissions 770.
The recommendation not to make the hard disk partition too large is due to the fact that
some administrative actions (like disk buffer purging or consistency checks) require examining
the whole partition contents. The more documents are stored there, the longer such a scan will
take. If, moreover, a partition is full of very small documents, the total number of files is very
high; this may lead to unacceptably long execution times of those actions. To prevent this type
of problem, rather use multiple partitions of moderate size instead of a single large partition. If
you store rather large documents only (like SAP data archiving files), the partition may be
made larger as well; where mainly small documents are stored, the partition size should be
smaller (using BLOBs, however, reduces the number of stored files of small documents).
If you choose to divide the total buffer space into more than one partition, you have to attach
all but the first partition to the disk buffer after the disk buffer has been created; see chapter
Hard Disk Resource Maintenance for more information.
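The partition preparation described above can be sketched as follows (assuming a Unix-style system; the BUFFER1 directory name is illustrative, and a real setup would use the mount point of a dedicated file system and additionally transfer ownership to ixosadm/ixossys):

```python
import os
import stat
import tempfile

# Hypothetical mount point of the buffer partition; in production this is
# the root directory of a dedicated file system, e.g. /var/buffers/BUFFER1.
mount_path = os.path.join(tempfile.mkdtemp(), "BUFFER1")

os.makedirs(mount_path, exist_ok=True)
os.chmod(mount_path, 0o770)   # rwx for owner and group only, as recommended
# A real setup (run as root) would also transfer ownership, e.g.:
#   shutil.chown(mount_path, user="ixosadm", group="ixossys")

mode = stat.S_IMODE(os.stat(mount_path).st_mode)
print(oct(mode))              # 0o770
```

The explicit chmod is deliberate: directory modes created by makedirs are filtered by the process umask, so the 770 permissions must be set afterwards.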
Once you have prepared a hard disk partition for exclusive use by the disk buffer (see previous
page), you can create the disk buffer by invoking the Archive Server Administration's Create
Buffer function as illustrated above. You will then be guided through a sequence of dialogs
where you make the following entries:
① Specify the disk buffer's name (unique among all disk buffer names on this Archive
   Server) and the Buffer purge configuration attributes as discussed earlier in this
   chapter.
② Assign the prepared hard disk partition to the disk buffer by specifying the following:
   Partition name: An (Archive Server-internal) logical name for the partition; must be
   unique among all volume names (including IXW media) of this server. The
   Archive Server will henceforth maintain the volume by this name.
   Mount path: The root directory of the partition's file system. On Unix platforms, it is
   the directory where the partition is mounted; on Windows, it may be a mounted
   partition or a drive specification.
   If, on a Windows-based Archive Server, you want to use a network share instead
   of a local hard disk drive, see ESC article
   https://esc.ixos.com/1072860397-483 for how to do that exactly.
③ Purging the buffer is a periodic job that is scheduled here. First assign a name to the
   job (the illustration shows a convention), then specify the job period. The illustration
   shows a reasonable choice: once per night.
Notes:
 - This job will later be visible and maintainable in the Archive Server
   Administration's Jobs tab.
 - Using IXW media on an Archive Server, you should edit the job to make buffer
   purging dependent on WORM backup; see page Disk buffer as temporary
   WORM backup (earlier in this chapter) for how to do this.
8-16 710
A disk buffer must be chosen for each ISO or IXW pool when the pool is created. To
assign a different disk buffer some time afterwards, invoke the Edit Pool Configuration
dialog as illustrated above and select one of the defined disk buffers from the list.
Whenever you assign a disk buffer to a pool, keep in mind that the settings for the media
write job and the settings for the buffer purge job influence each other; a reasonable
writing and caching setup must involve both setting groups. Refer to the Purge configuration
examples pages (earlier in this chapter) for more information.
Archive Server
The term "disk buffer" should not be confused with a hard disk partition for buffering data.
Instead, a disk buffer is a logical construct of the Archive Server - with certain properties -
that one or more hard disk partitions are assigned to.
1. If you have several logical archives with ISO or IXW pools, you may use a single disk
   buffer ("MyBuffer1" in the above illustration) for them all.
2. To enlarge disk buffer capacity, an additional hard disk partition ("Disk2") may be
   assigned to that buffer. Thus you normally need only one disk buffer, even when
   employing multiple hard disk volumes for buffering.
3. Alternatively, you may use a second disk buffer ("MyBuffer2") with its attached hard disk
   partition ("Disk3").
Using multiple disk buffers is recommended only for situations where there is a real
requirement for this, e.g. if different disk buffer configurations have to be used at the same
time for different kinds of archived data. Refer to page Purge configuration examples (1)
(earlier in this chapter) for more information.
However, as the number of logical archives increases, you may run short of drive
letters to use on Windows NT.
Note: Due to Archive Server processing internals, it is preferable to enlarge disk
buffer space by extending the assigned hard disk partition (wherever possible) rather than
simply attaching a second partition.
Attention: Assigning the same hard disk partition to different disk buffers, or to a disk buffer
and a cache at the same time, leads to severe problems on the Archive Server!
8-18 710
" Create disk buffer
" Assign to media pool
" Archive sample document
Chapter Overview
■ Caching
■ Compression
■ BLOBs
■ Single instance archiving
■ Encryption
■ ArchiSig Timestamps
■ Deferred Archiving
■ Retention Settings
This chapter introduces further possibilities to process documents, in addition to the "main"
document flow aspects discussed previously.
These document processing functions are not generally active on an Archive Server; instead,
they can be switched on or off individually for each logical archive (as illustrated above).
In addition to their activation, some of the functions have further configuration parameters;
these can only be set globally on the Archive Server. Details are given on the following pages.
9-2 710
When setting the cache option at logical archive level, be aware that this applies only to read
caching. That means that when documents are requested by an application for display,
the displayed component(s) will be cached in the appropriate cache partition. When users
request the component again, it can be retrieved quickly from the cache partition.
This is different from the caching setting within the Disk buffer purge job. There, a document
can be moved from the disk buffer to the cache partition after it has been written to media.
The list of file formats to be compressed - mentioned above - can be maintained on the
Server Configuration page of the Archive Server Administration, branch Document Service →
Component settings → Compression.
In the file system, compressed files examined by the write jobs are marked with the prefix
rd.<data file-name>.
9-4 710
The sizing parameters for BLOBs - mentioned above - can be maintained on the Server
Configuration page of the Archive Server Administration, branch Document Service →
Component settings → Blobs.
A useful command line tool for working with BLOBs is "ixblob". You find this tool in the folder
"<ixos-root>\bin".
ixblob -p blob
ixblob -t [vms] blob [file ...]
(Chart annotation: components in BLOBs cannot be found here; if the whole document is
stored in a BLOB, this directory does not even exist.)
If BLOBs are activated, document components can be "swallowed" by a BLOB, i.e. they are
not stored on storage media as they would be without BLOBs. The chart above explains how
to trace a document component's storage location in this situation:
In the dinfo output of dsClient (see also chapter Document Structure on Storage Media),
the fact that a document component is buried in a BLOB can be recognized by the volume
attribute value BLOB... You can then proceed by applying dinfo to the BLOB itself, as
illustrated above; this leads you to where the BLOB is actually stored on your server.
Note: Querying the DocumentService for a BLOB this way only works if your dsClient
session is running on the Archive Server itself!
To close open BLOBs, stop and restart the spawner process "dsaux", e.g. type on the
command line:
spawncmd stop dsaux
spawncmd start dsaux
9-6 710
Since the SHA1 hash value - used as a fingerprint for an archived file - is 160 bits
wide, the probability of erroneously identifying two different files as identical is 2^-160.
A reference count is maintained for a SIA target: the target component is deleted only
after the last reference to it has been deleted.
SIA sources (= references to the really stored files) are maintained in the storage
database. However, in order to make the "normal" database import/export mechanics of
the Archive Server work for SIA, they are also stored on the actual storage media as
stub files:
 - With zero length (they do not allocate storage space)
 - Always in BLOBs, even if BLOBs are not activated explicitly in the given context
   (to avoid storage overhead for the empty files)
 - Never accessed for reading, except during a database export/import
Certain files can be excluded from the SIA mechanism - i.e. they will always be
stored individually - if there is no probability that they are identical; this depends on the
application context. For example, every e-mail archived from MS Exchange will
contain a component file called REFERENCES; such a file will never match any other
one, so there is no need to apply SIA to it.
On the Archive Server, excluding files from SIA can be configured:
 - According to MIME type (for storing via HTTP)
 - According to IXOS component type (for storing via RPC)
 - According to component (= file) name
   * Default: INFO.TXT, REFERENCES
     (these files are never identical in MS Exchange)
This configuration can be maintained on the Server Configuration page of the Archive
Server Administration, branch Document Service → Component settings → Single
Instance Archiving.
The storage location of the component file (the SIA "target") is displayed as the first part of
the pathName attribute.
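The fingerprinting and reference counting described above can be illustrated with a small Python model (a toy sketch only; the class and method names are invented and do not reflect the actual DocumentService implementation):

```python
import hashlib

class SingleInstanceStore:
    """Toy model of single-instance archiving (SIA): identical component
    files are stored once, identified by their SHA-1 fingerprint, and a
    reference count tracks how many documents point at the target."""

    def __init__(self, excluded_names=("INFO.TXT", "REFERENCES")):
        self.targets = {}             # sha1 hex digest -> (content, refcount)
        self.excluded = set(excluded_names)

    def store(self, name, content):
        if name in self.excluded:     # always stored individually, no SIA
            return None
        digest = hashlib.sha1(content).hexdigest()
        data, refs = self.targets.get(digest, (content, 0))
        self.targets[digest] = (data, refs + 1)
        return digest

    def delete(self, digest):
        data, refs = self.targets[digest]
        if refs > 1:
            self.targets[digest] = (data, refs - 1)
        else:
            del self.targets[digest]  # last reference gone: drop the target

store = SingleInstanceStore()
a = store.store("mail1.eml", b"same body")
b = store.store("mail2.eml", b"same body")
print(a == b, len(store.targets))     # True 1  (one physical copy, 2 refs)
store.delete(a)
print(len(store.targets))             # 1  (still referenced by mail2)
store.delete(b)
print(len(store.targets))             # 0
```

Two components with identical content yield the same SHA-1 digest and therefore share one stored target; the target disappears only when the last reference is deleted.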
9-8 710
To export or import the encryption key(s), use the command line tool "recIO". You find this tool
in the folder "<archive-root>\bin".
■ Details:
 - Archive Server Time Stamp Service - Administration Guide
 - Archive Server - Administration Guide
9-10 710
Timestamping System
(Chart: document → hash value → signature)
The chart above illustrates the steps involved in digitally signing a stored document:
• As soon as the document has entered the DocumentService (i.e. it is stored in the disk
  buffer or in an HDSK pool), a hash value is calculated for the document.
• The DocumentService sends the hash value to the timestamp service. This service may
  be the local timestamp server itself or an external timestamp service provider.
• The timestamp service forms a triple from the document's hash value, the timestamp of
  the current time, and additional attributes, and creates a hash value for this triple.
• The timestamp service creates a digital signature from the triple's hash value, using its
  private key, and adds this signature to the triple.
• The complete quadruple - including the signature - is sent back to the
  DocumentService and stored as an additional component of the document.
(Chart: document → hash value; hash value + attributes + signature form the quadruple;
the signature is checked against the hash value using the public key)
The chart above illustrates the steps involved in verifying a digitally signed document. The
whole procedure starts as soon as a user, currently viewing the document, requests
verification in the Archive Windows Viewer:
1. The Archive Windows Viewer calculates the hash value for the displayed document.
2. It retrieves the "signature quadruple" from the Archive Server and extracts the originally
   stored hash value.
3. It compares the calculated and the stored hash value: if they differ, the document (or the
   signature quadruple) has been manipulated in the meantime.
4. Next, the validity of the signature quadruple must be verified. For this, the client
calculates the hash value for the document's stored hash value, the timestamp, and the
additional attributes.
5. The included signature is decrypted using the timestamp service's public key, resulting
in the triple's original hash value.
6. The two hash values of the triple - original and current - are compared. If they differ,
the quadruple has been manipulated in the meantime; otherwise the document
authenticity is verified.
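The six verification steps can be sketched in Python as follows. This is a schematic model only: the real timestamp service signs the triple's hash with its private RSA key and the viewer verifies with the public key, whereas this sketch stands in an HMAC with a shared secret for the signature operation, and all values are invented:

```python
import hashlib
import hmac

SECRET = b"timestamp-service-key"   # stand-in for the service's private key;
                                    # the real service signs with an RSA key

def sha(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

# --- signing (as on the previous page) ---
document = b"invoice 4711"
doc_hash = sha(document)                             # hash of the document
triple = doc_hash + "|2008-07-01T12:00:00Z|attrs"    # hash + timestamp + attributes
signature = hmac.new(SECRET, sha(triple.encode()).encode(), "sha1").hexdigest()
quadruple = (triple, signature)                      # stored with the document

# --- verification (steps 1-6 above) ---
def verify(document: bytes, quadruple) -> bool:
    triple, signature = quadruple
    stored_hash = triple.split("|")[0]
    if sha(document) != stored_hash:                 # steps 1-3: compare hashes
        return False                                 # document was manipulated
    expected = hmac.new(SECRET, sha(triple.encode()).encode(), "sha1").hexdigest()
    return hmac.compare_digest(signature, expected)  # steps 4-6: check signature
```

Verifying the unchanged document succeeds, while any tampered content fails at step 3 because its hash no longer matches the originally stored one.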
9-12 710
■ Conclusion:
 - Electronically signed or timestamped documents can lose their evidential value
   in the course of time!
   (e.g. in Germany, as defined by the Bundesanzeiger)
■ Solution:
 - Use ArchiSig timestamps to renew electronic signatures & timestamps
   * Even if the original signature would no longer be valid, renewal with ArchiSig can
     "refresh" the validity of the signature
   * Not only archive timestamps, but also the validity of personal signatures on
     documents can be prolonged this way
Document Processing Options Slide 13
Timestamps can become insecure after a certain time (e.g. 5 years) for the following
reasons:
• Key length
• Algorithm
• Public key method
• Certificate becomes invalid
9-14 710
■ Strict
 - If the timestamp is not valid or does not exist, the document is not delivered.
■ Relaxed
 - If the timestamp is not valid, the administrator is informed and the document is
   delivered.
■ None
 - No timestamp verification.
In a Relaxed timestamp verification scenario, you can set up notifications to be informed about
invalid timestamp requests.
Scenario:
■ Retention period cannot be set by leading application during archiving
■ Different document types with different retention periods
 - in one logical archive
 - Retention to be decided later
How it works:
■ Documents stored in Disk Buffer
■ not written to storage subsystem (yet)
■ When retention known:
 - Using http API,
   * set retention period &
   * create DS job entry
(Chart: the leading application stores the document w/ retention=EVENT & deferred
archiving; later it sets the retention period & plans for a DS job (move to storage))
9-16 710
When archiving with the deferred archiving option, documents are stored in the disk buffer and
"parked" in the pool _DELAYED_. Once the appropriate command is sent from the leading
application, documents are moved to the correct pool for further processing to the storage
system.
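The deferred archiving flow can be modelled roughly as follows (a toy sketch; the class and method names are invented and do not correspond to the actual http API):

```python
from datetime import date, timedelta

class Archive:
    """Toy model of deferred archiving: documents are parked in the disk
    buffer (pool _DELAYED_) with retention=EVENT; once the leading
    application sets the retention period, a job entry is created to move
    the document to the real pool."""

    def __init__(self):
        self.docs = {}    # doc_id -> dict with pool / retention info
        self.jobs = []    # pending "move to storage" job entries

    def create_doc(self, doc_id):
        # stored in the disk buffer only; retention decided later
        self.docs[doc_id] = {"pool": "_DELAYED_", "retention": "EVENT"}

    def set_retention(self, doc_id, days, target_pool):
        doc = self.docs[doc_id]
        doc["retention"] = date.today() + timedelta(days=days)
        doc["pool"] = target_pool
        self.jobs.append(("move_to_storage", doc_id))  # DS job entry

arch = Archive()
arch.create_doc("doc1")
print(arch.docs["doc1"]["pool"])                  # _DELAYED_
arch.set_retention("doc1", 3650, "ISO_POOL")
print(arch.docs["doc1"]["pool"], len(arch.jobs))  # ISO_POOL 1
```

The key point mirrored here is the two-step nature: creation parks the document with an event-based retention, and only the later retention call assigns the real pool and schedules the move to storage.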
9-18 710
Audit Trails are a new feature introduced with Archive Server 9.6.
■ DLM & retention settings should comply with company policies
 - From an administrative perspective, only make settings that are coordinated
   with the business requirements and policies within the company
It is important to understand the intention of DLM, since DLM is often mixed up with
HSM (Hierarchical Storage Management):
The idea of HSM is to have content quickly available when it is new or often used. In this case
the content is stored on a local disk; otherwise it is displaced to a slow medium like tape,
where storage is much cheaper than on hard disks. Some HSM implementations provide
multi-level displacement. But: since HSM mostly simulates a simple file system for an
application, it is always the HSM server that decides about displacement of content by its own
heuristics.
The intention of DLM is that the application at least classifies the content to determine the
lifecycle. When dealing with business documents, only an application knows about the
availability needs of the content or the legal guidelines. To this end, Archive Server (starting
with version 9.6) provides these mechanisms for DLM: deferred archiving (which could be
treated as a first step in the direction of "application-controlled HSM"), retention handling,
storage reorganization (which is automatically needed when storing content with retention in
container files; to be covered later), and audit trails, an important monitoring instrument in
regulated scenarios.
10-2 710
Livelink Enterprise Server ("Livelink") can provide a Records Management module. It is
intended to support & trigger retention handling of the Archive Server from the Livelink
Enterprise Server.
DLM & retention settings should comply with company policies. From an administrative
perspective, only make settings that are coordinated with the business requirements and
policies within the company.
10-4 710
■ Retention Handling
 - is provided by Archive Server
 - is triggered by leading application
■ Leading application:
 - sets retention period
 - sets retention event
 - purges or destroys content
■ Archive Server
 - assures that content can't be deleted within retention period
 - does not automatically purge content
 - all actions are monitored in audit trails
(Chart: createDoc, retention period=365 days)
Notes:
 - Leading application is using the http API.
 - Retention period is specified while the document is being created.
 - Retention date = creation date + retention period.
 - Protection: no delete / modify.
 - No dedicated action is taken at the time of expiration.
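The retention date arithmetic in the notes above can be illustrated directly (the dates are example values):

```python
from datetime import date, timedelta

# Retention date = creation date + retention period (set at createDoc time)
creation_date = date(2008, 7, 1)          # example creation date
retention_period = timedelta(days=365)    # as in the createDoc call above
retention_date = creation_date + retention_period
print(retention_date)                     # 2009-07-01
```

The retention date is fixed once at creation; nothing in the model changes it at expiration time, matching the note that no dedicated action is taken when the period expires.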
10-6 710
Retention period:
■ Parameter for creation of a document or component
 - extension of the create call
QPENTEXT
Retention date:
■ attribute of the document
■ set during the creation of the document (or the creation of the first component):
  retention date = creation date + retention period
10-8 710
Protected documents
Retention protection:
When using Archive Server 9.6.1 with installed patch EA096-078 or EA096-055, it is not
possible to delete a document while it has retention protection. Even administrators using
command line tools are no longer allowed to delete such documents (independent of
compliance mode).
However, earlier and later Archive Server versions (e.g. version 9.6.1 with Patch 087 or
version 9.7.1) return to the behaviour described in the slide.
■ Example: Invoices need to be stored for 10 years due to tax laws
(Chart: document is created in archive server → retention period = 10 years starts →
retention period expires)
A retention period of 3650 days is not the exact value for 10 years but is used just as a simple
example. In real scenarios, you would also have to take into account effects such as leap
years.
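The leap-year effect is easy to demonstrate (the dates are example values):

```python
from datetime import date, timedelta

creation = date(2008, 7, 1)
by_days = creation + timedelta(days=3650)        # the simple example value
exactly_10_years = creation.replace(year=2018)   # calendar-true 10 years

# Two leap days (2012 and 2016) fall into this range, so the results differ:
print(by_days)                                   # 2018-06-29
print(exactly_10_years)                          # 2018-07-01
print((exactly_10_years - creation).days)        # 3652
```

A flat 3650-day period falls two days short of the true 10-year span here; a legally exact retention period therefore has to be computed from calendar dates, not from a fixed day count.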
10-10 710
(Chart: document is created in archive server → retention period expires)
With DLM there are new storage scenarios. Of course you can simply archive content without
a controlled lifecycle. On the other hand, there is the deferred archiving feature, which allows
two-step archiving. This mode is interesting in two cases. The first is a scenario where the
application deals with working copies which are changed often or sometimes deleted, and the
application (e.g. TCP) has to ensure that no such content is written onto a read-only medium.
This has two aspects: the content would waste disk space, and the content could compromise
someone.
The other case is interesting when the application does not have enough information about the
document at creation time but has to provide a retention period, which definitely is a
creation parameter. In this case the application sets the retention to "event based" and uses
deferred archiving to specify the retention in a further step. Another scenario belongs
to this case: when archiving from SAP there is no way to add a retention parameter to the
ArchiveLink URL, so retention has to be set in a further step.
10-12 710
Compliance Mode
Keep in mind that once set, Compliance Mode cannot be turned off!
When using retention features for legal purposes, it is usually advisable to turn on Compliance
Mode. Be aware that once you turn on Compliance Mode, you are not able to turn it off! Turn it
on only when you are sure that you want this feature.
When using Archive Server 9.6.1 with installed patch EA096-078 or EA096-055, it is not
possible to delete a document while it has retention protection. Even administrators using
command line tools are no longer allowed to delete such documents (independent of
compliance mode).
However, earlier and later Archive Server versions (e.g. version 9.6.1 with Patch 087 or
version 9.7.1) return to the behaviour described in the slide.
■ Retention period
 - Set retention period to 2 days
 - Archive a document
 - Check document info in dsClient
10-14 710
11-2 710
The Archive Server is made up of the components shown above. Separate server processes
for administration and monitoring contribute to the Archive Server's modular architecture.
The central part of the Archive Server is the DocumentService; it stores and provides
documents and their components.
Depending on what media are being used, documents are stored on hard disk, WORM, CD
(≤ Archive Server 9.5), UDO (≥ Archive Server 9.5) or DVD partitions.
The WORM, UDO, CD and DVD partitions are handled by a separate sub-server called
STORM ("Storage Manager").
The Archive Server storage database, called DS (DocumentService Database), holds the
information about the archived documents and where they are stored.
The functions of the different components are explained on the following pages.
Only specific storage systems are supported for NAS/SAN connection or as a "virtual jukebox"
writing ISO images.
The DocumentService, performing all document management-related tasks, is the core of the
Archive Server. Its functionality is so extensive that the above enumeration can only give a
summary of the most important aspects.
11-4 710
■ Interaction tools
 - Archive Server Administration (various aspects)
 - Command line tool dsClient
 - Further specialized command line tools
Clients of the Archive Server access the two main services, write component (WC) and read
component (RC), separately through RPC or HTTP calls. When documents are transferred
through these protocols, they are split into chunks of 64 kByte each.
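The chunked transfer can be sketched as follows (illustrative only; the actual wire protocol details are not shown in the course material):

```python
# Sketch of splitting a document component into 64 kByte chunks, as done
# when components are transferred to/from the Archive Server over RPC or HTTP.
CHUNK_SIZE = 64 * 1024

def iter_chunks(data: bytes, size: int = CHUNK_SIZE):
    """Yield successive fixed-size chunks; the last one may be shorter."""
    for offset in range(0, len(data), size):
        yield data[offset:offset + size]

document = b"x" * (150 * 1024)     # a 150 kByte example component
chunks = list(iter_chunks(document))
print(len(chunks), len(chunks[0]), len(chunks[-1]))   # 3 65536 22528
```

A 150 kByte component thus travels as two full 64 kByte chunks plus one 22 kByte remainder.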
The write component allocates new document IDs either automatically when archiving a new
document, or in advance if the document ID is needed during creation of a document and
before it can be transferred to the archive (the early archiving scenario).
When archiving documents, a pool name must be specified, which can be determined from the
administration server given the logical archive ID and the component type. All documents are
stored on hard disk first, where they are immediately available for retrieval. When archiving
into an IXW or ISO pool, a write request to the specific type of media is created at the same
time.
The read component (up to four instances can run simultaneously) returns a list of component
names when given the document ID, and then delivers the requested components to the client.
Read document components can automatically be cached, as there is a good chance that a
document will be requested again in the near future. Apart from this mechanism, cache
requests can be sent to the read component when a document will be retrieved soon. That
way, the DocumentService can reorder requests for uncached documents to minimize the
number of disk change operations and make best use of the available drives.
Apart from the programs mentioned above, the DocumentService provides several utility
programs for performing special administration tasks, especially troubleshooting. dsClient is
the most interesting one of them; useful applications of it are described in different chapters of
this course material, and a summary is given in appendix Archive Server Command Line
Tools.
The programs which are run as jobs - as mentioned above - are not normally invoked
manually (although this is possible). Instead, this is accomplished by the
AdministrationServer's job scheduler; see page AdministrationServer later on in this chapter.
■ Tasks
 - Manage media in jukeboxes
   * Maintain jukebox inventory
   * Control media movements by jukebox robot
 - Communicate with storage systems
 - Provide access to media contents
   * Communicate via SCSI with disk drives
   * Make media accessible for Document Service via NFS protocol
   * Burn ISO images onto empty media
   * Maintain proprietary file system structure on WORM
     • Mirror structure data of not-yet-finalized WORM media
       into files on hard disk
Documents and document components may be stored on hard disk, WORM, CD, UDO or
DVD. Writing to and reading from hard disk media is handled directly by the Document
Service.
Read and write access to CD, DVD, UDO and WORM, as well as managing the jukebox
inventory, are handled by the jukebox server STORM ("Storage Manager"). It is possible to
install Archive Server without STORM if only hard disk media are used.
STORM has to use a proprietary WORM file system type since there is no industry standard
available. That way, IXOS developers took the chance to invent a file system optimized for the
purpose of high-performance, high-security archiving.
Depending on the specific storage system used, communication between the Archive Server
and the storage system may be realized differently.
11-6 710
Administration Server
■ Tasks
 - Keep Archive Server configuration information
   * Everything maintainable in the Archive Server Administration
   * Keep track of structure changes on related Archive Servers
 - Deliver configuration information on request to Livelink clients
   * Livelink Archive Windows Client configuration profiles
   * Archive Modes for Enterprise Scan
   * IXOS eCONtext for Applications (UniversalArchive) GUI configuration ("ALI")
 - Execute scheduled jobs
 - Provide administrative access to other server components
   * Via Archive Server Administration
While the DocumentService manages everything concerning documents - i.e. the data kept
by the Archive Server -, the AdministrationServer deals with the Archive Server itself: its
customized structure, users, server relationships, media devices, and timely execution of
regular tasks.
Moreover, the AdministrationServer is contacted by clients to retrieve certain types of
configuration information relevant for them. For example, the scanning application Livelink
EnterpriseScan downloads the so-called Archiving Modes (containing definitions of allowed
storage scenarios) upon startup.
The AdministrationServer consists of a single server process called admSrv (Unix) or
admsrv.exe (Windows) and uses the storage database (see later in this chapter) for safely
storing the configuration data it maintains.
Some kinds of documents require certain preprocessing steps before they are ready to be
archived. An example: in order to enable an archive user displaying a retrieved SAP
R/3 print list to quickly jump to a specific print list page, a page index - mapping page
numbers to byte offsets in the print list file - is stored together with the original list; creation of
this page index is one DocumentPipeline processing step.
Examples of DocTools (for various storage scenarios) are:
 page_idx   creates the print list page index
 doctods    passes the document over to the DocumentService
 cfbx       sends a confirmation message to SAP R/3
 DBinsert   stores document attributes in the retrieval database
11-8 710
See chapter Media Migration for more information about the Volume Migration server.
■ Monitor server
 - Polls status information from other server components
   * Process status
   * Available storage space
   * Processing faults
 - Consists of services:
   * ixmonSvc / ixmonsvc.exe (master process)
   * ixmonClnt / ixmoncln.exe (monitor agents, normally 3)
 - Status display tools:
   * Archive Server Monitor (≤ 9.5.0)
   * Archive Server WebMonitor
   * Command line tool ixmonTest (Unix), ixmontst.exe (Windows)
■ Notification server
 - Raises alert upon system events, e.g.
   * Error conditions
   * CD, DVD, UDO has been finished
 - Alert types: e-mail, log file message, script execution, SNMP trap
 - Consists of single service notifSrvr(.exe) (Unix: jre process)
 - Configuration tool: Notifications in Archive Server Administration
The monitor server gathers information about the status of relevant processes, file systems,
database sizes, and available resources. The Archive Server Monitor client can then retrieve
and display this information.
Individual, so-called monitor agents acquire the data by accessing the monitored resources via
RPC, SQL queries or operating system calls.
The notification server supplements the monitoring functionality by actively raising alerts in
case of certain system events. This has to be configured within the Archive Server
Administration; see chapter Monitoring the Archive Server for more information.
11-10 710
■ Consists of services:
 - Tomcat servlet engine (Windows: tomcat.exe, Unix: jre process)
 - loglimiter(.exe) (helper for truncating Apache logfiles; see ESC)
 - purgefiles(.exe) (helper for cleaning up Tomcat logfiles)
In Archive Server ≤ 9.5, the Apache HTTP server is also installed. It is no longer needed
with Archive Server ≥ 9.6.
The HTTP interface mentioned above was newly introduced in Archive Server 5.0. HTTP access
to stored documents, however, has already been possible before: the DocumentService's
built-in HTTP interface (listening on ports 8080 and 8090) was introduced in version 3.1.
In addition to the interface address mentioned above, SSL-based communication is also
possible via:
https://<archiveserver>:4061/
11-12 710
Product-specific Components
Depending on the product(s) that an Archive Server serves, additional components have to be
installed on the server.
In addition to the components mentioned above, some products comprise interface
components to their leading application that are normally installed on a different machine but
may optionally be operated on the Archive Server as well. This applies, for example, to the
Email Connector service of Email Archiving.
The PDMS product uses the Context Server, which might also result in specific components
on the Archive Server.
The illustrated directory structure is common to all Archive Server installations with a release ≥ 4.0 (the
structure of former versions is significantly different). The directory layer directly below the installation
root is completely displayed above, whereas the chart shows only an excerpt of the deeper layer(s).
Some annotations on directories:
bin contains all executables needed by the Archive Server system itself as well as by the
administrator (e. g. command line tools). For this reason, it is best to have this directory
included in the shell command search path.
pkg contains the Archive Server software as copied from the installation medium. The software is
subdivided into packages (to be discussed on the following pages), each of which is
represented here as a separate subdirectory (e.g. ADMS, DS, SCSI, MONS). Parts of the
material contained herein are copied to other parts of the directory tree during installation, e.g.
binaries to bin, start/stop scripts to rc, and config file templates to config.
config contains all kinds of configuration information (for Windows: apart from the data held in the
Registry). This comprises package-specific configuration (e.g. package STORM) as well as
general setup settings (e.g. subdirectories dpconfig, monitor, servtab).
prj contains Archive Server extensions developed for customer-specific archiving projects. On a
standard installation, it is empty.
opt contains third-party software that comes with and is needed by the Archive Server, e. g. the
perl scripting engine.
var is the only directory whose contents are changed by the Archive Server system itself. The most
prominent subdirectory is log, holding all Archive Server system log files; other
subdirectories are used by Archive Server components for internal purposes.
w3 contains files - HTML pages and CGI scripts - that constitute the Archive Server's HTTP
interface.
In addition, Archive Server ≥ 9.6 also contains a folder webapps that contains Java libraries and XML
files.
11-14 710
All delivered Archive Server software is bound into packages, most (but not all) of which
comprise one complete functional component of the Archive Server or a client; this applies, for
example, to packages DS, ADMS, and STORM. On the other hand, the DocumentPipeline is
made up of several packages, depending on the installed Archive Server product and the
specific storage scenarios that the pipeline is expected to perform.
Each installed package resides in its own subdirectory of the pkg directory. However, parts of
the package contents are moved to other locations, e.g. configuration elements go into the
config directory structure.
The above enumeration lists only the most interesting Archive Server packages. There are
many more, some of which are employed only in extended server installations, e.g. if
machine-generated documents are to be stored via the so-called COLD DocumentPipeline.
The list here represents some of the most important packages.
The SCSI package provides a standard interface to all SCSI devices and contains all hardware
specific and operating system specific software. It provides an interface to the optical disk
drives (used by WORM, UDO and DVD).
While the Unix versions utilize the operating system's device drivers and support all available
SCSI interfaces, the Windows driver interfaces directly with the hardware and requires certain
supported SCSI controllers.
Vendor provided SCSI drivers for Windows (ASPI, CAM, or similar) are neither required nor
supported by Open Text. However, access to hard disks is done using the operating system
driver, and the choice of SCSI controllers for hard disk access is not affected.
Chapter Guide
Print or save
configuration report
Useful for:
• Your own documentation
• Correspondence with
IXOS technical support
Most aspects of the Archive Server behavior - i. e. the "storage dynamics" mentioned in the
slide title - can be configured in the tabs of the Archive Server Administration. Prominent
examples include:
• Logical archives
• Media pools
• Disk buffers
• Job scheduling
To access these configuration items, no special tool is available nor necessary: The graphical
Archive Server Administration is the dedicated tool for this purpose.
To print or save (as a file) a report of your Archive Server configuration for documentation
purposes, follow these steps:
1. In the Archive Server Administration, click the "printer" button (as indicated above).
2. In the Print report dialog, choose which elements of the configuration shall be
documented. For a complete documentation, choose everything except Job protocol
and Alerts.
3. To print the report immediately, click button Print.
To save the report in a file, click button Preview. In the following Print preview ...
window, use the Save As (= "floppy disk") button to invoke the save function.
Find the mentioned manual Archive Server Administration Guide in ESC as: Products ECM
Suite I Archive Server I Product Documentation:
https://esc.ixos.com/l084264247-891
• This possibility of accessing most parts of the Archive Server configuration remotely was
introduced in version 5.0.
• For the sake of later undoing, all configuration changes are recorded in files in the
<IXOS_ROOT>/var/cfgbak directory. Undoing is then performed from the Server
Configuration dialog, menu File → Load configuration saved on ...
• A number of XML files defines how the presentation of configuration variables is arranged
(structure and descriptive texts). If, for example, an installed project requires administrative
access to non-standard configuration information, an expert can change or add to this
presentation setup. The files are held in the directory <IXOS_ROOT>/config/xml/*.xml.
• The display options mentioned above govern details about what is actually displayed in the
configuration structure. You can set them in the View menu.
• Some configuration settings may be altered while the system is running but without storing
them permanently; such settings will be lost when the system is restarted the next time.
Specifically, this applies to dynamic log level settings (see chapter Logfiles and Loglevels for
more information). For those parameters whose stored value differs from the currently
effective value, the Display runtime values option decides which one of the values will
be displayed.
• About the meaning of the item prefixes in the structure display:
[P] Persistent; value is saved in Registry / setup file, but service(s) need to be restarted
to activate it
[T] Temporary; value is sent to service and accepted without restart, but after restart of
service the former (persistent) value is used
[B] Both; value is effective immediately and permanently
• Find the mentioned administration manual Archive Server Configuration Parameters in ESC
as:
https://esc.ixos.com/l106224732-381
Most parts of the "basic" Archive Server configuration (i. e. not related to the actual
arrangement of logical archives, pools, disk buffers, etc.) are stored as lists of configuration
variables:
On a Windows-based Archive Server: in the Windows Registry, branch
HKEY_LOCAL_MACHINE\SOFTWARE\IXOS\IXOS_ARCHIVE.
Use the Registry Editor (command regedit) to view and maintain the configuration.
On a Unix-based Archive Server: in configuration files
/usr/ixos-archive/config/setup/<comp>.Setup, where <comp> is the
IXOS-eCONserver component configured by the file.
Any text editor (e. g. vi) can be used to edit the configuration.
No matter which OS platform your Archive Server is based on, the total set of its configuration
variables is subdivided according to the elements of the Archive Server architecture (see
chapter Archive Server Architecture). Examples are:
• DS DocumentService
• ADMS AdministrationService
• R3LK SAP R/3 communication DocTools of DocumentPipeline
A special configuration item is COMMON, containing variables used by more than one specific
Archive Server component.
Examples:
The configuration components mentioned above hold the following types of configuration
information:
Spawner configuration
Which processes to start in which order?
Monitor service configuration
Which processes to observe?
What parameters to display?
DocumentPipeline configuration
Which DocTools to chain into which pipeline threads?
Scripted routines
E. g. for periodic jobs in ADMS, it is possible to plug in your own scripts
STORM configuration
- Configuration parameters (e.g. loglevels)
- WORM filesystem data files
- Connected storage systems
The directory placeholder <IXOS_ROOT> - mentioned above - shall be replaced by the path
defined as the root directory of the Archive Server software installation tree (see page Installed
software later in this chapter).
Chapter Guide
Installed Software
- MS SQL Server
* See Registry entry
HKLM\SOFTWARE\MICROSOFT\MSSQLServer\Setup\SQLPath
The above mentioned scheme for determining the Archive Server software location holds for
all types of an Archive Server system installation: server and clients.
Configuration locations for the mentioned items:
Database system Oracle:
Server Configuration: Document Service (DS) → Directories → Cache path (...)
Registry (Win): ...\DS\DS_CACHE_PATH
Setup file (Unix): DS.Setup, variable DS_CACHE_PATH
The directory placeholder <IXOS_ROOT> shall be replaced by the path defined as the
root directory of the Archive Server software installation tree (see earlier in this
chapter).
The basename of STORM log files, jbd, is derived from STORM's secondary name
Jukebox Daemon.
The Unix environment variable ORACLE_HOME is usually only defined for user oracle;
to retrieve the alert log file as mentioned above, you should switch to this user ID first
(using the 'su - oracle' command).
The directory placeholder <SQL_SERVER_ROOT> shall be replaced by the path
defined as the root directory of the MS SQL Server software installation tree (see earlier
in this chapter).
The former SQL Server log files are numbered according to their age, i. e.
ERRORLOG.1 is the most recent, ERRORLOG.6 the oldest one. As soon as a new log is
to be created, the oldest one is deleted, the numbering of the remaining ones is shifted
by one, and ERRORLOG is renamed to ERRORLOG.1.
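The rotation scheme can be re-enacted with a few shell commands. This is a toy simulation in a scratch directory; the file names mirror SQL Server's, but the contents are invented for illustration:

```shell
# build a full set of rotated logs plus a current ERRORLOG
demo=$(mktemp -d)
cd "$demo"
for i in 1 2 3 4 5 6; do echo "old log $i" > "ERRORLOG.$i"; done
echo "current log" > ERRORLOG

# a new log is about to be created: delete the oldest, shift the numbering
# of the remaining ones by one, and rename ERRORLOG to ERRORLOG.1
rm ERRORLOG.6
for i in 5 4 3 2 1; do mv "ERRORLOG.$i" "ERRORLOG.$((i + 1))"; done
mv ERRORLOG ERRORLOG.1

cat ERRORLOG.1   # the former current log
cat ERRORLOG.6   # the former ERRORLOG.5
```

After the rotation, ERRORLOG.1 holds what was the current log, and what was ERRORLOG.5 now appears as ERRORLOG.6.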
The HTTP-based logfile access tool (only Archive Server ≤ 9.5)
http://<archiveserver>:4060/cgi-bin/tools/log.pl
provides some additional useful possibilities:
- List only the tail of a file
- Filter for log message types (error, warning, information, ...)
- Filter for arbitrary search string
It is possible to use the Perl scripts from an older version, but check your security
environment!
The command line tool scsidevs, delivered as part of the Archive Server, is very handy for
getting information about connected SCSI devices. The SCSI device file specification it
displays can be entered exactly the same way in the STORM (storage manager) configuration
files for storage systems.
You can retrieve more detailed information about a certain SCSI device with the command
scsidevs -full scsi <scsi address>
e.g. scsidevs -full scsi \\.\p0b0t0,0
where <scsi address> is the device's SCSI address as described above.
Chapter Guide
• Cache partitions
  - Server Configuration
  - Registry / Setup file
• DocumentPipeline directory ("DPDIR")
  - Server Configuration
  - Registry / Setup file
The hard disk storage locations mentioned above are defined globally per Archive Server, i. e. there is
- for example - no separate DocumentPipeline directory for each logical archive. Usually, those
locations are specified at Archive Server installation time and are rarely changed afterwards.
Configuration locations for the mentioned items:
Cache partitions
Server Configuration: Document Service (DS) → Directories → Cache path (...)
Registry (Win): ...\DS\DS_CACHE_PATH
Setup file (Unix): DS.Setup, variable DS_CACHE_PATH
DocumentPipeline directory
Server Configuration: General Archive Server settings (COMMON) → DP settings
→ Directory for Temporary Storage of Documents
Registry (Win): ... \ COMMON\DPDIR
Setup file (Unix): COMMON. Setup, variable DPDIR
COLD batch import directory (present for some IXOS products only)
Server Configuration: Configuration for CODB Pipeline (CODB) → General
Install. Variables → External Directory for CODB pipeline
Registry (Win): ... \CODB\DATA_DIR
Setup file (Unix): CODB. Setup, variable DATA_DIR
EXDB batch import directory (present for some IXOS products only)
Server Configuration: Configuration for EXDB Pipeline (EXDB) → General
Install. Variables → External Directory for EXDB pipeline
Registry (Win): ... \EXDB\EXT_DIR
Setup file (Unix): EXDB.Setup, variable EXT_DIR
ISO burn buffer
Server Configuration: Document Service (DS) → Media configuration
→ ISO settings → Directory where cd/iso trees are built
and → Directory where cd/iso images are built
Registry (Win): ...\DS\CDDIR, ...\DS\CDIMG
Setup file (Unix): DS.Setup, variables CDDIR and CDIMG
R/3 System
configuration:
(Barcode archiving
only)
The chart above illustrates how to find data storage locations which are specified individually
per logical archive, media pool, or disk buffer; retrieval is done in the Archive Server
Administration, tab Servers.
To find out the hard disk partition where a disk buffer volume resides:
1. Click the disk buffer in question under Buffers in the left-hand display area
2. See the name of the disk buffer volume in question in the Name column of the
Partitions list (right-hand display area)
3. Click the HardDisk icon under Devices in the left-hand display area
4. Look up the retrieved buffer volume name in the Partitions list (right-hand display
area) and see the corresponding Mount Path entry
To find out the disk partition where a hard disk pool volume resides:
1. Click the hard disk pool of the logical archive in question in the left-hand display area
2. See the name of the volume in question in the Name column of the Partitions list
(right-hand display area)
3. Look up the drive letter or mount path in the Devices tab as described for disk buffer
volumes (above)
Server Configuration
Config file server.cfg
For explanations about the meaning of the mentioned storage locations, see other course material
chapters:
WORM filesystem database → The WORM Filesystem
Temp. storage for WORM writing → Document Processing by the Livelink Enterprise
Archive Server
STORM backup directory/-ies → Backing up the Archive Server
/ixos/oradata/ECR/dat/ds2_system.dbf
/ixos/oradata/ECR/dat/ds2_datal.dbf
/ixos/oradata/ECR/idx/ds2_indexl.dbf
SQL>
In Archive Server ≥ 9.6, there is a new database instance ECR that contains all tables related to the
Archive Server, including the DS storage tables.
In Archive Server ≤ 9.5, there is no ECR database instance. Instead, the DS has its own database
instance called DS. Log on to the DS database instance with the following command:
sqlplus ecr/ecr@doc_econserver1
The sources for the information mentioned above are the database itself and the database instance
configuration file initECR.ora.
To query the database for the mentioned information, do the following on a command line on your
Archive Server:
1. (Unix platforms only:) Become the database user by typing: su - oracle
(If the Oracle system is run under a different user name, take that one instead of oracle.)
2. Connect to the database with the command:
sqlplus <dbuser>/<password>@ecr_<servername>
Replace the <...> placeholders by the values used in your environment. Using the standard
database user name and password, this would look like:
sqlplus ecr/ecr@doc_<servername>
(The query for the archive redo logfiles cannot be done with the "normal" database user ixds; for
this, you have to connect to the database as internal.)
3. Perform the query needed for retrieving the desired information (see slide above).
4. When done, exit sqlplus with the exit command.
initDS.ora is the configuration file of the database instance. By default, it can be found in subdirectory
config/oracle of the Archive Server installation directory. (On a non-standard Archive Server
installation, look up the path and name of this file in the Server Configuration, branch Oracle Server
Database (DBORAS) → Server Database Parameters ... → Settings of the Server database → Settings
of DB parameters, entry Parameter file for DB.)
On an Archive Server ≤ 4.2, the file is located in the Oracle installation directory (see page Installed
software earlier in this chapter), subdirectory Database (Windows) or dbs (Unix), respectively.
In the initDS. ora file, the configuration items mentioned above can be found:
Archive redo log files: Parameter log_archive_dest
Control files: Parameter control_files
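A quick way to pull both settings out of the parameter file is a grep. The fragment below first writes an illustrative initDS.ora with invented path values, purely so the lookup is self-contained; on a real server you would point grep at the actual file located as described above:

```shell
# illustrative parameter file; the two parameter names are the real Oracle
# ones mentioned above, the path values are invented for this sketch
cat > /tmp/initDS.ora <<'EOF'
log_archive_dest = /ixos/oradata/ECR/arch
control_files    = (/ixos/oradata/ECR/ctl/control01.ctl)
EOF

# list both storage-relevant parameters at once
grep -Ei '^(log_archive_dest|control_files)' /tmp/initDS.ora
```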
exec sp_helpfile
To query for data files using the SQL Server query GUI (illustrated above):
1. Start the SQL Server Query Analyzer ("isqlw") by choosing the Start menu item
Programs → Microsoft SQL Server 7.0 → Query Analyzer.
2. Log on to the Archive Server's database system as database user ecr (whose default
password is ecr).
3. Select the database to query within text field DB.
4. Issue the query in the main window area and press F5 to execute it.
If you prefer to use the command line query tool isql instead:
1. Open a command prompt window and enter the <SQL_SERVER_ROOT>\BINN
directory.
2. Perform the following session:
D:\MSSQL\BINN> isql -S <servername> -Uecr -P<ecr-passwd>
1> use <database name>
2> exec sp_helpfile
3> go
<database name> should be one of the database names mentioned above.
3. Exit from isql with the exit command.
For database backup purposes, it is important to get hold of the files belonging to all databases
mentioned above.
(The directory placeholder <SQL_DATA_ROOT> shall be replaced by the path defined as the
root directory for MS SQL Server data; this can be found as Windows registry entry
HKLM\SOFTWARE\MICROSOFT\MSSQLServer\Setup\SQLDataRoot and is by default
equal to the SQL Server installation root discussed earlier in this chapter.)
[Diagram: startup order and shutdown order of the two process layers, with the Spawner layer on top of the database layer]
The Archive Server installation is subdivided into two layers of processes which are started up
and shut down separately. However, this does not mean that they may be started up and shut
down independently from each other; the spawner layer depends on the availability of the
underlying database layer.
When the whole machine is booted or shut down, it is ensured that all Archive Server
processes are started/stopped in the proper order. When performing a manual startup or
shutdown - e. g. for backup or maintenance reasons - or when developing your own
startup/shutdown scripts, however, you have to obey the startup/shutdown order displayed
above.
Full shutdown
• How-to:
  - spawncmd exit
  - Stop spawner service by OS means
• Requires "full" startup afterwards
  - How-to: Start spawner service by OS means

"stopall" shutdown
• How-to:
  - spawncmd stopall
  - Archive Server Administration: Menu File → Spawner → Stop Archive Processes
• Requires "startall" startup afterwards
  - How-to: spawncmd startall
  - No "full" startup - spawner itself is still running!
Knowing the difference between the two variants of shutting down the spawner is essential for
being able to use the different ways of startup and shutdown, explained in this chapter.
Generally, the "stopall" shutdown method is the more "gentle" one; it keeps the spawner itself
running, yielding the considerable advantage to provide process status information during the
shutdown period. This is useful because, especially on Unix platforms, many
spawner-controlled processes terminate asynchronously, i. e. they are still alive when the shutdown
command (whichever you have chosen) returns. Being able to check when all processes have
terminated (using spawncmd status, see later in this chapter) is therefore an essential tool
for setting up your own maintenance tools, e. g. scripts for offline server backup.
For this reason, you should prefer using the "stopall" shutdown - combined with the "startall"
startup - wherever possible.
- Shutdown:
net stop "IXOS Spawner" ~,
w.",,,,,,,,,~=;;;;;;;;;;;;;;;;;;=:;;;;;:===~":~~~~~~J
net stop Oracle<ORAyOME>TNSListener (only Oracle data base)
net stop OracleServiceECR or net stop mssqlserver
Instead of the "long" service names "IXOS Jukebox Daemon" and "IXOS Spawner", you can
also use the short names "jbd" and "spawner", respectively.
The <ORA_HOME> placeholder mentioned above represents the Oracle Home name of your
Oracle DBMS installation; it has been defined during the Oracle installation routine. In case
you are unsure what the actual value of this parameter is, simply open the Windows services
list; you will easily recognize the name by having a look at the Oracle services mentioned
there.
Stopping the OracleServiceECR without explicitly shutting down the database instance before
causes no problems but leads to annoying warning messages in the Oracle trace file. In order
to keep your trace files clean, you may insert the following command in an archive shutdown
script right before the "net stop OracleServiceECR" command (given in cmd syntax; be sure to
enter it as one single line):
(echo connect internal/<passwd> && echo shutdown immediate) | svrmgrl
• Script usage
- Startup: <scriptname> start
- Shutdown: <scriptname> stop
After having stopped all spawner-controlled processes (with whatever command), you must
wait for the termination of all of them before starting them again - otherwise some of them
won't come up properly. Termination of all processes usually takes about a minute.
In this respect, it is more useful to use spawncmd stopall and spawncmd startall
instead of S18BASE stop/start for shutting down and restarting the spawner-controlled
process layer because then it is still possible to query the processes' status (with spawncmd
status) during the shutdown period.
Details about how to use spawncmd status are presented later in this chapter.
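Such a scripted wait can be sketched as a poll loop. The stub function below stands in for the real spawncmd, and its output format - each line ending with the R/S/T status flag described in this chapter - is an assumption made so the loop is self-contained; on a real server you would call spawncmd itself:

```shell
# stub standing in for "spawncmd status"; the layout (status flag as the
# last field of each line) is an assumption for this sketch
spawncmd() {
  cat <<'EOF'
admsrv T
dsrc1  T
dswc   T
EOF
}

# poll until no process is reported as starting (S) or running (R)
tries=0
while spawncmd status | awk '{print $NF}' | grep -q '[RS]'; do
  tries=$((tries + 1))
  if [ "$tries" -ge 60 ]; then
    echo "processes still running after 60 checks" >&2
    break
  fi
  sleep 1
done
echo "all spawner-controlled processes terminated"
```

In a real offline-backup script, the backup step would follow this loop only after it completes without hitting the retry limit.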
# DBTest
trying to connect
"connected" → Everything okay, database running and accessible
Some error message → Database not accessible; investigate this
DBTest is a database connection testing tool provided as part of the Archive Server
installation. DBTest tries to access the DS database exactly the same way the "productive"
Archive Server components do, e. g. document service and administration service.
If you have problems starting up the archive system but DBTest succeeds, you can be sure
that it is not the database that is causing the problem. Reversely, if this database test does not
finish with the "connected" message, you can be sure that the spawner-driven archive
processes will not come up properly.
Archive Server ≤ 4.2: Use dsConTest 3 instead of DBTest, which does exactly the same
test.
The best way to check whether the archive system startup was successful is the spawncmd
status command. The information presented in the status column ("sta") of the resulting
process list means:
S Process is currently starting up
R Process is running
T Process is terminated
In a sane operational state, all archive system processes listed in spawncmd status have to
be running - except for the ones marked in the chart above. (However, not all of them may be
present, depending on your Archive Server release and operating system type; if you do not
see one of them at all, this is no problem.) If any of the other programs is marked as
terminated, something irregular has happened to it. To investigate this, you will have to have a
look in the corresponding log file. Each of the listed programs writes to a log file whose name
is similar, yet not always exactly equal, to the displayed program name. Some important
examples:
admsrv → admSrv.log
dsrc1 → RC1.log
dswc → WC.log
As mentioned earlier in this chapter, the spawncmd status check is also possible after a
shutdown has been made using spawncmd stopall. In this situation, use the check to make
sure all processes are listed with status T before going on (making a backup, starting up again,
or taking other actions).
On a scanning station with EnterpriseScan installation, a subset of the server processes is
installed and must be running as well. There, stockist is the only program that is allowed to
be terminated during normal operation.
IXOS-ARCHIVE ≤ 4.2: One additional process, named checkscsi, is always allowed to be
terminated; it is okay even if its exit code is 1. (Its purpose is to check whether the versions of
the IXOS generic SCSI driver and the operating system match, which is no longer necessary
on the eCONserver 5.0.)
Shutdown:
(Only possible from within admin session)
Startup:
As of Archive Server 5.0, it is possible to control and check the spawner operation status also
remotely, using the Archive Server Administration (as illustrated above). The following actions
are possible:
Stop Archive Processes: Performs a "stopall" spawner shutdown, similar to spawncmd
stopall. Note that, in addition to the spawner itself, a few of its child processes
(constituting an HTTP interface for remote administrative access) are kept alive in this
case; this is necessary for the remote spawner actions discussed here to be enabled
even during a shutdown period. Nevertheless: Most maintenance work on the Archive
Server is still possible during this type of shutdown.
Start: Starts up the spawner child processes again. This is possible only if the spawner has
been shut down before using the "Stop Archive Processes" action!
Status: Displays a list of spawner-controlled processes and their operational status,
equivalent to spawncmd status. This is possible only if the Archive Server is running
or the spawner has been shut down before using the "Stop Archive Processes" action!
Exit: Performs a "full" spawner shutdown. After this, no remote startup or status check is
possible; a "full" startup has to be made instead.
These features of the Administration Client are a matter of convenience; the same actions are
still possible using the spawncmd tool on the command line directly.
STORM caveats
• Operational status of
system processes
(running/terminated)
- Document Service
- Storage Manager
- Document Pipeline
• Storage space
- Media pools
- Hard disk buffers
- Document pipeline
- Database
• Document Pipeline
processing errors
• Requests for reading from
unavailable volumes
The Archive Web Monitor supports the timely recognition of problematic conditions within the
Livelink Enterprise system. It not only monitors the Document Service but also the Document
Pipelines on the Archive Server and on scanning workstations.
The state of the observed parameters is visualized in a three-level scheme: normal, warning,
error. In addition, each parameter can be accessed in the detail view.
Multiple Archive Servers may be monitored at the same time. This also includes the scanning
hosts (with Livelink Enterprise Scan installed and used) - the Archive Web Monitor then
reveals processing errors within the Document Pipelines of the scanning station.
As of Archive Server 9.5, it is necessary to install the Monitor Service on the Livelink
Enterprise Scan client and also on the Document Pipeline servers to monitor these stations
with the Archive Server Web Monitor.
[Slide callouts: "Host is okay"; "Collapsed host structure display masks an error item - notice is propagated to structure root"]
Warning or error notices of a single resource are propagated to a resource group and the host
item, so that they become visible even if a group is collapsed in the tree view. That way, error
conditions can be recognized at a glance. Moreover, it is possible to watch the overall state of
multiple Archive Server and Livelink Enterprise Scan hosts at the same time without wasting
display space: Just keep all host trees collapsed - as soon as a host icon indicates a warning
or an error, open the structure and follow the indication.
The Archive Server Monitor is not permanently connected to the Archive Server's (or scanning
client's) monitor service; instead, it polls status information to be displayed at regular intervals
(default: two minutes). The status bar message "Disconnected" therefore is the normal
operational state (not to be misinterpreted as a communication problem).
Access via:
• http://<archiveserver>:4060
• https://<archiveserver>:4061
Find the newest revision of the mentioned Release Notes Archive Server in the ESC, starting
from folder:
https://esc.ixos.com/l0B0543304-759
http://<servername>:4060/w3monc/index.html?host=<servername2>:<port>
& host=<servername3>:<port>
& iconType=<iconType>
& refreshInterval=<number>
http://vrnlecr96:4060/w3monc/index.html?iconType=Faces&refreshInterval
=10&host=vrnlsvrfcOO:4060&host=muc026Bl:4061
The only important status information not provided by the Archive Server monitoring tools is
the execution results of the periodic jobs run by the Archive Server scheduler. To monitor
those results (in order to detect any problems), open the Job Protocol window of the
Archive Server Administration as illustrated above. Like in the Archive Server Monitor display,
error conditions are indicated by red bulbs in the protocol list.
In order to get further information about what has gone wrong, you may choose a job entry
from the list and click the Messages button. This opens a further window displaying log
messages which the selected job invocation has written.
<ixos-root>\var\log\messages\*.*
Chapter guide
Archive Server
raises an alert in case
of certain events
The Archive Server is able to actively raise alerts in case of specified events. Such events
include the completion of an ISO medium and different kinds of errors or interruptions; a
reasonable set of event types is already predefined.
Setting up a notification for a certain event is done as follows:
1. Within the Notifications tab in the Archive Server Administration, create a
notification as illustrated above. Parameters to be specified include the alert type, the
period when this notification shall be active, and specific parameters for the various alert
types (e. g. the recipient in case of e-mail alert, or the file name in case of writing a log
message).
2. Select the event that you intend to assign the new notification to, then right-click on the
notification item; from the appearing context menu, choose Assign to Event.
Alerts of type "Admin Client Alert" can be displayed by clicking the "exclamation mark" button
on the Archive Server Administration's tool bar.
A notification's configuration may contain placeholders, like $HOST or $MSGTEXT (visible
within the e-mail subject above), which will be replaced by current values at each notification
invocation. A complete, detailed description of available placeholders can be found in the
Archive Server Administration Guide.
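As a rough shell analogy of that substitution (the real replacement is done by the notification engine; $HOST and $MSGTEXT are the two placeholders named above, while the concrete values are invented for this sketch):

```shell
# a notification subject template using the two placeholders
subject='Alert from $HOST: $MSGTEXT'

# current values at notification time (invented for this sketch)
HOST=myarchive
MSGTEXT="jukebox full"

# substitute the placeholders much like the notification engine would
printf '%s\n' "$subject" \
  | sed -e "s/\$HOST/$HOST/" -e "s/\$MSGTEXT/$MSGTEXT/"
# prints: Alert from myarchive: jukebox full
```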
Assigning a notification to the predefined event Any Message from Monitor Server expands
the functionality of the Archive Server Monitor (discussed earlier in this chapter): Whenever a
monitoring bulb turns from green into yellow or red (or from yellow into red), this event is
raised and thus a notification sent. With this configured, it is no longer necessary to look at the
Archive Server Monitor periodically to be informed about problematic situations. (The only
exception is a complete server breakdown; in this case, no notification can be sent by the
server itself of course.)
[Screenshot: Archive Server Monitor tree showing a host with resource groups, e.g. BackupServer (Component: bksrvr, Status: Active, Details: Ok) and DS Pools]
ixmonTest (on Windows: ixmonTst) is the command line equivalent to the Archive Server Monitor.
As illustrated above, it outputs the requested status information in textual form, ready for being analyzed
by external text processing tools (like grep and perl). You can use it to implement your own monitoring
routines, for example to raise notifications in whatever situation may be important for you.
Some hints on using ixmonTest:
The utility is installed on the Archive Server only, not - together with the Archive Server Monitor
- on the administrator workstation. However, you can use it via the network; for this, call it as
ixmonTest -h <host> .... (This way, you can monitor several Archive Servers centrally
from a single server.)
The monitored status items are arranged as an enumerated list. With the ixmonTest arguments
walk <start> <end>, you select a certain range from this list. However, for a full monitoring
the only reasonable choice is retrieving the whole status list.
On the other hand, the status list has a variable length; it depends on the installation and
configuration of your server. The best way to determine the exact end number is calling
ixmonTest manually and trying different numbers until you see "empty" items at the list's end.
An item's status (okay, warning, or error) is expressed as a numerical value: 0 means okay,
everything else indicates a problem. (Do not rely on the warning and error values to always be
the ones shown above; they may vary depending on the type of problem.)
For a selection of what is interesting for your notification routine, you can refer to the name
attribute included in the output status list.
The following is a simplistic but, nevertheless, complete example shell script that uses ixmonTest; it
sends an e-mail whenever it finds a monitored item with a (non-empty) status other than zero:
# first check that we don't miss any status list items
if ! ixmonTest walk 300 300 | grep -q 'name=""'; then
    echo "Too many status items!" | mail -s Problem [email protected]
fi
# now examine the monitoring output for non-okay items
if ixmonTest walk 1 300 | grep -q 'value="[^0"]'; then
    echo "Problem on eCONserver!" | mail -s Problem [email protected]
fi
This script is meant to be executed as a cron job (every five minutes, for example) on the operating
system. Feel free to amend the script to an arbitrary level of complexity to exactly suit your needs.
For scripted monitoring, it is also necessary to retrieve status information about success or
failure of scheduled jobs on the Archive Server (see also page Review job messages and job
protocol earlier in this chapter). This information is stored in the storage database DS and can
therefore be retrieved by a suitable database query; see the above illustration for details.
Here is a simplistic but, nevertheless, complete example shell script - assuming Oracle as
database platform - that sends an e-mail whenever it finds a protocol entry indicating a job
failure:
if echo "select 'num_errors', count(*)
         from ADM_JOB_PROTOCOL
         where STAT <> 'INFO';" \
   | sqlplus -S ecr/ecr | grep -q 'num_errors.*[1-9]'
then
    echo "Failed jobs found!" | mail -s Problem [email protected]
fi
(On a Unix-based Archive Server, this script has to be executed under the user account that
the database runs as; it is normally named oracle.)
Again, this script is most useful if executed periodically on operating system level.
(Chart: overview of media states; the state "Exported from database" is not part of this chapter)
The chart above gives an overview of the different ways the Archive Server may regard optical
media.
Note that "being present in a jukebox (or single drive) device" does not mean the same as
"known to the storage database"; any combination of these two state properties is possible.
Media may move from one state to another. Some of these state transitions may happen
without manual interference; others must be performed by operating personnel. The
transitions relevant for operating are labeled above:
A Filling an empty ISO medium with data and setting it to online, read-only, is done
automatically by a periodic ISO write job.
B Each new IXW media partition must be initialized for writing and reading. This can be
done either manually by the operator or automatically by the IXW write job.
C Likewise, backup IXW media have to be initialized.
D After an IXW media partition has been filled up to the desired amount, it can be finalized,
which sets it to a permanent read-only state.
E As soon as an IXW medium has been filled up with data and its backup has been
synchronized, the backup has to be taken out of the device and stored safely away.
F When the jukebox becomes full, the operator can make room for new media by taking
"old" ones out and storing them in a safe place.
G In case a Livelink client requests to read a document from a taken-out medium, it is
the operator's duty to re-insert that particular medium into the jukebox.
The following pages present details about how to perform each of the mentioned transitions.
(Chart: the backup medium is stored in a safe; the original stays in the jukebox)
To provide empty ISO media to the Archive Server, you simply insert them into the jukebox.
Whenever a disk (or a set of disks: original and backup) is to be burned, the Archive Server
automatically chooses an empty one, assigns a name to it, and attaches it to the corresponding
ISO pool.
Check when
initializing
backup IXW
media
Unlike ISO media volumes, IXW media partitions must be initialized and assigned to an IXW
pool before documents can be written to them. Initializing an IXW media partition basically
means giving it a name (for a reasonable naming convention, see page IXW media naming
scheme later in this chapter); the above chart illustrates how this is accomplished using the
Archive Server Administration.
Notes:
Since IXW media comprise two partitions, each of them must be initialized separately.
An IXW media backup partition must be initialized with the same name as the original
volume; the archive system recognizes the relation between them by the name
correspondence.
See later in this chapter how IXW media can be initialized and assigned to a pool
automatically.
Backup IXW media have to be kept in the jukebox as long as data is still written to the original
medium (since the backup is synchronized incrementally). The point in time at which to remove
the backup IXW medium can be recognized from the status display for the original (as illustrated
above): 'F' means full, i.e. no more data will be written to it. Then it is time to store away the
backup IXW medium.
• Priority = order in which partitions of a pool are filled with data
  - Smallest priority value = filled first
• IXW media jukebox must flip disk in order to access reverse side
• Set priorities so that two sides of the same disk are not adjacent
  (not numerically consecutive)
  → Avoids too much disk flipping in jukebox
(Chart, filling order: first disk, side A = 1, side B = 3; second disk, side A = 2, side B = 4)
Handling Optical Archive Media Slide 5
Since IXW disks are double-sided but IXW drives can access only one side at a time, the
jukebox robot must turn the disk whenever the reverse side is to be accessed. For this reason,
it would be inefficient to begin filling the second side of an IXW medium immediately after the
first side has been finished: read requests for recently archived documents would be directed
to the first side, whereas write requests for newly arrived documents would require the second
side; this would result in very frequent disk flipping.
If the jukebox has enough drives, it is better to distribute the filling order evenly to two (or
more) disks as illustrated above. That way, it is possible that the two disk sides currently under
frequent access (the one just finished and the one just begun) stay in different drives for a
longer period, allowing fast access to all currently prominent documents.
To accomplish this method, you always have to initialize and assign two IXW media at the
same time (plus two backup IXW media), giving four different partitions altogether.
- Side A: <archive_id>_<seq_no>A
- Side B: <archive_id>_<seq_no>B
Examples: FC0001A, FC0001B
Handling Optical Archive Media Slide 6
An Archive Server does not impose any constraints on IXW media partition names (except for
a length limitation of 32 characters); therefore, you should set up a naming convention at the
very beginning of Archive Server usage. The above chart gives a reasonable proposal.
Labeling the IXW medium is done the following way: the medium as a whole is labeled
physically by writing the name on the case; the media sides, i.e. partitions, are only
labeled by assigning the side names electronically during initialization. It does not matter which
side is initialized as 'A' or as 'B'; the jukebox is capable of detecting this automatically.
Notes:
Including a date in the name does not make much sense: you have to assign the
name before usage of the IXW medium begins, i.e. at a time when you do not yet know
when filling will be finished. Even if you intend to interpret the date as the ending date of
filling the previous medium, you will initialize the new medium before the previous one is
completely filled, so you cannot precisely anticipate when it will be finished.
It is best to use a fixed number of digits for the sequence number (four will always be
sufficient); this makes it easy to order IXW media lists numerically in the Archive Server
Administration display.
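Following the naming scheme proposed above, fixed-width sequence numbers can be produced with printf; a small sketch (the archive ID "FC" and the sequence values are illustrative only, taken from the examples on this page):

```shell
# print partition names in the <archive_id><seq_no><side> form shown in the
# examples above, with a fixed four-digit sequence number
archive_id="FC"
for seq in 1 2; do
  for side in A B; do
    printf '%s%04d%s\n' "$archive_id" "$seq" "$side"
  done
done
```

The fixed %04d width is exactly what keeps media lists ordered correctly in the Archive Server Administration display.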
• Proceeding
  - IXW media write job checks availability of assigned "empty" IXW media
    after invocation
    • "Empty" = filled to less than 5% (changeable)
  - If not enough assigned "empty" IXW media are found, new ones are
    initialized and assigned to pool
  - Then writing from disk buffer to IXW media is started
The naming pattern for automatic IXW media initialization may contain certain placeholders
which are replaced by actual values at the time of WORM initialization. These
placeholders include:
$(ARCHIVE)      Logical archive name
$(POOL)         Pool name
$(PREF)         Name prefix as defined in Archive Server configuration (default: "IXOS")
$(SEQ)          Sequence number (mandatory)
$(YEAR), $(MONTH), $(MDAY),
$(HOUR), $(MIN), $(SEC)       Date and time variables
$ENV(varname)   Value of environment variable varname
(The parentheses around the placeholder names are not strictly necessary, but you will almost
always need them to separate placeholder names from other name pattern elements properly.)
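To see how such a pattern expands, here is a toy stand-in for the substitution (the function below is an illustration only; the real expansion happens inside the Archive Server, not via sed):

```shell
# toy expansion of a naming pattern such as '$(ARCHIVE)$(SEQ)'
# (illustrative sketch, not the server's actual implementation)
expand_pattern() {
  pattern=$1 archive=$2 seq=$3
  printf '%s\n' "$pattern" \
    | sed -e "s/\$(ARCHIVE)/$archive/" \
          -e "s/\$(SEQ)/$(printf '%04d' "$seq")/"
}
expand_pattern '$(ARCHIVE)$(SEQ)' FC 12   # prints FC0012
```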
When you activate automatic initialization for the first time for a certain pool, it will count the
initialized IXW volumes beginning at 0; this is undesirable if you already have manually
initialized IXW media in that pool. You can explicitly set the sequence counter to a defined
starting number in order to continue the numbering of the already present IXW media; see
ESC article Check / Set the sequence number of the next IXW medium to be burned
(https://esc.ixos.com/1077278356-781) for details.
For customers with Unix-based Archive Server installations upgraded from an original release
≤ 3.5, there is an important restriction: WORM partitions that were initialized by IXOS's old
jukebox service ixwd cannot be finalized at all. To benefit from finalization, those
WORMs must first be copied to new ones, which can then be finalized.
The possibility to finalize all partitions of a certain pool manually is useful in just one specific
situation: After you have done an Archive Server upgrade from a pre-5.0 version to 5.0 or 5.5,
you may have a vast number of WORMs that can now be finalized. Although finalizing "old"
WORMs does not influence the safety of the stored data, you should make use of this
possibility in order to discharge a considerable amount of WORM management data from the
WORM filesystem database.
The choice between the other two variants of doing the finalization - automatic or manual -
is rather a matter of operating preference; both lead to the same result.
• For this, choose media containing documents not often needed
  - e.g. oldest first
  - e.g. least-recently-accessed first
• Retrieve dates of last read access for all media with: cdadm survey -v +oL
The cdadm survey -v +oL command (available as of IXOS-eCONserver 5.0) delivers a list
of all archive volumes and their dates of last read access. Unfortunately, it expresses the dates
as the number of seconds since the Unix epoch, which is hardly human-readable. However,
you may filter the list through the following Perl script in order to obtain a readable form:
while ( <> ) {
    if ( /(^\w+[ \t]+)(\d+)/ ) { print $1 . (scalar localtime $2) . "\n"; }
    else { print; }
}
(This is easily possible even on a Windows-based Archive Server since Perl is included in
every Archive Server installation.)
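The same conversion can also be done in plain shell; a sketch using GNU date, with a sample line standing in for real cdadm survey -v +oL output (the volume name and timestamp are invented):

```shell
# convert "name <epoch-seconds>" lines into readable dates
# (sample input stands in for `cdadm survey -v +oL` output; GNU date assumed)
printf 'FC0001A 1215000000\n' | while read -r name secs; do
  printf '%s %s\n' "$name" "$(date -u -d "@$secs" '+%Y-%m-%d %H:%M')"
done
```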
As soon as the jukebox(es) have been filled up with used media, older media must be taken
out in order to make room for new empty ones; media no longer present in a jukebox are
called unavailable or offline. Afterwards it may happen that some Archive Server user requests
a document from an offline volume. The user then gets a message that this document is
currently offline. The Archive Server Monitor, in turn, displays a warning notice at item
"DocService" → "unavail".
In such a situation, it is the operator's duty to re-insert the requested disk into the jukebox. To
learn which volume(s) are affected: within the Devices section of the Archive Server
Administration, open the Unavailable Partitions window as illustrated above; this
window reveals the volume names in question.
16 Media Migration
Migration of optical media in Archive Server
Chapter guide
Media migration: general aspects
• Motivation
  - Aging of media → data may volatilise
  - Aging of technology → compatible drives may not be available forever
  - Storage migration → migrate to storage system with "virtual jukebox"
  - Expiration of data → after given retention period
  - Re-organisation → apply new features to old documents: compression, encryption
(Chart: the administrator runs the Check migration status utility and removes the old medium
once its contents have been migrated to the new medium)
Volume migration is organized per medium (= "volume", but the two sides of a UDO, WORM or DVD
count separately!). The whole migration process for a single medium is composed of the following
stages:
Creation of migration jobs
1. The administrator starts the migration utility (in the Archive Server administration) and specifies a
selection of media and documents to be migrated.
2. The utility creates one "migration job" per selected medium (stored in the DS database).
Enqueuing document components for migration
3. The Migration Server is triggered by a periodic Archive Server job.
4. The Migration Server reads the migration jobs, i. e. volumes that are queued for migration.
5. It reads from the DS database which document components are stored on the selected volumes
and therefore are to be migrated.
6. It enqueues each found document component in the normal "writing queue" that is also used for
managing the writing process from the disk buffers to optical media.
Copying document components to new media
7. The media write job of the migration target pool is started and reads the "writing queue".
8. It copies the document components from the source media to new target media.
9. It updates the database to reflect which components have been copied.
Updating status of migration jobs
10. The Migration Server is triggered the next time by the corresponding Archive Server job.
11. The Migration Server checks which document components have been copied in the meantime.
12. When all document components of a migration source medium are found to be copied to new
media, that medium's migration job is marked as "finished".
Finishing the migration
13. The administrator invokes the Check migration status utility in the administration client.
14. The utility reads the migration jobs and displays the status for each volume.
15. The administrator removes media whose migration is finished from the system (exports them
from the DS database and removes the media from the storage system).
The items mentioned above reveal that the Migration Server's tasks are more elaborate than
the previous page suggests; worth mentioning here are the check for overdue components and
the limited amount of data processed in one migration run.
Correspondingly, the Migration Server has two important configuration options:
- Max. amount of data to be enqueued per migration run (default: 10 GB)
- Max. period of time after which enqueued, not-yet-written components
  are considered overdue (default: 7 days)
These parameters can be maintained in the Server Configuration page of the Archive Server
Administration, branch Volume Migration → Volume Migration Configuration.
If you want to learn more about Volume Migration options, see Technical Information on
Volume Migration in ESC:
https://esc.ixos.com/1140188069-686
Chapter guide
Preparation steps
The job for running the Migrate_Volumes command is not present on an Archive Server by
default, i. e. you have to create it before you can start migration projects. To do so:
1. In the Archive Server Administration, tab Jobs, right-click the jobs list and choose Add
from the pop-up menu.
2. In the Create Job dialog, enter a name for the job. This name can be chosen freely, but
should preferably be meaningful.
3. Choose the job command Migrate_Volumes from the list, as illustrated above.
4. Schedule the job as desired. Normally, it is most reasonable to have the job run once a
night.
It might also be a good idea to concatenate this migration job and the write job of the
migration target pool; this way, both jobs can do their work without wasting any time,
giving you more flexibility to schedule other jobs (see also chapter Periodic Jobs).
For details about creating a media pool as the migration target, see chapter Configuring
Logical Archives. The only specialty in this situation is the application type Migration, as
illustrated above.
Optional Retention Settings
In order to plan migration of certain storage media, invoke the Migrate components on
volume utility from the Utilities menu in the Archive Server Administration. Specify the
following:
Source volume: Names of the storage volumes to be migrated. Multiple volumes can
be selected by including regular expressions:
- Ranges in square brackets. Multiple ranges <from>-<to> and single values
  can be concatenated with commas as separators.
  This will mostly be used to select ranges of media sequence numbers.
  Caution: Make sure you pad numbers of differing lengths with leading
  zeroes to equal length! (Not: [1-123], but: [001-123])
- The wildcard character '*'.
Make sure you select volumes from a single logical archive only! This is mostly ensured
by the logical archive name as part of the volume names, specified here literally (not by
wildcard).
• Target archive & pool that will contain the newly written media.
Migrate only components that were archived ...: You may optionally select
document components to be migrated according to their time of archiving. This is useful
if, on the occasion of migration, you intend to get rid of expired documents on the
source media.
Do not migrate components ... newer versions ...: This is always reasonable
because newer component versions always supersede older ones; those older
versions are therefore never needed any longer.
The Migration Server will access the selected media to read status information of the
stored documents already during the document enqueuing stage (see also next page); for this
reason, it is mandatory to have the media online (known to the DS database and available in
the storage system) as soon as their migration is planned as described above.
Archive Server ≥ 9.6 additionally allows you to set
- Retention Settings, i.e. number of days
- Additional arguments, i.e. -e for export after migration
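The padding caution above is easy to demonstrate: lexicographic sorting of unpadded sequence numbers breaks the numeric order, while padded names sort as expected (the volume names are invented examples):

```shell
# unpadded names sort wrongly: "FC10A" comes before "FC2A" lexicographically
printf '%s\n' FC2A FC10A | LC_ALL=C sort
# padded names keep lexical and numeric order in agreement
printf '%s\n' FC0002A FC0010A | LC_ALL=C sort
```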
To retrieve the migration status listing shown above, choose Volume Migration Status
from the Utilities menu in the Archive Server Administration. As the first step, selection
options will be displayed, offering to restrict the view to new, in-work, and/or finished
migration items. Here, choosing no option at all is equivalent to choosing everything for
display.
Status New: A migration job for the volume has been created (using the Migrate
components on volume utility), but the Migration Server has not yet begun to enqueue
document components stored on it.
Statuses In progress: The Migration Server has enqueued all components from the
source volume, but not all components are written to their destination media yet.
Status Fin: All enqueued document components of the medium are already written to
destination media (i. e. are listed in table ds_comp). The Migration Server has therefore
purged the corresponding component entries from table vmig_work.
Status Canceled: In Archive Server ≥ 9.6, it is possible to interrupt or cancel the
migration process.
Status Error: Volume migration encountered a problem during the migration process.
Chapter guide
• Additional features
• Document migration
More options are available when using the command line tool vmclient.
Bulk migration of ISO images (3)
• DB connection
Chapter guide
Media migration: general aspects
Migrate Document
Single File FS is the successor to the HD write-thru pool. Unlike hard-disk write-thru, Single
File FS uses the Diskbuffer.
This call migrates the specified document to the specified logical archive or pool. If no pool is
specified, the default pool of the target logical archive is used. If neither a pool nor a target
logical archive is specified, the default pool of the mandatory logical archive is used. If a
volume is specified, only the components of this volume are migrated. The migration of all
component versions can be forced (otherwise only the latest will be migrated). If no retention
parameter is specified, the default retention value of the specified archive is used.
dsClient:
docMigr doc vol archId pool onlyMax reten flags
  vol:     Source volume
  archId:  Source archive
  pool:    Target pool
  onlyMax: 1 = only newest, 0 = all
  reten:   usual values (days/infinite/event)
  flags:   1 = overwrite data and generate additional dsjob entry
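Such a call could be fed to dsClient from the command line; a sketch in which the document ID, volume, archive, and pool names are invented placeholders:

```shell
# build an illustrative docMigr command line; all identifiers are made up
doc_id="aaaa00010000f00000000001"
cmd="docMigr $doc_id FC0001A FC FC_MIG 1 infinite 0"
echo "$cmd"
# the real call would be piped into dsClient, e.g.:
# echo "$cmd" | dsClient localhost dsadmin <password>
```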
Serving as a continuation of chapter Handling Optical Archive Media, this chapter discusses
the media state transitions illustrated above:
A As soon as the retention period of all documents on a medium has passed, the medium
and its contents can be exported from the storage database; the Livelink Enterprise
Archive Server then "forgets" about those documents.
B However, exporting a medium is not one-way. An exported medium can be re-imported
into the server again - or, as well, be imported into a different Livelink Enterprise
Archive Server which then possesses the media contents. This is especially useful in
situations where you want to move all your stored contents to new server hardware.
The following pages present details about how to perform each of the mentioned transitions.
On command line:
C:\> dsTools export <volume_name>
C:\> dsClient localhost dsadmin 1111
In order to remove media containing expired documents from the Livelink Enterprise Archive
Server permanently, invoke the Export Partition dialog of the Livelink Enterprise Archive
Server Administration as illustrated above. Set the checkmarks in the dialog box as shown
above and confirm with OK. A message window will then appear, showing the progress of the
database export procedure as well as possible error messages.
Exporting an unfinalized IXW medium from the Livelink Enterprise Archive Server, i.e. removing
its filesystem administration data from STORM's database, is a separate action. For details, see
the Livelink Enterprise Archive Server Administration Guide, section Exporting non-finalized
IXW partitions; currently this is section 6.3.3.3.
Notes:
Media should be exported according to the time of last writing. This way, you
make sure that all documents stored on them really have expired.
• About the export options in the Export Partition(s) dialog (illustrated above):
Export from DB: Without this option (i. e. in the standard case), the export tool
scans the medium in question for documents to be exported, then it deletes from
the DS database all data about exactly the found documents. In case of
inconsistencies between database and medium, this prevents erroneous deletion
of "wrong" documents.
With this option enabled, the medium is not touched; instead, the information
which documents are to be exported is taken from the database itself. If the
database and the medium are consistent to each other, the result is the same in
both cases.
Conclusion: Not using this option is the safer variant. Use it only in emergency
cases, i. e. if the medium is no longer accessible (lost or destroyed).
Export Partition Name: Using this option, the database "forgets" about the medium
itself along with the documents stored on it. This option should always be
enabled (otherwise re-importing the medium later would cause trouble).
For the sake of data loss protection, never dispose of any archiving volumes!
Single-instance archiving (SIA)
If so:
2. Export medium, preserving referenced SIA target documents
- dsTools export <volume_name>
Handling the export of media containing SIA targets involves the dsTools tool, introduced
on the previous page, as well as the media migration facility discussed in chapter Media
Migration. The step sequence described above utilizes the fact that the media migration tool
copies only those documents to new media which are known to the DS database. Using
dsTools to "forget" all other documents in advance (step 2) leads to the behaviour desired
here: only those documents are preserved on new media that are still referenced by SIA
"sources", which means that they may be requested for access in the future.
• Usefulness:
  - Moving stored data to another server
  - Re-importing data that has been erroneously exported
• Medium must be in jukebox
Media, once their contents have been exported from the storage database, can be re-imported
again (for example, if they have been exported erroneously). For this, right-click the medium in
the jukebox contents list in the Livelink Enterprise Archive Server Administration, as
illustrated above, and use the appropriate Import ... Partition(s) item of the
Utilities context menu.
For a normal medium import, the default options of the import dialog can be used.
The Import ... Partition(s) windows are GUI versions of the command line tool dsTools.
To see the additional arguments, start dsTools on the command line; it will list all
parameters, for example:
-q        speeds up the import by not determining component lengths from the
          compression header
-t <days> only imports documents newer than <days> days; speeds up recovery from a
          DB crash where the latest DB backup is less than <days> days old
• Media import with Archive Server ≥ 9.6
  - remembers previously deleted documents and media
  - deleted documents tracked in table ds_deleted
  - reconstruction of the index in DS is prevented; import of such media is denied
In Archive Server versions up to 9.5, when importing media with documents that were
previously deleted, the documents would be in the system again.
With 9.6, the DS database remembers documents that have been deleted before. This
information is stored in table ds_deleted. If you try to import media containing such deleted
documents, the import of the deleted documents is prevented.
" Usefulness
- After restoring an original WORM from the backup
- When suspecting damage of a storage medium
The "export component" repair option is somewhat dangerous: Depending on the exact type of
inconsistency, you may lose references to documents that are still stored somewhere in the
archive. But even if there is no more instance of the document within the archive, recovering
the document from external sources (if that is applicable) may require the reference
information in the database to still exist.
Therefore: Use this repair option only if you are sure that you do not need the missing
documents any longer! If in doubt, rather contact Open Text Support for help.
Start via
Utilities
menu
Parameter entry
for utility (example)
Consistency Checks for Storage Media and Database Slide 4
Messages window opens when the utility is started
• Usefulness
  - Database recovery
  - When suspecting problems with the database contents
Parameter entry
for utility (example)
Consistency Checks for Storage Media and Database Slide 7
• Usefulness
  - When suspecting any kind of problem with a storage medium
Check document
• Usefulness
- Analyzing trouble accessing a specific document
The repair option of this check utility is somewhat dangerous: If a document component is
missing on the referenced storage volume and it is not known to be stored on any other
volume, the utility would delete this "dead" reference to the missing component. Depending on
the exact type of inconsistency, this may cause the database to "forget" document components
that are still stored somewhere in the archive. But even if there is no more instance of a
missing component within the archive, recovering the component from external sources (if that
is applicable) may require the reference information in the database to still exist.
Therefore: Use the repair option only if you are sure that you do not need the missing
document components any longer! If in doubt, rather contact Open Text Support for help.
• Usefulness
  - When suspecting corruption of WORM backups
(Chart: original volume compared against its backup(s))
• RemoteStandby
  - Remotely replicated archives and buffers
• HotStandby
  - Automatic failover system
• CacheServer
  - Separate server minimizing network load for read & write access
(Chart: logical volumes on RAID, jukebox with backup media, optional second jukebox;
fire protection wall possible)
An Archive Server with one jukebox and backup copies of the media is the standard minimum
configuration. RAID 1 or 5 is used for the Archive Server's hard disk space.
This scenario assures that all data archived on optical disks is stored on duplicate partitions.
As the duplicate partitions are produced in the same physical jukebox where the originals
reside, the duplicates should be removed to a safe place for maximum security.
To improve protection against hardware failure and natural disaster you can create backup
copies in a separate jukebox.
RemoteStandby
(Chart: original Archive Server replicating over a WAN connection to the RemoteStandby server)
This configuration supports remote replication. With it you can replicate archives and pools. In
the RemoteStandby scenario, a fully functional, remote Archive Server is capable of replicating
the archived data of an original Archive Server over great distances via a WAN connection.
The configuration is implemented from the RemoteStandby server (a maximum of three
RemoteStandby servers can be configured). The RemoteStandby server asynchronously
replicates the archives and hard disk buffers of the desired original server.
The replication interval is specified on the RemoteStandby server. It is performed as a
"synchronize" function from the RemoteStandby.
A RemoteStandby server provides read-access to its replicated archives. Should anything
happen to the original server, all archived documents present at the time of the last "replicate
synchronization" can be retrieved from the RemoteStandby server.
The configuration "original archive - RemoteStandby archive" may be a reciprocal one. An
Archive Server may be an original server as well as a RemoteStandby server for a second
original server. This configuration provides two major advantages. First, you exploit the
hardware available to you by giving it double-duty. Second, access to a document retrieved
from a local replicate archive is much faster than retrieving the identical document from the
original server thousands of miles away.
• User waits for response
• Wait up to 120 seconds before switch-over
• Up to 3 servers can be configured
• Server priorities can be defined
The next three slides describe the process of retrieving a document in case of a hardware
failure.
We assume that there are three Archive Servers in the company. The clients connect by
default to server 1.
Server Priorities
In a remote standby configuration, documents can be requested from both the original
server and the remote standby servers. You use this command to define the sequence
in which the servers are accessed for each replicated archive. It is usually quickest and
most efficient to access the closest server.
It is not important on which server you specify the server access sequence. The setting
affects all the known servers.
(Chart: original jukebox and backup jukebox)
A HotStandby server is the key component of this configuration. The Archive Server high
availability system guards against loss of time as well as against loss of data. This
scenario provides a fully functional second Archive Server capable of taking over operations
automatically if the original server should fail for any reason. The HotStandby server monitors
the original server; in the case of system failure, the HotStandby takes over automatically. In
Archive Server, this is referred to as an automatic failover system.
In the automatic failover configuration, two Archive Servers access the same RAID-protected
hard disk partitions, although not at the same time. The HotStandby server is connected to one
or more jukeboxes containing backups of the original archived documents. This is
implemented by backup jobs that run regularly between the two. The hard disk buffer and
pools are shared and they are protected with RAID 1.
If the original server should fail, the HotStandby starts automatically, working with the data
stored on the commonly accessed hard disk partitions. By means of a fire protection wall, the
automatic failover scenario can also protect against the threat of fire.
Distances of up to several kilometers between the cluster nodes are possible.
[Slide: two jukeboxes connected over the network - replicate partitions]
WAN
The CacheServer caches every document that someone has viewed in your local
network. This prevents the WAN link from becoming a performance bottleneck for further read
accesses. When a document is archived, the CacheServer transmits the document to the
connected Archive Server immediately ("write-through") and keeps a copy in its local cache.
In addition to enhancing read request performance, using the write-back cache feature
(Archive Server ≥ 9.6.1) can also reduce WAN load for write requests. Writing to the Archive
Server is delayed and can be performed, e.g., during the night when less load is expected.
Cache Server Scenario
Local sites:
- Scanning
- Retrieval mainly of local documents
Documents are fetched from the local CacheServer
This overview shows the main advantages of the different solutions. No solution alone can
protect you against every potential problem. Each Archive Server customer has to choose an
optimal solution according to risks, cost, and main concerns.
Chapter guide
Introduction
Remote Standby
[Slide: replication from one Archive Server to another]
This configuration supports remote replication. With it you can replicate archives and pools. In
the RemoteStandby scenario, a fully functional, remote Archive Server is capable of replicating
the archived data of an original Archive Server over significant distances by virtue of a
WAN connection. The configuration is implemented from the RemoteStandby server (a
maximum of three RemoteStandby servers can be configured). The RemoteStandby server
replicates asynchronously the archives and hard disk buffers of the desired original server.
The replication interval is specified on the RemoteStandby server. It is performed as a
"synchronize" function from the RemoteStandby.
A RemoteStandby server provides read-access to its replicated archives. Should anything
happen to the original server, all archived documents present at the time of the last "replicate
synchronization" can be retrieved from the RemoteStandby server.
The configuration "original archive - RemoteStandby archive" may be a reciprocal one. An
Archive Server may be an original server as well as a RemoteStandby server for a second
original server. This configuration provides two major advantages. First, you exploit the
hardware available to you by giving it double-duty. Second, access to a document retrieved
from a local replicate archive is much faster than retrieving the identical document from the
original server thousands of miles away.
One style of RemoteStandby operation is the one illustrated above: Having multiple original
servers, possibly geographically distributed, plus one central RemoteStandby server backing
up all of them.
In addition to setting up the replication configuration for disk buffers and logical archives, the
server administrator or operator has to perform the following steps:
• For each replicated disk buffer, replicates of all assigned original buffer volumes (i.e.
hard disk partitions) must be provided and initialized on the RemoteStandby server.
• For all original IXW media used by replicated archives, replicate IXW media must be
provided and initialized on the RemoteStandby server (this task may be automated).
This is an ongoing task since new IXW media will be allocated by the original server
regularly.
• ISO media, however, do not need to be initialized explicitly on the RemoteStandby
server; it is sufficient to provide enough blank media there. The replication job will take
an arbitrary blank medium and fill it whenever a new medium has been written on the
original server.
The remaining media operation tasks - labelling, storing backups away, setting offline and
online as needed - are the same for replicates as for media on the original server; see
chapter Media Operating for more information.
Introduction
On original server:
The first configuration step is to enable the backup option - illustrated above - on the
original server (unless it is already enabled, especially if the server has already been involved
in a Remote Standby setup). This option makes the server record all changes to hard disk
volumes (of disk buffers or hard disk pools (HDSK, FS, VI)) for promoting them to the Remote
Standby server later.
For checking/setting the option, the Server Configuration page of the Archive Server
Administration can be used (as shown above); see chapter Where to Find What for more
information about this.
Before replication of logical archives and disk buffers can be configured, both involved Archive
Server must be made known to each other - as illustrated above, in the Archive Server
Administration.
When making the Remote Standby server known to the original server, make sure to enable
the Allow replication option - otherwise the original server will deny sharing its data
with the Remote Standby server.
Configure backup properties of replicated pools
As soon as original and Remote Standby servers know each other (see previous page),
configuring the replication of a logical archive is fairly easy:
1. Within the Archive Server Administration, connect to the Remote Standby server.
2. Within tab Servers, structure item Known Servers, navigate to the desired logical
archive on the original server.
3. Right-click the archive, choose Replicate from the context menu, and confirm the "do
you really want ... " dialog (not shown above).
4. A dialog Edit Replicate ... Pool will be displayed, asking you to configure the
properties of the pool replicate. These properties are only a subset of a "normal" media
pool's properties; they are just those related to asynchronous media backup - here
they will be applied to remote replication. Configure these properties as desired; see
chapter Configuring Logical Archives for a general discussion of their meaning.
If the original logical archive possesses more than one media pool, this step will be
repeated for each further pool.
These steps tell the Remote Standby server to perform remote replication for the chosen
logical archive. However, before the replication can be performed, appropriate storage media
have to be provided on the Remote Standby server; see section Providing media replicates on
Remote Standby server later in this chapter for more information.
Result:
For a complete remote replication, it is necessary to replicate all original disk buffers as well, in
order to grasp also those documents that have not yet been written to optical media when
replication starts.
To configure replication for a disk buffer:
1. Within the Archive Server Administration, connect to the Remote Standby server.
2. Within tab Servers, structure item Known Servers, navigate to the desired disk
buffer on the original server.
3. Right-click the disk buffer and choose Replicate from the context menu.
4. In the Replicate buffer dialog, enter a name for the disk buffer replicate. This can
be the original disk buffer name - unless the Remote Standby server itself has already
a disk buffer with the same name; in this situation, a different name must be specified.
These steps tell the Remote Standby server to perform remote replication for the chosen disk
buffer. However, before the replication can be performed, appropriate hard disk volumes have
to be assigned to the buffer replicate; see section Providing media replicates on Remote
Standby server later in this chapter for more information.
Chapter guide
Introduction
[Screenshot: Archive Server Administration tree - Archives, Cache Partitions, Buffers (Buffer1, Buffer2), Devices]
The chart above reveals how the replication configuration and status is reflected in the Archive
Server Administration (when you are connected to the Remote Standby server):
For each replicated logical archive and disk buffer, you can see the name of the server
hosting the "original" archive or buffer.
Additionally, for each replicated disk buffer you see the name it has on its original
server. This is necessary because the original buffer and its replicate may have
different names (to avoid naming conflicts).
For each storage medium assigned to the original archive or disk buffer, you see the
state of its replicate. The most important information here is whether a replicate already
exists or not. If a replicate is missing, the administrator has to provide an appropriate
medium for this purpose; the exact way depends on the type of medium:
- Missing IXW and hard disk media have to be initialized explicitly before
replication can be carried out.
- Missing ISO media do not need to be initialized; it is sufficient to supply empty
media in the jukebox of the Remote Standby server.
Details about handling media replicates are given on the following pages.
If a hard disk partition replicate is marked as "missing" on the Remote Standby server, a
suitable hard disk partition has to be provided for that purpose. Firstly, create such a partition
on operating system level. Make sure its capacity is at least the same as the original partition,
otherwise not all data held in the original can be replicated later!
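The capacity requirement can be sanity-checked with a short script before assigning the partition. The sketch below uses invented placeholder sizes; on a real system the figures would come from the operating system (e.g. df -m):

```shell
# Hedged sketch: verify that a prepared replicate partition is at least as
# large as the original partition. Sizes in MB are placeholder values.
orig_mb=20480   # capacity of the original hard disk partition
repl_mb=24576   # capacity of the partition prepared for the replicate

if [ "$repl_mb" -ge "$orig_mb" ]; then
  result="ok"     # replicate partition is large enough
else
  result="too small"   # not all original data could be replicated later
fi
echo "replicate partition check: $result"
```

This is only a pre-check; the actual assignment is still done in the Archive Server Administration as described below.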
To dedicate the created partition to the purpose of replication, connect to the Remote Standby
server with the Archive Server Administration and follow the steps illustrated above:
1. In tab Servers, choose Devices > HardDisk in the left-hand structure display.
2. Right-click the empty space in the right-hand Partitions list and choose Create
from the pop-up menu.
3. In the Create HardDisk Partition dialog, choose option Create as
replicated partition.
4. Click button Select Partition.
5. In the Select Replicated Partition dialog, select the name of the original
partition you are preparing the replicate for, then confirm.
6. Specify the Mount path of the hard disk partition you have prepared on operating
system level and confirm.
You can then review the replicate status by selecting the disk buffer replicate in the
Servers structure. The assigned partition should now appear with type "replicate".
These steps have to be performed for all hard disk partitions of replicated disk buffers and
hard disk pools.
For each IXW medium added to the pool of a replicated archive, a corresponding IXW medium
replicate has to be initialized on the Remote Standby server. You recognize the necessity of
this action by the status "missing".
To initialize an empty IXW medium on the Remote Standby server for replication, connect to the
Remote Standby server with the Archive Server Administration and follow the steps illustrated
above:
1. In tab Servers, choose Devices, then click the name of the IXW media jukebox that
shall contain the IXW media replicate. Right-click an IXW media partition in right-hand
Partitions list and choose Init from the pop-up menu.
Note: When choosing an IXW medium for replication, make sure it has the same block size and
capacity as the corresponding original - otherwise replication will not work! (This is no
issue as long as the same type of IXW media is used on all involved servers.)
2. In the Initialize Jukebox Partition dialog, choose Option Create as
replicated partition.
3. Click button Select Partition.
4. In the Select Replicated Partition dialog, select the name of the original WORM
you are preparing the replicate for, then confirm.
5. Confirm the Initialize Jukebox Partition dialog. You can then review the
replicate status by selecting the pool of the archive replicate in the Servers structure.
The initialized IXW media should now appear with type "replicate".
These steps have to be performed for every new IXW media volume of replicated logical
archives with IXW media pools.
For each newly burned ISO medium of a replicated archive, the replication job wants to create
a corresponding replicate on the Remote Standby server. You recognize this "waiting" state by
the status "missing".
As opposed to hard disk and WORM volumes - discussed on the previous pages - ISO
media replicates do not need to be initialized in advance. Instead, the replication job picks an
available blank medium from the jukebox and performs the steps of initializing and assigning
implicitly during the writing process.
It is the administrator's task to always provide suitable blank media in all Remote Standby
jukeboxes. This includes the condition that replicate media must have the same type and
capacity as the originals used. Moreover, consistent use of either single- or double-sided
DVDs for both originals and replicates is strongly recommended (although not strictly
necessary).
Chapter guide
Introduction
[Slide: replicate status after every synchronize run - as shown on the Remote Standby server and on the original server]
The current status of media replicates can be reviewed in the Archive Server Administration as
illustrated above. The most important information is the point in time when a replicate medium
was last accessed for writing ("Last Backup/Replication").
In addition, on the Archive Server hosting the original archive or disk buffer, you see the name
of the Remote Standby server holding the displayed replicate. This is important if an archive or
disk buffer is replicated to more than one Remote Standby server; in this situation, you have
a complete overview of which replicates are kept where and when each was last synchronized
with the original.
Chapter guide
Introduction
[Slides: replication scenarios -
1. Original: Centera with Archive Server; replication either on Centera or on optical media.
2. Original: Centera; replication: Centera. Logical archives are replicated: disk buffers and content.]
[Slide: replication between two Archive Servers using HDS DRU / HP XP storage systems]
Archive Server replication
[Slide: Remote Standby system - Archive Server with filer as hard disk]
The three graphical administration tools should be installed on the computer the Archive
Server administrator uses. This makes it possible to administer the Archive Server remotely.
However, if the admin workstation has graphical remote access to the Archive Server, you
may prefer to use the administration tools on the Archive Server directly; in this case, you can
omit installing the admin tools on your own workstation computer.
For installations using Archive Server ≥ 5.x, components of the Archive Server and the clients
(Viewer & Scan clients) should never be mixed on one machine.
Therefore, never install client components on the Archive Server.
If you want to use the administration client on an Enterprise Scan machine, install it from the
Enterprise Scan client CD (≥ 9.5).
See also the reference to the patch for Enterprise Scan 5.1 on the next slide.
Additional considerations
22 Periodic Jobs
Organizing recurring tasks on the Archive Server
This and the following page give a complete list of tasks that are normally done as jobs on the
Archive Server. (It is possible, however, to implement jobs for further administrative tasks, but
this is beyond the Archive Server standard; the topic is discussed in the course 715 Archive
Server Advanced Admin.)
Archive Server jobs are created and set up at different points of time:
• All jobs related to media pools / disk buffers are created (and also deleted) along with
the pool/buffer they are linked to; there is no need to self-define jobs for these
purposes.
Some jobs fulfilling "global" administrative tasks (i. e. not related to specific
configuration objects, such as pools and buffers) are already part of the initial Archive
Server setup; this applies to all above-mentioned jobs with a given standard job name.
Some other "global" jobs are not set up at server installation time; you have to create
them yourself once you need their functionality. Concerning the list of jobs given above,
this applies to the "start media migration" task.
The typical schedule entries in the table above are meant as very general suggestions, valid
only as long as no special preconditions apply. The administrator is responsible for deviating
from these rules of thumb wherever necessary. Example: If the average amount of data received
daily into a specific ISO media archive exceeds the capacity of one ISO medium, the ISO write
job has to be scheduled to run more than once a day - otherwise documents would
continuously queue up in the disk buffer.
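This rule of thumb can be turned into a simple calculation. The daily volume and media capacity below are invented example figures, not values from the course:

```shell
# Minimum number of ISO write job runs per day, assuming (hypothetically)
# 12 GB of new documents per day and 4.7 GB capacity per ISO medium.
daily_mb=12000
media_mb=4700

# ceiling division: runs = ceil(daily_mb / media_mb)
runs=$(( (daily_mb + media_mb - 1) / media_mb ))
echo "schedule the ISO write job at least $runs times per day"
```

With these figures, three runs per day would be the minimum to keep the disk buffer from filling up.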
The resource-critical column in the table above indicates which jobs have to be scheduled with
special care: They allocate/consume certain resources (like media drives or database activity) to
fulfill their tasks, thus they should be scheduled in a cooperative way so that they do not lock
each other out. (More information is given later in this chapter.)
All jobs mentioned above under "global, server-related" are already part of the initial Archive
Server setup. The task remaining for the administrator is to schedule them appropriately as
part of the overall job scheduling concept.
See the previous page about the meaning of the typical schedule and resource-critical
columns.
Audit Trails are available with Archive Server ≥ 9.6. Old administrative audit entries need to be
cleaned up regularly. See chapter Configuring Audit Trails for details.
Further reading
Find the mentioned ESC article, entitled Scheduling jobs in Archive Administration, at:
https://esc.ixos.com/l008339174-156
It gives further information about the resource consumption of different types of jobs as well as
"best practice" examples for different scheduling concepts.
[Chart: example job schedule across 24 hours - 12 AM, 6 AM, 12 PM, 6 PM, 12 AM]
In the illustration above, the Archive Server configuration comprises two logical archives: A1
and A2.
The shown job schedule is characterized by the following considerations:
• During business hours - i.e. when new documents are expected to arrive on the
Archive Server - the Write_WORM jobs for the two IXW media pools run very
frequently (e.g. every half hour) in order to store documents securely on IXW media
as soon as possible.
• The two other main tasks to be executed while the Archive Server is running, backing
up WORM data and purging the disk buffer, are arranged in a way that they do not
interfere with each other and with the IXW media write processes.
• The offline database backup must also be done while no other jobs are scheduled
because all Archive Server processes have to be shut down for this.
Some other scheduled jobs, like refreshing configuration information about other
Archive Server and cleaning up expired job protocol entries, can be done concurrently
to other jobs without any problems.
However, the above scheme is just an example. Every Archive Server administrator is
responsible to find a solution that suits the individual situation at his own company.
Moreover, the job schedule and coordination have to be checked and possibly changed
whenever the Archive Server configuration is changed. For example, when a new logical
archive with ISO pool is introduced, another Write_CD job - which may take between one
and two hours execution time, depending on the speed of the ISO media writer drive - must
be integrated into the existing schedule.
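For readers familiar with cron, the example schedule can be summarized in cron-like notation. This is illustrative only - the Archive Server scheduler uses its own Month/Day/Hour fields, not crontab, and the times shown are the invented ones from the example above:

```shell
# Cron-style summary of the example schedule (illustration, not real config)
schedule='
*/30 8-18 * * *   Write_WORM (A1 pool)   - every half hour in business hours
*/30 8-18 * * *   Write_WORM (A2 pool)
0 20 * * *        Backup_WORM            - after business hours
0 22 * * *        Purge_Buffer           - after the WORM backup has finished
0 2 * * *         offline DB backup      - all Archive Server processes down
'
echo "$schedule"
```

The key property to preserve when translating such a plan into the scheduler is the ordering: media write, then backup, then purge, with the offline database backup in a window of its own.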
Jobs administration
Job is queued for running as soon as resources are available
The Jobs property sheet of the Archive Server Administration shows the list of all defined jobs on the Archive Server. The table yields the following information:
First (unnamed) column: If the job icon is grayed out, the job is currently disabled, i. e. ignored by the
scheduler.
The column may also display a further icon if the job is currently being executed, queued to run at the next
possible time, or stopping after an explicit stop command.
Name of the job.
Command to be executed as a job. (It is not normally useful to edit this.)
Month, Day, Hour, etc.: The job's schedule, if specified. A job may also be configured to run immediately after a
certain other job has finished; this is displayed in the Job dependencies list of the dialog.
Right-clicking the jobs list opens the pop-up menu shown above, offering functions for job configuration and
operating:
Add a job to the list. (This is not necessary for media write and buffer purge jobs; these are created
automatically along with the object they belong to.)
Edit the job's command name, arguments, and - most important for configuration maintenance - the job
schedule.
Remove a job from the list. (This is not normally needed. If you want a job to no longer be executed, rather
disable it; see below.)
Enable or disable a job. When disabled, the job will no longer be executed by the scheduler; this may be useful
for certain troubleshooting situations.
Messages opens a window where you can view log messages of a currently running job. This is mainly useful
for troubleshooting and not normally used during regular operation.
Now lets you invoke a job manually.
Stop interrupts a currently running job. Not all jobs allow this; e. g. ISO write jobs cannot be interrupted.
Protocol opens a window showing a list of job invocation log entries, revealing success or failure of each job
run. For more information, see chapter Monitoring the Archive Server.
Stop/Start Scheduler completely switches execution of scheduled jobs off/on.
Examples:
- Job is triggered by scripts instead
- Redundant jobs not needed in scenario
- Temporarily disable a job for a certain period
  → Can be enabled at a later time
Administrators who are using their own scripts to trigger certain jobs need to be aware of the
regular job scheduling in the Job Administration. Disabling jobs can prevent collisions with
scripts running the specific tasks.
Disabled jobs can be enabled again at any time.
Choosing Edit in the context menu of the Archive Server Administration's Jobs page opens
the Edit Job dialog of the selected job, as illustrated above.
The job properties Command and Arguments should not normally be edited since their default
values are always appropriate. Exceptions include:
Adding option -b to the arguments of a disk buffer purge job makes the job purge
documents only after they have been saved on backup IXW media; see chapter Disk
Buffer Configuration.
Application-specific jobs - mainly those starting batch import of documents - may
honor certain arguments (project-dependent).
The job time limit - a feature introduced in version 5.0 - can be used to make sure that
certain jobs are finished at a defined point of time during a day. For example, you can force a
disk buffer purge job to terminate in time before the database is shut down for an offline
backup.
Note that most, but not all types of jobs honor this time-driven interruption: All activities dealing
with burning ISO media (DVD, WORM write jobs, local backup job - if applied to ISO pools)
will simply keep running even if they receive an interrupt request.
• "Time Frame":
- Job will be invoked only during the given period
  → e.g. only during the night
- Running job will not be interrupted on exceeding the period!
• Example concatenation:
1. Backup job for IXW media
2. Purge disk buffer job
The tasks mentioned above are indeed tasks to be performed regularly, but they cannot be
accomplished by the Archive Server's built-in scheduler. (The scheduler needs a running
database for operation and therefore cannot invoke an offline database backup.) However,
they are mentioned here because they must be coordinated with the other system jobs - for
example, an offline database backup requires all Archive Server processes to be stopped; no
other periodic jobs can be executed during this downtime at all.
The idea of an audit is that all activities and changes in the system are tracked and that audit
information can be provided for legal purposes or for documentation.
Purge in this context means the removal of outdated audit entries from the database. This is
necessary to keep the database at a reasonable size.
The ADMS job deletes entries that have reached a certain document age. This parameter can
be configured.
You can access the http API either via http call or using the dsh Tool.
Deletion Holds
Global Deletion Hold is usually turned off after a restart (temporary). It can be configured to be
persistent in the Server Configuration:
Server Configuration: DS > System settings > Default runlevel
Chapter guide
Not necessary:
• Contains copies of already saved data only
• HD crash does not affect system availability
The basic HD protection rule is: All HD partitions that may hold the only instance of a
document must be protected against data loss by mirroring or RAID.
Suitable techniques to avoid data loss in the event of a hard disk crash include:
• RAID 1 (= one-to-one mirroring)
• RAID 5 (= striping with parity)
IXOS supports both with equal preference.
Chapter guide
Configure IXW pool: select creation of backup IXW media
Unlike backup ISO media, which can be removed from the jukebox immediately after they
have been created, backup IXW media must reside in the jukebox as long as their original
counterpart is being written to - because the backup IXW medium is synchronized with the
original incrementally. As soon as the original has been filled completely and its backup has
been synchronized a last time, the backup can be removed and stored in a safe place; see
chapter Handling Optical Archive Media for more information.
Using Archive Server ≥ 4.2, there is an additional option in the WORM write configuration:
"Delete from disk buffer after copy". Never select this option for a pool with production data! You
always need the disk buffer as a temporary backup between writing a document to the original
WORM volume and duplicating it to the backup WORM. (For test data, however, this is not
necessary.)
Chapter guide
Not useful:
• Data stored here for a very short period only, or
• contains copies of already saved data only
Regularly backing up the diverse hard disk areas used by the Archive Server is a necessary precondition
for data recovery after a hard disk crash. However, such a recovery may serve different purposes:
Disk buffer, hard disk pool: A crash can lead to loss of original documents here, therefore backups are
mandatory for data loss protection.
DS database, WORM filesystem database: Their contents can be restored from the storage media
containing the actual documents. However, with a large archive this can be an extremely
time-consuming process as all optical disks must be read in again. Backing up these databases helps to
restore the system much faster.
In addition to document management data, the DS database contains information about the
Archive Server configuration (logical archives, pools, jobs, etc.). This part of the database cannot
be recovered without a database backup! As a consequence of a total database loss, you would
have to recreate your server configuration manually (which is, of course, far less harmful than a
loss of archived documents).
The WORM filesystem database is present on an Archive Server with WORM media only; on
other installations, there is nothing to be backed up here.
Software installation (operating system, database system, Livelink Enterprise Archive Server):
backing up these items helps to recover the whole system rapidly after a crash of the system disk.
Attention: The STORM configuration files are located within the Archive Server Software
installation! (see also page: STORM files backup).
Cache: No data loss can happen here since the cache contains only documents that are already saved
on optical disks. Nevertheless: after a loss of the cache, its whole contents have to be reloaded from the
optical disks upon corresponding retrieval requests; during that period, the server's retrieval performance
would be considerably degraded. Backing up the cache therefore helps to retain good system
performance across a cache loss.
Burn buffer, temporary storage for WORM writing: If one of these is lost, only write jobs for
the corresponding optical media are disturbed; the users do not even notice such a problem. After
mounting a new hard disk, these write processes can immediately start working anew; no data recovery
- and therefore no backup - is necessary.
DocumentPipeline: Documents normally pass the DocumentPipeline in a very short time (seconds or
minutes); moreover, the DocumentPipeline can be backed up in offline state only. As a consequence, a
tape backup would never find any data to be backed up and can thus be omitted.
Offline backup (Livelink Enterprise Archive Server)
- Using common backup tools (TSM, Data Protector, Legato, etc.)
- Archive Server services must be shut down
Setting partitions to write-locked status can be done manually in the Archive Server
Administration. However, for an automatic backup procedure, a scriptable way to do this is
necessary. The Archive Server command line tool dsClient (available on every Archive Server)
can be used in a Unix shell the following way:
dsClient localhost dsadmin <dsadmin-password> <<EOT
chgVolS <volume_name> wrlock
end
EOT
volume_name here is the logical name of the partition, as assigned and visible in the Archive
Server Administration. For unlocking the partition, replace wrlock by zero.
For a Windows batch script, you cannot use the <<EOT construct; instead, write the chgVolS
and end commands to a file and invoke dsClient this way:
dsClient localhost dsadmin <dsadmin-password> < filename
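The command-file variant can be prepared the same way on Unix. The sketch below only generates the file and prints the invocation rather than actually calling dsClient; the volume name is a placeholder and the password is omitted:

```shell
# Write the dsClient commands to a temporary file, then show the invocation.
CMDFILE=$(mktemp)
cat > "$CMDFILE" <<EOT
chgVolS buffer1_vol01 wrlock
end
EOT

# The real call would then be (password omitted here):
echo "dsClient localhost dsadmin <dsadmin-password> < $CMDFILE"
```

Such a command file can be reused by the regular backup script night after night; only the volume name changes per partition.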
• Necessary to protect data that cannot be written to optical disks for more than a day
  - For ISO pools with low archiving traffic
  - In case of media write interruptions, e.g.:
    • broken network connection to storage system
    • jukebox damage
    • storage system failure
• During online backup, ensure that no periodic jobs are executed on the disk buffer:
  - ISO or IXW write
  - Purge buffer
Database backup
Attention: In case you use online backup of the file system of your
software installation, you may run into an error (see also the note part of this page).
Attention: The STORM configuration files are located within the Archive Server software
installation! They must never be part of an online backup.
If you use online backup for the software installation, the following files have to be excluded
from the online backup:
• config/storm/*
• all parts of the WORM file system (section ixworm of server.cfg),
including DataFilePath defined in section ixworm of server.cfg
The job Save_Storm_Files will take care of a valid online backup of the STORM files.
After the backup is made: store backup files away (e.g. on tape)
Both backup methods mentioned above - the jbdbackup command line utility and the
Save_Storm_Files job in the Archive Server Administration - create a copy of all relevant
STORM files on the server's hard disks. The destination of this backup copy is specified in
STORM's configuration file server.cfg (see chapter Where to Find What). Therein you will
find a section like:
backup {
  list { dest1 }
  backuproot {
    dest1 {
      path { V:/jbd_backup }
      size { 1024 }
    }
  }
}
Here, V:/jbd_backup is the backup destination directory (which must actually exist when
the backup is started; it will not be created automatically!). Multiple directories may be
specified instead of a single one in order to spread the backup copy over several hard disk
volumes; this way, capacity problems can be avoided in case the WORM database is very
large.
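A multi-destination setup might then look as follows. This is only a sketch that assumes the same configuration syntax as the single-destination example; the name dest2 and both paths are made-up examples:

```
backup {
  list { dest1 dest2 }
  backuproot {
    dest1 {
      path { V:/jbd_backup_1 }
      size { 1024 }
    }
    dest2 {
      path { W:/jbd_backup_2 }
      size { 1024 }
    }
  }
}
```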
After the backup copy has been made, it is your task to store the backup safely away,
preferably on a tape.
Using IXOS-ARCHIVE ≤ 4.2, a true online backup of the STORM files is not possible. Instead,
you can perform a "dirty" online backup: STORM is brought down while all other eCONserver
processes are kept running (that is why it is called "dirty" here), then an offline backup is made,
finally STORM is started again. Due to STORM being shut down during the backup, no access
to optical disks is possible at all in the meantime.
Exactly this procedure is performed by the Save_Storm_Files job on an Archive Server ≤ 4.2.
Software backup
• Attention:
  The STORM configuration files are located
  within the Archive Server software installation!
  These must never be part of an online backup
  of the software installation.
  (see also page: STORM files backup)
24-14 710
Chapter overview
25-2 710
Detecting disk space shortage of disk buffers is fairly easy: The Archive Server Monitor
shows a warning state if the free space of a buffer is less than 30% of total space. (This
threshold percentage may be altered if it is unsuitable.)
The recommendation not to use too large hard disk partitions is due to the fact that some
administrative actions (like disk buffer purging or consistency checks) require examining the
whole partition contents. The more documents are stored there, the longer such a scan will
take. If, moreover, a partition is full of very small documents, the total number of files is very
high; this may lead to unacceptably long execution times of those actions. To prevent this type
of problem, rather use multiple partitions of moderate size instead of a single large partition. If
you store rather large documents only (like SAP data archiving files), the partitions may be
made larger as well; where mainly small documents are stored, the partition sizes should be
smaller (using BLOBs, however, reduces the number of files stored for small documents).
To add an additional hard disk partition to a hard disk pool or buffer, you first have to provide a hard disk
partition on operating system level. On a Unix-based Archive Server, make sure the root directory of the
file system is owned by the user/group that the Archive Server is operated as (e.g.
ixosadm/ixossys) and has permissions 770.
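The Unix preparation can be sketched as follows; the mount path is a hypothetical example, and the commands have to be run as root on a system where the ixosadm/ixossys accounts exist, which is why the sketch only prints them:

```shell
#!/bin/sh
# Dry-run sketch: print the commands that prepare a new partition's
# root directory for the Archive Server (hypothetical mount path).
MOUNTPOINT=/archive/hd_part_02

PREP_CMDS="mkdir -p $MOUNTPOINT
chown ixosadm:ixossys $MOUNTPOINT
chmod 770 $MOUNTPOINT"

echo "$PREP_CMDS"
```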
Once the disk partition is prepared, continue with the steps illustrated above:
1. Make the new partition known to the Archive Server by invoking the "Create Hard Disk Partition"
dialog as illustrated above. (The term "Create ..." is actually misleading here; you cannot create
a partition from within the Archive Server Administration.) Specify the following:
Partition name: An (Archive Server-internal) logical name for the partition; must be unique
throughout all volume names (including IXW media) of this Archive Server. The Archive
Server will henceforth maintain the partition by this name.
Create as replicated partition: If selected, the partition shall serve as the replicate of a
partition on another Archive Server. Only for RemoteStandby server configurations; for a
normal pool or buffer partition, do not select this option.
Mount path: The root directory of the partition's file system. On Windows NT, this should be
a drive specification (including a backslash); on Unix platforms, it is the directory where
the partition is mounted; from Windows 2000 on, it can be either of the two, depending on
how the partition is hooked into the file system.
If, on a Windows-based Archive Server, you want to use a network share instead of a
local hard disk drive, see ESC article https://esc.ixos.com/1072860397-483
about how to do that exactly.
2. Assign the prepared hard disk partition to the disk buffer by means of the "Attach Partition to
Buffer" dialog as illustrated above.
In case of a hard disk pool instead of a disk buffer: Within Archive Server Administration's logical
archives list, select the hard disk pool that the partition shall be added to; then invoke the "Attach
Partition to Buffer" dialog as illustrated above.
25-4 710
• FS & HDSK pools need more space when the total amount of stored
  data is about to exceed the assigned disk space
Detecting disk space shortage of hard disk pools is fairly easy: The Archive Server Monitor
shows a warning state if the free space of a pool is less than 30% of total space. (This
threshold percentage may be altered if it is unsuitable.)
The assumption for an FS pool is that it uses local hard disk only. Sizing an FS pool that works
with a sophisticated storage system may differ.
• Cache needs more space when documents are deleted from cache too early
• Ways to provide more cache space:
  - Enlarge the current hard disk partition (if the operating system permits this)
  - Assign an additional partition:
    1. Provide a partition for exclusive use by the cache
       • Partition will become filled up completely
    2. Add drive letter or mount point to cache volume list
    3. Change will be effective after next Archive Server restart
       • ≤ 9.6.0 only: Current cache contents will be discarded!
The recommendation not to use too large hard disk partitions bears an advantage in
situations where the cache index - due to whatever reason - has become damaged. Such a
cache index problem is normally restricted to a single cache partition, and a common solution
is deleting all contents of this cache volume. If the cache consisted of just one single large
partition, you would lose the whole cache contents by this action (which does not mean real
data loss; only the cache would have to be filled again by subsequent document read
requests, during which your server's performance would be impaired); a cache consisting of
several smaller partitions would only lose a small part of its total contents.
In Archive Server ≤ 9.6.0, when adding an additional hard disk to the local cache, all contents
of the current cache are discarded.
In Archive Server ≥ 9.6.1, this problem no longer exists due to the new caching technology
used. An additional hard disk can be added to the local cache without losing previous content.
25-6 710
Hard Disk Resource Maintenance
25-8 710
• DS database
- See Archive Server Monitor for filling rate
- If too small: Enlarge with database tools
25-10 710
Advanced exercise:
• Enlarge DocumentPipeline directory
  - Move DocumentPipeline directory to a larger partition
    • Take care to save contents across the change
  - Adjust DPDIR setting in Archive Server configuration
    • Pay attention to configured and implicit directory structure
  - Continue processing of saved processing items
26 Accounting information
Billing your customers based on Archive Server usage
The Accounting feature is no longer available with Archive Server 9.6.
While log files on usage are still generated, scripts are no longer provided to evaluate the
usage information for billing purposes.
(Figure: an Archive Server operated by a service provider for several customers, e.g.
Customer B; accounting data is collected per customer)
The accounting feature of the Archive Server is dedicated to application service providers
(ASPs) operating an Archive Server for multiple customers. It measures various quantities of
the server usage, such as:
Number of access requests
Number of requested documents
Number of active users (estimated)
Amount of transmitted data (= traffic)
These quantities can be retrieved on a per-customer basis; that way, it is possible to invoice
customers for their Archive Server usage based on their individual server resource
consumption.
26-2 710
(Figure: the four-step process - among others, the Archive Server logs usage quantities and
the administrator retrieves the accounting data)
The whole process of collecting and evaluating accounting data comprises the four steps
illustrated above. Details on each step are presented on the following pages.
• Archive Server logs all access traffic, with certain restrictions:
  - HTTP requests only (no RPC, RFC)
    • Requires activation for some leading systems and Archive Clients
  - Only requests answered with HTTP_OK
    • Requests resulting in errors are not logged
  - Only when switched on
To open the Server Configuration dialog for maintaining the properties of accounting data
collection (illustrated above):
Within the Archive Server Administration, choose menu item File →
Server Configuration.
In the structure display on the left-hand side, choose Document Service (DS), then
click the '+' sign next to that entry.
Choose entry Accounting and Statistics.
You can then maintain the accounting configuration as desired. After that, choose menu item
File → Save changes. Your changes will become effective after the next server restart.
See the administration manual Archive Server Configuration Parameters in ESC for further
details about using the Server configuration dialog.
Note: If you do not intend to make use of the accounting functionality, you should disable it
completely (it is enabled by default) as described above! However, deactivating the
accounting also disables IXOS's Windows Performance Monitor interface (for Archive Server
≤ 9.5, see chapter Archive Server Statistics and Performance Monitoring).
With Archive Server 9.6, while there are no scripts available for evaluation & billing, the log
files in <IXOS_ROOT>/var/acc are still generated.
26-4 710
Requires logon as an
authorized user (e.g. dsadmin)
Before the collected accounting data can be used for billing, it must first be downloaded from the
Archive Server. This is always done via the Archive Server administrative HTTP interface,
either interactively or script-based. The illustration above shows the steps of the interactive
download:
1. Open a web browser and visit http://<archiveserver>:4060
2. Select Accounting.
3. A logon dialog will be displayed; log on to the Archive Server.
(The user chosen here does not have to be dsadmin; however, it must be given the
View accounting information (ac_view) privilege in the Archive Server Administration;
see the Archive Server Administration Guide for details.)
4. In the following screen, select the date range you are interested in as well as the
download form: View as HTML for on-screen display or Download as CSV. The latter
is needed for subsequent data processing by a financial calculation tool.
5. Click Go. A table with the selected range of accounting data items will then be displayed
in the browser or downloaded as a CSV file.
• Examples
  - With MS Excel add-on directly from Archive Server into Excel
    • Detailed description in Archive Server Administration Guide
  - With arbitrary HTTP client tool, e.g. curl
For routine operation, you will probably not want to download the accounting data interactively
for each accounting period. Nevertheless, non-interactive download methods still have to use
the Archive Server HTTP interface via port 4060. You may choose yourself which of the
available scriptable HTTP clients you prefer to accomplish this task.
curl is a freely available command line HTTP client. Here is an example command for
downloading accounting data from an Archive Server and storing it as a local file (to be
entered as a single line only):
curl -u dsadmin:<password> -o ixos_acc.csv
http://archiveserver:4060/cgi-bin/acc/runacc.pl?select=last_month&format=csv
The user used for access authentication does not have to be dsadmin; however, it must be
given the View accounting information (ac_view) privilege in the Archive Server Administration
(see the Archive Server Administration Guide for details).
26-6 710
The table above mentions those pieces of accounting data that are useful for billing.
Additionally, the following items are logged for each request:
TimeStamp - when did the request take place?
JobNumber - classification of requests
ClientAddress - IP address of client or intermediate proxy server
ApplicationId - name of the IXOS (or other) application that sent the request
NumComponents - number of transmitted (sent or received) document components;
one of 0, 1, or 2
DocumentId - ID of the requested document (for document-related requests only)
ComponentId - name of the transmitted component (for data transmission requests only)
Reorganization of "old" accounting data
• Accounting data directory must be cleaned up regularly
• Done by periodic job Organize_Accounting_Data
  - Schedule job to run once after each accounting period
• Possibilities
  - Keep: useful only if you want to manage old files yourself
  - Delete: useful if you keep downloaded CSV files somewhere
  - Store in a logical archive: useful in all other cases
Unless collecting accounting data is disabled (see earlier in this chapter), the directory where
the data files are stored will normally be filled with huge amounts of accounting logging quite
fast. Deleting or moving away those files which have already been used for billing is therefore
a mandatory regular task of Archive Server operating.
However, this task can be automated by the predefined periodic Archive Server job
Organize_Accounting_Data which can be configured in the Server Configuration dialog of the
Archive Server Administration as explained above.
Notes about the Pool for the accounting data parameter:
It is not explicitly set and therefore not displayed by default. To have it displayed,
choose menu item View → Display undefined values.
- It expresses the storage destination used if the reorganization method "archive
into given pool" is chosen.
It must follow the syntax <archive_ID>_<pool_name>; in the example
displayed above, A4 is the logical archive ID and the archive's pool is named
WORM.
Accounting data files which have been stored in a media pool (preferably on optical media)
can later be restored to their original location using the command line tool dsAccTool -r.
26-8 710
26-10 710
DocumentService statistics
27-2 710
Please use the stat command carefully and validate its results.
(Figure: amount of data retrieved from cache vs. from jukebox)
The comparison between the data amount read from optical media vs. from hard disk
resources (disk buffer, HDSK pool, cache) indicates whether the server uses these resources
efficiently. The general rule is: The "direct reads" amount should be low compared to "cache
reads" plus "non-cacheable reads".
The absolute numbers and ratio, however, are not very useful for a judgment; they depend
too much on how the Archive Server is used (leading applications and storage/retrieval
scenarios). You can, however, perform a long-term observation to see whether the ratio
changes over time. If the relative amount of "direct reads" increases, you should consider
improving the caching setup. Possibilities include:
Enlarge the cache.
If documents are cached in the disk buffer (see chapter Disk Buffer Configuration),
extend the buffer retention period in the buffer purge configuration.
If caching after media writing or caching before buffer purging are deactivated, activate
the appropriate option.
If caching is deactivated as a configuration option for a logical archive, activate it.
The decision about what to do depends on the exact Archive Server configuration and usage context;
profound knowledge of this context is needed to make a well-founded decision.
27-4 710
The explanations given above apply to the types of statistics discussed on the previous pages.
Generally, the DocumentService maintains a lot of other statistics as well; those explained
here are the ones really useful in normal administrative practice.
27-6 710
• Usefulness:
  - Are enough drives available for efficient media access?
  - Keep track of hardware wearout, especially of jukebox robots
    • Anticipate need for hardware maintenance ("early watch")
Meaning of the figures listed in the statistics file (after the '=' sign, from left to right):
Changer (robot) information:
1. Online time of jukebox (seconds)
2. Number of disk moves
3. Number of failed moves recovered by STORM
4. Number of disk inserts
5. Number of disk ejects
Drive information:
6. Online time (seconds)
7. Data volume written (MB)
8. Data volume read (MB)
Volume (medium) information:
9. Checksum of volume ID (before the '=' sign)
10. Volume name (High Sierra)
11. Volume ID
12. Volume creation time (Unix timestamp)
13. Online time of medium (seconds)
14. No. of access requests while medium was not available in drive (expressed as
number of NFS data blocks of 8 kB)
15. No. of access requests while medium was available in drive (expressed as
number of NFS data blocks of 8 kB)
16. Data volume written (MB)
17. Data volume read (MB)
See the Archive Server Administration Guide for a complete statistics file documentation.
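As an illustration of the field order listed above, the following shell sketch extracts a few figures from a volume line. The sample line is entirely made up; check the actual file layout against the Archive Server Administration Guide before relying on field positions.

```shell
#!/bin/sh
# Sketch: parse a (hypothetical) volume line of the STORM statistics file.
# Fields after the '=' sign: name, volume ID, creation time, online time,
# misses, hits, MB written, MB read - per the list above.
line='1a2b3c4d = A4_0001 4711 1047888469 86400 120 3400 2048 512'

summary=$(echo "$line" | awk -F'= *' '{
  split($2, f, /[ \t]+/)
  printf "volume=%s online_s=%s written_MB=%s read_MB=%s", f[1], f[4], f[7], f[8]
}')
echo "$summary"
```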
27-8 710
(Figure: STORM writes a statistics file every 10 minutes by default, e.g.
jbd_stat.1047888469; at startup it renames and collects these files into jbd_stat.log; the
ADMS job Compress_Storm_Statistics accumulates the data into the single file
statistics.txt)
Per default, all STORM statistics files are written and collected in directory
<IXOS_ROOT>/var/stats.
Configuration parameters for STORM statistics collecting can be maintained in
ADMC's Server Configuration page, branch Storage Manager → Parameters for
STORM Statistics.
Since some of the configuration variables are not explicitly set by default, choose menu
item View → Display undefined values to get the full view of all parameters.
Chapter guide
27-10 710
Statistics interface to
Windows Performance Monitor
" Objectives:
- Know how the system can be optimized according to today's requirements
- Prepare for the future by prOViding needed resources in time
.. Components
- Archive Server add-on for data collection and reporting +
on-site visit for installation I introduction
- Result analysis consulting (optional)
" Availability
- As solution package
- on Unix & Windows
- Detailed information in ESC
IXOS-Insight is an Archive Server add-on that systematically collects all available statistics
data, stores it for later analysis, and conditions it for convenient viewing. Its added value -
compared to simply using the statistics tools presented earlier in this chapter - is a
synoptic, coherent view of all relevant information. This eases deducing measures to
optimize your server for current and future requirements.
IXOS-Insight is not part of the Archive Server standard distribution; it has to be ordered and
installed separately.
27-12 710
28-2 710
(Figure: a typical log message, annotated - its line-wrapped appearance in a text editor, an
example message text "invalid argument: cannot open", and the log entry origin: module,
function name, source file name, line number - relevant for IXOS developers only)
The chart above explains an example of a typical log message. The structure is the same for
most Archive Server log files; exceptions are detailed later in this chapter.
(Figure: example STORM log entries, annotated - each entry shows the date and time, the
type of log entry, a request number, and the entry's origin: module "sched" with source file
name and line number, e.g. "sch_subdWrkThread.c" and "sch_common.c" 736 "stopping
picker of WORM jukebox"; the example shows an eq_init(...) call failing with "cannot open
drive")
The request number mentioned above is assigned by STORM arbitrarily to each client request
that cannot be fulfilled immediately. For example: If all disk drives are occupied, the next
incoming data read request is queued for later processing and is assigned such an internal
request number.
Since STORM handles multiple pending requests in a parallel manner, log messages of
concurrent requests occur interwoven in the log file. Nevertheless, it is easy to filter out all
messages belonging to a certain request by searching log lines for the request number.
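Filtering by request number is a plain grep. In the sketch below, both the sample lines and the exact request-number token ('@ 07') are made up; adapt the pattern to the real jbd_trace.log layout:

```shell
#!/bin/sh
# Sketch: extract all messages of one STORM request from a trace file.
LOG=/tmp/jbd_trace_sample.log
cat > "$LOG" <<'EOF'
14:45:10:250 @ 07 "sched" read request queued
14:45:10:300 @ 08 "sched" read request queued
14:45:11:120 @ 07 "sched" drive allocated
14:45:12:980 @ 07 "sched" request finished
EOF

# All messages belonging to request 07, in chronological order:
grep '@ 07' "$LOG"
```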
28-4 710
• Example: 14:29:40.109
Often an operational problem is due to some malfunction or misconfiguration that takes effect
earlier within the processing sequence of an operation. This is reflected within the log files:
The final error message tells what kind of operation could not be fulfilled - but, in order to find
the true reason for the failure, you have to scan through the preceding messages, too; one of
them may reveal the decisive information for diagnosing the problem.
The log messages' time label can provide valuable help in this respect. In many cases, you
may restrict your search for relevant information to the range of messages with a time
statement (nearly) identical to the time of the error message in question; those messages
reveal the operation history that took place immediately before the failure occurred.
The example above is taken from the log file of the DocumentPipeline tool Prepdoc whose
task it was to reserve a document ID for a document being processed in the pipeline. The
DocumentService rejected this request as unauthorized - the security options of the
target logical archive required signatures for all kinds of access requests, which the DocTool
was not configured to deliver.
(Figure: the same error appearing in three log files - jbd_trace.log (STORM), RC1.log
(DocumentService read component), and IXClient.log on the client, i.e. not on the
Archive Server)
Often an error becomes visible at some system component but has been caused by a different
one. The example above shows how the jukebox server STORM fails to read a document from
a CD, probably because the CD is damaged (top). The DocumentService's read component-
which has requested this CD reading operation from STORM - writes a message about the
failure in its own log file (middle). Finally, the Archive Client which originated the retrieval
request is informed about the reading failure and writes a log entry about not being able to
retrieve the document to the client-side log file (bottom).
For troubleshooting purposes, you examine the log files in the opposite direction: you begin
with the one nearest to the error occurrence (the client log file in the above example) and
proceed to the one(s) of the underlying components. In this kind of synoptic log file analysis,
it is essential that you pay attention to the log messages' time labels in order to track their
causal relationship.
Common "causal connections" between messages of different log files include:
Document storage from Enterprise Scan:
doctods . log (on scanning client) - wc .log
Document storage via DocumentPipeline:
doctods .log (on Archive Server) - wc .log
Document retrieval:
IXClient .log (on retrieval client) - RCI .log - dscachel/2 .log -
jbd_trace.log
ISO media burning:
admsrv.log dsCD.log jbd_trace.log
IXW media writing:
admsrv.log - dsWORM.log - jbd_trace.log
28-6 710
Chapter guide
28-8 710
Every functional component of the Archive Server (as discussed in chapter Archive Server
Architecture) has its own set of log switches, making it possible to control the amount and focus of
logging output quite precisely. (See next page about the available log switches and their
meaning.)
The preferred tool for viewing and changing log settings is the Server Configuration page of
the Archive Server Administration (illustrated above). Each folder containing log settings is
located underneath the Archive Server component the settings belong to.
Setting log switches dynamically, however, is possible in the Server Configuration page for
only a subset of the server components (including DocumentService and
AdministrationService). For some of the components, command line tools are available for
viewing and setting log switches dynamically:
Document Service's read and write components (RC1-4, WC): dsClient
Document pipeline DocTools: dpctrl
See appendix Archive Server Command Line Tools for details.
• Other log switches are relevant for IXOS developers only
28-10 710
STORM's loglevels, as set in the Server Configuration page of the Archive Server
Administration (illustrated above), can also be accessed the following ways:
Static log settings are stored in STORM's configuration file:
Win: <IXOS_ROOT>\config\storm\server.cfg
Unix: /usr/ixos-archive/config/storm/server.cfg
Entries: loglevels { <component> { <log_level> } }
Dynamic log settings can be set and queried with command line tool cdadm; see
appendix Archive Server Command Line Tools for details.
28-12 710
Chapter guide
• Configure size limit component-wise:
28-14 710
• When a file reaches the size limit and at every STORM startup:
  <filename>.log → <filename>.000 → <filename>.001 → ... → <filename>.0xx
  - Old <filename>.0xx is dropped (maximum for xx: 99)
  - New <filename>.log is created and written into
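The rotation scheme can be illustrated with a small shell sketch. This is only a model of the behavior described above, not how STORM itself is implemented, and the file names are illustrative:

```shell
#!/bin/sh
# Model of the rotation: <filename>.log -> .000 -> .001 -> ... -> .099,
# the oldest numbered file being overwritten (dropped) at the top.
rotate_log() {
  base="$1"
  i=99
  while [ "$i" -gt 0 ]; do
    prev=$(printf '%03d' $((i - 1)))
    cur=$(printf '%03d' "$i")
    if [ -f "$base.$prev" ]; then
      mv "$base.$prev" "$base.$cur"
    fi
    i=$((i - 1))
  done
  if [ -f "$base.log" ]; then
    mv "$base.log" "$base.000"
  fi
  : > "$base.log"    # a new, empty log file is created and written into
}

cd "$(mktemp -d)"
echo "first run"  > app.log
rotate_log app
echo "second run" > app.log
rotate_log app
ls app.*    # app.000 (newest rotated), app.001, app.log
```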
• Configure size limits:
(Screenshot: Server Configuration tree - Storage Manager (STORM) → Internal Installation
Variables; Installation Variables; Configuration STORM (file server.cfg); Parameters Sizing
STORM Server; Parameter SCSI report; Parameters jbd scheduler (online); Parameters jbd
scheduler; Parameters jbd presentation (online); Parameters jbd presentation; Parameters
ISO9660 Finalization)
28-16 710
The VI pool is used for writing single files to an EMC Centera storage system.
The FS pool is the successor of the HDSK pool and supports the disk buffer.
Using Archive Server 4.x, there are some differences concerning the jukebox server log files:
- Using WORMs on a Unix server which was upgraded from a pre-4.0 IXOS-ARCHIVE
  version, the jukebox server ixwd is used instead of STORM. Its log file is named ixwd.log.
- All other cases: There is no STORM trace file yet; use STORM's log file jbd.log
  instead.
or
https://<ArchiveServer>:4061/cgi-bin/tools/log.pl
28-18 710
Logging:
Further sources of information
The ESC folder Log files and error messages can be accessed at:
https://esc.ixos.com/0976524753-322
Chapter overview
29-2 710
Avoiding problems
• data has queued up irregularly
This and the following pages explain how to investigate error states by deducing possible error
causes from the visible error symptoms. Since it is impossible to list all imaginable malfunction
reasons, the explanations concentrate on the first investigation step from the error cause to a
resource for more detailed error information. This more detailed resource will mostly be some
log file; the IXOS log files usually give decisive hints about where to look for the true error
cause, so this should be sufficient in the context of this course.
Depending on the category of the error indicated in the Archive Server Monitor, there are
different ways of examining possible causes, as explained above.
Concerning the correspondence of program names and log file names, see the following page.
The only exception lies in the "Storage Manager" branch of the monitor tree: STORM's log file
is named jbd.log, and the trace file jbd_trace.log should also be examined for
troubleshooting.
For an explanation about examining DocumentPipeline error items, see page Symptom
examination (4): DocumentPipeline errors later in this chapter.
The articles in ESC section Archive Monitor Diagnosis give further explanations about possible
error causes and actions for problem solving. Find the section at:
https://esc.ixos.com/0951320904-700
29-4 710
In a sane operational state, all Archive Server processes listed in spawncmd status have to
be running - except for the ones marked in the chart above. If any of the other programs is
marked as terminated ('T' in column "sta"), something irregular has happened to them. To
investigate this, you will have to take a look at the corresponding log file. Each of the listed
programs writes to a log file whose name is similar, yet not always exactly equal, to the
displayed program name. Some important examples:
admsrv → admSrv.log
dsrc1 → RC1.log
dswc → WC.log
On a scanning station with IXOS-EnterpriseScan installation, a subset of the Archive Server
processes is installed and must be running as well. There, stockist is the only program that
is allowed to be terminated during normal operation.
IXOS-ARCHIVE ≤ 4.2: One additional process, named checkscsi, is always allowed to be
terminated; it is okay even if its exit code is 1. (Its purpose is to check whether the versions of
the IXOS generic SCSI driver and the operating system match, which is no longer necessary
starting with IXOS-eCONserver 5.0.)
The Job Protocol window (shown above) of the Archive Server Administration indicates
unsuccessfully finished periodic jobs by red bulbs in the leftmost column. There are three
possibilities to gather information about possible causes:
The protocol item itself mostly yields some brief note about what has gone wrong.
Messages about missing empty media (as in the example above) should already be
sufficient as diagnosis.
You may click on the protocol row in question and then click the Messages button. This
will open a window (shown above, bottom) displaying all log messages of the chosen
job run.
In case you need to examine messages of earlier job runs, you will have to consult the
corresponding log file. The log file's name can be deduced from the protocol entry: It is
mostly equal to the job's program name mentioned in the protocol window's "Message"
column.
29-6 710
(Figure: Document log / document protocol window for a document under
c:/IXOS/dirs/DPDIR/..., showing e.g.: 2000/09/27 16:15:15 [doctods] ERROR:
dscCpComp(archive_id='RD', pool='', name='im', ...) failed)
The DocumentPipeline Info shows the processing status of documents in the pipeline. If
some document is being held in an error queue (see illustration above), there are two
possibilities to gather information about possible causes:
Within the DocumentPipeline Info window, click on the row containing the document in
question ("Archive document" in the example above); the window's status bar then
displays the name of the DocTool at which the error has occurred ("doctods" in the
example above). You may then consult the log file with exactly that name (with .log
appended); it will reveal meaningful messages about the error(s) in question.
A way to get information directly within the DocumentPipeline Info, yet often less
informative:
1. Right-click the row containing the document in question. From the context menu,
choose Documents → Show.
2. The first time you do this within a DocumentPipeline Info session, you will be
prompted to log on to the DocumentPipeline as an Archive Server administrator.
3. Beneath the chosen DocumentPipeline row, a sub-list is displayed containing all
documents currently being kept at this processing step. Right-click on one of the
documents in question and choose Protocol from the context menu; this will
open a window showing just that portion of the DocTool log file concerning the
chosen document (shown above, top right).
29-8 710
If you need support ...