Data Insight Administration Guide
Administrator's Guide
5.0
Legal Notice
Copyright © 2015 Symantec Corporation. All rights reserved.
Symantec, the Symantec Logo, the Checkmark Logo, Veritas, and the Veritas Logo are
trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and
other countries. Other names may be trademarks of their respective owners.
This Symantec product may contain third party software for which Symantec is required to
provide attribution to the third party (“Third Party Programs”). Some of the Third Party Programs
are available under open source or free software licenses. The License Agreement
accompanying the Software does not alter any rights or obligations you may have under those
open source or free software licenses. Please see the Third Party Legal Notice Appendix to
this Documentation or TPIP ReadMe File accompanying this Symantec product for more
information on the Third Party Programs.
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Symantec
Corporation and its licensors, if any.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, et seq.
"Commercial Computer Software and Commercial Computer Software Documentation," as
applicable, and any successor regulations, whether delivered by Symantec as on premises
or hosted services. Any use, modification, reproduction, release, performance, display or
disclosure of the Licensed Software and Documentation by the U.S. Government shall be
solely in accordance with the terms of this Agreement.
http://www.symantec.com
Customer service
Customer service information is available at the following URL:
support.symantec.com
Customer Service is available to assist with non-technical questions, such as the
following types of issues:
■ Questions regarding product licensing or serialization
■ Product registration updates, such as address or name changes
■ General product information (features, language availability, local dealers)
■ Latest information about product updates and upgrades
■ Information about upgrade assurance and support contracts
■ Advice about technical support options
■ Nontechnical presales questions
■ Issues that are related to CD-ROMs, DVDs, or manuals
Icon Description
The action selector icon displays a menu with the following two
options:
Action Description

■ Configure Data Insight nodes either individually, or configure multiple nodes by applying node templates. See "About node templates" on page 254.
■ If monitoring events for NetApp file servers, configure the FPolicy service on Collectors. See "Preparing Data Insight for FPolicy" on page 86.
■ If monitoring events for a clustered NetApp file server, install the DataInsightFPolicyCmod service on the Collectors. See "Preparing Data Insight for FPolicy in NetApp Cluster-Mode" on page 106.
■ If monitoring events for EMC Celerra file servers, configure the Celerra service on Collectors. See "About EMC Common Event Enabler (CEE)" on page 116.
■ If monitoring events for EMC Isilon file servers, configure EMC Common Event Enabler (CEE) on the Collectors. See "Preparing Symantec Data Insight to receive event notifications from an EMC Isilon cluster" on page 129.
■ If monitoring events for Windows file servers, upload agent packages to Collectors. See "About configuring Windows file server monitoring" on page 139.
■ If monitoring events for a generic device, use web APIs to collect access event information. See "About configuring a generic device" on page 150.
■ If monitoring events for SharePoint servers, install the Data Insight web service on the SharePoint server. See "Installing the Data Insight web service for SharePoint" on page 197.
■ If monitoring events from cloud storage sources, configure authorization to the cloud storage account. See "About configuring Box monitoring" on page 210.
■ Configure the SharePoint web applications. See "Adding web applications" on page 199.
Device Version

■ Windows File Server: Windows Server 2008 or 2008 R2, 32-bit and 64-bit
■ Veritas File System (VxFS) server: 6.0.1 or higher, configured in standalone or clustered mode using Symantec Cluster Server (VCS). Note: For VCS support, Clustered File System (CFS) is not supported.
■ For all supported versions of EMC Celerra/VNX and EMC Isilon, Data Insight supports only the CIFS protocol over NTFS. The NFS protocol is not supported. Data Insight supports the latest Common Event Enabler (CEE), version 6.3.1. Data Insight still supports older versions of CEE and VEE, but Symantec recommends that you move to the latest EMC Common Event Enabler, which you can download from the EMC website.
■ To use the Self-Service Portal to remediate DLP incidents, ensure that Symantec Data Loss Prevention (DLP) version 12.5 or higher is installed. Data Insight uses the DLP Smart Response Rules, which are introduced in DLP version 12.5, to remediate incidents.
object's list page. This enables you to view further details and to troubleshoot the
issue.
The dashboard contains the following widgets:
Data Insight servers: The Servers pie chart displays a graphical representation of the total number of Data Insight servers that are in the Faulted, At Risk, Unknown, and Healthy states.

Scanning: The Scanning graph displays a color bar chart representing the number of scans during the last 7 days from the current date. The color bar chart represents the different states of the scans: Failed [Red], Successful [Green], and Partially successful [Yellow]. To the right of the chart, you can view additional details about the scans.

Directory services: The directory services widget provides an inventory count of the configured directory services. The widget also displays information and alert notifications associated with the directory services. Click Add Directory Service, and select the directory service type to navigate to the corresponding configuration page.

Scan Status (Consolidated): The pie chart provides a summary, in percentage terms, of the consolidated status of all scans on configured shares and site collections. Below the pie chart, you can view the summary of the total failed, successful, and partially successful scans on configured shares or site collections. Additionally, you may get a summary of scans with the warning status Needs Attention. You can view the detailed explanation for such warnings on the Scan Status page for that share. The summary also displays the number of shares or site collections that have never been scanned. Click an area of the graph to view the filtered list of paths that have the selected consolidated status.

Scan History: The chart provides a graphical representation of the number of scans during a specified duration. Use the drop-down to the right of the graph to select the time period for which you want to view the summary. The color bar chart represents the different states of the scans: Failed [Red], Successful [Green], and Partially successful [Orange]. In each bar, the number of failed, successful, and partially successful scans is indicated by the length of the corresponding colors.

Age of Last Successful Scan: The pie chart provides a high-level overview of the age of the last successful scan for each share or site collection.

Failed Scans in Last 24 Hours: Shows all the shares and site collections on which a scan has failed during the last 24 hours. You can view additional details about the cause of the scan failures, such as the type of scan that failed and its exit codes.
3 The Scan Status sub-tab displays the list of all the shares and site collections with colored icons that indicate their consolidated scan status:
■ Green icon - scan successful.
■ Orange icon - scan partial.
■ Red icon - scan failed.
■ Yellow warning icon - scan needs your attention.
■ Gray icon - scan never happened.
4 Click on a status icon to know more details about the scan status. The scan
status details pop-up window opens.
The Scan Summary tab displays possible causes, impacts, and solutions in
case of scans whose status is either failed, partial, or needs attention.
To start a scan
◆ On the Scan Status page, click Scan.
Do one of the following:
■ Click Scan selected records to scan the selected objects.
■ Click Scan all filtered records to scan all objects that are displayed after
applying a filter.
■ About Data Insight integration with Symantec Data Loss Prevention (DLP)
4 Click Save.
Note: Data Insight scans only share-level permission changes when event monitoring
is turned off.
To fetch file system metadata, Data Insight performs the following types of scans:
Full scan
During a full scan Data Insight scans the entire file system hierarchy. A full scan is
typically run after a storage device is first added to the Data Insight configuration.
Full scans can run for several hours, depending on the size of the shares. After the
first full scan, you can perform full scans less frequently based on your preference.
Ordinarily, you need to run a full scan only to scan those paths which might have
been modified while file system auditing was not running for any reason.
By default, each Collector node initiates a full scan at 7:00 P.M. on the last Friday
of each month.
For SharePoint, the default full scan is scheduled at 11:00 P.M. each night.
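As a worked example of this default schedule, the date of the last Friday of a month can be computed as follows. This is an illustrative sketch in Python, not product code; the function name is an assumption.

```python
import calendar
from datetime import datetime

def default_full_scan_time(year, month):
    """Return 7:00 P.M. on the last Friday of the given month --
    the default full-scan start described above (sketch only)."""
    last_day = calendar.monthrange(year, month)[1]   # number of days in the month
    d = datetime(year, month, last_day, 19, 0)       # start from month end, 7:00 P.M.
    while d.weekday() != calendar.FRIDAY:            # walk back to the last Friday
        d = d.replace(day=d.day - 1)
    return d
```

For example, default_full_scan_time(2015, 5) falls on May 29, 2015, at 7:00 P.M.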
Incremental scan
During an incremental scan, Data Insight re-scans only those paths of a share that
have been modified since the last full scan. It does so by monitoring incoming
access events to see which paths had a create, write, or a security event on it since
the last scan. Incremental scans are much faster than full scans.
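The path-selection logic described above can be sketched as follows. The event structure and field names are assumptions for illustration only, not the product's internal format:

```python
def incremental_scan_paths(events, last_scan_time):
    """Pick only the paths to re-scan: those with a create, write, or
    security event since the last scan (illustrative sketch)."""
    relevant = {"create", "write", "security"}
    return sorted({e["path"] for e in events
                   if e["type"] in relevant and e["time"] > last_scan_time})
```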
Note: For Data Insight versions before version 5.0, incremental scans were triggered
only when Data Insight detected any events during event monitoring.
Incremental scans are not available for SharePoint web applications and for the
cloud-based storage from Box.
By default, an incremental scan is scheduled once every night at 7:00 P.M. You
can initiate an on-demand incremental scan manually by using the command line
utility scancli.exe. It is recommended to run the IScannerJob before you execute
the utility.
See “Scheduled Data Insight jobs” on page 394.
You can turn off re-confirmation scan for any Indexer, using the Advanced Setting
for that Indexer. When the re-confirmation scan is turned off, Data Insight readily
removes the missing paths from the indexes without carrying out re-confirmation.
See “Configuring advanced settings” on page 234.
At a global level, full scans are scheduled for individual Collectors or Windows File Server agents. Table 3-1 lists all the entities for which you can schedule a full scan.
■ Collector or Windows File Server agents
Where to configure: Settings > Data Insight Servers > Advanced Setting > File System Scanner settings.
Scope: Applies to all the storage devices associated with the Collector for which a schedule is defined.
See "Configuring advanced settings" on page 234.

■ Filers, web applications, and cloud sources
Where to configure: In case of a filer, Settings > Filers > Add New Filer. In case of a SharePoint web application, Settings > SharePoint Web Applications > Add SharePoint Web Application. In case of a cloud storage account, Settings > Cloud Sources > Add New Cloud Source.
Note: You can also configure scanning at the time of editing filers, web applications, and cloud sources.
Scope: Applies to filers, SharePoint web applications, or cloud sources for which a schedule is defined. This setting overrides the scan schedule defined for the Collector associated with the filer, web application, or cloud source.
See "Adding filers" on page 155. See "Adding web applications" on page 199. See "Configuring Box monitoring in Data Insight" on page 212.

■ Shares and site collections
Where to configure: Settings > Filers > Monitored Shares > Add New Share, or Settings > SharePoint Web Applications > Monitored Site Collections > Add Site Collection.
Note: You can also configure scanning at the time of editing shares and site collections.
Scope: Applies to the entire share or site collection for which a schedule is defined. Overrides the scan schedules defined for the filer or the web application associated with the share or the site collection.
See "Adding shares" on page 184. See "Adding site collections" on page 204.
You can override all the full scan schedules and initiate an on-demand full scan for
configured shares or site collections. See “Managing shares” on page 185.
Sometimes for maintenance and diagnostic purposes, you may need to disable all
the scans. You can disable all scans:
■ At the time of adding or editing a storage device.
See “Adding filers” on page 155.
See “Adding web applications” on page 199.
■ From the Settings > Scanning and Event Monitoring page of the Management Console.
See “Configuring scanning and event monitoring ” on page 34.
If you disable scanning for a device, you cannot view any permissions data for that device. However, you may still see some stale metadata, such as size and permissions, that was collected before scanning was disabled. If you run a report on the paths for which scanning is disabled, you may get a report with stale data.
You can specify pause schedules for both full and incremental scans to indicate
when scanning should not be allowed to run. You can configure a pause schedule
from the Settings > Data Insight Servers > Advanced Settings page. To know
more about configuring a pause schedule, see "Configuring advanced settings"
on page 234.
You can view the details of the current and historical scan status for your entire
environment from the scanning dashboard. To access the scanning dashboard,
from the Data Insight Management Console, navigate to Settings > Scanning >
Overview. To know more about the scanning dashboard, see "Viewing the scanning overview" on page 25.
Option Description

Scan File System meta-data: Clear the check box to turn off all future file system scanning on all filers. Once you save the setting, it also stops all currently running scans.

Get Folder ACLs: Clear the check box if you do not want Scanner to fetch Access Control Lists (ACLs) for folders during scanning. If you disable this option, the Workspace > Permissions tab in the Console is disabled and permission-related reports do not produce any data. If you do not need permissions data, you can disable this option to make the scans run faster.

Get Ownership information for files and folders: Clear the check box if you do not want Scanner to fetch the Owner attribute for files and folders. Ownership information is used to determine ownership for data when access events are not available. If you do not need this information, you can disable this option to make scans run faster.

Throttling for NetApp filers: Select Throttle scanning based on latency of the filer to enable throttling of Data Insight scans for NetApp 7-mode and Cluster-Mode file servers. This option is not selected by default.

Monitor file system access events: Clear the check box to stop Data Insight from collecting access events from all file servers. In the case of NetApp, all Collector nodes disconnect their FPolicy connections to the file servers.

Disk Safeguard settings for Windows File Server agents: Select the Enable node safeguard check box to monitor the disk usage on the Windows File Server node, and prevent it from running out of disk space by implementing safeguards. You can specify the threshold for disk utilization in terms of percentage and size. The DataInsightWatchdog service initiates the safeguard mode for the Windows File Server node if the free disk space falls below the configured thresholds. You can edit the threshold limits as required. If you specify values in terms of both percentage and size, then the condition that is fulfilled first is applied to initiate the safeguard mode.

If the latency on the physical file server increases above the configured threshold, Data Insight disconnects from the associated virtual file server. This information is also displayed on the Data Insight System Overview dashboard.
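The safeguard threshold behavior ("the condition that is fulfilled first is applied") can be modeled with a small sketch; the function name and parameters are illustrative assumptions, not product code:

```python
def safeguard_triggered(free_bytes, total_bytes, min_percent=None, min_bytes=None):
    """Return True when free disk space falls below either configured
    threshold -- whichever condition is met first initiates safeguard mode."""
    # Percentage threshold: free space as a share of total disk capacity.
    if min_percent is not None and free_bytes * 100 < min_percent * total_bytes:
        return True
    # Absolute size threshold: free space in bytes.
    if min_bytes is not None and free_bytes < min_bytes:
        return True
    return False
```

For example, with 5 GiB free on a 100 GiB disk, a 10% threshold triggers safeguard mode even if an absolute threshold of 4 GiB does not.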
You can create separate exclude rules for file servers and SharePoint servers. For
each of these, Data Insight supports two types of filters:
■ Exclude rules for access events
■ Exclude rules for Scanner
Filters for account names or SIDs: Typically used to mask service accounts from registering data accesses into Symantec Data Insight. For example, if an antivirus software performs scans on a mounted share using a specific user account, you can add that user account to a filter. Data Insight omits all accesses made by that service user account.

Filters for path names: Filters for path names are of two types, file extension based and path based.
Table 3-3 Add/Edit file system Exclude rule for access events options
Field Description
This filter only applies to NetApp and EMC Celerra file servers.
Exclude patterns When defining a file system rule, enter the file extensions or paths that you want to exclude. A CIFS file system path must be a fully qualified path in the format \\filer\share\folder, or a path relative to each share, for example, <name of folder>. An NFS path must be a fully qualified physical path on the actual file system, in the format /path/in/the/physical/filesystem.

The logical operator OR is used to create a rule with multiple values of the same dimension, and the logical operator AND is used to combine values across dimensions in a rule. For example, if you create a rule to ignore user_foo1, user_foo2, and IP_10.209.10.20, then all accesses from IP_10.209.10.20 AND (user_foo1 OR user_foo2) are ignored.

You can also specify the wildcard (*) in an exclude rule for paths. Data Insight allows the use of the wildcard (*) in the following formats in an exclude rule:
■ <prefix string> - Events on paths that start with the specified <prefix string> are excluded.
■ <prefix string>* - Events on paths that start with the specified <prefix string> are excluded.
■ *<string> - Events on paths that start with anything followed by the specified string are excluded.
■ *<string>* - Events on paths that have the specified string anywhere in the path name are excluded.
For example, if you specify *<abc>*, events on all paths that have the string abc anywhere in the path name are excluded.

When defining a SharePoint rule, enter the URL of the SharePoint Web application or the site. You can use the wildcard (*) to exclude events for URLs that contain a specified string in the name. For example, if you specify <abc>*, events on all URLs that have the string abc anywhere in the path name are excluded.
Pattern Type Select PREFIX or EXTENSION from the Pattern Type drop-down.
Rule is enabled Select the Yes radio button to enable the rule and the No radio button
to disable it.
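The wildcard formats and the OR/AND combination logic described for exclude rules can be sketched in Python. This is an illustrative model only; the rule structure, the function names, and the handling of the bare-prefix case are assumptions, not product behavior:

```python
def pattern_matches(pattern, path):
    """Apply the wildcard formats from the exclude-rule description to one path."""
    if pattern.startswith("*") and pattern.endswith("*") and len(pattern) > 2:
        return pattern[1:-1] in path            # *string*: anywhere in the path
    if pattern.endswith("*"):
        return path.startswith(pattern[:-1])    # prefix*: starts with prefix
    if pattern.startswith("*"):
        return path.endswith(pattern[1:])       # *string: anything, then the string
    return path.startswith(pattern)             # bare prefix: starts with prefix

def rule_excludes(rule, event):
    """OR within a dimension (any value matches), AND across dimensions
    (every dimension in the rule must match)."""
    for dimension, values in rule.items():
        field = event[dimension]
        if dimension == "path":
            if not any(pattern_matches(p, field) for p in values):
                return False
        elif field not in values:
            return False
    return True
```

Using the example from the text, a rule listing user_foo1, user_foo2, and IP 10.209.10.20 excludes an access only when the IP matches AND the user is one of the two listed accounts.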
Table 3-4 Add/Edit file system Exclude rule for Scanner options
Field Description
Exclude Patterns When defining a CIFS file system rule, specify the name of the folder to exclude as /<name of first level folder>. For an NFS file system rule, specify the name of the folder to exclude as /<name of first level folder>.
When defining a SharePoint rule, enter the URL of the SharePoint Web
application or the site.
Rule is enabled Select the Yes radio button to enable the rule and the No radio button
to disable it.
Saved Credential Name Enter your name for this stored credential.
3 Click Save.
4 You can later edit or delete credentials from the credential store.
You can delete or edit a saved credential.
To delete a saved credential
1 In the Management Console, click Settings > Saved Credentials.
2 Locate the name of the stored credential that you want to remove.
3 Click Delete to the right of the name.
A credential can be deleted only if it is not currently used for filers, shares, Active Directory, the FPolicy service, the EMC Celerra service, permission remediation scripts, custom action scripts, the Enterprise Vault server, or as Data Insight server credentials.
To edit a saved credential
1 Locate the name of the saved credential that you want to edit.
2 Click Edit to the right of the name.
3 Update the user name or password.
4 If you change the password for a given credential, the new password is used
for all subsequent scans that use that credential.
5 Click Save.
For the purpose of access control, only a user assigned the role of Server
Administrator can add, edit, and view all saved credentials. A user assigned the
Product Administrator role can add new saved credentials, but can only view and
edit those credentials which the user has created.
Note: You can disable archiving of access events information by enforcing a legal
hold on shares and site collections.
Note: You can disable purging of access events information by enforcing a legal
hold on shares and site collections.
INSTALL_DIR/bin/configcli execute_job DataRetentionJob
Purge access data automatically: Select the check box to enable purging of file system or SharePoint access events, and enter the age of the data (in months) after which the data must be deleted.

Purge Data Insight system events automatically: Select the check box to enable purging of Data Insight system events, and enter the age of the data (in months) after which the data must be deleted.

Automatically purge data for deleted shares or site collections: Select the check box to enable purging of data pertaining to deleted shares. This option is enabled by default.
4 Click Save.
See “About archiving data” on page 43.
See “About purging data” on page 43.
[Diagram: DLP integration components - Indexer, Network Discover Server, Collector]
Data Insight has a bi-directional integration with DLP. Based on your requirements, you can integrate the two products in either or both of the following ways:
■ Configure DLP in Data Insight:
DLP provides Data Insight the information about sensitive files in a storage
environment monitored by DLP. Data Insight uses this information to raise alerts
in response to configured DLP policies. Data Insight runs the
DLPSensitiveFilesJob at 12:00 A.M. every night to retrieve a list of sensitive
files from DLP.
The information about sensitive files and DLP policies is used to calculate the
risk-scores for storage resources and their users. The risk-scores and related
information are displayed on the dashboard view of the Data Insight Management
Console. You can use this information to find the high-risk shares and the folders
that violate important DLP policies. Additionally, you can use the information
from DLP to define DLP Incident Remediation workflow to take action on the
files that violate certain DLP policies.
See “About configuring Data Insight to integrate with Data Loss Prevention
(DLP)” on page 47.
■ Configure Data Insight in DLP:
Log in to the DLP Enforce Server Administration Console using Administrator credentials. This user must have a role with the Incident Reporting and the Update API privileges.
4 Import the SSL certificate from the DLP Enforce Server to Data Insight. Refer to the section "Importing SSL certificate from the DLP Enforce Server to Data Insight Management Server" for details.
Note: Ensure that the credentials belong to an existing DLP user assigned
the Incident Reporting and Update API role. Also ensure that when assigning
a role to the user, the Display Attribute Location is selected. This attribute
allows Data Insight to view the complete path of a file.
The user credential being used must have access to DLP Network Discover
scan data and DLP Saved Report IDs.
■ Password
The password of the account that is used to access the DLP Enforce Server.
■ Domain
The name of the domain to which the user belongs. DLP domains are
case-sensitive. Specifying the domain is optional for a user who is a DLP
administrator.
■ DLP Role
Specify the role you want to use to log on to DLP. DLP roles are
case-sensitive.
Users who are assigned more than one role can only log on under one role
at a time.
■ Configure storage resources automatically
By default, Data Insight discards classification information for paths on
storage devices that it does not monitor. Select this option to add the
3 Click Test Connection to verify the connection to the DLP Enforce Server.
4 Click Save to save the settings.
See “Importing SSL certificate from the DLP Enforce Server to Data Insight
Management Server” on page 50.
Importing SSL certificate from the DLP Enforce Server to Data Insight
Management Server
The DLP Enforce Server administration console requires SSL transport for all
communication. Data Insight must be able to negotiate the SSL connection with
the Enforce Server. For this purpose, you must import the certificate to the keystore
used by Data Insight.
To import the SSL certificate from the DLP Enforce Server to Data Insight using
Firefox
1 Type the URL to connect to a DLP Enforce Server Administration console.
2 On the security certificate warning page, click I understand the risks.
3 Click Add Exception.
4 On the Add Security Exception page, click View to view the certificate details.
5 Click the Details tab and click Export.
6 From the Save as type drop-down, select X.509 Certificate (DER).
7 Click Save.
To import the SSL certificate from the DLP Enforce Server to Data Insight using
Internet Explorer
1 Type the URL to connect to a DLP Enforce Server Administration console.
2 On the security certificate warning page, click Certificate Error next to address
bar.
3 Select View certificates.
4 Click the Details tab, and select the appropriate certificate.
5 Click Copy to File.
6 In the Certificate Export Wizard, select DER encoded binary.
7 Click Next.
8 Enter the name of the file and browse to the location where you want to save
the file.
9 Click Next.
10 Click Finish to save the file.
After the SSL certificate is imported, complete the following steps to import the SSL
certificate on the Data Insight server.
To import the SSL certificate on the Data Insight server
1 From the Windows Start menu, select Run and type cmd in the dialog box to
open a command prompt window.
2 Run the following command:
cd C:\Program Files\Symantec\DataInsight\jre\bin
■ Configure a connection between the DLP Enforce Server and Data Insight.
■ Configure the Data Insight Lookup Plug-in to retrieve data owner information.
■ Configure other lookup plug-ins to populate the Data Owner email field in Data
Insight.
Refer to the Symantec™ Data Loss Prevention Administration Guide for details
on configuring these plug-ins.
■ On the Enforce Server, create custom attributes for each file detail that you want
retrieved from the Data Insight Management Server.
■ Map the custom attributes that you have created to the details from the Data
Insight Management Server.
■ Restart the DLP Enforce services.
The steps mentioned in this section are applicable for DLP users who want to pull
data ownership, permissions and access information from Data Insight. For the
detailed steps, see the Data Loss Prevention Data Insight Implementation Guide.
See “About Data Insight integration with Symantec Data Loss Prevention (DLP)”
on page 45.
dlp.csv.enabled=true
Note: If the dlp.csv.enabled property is set to true in the dlp_db.conf file, the
Data Insight process uses the .csv file to identify sensitive files, even if DLP is
configured in Data Insight.
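The precedence stated in the note can be modeled as a small decision function; any property name other than dlp.csv.enabled is a hypothetical placeholder for illustration:

```python
def sensitive_file_source(conf):
    """Decide where sensitive-file information comes from, per the note
    above: the CSV wins whenever dlp.csv.enabled=true, even if a DLP
    Enforce connection is also configured (sketch only)."""
    if conf.get("dlp.csv.enabled") == "true":
        return "csv"
    if conf.get("dlp.configured") == "true":   # hypothetical flag for illustration
        return "dlp"
    return "none"
```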
Data Insight displays the information about control points on the ContextMap
view on the Workspace tab of the Management Console.
■ The interval for refreshing the data that is displayed on the Device and Shares
and Site collections tabs of the Data Insight dashboard. This interval is also
considered for generating data that is displayed on the ContextMap view on
the Workspace tab of the Management Console.
Decide the frequency of refreshing the data judiciously, because the statistics are calculated for all the configured devices, shares, and site collections.
Note: The Data Insight dashboard does not display any data if a summary report has not run at least once.
The page also displays information about refresh cycles that have failed. Click the Failed link to download the logs that you can use to troubleshoot the cause of the failure.
To configure advanced analytics
1 In the Management Console, click Settings > Advanced Analytics. The
existing analytics settings display by default.
2 Click Edit to change the appropriate settings.
3 Click Save to save the changes.
4 Click Compute Now to refresh the data on the Data Insight dashboard.
See “About open shares” on page 55.
ID or title are different for each user, and should ideally not be used as the primary
attribute.
To choose attributes for advanced analytics
1 On the Settings tab, click Advanced Analytics.
2 Click Attributes to display the Advanced Analytics Manager.
The Available Attributes panel displays all the configured custom attributes.
3 Select an attribute, and click the right arrow to add it to the list of selected attributes. Similarly, click the left arrow to remove an attribute from the list of selected attributes.
4 Use the up and down arrows to set the priority of the attributes for computing
the analytics data.
5 From the Primary grouping attribute drop-down, select the attribute that you
want to use as the primary attribute for identifying users and for creating
attribute-based filters.
this purpose, the ACLs are examined from level 1 (root being level 0), and all folders
three levels down are examined.
Defining an open share policy helps you to review the permissions assigned at
share level and revisit entitlement decisions. You can view the details of the open
shares, the size of the data in open shares, and the number of files in open shares
on the Dashboard tab on the Management Console.
See “Configuring an open share policy” on page 56.
5 Use the Up and Down arrows to define the level in the share hierarchy at which the policy should be applied. You can also examine the depth starting from level 0, that is, the root of the share.
6 Specify the depth, in terms of the number of levels of the folder hierarchy, for which the permissions should be examined.
7 Click Save to save the policy and close the window.
You can use the report.exe utility to exclude certain paths from the open share
computation for the dashboard.
See reportcli.exe on page 382.
You can search for extensions and file groups by using the filter at the top right
corner of the screen.
For detailed information on fg.exe, see the Command File Reference.
To configure file groups
1 In the Management Console, click Settings > File Groups.
2 To add a new file group, click Add new file group.
To install a license
1 Obtain the new license file.
2 In the Management Console, click Settings > Licensing.
3 On the Licensing page, click Add/Update License.
4 On the Add new license page, browse to the new Data Insight license file that
you have downloaded, and click Upload.
Report Footer Text: You can choose to add a footer to all the reports that you run in the Console. Enter the string that you want to appear in the footer of the report. For example, Proprietary and Confidential.
3 Click Save.
For more information about creating reports, see the Symantec Data Insight User's
Guide.
assigned to them. To know more about the Data Insight custodians, you can refer
to the Symantec Data Insight User's Guide.
You can assign custodians to a data resource from the Workspace tab. However,
assigning custodians one path at a time can be tedious and time-consuming. You
can use the Custodian Manager to easily assign multiple custodians using only a
few steps.
You can bulk-assign custodians by using either of the following options:
■ Assign by CSV - You can use a CSV file that contains the information about
the paths and their respective custodians to assign custodians to paths.
See “Assigning custodians in bulk using a CSV file” on page 61.
■ Assign by owner method - You can specify the criteria for computing the
possible owner of the selected paths, and assign the computed owners as the
custodians.
See “Assigning custodians based on data ownership” on page 62.
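As a sketch of the Assign by CSV option above, the following Python snippet parses a hypothetical two-column file (path, custodian) into per-path custodian lists. The actual column layout that Data Insight expects may differ, so treat the format as an assumption:

```python
import csv
import io

# Hypothetical two-column layout: path,custodian. The real CSV format
# expected by the Custodian Manager may differ; check the product
# documentation for the exact columns.
sample = io.StringIO(
    "\\\\filer1\\share1\\finance,DOMAIN\\alice\n"
    "\\\\filer1\\share1\\finance,DOMAIN\\bob\n"
    "\\\\filer1\\share2,DOMAIN\\carol\n"
)

assignments = {}
for path, custodian in csv.reader(sample):
    # A path may have more than one custodian; collect them per path.
    assignments.setdefault(path, []).append(custodian)

for path, custodians in assignments.items():
    print(path, "->", ", ".join(custodians))
```

A pre-processing pass like this is also a convenient place to validate the file before importing it, for example to catch rows with a missing custodian column.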
Note: The custodian assignment in Data Insight can take some time depending on
the number of paths. You can view the status of the operation on the Settings >
Events page of the Management Console.
Note: When you clear the check-box for Use default data owner policy,
Data Insight still enforces the exclusion rules for deleted, disabled, and
unresolved users as defined under the Workspace Data Owner Policy
setting.
Note: The custodian assignment in Data Insight can take some time depending on
the number of paths. You can view the status of the operation from the Settings >
Events page of the Management Console.
3 Similarly, you can exclude specific users from a watchlist in the following ways:
■ Under Exclusion list tab, select the users or groups that you want to
exclude from the watchlist.
Or click Upload CSV to upload a list of users or groups in bulk.
■ Under the User Exclusion Using Attributes tab, click Add Condition.
From each of the drop-down menus, select the criteria to build the query.
The query is used to exclude users based on their Active Directory custom
attributes.
4 Click Save.
■ Scheduling scans
Field Description
Domain Name Enter the name of the domain that you want to scan.
The domain name is used for display purposes only. The domain name
that appears on the Workspace tab depends on the name set in the
domain.
Domain Controller IP Enter the hostname or IP address of the Active Directory domain
controller.
Bind Anonymously Select the check box if you want to allow Data Insight to connect to the
Active Directory server without a credential.
Disable scanning Select the check box to disable the scanning of the directory server.
Field Description
Fully Qualified Domain Name Enter the fully qualified name of the domain that you want to scan.
Entering the FQDN will automatically populate the User and Group
search Base DN fields.
LDAP server address Enter the hostname and the port of the LDAP server.
By default, the LDAP server runs on port 389. If TLS is enabled,
the LDAP server runs on port 636, by default.
Type The type of LDAP schema used by the directory service. Data Insight
extracts the attributes from the schema attribute file when scanning the
domain. Select one of the following:
■ OPENLDAP
■ Sun ONE
You can also create a schema attribute file with customized attributes
for each LDAP implementation that does not match the defaults. Ensure
that you name the file as ldap_<ldap_type>.conf and save it at
$data\conf\ldap on the Management Server.
Search base DN The Organizational Unit (OU) in which all users and groups have been
defined.
This directory uses Select this check box if the LDAP server uses the TLS protocol.
secure connection
(TLS)
Scanning details Select the saved credentials from the drop-down or specify new
credentials.
The DN string may change depending upon the LDAP schema used.
Refer to the LDAP schema to get the correct DN for the user.
The schema attribute names for setting these limits may vary depending
upon the LDAP implementation. The above example is for Sun ONE.
Test Credentials Click to verify that the given credentials are correct and to test the
availability of network connection between the Management Server and
the LDAP server.
Bind anonymously Select the check box if you want to allow Data Insight to connect to the
LDAP server without a credential.
Disable scanning Select the check box to disable the scanning of the directory server.
Field Description
Fully Qualified Domain Name Enter the name of the domain that you want to scan.
Scanning Details Click Test Credentials to verify that the given credentials are correct
and to test the availability of network connection between the
Management Server and the NIS server.
Disable scanning Select the check box to disable the scanning of the directory server.
Field Description
Fully Qualified Domain Name Enter the name of the domain that you want to scan.
Configured in NIS compatibility mode This check box is only available when adding a NIS+ server.
When configuring a NIS+ server, select the Configured in NIS
compatibility mode check box if the NIS+ server is configured in the
NIS compatibility mode. In this mode, Data Insight can fetch the users
and groups data from the NIS+ server remotely in most cases.
See “Fetching users and groups data from NIS+ scanner” on page 72.
Scanning Details Click Test Credentials to verify that the given credentials are correct
and to test the availability of network connection between the
Management Server and the NIS+ server.
Disable scanning Select the check box to disable the scanning of the directory server.
Note: Data Insight scans all domains together because dependencies might
exist between the different domains.
5 To edit the scan schedule for the configured domains, click Edit Schedule.
By default, Data Insight scans all domains at 3:00 A.M. every day.
On the Set Directory Scan Schedule dialog, change the schedule, and click
Update Schedule.
The updated schedule is used for all subsequent scans of the configured
domains.
6 To edit the properties of a directory service domain, from the Select Actions
drop-down, select Edit.
On the directory service properties screen make the necessary changes, and
click Save.
7 To delete a configured directory service domain, from the Select Actions drop-down,
select Delete.
Select OK on the confirmation message.
8 To view events pertaining to configured directory services, click Events.
The events of all directory services are displayed. You can filter these events
by date, domain server name, severity, and the type of event.
6 Select the Do not fetch attribute check box if the value for the attribute does
not exist in the selected domain.
7 Click Add New LDAP Attribute Name to add other domain-specific names
for the custom attribute.
8 Click Save.
For detailed information about using the Social Network Map to analyze collaborative
activity, see the Symantec Data Insight User's Guide.
See “Choosing custom attributes for advanced analytics” on page 54.
Note: Users from a deleted directory domain are removed from Data Insight only
after the next directory scan runs.
Scheduling scans
Symantec Data Insight scans configured domains every day at 3:00 A.M., by default.
You can, however, configure the scanning schedule, as needed.
Data Insight also scans local users of all file servers and site collections that are
managed by Data Insight. Information from these scans becomes visible in Data Insight
after the directory scan runs.
Note: The domain name that is given in the .csv file must be among the domains
that are scanned by Data Insight.
Note: When you save a .csv file with multibyte characters, you must select
UTF-8 encoding instead of unicode or default encoding.
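The note above can be illustrated with a short Python sketch that writes and reads a CSV file containing multibyte characters with an explicit UTF-8 encoding; the file name and contents are illustrative:

```python
import csv
import os
import tempfile

# Rows with multibyte characters in the user names. Saving the file as
# UTF-16 ("Unicode") or in the platform default encoding can garble
# these names on import; UTF-8 preserves them.
rows = [["\\\\filer1\\share1", "DOMAIN\\山田"],
        ["\\\\filer1\\share2", "DOMAIN\\müller"]]

path = os.path.join(tempfile.mkdtemp(), "custodians.csv")
with open(path, "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(rows)

with open(path, encoding="utf-8") as f:
    readback = list(csv.reader(f))

print(readback == rows)  # the multibyte names round-trip intact
```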
Note: To troubleshoot issues related to import of such attributes, check the log file,
adcli.log in the log folder on the Management Server.
■ About FPolicy
■ Preparing a non-administrator domain user on the NetApp filer for Data Insight
■ There is connectivity to the collector node from the filer using the short name
and the Fully Qualified Host Name (FQHN) of the Collector node.
■ The DNS lookup and reverse-lookup for the hostname of the Collector node from
the filer are working correctly.
■ The standard RPC ports are open in the firewall.
■ The local security policies are set. The installer automatically registers the local
security policies on Windows 2008 machines which are used as collector nodes.
However, if the installer fails to register the security policies, you must set them
manually. Click Administrative Tools > Local Security Policy > Local Policies
> Security Options and change the following settings:
■ Network access: Named Pipes that can be accessed anonymously - Add
NTAPFPRQ to the list.
You must restart the machine after making these changes.
Credential Details
Credentials required during filer configuration through the Symantec Data Insight Management Console
Required to discover shares and enable FPolicy on the NetApp filer. This
credential belongs to the NetApp ONTAP user who has administrative rights on
the NetApp filer (for example, root) or a domain user who is part of the
Administrators group on the filer.
Credentials required for scanning of shares
Required for scanning of shares from the NetApp filer.
packets of data that are sent over a network to a remote host are signed. A mismatch
in the setting on the Collector node and the NetApp filer can cause the filer to drop
the FPolicy connection to the Collector node.
To configure SMB signing
1 Check whether the SMB signing option on the NetApp filer, options
cifs.signing.enable is set to off or on.
2 On the Collector node that is assigned to the NetApp filer, open the Windows’
Registry Editor (Start > Run > regedit).
3 In Registry Editor, navigate to HKEY_LOCAL_MACHINE > SYSTEM >
CurrentControlSet > Services > LanmanServer > Parameters.
4 Modify the following registry entries:
■ enablesecuritysignature - Enter the value 0 to turn signing off and enter
the value 1 to turn on signing.
■ requiredsecuritysignature - Enter the value 0 to turn signing off and
enter the value 1 to turn on signing.
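As a convenience, the two registry changes in steps 3 and 4 can be captured in a .reg file. The sketch below uses the value names exactly as listed above and sets both values to 0 (signing off); verify the value names and the settings that match your filer before importing:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"enablesecuritysignature"=dword:00000000
"requiredsecuritysignature"=dword:00000000
```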
About FPolicy
Symantec Data Insight uses the FPolicy framework provided by NetApp to collect
access events from the NetApp filers.
NetApp provides an interface called FPolicy which allows external applications to
receive file access notifications from the NetApp Storage subsystem. FPolicy allows
partner applications to perform tasks like file access screening and auditing. The
FPolicy interface uses Remote Procedure Calls (RPC), and external applications
can use this interface to register with the NetApp filer as FPolicy servers. FPolicy
supports both CIFS and NFS.
The unit of FPolicy configuration on the NetApp filer is called a policy, which is
identified by a user specified name. You can configure a policy to monitor all or a
list of volumes on the NetApp filer along with a specified set of operations. The
monitored operations are open, close, read, write, create, delete, rename, and set
attribute. As soon as a file operation is performed on a file or folder on the filer which
is being monitored, a notification is sent to the registered FPolicy server
asynchronously.
Note: The policy created by Symantec Data Insight should not be shared by any
other applications or clients.
By default, Data Insight does not register for read and close events from NetApp
filers. Data Insight treats an open event as a read event. This behavior reduces the
load on the filer in case of peak traffic loads from third party applications like backups
over CIFS. It also does not have an adverse effect for most consumer applications
because consumer applications seldom write to a file before first reading it. Data
Insight assumes that an open event is almost always followed by a read event
and then optionally by a write event. However, you can customize the default
behavior as per your requirements.
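The default behavior described above, registering a reduced event set and treating opens as reads, can be pictured with a small Python sketch. The event names here are illustrative, not Data Insight's internal identifiers:

```python
# Illustrative sketch of the default event handling: Data Insight does not
# register for read and close events and treats an open as a read.
# Event names are hypothetical placeholders.
REGISTERED_EVENTS = {"open", "write", "create", "delete", "rename", "setattr"}

def normalize(event):
    """Map an 'open' notification to a read; drop unregistered events."""
    if event not in REGISTERED_EVENTS:
        return None  # e.g. 'read' and 'close' are not registered by default
    return "read" if event == "open" else event

print([normalize(e) for e in ["open", "write", "read", "close"]])
```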
See “Enabling export of NFS shares on a NetApp file server” on page 95.
See “Preparing the NetApp filer for Fpolicy” on page 87.
9 Under Credentials, select the saved credentials that the service needs to run
as. The credentials must belong to the Backup Operators group on the NetApp
filer that is being monitored by the Collector.
See “Credentials required for configuring NetApp filers” on page 79.
10 Click Configure to apply these settings to the server and start the FPolicy
service.
See “Configuring SMB signing” on page 84.
See “About FPolicy” on page 85.
Note: The steps below assume that the name of the policy is matpol.
■ To enable a policy:
■ To delete a policy:
Note: The domain user is the user who is configured to run the FPolicy service
on the Collector.
A list with the SIDs of the configured domain users appears. To resolve the
SIDs, run the following command:
vfiler status
Choose the name of the vfiler that you want to configure and then perform
the following operations for that vfiler. Ignore the name, vfiler0, which is the
default name given to the physical filer by NetApp.
Note: Consult your system administrator to get the IP address of the vfiler.
You will need this IP address while adding the vfiler from the Management
Console.
See “Adding filers” on page 155.
■ To create a policy:
■ To enable a policy:
■ To set the FPolicy for CIFS to monitor specific events on NetApp filer
versions 7.3 or higher:
■ Enable set attributes operation:
■ To delete a policy:
Note: The domain user is the user who is configured to run the FPolicy service
on the Collector. See “Preparing the NetApp filer for Fpolicy” on page 87.
A list with the SIDs of the configured domain users appears. To resolve the
SIDs, run the following command:
Note: For vfilers, append the above command-line examples with vfiler run
<vfilername>.
Capability Description
login-http-admin Enables you to log into the NetApp filer and run
commands. With this capability, you can get latency
statistics (for scan throttling), volume size information, or
discover shares.
api-system-get-ontapi-version,
api-system-get-version
Enables you to get the ONTAPI version number and the
system version number respectively. These are required
to set the login handle context properly. Data Insight
reports a failure when you test the connection to the filer
if these capabilities are absent. Also, if these capabilities
are absent, you cannot execute any APIs, including those
required to discover shares and get latency statistics.
api-license-list-info Used to check if this NetApp filer has a CIFS license and
to print the appropriate diagnostic message.
api-options-set Used to enable the global FPolicy flag on the NetApp filer.
api-perf-object-get-instances-iter-start,
api-perf-object-get-instances-iter-next,
api-perf-object-get-instances-iter-end
Used to get CIFS latency information from the NetApp filer,
which enables the self-throttling scan feature of Data
Insight. Absence of these APIs can cause the scanner to fail
if you enable the feature to throttle scanning.
api-nfs-exportnfs-list-rules,
api-nfs-exportnfs-list-rules-2
Used to discover all NFS shares that are exported from
the NetApp filer. If this capability is absent, these NFS
shares are not discovered.
api-fpolicy-volume-list-set Used to set the volume names on the filer which are to be
excluded from being monitored by FPolicy.
Note: Data Insight does not support scanning of NFS shares using a Collector node
that is running Windows Server 2012 or Windows Server 2012 R2 edition.
Note that the list of volumes contains the names of the volumes, not their
paths.
7 Select the Define custom cron schedule radio button, and select Never from
the schedule drop-down.
8 To verify that the share is added, on the Management Console, navigate to
Settings > Filers.
9 Click the relevant filer. On the filer details page, click the Monitored Shares
tab to review the list of configured shares.
10 Repeat the steps for each NetApp filer for which you want to add an artificial
share.
represents a filer for Data Insight. Every SVM can have multiple physical nodes.
Data Insight uses the FPolicy interface on ONTAP to receive event notifications,
which are sent in XML format over a TCP connection to the Data Insight FPolicy
server.
Data Insight automatically discovers the following entities in the cluster environment:
■ The SVMs and the CIFS server underneath each SVM.
■ The CIFS shares for each CIFS server.
To enable Data Insight to discover shares in the cluster, you must ensure the
following:
■ Complete all the pre-requisites.
See “Pre-requisites for configuring NetApp file servers in Cluster-Mode”
on page 100.
■ Provide correct cluster user credentials when adding the clustered file server.
You can also use the credentials of a local cluster administrator user.
See “Credentials required for configuring a clustered NetApp file server”
on page 101.
■ You can also choose to configure the NetApp cluster in Data Insight by using
SSL certificates.
See “About configuring secure communication between Data Insight and
cluster-mode NetApp devices” on page 109.
■ Configure the DataInsightFPolicyCmod service on the Collector that is configured
to monitor the filer.
See “Preparing Data Insight for FPolicy in NetApp Cluster-Mode” on page 106.
■ After you add a CIFS server from the ONTAP cluster to Data Insight, Data Insight
automatically enables FPolicy on the corresponding SVM on the ONTAP cluster.
This operation helps the ONTAP cluster register with the Data Insight FPolicy
server. Note that in ONTAP 8.2 Cluster-Mode, the connection is initiated by
ONTAP.
See “Preparing the ONTAP cluster for FPolicy” on page 107.
■ Add the clustered NetApp file server to the Data Insight configuration.
See “Adding filers” on page 155.
See “Add/Edit NetApp cluster file server options” on page 160.
Once a TCP connection is established with the Data Insight FPolicy server (Collector
node), it starts receiving access event information from the ONTAP cluster.
Note: Data Insight does not support scanning of NFS shares on a clustered NetApp
file server.
Note: The FPolicy server for a NetApp standalone (7-mode) configuration and
C-Mode configuration can co-exist on the same Data Insight Collector node.
■ Data Insight should be able to communicate with the CIFS server hosted within
the ONTAP cluster.
See “About configuring a clustered NetApp file server” on page 98.
Credential Details
Credentials required during filer configuration through the Symantec Data Insight Management Console
Required to discover shares and enable FPolicy on the NetApp filer.
This credential belongs to the NetApp ONTAP cluster administrator user who is
a local user on the ONTAP cluster. Or, this credential belongs to an ONTAP
cluster non-administrator user with specific privileges.
Credentials required for scanning of shares
Required for scanning of shares from the NetApp filer.
When scanning CIFS shares, this credential belongs to the user in the domain
of which the NetApp filer is a part. This user must belong to either the
Power Users or Administrators group on the NetApp filer. If the credential
is not part of one of these groups, the scanner is not able to get
share-level ACLs for shares of this filer.
5 Run the following command to create a local user, for example, testuser, and
assign the role that you created in step 3 to the user:
■ If the cluster does not have a data Storage Virtual Machine (SVM) with a
CIFS server created, you can use any data SVM in the cluster and join it
to a domain by using the vserver active-directory create command.
Set the --vserver parameter to the data SVM. Joining a data SVM to a
domain does not create a CIFS server or require a CIFS license. However,
it enables the authentication of users and groups at the SVM or cluster-level.
2 Grant a user or a group access to the cluster or SVM with the -authmethod
parameter set to domain.
Also, create a new role, for example testrole, using the useradmin utility on
the filer.
The following command enables <testuser> in the <DOMAIN1> domain to
access the cluster through SSH:
the cluster. The open sessions that were authenticated before the deletion
of the authentication tunnel remain unaffected.
Enables you to log into the NetApp filer and run commands. With this capability,
you can get latency statistics (for scan throttling), volume size information, or
discover shares.
Run the following commands to create the role with specific privileges:
security login role create -role testrole -cmddirname
"version" -access all
You can optionally specify a default role such as admin/vsadmin which already
has these privileges.
server as a Collector for a clustered NetApp filer, you must configure the
Cluster-Mode FPolicy service on that server.
To configure the DataInsightFPolicyCmod service
1 Provision a Windows 2003 or 2008 server. Symantec recommends a minimum
requirement of a Windows 2008 64-bit server with 4 to 8GB RAM and a quad
core processor. A higher configuration may be required if the load on the FPolicy
server is high.
This computer hosts the FPolicy server.
2 Install the Data Insight Collector worker node or the Data Insight Management
Server on this server.
3 Log in to the Data Insight Management Console.
4 In the Console, click Settings > Data Insight Servers to open the listing page
for the server.
5 Select the server that is configured to monitor the NetApp clustered file server
to open the details page for the server.
6 Click the Services tab.
7 Select DataInsightFPolicyCmod to expand the section.
8 To configure the service, enter the following details:
■ The user-configured name that is used to create the FPolicy on the ONTAP
cluster.
■ The IP address of the Data Insight Collector running the FPolicy server.
The NetApp ONTAP Cluster-Mode filer connects with the
DataInsightFPolicyCmod service running on the Data Insight Collector node
on this IP address.
■ The TCP port used on the Data Insight Collector server.
The NetApp ONTAP Cluster-Mode filer connects on this port to the Data
Insight Collector. Ensure that this port is not blocked by a firewall.
9 Click Configure.
See “Configuring Data Insight services” on page 233.
Note: You can choose to configure FPolicy on the ONTAP cluster manually. However,
Symantec does not recommend using manual steps to monitor the SVMs in the
cluster.
When you configure the NetApp cluster-mode device from the Data Insight
Management Server, you must select the SSL certificate which includes a .key
file and a .pem file that are installed on the cluster-mode NetApp storage device.
See “Add/Edit NetApp cluster file server options” on page 160.
-----BEGIN CERTIFICATE-----
MIICwjCCAiugAwIBAgIJAJpgINzlWl06MA0GCSqGSIb3DQEBBQUAMHoxCzAJBgNV
BAYTAklOMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX
aWRnaXRzIFB0eSBMdGQxEDAOBgNVBAMMB2Fhc2hyYXkxITAfBgkqhkiG9w0BCQEW
EmFhc2hyYXlAbmV0YXBwLmNvbTAeFw0xMzA3MzAxNjQ2NDRaFw0xNDA3MzAxNjQ2
NDRaMHoxCzAJBgNVBAYTAklOMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQK
DBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQxEDAOBgNVBAMMB2Fhc2hyYXkxITAf
BgkqhkiG9w0BCQEWEmFhc2hyYXlAbmV0YXBwLmNvbTCBnzANBgkqhkiG9w0BAQEF
AAOBjQAwgYkCgYEAv8jid3ADQH/HQ05iZ6Tk0NF2cY9iiEna71PVKjM1L8GGkyWJ
kGioW2j1qoHO4kJEXUOMoX7YREOKLYbBQW5nx6rrg8Z3iFvP09YJnByonUIuN9QZ
96OHQ+ws9u6wNgM2LTJbcbOUUdJuOQNgaQ4XhzLDa6g0jEzyDBHbC05m2XUCAwEA
AaNQME4wHQYDVR0OBBYEFDdavnhJnCUHDJXgZEAovxcoYAsxMB8GA1UdIwQYMBaA
FDdavnhJnCUHDJXgZEAovxcoYAsxMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEF
BQADgYEAdnD5BzSlV2SiZJbOjzmhkYraNwG3WauDYlnzo8K0v6BFhxKEC/abjUaa
Ic/mBXEE8JqnLN7uqQf1wZtqIU60eNexMMdg+tstYe5O0Fnu27ss9HsmDD51A9LZ
kT5+XIfG21EYJMnFa1LwWTtmkla66GNhVEzzJKUtOXD23H6SyNc=
-----END CERTIFICATE-----
4 Copy the certificate and the key strings and save them in two different files
with extension .pem and .key respectively.
Paste the certificate to a text document exactly as it appears on the screen.
Include the top line and bottom line (-----BEGIN CERTIFICATE REQUEST-----
and -----END CERTIFICATE REQUEST-----) with the extension .pem. Make
sure that no extra lines, spaces, trailing carriage returns, or characters have
been inadvertently added.
Paste the key strings from (-----BEGIN RSA PRIVATE KEY ----- and -----END
RSA PRIVATE KEY-----) to a text document with the extension .key.
5 Send the CSR string (the string portion from “-----BEGIN CERTIFICATE
REQUEST----- to-----END CERTIFICATE REQUEST-----“) to a Certification
Authority (CA) electronically to apply for a digital certificate. After the CA sends
you the signed digital certificate, you must install it with the associated private
key (<yourKeyFileName>.key) on the Vserver.
Note: If you have a CA-signed SSL certificate, install the root certificate and
each intermediate certificate of the CA that signed the certificate by using the
security certificate install command with the –type client-ca
parameter.
Note: You should keep a copy of your certificate and private key for future reference.
If you revert or downgrade to an earlier release, you must first delete the certificate
and private key.
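The copy-and-save procedure in step 4 can be sketched in Python by locating the standard PEM markers in the combined text. The base64 payloads below are shortened placeholders, not a real certificate:

```python
# Split a combined certificate-plus-key text, as copied from the console,
# into separate .pem and .key contents by locating the standard PEM markers.
def extract_block(text, begin, end):
    """Return the block from `begin` through `end` inclusive."""
    start = text.index(begin)
    stop = text.index(end, start) + len(end)
    return text[start:stop] + "\n"

combined = (
    "-----BEGIN CERTIFICATE-----\n"
    "MIICwjCCAiugAwIBAgIJ...snip...\n"
    "-----END CERTIFICATE-----\n"
    "-----BEGIN RSA PRIVATE KEY-----\n"
    "MIICXAIBAAKBgQC...snip...\n"
    "-----END RSA PRIVATE KEY-----\n"
)

cert = extract_block(combined, "-----BEGIN CERTIFICATE-----",
                     "-----END CERTIFICATE-----")
key = extract_block(combined, "-----BEGIN RSA PRIVATE KEY-----",
                    "-----END RSA PRIVATE KEY-----")

# Save cert as <name>.pem and key as <name>.key, with no extra lines,
# spaces, or trailing carriage returns added.
```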
2 To view the security roles that were created, run the following command:
security login role show -vserver <Host name of AdminVserver>
2 To view the security logon that you created, run the following command:
security login show -vserver <Host name of AdminVserver>
After you install all required certificates, create the appropriate role and logon
credentials, and enable client authentication, you can add the NetApp cluster to
the Data Insight configuration.
To configure the EMC Celerra or EMC VNX filer to send event information to
Symantec Data Insight
1 Create a cepp.conf file on the EMC filer. The following is a sample of the code
that the cepp.conf file must contain:
surveytime=90
pool name=matrixpool \
option=ignore \
reqtimeout=500 \
retrytimeout=50
Note: If the CEE server pool contains more than one server, you may separate
each server entry with a | character. The setting ensures that the filer
sends events to the CEE servers in a round robin fashion which ensures load
balancing and high availability. However, the filer does not concurrently forward
events to the multiple CEE servers. In case of VNX, you must modify the
cepp.conf file so that events are simultaneously forwarded to the CEE server
pool.
2 Copy the cepp.conf file to the root directory of the Data Mover. Run the following
command: server_file <datamover_name> -put cepp.conf cepp.conf
For example, server_file server_2 -put /tmp/CEPA/cepp.conf cepp.conf
3 Start the CEPP service on the filer. Run the following command:
server_cepp <datamover_name> -service -start
Ensure that the service has started by running the following command:
server_cepp <datamover_name> -service -status
Note: For detailed information about configuring CEPA, refer to the EMC
documentation.
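For reference, a cepp.conf that forwards events to a pool of two CEE servers might look like the following sketch; the IP addresses are placeholders, and the exact option set varies by Celerra/VNX version, so verify it against the EMC documentation:

```
surveytime=90
pool name=matrixpool \
servers=10.0.0.11|10.0.0.12 \
postevents=* \
option=ignore \
reqtimeout=500 \
retrytimeout=50
```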
4 Double-click Endpoint.
5 Modify the registry entry for the EMC CAVA service to allow access to the Data
Insight Collector node. Depending on the type of your Data Insight deployment,
there can be the following different scenarios:
■ The EMC CAVA service and the Collector node are running on the same
machine, and the EMC CAVA service is only being used by Data Insight.
In this case, add the Data Insight key, SymantecDataConnector, to the
Endpoint option.
■ The EMC CAVA service and the Collector node are running on the same
machine, and the EMC CAVA service is also being used by applications
other than Data Insight. In this case, append the Data Insight key,
SymantecDataConnector, to the Endpoint option. Each entry must be
separated by a semi-colon.
■ The EMC CAVA service and the Collector node are running on separate
machines, and the EMC CAVA service is being used only by Data Insight.
In this case, add the Data Insight key in the format,
SymantecDataConnector@<IP address of the Collector>, to the
Endpoint option.
■ The EMC CAVA service and the Collector node are running on separate
machines, and the EMC CAVA service is also being used by applications
other than Data Insight. In this case, append the Data Insight key in the
format, SymantecDataConnector@<IP address of the Collector>, to the
Endpoint option. Each entry must be separated by a semi-colon.
Credential Details
Credentials required during filer configuration through the Symantec Data Insight Management Console
Required to discover shares for the EMC filer. This credential belongs to the
EMC filer Control Station user who has administrative rights including the
XMLAPI v2 privilege (for example, nasadmin).
Credentials required for scanning of shares
Required for scanning of shares from the EMC filer. This credential belongs
to the user in the domain of which the EMC filer is a part.
■ Configuring audit settings on EMC Isilon cluster using OneFS GUI console
■ Configuring audit settings on EMC Isilon cluster using the OneFS CLI
Symantec Data Insight also supports scanning and event monitoring of users' home
directories on the EMC Isilon storage system.
Complete the following tasks to enable Data Insight to monitor an EMC Isilon file
server:
■ Complete all the prerequisites.
See “Prerequisites for configuration of Isilon file server monitoring ” on page 122.
■ Obtain the necessary user credentials for accessing the EMC Isilon filer.
See “Credentials required for configuring an EMC Isilon cluster” on page 123.
■ Configure the audit settings on the EMC Isilon filer using either the GUI console
or the command line.
See “Configuring audit settings on EMC Isilon cluster using OneFS GUI console”
on page 124.
See “Configuring audit settings on EMC Isilon cluster using the OneFS CLI”
on page 126.
■ Perform additional auditing configuration for improved performance.
See “Configuring Isilon audit settings for performance improvement” on page 128.
■ Configure Data Insight to receive event notifications from an EMC Isilon cluster.
See “Preparing Symantec Data Insight to receive event notifications from an
EMC Isilon cluster” on page 129.
■ Add the EMC Isilon filer to Data Insight.
See “Adding filers” on page 155.
See “Add/Edit EMC Isilon file server options” on page 168.
Once you have configured an Isilon file server, as a maintenance activity you must
periodically clear the audit logs.
See “Purging the audit logs in an Isilon filer” on page 131.
■ The EMC Common Event Enabler (CEE) version 6.1 or later is installed. You
can install CEE either on the same Windows server as the Data Insight Collector
or on a remote server in the same directory service domain.
Table 8-1
Credential Details
Credentials required during filer configuration through the Data Insight Management Console
Required to discover shares for the Isilon cluster.
See “Preparing Symantec Data Insight to receive event
notifications from an EMC Isilon cluster” on page 129.
Credentials required for scanning of shares
Required for scanning of shares from the Isilon cluster. This
credential belongs to the user in the domain of which the
Isilon is a part.
Note: It is recommended that you enable the OneFS auditing feature only after
you install and configure Data Insight for your storage environment. Otherwise,
the backlog consumed by Data Insight may be so large that results may be
stale for a prolonged time.
3 Under the Audited Zones section, add the access zone that you want to audit.
To enable auditing for the entire Isilon cluster, you can select the default System
zone. For more information about access zones, see the EMC Isilon
documentation.
Note: Do not enable auditing for access zones that have file systems
configured only for the NFS protocol. Data Insight currently does not support
monitoring of NFS shares.
4 Under the Event Forwarding section, enter the uniform resource identifier (URI)
of the server where the Common Event Enabler is installed. The format of the
entry is: http://<IP of the CEE server>:<port>/cee.
For example: http://10.209.202.152:12228/cee.
Note that 12228 is the default CEE HTTP listen port. You must choose a port
number that is the same as the one configured in the registry on the computer
where CEE is installed.
See “Preparing Symantec Data Insight to receive event notifications from an
EMC Isilon cluster” on page 129.
5 Under the Event Forwarding section, add the host name of the storage cluster.
You can either use the EMC Isilon SmartConnect cluster name or the DNS
resolvable host name of one of the nodes of the cluster. Do not leave this field
blank. This host name is later used to add the EMC Isilon cluster to the
Symantec Data Insight configuration.
See “Configuring Isilon audit settings for performance improvement” on page 128.
Note: Symantec recommends that you enable the OneFS auditing feature
only after you install and configure Data Insight for your storage environment.
Otherwise, the backlog of events that Data Insight must consume may be so
large that results remain stale for a prolonged time.
■ To disable auditing:
di-isilon-1# isi audit settings modify --protocol-auditing-enabled off
After you have enabled audit settings on the EMC Isilon cluster you must configure
the Access Zones on the cluster.
Note: Do not enable auditing for access zones that have file systems configured
only for the NFS protocol. Data Insight currently does not support monitoring of
NFS shares.
Using the command line interface, you can enable specific audit events.
See “Configuring audit settings on EMC Isilon cluster using OneFS GUI console”
on page 124.
See “Configuring Isilon audit settings for performance improvement” on page 128.
The number of different types of events that Data Insight monitors affects system
performance. To reduce the performance overhead, you can disable auditing of
audit-failure events. You must use the OneFS CLI to disable this auditing.
To disable auditing for audit-failure events:
1 Log on to the Isilon OneFS cluster using the command line interface.
2 Issue the command:
isi zone zones modify system --remove-audit-failure all
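For reference, the corresponding CLI steps to enable protocol auditing follow the same pattern as the GUI steps described earlier. The following is a sketch only, assuming OneFS 7.x command syntax; the cluster host name and CEE URI are hypothetical placeholders, and you should verify the exact options with `isi audit settings modify --help` on your cluster.

```shell
# Sketch (OneFS 7.x syntax assumed; host names below are hypothetical placeholders).

# Enable protocol (SMB) auditing and audit the default System access zone.
isi audit settings modify --protocol-auditing-enabled on --audited-zones system

# Forward audit events to the CEE server; 12228 is the default CEE HTTP listen port.
isi audit settings modify --cee-server-uris http://cee01.example.com:12228/cee

# Set the host name that is later used to add the cluster to Data Insight.
isi audit settings modify --hostname isilon-cluster.example.com

# Review the resulting audit configuration.
isi audit settings view
```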
■ To give the user the privileges to log on to the REST API platform
framework, to get a list of CIFS shares, and to list users and groups:
isi auth roles modify dirole --add-user=username@domain --add-priv-ro=ISI_PRIV_SMB --add-priv-ro=ISI_PRIV_LOGIN_PAPI --add-priv-ro=ISI_PRIV_AUTH
■ To add the user to the Backup Operators group on Isilon, which enables
Data Insight to scan all the CIFS shares:
isi auth groups modify "Backup Operators" --add-user username@domain
■ To grant the user the privileges to log on to the REST API platform
framework, to get a list of CIFS shares, and to list users and groups:
isi auth roles modify dirole --add-user=diuser --add-priv-ro=ISI_PRIV_SMB --add-priv-ro=ISI_PRIV_LOGIN_PAPI --add-priv-ro=ISI_PRIV_AUTH
Note: The cleaning of audit logs may cause interruptions in the SMB client
connections between the Isilon file server and the Collector node, leading to scan
failures. To avoid disruption of the scanning service, perform the cleaning operation
during a planned maintenance window. The described procedure is applicable only
to OneFS 7.1 and OneFS 7.2. For more information, contact EMC Isilon technical
support.
4 Verify that no isi_audit processes are running on the cluster, by executing the
command:
isi_for_array -s 'pgrep -l isi_audit'
6 To ensure that you are in the /ifs/.ifsvar/audit directory, execute the command:
pwd
7 Optionally, create a backup of your audit directory if you want to preserve your
old audit logs. You can move or copy them to another directory by using the
mv or the cp command.
9 Instruct the Master Control Program (MCP) to resume monitoring the audit
daemons, by executing the following command:
isi services -a isi_audit_d monitor
MCP automatically restarts the audit daemons and reconstructs the audit
directory on each node when the isi_audit_d process is running.
10 Check if the audit processes have restarted, by executing the command:
isi_for_array -s 'pgrep -l isi_audit'
11 Verify that audit data was removed and reconstructed, by executing the
command:
isi_audit_viewer -t protocol
12 Verify that the audit log files are being populated after audit processes have
restarted, by executing the command:
isi_audit_viewer -t protocol
■ Creating a domain user on a Hitachi NAS file server for Data Insight
Note: Symantec recommends that you do not use any Hitachi NAS version lower
than 12.x. Using an older version may lead to serious degradation of filer
performance when you enable auditing.
Complete the following tasks to enable Data Insight to monitor a Hitachi NAS file
server:
■ Obtain the necessary user credentials for accessing the Hitachi EVS host.
See “Credentials required for configuring a Hitachi NAS EVS” on page 135.
■ Create a domain user with necessary privileges on the Hitachi NAS EVS.
See “Creating a domain user on a Hitachi NAS file server for Data Insight”
on page 135.
■ Configure the audit settings on the Hitachi NAS file server.
See “Preparing a Hitachi NAS file server for file system auditing” on page 136.
■ Add the Hitachi NAS EVS to Data Insight.
See “Adding filers” on page 155.
See “Add/Edit Hitachi NAS file server options” on page 178.
Credentials required during filer configuration through the Data Insight Management Console: A domain user in the Administrators group on the Hitachi NAS EVS. Data Insight also uses the same credentials for scanning of file metadata.
Note: The audit log consolidated cache accumulates all the individual audit logs
for each EVS. Data Insight accesses the cache to monitor the audit events.
3 Execute the following command to configure the audit log consolidated cache:
audit-log-consolidated-cache add -s <Size> <EVS name>
For example:
audit-log-consolidated-cache add -s 50MB EVS1
where EVS1 is the name of the file system EVS where you want to store the
audit log consolidated cache file. Symantec recommends that you provision
at least 50 MB of disk space for the audit log consolidated cache to avoid loss
of events.
4 To verify that auditing is enabled on the EVS for the required file system,
generate some activity on the shares created on the file system. Execute the
following command on the Hitachi NAS console to see whether the events are
generated:
audit-log-show <Name of file system>
min_events_per_run: For all read calls that fetch fewer events than
min_events_per_run, Data Insight sleeps for sleep_per_run microseconds
before making the next read call.
To alter the configuration parameters, you can use the configdb.exe command
from the Data Insight Management Server.
For example:
configdb.exe -o -T filer -k 2 -J max_events_to_pull -j 50000
where:
-o: Object attribute
-k: Filer ID
-J: Attribute name
-j: Attribute value
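The other tunables described above follow the same pattern. For example, a sketch of lowering min_events_per_run for the filer with ID 2 (the filer ID and the value are illustrative only):

```shell
configdb.exe -o -T filer -k 2 -J min_events_per_run -j 100
```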
For detailed information about installing the agent manually, see the Symantec
Data Insight Installation Guide.
If you do not want Data Insight to access events for a Windows file server, it is
possible to configure Windows file server without an agent. In this case, Data
Insight scans shares of the filer from the Collector.
■ Review the credentials that are required for configuring Windows file server
monitoring in Data Insight.
See “Credentials required for configuring Windows File Servers” on page 140.
■ Add the Windows file server to Data Insight.
See “Adding filers” on page 155.
See “Add/Edit Windows File Server options ” on page 170.
You can either add a Windows file server to Data Insight through the
Management Console, or if you want to add multiple filers together, you can
use the installcli.exe utility.
See “Using the installcli.exe utility to configure multiple Windows file servers”
on page 142.
Note: All Data Insight worker nodes must be at the same major version as the
Management Server. Windows file server agents can be one level lower than the
Management Server version. Thus, Management Server 5.0 is compatible with
both the 3.0RU1 (3.0.1) and the 5.0 versions of Windows file server agents. This
gives you enough time to plan the upgrade of your Windows file server agents.
You can also add a clustered Windows file server to Data Insight. Data Insight
supports only a Microsoft Cluster Server (MSCS) configuration.
See “Configuring a DFS target” on page 189 for details about configuring a DFS
target.
See “Using the Upload Manager utility” on page 251.
See “Adding filers” on page 155.
See “Add/Edit Windows File Server options ” on page 170.
Credentials required to install the agent on the Windows File Server: This credential belongs to a user in the Administrators group on the Windows File Server.
Credentials required to discover shares and obtain storage utilization information on the filer: Required for monitoring shares or when configuring a Windows File Server cluster. This credential belongs to a user in the Administrators group on the file server.
Credentials required for scanning shares on the Windows File Server: Required to scan a share. This credential belongs to a user with the necessary share-level permissions and file system ACLs on a Windows File Server share.
Note: If you do not want Data Insight to install an agent automatically, discover
shares on the cluster, or obtain storage utilization information, specifying the filer
credentials is optional.
The installcli.exe utility uses a .csv file with the following details as input:
■ The host name or IP address of the Windows file servers that you want Data
Insight to monitor.
■ The host name, IP address, or ID of the Collector node that is configured to
scan the filer.
■ The host name, IP address, or ID of the Indexer node that is configured for the
filer.
■ The credentials that Data Insight should use to install the agent on the Windows
file server. The credential should be in the format user@domain. installcli.exe
also accepts LocalSystem credentials as the value _LOCAL_. The same
credentials must previously have been added to Data Insight as a saved credential.
■ True or false value indicating if the filer is clustered.
■ The IP addresses of the agents. Separate multiple IP addresses with a
semi-colon. If you do not want to use an agent to monitor the filer, indicate this
option with a hyphen (-).
■ The credentials that are required to scan the filer. The credential should be in
the format user@domain. The same credentials must previously have been added
to Data Insight as a saved credential.
See “Credentials required for configuring Windows File Servers” on page 140.
■ True or false value indicating whether the scan should be enabled according to
the specified schedule.
■ In the case of a Windows file server agent upgrade, an RP or Full value indicating
the type of upgrade you want to perform. This parameter is optional.
■ Optionally, the name of the installer. If the name of the installer is not specified,
an appropriate installer is picked from the installers folder on the Collector.
■ True or false value indicating whether event monitoring should be enabled.
For example:
winnas.company.com,collector.company.com,indexer.company.com,Administrator@DOMAIN,FALSE,winnas.company.com,Administrator@DOMAIN,TRUE,TRUE,RP,Symantec_DataInsight_windows_winnas_4_0_0_3002_x64.exe
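As a quick sanity check before running installcli.exe, you can build the .csv and confirm that every row carries the same number of fields, in the order listed above. The sketch below uses hypothetical host names and credentials.

```shell
# Sketch: build an input .csv for installcli.exe with the fields listed above,
# in order. All host names and credentials are hypothetical examples.
cat > filers.csv <<'EOF'
winnas1.example.com,collector.example.com,indexer.example.com,Administrator@EXAMPLE,FALSE,winnas1.example.com,Administrator@EXAMPLE,TRUE,TRUE,RP,
winnas2.example.com,collector.example.com,indexer.example.com,_LOCAL_,FALSE,-,Administrator@EXAMPLE,TRUE,FALSE,,
EOF

# Quick sanity check: every row should carry the same number of fields.
awk -F',' '{ print NF }' filers.csv | sort -u
```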
To add multiple Windows file servers
1 Log in to the Data Insight Management Server.
2 Open a Windows command prompt and change to the INSTALL_DIR\bin
directory, where INSTALL_DIR is the installation path for Symantec Data
Insight.
3 Type the following command:
installcli -w <Path to .csv file with Windows File Server
specifications>
Note: The option to upgrade the agent automatically appears only if you have
configured the Windows File Server to allow Data Insight to automatically install
the agent.
Note: To upgrade the Windows File Server agent manually, see the Symantec Data
Insight Installation Guide. You can upgrade multiple Windows File Server agents
using the installcli utility. See “About configuring Windows file server monitoring”
on page 139.
■ The file server must be installed with Veritas Operations Manager (VOM) 4.1
or higher.
■ NFS version 3.0 is configured on the VxFS filer.
■ The LDAP or NIS domains that your users are part of must be configured in
Data Insight.
■ The Collector node for the VxFS filer must be a Windows 2008 Enterprise server.
Ensure that the Collector node monitoring the VxFS filer has the Services for NFS
file server role enabled. You can install the role on a Windows 2008 Enterprise
server through the Server Manager > Add roles option.
■ The filer is accessible from the Collector node using the host name or IP address
you plan to use when adding the filer.
You can also add a clustered VxFS file server to Data Insight. Data Insight supports
only a Veritas Cluster Server (VCS) configuration for VxFS file servers configured
in failover mode. Parallel Clustered File System is not supported in this release.
See “Adding filers” on page 155.
See “Enabling export of UNIX/Linux NFS shares on VxFS filers” on page 148.
See “Add/Edit Veritas File System server options” on page 173.
Credentials Details
Credentials required during filer configuration through the Symantec Data Insight Management Console: Required to discover shares on the VxFS filer. This credential belongs to a user on the UNIX server who has administrative rights on the VxFS filer (for example, root). The credential should belong to a root user on the VxFS filer.
2 Change directory to
/opt/VRTSsfmh/di/web/admin.
Credentials Details
Credentials required for scanning on the VxFS filer server: Required for scanning of shares from the VxFS filer.
Ensure that the device entries are added to /etc/fstab to automatically mount
NFS file systems after a reboot.
Data Insight uses /etc/exports and /etc/fstab for NFS share discovery. A sample
entry is shown below:
/didata *(rw,sync,no_root_squash)
3 Specify root access and read-only access for the Data Insight Collector node.
For example:
/demoshare <Collector node IP>(ro,sync,no_root_squash)
where ro specifies read-only access.
You can specify read-write access, root_squash, anonuid, anongid, or other
settings, as required.
4 Run the following command to start the NFS daemon:
#service nfs start
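A common pitfall in /etc/exports is whitespace between the client specification and the option list: the options then apply to all hosts instead of the named client. The check below is a minimal sketch (the share paths and client address are hypothetical) that flags such entries.

```shell
# Sketch: flag /etc/exports lines with a space before the option list, which
# exports the options to all hosts rather than the named client.
# The file contents below are hypothetical examples.
cat > exports.sample <<'EOF'
/didata *(rw,sync,no_root_squash)
/demoshare 10.0.0.5(ro,sync,no_root_squash)
/badshare 10.0.0.5 (ro,sync,no_root_squash)
EOF

# A well-formed entry has no whitespace between the client and "(options)".
grep -n ' (' exports.sample
# → 3:/badshare 10.0.0.5 (ro,sync,no_root_squash)
```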
Credentials required for scanning of shares: Required for scanning of shares from the filer.
■ Adding filers
■ Deleting filers
■ Adding shares
■ Managing shares
■ Deleting shares
■ The object ID of the filer. This numerical value is used to identify the filer
when troubleshooting issues with the filer. This column is hidden by default.
To view this column, click the column header and select Columns > ID.
■ The name of the filer.
■ The number of shares on the filer that Data Insight monitors.
■ The health of the filer.
■ The type of filer: NetApp, EMC Celerra, Windows File Server, Veritas File
System (VxFS) server, or a generic device.
■ Whether file system event monitoring is enabled.
■ The Collector node for the filer.
■ The Indexer node for the filer.
■ The scanning schedule for the filer. This column is hidden by default.
Adding filers
You must add filers that you want Symantec Data Insight to monitor.
To add filers
1 In the Console, click Settings > Filers.
The Filers page displays the list of available filers.
2 On the Filers page, click the Add New Filer drop-down, and select the type of
filer you want to add.
3 On the New Filer screen, enter the filer properties, and click Add New Filer.
If you are adding a Windows File Server, Data Insight can automatically install
an agent on the filer. This agent enables Data Insight to receive event
notifications from the filer.
For detailed information about installing the agent manually, see the Symantec
Data Insight Installation Guide.
Note: If you are configuring multiple filers for the first time, download a sample
of the CSV file; create a CSV file with the details corresponding to the type of
filer you want to configure.
4 Click Upload.
Field Description
Filer host name or IP address: Enter the host name or IP address of the filer that you want Data Insight to monitor.
Note: The host name or IP address should be the same as the filer name entered in Symantec Data Loss Prevention targets.
Events and metadata collected from the filer are processed and stored
on the Indexer node.
Field Description
Filer administrator See “Credentials required for configuring NetApp filers” on page 79.
credentials
Specifying the filer administrator credentials is optional if you choose not to monitor events on the filer or enable share discovery.
Test credentials: Click to test the availability of network connection between the Collector worker node and the filer, and to test the validity of the specified credentials.
Filer is vFiler: Select the check box to indicate that this filer is a NetApp virtual file server.
Physical filer for vFiler: The host name or IP address of the physical NetApp file server that is associated with the virtual file server. If the Data Insight FPolicy safeguard is not enabled for the virtual file server, the field is not editable.
Enable CIFS monitoring: Select this check box to enable monitoring of CIFS shares.
Enable NFS monitoring: Select this check box to enable monitoring of NFS shares.
Select domain: From the drop-down, select the domain to which the NetApp filer belongs. This option is enabled when you enable the monitoring of NFS shares.
Monitoring details: Select Automatically discover and monitor shares on this filer to allow Data Insight to automatically discover shares on the filer and add them to the configuration. Discovery of shares takes place as soon as you add a new filer and then twice each day at 2:00 A.M. and 2:00 P.M.
Exclude shares from discovery: Enter the details of shares that should not be included during discovery.
Enable storage utilization analytics: Select the check box to allow Data Insight to gather storage utilization information from the filer. This information is used when you generate Filer Utilization and Filer Growth Trend reports.
Register for explicit Read events: Select the option to register for explicit Read events. When this option is not selected, OPEN events are treated as READ events.
Note: NFSv3 does not support OPEN events. This means that you will not see READ events for NFS shares when this check box is cleared.
Symantec recommends that you do not register for explicit Read events, because doing so can increase the load on the filer during peak traffic from third-party applications such as backups over CIFS.
Enable filer scanning: Select the check box to enable filer scanning according to the specified schedule.
Scanning schedule for full scans: Select one of the following to define a scanning schedule for shares of this filer:
Scanner credentials: See “Credentials required for configuring NetApp filers” on page 79.
Scan new shares immediately: Select this option to scan newly added shares immediately, instead of waiting for the normal scan schedule. Scanning proceeds only when scanning is permitted on the Collector node.
See “Enabling export of NFS shares on a NetApp file server” on page 95.
Field Description
Cluster Management Host: Enter the host name or IP address of the NetApp Cluster Management host interface that is used to manage the nodes in the cluster.
Events and metadata that are collected from the cluster are processed and stored on the Indexer node.
Cluster Management Interface credentials: Data Insight uses the credentials that you specify to discover the following:
■ The SVMs in the cluster and the CIFS server underneath each SVM.
■ The CIFS shares for each of the CIFS servers.
Cluster Management SSL Certificate: Optionally, you can use an SSL certificate instead of the Cluster Management Interface credentials.
■ Select the radio button if you want to authenticate the communication
between Data Insight and the NetApp cluster using a digitally signed
certificate.
■ You must generate a self-signed or a CA signed certificate and install
the SSL certificate on the cluster Admin Vserver to use this option.
See “Generating SSL certificates for NetApp cluster-mode
authentication” on page 110.
See “Preparing the NetApp cluster for SSL authentication”
on page 112.
■ From the drop-down, select a saved SSL certificate, or click Add
New.
■ On the Add SSL Certificate pop-up, do the following:
■ In the Saved SSL Certificate Identifier field, enter a logical
name to identify the filers for which the certificate is used.
■ Browse to the location of the .pem and .key files to select each
of them.
■ Click Upload to upload the SSL certificate to the Data Insight
Collector assigned to the filer.
Note: Do not choose this option if you are using domain user credentials
to configure the NetApp cluster-mode filer. If you choose to use SSL
authentication, use local user credentials.
Use Data LIF hostname for scanning (Optional): You can use a Logical Interface (LIF) associated with the Vserver to communicate with Data Insight. Data Insight uses the Data LIF to access data from the CIFS server. If the Admin LIF and the Data LIF are associated with two different networks, you must specify the Data LIF name while scanning the CIFS shares that reside on the configured CIFS servers. Providing a Data LIF hostname is also useful if the Admin LIF for the cluster is not configured for CIFS protocol access.
Test credentials: Click to test the availability of network connection between the Collector worker node and the filer, and to test the validity of specified credentials. By default, Data Insight does not test credentials for the following HOMEDIR shares:
■ ~
■ ~CIFS.HOMEDIR
■ CIFS.HOMEDIR
■ %w
■ %d
CIFS server: Every SVM node in the cluster has a CIFS server configured on it. The CIFS server represents a file server for Data Insight. Data Insight automatically discovers all the CIFS servers that are configured in the cluster. From the drop-down, select the CIFS server that you want Data Insight to monitor. Ensure that you can resolve the CIFS server host name from the Collector node.
Enable CIFS monitoring: Select this check box to enable monitoring of CIFS shares.
Monitoring details: Select Automatically discover and monitor shares on this filer to allow Data Insight to automatically discover shares on the filer and add them to the configuration. Discovery of shares takes place as soon as you add a new filer and then twice each day at 2:00 A.M. and 2:00 P.M.
Exclude shares from discovery: Enter the details of shares that should not be included during discovery.
Enable storage utilization analytics: Select the check box to allow Data Insight to gather storage utilization information from the filer. This information is used when you generate Filer Utilization and Filer Growth Trend reports.
If you clear this check box, you must manually enable FPolicy on the
filer.
Register for permission change events: Select if you want Data Insight to monitor the changes to permissions in your storage environment. By default, this option is not selected because it has a significant impact on the performance of the file server.
Enable filer scanning: Select the check box to enable filer scanning according to the specified schedule.
Scanning schedule for full scans: Select one of the following to define a scanning schedule for shares of this filer:
Scanner credentials: See “Credentials required for configuring a clustered NetApp file server” on page 101.
Scan new shares immediately: Select this option to scan newly added shares immediately, instead of waiting for the normal scan schedule. Scanning proceeds only when scanning is permitted on the Collector node.
Field Description
CIFS Server Name Enter the host name of the CIFS server that is exported by the filer.
Events and metadata that are collected from the cluster are processed and stored on the Indexer node.
Control Station Credentials: Enter the credentials for the filer's Control Station. These credentials are used to discover shares on the filer and add them to the configuration.
Virtual Data Mover: Select the check box if the filer is running a virtual data mover. This field is used to handle physical paths that are returned for virtual data movers.
Test credentials: Click to test the availability of network connection between the Collector worker node and the control station, and the validity of the specified credentials.
Monitoring details: Select Automatically discover and monitor shares on this filer to enable Data Insight to automatically discover shares on the filer and add them to the configuration.
Clear the check box if you use Control Station credentials with insufficient privileges for share discovery. If you choose to use credentials that do not have administrator rights and the XML v2 privilege, you must manually add shares to the configuration.
Discovery of shares takes place as soon as you add a new filer and then twice each day at 2:00 A.M. and 2:00 P.M.
Enable filer scanning: Select the check box to enable filer scanning according to the specified schedule.
Scanning schedule for full scans: Select one of the following to define a scanning schedule for shares of this filer:
Scan new shares immediately: Select this option to scan newly added shares immediately, instead of waiting for the normal scan schedule. Scanning will still run only when scanning is permitted on the Collector node.
Cluster Management Host: Enter the host name for the Isilon cluster. It can be the EMC Isilon SmartConnect cluster name or the DNS-resolvable host name of one of the hosts of the cluster.
Note: The Cluster Management Host name is the same host name that is entered during the configuration of audit settings on the Isilon cluster. See “Configuring audit settings on EMC Isilon cluster using OneFS GUI console” on page 124.
Cluster Management Host Credentials: Select the saved credentials from the drop-down list to access the Cluster Management Host.
Test credentials: Click to test the availability of network connection between the Collector worker node and the Isilon cluster.
Monitoring Details: Select Automatically discover and monitor shares on this filer to enable Data Insight to discover the shares on the filer automatically and add them to the configuration. Ensure that the host names of the access zones that you want Data Insight to discover are resolvable from the Collector node monitoring the filer.
Enable filer scanning: Select the check box to enable filer scanning according to the specified schedule.
Scanning Schedule (Full Scan): Select one of the following to define a scanning schedule for shares of this filer:
■ Use the Collector's default scanning schedule
■ Use custom schedule
Scan newly added shares immediately: Select this option to scan newly added shares immediately, instead of waiting for the normal scan schedule. Note that a scan can run only when scanning is permitted on the Collector node.
Field Description
Is a MSCS clustered file server: Select the check box if the Windows File Server is part of a Microsoft Cluster Server configuration.
Windows server name/Cluster name: Enter the host name or IP address of the filer that you want Data Insight to monitor. In the case of a clustered Windows File Server, enter the host name or IP address of the cluster.
Note: The host name or IP address should be the same as the filer name entered in Symantec Data Loss Prevention Discover targets.
Events and metadata that are collected from the cluster are processed and stored on the Indexer node.
Agent names for this filer: This option is visible when adding a clustered file server that is monitored using an agent, but where the agent is installed manually. Select one or more agent nodes from the list that belong to this cluster.
Let Data Insight install the agent automatically: Select to allow Data Insight to install or upgrade the agent on the Windows File Server. Data Insight automatically installs the Windows File Server agent on the filer using the WMI interface and also registers the filer with the Management Server.
Node names to install agent: This option is only visible if you have selected Is a MSCS clustered file server.
Filer Administrator Credentials: Enter the credentials that Data Insight should use to install the agent on the Windows File Server.
Test Connection: Click to test the availability of network connection between the Collector worker node and the filer, and the validity of the specified credentials.
Automatically discover and monitor all shares on this filer: Use this option to have Data Insight automatically discover shares on the filer and add them to the configuration. You can choose to exclude certain shares using the Exclude shares field. Discovery of shares takes place as soon as you add a new filer and then twice each day at 2:00 a.m. and 2:00 p.m.
Exclude following shares from discovery: Enter the details of shares that should not be included in share discovery. This option is available if you select Automatically discover all shares on this filer. Specify comma-separated patterns that you want to ignore. Patterns can have 0 or more wildcard * characters. For example, tmp* ignores shares tmp_A and tmp_abc; *$ ignores shares C$, EXT$, and others.
Collect storage utilization information for the filer: Select to enable Data Insight to collect storage utilization information from the filer. This information is used to create Filer Utilization and Filer Growth Trend reports.
Enable filer scanning: Select the check box to enable filer scanning according to the specified schedule.
Scanning schedule for full scans: Select one of the following to define a scanning schedule for shares of this filer:
Scan new shares immediately: Select this option to scan newly added shares immediately, instead of waiting for the normal scan schedule. Scanning will still take place during the hours when scanning is permitted on the Collector node.
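The exclude-share patterns described above use simple * wildcards. As an illustration only (the share names and patterns below are hypothetical), shell glob matching follows the same semantics:

```shell
# Sketch: illustrate the exclude-pattern semantics with shell glob matching.
# Share names and patterns are hypothetical examples.
matches() {
  # Prints "excluded" when the share name ($1) matches the glob pattern ($2).
  case "$1" in
    $2) echo "excluded" ;;
    *)  echo "kept" ;;
  esac
}

matches tmp_abc 'tmp*'   # tmp* excludes tmp_abc
matches data01  'tmp*'   # data01 does not match and is kept
matches 'C$'    '*$'     # *$ excludes C$ (and similarly EXT$)
```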
Field Description
This is a VCS clustered file server: Select the check box if the Veritas File System server is part of a Veritas Cluster Server (VCS) configuration.
VCS cluster name: Enter the logical name of the VCS cluster. This field is available only if you select the This is a VCS clustered file server check box.
Cluster Node IP addresses: Enter the comma-separated list of the host names or IP addresses of the physical nodes in the VCS cluster.
Filer hostname or IP address: Enter the hostname or IP address of the filer that you want Data Insight to monitor.
Note: The hostname or IP address should be the same as the filer name entered in Symantec Data Loss Prevention Discover targets.
Table 13-6 Add/Edit Veritas File System (VxFS) filer options (continued)
Field Description
Events and metadata collected from the filer are processed and stored on the Indexer node.
Login credentials: See “Credentials required for configuring Veritas File System (VxFS) servers” on page 146.
Test credentials: Click to test the availability of network connection between the Collector worker node and the filer, and to test the validity of the specified credentials.
Monitoring details: Select Automatically discover and monitor shares on this filer to have Data Insight automatically discover shares on the filer and add them to the configuration.
Discovery of shares takes place as soon as you add a new filer and
then twice each day at 2:00 a.m. and 2:00 p.m.
Exclude shares from discovery: Enter the details of shares that should not be included during discovery.
Table 13-6 Add/Edit Veritas File System (VxFS) filer options (continued)
Field Description
Enable filer scanning: Select the check box to enable filer scanning according to the specified schedule.
Scanning schedule for full scans: Select one of the following to define a scanning schedule for shares of this filer:
Scanner credentials: See “Credentials required for configuring Veritas File System (VxFS) servers” on page 146.
Scan newly added shares immediately: Select this option to scan newly added shares immediately, instead of waiting for the normal scan schedule.
See “Enabling export of UNIX/Linux NFS shares on VxFS filers” on page 148.
Field Description
Filer hostname or Enter the hostname or IP address of the device that you want Data
IP address Insight to monitor.
Events and metadata that are collected from the cluster are processed
and stored on the Indexer node.
Domain From the drop-down, select the domain to which the device belongs.
Enable filer Select the check box to enable filer scanning according to the specified
scanning schedule.
Scanning schedule Select one of the following to define a scanning schedule for shares of
for full scans this filer:
2 Enter the name of a share on the device, and click OK.
Scanner See “Credentials required for scanning a generic device” on page 151.
credentials
Scan new shares Select this option to scan newly added shares immediately, instead of
immediately waiting for the normal scan schedule. Scanning proceeds only when
scanning is permitted on the Collector node.
Field Description
Hitachi EVS Enter the host name or IP address of the HNAS file system EVS that
Hostname/IP you want Data Insight to monitor.
Events and metadata collected from the filer is processed and stored
on the Indexer node.
Hitachi EVS Data Insight uses the credentials that you specify to discover the CIFS
Credentials shares for each of the CIFS servers.
Test Credentials Click to test the availability of network connection between the Collector
worker node and the Hitachi NAS file server.
Monitoring Details Select Automatically discover and monitor shares on this filer to
allow Data Insight to automatically discover shares of the filer and add
them to the configuration.
Discovery of shares takes place as soon as you add a new filer and
then twice each day at 2:00 A.M. and 2:00 P.M.
Enable Filer Select the check box to enable filer scanning according to the specified
Scanning schedule.
Scanning Select one of the following to define a scanning schedule for shares of
Schedule (Full this filer:
Scan)
■ Use the Collector's default scanning schedule
■ Use custom schedule
Scan newly added Select this option to scan newly added shares immediately, instead of
shares waiting for the normal scan schedule. Note that a scan can run only
immediately when scanning is permitted on the Collector node.
Option Description
Once Runs the scan once at the specified time and date.
Daily Runs the scan once every day. You must specify the time when the scan
should be run.
Weekly Runs the scan once every week. You can choose to run it on every weekday,
or on specific weekdays. Also, you must specify the time when the scan
should be run.
Monthly Runs the scan on the specified days of a month. You must specify the days
of the month and the time when the scan should be run. Separate multiple
days with a comma. For example, 2,5.
Custom Cron Runs the scan according to a defined cron schedule. You can build strings
in the cron format to specify custom schedules such as every Friday at
noon, or weekdays at 10:30 a.m., or every 5 minutes between 9:00 a.m.
and 10:00 a.m. on Wednesdays and Fridays.
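The example schedules above can be written as standard five-field cron strings (minute, hour, day of month, month, day of week). Data Insight's exact cron dialect may differ slightly, so treat these strings as illustrative:

```
0 12 * * 5      # every Friday at noon
30 10 * * 1-5   # weekdays at 10:30 a.m.
*/5 9 * * 3,5   # every 5 minutes from 9:00 a.m. to 9:55 a.m. on Wednesdays and Fridays
```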
Deleting filers
You can delete a configured filer.
To delete a filer
1 In the Console, click Settings > Filers to display the configured filers.
2 Do one of the following:
■ In the filer summary table, click the Select Action drop-down, and select
Delete.
■ Click the filer you want to delete, and on the filer details page, click Delete.
Note: For EMC Celerra, VxFS, and Windows File Servers, you can only view the
count of files and folders across all the shares on the filer.
Adding shares
All shares on a filer are added to the Data Insight configuration when you add the
filer to Data Insight. You must add shares present on the filer manually if you do
not select the Discover shares automatically option when adding a filer. You can
either add multiple shares at once using a CSV file or select individual shares on
a filer that you want to add to the Data Insight configuration.
To add a share
1 In the Console, click Settings > Filers .
2 To add shares from a filer, do one of the following:
■ On the Filer list page, select the filer from which you want to add shares.
Click Add Shares in Bulk. On the pop-up, browse to the location of the
CSV file and select it. Click Upload.
If you are adding shares in bulk for the first time, you must create a CSV
file in a specific format. Download a sample CSV file to view the format.
■ Click a configured filer. On the Details page, click Monitored Shares.
3 On the Monitored Shares list page, click Add New Share or Add Shares in
Bulk.
4 On the Add Share pop-up, enter the share properties, and click Save.
See “Add New Share/Edit Share options ” on page 184.
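If you build the bulk-import CSV by hand, note that the layout below is only a guess based on the two share properties this section describes (share name and physical path on the filer); download the sample CSV from the console for the authoritative column order:

```
share1,F:\share1
share2,F:\data\share2
```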
Field Descriptions
Share name Enter the name of the share you want to add. For example, share1.
Physical path on Enter the physical path of the share on the filer. For example,
filer F:\<Share name>.
Enable legal hold Select to preserve the activity information on the share. Selecting this
for this share option disables the deletion or archiving of access event information on
the share.
Managing shares
On the Monitored Shares details page you can view the detailed information about
configured shares and run a customized scan on the configured shares.
Use the provided dynamic search filter to search for configured shares based on
the name of the share.
To view configured shares
1 In the Console, click Settings > Filers.
2 Click the filer on which the share resides.
3 On the Filer Detail screen, click Monitored Shares.
Review the following information about the shares:
■ ID of the share. The ID is required during troubleshooting. This column is
hidden by default.
■ The name of the share.
If this share belongs to a clustered filer, then the name appears as
fileserver@share, where fileserver is the name of the file server within the
cluster that hosts the share.
■ Type of this share, CIFS or NFS.
■ Enabled status of this share. This column is hidden by default.
■ Legal hold status for this share. This column is hidden by default.
■ The scanning schedule for the share. This column is hidden by default.
■ The date and time of the last full scan of the share.
■ Whether a legal hold is being enforced for the share. You can choose to
prevent access information for a share from being archived or deleted by
putting a legal hold on the share. Data Insight preserves the access events
information for such shares indefinitely.
See “Add New Share/Edit Share options ” on page 184.
4 Click the Export icon at the bottom of the page to save the data on the Monitored
Shares panel to a .csv file.
You can also add a new share, edit the share's configuration, delete the share, start
an unscheduled scan for a share, view the scan status, and download Data Insight
logs from this page.
To view the scan status of a share
1 In the Console, click Settings > Filers.
2 Click the filer on which the share resides.
You can also view the scan status for a share from the Scan History sub-tab of
the Scanning dashboard.
To view events pertaining to a share
1 In the Console, click Settings > Filers.
2 Click the filer on which the share resides.
3 On the filer details screen, click Monitored Shares.
4 Click the Action drop-down for the corresponding share, and select Event
Log.
The event log for that share appears.
5 To download the Data Insight logs for the share, click the Select Action
drop-down for the corresponding share, and select Download Log.
Data Insight downloads a compressed folder containing logs for this share from
all relevant Data Insight servers.
See “Downloading Data Insight logs” on page 338.
Note: The Scan option is not available for shares that have been disabled.
2 To scan multiple shares, select one or more shares using the check boxes.
3 Click Scan, and select Scan Selected Records.
Optionally, filter shares as needed using the filters available on the page. Click
Scan, and select Scan Filtered Records.
Note: You can use a command line utility, scancli.exe, to further customize the
scan, view the scan jobs running on a specified node, or display the scan status
for specified shares. See scancli.exe on page 386. You can also use the Scanning
dashboard view to scan shares and site collections based on more granular criteria.
Deleting shares
You can delete a configured share.
To delete a share
1 In the Console, click Settings > Filers to display the configured filers.
2 Click the filer, on which the share that you want to delete exists.
3 On the filer details page, under Monitored Shares, select the share that you
want to delete.
4 Click the Select Action drop-down and select Delete.
5 Click OK on the confirmation message.
You must first import the DFS mappings to physical shares into Data Insight before
you can view data using the DFS hierarchy.
then import settings from test1.csv and test2.csv from the Data Insight
Management Console
When you import a new DFS mapping file to Data Insight, the old mappings are
maintained in Data Insight. For example, if you import mappings from test1.csv
and then from test2.csv, the mappings from both files are displayed in Data Insight.
However, if there are some duplicate mappings (the same DFS link appears twice
– whether mapped to the same physical path or a different path), these mappings
are not imported. A message is displayed indicating that there are duplicate
mappings and hence one or more mappings cannot be imported.
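One plausible reading of the duplicate-mapping rule above is sketched below: a DFS link that appears more than once, whether mapped to the same physical path or a different one, is rejected, while all unique mappings from both files are kept. The function and CSV layout are hypothetical illustrations, not Data Insight's actual implementation.

```python
# Hypothetical sketch of merging DFS mapping CSVs while rejecting duplicates.
import csv, io

def merge_mappings(*csv_texts):
    """Merge DFS link -> physical path rows; drop any link seen more than once."""
    rows, counts = [], {}
    for text in csv_texts:
        for link, path in csv.reader(io.StringIO(text)):
            counts[link] = counts.get(link, 0) + 1
            rows.append((link, path))
    accepted = {link: path for link, path in rows if counts[link] == 1}
    duplicates = sorted({link for link in counts if counts[link] > 1})
    return accepted, duplicates

test1 = "\\\\DFS\\root\\users,\\\\filer1\\users\n\\\\DFS\\root\\AP,\\\\filer1\\ap\n"
test2 = "\\\\DFS\\root\\finance,\\\\filer2\\fin\n\\\\DFS\\root\\users,\\\\filer2\\users\n"
accepted, duplicates = merge_mappings(test1, test2)
```

Here the link \\DFS\root\users appears in both files, so it is reported as a duplicate and neither copy is imported.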
where -n is the name of the DFS root.
An exclude list can have a maximum of 128 exclude entries. For example,
\\DFS\root\AP\NUY
\\DFS\root\users
■ Enable auditing on the SharePoint server. You can enable auditing from the
Management Console when you add Web applications, or directly from the
SharePoint server.
See “ Add/Edit Web application options” on page 199.
Credential Details
Credentials required to install the Data Insight This credential belongs to a user in the
Web service on the SharePoint Server. Administrators group on the SharePoint
server.
Credential Details
Note: If you are configuring multiple web applications for the first time, download
a sample of the CSV file; create a CSV file with the details corresponding to
each web application that you want to configure.
4 Click Upload.
Field Description
Web application Enter the URL of the web application that you want Data Insight to
URL monitor.
Events and metadata that are collected from the cluster are processed
and stored on the Indexer node.
Default Site Enter the credentials that Data Insight should use to provide
Collection authenticated access to the Data Insight Web service on the SharePoint
Administrator server.
Verify credential Click to test the availability of network connection between the Collector
worker node and the SharePoint server, and to test the validity of
specified credentials. You must first ensure that the Data Insight Web
service is already installed on the SharePoint server.
Exclude following Enter the details of the site collections which should not be included
site collections during discovery.
from discovery
This option is available if you select Automatically discover and add
site collections in the added SharePoint Web Applications. Specify
comma separated patterns that you want to ignore. Patterns can have
0 or more wildcard * characters.
Monitor access for Select to enable monitoring of access events for the Web application.
this Web
application
Automatically Select to automatically enable event monitoring for all site collections
enable auditing for of this Web application.
site collections of
You can also choose to manually enable auditing by logging in to the
this Web
SharePoint server. For this purpose, you must have site collection
application
administrator privileges on the SharePoint server.
Delete audit logs Select to delete audit logs from SharePoint to prevent the Web
from SharePoint application database from growing too large. By default, Data Insight
database after deletes audit logs that are older than two days from the SharePoint
importing in Data server once every 12 hours. You can configure this interval from the
Insight. Advanced Settings page for the corresponding Collector.
You can choose to customize how often Data Insight should delete old
audit logs from the Data Insight Servers node on the Management
Console.
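The default pruning policy described above deletes audit logs older than two days on each 12-hour cycle. The age check can be illustrated as follows; the function and timestamps are hypothetical, only the two-day cutoff comes from the text:

```python
# Illustrative age filter for the audit-log pruning policy described above.
from datetime import datetime, timedelta

def logs_to_delete(log_times, now, max_age_days=2):
    """Return timestamps of audit logs older than max_age_days at time `now`."""
    cutoff = now - timedelta(days=max_age_days)
    return [t for t in log_times if t < cutoff]

now = datetime(2015, 6, 10, 12, 0)
logs = [datetime(2015, 6, 7, 9, 0), datetime(2015, 6, 9, 12, 0)]
# Only the log from June 7 is older than two days at this point.
```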
Enable scanning Select the checkbox to enable SharePoint scanning according to the
for this Web specified schedule.
application
Scanning schedule Select one of the following to define a scanning schedule for the
for full scans SharePoint servers in this farm:
Scan newly added Select this option to scan newly added site collections immediately,
site collections instead of waiting for the normal scan schedule. Scanning will still
immediately proceed only when scanning is permitted on the Collector node.
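The exclusion patterns for site-collection discovery, described earlier in this table, are comma-separated strings with zero or more * wildcards. A sketch of how such matching might behave, assuming glob-style semantics (Data Insight's exact matching rules may differ):

```python
# Illustrative wildcard matching for site-collection exclude patterns.
from fnmatch import fnmatch

def is_excluded(url, patterns_csv):
    """Return True if url matches any comma-separated wildcard pattern."""
    patterns = [p.strip() for p in patterns_csv.split(",") if p.strip()]
    return any(fnmatch(url, p) for p in patterns)
```

For example, the pattern list `*/sites/test*` would exclude `http://portal/sites/test1` but not `http://portal/sites/hr`.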
■ Click the web application whose configuration you want to edit. On the web
application detail screen, click Edit.
3 On the Monitored Site Collection list page, click Add Site Collection or Add
Site Collections in Bulk.
4 On the Add New Site Collection pop-up, enter the site collection properties,
and click Save.
Field Description
Site Collection Enter the URL of the site collection that you want to add.
URL
Scanning schedule Select one of the following to define a scanning schedule for the site
collection:
Enable legal hold Select to preserve the activity information on the site collection. Selecting
for this site this option disables the deletion or archiving of access event information
collection on the site collection.
■ The date and time of the last full scan of the site collection.
■ The time this site collection's index was last updated with scan information.
After every scan, the index is updated with information about the changes
to the folder hierarchy on a site collection. This column indicates whether
the last update was successful or has failed. It also indicates the number
of scan files pending for this site collection on the Indexer and the number
of files that failed to be indexed. The files that failed to be indexed are
present in the $data/indexer/err folder on the Indexer. If you have failed
files on the Indexer, you can move them from the err folder to the
$data/inbox folder and attempt a full scan of the site collection.
If the scan information again fails to be indexed, contact Symantec support.
■ The time this site collection's index was last updated with access event
information.
As new access events come in, the index for the site collection is periodically
updated with information about these events. This column indicates whether
the last update was successful or has failed. It also indicates the number
of audit files pending for this site collection on the Indexer and the number
of files that failed to be indexed. Audit files are present in the
$data/indexer/err folder on the Indexer. If you have failed files on the
Indexer, you can try moving them back to the $data/inbox folder on the
Indexer.
If the new audit information again fails to be indexed, contact Symantec
support.
■ The status of event monitoring for the site collection, whether enabled or
disabled.
■ Whether a legal hold is being enforced for the site collection. You can
choose to prevent access information for a site collection from being
archived or deleted by putting a legal hold on the site collection. Data Insight
preserves the access events information for such site collections indefinitely.
See “Add/Edit site collection options” on page 204.
5 Click the Export icon at the bottom of the page to save the data on the
Monitored Site Collections panel to a .csv file.
You can also edit the properties of the site collection, start an unscheduled scan
of the site collection, delete the site collection, view the event log or scan history
of the site collection, or download logs for troubleshooting purposes.
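The recovery step mentioned above, moving files that failed indexing from the err folder back to the inbox so they are retried, can be scripted. The $data paths come from the text; the function itself is a sketch, so adapt the data directory to your installation:

```python
# Sketch: requeue failed index files from <data>/indexer/err to <data>/inbox.
import os
import shutil

def requeue_failed_files(data_dir):
    """Move every file from <data>/indexer/err to <data>/inbox; return names moved."""
    err_dir = os.path.join(data_dir, "indexer", "err")
    inbox = os.path.join(data_dir, "inbox")
    moved = []
    if not os.path.isdir(err_dir):
        return moved
    os.makedirs(inbox, exist_ok=True)
    for name in sorted(os.listdir(err_dir)):
        src = os.path.join(err_dir, name)
        if os.path.isfile(src):
            shutil.move(src, os.path.join(inbox, name))
            moved.append(name)
    return moved
```

After requeuing, start a full scan of the affected site collection as described above.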
To edit a site collection
1 On the web application details page, click Monitored Site Collections.
2 Select the site collection that you want to edit, and from the Select Action
drop-down, select Edit.
3 On the Edit site collection screen, make the necessary configuration changes.
4 Click Save.
To delete a site collection
1 On the web application details page, click Monitored Site Collections.
2 Select the site collection that you want to delete, and from the Select Action
drop-down, select Delete.
3 Click OK on the confirmation message.
To view the scan history of a site collection
1 On the web application details page, click Monitored Site Collections.
2 Select the site collection for which you want to view the scan history, and from
the Select Action drop-down, select Scan History.
The scan history for the site collection appears. You can view the details in a
tabular format or in a Timeline view. The tabular view displays the following
details of a scan:
■ The start and end time of the scan.
■ The time taken for the scan.
■ The type of scan, whether full or incremental.
■ The Collector node associated with the site collection.
■ The details of the scan. For example, if a scan has failed, the Details column
indicates the exit code for the error message.
■ The user account that initiated the scan.
The Timeline view displays an hourly and daily overview of the scans run on
the site collection, including information about the status of the scan, whether
failed, successful, partially successful or aborted.
You can also view the scan history of a site collection from the Scan History sub-tab
of the Scanning dashboard.
To view events pertaining to a site collection
1 In the Console, click Settings > SharePoint Web Applications.
2 On the web application details screen, click Monitored Site Collections.
3 Click the Select Action drop-down for the corresponding site collection, and
select Event Log.
The event log for that site collection appears.
4 To download the logs for the site collection, click the Select Action drop-down
for the corresponding site collection, and select Download Logs.
Data Insight downloads a compressed folder containing the logs for this site
collection from all relevant Data Insight servers.
See “Downloading Data Insight logs” on page 338.
To scan site collections in a batch
1 On the Monitored site collections tab, click the Scan button.
Note: The Scan option is not available for site collections that have been
disabled.
Note: You can use a command line utility, scancli.exe, to further customize the
scan, view the scan jobs running on a specified node, or display the scan status
for specified site collections. For details, see scancli.exe on page 386.
Data Insight scans the Box account for metadata such as the path of the file or
folder, modified by, modified at, created by, created at, size, file or folder type,
and whether more than one user collaborates on the resource.
Data Insight maps the user name for every user account attribute to the users'
attributes in the directory service for the purpose of ascertaining ownership on
folders.
Field Description
Box account name This is a free-form field. Enter a name that Data Insight uses to identify
your Box account. The name that you enter in this field represents a file
share.
Events and metadata collected from the Box account are processed
and stored on the Indexer node.
Data Insight can now access the user, folder, and file metadata
and the information about the activities performed on these files
and folders.
Enable scanning of Select the check box to enable Data Insight to scan the Box account
Box account using the Box Administrator credentials according to the specified
schedule.
Scanning schedule Select one of the following to define a scanning schedule for shares of
for full scans this Box account:
Symantec Data Insight periodically scans the Box account to obtain file
metadata and security descriptors. Each Collector worker node by
default initiates a full scan of shares at 7:00 P.M. on the last Friday of
each month.
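The default schedule above, 7:00 P.M. on the last Friday of each month, cannot be expressed in plain five-field cron. A short helper (illustrative only, not part of Data Insight) shows how that instant can be computed for a given month:

```python
# Compute 7:00 P.M. on the last Friday of a month (illustrative helper).
import calendar
from datetime import datetime

def last_friday_scan(year, month, hour=19):
    """Return the datetime of the default monthly full scan: 7:00 P.M., last Friday."""
    last_day = calendar.monthrange(year, month)[1]
    # Walk backward from month end; the last Friday is within the final 7 days.
    for day in range(last_day, last_day - 7, -1):
        if calendar.weekday(year, month, day) == calendar.FRIDAY:
            return datetime(year, month, day, hour, 0)
```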
■ About containers
■ Managing containers
■ Adding containers
About containers
A container can consist of similar entities such as filers, shares, Web applications,
site collections, or DFS paths. Grouping the entities under a single container allows
you to easily define the scope of a role assigned to a user.
For example, User1 is assigned the Product Administrator role. You can further
define the scope of the role by selecting a container that contains only the filers
that you want User1 to access.
Managing containers
You can add containers to Data Insight, view details of the configured containers
and delete one or more containers on the Containers listing page.
To manage containers
1 In the Console, click Settings > Containers to display the Containers details
page.
2 The list of configured containers appears.
Adding containers
You must add containers to Data Insight that group the filers, Web applications,
shares, site collections or DFS paths, as required.
To add a new container
1 In the Console, click Settings > Containers.
2 On the Containers page, click Add new container.
3 On the Add new container screen, enter the container properties, and click
Add new container.
4 Enter
Field Description
■ Adding a user
■ Editing users
■ Deleting users
Server Administrator Allows the user to perform all actions in the product GUI that
includes setting up all infrastructure (including filers, users, and
others) and view all the access and permissions data.
Product Administrator Allows the users to manage filer settings and optionally to view all
the access and permissions data for the given filers. Product
administrator role, configured for a select set of filers/Web
applications, is not allowed to add new filers or delete configured
filers.
Report Administrator Allows the user to view and edit all reports that are configured in
the system and all data on the Workspace tab. Additionally, the
Report Administrator may or may not be configured to take post
processing actions on a report output.
Only a user with the Server Administrator role can add a user with
the Report Administrator role.
User Allows the users to view all permissions data. However, user in
this role may or may not be allowed to view activity data of other
users and certain reports.
Users assigned this role also do not have access to any tasks on
the Settings tab.
Storage User Allows the users to view storage-related data in the Workspace
tab, but does not allow them to view permissions data or audit
logs. Users in this role do not have access to the Settings tab.
Adding a user
This section describes how to add users to Symantec Data Insight.
To add a new Data Insight user
1 In the Console, click Settings > Data Insight Users to display the Product
Users listing page.
2 Click Add New Data Insight User.
3 On the Configure new product user page, enter the user properties, and click
Add New Data Insight User.
See “Add or edit Data Insight user options ” on page 220.
Field Description
Domain name Enter the name of the domain to which the user belongs.
Role From the drop-down, select the role you want to assign
the user.
Allow access to Workspace data Select the check box to enable the user to view the
screens on the Workspace and the Reports tabs.
Allowed to remediate data and Select the check box to enable the user to take
permissions remediation actions on the report output. If the user is
restricted access to the Remediation tab during reports
configuration, the user cannot execute remediation actions
when the report is run. Also, if a role restricts the user
from taking remediation actions, the actions are not
executed even if such user runs a report created by a user
who is allowed to take remediation actions on a report
output.
Allowed to view activity data? This option lets you restrict the user's access to activity
views and certain reports. Select the check box to enable
the user to view access information of other users. If the
check box is cleared, the following tabs and views will not
be accessible to the user:
Editing users
After you add a user to Data Insight, you can edit the user properties. For example,
you might need to edit any of the following:
■ The role assigned to the user
■ The view option for the user
■ The filers and/or Web applications that the user is allowed to monitor
To edit the properties of a user
1 In the Console, double-click Settings > Data Insight Users to display the
Product Users listing page.
2 Click the Edit button for the corresponding user.
3 On the Edit Data Insight user page, make changes as necessary, and click
Save.
See “About Data Insight users and roles” on page 218.
Deleting users
You can delete Data Insight users.
To delete a user
1 In the Console, double-click Settings > Data Insight Users to display the
Product Users listing page.
2 Select the user, and click Delete.
3 Click OK on the confirmation message.
4 Click Install.
You can view the progress and status of the installation on the Installation Status
page.
See “Viewing the status of a remote installation” on page 253.
Data Insight also displays the recommendation to upgrade to the latest version, if
available.
To view server events
1 In the Console, click Settings > Data Insight Servers.
2 Click the Select Action drop-down for the corresponding server in the servers
listing table, and select Event Log.
Or, on the details page for a server, click Event Log.
The event log for that server appears.
You can create a node template to change one or more settings on multiple nodes.
Node templates are useful when multiple nodes need to inherit the same settings,
for example, a higher number of indexer threads for all Indexers in your
environment.
See “Managing node templates” on page 254.
To apply a template to a Data Insight node
1 In the Console, click Settings > Data Insight Servers.
2 Select the server to which you want to apply a template, and from the Apply
a node template drop-down, select a configured template.
3 Click Yes to confirm.
To delete a server
1 In the Console, click Settings > Data Insight Servers.
2 Click the Select Action drop-down for the corresponding server in the servers
listing table, and select Delete.
Note: Data Insight does not allow you to delete a server if it is associated with a
storage device.
See “About automated alerts for patches and upgrades” on page 249.
See “Configuring advanced settings” on page 234.
server. At this time, Data Insight does not support changing the address of
the server.
■ Roles
This indicates the roles that the server plays. Possible server role values
are Management Server, Indexer, Collector, and Windows File Server
Agent.
■ Filer Name
If the server is a Windows File Server Agent, the name of the associated
file server is displayed here.
■ Data Insight version
Indicates the version of Data Insight installed on this server.
■ Operating System
Indicates the operating system installed on this server.
■ CPUs
Indicates the number of CPUs available on this server.
■ Memory
Indicates the RAM installed on this server in MBs.
■ Associated Windows File Server
This detail is available only if you select the server with the Windows Filer
Server agent role. Indicates the host name or IP address of the Windows
File Server that the agent is monitoring.
■ Server Health
Indicates the current state of the server - whether a server is in a healthy
state, is faulted, or at risk. You can also view the reasons for the current
health state of the server.
■ Composition of Data Directory
The pie chart shows the disk space utilized by various folders under the
data directory. These folders include collector, indexer, console,
workflow, attic, inbox, outbox, and others.
■ Product Updates
The suggestion for upgrades if a newer version of Data Insight is available.
You must add the Portal role to the Data Insight server you want to use as the Portal
node.
You must ensure that the node to which you want to add the Portal role is upgraded
to Release 4.5.1.
For more information about the Self-Service Portal node, see the Symantec Data
Insight Installation Guide.
To add the Portal role to a Data Insight server
1 In the Console, click Settings > Data Insight Servers.
2 Click the Select Action drop-down for the corresponding server that you want
to use as the Self-Service Portal, and select Add Portal role.
The option to add the Portal role to the Management Server, Collector node
or Collector and Indexer node is not available if the Portal role is already added
to the server.
3 On the Add Portal role pop-up, enter the default port number. The
DataInsightPortal service runs on port 443 by default.
However, if you want to designate the Management Server as the Portal node,
you must enter a port number other than the default port number because the
DataInsightWeb service also runs on port 443 on the Management Server.
4 Click Configure to designate the server as the Portal node and to install the
DataInsightWorkflow service and the DataInsightPortal service on the server.
See “Configuring Data Insight services” on page 233.
Table 18-1 lists the type of files and their location on server nodes.
Table 18-1 Files and their location on Data Insight servers (continued)
3 Click Cancel to cancel a particular scan or click Cancel All to cancel all scans.
Note: To view the in-progress scans across all nodes in your environment, navigate
to the Settings > Scanning > In-progress Scans tab.
from generic file servers and web API clients, and copies them to a specific
folder on the Collector.
■ DataInsightWorkflow service - The service runs only on the Management Server.
This service is responsible for managing the lifecycle of various actions initiated
from the Management Server.
■ DataInsightPortal service - The service runs on any server that is designated
as the Portal node. It provides an interface to the portal where the custodians
can log on to take remediation action. The service runs on the Management
Server and the Portal node.
For detailed information about the Data Insight services, see the Symantec Data
Insight Installation Guide.
Depending on the type of the filers managed by the Collector, you can enable the
FPolicy, EMC Celerra, or Generic Collector service on the server from this page.
To enable or reconfigure the FPolicy or EMC Celerra service
1 On the Services tab, click the service that you want to enable on the server.
2 From the Select saved credential drop-down, select the credential that the
service uses to run. Ensure that the user used to configure the FPolicy service
is added to the Group Policy object with the Log on as a service privilege
in your Active Directory domain controller.
3 If configuring the DataInsightFpolicy service, enter the name of the policy.
4 If configuring the DataInsightCelerra service, select one of the following to
specify the location of the server on which the EMC CAVA service is installed:
■ EMC CAVA Service is installed locally on this server
■ Remote EMC CAVA Server Pool publishes events to this server
5 Click Configure.
See “Configuring advanced settings” on page 234.
See “Credentials required for configuring NetApp filers” on page 79.
See “Credentials required for configuring EMC Celerra filers” on page 119.
■ Filesystem Scanner settings - Configures how the server scans file systems.
Data Insight performs two types of scans on the configured shares:
■ Full scans
During a full scan, Data Insight scans the complete share. These scans can
run for several hours, if the share is very big. Typically, a full scan should be
run once for a newly added share. After the first full scan, you can perform
full scans less frequently based on your preference. Ordinarily, you need to
run a full scan only to scan those paths which might have been modified
while event monitoring was not running for any reason. In all other cases,
the incremental scan is sufficient to keep information about the file system
metadata up-to-date.
See Table 18-2 on page 236.
■ Incremental scans
During an incremental scan, Data Insight re-scans only those paths of a
share that have been modified since the last full scan. It does so by
monitoring incoming access events to see which paths had a create or
write event on them since the last scan.
See Table 18-3 on page 238.
■ Indexer settings - Configures how the indexes are updated with new information.
This setting is applicable only for Indexers.
See Table 18-5 on page 240.
■ Audit events preprocessor settings - Configures how often raw access events
coming from file servers must be processed before they are sent to the Indexer.
See Table 18-6 on page 241.
■ High availability settings - Configures how this server is monitored.
Each server periodically monitors its CPU, memory, state of essential services,
number of files in its inbox, outbox, and err folders. Events are published if these
numbers cross the configured thresholds. Also, each worker node periodically
heartbeats with the Management Server. The Management Server publishes
events if it does not receive a heartbeat from a node in the configured interval.
See Table 18-7 on page 242.
■ Report settings - Configures settings for reports.
See Table 18-8 on page 242.
■ Windows File Server Agent settings - Configures the behavior of the Windows
File Server filter driver. This setting is applicable only for the Windows File Server
Agent server.
See Table 18-9 on page 243.
■ Veritas File System server (VxFS) settings - Configures how Data Insight scans
the VxFS filer.
Note: Symantec recommends using the custom properties settings under the
guidance of the Support.
You can configure the advanced settings per node or save commonly used settings
as templates. See “About node templates” on page 254.
To configure advanced settings
1 In the Console, click Settings > Data Insight Servers.
2 Click the server for which you want to configure the advanced settings.
3 Click Advanced settings.
4 Click Edit.
5 Make necessary configuration changes, and click Save.
See “Managing node templates” on page 254.
Each of the categories for the advanced settings is described in detail below.
Setting Description
Total scanner threads The Collector can perform multiple full scans
in parallel. This setting configures how many
full scans can run in parallel. The default
value is two threads. Configure more threads
if you want scans to finish faster.
Scan multiple shares of a filer in parallel This setting indicates if the scanner can
perform a full scan on multiple shares of the
same filer in parallel.
Maximum shares per filer to scan in parallel If multiple shares of a filer can be scanned in
parallel, this setting puts a limit on the total
number of shares of a filer that you can scan
in parallel.
Pause scanner for specific times You can configure the hours of the day when
scanning should not be allowed. This setting
ensures that Data Insight does not scan
during peak loads on the filer.
Setting Description
Scan multiple shares of a filer in parallel The setting indicates whether the scanner
can perform an incremental scan on multiple
shares of the same filer in parallel.
Maximum shares per filer to scan in parallel If multiple shares of a filer can be scanned in
parallel, this setting puts a limit on total
number of shares of a filer that can be
scanned in parallel.
Pause scanner for specific times You can configure hours of the day when
scanning should not be allowed. This setting
ensures that Data Insight does not scan
during peak loads on the filer.
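The incremental-scan selection described earlier, re-scanning only the paths that saw a create or write event since the last scan, can be sketched as follows. This is a conceptual illustration only; the event structure and function name are hypothetical, not Data Insight internals.

```python
# Hypothetical sketch: choose only the paths that need re-scanning,
# based on create/write events observed since the last full scan.

def paths_to_rescan(events, last_scan_time):
    """Return the set of share paths with a create or write event
    newer than the last completed scan."""
    changed = set()
    for event in events:
        if event["type"] in ("create", "write") and event["time"] > last_scan_time:
            changed.add(event["path"])
    return changed

events = [
    {"path": r"\\filer\share\a.txt", "type": "write", "time": 1200},
    {"path": r"\\filer\share\b.txt", "type": "read", "time": 1300},
    {"path": r"\\filer\share\c.txt", "type": "create", "time": 900},
]
print(paths_to_rescan(events, last_scan_time=1000))  # only a.txt qualifies
```

Read events (such as the one for b.txt) never trigger a re-scan; only modifications do, which is why incremental scans stay cheap compared to a full scan.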
Setting Description
Scanner snapshot interval Scanning a big share can take several hours.
The scanner periodically saves scanned
information to disk so that it is visible sooner,
without waiting for the entire scan to finish.
Setting Description
Limit maximum events processed in memory By default, the indexer processes all new
incoming events in memory before saving the
events to disk. If you are running short of
RAM on your Indexer, you can limit the
maximum number of events that the indexer
processes in memory before it saves them to
disk.
Reconfirm deleted paths when reconciling full scan information After Data Insight
indexes full scan data, it computes the paths that no longer seem to
be present on the file system. Set this option
to true to have Data Insight re-confirm that those
paths are indeed deleted using an
incremental scan before removing them from
the index.
Indexer integrity checking schedule Data Insight checks the integrity of its
databases once a week. If any errors are
found in the database, an event is published.
You can configure a different schedule if
required.
Setting Description
Audit events preprocessor schedule Incoming raw audit events from file servers
must be pre-processed before sending them
to the Indexer. At this stage,
collector.exe applies various heuristics
to the raw events and also removes transient
events.
Batch size (MB) The maximum size of the raw audit event files
that a single Collector thread can process.
Setting Description
Ping timeout (in minutes) If a worker node does not send a heartbeat in the
specified interval, the Management Server
publishes an event to that effect. This setting is
only applicable to the Management Server.
Notify when CPU continuously over (percentage) If the CPU usage on this server is
consistently over the specified percentage, an event is
published. (Default value: 90%)
Notify when memory continuously over (percentage) If the memory usage on this server is
consistently over the specified percentage, an event is
published. (Default value: 80%)
Notify when disk usage over (percentage) If disk usage, either for the system drive or
data drive, is over the specified threshold, an
event is published. (Default value: 80%)
Notify when disk free size under (MB) If the free disk space for the system drive or
data drive falls below the specified threshold in
megabytes, an event is published. (Default
value: 500 MB)
Notify when number of files in err folders over If Data Insight is not able to process an
incoming file for some reason, that file is
moved to an err folder. Data Insight
publishes an event if the number of files in the
err folder crosses the specified threshold.
(Default value: 50)
Notify when number of files in inbox and outbox folder over If Data Insight is not able
to process incoming data fast enough, the number of files in the
transient folders, inbox and outbox, keeps
building up. Data Insight publishes an
event if the number of files crosses the configured
threshold. (Default value: 5000)
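The checks in the table above can be illustrated with a small sketch. The threshold values mirror the listed defaults; the function and statistic names are hypothetical illustrations, not Data Insight code.

```python
# Hypothetical sketch of the high availability checks: compare sampled
# statistics against the default thresholds and report any breaches,
# for which Data Insight would publish an event.

DEFAULTS = {
    "cpu_percent": 90,          # Notify when CPU continuously over
    "memory_percent": 80,       # Notify when memory continuously over
    "disk_used_percent": 80,    # Notify when disk usage over
    "err_files": 50,            # Notify when files in err folders over
    "inbox_outbox_files": 5000, # Notify when files in inbox/outbox over
}

def breached(stats, thresholds=DEFAULTS):
    """Return the names of the statistics that cross their thresholds."""
    return [name for name, limit in thresholds.items()
            if stats.get(name, 0) > limit]

sample = {"cpu_percent": 95, "memory_percent": 60,
          "disk_used_percent": 85, "err_files": 10,
          "inbox_outbox_files": 7000}
print(breached(sample))  # ['cpu_percent', 'disk_used_percent', 'inbox_outbox_files']
```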
Setting Description
Maximum memory when generating report output Specifies the maximum memory that can be
used for generating a report output. By
default, it is 1024 MB on a 32-bit machine and
2048 MB on a 64-bit machine.
Total threads for generating report output Configure the number of report outputs that
can be generated in parallel. Default value is
2.
Total threads for generating report data By default, Data Insight executes two reports
in parallel. However, you can configure a
higher value to run multiple reports in parallel.
Maximum reports that can run simultaneously Specify the number of report instances that
can run in parallel. This setting helps you
speed up the process of report generation.
Setting Description
Maximum kernel ring buffer size The Windows File Server filter driver puts
events in an in-memory buffer before the
DataInsightWinnas service consumes them.
By default, it uses a 10 MB buffer. You can
configure a bigger buffer. Data Insight publishes
an event if events are being
dropped due to a high incoming rate.
Ignore accesses made by Local System account The Windows File Server filter driver ignores
accesses made by processes running with the
Local System account. This setting ensures
that Data Insight can ignore most events
originating from operating system
processes or other services such as antivirus and
backup.
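The kernel ring buffer behavior described above, where events arriving faster than the DataInsightWinnas service can consume them are dropped, can be sketched as a bounded buffer. This is a conceptual model only; the class and its interface are hypothetical, not the filter driver's implementation.

```python
# Hypothetical sketch of a bounded event buffer: when the producer
# outpaces the consumer and the buffer is full, new events are dropped
# and counted (the situation in which Data Insight publishes an event).
from collections import deque

class BoundedBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()
        self.dropped = 0

    def put(self, event):
        """Accept the event, or drop it if the buffer is full."""
        if len(self.items) >= self.capacity:
            self.dropped += 1   # high incoming rate: event is lost
            return False
        self.items.append(event)
        return True

    def get(self):
        """Consume the oldest buffered event, if any."""
        return self.items.popleft() if self.items else None

buf = BoundedBuffer(capacity=3)
for n in range(5):
    buf.put(n)
print(len(buf.items), buf.dropped)  # 3 events kept, 2 dropped
```

Configuring a larger buffer raises the capacity and so tolerates longer bursts, at the cost of more memory.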
Setting Description
Flush events on VxFS filer before audit Set this option to true if you want to force
VxFS to flush its events to disk each time
Data Insight requests information. This
option is useful in Proof-of-Concept (POC)
setups and enables you to see events faster.
Maximum number of audit threads This option determines how many filers to
fetch audit information from in parallel.
Maximum kernel ring buffer size (Number of records) The access event records are saved
in a log file on the VxFS filer before Data Insight
consumes them. By default, 50,000 records
can be saved in the log file. You can also
specify a larger number. Data Insight
publishes an event if events
are being dropped due to a high incoming
rate.
Setting Description
Set default credentials for NFS scanner Set this option to true if you want to allow
Data Insight to use the specified User and
Group ID to log in to scan NFS shares.
Setting Description
Schedule to fetch audit events from SharePoint server Data Insight fetches new audit
events from SharePoint periodically. By default, it does
so every 2 hours. You can configure a
different schedule.
Total scanner threads The Collector can perform multiple full scans
in parallel. This setting configures how many
full scans can run in parallel. The default
value is 2 parallel threads. Configure more
threads if you want scans to finish faster.
Scan multiple site collections of a web application in parallel This setting indicates
if the scanner can perform a scan on multiple site collections of
the same web application in parallel. The
setting is disabled by default.
Maximum site collections per web application to scan in parallel If multiple site
collections of a web application can be scanned in parallel, this setting puts
a limit on the total number of site collections
of a web application that you can scan in
parallel.
Pause scanner for specific times You can configure the hours of the day when
scanning should not be allowed. This ensures
that Data Insight does not scan during peak
loads on the SharePoint servers. The setting
is enabled by default. Scans resume from the
point they were at before they were paused.
Setting Description
Pause auto-delete for specific times You can configure the hours of the day when
auto-delete of audit events from SharePoint
server should not be allowed. This feature
can help you to avoid overloading the
SharePoint servers during the peak hours.
Pause schedule for auto-delete Specify when auto-delete of the audit logs
should not be allowed to run.
Setting Description
Preserve raw audit event files Events processed by the Audit Pre-processor
stage are deleted once consumed. If this
setting is enabled, raw audit event files will
be preserved in the attic folder in the data
directory.
Note: You can view the SPEnableAuditJob and the SPAuditJob only if the server
is configured to be the Collector for a SharePoint site collection.
For detailed information about the DataInsightWatchdog service, see the Symantec Data Insight
Installation Guide.
The line graphs display hourly, weekly, monthly, and yearly data.
To view server statistics
1 Click Settings > Data Insight Servers > Server Name > Statistics.
2 Select the following to filter the data that is displayed on the page:
■ The charts that you want to view.
■ The duration for which you want to view the data.
■ The type of statistics that you want to view - Average, Minimum, and
Maximum.
4 You can view each processing backlog chart from three perspectives. Click
one of the following aspects of the backlog:
■ The size of the backlog in MB or GB; the total size of all files that are yet
to be processed.
■ The count of files that are yet to be processed.
■ The time lag in terms of hours or days; the time lag is the difference between
the current time and the file with the oldest timestamp that is yet to be
processed.
5 For the Collector and Indexer backlogs, click the drop-down to the right of the
chart to view the top 10 objects that are contributing to the backlog. In the case
of the Collector backlog, the objects are filers or web applications; in the case
of the Indexer backlog, the objects are shares or site collections.
The Top ten chart is a bar chart.
See “Configuring advanced settings” on page 234.
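The three views of a processing backlog described above (size, count, and time lag) can be sketched as a simple computation over the pending files. The data layout and function are hypothetical illustrations of the definitions given, not Data Insight code.

```python
# Hypothetical sketch of the three backlog perspectives: total size in MB,
# file count, and the time lag back to the oldest unprocessed file.
import time

def backlog_metrics(pending_files, now=None):
    """pending_files: list of (size_in_bytes, timestamp) pairs."""
    now = time.time() if now is None else now
    total_mb = sum(size for size, _ in pending_files) / (1024 * 1024)
    count = len(pending_files)
    oldest = min((ts for _, ts in pending_files), default=now)
    lag_hours = (now - oldest) / 3600   # current time minus oldest timestamp
    return total_mb, count, lag_hours

files = [(5 * 1024 * 1024, 1000), (3 * 1024 * 1024, 4600)]
size_mb, count, lag = backlog_metrics(files, now=8200)
print(size_mb, count, lag)  # 8.0 MB, 2 files, 2.0 hours of lag
```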
Note: The Product Updates column is displayed only when the Data Insight
Management Server is able to connect to the SORT website. When there is
no connection, an error message is displayed in the footer.
3 Click the link to the latest patch that Data Insight recommends. You will be
redirected to the SORT website.
4 Download the patch from the Downloads page on the website.
5 You can refer to the README on the page for the installation instructions and
to verify the problems that have been fixed in the patch.
Note: Remote upgrade of an Indexer node (whether a Windows or a Linux Indexer
node) is not supported. You must upgrade an Indexer node manually.
■ Windows File Server agents can be upgraded remotely, from the Settings >
Data Insight Servers page of the Data Insight Management Console.
■ For Collector nodes, both upgrade and installation of rolling patches can be
done remotely.
Note: The option to install a rolling patch is available only if the first three version
numbers of the Management Server and the concerned node match. If the first
three numbers do not match, the option to upgrade must be used; for example,
when upgrading a node from 4.5.0 to 4.5.1 or from 4.0.0 to 4.5.0. However, for
Windows File Server nodes, a rolling patch can be installed in spite of a version
mismatch between the Management Server and the concerned Windows File
Server nodes, because backward compatibility is supported.
You can either use existing saved credentials for upgrading a node or create new
credentials.
You can also perform all the remote deployment actions using the installcli.exe
utility from the Windows command prompt. For detailed information on
installcli.exe, see the Command File Reference.
You can view the progress of the remote deployment operation from the Installation
Status page.
See “Viewing the status of a remote installation” on page 253.
Note: When a new version is installed on the Management Server, the installer
automatically copies itself to the installers folder of the Management Server. In
such cases, you do not need to separately upload the package to the Management
Server for upgrading other worker nodes (except the Windows File Server agents).
3 Click View Progress to view a more detailed status of the install operation.
3 On the node templates list page, select the template that you want to edit, and
from the Select Action drop-down, select Edit.
4 On the Edit Node Templates page, change the required settings, and click
Close Configuration to save the changes.
When you edit a node template, the changes in configuration do not automatically
reflect on the Data Insight servers on which the template is applied. You must apply
the modified template to the Data Insight server again for the configuration changes
to take effect on the server.
You can use the following file types as custom scripts: .exe, .bat, .pl, or .vbs.
You can view the status of remediation actions on the Settings > Action Status
tab of the Data Insight Management Console.
For information about custom scripts, see the Symantec Data Insight Programmer's
Reference Guide.
See “Viewing and managing the status of an operation” on page 268.
5 If you selected the Send email option, provide the relevant information in the
email template:
■ The email ID of the sender
■ The email IDs of the recipients
■ The email IDs of other recipients
7 In the Enter the command to be executed field, provide the file name of the
saved script.
8 Select the relevant saved credential if your system needs to run the script using
the specified credentials. The script runs with the Local System account
credentials; however, network calls made by the script impersonate the
specified user credential.
9 Click Save.
To configure the process of applying recommendations
1 Write the relevant scripts to handle changes to the following:
■ The Active Directory.
■ CIFS permissions.
For more information about the custom scripts refer to the Symantec Data
Insight Programmer's Reference Guide.
2 Save the scripts in the following locations:
■ For changes to Active Directory -
$DATADIR\conf\workflow\steps\permission_remediation\AD
3 From the Data Insight Management Console, click Settings > Permissions.
The Remediation sub-tab opens by default.
4 Click Edit. The page expands to display the configuration for permission
remediation.
5 Select Enable Permission Remediation if it is not already enabled.
6 Select Remediate using custom scripts. The panel expands to show you
the configuration details.
7 In the Enter the command to be executed field, specify the file name of the
custom script(s) that you created in step 1.
8 Click Save.
The saved scripts are used to handle the permission remediation actions after
you accept the permissions recommendations displayed on the Workspace
tab.
For information on reviewing recommendations and initiating the process of applying
them, see the Data Insight User's Guide.
■ To exclude groups with more than a certain number of users, specify the
value in the space provided.
■ To exclude specific groups, in the Exclude following groups from
recommendations pane, click the name of the group that you want to exclude.
■ To exclude a specific user, in the Available Members pane, select the
user.
You can use the name filters and domain filters to view and sort the available
user groups.
The users and groups that you select are displayed in the Exclusion List pane.
4 Click Save.
Note: If there are multiple Enterprise Vault servers in an Enterprise Vault site, then
only one of the EV servers in the site must be added to the Data Insight
configuration. If any of the Enterprise Vault servers is down, archiving of files from
the shares that use vault stores that are managed by that Enterprise Vault server
fails. For information about Enterprise Vault sites and vault stores, see the Symantec
Enterprise Vault™ Introduction and Planning Guide.
For instructions on initiating archive requests, see the Symantec Data Insight User's
Guide.
4 Click Test Credentials to verify that Data Insight can connect to the server
using the saved credentials.
5 Click Save.
You can configure additional options, such as the total size and number of the files
and folders that can be archived in one archive request.
To configure archive options for Enterprise Vault servers
1 From the Data Insight Management Console, click Settings > Data
Management.
The Archiving (Enterprise Vault Configuration) tab opens by default. It
displays a list of configured servers.
2 On the Archive Options panel, specify the preferred batch size in MB.
When archiving files to Enterprise Vault, the batch of files sent to Enterprise
Vault in one call does not exceed the given size.
3 Enter the number of files that you want to archive in one operation. By default,
you can archive 50 files in one archive request.
4 Click Save.
Note: The batch size has a higher priority than the file count for deciding the list of
files in an archive operation. Thus, Data Insight limits files in the archive operation
after the batch size limit is reached, even if the file count does not exceed the
specified limit.
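The note above, that the batch size limit takes priority over the file count, can be sketched as follows. The function is a hypothetical illustration of the rule, not Enterprise Vault or Data Insight code; the default count of 50 mirrors the default archive request size mentioned earlier.

```python
# Hypothetical sketch of assembling one archive request: files are added
# until either limit is hit, and the batch size (MB) limit closes the
# batch even when the file count is still under its limit.

def build_batch(files, max_size_mb, max_count=50):
    """files: list of (name, size_mb) pairs. Returns one batch of names."""
    batch, total = [], 0.0
    for name, size in files:
        if len(batch) >= max_count or total + size > max_size_mb:
            break               # batch is full: size or count limit reached
        batch.append(name)
        total += size
    return batch

files = [("a", 40.0), ("b", 50.0), ("c", 30.0)]
print(build_batch(files, max_size_mb=100))  # size cap stops the batch at a, b
```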
You can configure a pause window for the Enterprise Vault operations by scheduling
Data Insight to pause all the archive activities during a specific duration of time.
When the pause occurs, Data Insight submits no new archive requests. It
places all new requests in a queue and executes them after the pause window ends.
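The queueing behavior described in this paragraph can be sketched as a simple routing decision. The function, its parameters, and the hour-based window are a hypothetical simplification of the configured pause schedule, not Data Insight's scheduler.

```python
# Hypothetical sketch of the pause window: archive requests arriving
# inside the configured window are queued; others are submitted at once.

def route_request(request, hour, pause_start, pause_end, queue, submit):
    """Queue the request during the pause window; submit it otherwise."""
    if pause_start <= hour < pause_end:
        queue.append(request)     # held until the pause window ends
    else:
        submit(request)

queued, submitted = [], []
for hour, req in [(1, "r1"), (3, "r2"), (9, "r3")]:
    route_request(req, hour, pause_start=2, pause_end=6,
                  queue=queued, submit=submitted.append)
print(queued, submitted)  # ['r2'] ['r1', 'r3']
```

After the window ends, everything in the queue would be submitted in turn.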
To configure a pause schedule for the Enterprise Vault operations
1 From the Data Insight Management Console, click Settings > Data
Management.
2 On the Pause Workflow Schedule panel, select Pause workflow for specific
times. Data Insight displays a list for all the previously configured pause
schedules.
3 Do any of the following:
■ To change an existing pause schedule, click the name of the schedule and
click Edit.
■ To add a new pause schedule, click Add.
■ The days of the week for which you want to schedule the pause.
5 Click Save.
Note: Clustered Windows File Servers are added to Data Insight using the name
of the cluster. Enterprise Vault requires that the virtual file servers configured in the
cluster be added to the Enterprise Vault configuration. To enable Enterprise Vault
to archive paths on the virtual file servers, Data Insight automatically maps the
virtual file servers in the Windows cluster to the configured Enterprise Vault server.
4 Select Do not expand paths to apply the action defined in the script to the
paths selected in the view or the report. The selected paths are passed as-is
to the custom script.
5 Select Expand paths to apply the action defined in the script to all child folders
under the selected folder recursively. If you select this option to invoke an
action on the folder, Data Insight passes individual files present in that path's
hierarchy to the script, instead of the parent folder.
6 Select the additional data that you want to pass to the script.
7 Click OK to save the settings.
Note: Only some columns are displayed in the default view. You can view any other
columns by selecting them from the column header drop-down.
The Details for Action panel shows you the step-by-step break-down of the selected
operation.
To view the status of an operation
1 In the Management Console navigate to Settings > Action Status. The Action
Status page displays the details of recently triggered operations.
2 Use the check box filter to display the operations based on their Type or Status.
Additionally, you can use the search facility to display the operations based on
their attributes, such as Origin, Type, or Status.
3 Click the Origin of the selected operation to view granular details of an
operation. Alternatively, click the Select Action drop-down, and select View.
4 The details of the selected operation are displayed in the Details for Action
panel.
5 Use the check box filter to display the operations based on attributes such
as Status or Filer. Additionally, you can use the search facility to display the
details based on attributes such as Path or Status.
You can cancel an operation that is in progress. Cancelling an operation pauses
all the activities of the operation. You can re-run a canceled operation later.
To cancel an ongoing operation
1 In the Management Console, navigate to Settings > Action Status. The Action
Status page displays the details of recently triggered operations.
2 Use the check box filter to display the operations based on their Type or Status.
Additionally, you can use the dynamic filter to display the operations based on
their attributes, such as Origin, Type, or Status.
3 Click Select Action for the operation you want to cancel.
4 Click Cancel.
You can re-run a canceled or a completed operation.
To re-run a canceled or a completed operation
1 In the Management Console, navigate to Settings > Action Status. The Action
Status page displays the details of recently triggered operations.
2 Use the check-box filter to display the operations based on their Type or Status.
3 Click Select Action for the operation you want to re-run.
4 Select Run Again.
5 Select any of the following:
■ All - To run all the sub-steps for the operation.
■ Unsuccessful - To run all the failed sub-steps for the operation.
Note: The permission remediation action from the Workspace tab can only be
taken by a user with the Server Administrator role. The options to remove users or
groups or to revoke permissions using reports are only visible to users with the
Server Administrator role and to Report Administrators who are allowed to remediate
data and permissions.
■ Remove direct member users or groups from a group on the Overview tab of
the Workspace.
Or, revoke permissions of specific trustees directly from the Permissions tab
of the Workspace.
See “Making permission changes directly from Workspace” on page 276.
The permission recommendations are calculated after considering the effective
permissions for a user or a path, which include share-level permissions.
You can configure the settings required to implement the permissions
recommendations. For more information on configuring the permission remediation
settings, see the Symantec Data Insight Administrator's Guide.
Note: The users with Server Administrator role can take further action on the
recommendations after analyzing them.
You can also configure a Group Change Analysis report on the Reports tab to
analyze the effects of permission changes outside the scope of the recommendations
that are made by Data Insight.
For information about the Group Change Analysis report, see the Symantec Data Insight
User's Guide.
To analyze and apply permission recommendations
1 In the Management Console, click Workspace > Shares.
2 Drill down to the path for which you want to view the permission
recommendation.
3 Click the Permissions tab. Or right-click the folder in the navigation pane and
select Permissions > Recommendations.
4 Review the recommendations.
If the recommendations include changes to the group, the Analyze Group
Change(s) option is enabled.
5 Click Analyze Group Change(s) to run a Group Change Analysis report for
the recommended changes.
If you do not agree with any of the recommendations, you can delete the
recommendation from the list before analyzing the changes. To remove a
recommendation, click the Delete icon corresponding to the recommendation.
6 Once the report run is complete, review the Group Change Analysis report for
the Data Insight recommendations.
The report is also available on the Reports tab.
7 Review the report to analyze the effects of making the recommended changes.
8 Click Apply Changes to accept the recommendations, and to start the process
of raising a request to implement the changes.
9 You can also complete the task of making the recommended changes from
the Reports tab. Do the following:
■ Navigate to the Reports tab.
■ Select the appropriate report, and select Apply Recommendations from
Select Action drop-down.
The permission changes are handled as configured on the Settings tab.
See “About configuring permission remediation” on page 258.
Note: Data Insight allows only the user with the Server Administrator role to take
permission remediation action from the Workspace tab. The options to remove
users or groups or to revoke permissions are not visible to users other than the Server
Administrator.
■ Managing workflows
Note: Data Insight does not let you create an incident remediation workflow for
sensitive paths that are imported into Data Insight using a CSV file. This is
because the workflow requires data from DLP, such as Smart Response rules
and incident IDs and severity information for paths that violate a policy.
For more information about DLP incidents, see the Symantec Data Loss
Prevention Administrator's Guide.
■ Ownership Confirmation
Confirm the ownership of files and folders in your storage environment.
■ Records Classification
Classify the sensitive files that must be retained for a legally mandated period.
The workflow helps you classify files based on their business value and manage
the life cycle of sensitive documents by applying data management rules to the
classified data.
You can choose to archive the files that are marked as record and apply retention
categories that define how long the files must be stored before being deleted.
The files that are marked as record are retained based on the file classification
policies that they violate.
You can use the workflow to trigger automatic actions only if your organization
uses Symantec Enterprise Vault™ to archive data and if Enterprise Vault is
configured in Data Insight.
Depending on the type of workflow, the custodian may perform the following actions:
Workflow Action
Entitlement Review Review the user permissions on folders that the custodian
owns and automatically trigger a permission remediation
workflow to execute the changes.
DLP Incident Remediation Choose the configured remediation actions, and submit
the same for execution by the DLP Enforce Server.
Once you submit a workflow from the Data Insight console, the custodians receive
an email notification with a link to the Self-Service Portal. They can log in to the
portal, choose the necessary remediation actions, and submit the same for execution
by the DLP Enforce Server, Enterprise Vault server, or the Data Insight Management
Server, depending on the type of workflow.
See “About workflow templates” on page 283.
Login help text Enter any information that the portal users
may need to log in to the portal. For
example, the login credentials that are
required for the portal.
3 Click Save.
Note: You cannot delete a template if it is being used for creating a workflow.
Option Description
Template Type Describes the type of workflow that can be created using the
template.
Description Enter a short description for the template. The description can
state the kind of Entitlement Review workflow for which the
template should be used.
Option Description
Welcome Text This text appears in a pop-up when the custodian first logs in to
the Self-Service Portal. You can include the specific instructions
for remediation in this field.
Email Reminder Select the frequency, day, and time for sending email reminders to
the custodians.
2 Insert the variable in the To, From, CC, and Subject fields.
Option Description
Template Type Describes the type of workflow that can be created using the
template.
Description Enter a short description for the template. The description can
state the kind of DLP Incident Remediation workflow for which
the template should be used.
Welcome Text Select the check box to display a message to the portal users.
Use the variables from the adjoining drop-down to create the
message.
Option Description
Portal Options Click the Refresh icon to fetch the latest rules from DLP.
Email Reminder Select the frequency, day, and time for sending email reminders to
the custodians.
2 Insert the variable in the To, From, CC, and Subject fields.
Option Description
Template Type Describes the type of workflow that can be created using the
template.
Description Enter a short description for the template. The description can
state the kind of Ownership Confirmation workflow for which the
template should be used.
Welcome Text This text appears in a pop-up when the custodian first logs in to
the Self-Service Portal. You can include specific instructions for
the portal users in this field.
Portal Options Select the check boxes for the file attributes that you want to
display on the Self-Service Portal.
Email Reminder Select the frequency, day, and time for sending email reminders to
the custodians.
2 Insert the variable in the To, From, CC, and Subject fields.
Option Description
Template Type Describes the type of workflow that can be created using the
template.
Description Enter a short description for the template. The description can
state the purpose of the workflow for which the template should
be used.
Welcome Text Select the check box to display a message to the portal users.
This text appears in a pop-up when the custodian first logs in to
the Self-Service Portal. You can include the specific instructions
for remediation in this field.
Option Description
Portal Options
Email Reminder Select the frequency, day, time for sending email reminders to
the custodians.
2 Insert the variable in the To, From, CC, and Subject fields.
Option Description
Exclusion List Select the groups or users that you want to exclude from the scope of the review. Click the group or user to select it. The selected data set is listed in the Selected Groups/Users panel. Once you have excluded a user or a group, the activities of the user or the group on the paths will be ignored and thus will not be considered for the review.
For the paths that do not have custodians, you can assign custodians using the following methods:
Resource-Custodian Selection This panel displays the data set selected in the Data Selection tab. You can review the selected paths on the basis of criteria such as custodians and custodian email. You can remove a selected path from the list.
For the paths that do not have custodians, you can assign custodians using the following methods:
Managing workflows
On the workflow details page, you can complete the following tasks:
Note: Once you submit a workflow, you can only modify the deadline to complete
the workflow.
Note: The option to log in as custodian is not available if the workflow is complete or if the custodian has submitted their responses for further action for all assigned paths.
2 On the workflow listing page, click Select Action > View, or click the workflow
link to view details of a submitted, completed, or canceled workflow.
3 On the workflow summary page, you can view the list of paths that are submitted
for custodians' actions on the Self-Service Portal. The page also displays the
summary of the total paths in the workflow, the percentage of paths on which
an action is submitted on the portal, and the time within which the workflow
must be completed.
You can also view the following details:
■ The list of paths that are part of the workflow.
■ In case of a DLP Incident Remediation workflow, the Data Loss Prevention
(DLP) policies that the paths violate, the severity of the incidents, and the
incident IDs that need to be remediated. The incident ID is associated with
the available response rules for a given incident.
■ In case of a Records Classification workflow, the policies that the files violate, the name of the action, the retention category being applied to the file, and the response from the Symantec Enterprise Vault™ server.
■ The custodian(s) for whose action the workflow is submitted.
■ The status for each path can be one of the following:
Status Description
Pending Indicates that the custodian has not taken any action on
the assigned paths.
Expired Indicates that the due date for completing the workflow has expired, and the portal users will not be able to take any action on the paths in that particular workflow.
■ Depending on the type of workflow, you can also view the following
information about the actions taken on the files assigned for remediation:
Workflow Details
■ Managing policies
■ Managing alerts
Data Insight comes packaged with the following out-of-the-box policies that you
can configure according to your needs:
■ Data Activity Trigger policy
Use this policy to define the maximum cumulative count of the meta operations
on the selected paths. For example, if you have defined the maximum accesses
per day as 500 on the share \\netapp1\finshare, and the total access count
by the active set of users exceeds 500, then Data Insight generates an alert.
■ User Activity Deviation policy
Use this policy to define the threshold of deviation from the baseline activity.
The baseline activity on a file or folder is the average number of accesses that
are considered normal based on past access counts. If the activity, by the
selected users, on the selected data exceeds the specified threshold of the
baseline (for example, three standard deviations above the baseline activity),
and the maximum accesses allowed per day, Data Insight generates an alert.
You can configure how many standard deviations a user is allowed to deviate
from the defined baseline.
■ Data Activity User Whitelist-based policy
Use this policy to define a whitelist of users based on the Active Directory custom
attributes, who can access selected shares or paths. Also, you can create such
a policy with multiple conditions with multiple values for the same custom
attributes.
If users, other than those defined in the whitelist, access selected data, Data
Insight generates an alert.
■ Data Activity User Blacklist-based policy
Use this policy to define a blacklist of users based on the Active Directory custom
attributes, who should not be accessing selected shares or paths. Also, you can
create such a policy with multiple conditions with multiple values for the same
custom attributes.
If users, who are included in the blacklist, access selected data, Data Insight
generates an alert.
■ Real-time Sensitive Data Activity Policy
Use this policy to trigger real-time alerts when a selected set of users perform
any access events on the paths that violate configured DLP policies. Whenever
the events that violate Real-time Sensitive Data Activity Policy are processed
by a Collector node, alerts are generated and an email is sent to a configured
list of recipients. The policy violations are also published in the Windows
Applications and Services Logs as DataInsightAlerts events.
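The threshold arithmetic behind the Data Activity Trigger and User Activity Deviation policies can be sketched as follows. This is a minimal illustration, not Data Insight's actual implementation; the function names are hypothetical, and the 500-accesses-per-day limit and three-standard-deviation threshold are the example values used in the descriptions above:

```python
import statistics

def trigger_alert(daily_access_count, max_accesses_per_day=500):
    """Data Activity Trigger: alert when the cumulative access
    count on a path exceeds the configured daily maximum."""
    return daily_access_count > max_accesses_per_day

def deviation_alert(past_daily_counts, today_count,
                    num_std_devs=3, max_accesses_per_day=500):
    """User Activity Deviation: alert when today's activity exceeds
    both the baseline (the average of past access counts) by the
    allowed number of standard deviations, and the daily maximum."""
    baseline = statistics.mean(past_daily_counts)
    std_dev = statistics.pstdev(past_daily_counts)
    threshold = baseline + num_std_devs * std_dev
    return today_count > threshold and today_count > max_accesses_per_day

# A share that normally sees about 100 accesses per day suddenly sees 800:
history = [90, 110, 105, 95, 100]
print(trigger_alert(800))             # exceeds the 500-per-day maximum
print(deviation_alert(history, 800))  # far above baseline and the maximum
```

Both checks return a simple yes/no; in the product, a positive result generates an alert and, where configured, an email notification.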
Managing policies
You can view, edit and delete configured policies, and add new policies to Data
Insight from the Policies tab.
To manage policies
1 In the Console, click the Policies tab.
The left pane displays the default policy groups.
2 Click a policy group.
The policy listing page displays the configured policies for that policy group.
3 To edit an existing policy, from the Actions drop-down, click Edit.
4 To delete a policy, select the corresponding check box and click Delete.
To add a new policy
1 In the Console, click the Policies tab.
The left pane displays the default policy groups.
2 Click the policy group that you want to base your policy on.
3 On the policy listing page, click Add new policy. Or in the tree-view panel,
right-click the policy type, and select Add.
4 On the Add new policy page, click each tab and enter the relevant information.
5 Click Save.
See “Create Data Activity Trigger policy options” on page 312.
See “Create User Activity Deviation policy options” on page 314.
See “Create Data Activity User Whitelist-based policy options” on page 316.
See “Create Data Activity User Blacklist-based policy options” on page 319.
See “Create Real-time Sensitive Data Activity policy options” on page 322.
By default, all policies, except the Real-time Sensitive Data Activity policy, are evaluated at 12:00 A.M. every night. You can schedule policies to be evaluated more frequently
for proof-of-concept (POC) setups. Note that a schedule that is too aggressive can
put excessive load on the Indexer.
You can set a custom schedule to evaluate policies from the Settings tab. The
schedule must be specified in the cron format.
To set a custom schedule for policies
1 Click Settings > Data Insight Servers.
2 Click the entry for the Management Server.
3 On the page for the Management Server node, click Advanced Settings.
4 Click Edit.
5 Scroll to the bottom of the page and expand the Set custom properties section.
6 In the Property name field, enter job.PolicyJob.cron.
7 In the Property value field, enter the value as follows:
To evaluate policies every N hours, specify the value as 0 0 0/N * * ? *. For example, to evaluate policies every two hours, specify the value as 0 0 0/2 * * ? *.
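As a sketch of how the hour field in this Quartz-style cron expression behaves, the helper below expands the start/step form used in the example above. It is illustrative only and is not part of Data Insight:

```python
def hours_from_field(hour_field):
    """Expand a Quartz-style hour field such as '0/2' (start/step)
    into the hours of the day at which the schedule fires."""
    if "/" in hour_field:
        start, step = (int(part) for part in hour_field.split("/"))
        return list(range(start, 24, step))
    return [int(hour_field)]

# '0 0 0/2 * * ? *' evaluates policies every two hours: 0, 2, ..., 22
print(hours_from_field("0/2"))
# The default nightly schedule fires once, at midnight (hour 0)
print(hours_from_field("0"))
```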
Option Description
Or you can use a .csv file with information about the paths
that you want to apply the policy to. Click Browse to
navigate to the location of the .csv file, and click Upload.
Notification Enter one or more specific email addresses for people to whom
you want to send alerts that are generated for the policy.
Option Description
1 From the drop-down, select the time range for the baseline
activity. Baseline activity is then computed as the average
access in that time range.
You can use the Domain filter search bar to filter users
or groups according to domains.
Notification Enter one or more specific email addresses for people to whom
you want to send the alerts that are generated for the policy.
Table 23-3 Create Data Activity User Whitelist-based policy options (continued)
Option Description
Or you can use a .csv file with information about the paths
that you want to apply the policy to. Click Browse to
navigate to the location of the .csv file, and click Upload.
Notifications Enter one or more specific email addresses for people to whom
you want to send alerts that are generated for the policy.
Table 23-4 Create Data Activity User Blacklist-based policy options (continued)
Option Description
Notifications Enter one or more specific email addresses for people to whom
you want to send alerts that are generated for the policy.
Option Description
Configure Policy Select Activity - Select the type of accesses to be monitored on the
selected data set.
Select the Meta Access radio button to monitor only the high-level
access events that Data Insight maps from the detailed file system and
SharePoint access events.
Select the Detailed Access radio button to monitor specific file system
and SharePoint access events.
From the DLP Policies violated drop-down list, select a set of DLP
policies to be considered for alerting.
Do the following:
You can use the Domain filter search bar to filter users or groups
according to domains.
You can also filter the users according to their Active Directory
custom attributes.
Table 23-5 Create Real-time Sensitive Data Activity policy options (continued)
Option Description
User Selection Using Attributes You can also select users by using the attribute query.
Do the following:
2 From each drop-down menu, select the criteria to build the query.
You can add multiple conditions. For evaluating a query, Data Insight
uses the logical AND operation between multiple conditions.
User Exclusion Select the users you want to exclude from the policy. Any activity
performed by these users will be ignored and will not be used to trigger
an alert.
Do the following:
You can use the Domain filter search bar to filter users or groups
according to domains.
You can also filter the users according to their Active Directory
custom attributes.
User Exclusion Using Attributes You can also exclude users by using the attribute query.
Do the following:
2 From each drop-down menu, select the criteria to build the query.
You can add multiple conditions. For evaluating a query, Data Insight
uses the logical AND operation between multiple conditions.
Notification Enter one or more specific email addresses for people to whom you
want to send the alerts that are generated for the policy.
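The logical AND evaluation of attribute queries described above can be illustrated with a small sketch. The attribute names and values here are hypothetical examples, not Data Insight fields:

```python
def matches_query(user_attrs, conditions):
    """Evaluate an attribute query: every condition must hold
    (Data Insight uses logical AND between multiple conditions)."""
    return all(user_attrs.get(attr) == value for attr, value in conditions)

# A hypothetical user and a two-condition query:
user = {"department": "Finance", "country": "US", "title": "Analyst"}
query = [("department", "Finance"), ("country", "US")]
print(matches_query(user, query))                      # both conditions hold
print(matches_query(user, query + [("title", "VP")]))  # one condition fails
```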
Managing alerts
An alert is a signal generated by a policy when the condition specified in the policy
is violated.
You can view alerts on the Alerts tab on the Management Console.
To manage alerts
1 In the Console, click the Policies tab.
The Alerts tab displays by default. On the tab, you can view all the alerts that were generated by Data Insight.
2 In the Alerts Summary, click the drop-down arrow on any column header and
select Columns. Then, select the parameters you want to show or hide. You
can sort by:
■ The name of the policy.
■ The severity of the alert.
■ The type of policy associated with the alert - Data Activity Trigger, User
Activity Deviation, Data Activity User Whitelist-based, Data Activity User
Blacklist-based, or Real-time Sensitive Data Activity.
■ The name of the user account that violated the policy.
■ The date on which the alert was generated.
■ The resolution, if any, taken in response to the alert.
3 To send alerts in email, select the alerts and click Send Email.
4 Enter the email addresses and click Send.
5 To enter the resolution for an alert, select the alert, click in the Resolution
column for the alert and type in the resolution.
To update the resolution for multiple alerts, select the alerts and click Update
Resolution at the top of the summary table.
To delete alerts
◆ To delete an alert, select an alert and click Delete.
To delete alerts by severity, click Delete and select the severity. This deletes
all alerts that match the selected severity.
To delete alerts older than a certain date, click Delete and select the date at
the top of the table.
Note: You can configure automatic deletion of alerts older than the specified
interval on the Data Retention screen. However, you cannot restore the alerts
once they are deleted. Alerts are also automatically published to the Windows
event log.
See “Configuring data retention settings” on page 44.
■ Viewing events
Note: Before you enable email notifications, you must configure the SMTP settings.
4 Select the severity of events for which the email notifications must be sent.
5 Click Save.
Viewing events
You can monitor Symantec Data Insight recent system events on the Events page.
The report displays entries for all system events. These events include the following
information about an event:
■ Time
■ Severity
■ Event summary
■ Symantec Data Insight server where the event originated
■ The user, if any, who performed the action
■ The object for which the event originated
To view system events
1 A list of recent system events appears.
2 You can choose to filter the events further using one or all of the following
criteria:
■ By time
■ By any text appearing in the event summary
■ By severity
■ By the product server on which the event originates
Enter the filter criteria in the relevant fields and click Go.
3 Click the Export icon at the bottom of the page to save the data to a .csv file.
2 On the Scan Errors page, review the time of the error, the error code, and the
possible cause of the error.
4 Open a Windows command prompt and run the following command to increment
the version of the config.db file that was changed in Step 3:
<INSTALL DIR>\DataInsight\bin\configdb -O -J dummy -j dummy
4 Specify the original location of the $data directory as the previous install. By
default, the $data directory is located at C:\DataInsight\data.
5 Clear the Launch worker node registration wizard after exit checkbox. You
do not need to register the worker node at this time as the registration
information is already present in the data that you have backed up.
6 Complete the installation. Do not start the services at this time; clear the Start
services now option when the installer prompts for it.
7 Delete the $data folder that is created as a part of the new installation and
copy the backed up data to this location.
8 Start the Data Insight services, which include DataInsightComm,
DataInsightWatchdog, and DataInsightConfig.
9 Check the status of the services and ensure that they come to running state.
Successful start of all services indicates that the Indexer node is successfully
restored.
To restore the Indexer node with a different host name or IP address
1 Repeat steps 1 through 6 as described in the section Restoring the Indexer node.
2 Edit $data/conf/<nodename>.conf and enter the new server name.
3 Open the file, $data/conf/config.db.<N> (N being the latest version of
config.db) in an SQLITE editor.
Update the node_name and node_ip columns in the node table with the host name and IP address of the new server.
4 Run the following SQL updates:
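The guide does not reproduce the exact statements here. As an illustrative sketch of the kind of update that step 3 describes, the following uses a mock node table with placeholder host name and IP address; the real config.db contains additional columns:

```python
import sqlite3

# Mock config.db with a node table, updated the way step 3 describes.
# The host name and IP address below are placeholder values.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node (node_name TEXT, node_ip TEXT)")
conn.execute("INSERT INTO node VALUES ('old-indexer', '10.209.88.10')")

# The update itself: set the new server's host name and IP address.
conn.execute(
    "UPDATE node SET node_name = ?, node_ip = ?",
    ("new-indexer.example.com", "10.209.88.99"),
)
conn.commit()

print(conn.execute("SELECT node_name, node_ip FROM node").fetchone())
# ('new-indexer.example.com', '10.209.88.99')
```

In practice you would run the equivalent UPDATE statement directly in the SQLite editor that has config.db.<N> open.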
9 On each worker node, except the Windows File Server agents, stop
DataInsightComm and DataInsightConfig services.
10 If this node is a Collector for one or more Windows File Server agents, log in to each Windows File Server and stop the DataInsightComm and DataInsightConfig services. Perform step 3 on the worker node's config.db.N.
11 Start the DataInsightComm and DataInsightConfig services on the Indexer and
all other worker nodes where configdb.N was changed. Ensure that the worker
nodes show online on the Data Insight Management Console.
Table A-1 describes the logs that are relevant for troubleshooting.
webserver0.0.log This file contains the log messages from the Web service process.
commd0.0.log This file contains the log messages from the scheduler
communication service.
adcli.log This file contains the log messages from the Active Directory
scanner process, adcli.exe.
celerrad.log This file contains the log messages for DataInsightCelerra service.
cli0.0.log This file contains the log messages for various command line
utilities.
collector.log.N This file contains the log messages for the audit pre-processor
(collector.exe).
dashboard.log This file contains the log messages for the Dashboard data
generation report.
dscli0.0.log This file contains the log messages for LDAP, NIS, NIS+ Directory
scanner.
scanner/extN_msuN.log This file contains the log messages for Full file system scans.
scanner/extN_msuN.ilog This file contains the log messages for Incremental file system
scans.
fpolicyd.log This file contains the log messages for DataInsightFpolicy service.
indexer/index-N.log This file contains log messages for index updater process.
localusers.log This file contains log messages for local users scanning process.
mxpolicy.log This file contains log messages for Data Insight policy evaluation
process.
sharepoint_audit.log This file contains log messages for SharePoint audit fetching
utility.
winnas_util.log This file contains log messages of the Windows share discovery utility.
Note: Contact Symantec Support to help you determine which of these options you
should select when troubleshooting an issue.
3 Edit the value for the parameter matrix.datadir to indicate the new location of
the data directory. For example, matrix.datadir=E:/DataInsight/data.
4 Copy the folder $DATADIR/data from the old location to the new location. For example, copy the folder from the original location C:/DataInsight to the new location E:/DataInsight.
Note: If you choose to rename the data directory, do not use any space in the
filename. Doing so will prevent the Data Insight services from starting.
6 Verify that the command output points to the new data directory location.
7 Execute the command configdb -p -T node.
Verify that the command output lists all the Data Insight servers that are in your
deployment.
8 Start the Data Insight services on the server.
9 After all Data Insight services start successfully, delete the original data
directory.
Exception Description
2013-03-21 11:17:27 WARNING: Archive: Got exception while archiving - System.ServiceModel.FaultException`1[www.symantec.com.EnterpriseVault.API.FileSystemArchiving.Data.ServerTemporaryUnavailableFault]: Unable to contact the Enterprise Vault Task Controller service. Check that the service is running. (Fault Detail is equal to www.symantec.com.EnterpriseVault.API.FileSystemArchiving.Data.ServerTemporaryUnavailableFault).
This error occurs when the Enterprise Vault task controller service cannot be reached for some reason.
[Exception in method Archive: System.Web.Services.Protocols.SoapException: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:29:59.9810000'., at evClient.Program.Archive(FileSystemArchivingService channel)]
This error occurs when you restart the Data Insight services while Data Insight is still processing an archiving operation.
To resolve this error, make sure that the File System Archiving (FSA) task and Enterprise Vault services are running.
Collector node and the NetApp filer. Ensure that this user is a part of the Backup
Operators group on the filer.
■ If the NetApp filer and Data Insight Collector node are in different domains,
ensure that the DataInsightFPolicy service is running as Local System or there
is bidirectional trust between the domains to which the filer and the Collector
belong.
Once you have performed all the checks mentioned in the earlier procedure, you might need to perform the following additional checks if the problem persists:
■ Network Settings on the Collector: If you are using a Windows 2008 machine
as a Collector, verify the local security policy for the named pipes that can be
accessed anonymously.
See “To verify the correct network security policy for FPolicy” on page 346.
■ Setting entry in the hosts file on the filer: The hosts file entry on the filer and the
format of the hosts file entry. The entry should be in the format shown:
<IP_ADDRESS> <FQDN> <Short_name>
For example,
10.209.88.190 mycollector.tulip.matrixad.local mycollector
■ SMB signing check on the filer and the Collector: Disable the SMB signing on
the filer and the Collector.
To verify the correct network security policy for FPolicy
1 On the Collector node, navigate to the Control Panel.
2 Click System and Security > Administrative tools > Local Security Policy > Local Policies > Security options.
3 Check if the value for the policy called "Network access: Named Pipes that
can be accessed anonymously" is NTAPFPRQ.
4 Restart the Collector node if you have made any changes.
Connectivity issues between the EMC filer and the host computer running the EMC CEE service
Ensure that the system time on the filer and the Windows host running the CAVA or the DataInsightCelerra service are in sync with the domain controller.
Perform the following steps:
■ Log in to the EMC Control Station using SSH or Telnet.
■ Execute the command for the corresponding data mover:
server_date server_2 timesvc start ntp <domain-controller>
where server_2 is the name of the data mover.
Ensure that the CIFS servers can communicate with the domain controller.
To test the connectivity of the CIFS server with the domain controller:
■ Log in to the EMC Control Station using SSH or Telnet.
■ Execute the following command for the corresponding data mover:
server_cifs <server_2>
where server_2 is the name of the CIFS server.
■ Verify that the output of the command contains the following line:
FQDN=<Fully qualified name of the CIFS server> (Updated to DNS)
The appearance of this line in the output confirms that the connection is established.
surveytime=90
pool name=matpool \
postevents=* \
option=ignore \
reqtimeout=500 \
retrytimeout=50
Error when you attempt to start the CEPP service on the filer
To enable the EMC VNX or Celerra filer to send event information to Data Insight, you must start the CEPP service on the filer. When you attempt to start the CEPP service, you may encounter the error 13162905604.
To troubleshoot the error while trying to start the CEPP service on a VNX filer:
■ Log in to the EMC Control Station using SSH or Telnet.
■ View the cepp.conf file by executing the following command:
server_file server_2 -get cepp.conf
■ Note the IP address mentioned in the servers section.
■ Ensure that the noted IP address is accessible and resolvable from the EMC filer Control Station.
Table A-4 describes some common issues and the troubleshooting steps to resolve them.
Data Insight is unable to fetch audit events from the EMC Isilon filer.
■ Ensure that the Isilon hostname configured in Data Insight is the same as configured in the Isilon Console audit settings.
Issues when installing the Data Insight SharePoint agent on the SharePoint server
Perform the following steps:
■ Log in to the SharePoint Central Administration Console and set the ULS log level to Verbose.
■ Collect the Event Viewer logs and ULS logs from the following locations:
■ For SharePoint 2007:
C:\Program Files\Common Files\Microsoft Shared\Web
Server Extensions\12\LOGS
■ For SharePoint 2010:
C:\Program Files\Common Files\Microsoft Shared\Web
Server Extensions\14\LOGS
■ For SharePoint 2013:
C:\Program Files\Common Files\Microsoft Shared\Web
Server Extensions\15\LOGS
■ Collect the Data Insight logs from the Collector node.
■ Send the logs to Support for assistance.
Test Connection fails for a Web Application
In the Data Insight Management Console, when you click Test Connection for a Web Application, Data Insight attempts to do the following:
Ideally, all the three tasks should succeed. If any of the three tasks fail, look
for the following log files for further troubleshooting:
■ sharepoint_util.log
■ sharepoint_audit.log
■ sharepoint_scanner_0.log
Errors during the addition of Web After you configure a Web Application, Data Insight discovers and adds site
Applications collections to the configuration automatically. Data Insight also enables the
auditing flags on the site collections.
To troubleshoot any errors during the addition of Web Applications, view the sharepoint_util.log file at <INSTALLDIR>\log.
Data Insight Collector node is not able to fetch audit events from the HNAS filer.
Refer to the following log: hansd_<filer_id>.log
To view the audit logs that are Perform the following steps:
generated on the Hitachi NAS filer
■ Using SSH or Telnet log in to Admin EVS.
■ Switch to the EVS you are monitoring.
■ Execute the following commands:
console-context --evs <EVS Name>
audit-log-show <File System Name>
Services checks
Verify if the services listed in the following figure are running:
Number of the events that are generated and processed per Collector (only in Data Insight 4.0 or later)
■ Determine the number of events received weekly by navigating to https://localhost/datadir. In the list of folders, select stats and then select statistics.db. In the query drop-down, select "weekly_stats", enter statid=252842000 in the where text box, and then click Go.
■ Determine the number of events processed weekly. Navigate as described above, but enter statid=251793411 in the where text box and then click Go.
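Equivalently, the counts can be pulled with a direct SQLite query. The sketch below uses a mock weekly_stats table, since the real statistics.db schema may differ; the statid values are the ones quoted above:

```python
import sqlite3

# Mock weekly_stats table; the real statistics.db schema may differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE weekly_stats (statid INTEGER, ts INTEGER, value INTEGER)")
conn.executemany(
    "INSERT INTO weekly_stats VALUES (?, ?, ?)",
    [
        (252842000, 1, 12000),  # events received, week 1
        (252842000, 2, 15000),  # events received, week 2
        (251793411, 1, 11800),  # events processed, week 1
    ],
)

# Events received weekly (statid=252842000):
received = conn.execute(
    "SELECT ts, value FROM weekly_stats WHERE statid = ? ORDER BY ts",
    (252842000,),
).fetchall()
print(received)  # [(1, 12000), (2, 15000)]
```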
Generic checks
Generic checks for all servers include checks for the following information:
■ Configuration version
■ Component version
■ Resource usage statistics
■ Windbg check
Configuration version
Table B-2 lists three configuration scenarios that you must review, and their related
steps:
Are the versions on the Management Server and the other nodes consistent for configuration, users, and database?
Run the following command on the Management Server/node:
# configdb -V
Note that the policy versions may vary.
Are the versions on the Management Server lower than on the other Data Insight nodes?
Contact Symantec support for more information.
Are the versions considerably higher on the Management Server than on the other Data Insight nodes?
■ Ensure that connectivity exists between the Management Server and worker nodes.
■ Ensure that connectivity exists between Collector nodes and Windows File Server agents (if applicable).
Component versioning
Perform one of the following steps to gather component versions for all nodes:
■ Navigate to Settings > Servers.
■ Or, to get the node number-wise breakdown of component versions, run the following command:
# configdb -p -T objattr_node | findstr node_info_product.version
Note: Data Insight Management Server, Indexer nodes, and Collector nodes must
run the same version of Data Insight. Windows File Server agent version may be
different from the Data Insight version on the head servers. Also note that Data Insight 5.0 supports Windows File Server agents version 4.0 or later.
Table B-3 describes the various parameters you must verify in the Data Insight
performance chart and the remedial steps required.
CPU usage is consistently high. Look for processes clogging the server.
CPU usage is consistently low. Look for cores and threads settings in
server-specific checks. Verify if a server is
unable to process more load.
Memory consumption is consistently high. Look for processes clogging the server.
Spike patterns exist in hourly or weekly charts, or both.
If the spikes in a weekly or hourly chart form a pattern, verify whether they coincide with the Data Insight schedule. Also investigate other schedules on the servers.
The following table describes the remedial steps that you must perform, if disk
usage increases to high levels.
Disk usage is consistently high; that is, consistently reaching 100 percent.
OR
■ Use stale data statistics and purge data
Windbg check
If the windbg utility is installed, check for any processes that are captured and are
running in windbg. For example, if you run a report that fails, say report.exe,
Windbg captures the process. Investigate if any process appears to be running,
but is not functional.
Ideally, there should be no error files. View errors at the following location:
datadir\indexer\err folder.
Attic check
Ensure that the attic feature is turned off on all Data Insight nodes.
That is, ensure that there are no intermediate files in the following directory:
DATADIR/attic.
Checks Steps
Checks Steps
Presence of old or stale reports in the following folder:
DATADIR/console/reports/reportruns
■ If the reports are no longer required, archive or delete them.
■ For each report, set a reasonable value in the Maximum reports to preserve field.
SMTP and events notification check Ensure that these settings are configured.
Checks Steps
Number of filers and the shares that are serviced by each Indexer
■ Navigate to Settings > Filers, and filter on each Indexer.
Or
Index integrity and space usage per Indexer
■ Check Idxcheck0.log under the installdir/log folder for database integrity and index checks.
■ Go to the end of the log file to review the overall integrity check.
■ If the integrity check fails, review the index check for each share in the same file.
■ Review two key metrics: IndexDB Size and Segments Size. Other metrics indicate stale hash sizes.
IndexDB Size is a sum of all index databases. It stores
index information such as scan information (metadata
and ACLs).
Segment size is a sum of all segment sizes. It stores
the audit events.
Hashes are indexed against the two databases -
indexDB and segments.
ActIndex Size was introduced in Data Insight 4.0 to
indicate activity in a social network map.
Checks Steps
Figure B-5 illustrates the chart
■ Ensure that the Inbox plot follows a smooth saw-tooth pattern.
■ If the number of files in inbox consistently rises, and
so does CPU and memory usage, investigate if there
is an Indexer issue.
■ Check schedule settings and number of threads to
fine-tune the processing of incoming files.
Basic Settings In the Data Insight UI, navigate to Data Insight > Indexer > Advanced Settings, and fine-tune the following parameters as necessary:
Checks Steps
Audit Events Pre-Processor Settings
In the Data Insight UI, navigate to Data Insight > Indexer > Advanced Settings.
Accumulation of backlog in DATADIR/inbox
■ Look for the date of the oldest files (backlog): If the files are very old, it means that audit or scan file processing is not able to keep up with collection.
■ Execute the configcli list_jobs command to
gather ADScanJob, ProcessEventsJob, and
IndexWriterJob.
■ Collect Event Viewer logs. Collect the output of the
indexcli -j command.
The following figure illustrates a smooth saw-tooth pattern in an Inbox and Outbox
chart:
Table B-7
Checks Steps
Number of filers (and shares) serviced by each Collector
If applicable, also specify the number of Windows File Servers with the Windows File Server agents that are configured.
Ping time from the Collector to the filers
This value should not be more than 1 microsecond. If the value is greater than 1 microsecond, you must investigate network latency issues.
Basic settings for full and incremental scans under Filesystem scanner
settings: In the Data Insight UI, navigate to Data Insight > Collector >
Advanced Settings, and fine-tune the following parameters
as necessary.
Audit Events Pre-Processor Settings: In the Data Insight UI, navigate to
Data Insight > Collector > Advanced Settings, and change the following settings:
Accumulation of backlog and errors in DATADIR/collector (error and staging):
■ Look for the date of the oldest files (backlog): If the files are very
old, it means pre-processing is not able to keep up with
audit events collection.
■ Verify the pre-processing settings. You can set this value to
more than 2 GB, and you can increase the number of threads.
■ Collect the following information:
■ To see the schedule and last run status of the CollectorJob
process, execute the command: configcli
list_jobs
■ Collect the following log:
installdir\log\collector_n.log,
where installdir is the installation directory, for
example:
C:\Program Files\Symantec\DataInsight
■ Collect the following log:
installdir\log\commd.0.0.log
■ Collect the following contents:
datadir\data\config.db.<n>
■ Collect the following contents:
datadir\data\collector\err\*.sqlite
■ Collect the Event Viewer logs.
Accumulation of backlog in DATADIR/outbox:
■ Look for the date of the oldest files (backlog): If the files are
very old, investigate if the network is slow or
communication is broken.
■ Collect the following information:
■ To see the schedule and last run status of the CollectorJob
and FileTransferJob processes on the Collector,
execute the following command: configcli
list_jobs
■ Collect the Event Viewer logs.
■ If communication issues occur, add the entries of the
Indexer nodes to the etc/hosts file of the Collector nodes,
and the Collector nodes’ entries to the etc/hosts file
of the Windows File Servers.
Accumulation of backlog in DATADIR/changelog:
■ Look for the date of the oldest file (backlog): If the files
are very old, investigate whether incremental scans are failing.
■ Collect the following information:
■ Last scan job
■ Scan schedule
The following figure illustrates the smooth saw-tooth pattern in the Inbox and Outbox
chart:
Checks Steps
File system scanner settings See “Data Insight Collector checks” on page 365.
Accumulation of backlog and errors in DATADIR/collector (error
and staging):
■ Look for the date of the oldest files (backlog). If the files are
very old, it means pre-processing is not able to keep
up with audit events collection.
■ Verify the pre-processing settings. You can set this value
to more than 2 GB, and you can increase the number of
threads.
■ Collect the following logs:
■ installdir\log\collector_n.log,
where installdir is the installation directory, for
example, C:\Program Files\Symantec\DataInsight,
and n is the numeric suffix of the log file
■ installdir\log\commd.0.0.log
■ <datadir>\data\config.db.<n>
■ <datadir>\data\collector\err\*.sqlite
■ Collect the Event Viewer logs.
■ To see the schedule and last run status of the
CollectorJob process, execute the command:
configcli list_jobs
■ Look for kernel ring buffer errors in winnasd.log.
If the error is present, increase the Maximum kernel
ring buffer size setting. (This is a Windows File Server
agent setting.)
Accumulation of backlog in DATADIR/outbox:
■ Look for the date of the oldest files (backlog): If the files
are very old, investigate if the network is slow or
communication is broken.
■ Collect the following information:
■ To see the schedule and last run status of the
CollectorJob and FileTransferJob processes
on the Collector, execute the following command:
configcli list_jobs
■ Collect the Event Viewer logs.
■ If communication issues occur, add the entries of the
Indexer nodes to the etc/hosts file of the
Collector nodes, and the Collector nodes’ entries to the
etc/hosts file of the Windows File Servers.
Accumulation of backlog in DATADIR/changelog:
■ Look for the date of the oldest file (backlog): If the
files are very old, investigate whether incremental scans
fail.
■ Collect the following information:
■ Last scan job
■ Scan schedule
Common practices
■ Use 64-bit server architecture for better Data Insight performance.
■ Use high-performance disks for Indexers, and SAN disks for ease of expansion
and backup.
■ Disk concatenation or RAID 4 may generate hotspots in the array subsystem.
Ensure proper distribution to avoid hotspots.
■ Create exclude rules to ensure that the information that is reported is relevant
and useful.
■ If a third-party application that generates a lot of events resides on a volume,
exclude that volume from auditing (for example, through NetApp FPolicy) to restrict events.
■ Schedule scans at off peak hours to minimize effect on users accessing the
shares.
■ Disable scans from running during peak usage.
■ Manage job scheduling as per load.
■ Assign appropriate roles and access for Data Insight users.
■ Deploy DFS mappings to ensure that all names are recognizable and meaningful.
■ Create containers to logically group related objects together for administration
and reporting purposes.
■ Set up event notifications to ensure that errors and warnings are reported.
■ Define retention policies to ensure the database and log files are maintained
over time.
■ Define report retention when configuring reports (the “Reports to preserve” setting).
■ Maintain adequate proximity: keep the Management Server close to the Indexers,
and the Collectors close to the filers.
■ Use Chrome for a faster UI.
■ Refer to the Symantec Data Insight Installation Guide for the latest platform and
version support information.
■ Indexers
■ Up to 20,000 shares
■ Allocate 100 MB of disk space per million files
■ Allocate 20 MB of disk space per million events
■ Typical storage size range is 40 GB to 400 GB depending upon environment.
Large deployments may even need more space.
■ Collectors
■ Up to 10,000 shares or ten filers, whichever comes first
■ Up to 150 Windows NAS agents
■ Up to 20 SharePoint web front-end servers
■ Typical storage size range is 60-80 GB per collector. Additional space may
also help in case of network outages where data is staged before it is
transferred to Indexer.
■ Indexer storage
■ 100 MB/1 million files
■ 20 MB/1 million events
■ Scan times
■ 200 files per second per thread
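The sizing figures above can be combined into a quick capacity and scan-time estimate. This sketch is illustrative arithmetic only; the 10-million-file, 20-million-event workload and the 4 scan threads are assumed values, not recommendations:

```python
# Rules of thumb from the sizing guidance above:
MB_PER_MILLION_FILES = 100    # Indexer disk per million files
MB_PER_MILLION_EVENTS = 20    # Indexer disk per million events
FILES_PER_SEC_PER_THREAD = 200

def indexer_storage_mb(million_files, million_events):
    """Estimated Indexer disk usage in MB."""
    return (million_files * MB_PER_MILLION_FILES
            + million_events * MB_PER_MILLION_EVENTS)

def scan_time_hours(total_files, threads):
    """Estimated full-scan wall time in hours."""
    return total_files / (FILES_PER_SEC_PER_THREAD * threads) / 3600.0

# Assumed example workload: 10 million files, 20 million events, 4 threads.
storage = indexer_storage_mb(10, 20)    # 10*100 + 20*20 = 1400 MB
hours = scan_time_hours(10_000_000, 4)  # 10e6 / 800 / 3600, about 3.5 hours
```

Real deployments should validate such estimates against the typical storage ranges quoted above.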
■ fg.exe
■ indexcli.exe
■ reportcli.exe
■ scancli.exe
■ installcli.exe
fg.exe
fg.exe – A script that modifies the file group configuration for Data Insight.
SYNOPSIS
fg -C -N <name of file group>
fg -L -d
Description
fg is a script used to modify the configuration for sorting files into file groups. By
default, Data Insight sorts files into 18 file groups based on the file extensions.
Options
-i <username>
(Required) The fully-qualified user name of the user running the command, for
example, user@domain. This user must have Server Administrator privileges
in Data Insight.
-A Adds an extension to an existing file group.
-C Creates a new file group.
-D Deletes an existing file group.
-L Lists existing file groups.
-R Removes an extension from an existing file group.
-N Name of the file group to be created or deleted.
-d Shows file group details when listing existing file groups.
-t <name of extension>
The file extension to add or delete from the file group (For example, doc).
-h Prints the usage message.
EXAMPLES
EXAMPLE 1: The following command creates a new file group.
fg -i <username> -C -N <name of file group>
EXAMPLE 2: The following command adds a new extension to an existing file group.
fg -i <username> -A -N <name of file group> -t <name of extension>
EXAMPLE 3: The following command lists existing file groups along with their details.
fg -i <username> -L -d
indexcli.exe
indexcli.exe – A utility that manages the index segments available on an Indexer
worker node.
SYNOPSIS
indexcli.exe
--display|--archive|--purge|--restore|--rearchive|--list-jobs
|--stop-jobs [OPTIONS]
indexcli.exe -c
indexcli.exe -d
indexcli.exe -h
indexcli.exe -j
indexcli.exe -r
indexcli.exe -t
indexcli.exe -u
Archive options
indexcli.exe -A -a | -f <FILERS> | -m
<SHARES> | -S <SITECOLLS> | -w <WEBAPPS> | -I
<MONTHS>
-S, --sitecoll <SITECOLLS>
Archives segments for the specified list of Microsoft SharePoint site collections.
-w, --webapp <WEBAPPS>
Archives segments for the specified list of Microsoft SharePoint Web applications.
Purge options
indexcli.exe -D -a | -f <FILERS> |
-m <SHARES> | -S <SITECOLLS> | -w <WEBAPPS> |
-I <MONTHS>
Display options
indexcli.exe -d -a | -f <FILERS> |
-m <SHARES> | -S <SITECOLLS> | -w <WEBAPPS> |
-s <STATES>
-s <name of state>
Displays index segments for the given state only. Multiple states can be
separated by commas. Possible states are ARCHIVING, RE-ARCHIVING,
ARCHIVED, RESTORING, RESTORED, RESTORE_FAILED, or DELETED.
-S,--sitecoll <SITECOLLS>
Displays information for a specified list of Microsoft SharePoint site collections.
-w, --webapp <WEBAPPS>
Displays information for a specified list of Microsoft SharePoint Web
applications.
Restore options
indexcli.exe -r -a | -f <FILERS> |
-m <SHARES> | -S <SITECOLLS> | -w <WEBAPPS> -C | -F <FROM>
| -R <RANGE>
[-L <MONTHS> | -l <MONTHS> | -y]
-S, --sitecoll <SITECOLLS>
Restores segments for a specified list of Microsoft SharePoint site collections.
-w, --webapp <WEBAPPS>
Restores segments for a specified list of Microsoft SharePoint Web applications.
-R <range in months>
Restores all index segments for the specified month range. Specify the months
in the format YYYY/MM-YYYY/MM. For example, indexcli.exe -r -R
2010/01-2010/03 restores segments from January 2010 to March 2010.
-y Instead of restoring segments, this option displays the list of files that must be
available before restoring the specified segments.
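The YYYY/MM-YYYY/MM range format accepted by -R can be validated and expanded as follows. This is an illustrative helper, not part of the indexcli tooling:

```python
import re

def expand_month_range(spec):
    """Expand a 'YYYY/MM-YYYY/MM' range into a list of (year, month)
    tuples, inclusive. Raises ValueError on a malformed spec."""
    m = re.fullmatch(r"(\d{4})/(\d{2})-(\d{4})/(\d{2})", spec)
    if not m:
        raise ValueError("expected YYYY/MM-YYYY/MM, got %r" % spec)
    y1, m1, y2, m2 = map(int, m.groups())
    if not (1 <= m1 <= 12 and 1 <= m2 <= 12):
        raise ValueError("month out of range in %r" % spec)
    start, end = y1 * 12 + (m1 - 1), y2 * 12 + (m2 - 1)
    if end < start:
        raise ValueError("range end precedes start in %r" % spec)
    return [(n // 12, n % 12 + 1) for n in range(start, end + 1)]

# expand_month_range("2010/01-2010/03") -> [(2010, 1), (2010, 2), (2010, 3)]
```

Note that a spec such as 2010/01-2010-03 (a hyphen in place of the second slash) is rejected, which is the kind of typo the format invites.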
Re-archive options
indexcli.exe -u -a | -f <FILERS> |
-m <SHARES> | -S <SITECOLLS> | -w <WEBAPPS>
-F <FROM> | -R <RANGE>
-m <name of share(s)>
Re-archives previously restored index segments for the specified list of shares.
-S, --sitecoll <SITECOLLS>
Re-archives previously restored segments for a specified list of Microsoft
SharePoint site collections.
-w, --webapp <WEBAPPS>
Re-archives previously restored segments for a specified list of Microsoft
SharePoint Web applications.
-R <range in months>
Re-archives all index segments for the specified month range. Specify the months
in the format YYYY/MM-YYYY/MM. For example, indexcli.exe -u -R
2010/01-2010/03 re-archives segments from January 2010 to March 2010.
EXAMPLES
EXAMPLE 1: The following command archives index segments for specified list of
filers.
indexcli.exe -A -f \\filer1,\\filer2,ID1,ID2
EXAMPLE 2: The following command archives index segments for specified list of
shares.
indexcli.exe -A -m \\filer1\share1,\\filer2\shares2,ID3,ID4
EXAMPLE 3: The following command purges index segments for specified list of
filers.
indexcli.exe -D -f \\filer1,\\filer2,ID1,ID2
EXAMPLE 4: The following command purges segments for specified list of shares.
indexcli.exe -D -m \\filer1\share1,\\filer2\shares2,ID3,ID4
EXAMPLE 5: The following command restores index segments for specified list of
filers.
indexcli.exe -r -f \\filer1,\\filer2,ID1,ID2
EXAMPLE 6: The following command restores index segments for specified list of
shares.
indexcli.exe -r -m \\filer1\share1,\\filer2\shares2,ID3,ID4
EXAMPLE 7: The following command re-archives index segments for a specified
list of filers.
indexcli.exe -u -f \\filer1,\\filer2,ID1,ID2
EXAMPLE 8: The following command re-archives index segments for a specified
list of shares.
indexcli.exe -u -m \\filer1\share1,\\filer2\shares2,ID3,ID4
EXAMPLE 9: The following command archives segments for a specified list of
Microsoft SharePoint site collections.
indexcli.exe -A -S http://sp_webapp:8000/sc1,ID2,ID3...
EXAMPLE 10: The following command archives segments for a specified list of
Microsoft SharePoint Web applications.
indexcli.exe -A -w http://sp_webapp:8000,ID2,ID3,...
EXAMPLE 11: The following command purges segments for a specified list of Microsoft
SharePoint site collections.
indexcli.exe -D -S http://sp_webapp:8000/sc1,ID2,ID3...
EXAMPLE 12: The following command purges segments for a specified list of Microsoft
SharePoint Web applications.
indexcli.exe -D -w http://sp_webapp:8000,ID2,ID3,...
EXAMPLE 13: The following command displays information for a specified list of
Microsoft SharePoint site collections.
indexcli.exe -d -S http://sp_webapp:8000/sc1,ID2,ID3...
EXAMPLE 14: The following command displays information for a specified list of
Microsoft SharePoint Web applications.
indexcli.exe -d -w http://sp_webapp:8000,ID2,ID3,...
EXAMPLE 15: The following command restores segments for a specified list of
Microsoft SharePoint site collections.
indexcli.exe -r -S http://sp_webapp:8000/sc1,ID2,ID3...
EXAMPLE 16: The following command restores segments for a specified list of
Microsoft SharePoint Web applications.
indexcli.exe -r -w http://sp_webapp:8000,ID2,ID3,...
EXAMPLE 17: The following command re-archives previously restored segments for
a specified list of Microsoft SharePoint site collections.
indexcli.exe -u -S http://sp_webapp:8000/sc1,ID2,ID3...
EXAMPLE 18: The following command re-archives previously restored segments for
a specified list of Microsoft SharePoint Web applications.
indexcli.exe -u -w http://sp_webapp:8000,ID2,ID3,...
reportcli.exe
reportcli.exe – A utility to create reports using a properties file that contains the
input parameters, execute and list configured reports, check the status of reports,
and cancel report runs.
SYNOPSIS
reportcli.exe --list-jobs|--list-reports|--list-outputs|--create
|--execute|--cancel|--help [OPTIONS]
reportcli.exe -c
reportcli.exe -e
reportcli.exe -h
reportcli.exe -j
reportcli.exe -l
reportcli.exe -o
Options
reportcli.exe -n -r <name of report> -p <property file path> -u <user
name of creator> [-rt <report type>] [--users <path of users' .csv
file>] [-t <path of .csv file of paths>] [--custodian <path of
custodians' .csv file>]
Creates a report using the properties file in which the input parameters are
specified. The following attributes apply:
--users <path of users' .csv file>
Path of the .csv file containing the names of users in the
user@domain,<user|group> format.
--paths <path of .csv file of paths>
Path of the .csv file containing the fully qualified paths of the data for which
you want to create the report.
-r, --report <Report Name>
Prints the status of jobs for the specified report. You can specify either the
report ID or the report name.
-rt, --type <Report Type>
Prints the status of jobs for the specified report type.
-w, --wait <Max_Wait>
Returns the report output only after the report execution is complete or the
specified wait time in minutes is exceeded. Specify -1 to wait forever.
--users <path of users' .csv file>
Path of the .csv file containing the names of users in the
user@domain,<user|group> format.
--paths <path of .csv file>
Path of the .csv file containing the fully qualified paths of the data for which
you want to create the report.
You can exclude any path on a file server or a SharePoint server from the
dashboard data computation. However, Data Insight does not support the
exclusion of DFS paths using this method.
reportcli.exe -c -i <JOB_ID>
Cancels execution of the specified report job.
scancli.exe
scancli.exe – A utility that scans shares and site collections.
SYNOPSIS
scancli.exe --start | --stop | --list-jobs | --help [OPTIONS]
-s, --start
Scans the specified shares or site collections.
-c, --stop
Cancels the scans for the specified shares or site collections.
-l, --list-jobs
Displays real-time scan queue information.
-d, --display
Displays the scan status for specified shares or site collections. To view real-time
scan queue information, use the -l, --list-jobs option.
-h, --help
Displays help.
Scan options
scancli.exe -s -a | -f <FILERS> | -m <SHARES> | -S <SITECOLLS> | -w <WEBAPPS>
[-D] [-e <EXCLUDE>] [-F | -n | -p] [-I <INCLUDE>] [-i <DAYS>] [-t]
-a, --all
Scans all shares and site collections.
-D, --disabled
By default, disabled devices or those for which scanning has been disabled
are not included in the scan. Specify this option to include shares or site
collections of disabled devices.
-e, --exclude <EXCLUDE>
Excludes shares or site collections matching the specified patterns. Separate multiple
patterns with a comma. You can specify one or more wildcards in the pattern,
for example, vol*,*$.
-f, --filer <FILERS>
Scans shares of the specified filers. For example, -f, --filer \\filer1,
filer2,ID1,...
-F, --failed
Selects the shares or site collections whose last scan failed completely. This
does not include those shares or site collections that have never been scanned
before or those which succeeded partially (*).
-I, --Include <INCLUDE>
Includes the shares or site collections matching the specified patterns. Separate
multiple patterns with a comma. You can specify one or more wildcards in the
pattern. For example, -I, --Include vol*,*$.
-i, --interval <DAYS>
Selects the shares or site collections that have not been scanned for the specified
number of days. This includes shares or site collections which have never been
scanned before (*).
-m, --share <SHARES>
Scans the specified list of shares. For example, -m, --share \\filer1\share1,
share2,ID3,...
-n, --never
Selects the shares or site collections that have never been scanned before (*).
-p, --partial
Selects the shares or site collections whose last scan succeeded partially, that
is, those shares or site collections for which the scan is complete but with failure
to fetch information for some paths (*).
-S, --sitecoll <SITECOLLS>
Scans the specified list of Microsoft SharePoint site collections.
-t, --top
Adds shares or site collections to the top of the scan queue.
-w, --webapp <WEBAPPS>
Scans site collections for the specified list of Microsoft SharePoint Web applications.
Note: (*) indicates that the option can only be used on the Management Server.
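The wildcard patterns accepted by the exclude and include options resemble shell-style globs. As an illustration only (Data Insight's exact matching rules are not documented here, so fnmatch-style semantics are an assumption), such patterns can be evaluated like this:

```python
from fnmatch import fnmatch

def match_any(name, patterns):
    """Return True if `name` matches any of the comma-separated
    glob patterns (e.g. "vol*,*$")."""
    return any(fnmatch(name, p) for p in patterns.split(","))

# The pattern vol*,*$ matches names starting with "vol" or ending in "$":
# match_any("vol01", "vol*,*$")   -> True
# match_any("admin$", "vol*,*$")  -> True
# match_any("share1", "vol*,*$")  -> False
```

The *$ case is useful for skipping hidden Windows administrative shares such as C$ or ADMIN$.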
Stop options
-a, --all
Stops scans for all shares and site collections.
-e, --exclude <EXCLUDE>
Excludes shares or site collections matching the specified patterns. Separate multiple
patterns with a comma. You can specify one or more wildcards in the pattern,
for example, vol*,*$.
-f, --filer <FILERS>
Stops scans for shares of the specified filers.
-I, --Include <INCLUDE>
Includes shares or site collections matching the specified patterns. Separate
multiple patterns with a comma. You can specify one or more wildcards in the
pattern.
-m, --share <SHARES>
Stops scans for the specified list of shares.
-S, --sitecoll <SITECOLLS>
Stops scans for the specified list of Microsoft SharePoint site collections.
-w, --webapp <WEBAPPS>
Stops scans for site collections for the specified list of Microsoft SharePoint Web
applications.
Display options
scancli.exe -d -a | -f <FILERS> | -m <SHARES> | -S <SITECOLLS> | -w <WEBAPPS>
[-D] [-e <EXCLUDE>] [-F | -n | -p] [-I <INCLUDE>] [-i <DAYS>]
-a, --all
Displays scan status for all shares and site collections.
-e, --exclude <EXCLUDE>
Excludes shares or site collections matching the specified patterns. Separate multiple
patterns with a comma. You can specify one or more wildcards in the pattern,
for example, vol*,*$.
-f, --filer <FILERS>
Displays scan status for the shares of the specified filers.
-F, --failed
Displays scan status for the shares or site collections whose last scan failed
completely. The scan status does not include those that have never been
scanned before or those which succeeded partially (*).
-I, --Include <INCLUDE>
Includes shares or site collections matching the specified patterns. Separate
multiple patterns with a comma. You can specify one or more wildcards in the
pattern.
-i, --interval <DAYS>
Displays scan status for the shares or site collections that have not been
scanned for the specified number of days. The scan status includes the shares
which have never been scanned before (*).
-m, --share <SHARES>
Displays scan status for the specified list of shares.
-n, --never
Displays scan status for the shares or site collections that have never been
scanned before (*).
-p, --partial
Displays scan status for the shares or site collections whose last scan
succeeded partially, that is, those shares or site collections for which the scan
is complete but with failure to fetch information for some paths (*).
-S, --sitecoll <SITECOLLS>
Displays scan status for the specified list of Microsoft SharePoint site collections.
-w, --webapp <WEBAPPS>
Displays scan status for the site collections for the specified list of Microsoft
SharePoint Web applications.
Examples
EXAMPLE 1: The following command scans all shares of a filer, netapp1.
scancli.exe -s -f netapp1
EXAMPLE 2: The following command scans all shares and site collections for which
a full scan failed 3 or more days ago.
scancli.exe -s -a -F -i 3
EXAMPLE 3: The following command scans all site collections of a Web application that have
not been scanned for the past 30 days or have never been scanned.
scancli.exe -s -w http://sp_webapp:8000 -i 30
installcli.exe
installcli.exe – A utility that is used to configure multiple Windows File Servers
and Data Insight worker nodes simultaneously.
SYNOPSIS
installcli [-w winnas_csv [-q]] [-n node_csv [-q]] [-p operation_token]
[-l] [-h]
Options
-w --winnas winnas_csv
Installs Data Insight Windows File Server agents and configures the
corresponding filer.
-w option uses a .csv file with the following details as input:
■ The host name or IP address of the Windows File Server that you want
Data Insight to monitor.
■ The host name, IP address, or ID of the Collector node that is configured
to scan the filer.
■ The host name, IP address, or ID of the Indexer node that is configured for
the filer.
■ The credentials that Data Insight should use to install the agent on the
Windows File Server. The credentials should be in the format user@domain.
installcli.exe also accepts Local System credentials as the value _LOCAL_.
The same credentials must already be added to Data Insight as a saved
credential.
■ True or false value indicating if the filer is clustered.
■ The IP addresses of the agents. Separate multiple IP addresses with a
semi-colon. If you do not want to use an agent to monitor the filer, indicate
this with a hyphen (-).
■ The credentials required to scan the filer. The credentials should be in the
format user@domain. The same credentials must already be added to Data Insight
as a saved credential.
■ In case of a Windows File Server agent upgrade, an RP or Full value indicating
the type of upgrade you want to perform. This parameter is optional.
Optionally, the name of the installer. If it is not specified, an appropriate installer
is picked up from the installers folder on the Collector.
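Putting the fields above together, a row of the winnas_csv input can be assembled as follows. All host names, credentials, and paths here are hypothetical placeholders; only the field order follows the list above:

```python
import csv
import io

# Hypothetical values for each field described above, in order:
row = [
    "winnas1.example.com",    # Windows File Server to monitor
    "collector1.example.com", # Collector node (host name, IP, or ID)
    "indexer1.example.com",   # Indexer node (host name, IP, or ID)
    "diadmin@EXAMPLE",        # agent install credentials (or _LOCAL_)
    "false",                  # is the filer clustered?
    "-",                      # agent IP addresses; "-" means no agent
    "scanuser@EXAMPLE",       # scan credentials
    "Full",                   # upgrade type (optional: RP or Full)
]

buf = io.StringIO()
csv.writer(buf).writerow(row)
print(buf.getvalue().strip())
```

Using the csv module rather than plain string joins keeps the row well-formed if a value ever contains a comma.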
-h --help
Displays help.
Job Description
ADScanJob Initiates the adcli process on the Management Server to scan the directory servers.
CollectorJob Initiates the collector process to pre-process raw audit events received from storage
devices. The job applies exclude rules and heuristics to generate audit files to be sent
to the Indexers. It also generates change-logs that are used for incremental scanning.
ChangeLogJob The CollectorJob generates changelog files containing the list of changed paths, one
per device, in the changelog folder. There can be multiple files with different
timestamps for each device. The ChangeLogJob merges all changelog files for a
device.
ScannerJob Initiates the scanner process to scan the shares and site collections added to Data
Insight.
Creates the scan database for each share that it scanned in the data\outbox folder.
IScannerJob Initiates the incremental scan process for shares or site collections for paths that have
changed on those devices since the last scan.
CreateWorkflowDBJob Runs only on the Management Server. It creates the database containing the data for
DLP Incident Management, Entitlement Review, and Ownership Confirmation workflows
based on the input provided by users.
DlpSensitiveFilesJob Retrieves policies and sensitive file information from Data Loss Prevention (DLP).
FileTransferJob Transfers the files from the data\outbox folder from a node to the inbox folder of
the appropriate node.
FileTransferJob_Evt Sends Data Insight events database from the worker node to the Management Server.
FileTransferJob_WF Transfers workflow files from Management Server to the Portal service.
IndexWriterJob Runs on the Indexer node; initiates the idxwriter process to update the Indexer database
with scan (incremental and full), tags, and audit data.
After this process runs, you can view newly added or deleted folders and recent access
events on shares on the Management Console.
ActivityIndexJob Runs on the Indexer node. It updates the activity index every time the index for a share
or site collection is updated.
PingHeartBeatJob Sends the heartbeat every minute from the worker node to the Data Insight
Management Server.
PingMonitorJob Runs on the Management Server. It monitors the heartbeat from the worker nodes;
sends notifications in case it does not get a heartbeat from the worker node.
SystemMonitorJob Runs on the worker nodes and on the Management Server. Monitors the CPU, memory,
and disk space utilization at a scheduled interval. The process sends notifications to
the user when the utilization exceeds a certain threshold value.
DiscoverSharesJob Discovers shares or site collections on the devices for which you have selected the
Automatically discover and monitor shares on this filer check box when configuring
the device in Data Insight.
ScanPauseResumeJob Checks the changes to the pause and resume settings on the Data Insight servers,
and accordingly pauses or resumes scans.
DataRetentionJob Enforces the data retention policies, which include archiving old index segments and
deleting old segments, indexes for deleted objects, old system events, and old alerts.
IndexVoldbJob Runs on the Management Server and executes the command voldb.exe --index which
consumes the device volume utilization information it receives from various Collector
nodes.
SendNodeInfoJob Sends the node information, such as the operating system, and the Data Insight version
running on the node to the Management Server. You can view this information on the
Data Insight Server > Overview page of the Management Console.
EmailAlertsJob Runs on the Management Server and sends email notifications as configured in Data
Insight. The email notifications pertain to events happening in the product, for example,
a directory scan failure. You can view them on the Settings > System Overview page
of the Management Console.
LocalUsersScanJob Runs on the Collector node that monitors configured file servers and SharePoint
servers. In the case of a Windows File Server that uses an agent to monitor access events,
it runs on the node on which the agent is installed.
UpdateCustodiansJob Runs on the Indexer node and updates the custodian information in the Data Insight
configuration.
The job also deletes stale data that is no longer used.
StatsJob On the Indexer node, it records index size statistics to lstats.db. The information
is used to display the filer statistics on the Data Insight Management Console.
MergeStatsJob Rolls up the published statistics into hourly, daily, and weekly periods. On the Collector
nodes for Windows File Servers, the job consolidates statistics from the filer nodes.
StatsJob_Latency On the Collector node, it records the filer latency statistics for NetApp filers.
SyncScansJob Gets the current scan status from all Collector nodes. The scan status is displayed on the
Settings > Scanning Dashboard > In-progress Scans tab of the Management
Console.
SPEnableAuditJob Enables auditing for site collections (within the web application), which have been
added to Data Insight for monitoring.
SPAuditJob Collects the audit logs from the SQL Server database for a SharePoint web application
and generates SharePoint audit databases in Data Insight.
SPScannerJob Scans the site collections at the scheduled time and fetches data about the document
and picture libraries within a site collection and within the sites in the site collection.
NFSUserMappingJob Maps every UID in the raw audit files for NFS and VxFS to an ID generated for use in
Data Insight; that is, it generates an ID corresponding to each user and group ID in the
raw audit files received from NFS/VxFS.
ProcessEventsJob Processes all the Data Insight events received from worker nodes and adds them to
the yyyy-mm-dd_events.db file on the Management Server.
WFStatusMergeJob Merges the workflow and action status updates for remediation workflows (DLP Incident
Remediation, Entitlement Reviews, Ownership Confirmation), Enterprise Vault archiving,
and custom actions, and updates the master workflow database with the details so that
users can monitor the progress of workflows and actions from the Management
Console.
UpdateConfigJob Reconfigures jobs based on the configuration changes made on the Management
Server.
DeviceAuditJob Fetches the audit records from the Hitachi NAS EVS that are configured with Data
Insight.
HNasEnableAuditJob Enables the Security Access Control Lists (SACLs) for the shares when a Hitachi NAS
filer is added.
WorkflowActionExecutionJob This service reads the request file created on the Management Server when a Records
Classification workflow is submitted from the Portal. The request file contains the paths
on which an Enterprise Vault action is submitted. When the action on the paths is
complete, the job updates the request file with the status of the action.
A DFS utility
Active Directory domain scans overview 190
scheduling 75 running 191
adding exclude rules 39 directory domain scans
archiving overview 65
adding new Enterprise Vault servers 263 directory servers
filer mapping 266 adding 66
managing the Enterprise Vault servers 264 managing 71
overview 262 directory service domain
archiving data deleting 75
overview 43
E
B EMC Celerra filers
business unit mappings configuration credentials 119
configuring 76 preparing for CEPA 116
events
configuring scanning 34
C email notifications configuring 327
Clustered NetApp filers enabling Windows event logging 328
about configuration 98
configuring
DFS target 189 F
EMC filers 115 filers
SMB signing 84 Add/Edit EMC Celerra filer dialog 165
Windows File Server 139 add/edit geneic device dialog 177
Workspace data owner policy 58 Add/Edit Hitachi NAS file server 178
configuring product users Add/Edit NetApp filer dialog 156
reviewing current users and privileges 219 Add/Edit VxFS filer dialog 173
Symantec Data Loss Prevention users 223 Add/Edit Windows File Server dialog 170
containers adding 155
adding 217 deleting 182
managing 216 editing configuration 181
overview 216 migrating 181
current users and privileges viewing 154
reviewing 219 Fpolicy
overview 85
preparing NetApp filer 87
D preparing NetApp vfiler 89
data retention preparing Symantec Data Insight 86
configuring 44
deployment
moving data directory 338
G overview
generic device administering Symantec Data Insight 16
scanning credentials 151 configuring filers 154
DFS target 189
filtering accounts, IP addresses, and paths 37
H
Hitachi NAS filers
about configuration 134 P
configuration credentials 135 patches and upgrades
viewing and installing recommendations 249
Permission remediation
I About 258
importing exclude rules 261
custom attributes 76 managing 259
DFS mappings 192 policies
Indexers managing 311
Migration 252 overview 309
preparing
L EMC Celerra filer 116
licenses NetApp filer for Fpolicy 87
managing 59 NetApp vfiler for Fpolicy 89
Symantec Data Insight for Fpolicy 86
M product users
adding 220
Management Console
deleting 223
configuring global settings 60
editing 222
operation icons 16
product users and roles
Management Server
overview 218
configuring SMTP settings 31
purging data
managing
overview 43
undesired data 262
N S
saved credentials
NetApp cluster
managing 41
capabilities for adding non-administrator domain
overview 41
user 104
scan errors
preparing non-administrator domain user 104
viewing 329
NetApp filers
SharePoint servers
capabilities for adding non-administrator domain
configuration credentials 194
user 92
shares
configuration credentials 79, 83
Add New Share/Edit Share dialog 184
handling events from home directories 96
adding 184
preparing non-administrator domain user 92
deleting 189
prerequisites 78
editing configuration 188
managing 185
O site collections
operation progress status managing 205
tracking and managing 268 supported file servers 19
T
troubleshooting
Celerra/VNX 346
HNAS 354
Isilon 349
NetApp 344
SharePoint 350
V
viewing
configured filers 154
VxFS file server
configuration credentials 146
W
Windows File Server agent
installing
using Upload Manager utility 251
Windows File Servers
configuration credentials 140