
www.datasunrise.com

DataSunrise Database Security 9.0

User Guide

Copyright © 2015-2023, DataSunrise, Inc. All rights reserved.

All brand names and product names mentioned in this document are trademarks, registered trademarks or service
marks of their respective owners.
No part of this document may be copied, reproduced or transmitted in any form or by any means, electronic,
mechanical, photocopying, recording, or otherwise, except as expressly allowed by law or permitted in writing by the
copyright holder.
The information in this document is subject to change without notice and is not warranted to be error-free. If you
find any errors, please report them to us in writing.

Contents

Chapter 1: General Information........................................................................ 13


Product Description........................................................................................................................................ 13
Supported Databases and Features..............................................................................................................13
DataSunrise Operation Modes.......................................................................................................................18
Sniffer Mode........................................................................................................................................ 18
Proxy Mode......................................................................................................................................... 19
Trailing DB audit logs..........................................................................................................................19
Dynamic SQL Processing.............................................................................................................................. 20
System Requirements.................................................................................................................................... 22
Useful Resources........................................................................................................................................... 23

Chapter 2: Quick Start....................................................................................... 25


Connecting to DataSunrise's Web Console...................................................................................................25
Product Registration.......................................................................................................................................25
Creating a Database Profile on Startup (optional)........................................................................................ 26
Creating an SMTP Server (optional)............................................................................................................. 26

Chapter 3: DataSunrise Use Cases.................................................................. 27


Creating a Target Database Profile and a Proxy.......................................................................................... 27
Scenario 1. Database Audit........................................................................................................................... 29
Creating an Audit Rule........................................................................................................................29
Viewing Database Audit Results.........................................................................................................30
Scenario 2. Database Security...................................................................................................................... 31
Creating a Security Rule.....................................................................................................................31
Blocking Results.................................................................................................................................. 32
Scenario 3. Data Masking............................................................................................................................. 33
Creating a Masking Rule.................................................................................................................... 33
Data Masking Results......................................................................................................................... 34
Scenario 4. Limiting Access to a Database.................................................................................................. 34
Creating a Limited Access Rule......................................................................................................... 34
Limited Access Results....................................................................................................................... 36
Creating a Dynamic Masking Rule for multiple instances............................................................................. 36

Chapter 4: DataSunrise's Web Console........................................................... 38


Structure of the Web Console....................................................................................................................... 38
Dashboard...................................................................................................................................................... 39
SSL Certificates..............................................................................................................................................41
Creating a Certificate for the Web Console........................................................................................41
Creating a Private Certification Authority............................................................................................41
Alternative User Authentication Methods.......................................................................................................42
Configuring Active Directory Authentication to the Web Console.......................................................42
Configuring LDAP Authentication to the Web Console...................................................................... 43
Avoiding self-signed certificate problem by attaching a certificate..................................................... 43
Configuring OAuth2 Authentication in DataSunrise's Web Console (based on Okta)...................44
Configuring Kerberos/NTLM Authentication in Internet Browsers.......................................................45
Mozilla Firefox...........................................................................................................................45
Microsoft Edge..........................................................................................................................45
Google Chrome........................................................................................................................ 45
Single Sign-On in DataSunrise........................................................................................................... 46
Configuring SSO Authentication Based on OpenID (Okta)......................................................46
Configuring SSO Authentication Based on SAML (Okta)........................................................ 48
Configuring SSO Authentication Based on SAML (JumpCloud).............................................. 50
Configuring Email-Based Two-Factor Authentication..........................................................................53
Configuring OTP Two-Factor Authentication...................................................................................... 54
Monitoring....................................................................................................................................................... 54
Viewing System Information................................................................................................................54
Diagrams of Internal Characteristics of DataSunrise..........................................................................55
DataSunrise Throughput Reports........................................................................................................56

Chapter 5: Database Configurations................................................................ 58


Databases.......................................................................................................................................................58
Creating a Target Database Profile.................................................................................................... 58
Editing a Target Database Profile.......................................................................................................61
Displaying Database Properties.......................................................................................................... 62
Creating an MS SQL Sniffer............................................................................................................... 62
Troubleshooting Connection Failure................................................................................................... 63
Creating Database Users Required for Getting the Database's Metadata.................................................... 63
Creating an Oracle Database User.................................................................................................... 64
Creating an AWS RDS Oracle Database User.................................................................................. 67
Creating a PostgreSQL/Aurora PostgreSQL Database User............................................................. 69
Creating a Netezza Database User.................................................................................................... 70
Creating a MySQL/MariaDB/Aurora MySQL Database User (main method)......................................70
Creating a MySQL/MariaDB Database User (alternative method)..................................................... 71
Creating a Greenplum Database User............................................................................................... 71
Creating a Teradata Database User...................................................................................................72
Creating an SAP HANA Database User.............................................................................................72
Creating a Redshift Database User.................................................................................................... 73
Creating a Vertica Database User...................................................................................................... 73
Creating a DB2 Database User.......................................................................................................... 73
Creating a Sybase Database User..................................................................................................... 74
Creating a MongoDB Database User................................................................................................. 74
Creating a Snowflake Database User.................................................................................................75
Granting Necessary Privileges to a DynamoDB User........................................................................ 75
Creating an Informix Database User.................................................................................................. 76
Creating an Amazon S3 Database User............................................................................................ 76
Configuring an MS SQL Server Connection..................................................................................................77
Configuring an MS SQL Server Connection with the SQL Browser Service...................................... 77
Granting Necessary Privileges to an MS SQL Server User (also an AD user)...................................78
MS Azure Specific...............................................................................................................................78
Additional Proxy Configuration.......................................................................................................................78
Enabling "Regex replace" Data Masking for Netezza........................................................................ 78
Enabling "Regex Replace" Data Masking for Aurora MySQL and MariaDB.......................................78
Using Custom Certificate Authority in Redshift Client Applications (JDBC)........................................79
Changing PostgreSQL's Port Number................................................................................................ 79
Configuring Authorization of Local Users in PostgreSQL................................................................... 80
Enabling "Regex Replace" Data Masking for SQL Server................................................................. 80
Configuring Kerberos on SQL Server Startup Under Domain Account.............................................. 80
Configuring Windows Authentication for Microsoft SQL Server..........................................................82
Getting Metadata with an AD User.....................................................................................................83
Configuring Windows Authentication for Microsoft SQL Server on Linux........................................... 84
Connecting to an Amazon Redshift Database Using IAM Authentication...........................................86
Connecting to an Amazon Elasticsearch Using IAM Authentication...................................................87
Connecting to an Amazon PostgreSQL/MySQL Database Using IAM Authentication........................88
Setting up a Proxy or a Reverse Proxy for Amazon S3, Minio or Alibaba OSS................................. 89
Connecting to Athena through database connectors (DBC)...............................................................89
Connecting to Snowflake through NET Snowflake Connector (Windows)..........................................91
Processing Encrypted Traffic......................................................................................................................... 92
Configuring SSL Encryption for DB2.................................................................................................. 92
Configuring SSL for Microsoft SQL Server.........................................................................................92
Enabling SSL Encryption for MS SQL Server......................................................................... 92
Generating an SSL Certificate with OpenSSL......................................................................... 93
Generating a Signed SSL Certificate with OpenSSL............................................................... 93
Installing an SSL Certificate for an MS SQL Server Proxy......................................................95
Disabling Ephemeral Keys-Based Encryption.......................................................................... 95
Two-Factor Authentication (2FA)................................................................................................................... 96
Configuring 2FA Based on Emails......................................................................................................96
Configuring 2FA Based on OTP......................................................................................................... 97
Reconfiguring Client Applications.................................................................................................................. 97
PGAdmin (PostgreSQL Client)............................................................................................................97
SQL Server Management Studio (MS SQL Server Client)...............................................................100
MySQL Workbench (MySQL Client)................................................................................................. 101

Chapter 6: Database Users..............................................................................103


Creating a Target DB User Profile Manually............................................................................................... 103
Creating Multiple DB User Profiles Using a CSV or TXT File.....................................................................103
Creating a User Group................................................................................................................................ 105

Chapter 7: SSL Key Groups............................................................................ 106


Creating an SSL Key Group........................................................................................................................106
Enabling SSL Encryption and Server Certificate Check for the Target Database......................................106
Enabling Oracle Native Encryption.............................................................................................................. 107

Chapter 8: Encryptions.................................................................................... 108


Using Encryptions........................................................................................................................................ 109

Chapter 9: DataSunrise Rules.........................................................................110


Execution Order of DataSunrise Rules........................................................................................................110
General Settings...........................................................................................................................................111
Filter Sessions..............................................................................................................................................111
Filter Statements.......................................................................................................................................... 114
Object Group Filter............................................................................................................................115
Query Group Filter............................................................................................................................ 117
Query Types Filter.............................................................................................................................117
Session Events Filter........................................................................................................................ 117
SQL Injection Filter............................................................................................................................119
Response-Time Filter................................................................................................................................... 120
Rule Triggering Threshold........................................................................................................................... 120
Data Filter.....................................................................................................................................................121
Creating DataSunrise Rules from Transactional Trails............................................................................... 121
Data Audit (Database Activity Monitoring)...................................................................................................121
Creating a Data Audit Rule...............................................................................................................122
Using Audit Trail for auditing Amazon RDS Oracle database queries..............................................123
Using Audit Trail for auditing on-prem Oracle database queries...................................................... 128
Configuring Audit Trail for Oracle SMB............................................................................................ 130
Configuring Audit Trail for Oracle Package...................................................................................... 131
Using Oracle Unified auditing for auditing Amazon RDS Oracle database queries.......................... 135
Using Audit Trail for auditing Amazon RDS PostgreSQL database queries.....................................137
Using Audit Trail for auditing standalone PostgreSQL database queries......................................... 139
Using Audit Trail for auditing MS SQL Server database queries......................................................141
Using Audit Trail for auditing Amazon RDS MS SQL Server database queries............................... 143
Using the MariaDB Audit Plugin for auditing MySQL/MariaDB database queries on AWS...............145
Using Audit DB Trail General logs for auditing MySQL database queries on AWS..........................146
Using Audit Trail for auditing standalone MySQL database queries................................................ 147
Using Audit Trail for auditing standalone MariaDB database queries.............................................. 148
Using Audit Trail for auditing standalone MySQL/MariaDB database queries using Samba............ 149
Using Audit Trail for auditing Snowflake database queries.............................................................. 151
Using Audit Trail for auditing AWS S3 queries.................................................................................152
Using Audit Trail for auditing standalone Neo4J database queries.................................................. 152
Solving the Missing Grants Issue..................................................................................................... 153
Configuring Audit Trail for auditing Redshift database queries.........................................................154
Getting Audit Events via AWS DAS (Database Activity Streams) for Aurora PostgreSQL............... 155
Configuring Audit Trail for auditing MS Azure Synapse database queries....................................... 156
Configuring Audit Trail for MS Azure................................................................................................157
Configuring Audit Trail for auditing MS Azure MySQL database queries......................................... 157
Configuring Audit Trail for auditing MS Azure PostgreSQL database queries..................................158
Configuring Audit Trail for auditing Sybase database queries..........................................................159
Using Audit Trail for auditing Google Cloud BigQuery database queries......................................... 161
Data Security................................................................................................................................................162
Creating a Data Security Rule.......................................................................................................... 162
Data Masking............................................................................................................................................... 164
Generating a Private Key Needed for Data Masking....................................................................... 164
Dynamic Data Masking..................................................................................................................... 164
Creating a Dynamic Data Masking Rule...........................................................................................165
Masking Methods.............................................................................................................................. 167
Using a Custom Function for Masking...................................................................................171
NLP Data Masking (Unstructured Masking)...........................................................................172
Static and Dynamic Masking Using Lua Script...................................................................... 174
Extending Lua Script Functionality......................................................................................... 175
Conditional Masking............................................................................................................... 175
Consistent Masking (Dynamic Masking)................................................................................ 176
Configuring DataSunrise for Masking with random-based methods................................................. 177
Creating a "DS_Environment" in PostgreSQL/Aurora PostgreSQL....................................... 177
Creating a "DS_Environment" in Oracle................................................................................ 177
Creating a "DS_Environment" in SQL Server........................................................................ 177
Creating a "DS_Environment" in Redshift..............................................................................178
Creating a "DS_Environment" in Greenplum......................................................................... 178
Creating a "DS_Environment" in MySQL/Aurora MySQL/MariaDB........................................178
Masking XML, CSV, JSON and Unstructured Files Stored in Amazon S3 Buckets......................... 178
Informix Dynamic Masking Additional Info........................................................................................ 179
Cassandra Masking Additional Info.................................................................................................. 180
Enabling Dynamic Masking for Teradata 13.....................................................................................180
Data Masking.....................................................................................................................................180
Generating a Private Key Needed for Data Masking.............................................................181
Dynamic Data Masking.......................................................................................................... 181
Creating a Dynamic Data Masking Rule................................................................................182
Masking Methods....................................................................................................................184
Configuring DataSunrise for Masking with random-based methods...................................... 194
Masking XML, CSV, JSON and Unstructured Files Stored in Amazon S3 Buckets............... 195
Informix Dynamic Masking Additional Info............................................................................. 196
Cassandra Masking Additional Info........................................................................................197
Enabling Dynamic Masking for Teradata 13.......................................................................... 197
Learning Mode Overview............................................................................................................................. 197
Creating a Learning Rule.................................................................................................................. 198
Tags..............................................................................................................................................................199
Viewing Transactional Trails (Audit Events)................................................................................................ 199
Examples of Rules....................................................................................................................................... 201
Making a Database Read-Only.........................................................................................................201
Making a Table Column Accessible..................................................................................................202

Chapter 10: DataSunrise Configurations....................................................... 203


Object Groups.............................................................................................................................................. 203
Creating an Object Group................................................................................................................. 203
Adding Objects to an Object Group Manually.................................................................................. 204
Adding Objects to an Object Group Using Regular Expressions..................................................... 205
Adding Stored Procedures to an Object Group Manually.................................................................206
Adding Stored Procedures to an Object Group Using Regular Expressions.................................... 206
Query Groups...............................................................................................................................................207
Creating a New Query Group........................................................................................................... 208
Populating a SQL Group with Statements Automatically Logged by DataSunrise............................208
IP Addresses................................................................................................................................................ 209
Creating a Host Profile......................................................................................................................209
Adding Multiple IP Addresses Using a CSV or TXT File..................................................................209
Creating a Group of Hosts................................................................................................................210
Client Applications........................................................................................................................................211
Creating a Client Application Profile................................................................................................. 211
Creating Multiple Client Application Profiles Using a CSV or TXT File............................................ 211
Subscriber Settings...................................................................................................................................... 212
Configuring Servers...........................................................................................................................212
Configuring an SMTP Server................................................................................................. 212
Configuring an SNMP Server.................................................................................................213
Configuring an External Application Server........................................................................... 214
Configuring a Slack (direct) Server........................................................................................ 214
Configuring a Slack Legacy Token Server............................................................................ 215
Configuring a NetcatTCP/NetcatUDP Server......................................................................... 215
Configuring a ServiceNow Server.......................................................................................... 215
Configuring a Jira Server....................................................................................................... 215
Configuring a Syslog Server.................................................................................................. 215
Creating a Subscriber Profile............................................................................................................ 216
Email Templates................................................................................................................................217
Schedules..................................................................................................................................................... 219
Creating a Schedule..........................................................................................................................219
Examples of Schedules.....................................................................................................................220
Configuring Active Period of a Schedule............................................................................... 220
Configuring Active Days of a Schedule................................................................................. 221
Syslog Settings (CEF Groups).....................................................................................................................222
Periodic Tasks..............................................................................................................................................222
Backup Dictionary Task.................................................................................................................... 223
Clean Audit Task...............................................................................................................................223
Health Check..................................................................................................................................... 224
Update Metadata............................................................................................................................... 224
AWS Remove Unused Servers Periodic Task..................................................................................225
Periodic User Behavior..................................................................................................................... 225
Database User Synchronization........................................................................................................226
Azure Remove Unused Servers Periodic Task................................................................................ 226
Kubernetes Remove Unused Servers Periodic Task....................................................................... 227

Chapter 11: DataSunrise Functional Modules............................................... 228


Static Data Masking..................................................................................................................................... 228
Creating a Static Masking task......................................................................................................... 229
Static Masking Loaders.....................................................................................................................232
Batch Setup of Masking Methods for Database Columns................................................................ 234
Creating Database Users Required for Static Masking.................................................................... 237
Creating an Oracle Database User........................................................................................237
Creating a PostgreSQL/Aurora PostgreSQL Database User.................................................238
Creating a Greenplum Database User...................................................................................238
Creating an SAP HANA Database User................................................................. 239
Creating a SQL Server Database User................................................................................. 239
Creating a MySQL/Aurora MySQL/MariaDB Database User................................................. 240
Creating a Netezza Database User....................................................................................... 241
Creating a Redshift Database User....................................................................................... 241
Creating a Teradata Database User...................................................................................... 241
Creating a Vertica Database User......................................................................................... 242
Creating a DB2 Database User............................................................................................. 242
Creating a MongoDB Database User.................................................................................... 242
In-Place Static Masking............................................................................................................................... 243
Sensitive Data Discovery............................................................................................................................. 243
Creating a New Information Type..................................................................................................... 246
Periodic Data Discovery....................................................................................................................248
NLP Data Discovery..........................................................................................................................250
Discovering Sensitive Data Using Lua Script................................................................................... 251
Discovering Sensitive Data Using Lexicon....................................................................................... 251
Creating a Lexicon................................................................................................................. 251
Using Table Relations for Data Discovery........................................................................................252
OCR Data Discovery.........................................................................................................................252
AWS S3 Crawler............................................................................................................................... 253
Incremental Data Discovery.............................................................................................................. 254
Randomized Data Discovery.............................................................................................................254
Creating Database Users Required for Data Discovery................................................................... 255
Creating an Oracle Database User........................................................................................255
Creating a PostgreSQL Database User................................................................................. 255
Creating a Greenplum Database User...................................................................................256
Creating an SAP HANA Database User................................................................. 256
Creating an SQL Server Database User............................................................................... 256
Creating a MySQL/Aurora MySQL/MariaDB Database User................................................. 256
Creating a Netezza Database User....................................................................................... 256
Creating a Redshift Database User....................................................................................... 257
Creating a Teradata Database User...................................................................................... 257
Creating a Vertica Database User......................................................................................... 257
Data Discovery with TDS 7.4 Always Encrypted................................................................... 257
Enabling Data Discovery in Sybase.......................................................................................257
Data Subject Access Request (DSAR).............................................................................................258
Reporting...................................................................................................................................................... 259
Reports.............................................................................................................................................. 259
Creating Custom Reports with the Report Generator.......................................................................260
Data Filter Values...................................................................................................................262
VA Scanner....................................................................................................................................... 263
VA Scanner grants............................................................................................................................ 265
Compliance Manager................................................................................................................................... 268
Compliance Manager Overview........................................................................................................ 268
Configuring a Compliance Manager Task........................................................................................ 269
Integrating Elasticsearch and Kibana with DataSunrise..............................................................................270

Chapter 12: Resource Manager...................................................................... 275


Template Structure.......................................................................................................................................275
ExternalResources Section............................................................................................................... 276
Mappings Section.............................................................................................................................. 276
Parameters Section........................................................................................................................... 277
Resources Section............................................................................................................................ 277
"Parameters" File (optional)......................................................................................................................... 278
Working with Templates...............................................................................................................................278
Creating a Template..........................................................................................................................278
Exporting DataSunrise Configuration into Template......................................................................... 279
Deploying a Template....................................................................................................................... 279
Resources Description................................................................................................................................. 279
Resource Types................................................................................................................................ 279
Instance Parameters......................................................................................................................... 281
Interface Parameters......................................................................................................................... 282
Proxy Parameters..............................................................................................................................283
Server Parameters............................................................................................................................ 284
LDAP Server Parameters..................................................................................................................284
Subscription Server Parameters....................................................................................................... 285
Sniffer Parameters.............................................................................................................................286
SSO Service Parameters.................................................................................................................. 287
Query Based Parameters..................................................................................................................287
Learning Parameters......................................................................................................................... 287
DDL Parameters................................................................................................................................288
Masking Parameters..........................................................................................................................288
Masking Key Parameters.................................................................................................................. 290
SqlInjection Parameters.................................................................................................................... 290
Object Based Parameters................................................................................................................. 290
Error Based Parameters................................................................................................................... 291
Data Model Parameters.................................................................................................................... 291
SSL Key Group Parameters............................................................................................................. 292
Query Group Parameters.................................................................................................................. 292
Object Group Parameters................................................................................................................. 293
CEF Group Parameters.................................................................................................................... 293
Database Users Group Parameters..................................................................................................294
Data Model Lexicon Group Parameters........................................................................................... 295
Host Group Parameters.................................................................................................................... 295
User Parameters............................................................................................................................... 296
dbUser Parameters........................................................................................................................... 297
Schedule Parameters........................................................................................................................ 297
Security Standard Parameters.......................................................................................................... 298
Application Parameters..................................................................................................................... 298
Host Parameters................................................................................................................................298
Lua Script Parameters...................................................................................................................... 299
DSAR Config Parameters................................................................................................................. 299
User Access Role Parameters..........................................................................................................299
Queries Map Parameters.................................................................................................................. 302
Backup Dictionary Parameters..........................................................................................................305
Clean Audit Parameters.................................................................................................................... 305
Health Check Parameters................................................................................................................. 305
Static Masking Parameters............................................................................................................... 306
AWS Remove Unused Servers Parameters..................................................................................... 310
User Behavior Training Parameters..................................................................................................310
Queries History Learning Parameters...............................................................................................310
Vulnerability Assessment Parameters...............................................................................................310
Data Discovery Task Parameters..................................................................................................... 311
Data Discovery Report Parameters.................................................................................................. 311
Operations Report Task Parameters................................................................................................ 312
Operations Report Parameters......................................................................................................... 312
Session Report Parameters.............................................................................................................. 314
Direct Sessions Report Task Parameters.........................................................................................315
Operations Error Report Task Parameters....................................................................................... 315
System Events Report Task Parameters..........................................................................................316
Instances Status Report Task Parameters....................................................................................... 317
Settings Parameters.......................................................................................................................... 317

Chapter 13: DataSunrise Authentication Proxy.............................................318


DataSunrise Authentication Proxy Overview............................................................................................... 318
Integrating Active Directory with DataSunrise Proxy................................................................................... 319
Integration on Windows.....................................................................................................................319
Setting a Service Principal Name (SPN)............................................................... 319
Configuring Active Directory Delegation.................................................................................320
Integration on Linux...........................................................................................................................320
Creating an Active Directory User (Linux)............................................................................. 320
Setting a Service Principal Name (SPN)............................................................... 320
Configuring Active Directory Delegation.................................................................................321
Creating a keytab (Linux)....................................................................................................... 321
Configuring DataSunrise Authentication Proxy for Database Connections................................................. 322
LDAP Authentication for Database Connections.............................................................................. 322
Kerberos Authentication for Database Connections......................................................................... 324
Configuring User Mapping................................................................................................................ 324
Mapping a Group of AD Users......................................................................................................... 325
Configuring Mapping of AD Users to Database Users via the Web Console..............................................326
LDAP Users Cache........................................................................................................................... 326
Customization of an LDAP Search String for Authentication Proxy............................................................ 327
Searching for Users.......................................................................................................................... 327
Searching for User Groups............................................................................................................... 327

Chapter 14: System Settings...........................................................................328


General Settings...........................................................................................................................................328
Logging Settings...........................................................................................................................................330
Limiting Size of Logs.........................................................................................................................336
Advanced Dictionary Operations.......................................................................................................337
Additional Parameters.................................................................................................................................. 337
ExternalJSONMetadata additional parameter..............................................................................................382
Audit Storage Settings................................................................................................................................. 383
Audit Storage Compression.............................................................................................................. 385
Rotation of audit.db Files.................................................................................................................. 385
Configuring Automatic Rotation of audit.db Files................................................................... 385
Manual Rotation of audit.db Files.......................................................................................... 386
Setting Limit for DataSunrise Rotated Audit Files..................................................................386
Clean Storage....................................................................................................................................386
Encrypting Audit Storage (PostgreSQL) while DataSunrise instance is running...............................387
Encrypting the Dictionary (PostgreSQL) while DataSunrise instance is running.............................. 387
Audit Storage Table Partitioning....................................................................................................... 388
Audit Storage Table Partitioning (PostgreSQL)..................................................................... 388
Audit Storage Table Partitioning (MySQL)............................................................................. 388
Audit Storage Table Partitioning (MS SQL Server)................................................................388
SQL Parsing Errors......................................................................................................................................388
Syslog Integration Settings.......................................................................................................................... 389
DataSunrise User Settings...........................................................................................................................389
Creating a DataSunrise User............................................................................................................ 390
User Roles.........................................................................................................................................390
Creating a Role................................................................................................................................. 390
Password Settings.............................................................................................................................396
Limiting Access to the Web Console by IP Addresses.................................................................... 396
Logs.............................................................................................................................................................. 396
LDAP............................................................................................................................................................ 396
Servers......................................................................................................................................................... 398
Operation Group...........................................................................................................................................398
Queries Map.................................................................................................................................................398
About............................................................................................................................................................ 399

Chapter 15: Table Relations............................................................................ 400


Database Query History Analysis................................................................................................................ 400
Preparing an Amazon Aurora MySQL Database..............................................................................400
Preparing an Amazon Aurora PostgreSQL Database...................................................................... 401
Preparing a DB2 Database............................................................................................................... 401
Preparing a MS SQL Server Database............................................................................................ 401
Preparing a MySQL Database.......................................................................................................... 401
Preparing a Netezza Database.........................................................................................................402
Preparing an Oracle Database......................................................................................................... 403
Preparing a PostgreSQL Database.................................................................................................. 403
Preparing a Redshift Database.........................................................................................................403
Preparing a Teradata Database........................................................................................................403
Preparing a Vertica Database...........................................................................................................404
Preparing a Greenplum Database.................................................................................................... 404
Periodic DDL Table Relation Learning Task............................................................................................... 404
Database Traffic Analysis............................................................................................................................ 405
Manual Editing of Table Relations...............................................................................................................405

Chapter 16: Capturing of Application Users..................................................406


Markers Used to Identify Client Application Users...................................................................................... 406
Creating a Rule Required for Application Users Capturing......................................................................... 409
App User Capturing Examples.................................................................................................................... 409
Example 1: Masking a PostgreSQL Table for a Certain User.......................................................... 409
Example 2: Using a Dedicated Web Site as the Client Application..................................................411
Example 3: Changing Users in SQL Developer during one session................................................ 413
Example 4: Masking a table by using ResultSet as Capturing type................................................. 414

Chapter 17: Amazon Web Services (AWS).................................................... 417


Creating a Health Check............................................................................................................................. 417
Amazon CloudWatch Custom Metrics......................................................................................................... 418
Using AWS Secrets Manager for Storing Passwords................................................................................. 420
How Load Balancing Works on Vertica.......................................................................................................420

Chapter 18: Integration with the CyberArk AAM........................................... 421


AAM Installation........................................................................................................................................... 421
AAM Configuration. Defining the Application ID (APPID) and Authentication Details................................. 421
Provisioning Account and Settings Permission for Application Access....................................................... 422
DataSunrise Installation and Configuration..................................................................................................423
Retrieving a Dictionary Password from CyberArk....................................................................................... 423
Retrieving an Audit Storage Password from CyberArk............................................................................... 424

Chapter 19: Self-Service Access Request..................................................... 425


Overview.......................................................................................................................................................425
Using SSAR................................................................................................................................................. 425

Chapter 20: Frequently Asked Questions......................................................426

Chapter 21: Appendix 1................................................................................... 434


Default OIDs.................................................................................................................................................434
DataSunrise System Events IDs..................................................................................................................435
Examples of Database Connection Strings................................................................................................. 444

1 General Information

1.1 Product Description


The introductory section of this chapter describes basic features, steps necessary for database protection and
principles of DataSunrise operation.
Protection of databases starts with selecting and configuring the database instance. In the process you also need to
select the protection mode: Sniffer (passive protection) or Proxy (active database protection). You can additionally
restrict access to the database(s) protected by DataSunrise by enabling two-factor authentication in the web user interface.
DataSunrise’s functionality is based on a system of highly customizable and versatile policies (Rules) which control
database protection. You can create rules for the following tools included in DataSunrise:
• DataSunrise Audit. DataSunrise logs all user actions, SQL queries and query results. DataSunrise Data Audit
saves information on database users, user sessions, query code, etc. Data auditing results can be exported to an
external system, such as SIEM.
• DataSunrise Security. DataSunrise analyzes database traffic, detects and blocks unauthorized queries and SQL
injections on-the-fly. Alerts and reports on detected threats can be sent to network administrators or a security
team (officer) via e-mail or instant messengers.
• DataSunrise Dynamic Masking. DataSunrise prevents sensitive data exposure thanks to its data masking tool.
DataSunrise’s Dynamic Masking obfuscates output of sensitive data from a database by replacing it with random
data or real-looking data on-the-fly.
The Static Masking feature replaces real data with a fake copy which enables you to create a fully protected testing
and development environment out of your real production database.
The Table Relations feature can build associations between database columns. As a result, all associated columns
with sensitive data are linked and better organized.
The Data Discovery tool enables you to search for database objects that contain sensitive data and quickly create
Rules for these objects. The search can be done by the Lexicon, column names and data type. In addition, you can
use Lua scripting. NLP (Natural Language Processing) Data Discovery enables you to search for sensitive data across
database columns that contain unstructured data. For example, you can locate an email address in a text. Using the
Table Relations feature you can see all the columns associated with the discovered columns. You can set up periodic
task for DataSunrise to search for and protect newly added sensitive data.
DataSunrise functionality allows companies to be compliant with national and international sensitive data protection
regulations such as HIPAA, PCI DSS, ISO/IEC 27001, CCPA, GDPR, SOX, KVKK, PIPEDA, APPs, APPI, LGPD,
Nevada Privacy Law, Digital Personal Data Protection Bill, New Zealand's Privacy Act. The Compliance feature works
as follows: databases are regularly searched for newly added sensitive data, so the databases and the sensitive data
within them are constantly protected.
DataSunrise can generate PDF and CSV reports about audit and security events, data discovery, sessions, operation
errors and system events.

1.2 Supported Databases and Features


Supported database types and versions:
• Amazon Aurora MySQL
• Amazon Aurora PostgreSQL
• Amazon DynamoDB
• Amazon Redshift
• Amazon S3 and other S3 protocol compatible file storage services like Minio and Alibaba OSS. Auditing and Data
Masking of CSV, XML, JSON and unstructured files are supported
• Apache Hive 1.0+
• Amazon Athena
• AlloyDB
• Cassandra 3.11.1-3.11.2 (DB servers), 3.4.x (CQL)
• CockroachDB 22.1+
• DocumentDB
• Elasticsearch 5+
• GaussDB(DWS)
• Greenplum 4.2+
• Hydra
• IBM DB2 9.7+. Linux, Windows, UNIX and z/OS are supported
• IBM Db2 Big SQL 5.0+
• Impala 2.x
• IBM Informix 11+
• MS SQL Server 2005+
• MariaDB 5.1+
• Microsoft Azure Synapse Analytics
• MongoDB 3.0+
• MySQL 5.0+ (Xprotocol is supported too)
• Neo4j
• IBM Netezza 6.0+
• Oracle Database 9.2+
• Percona Server for MySQL 5.1+
• PostgreSQL 7.4+
• SAP HANA 1.0+
• ScyllaDB 3.0+
• Snowflake Standard, Enterprise, Business Critical
• Sybase Adaptive Server Enterprise/15.7.0+
• Teradata 13+
• TiDB 5.0.0+
• Vertica 7.0+
• YugabyteDB 1.3+
• Google Cloud BigQuery
• Amazon OpenSearch 2.3+
The table below lists the databases supported by DataSunrise and the features available for them. Please note that
proxying of both encrypted and unencrypted traffic is supported for all types of databases.
Supported features. Part 1
DB type   Database Activity Monitoring   DB Audit Trail   Database Security   Dynamic Masking   Static Masking
Amazon Aurora MySQL + + + + +
Amazon Aurora PostgreSQL + + + + +
Amazon DynamoDB + + +
Amazon OpenSearch + + +
Amazon Redshift + + + + +
Amazon S3 + + +
Apache Hive + + + +
Amazon Athena + + +
AlloyDB + + + +
IBM Db2 Big SQL + + + +
Cassandra + + + +
CockroachDB + + +
DocumentDB + + +
Elasticsearch + + +
GaussDB(DWS) + + + + +
GCloud BigQuery +
Greenplum + + + +
Hydra + + + +
IBM DB2 + + + +
Impala + + + +
IBM Informix + + + +
Microsoft SQL Server + + + + +
MariaDB + + + + +
Microsoft Azure Synapse Analytics + + + + +
MongoDB + + + + +
MySQL + + + + +
Neo4j +
IBM Netezza + + + +
Oracle Database + + + + +
Percona Server for MySQL + + + +
PostgreSQL + + + + +
SAP HANA + + + +
ScyllaDB + +
Snowflake + + + +
Sybase + + +
Teradata + + + +
TiDB + + + +
Vertica + + + +
YugabyteDB + + + + +
Supported features. Part 2
DB type   Data Discovery   Authentication Proxy   Kerberos Authentication   Sniffer   Sniffing of encrypted traffic   Dynamic SQL processing
Amazon Aurora MySQL + + +
Amazon Aurora PostgreSQL + + +
Amazon DynamoDB +
Amazon OpenSearch + +
Amazon Redshift + +
Amazon S3 +
Apache Hive + + +
Amazon Athena
AlloyDB +
IBM Db2 Big SQL +
Cassandra + +
CockroachDB + +
DocumentDB + +
Elasticsearch + +
GaussDB(DWS) + + + + +
GCloud BigQuery
Greenplum + + + +
Hydra + + + +
IBM DB2 + +* +
Impala + + +
IBM Informix + +
Microsoft SQL Server + + + + + +
MariaDB + + + +
Microsoft Azure Synapse Analytics + + + + +
MongoDB + +
MySQL + + + + +
Neo4j +
IBM Netezza + + + +
Oracle Database + + +
Percona Server for MySQL + + + +
PostgreSQL + + + + +
SAP HANA + + +
ScyllaDB + +
Snowflake +
Sybase +
Teradata + +
TiDB + + + +
Vertica + + + +
YugabyteDB + + + +

*Kerberos delegation is not supported

1.3 DataSunrise Operation Modes


DataSunrise can be deployed in one of the following configurations: Sniffer mode, Proxy mode, Trailing DB Audit
Logs.

1.3.1 Sniffer Mode


When deployed in the Sniffer mode, DataSunrise is connected to a SPAN port of a network switch. Thus, it acts as a
traffic analyzer capable of capturing a copy of the database traffic from a mirrored port of the network switch.

Figure 1: Sniffer mode operation scheme.

In this configuration, DataSunrise can be used only for "passive security" ("active security" features such as the database
firewall or masking are not supported in this mode). When deployed in Sniffer mode, DataSunrise can perform database
activity monitoring only, because it can't modify database traffic in this configuration. Running DataSunrise in Sniffer
mode does not require any additional reconfiguration of databases or client applications. Sniffer mode can be used for
data auditing purposes or for running DataSunrise in Learning mode.

Important: database traffic should not be encrypted. Check your database settings as some databases encrypt
traffic by default. If you're operating an SQL Server database, do not use ephemeral ciphers. DataSunrise deployed
in Sniffer mode does not support connections redirected to a random port (like Oracle). All network interfaces (the
main and the one the database is redirected to) should be added to DataSunrise's configuration.

1.3.2 Proxy Mode


When deployed in this configuration, DataSunrise works as an intermediary between a database server and its client
applications. Thus it is able to process all incoming queries before redirecting them to a database server.

Figure 2: Proxy mode operation scheme.

Proxy mode is for "active protection". DataSunrise intercepts SQL queries sent to a protected database by database
users, checks if they comply with existing security policies, and audits, blocks or modifies the incoming queries or
query results if necessary. When running in the Proxy mode, DataSunrise supports its full functionality: database
activity monitoring, database firewall, both dynamic and static data masking are available.

Important: We recommend using DataSunrise in Proxy mode. It provides full protection, and in this mode DataSunrise
supports processing of encrypted traffic and redirected connections (which is essential for HANA, Oracle, Vertica and
MS SQL). For example, in SQL Server, redirects can occur when working with Azure SQL or an AlwaysOn listener.

1.3.3 Trailing DB audit logs


This deployment scheme can be used to perform auditing of Oracle, Snowflake, Neo4J, PostgreSQL-like, AWS S3, MS
SQL Server, GCloud BigQuery, MongoDB and MySQL-like databases by the means of native auditing tools.

Figure 3: Trailing DB logs operation scheme.

The target database performs auditing using its integrated auditing mechanisms and saves the auditing results in a
dedicated database table or in a CSV or XML file, depending on the selected configuration. DataSunrise then
establishes a connection with the database, downloads the audit data and passes it to the Audit Storage for
further analysis.
First and foremost, this configuration is intended to be used for Amazon RDS databases because DataSunrise
doesn't support sniffing on RDS.
This operation mode has two main drawbacks:
• If the database admin has access to the database logs, they can delete them
• Native auditing has a negative impact on database performance.

1.4 Dynamic SQL Processing


Dynamic SQL processing is the auditing, masking and blocking of queries that contain dynamic SQL. We call a query
dynamic if its final text is not known until it is executed in the database. For example, in PostgreSQL, EXECUTE is used
for such queries.

Note: Dynamic SQL processing is available for PostgreSQL, MySQL and MS SQL Server

EXECUTE enables you to execute a query that is contained in a string or variable, or is the result of an expression. For
example:

...
EXECUTE "select * from users";
EXECUTE "select * from ” || table_name || where_part;
EXECUTE foo();
...

Here table_name and where_part are variables, foo() is a function that returns a string. The second and third queries
are dynamic ones because we can’t tell what query will be executed in the database.
Let's take a look at the following example:

SELECT run_query();

When executing this subquery, the following function will be called:

CREATE FUNCTION run_query() RETURNS RECORD AS
$$
DECLARE
row RECORD;
result RECORD;
BEGIN
SELECT * FROM queries AS r(id, sql) ORDER BY id DESC LIMIT 1 INTO row;
EXECUTE row.sql into RESULT;
DELETE FROM queries WHERE id = row.id;
RETURN result;
END;
$$ LANGUAGE plpgsql;

This function takes the latest query from the queries table, executes it and returns the result. DataSunrise can't
know beforehand which query will be executed, because the exact query only becomes known when the following
subquery is executed:

...
SELECT * FROM queries AS r(id, sql) ORDER BY id DESC LIMIT 1 INTO row;
...
That's why DataSunrise wraps dynamic SQL in a special function, DS_HANDLE_SQL, that does the trick. As a
result, the original function is modified to the following:

CREATE FUNCTION DSDSNRBYLCBODMJOVNJLFJFH() RETURNS RECORD AS
$$
DECLARE
row RECORD;
result RECORD;
BEGIN
SELECT * FROM queries AS r(id, sql) ORDER BY id DESC LIMIT 1 INTO row;
EXECUTE DS_HANDLE_SQL(row.sql) into RESULT;
DELETE FROM queries WHERE id = row.id;
RETURN result;
END;
$$ LANGUAGE plpgsql;

And the following query:

SELECT run_query();

Will be changed to the following one:

SELECT DSDSNRBYLCBODMJOVNJLFJFH();

Inside the DS_HANDLE_SQL function, the database sends the dynamic SQL to DataSunrise's handler. The handler
processes the query and audits, masks or blocks it accordingly. Thus,

...
EXECUTE row.sql into RESULT;
...

executes not the original query contained in the queries table but a modified one.
To enable dynamic SQL processing, when creating a database Instance, enable the "Dynamic SQL processing"
option in the Advanced settings. Then select the host and port of the dynamic SQL handler. The host is the
machine DataSunrise is installed on; it must be reachable from your database because the database
connects to this host when processing dynamic SQL.

Important: it's required to provide an external IP address of the SQL handler machine ("127.0.0.1" or "localhost" will
not work).

For processing of dynamic SQL inside functions, you need to enable the "UseMetadataFunctionDDL" parameter in
the Additional Parameters and check "Mask Queries Included in Procedures and Functions" for masking Rules
or "Process Queries to Tables and Functions through Function Call" for audit and security Rules respectively.
You can also enable dynamic SQL processing in an existing Instance's settings and specify the host and port in the proxy's settings.
Note that you need to configure a handler for each proxy and select a free port number.

PostgreSQL
In PostgreSQL, dblink is used for processing of dynamic SQL. It enables sending arbitrary SQL queries to another remote
PostgreSQL database.
The dynamic SQL handler therefore uses a PostgreSQL emulator. The user database sends the dynamic SQL to our
handler via dblink. The emulator accepts the new connection, performs the handshake and makes the client database
believe that it is sending queries to a real database. Since it is necessary to pass the session ID and operation ID (to
associate a query sent to the emulator with the original query), these parameters are transferred in dblink's connection string:

host=<handler_host> port=<handler_port>
dbname=<session id> user=<operation id> password=<connection id>

MySQL
In MySQL, the FEDERATED storage engine extension is used for dynamic SQL processing. It also connects two remote
databases, but in the MySQL case it works through an extended table: the table is created in one database while its
data is stored in another. To create such a table, it is necessary to provide a connection string to the remote MySQL database.
When the first dynamic SQL query is executed, the HANDLE_SQL function and such an extended table are created in
the DSDS***_ENVIRONMENT schema. The table's connection string points at the MySQL emulator. The table
includes the following columns: query, connection_id, session_id, operation_id and action.
First, the function INSERTs all the required parameters. The emulator processes the query, modifies it and changes the
action to block if necessary. After that, the function SELECTs the resulting query and returns it.
In MySQL, dynamic queries are created and executed with the following pair of statements: prepare stmt
from @var and execute stmt. Since executing the latter implies that a prepared statement already exists in
the database, we modify prepare. As a result, the complete query:

prepare stmt from @var

is replaced with a call to prepare_<stmt_name>(@var).


This procedure's body looks as follows:

call HANDLE_SQL(dynamic_sql, connection_id, session_id, operation_id, @ds_sql_<stmt_name>);
prepare <stmt_name> from @ds_sql_<stmt_name>;

<stmt_name> here is the statement name from the user query. A separate procedure is created for every
statement name and for every place it is referenced. Information about these procedures is stored in PreparedStatementManager.
@ds_sql_<stmt_name> is an output parameter of HANDLE_SQL into which the function puts the modified query.

Important: for dynamic SQL processing in MySQL, the FEDERATED engine should be enabled. To enable it, add the
federated line to the [mysqld] section of the /etc/my.cnf file. Alternatively, connect to your MySQL/
MariaDB with admin privileges, check whether the FEDERATED engine is off and enable it with the following queries:

show engines;
install plugin federated soname 'ha_federated.so';
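
For the configuration-file method mentioned above, a minimal /etc/my.cnf fragment could look as follows (a sketch; the rest of your [mysqld] section depends on your environment):

[mysqld]
federated

After editing the file, restart the MySQL/MariaDB service for the change to take effect.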

1.5 System Requirements


Before installing DataSunrise, make sure that your server meets the following requirements:
Minimum hardware requirements:
• CPU: 2 cores
• RAM: 4 GB
• Available disk space: 20 GB.
Recommended hardware configuration:

Estimated database traffic volume CPU cores* RAM, GB


Up to 3000 operations/sec 2 8
Up to 8000 operations/sec 4 16
Up to 12000 operations/sec 8 32
Up to 14000 operations/sec 16 64
Up to 17000 operations/sec 40 160

*Xeon E5-2676 v3, 2.4 GHz

Note: the more proxies you open, the higher the RAM consumption you will experience.

Software requirements:
• Operating system: 64-bit Linux (Red Hat Enterprise Linux 7+, Debian 10+, Ubuntu 18.04 LTS+, Amazon Linux 2)
• 64-bit Windows (Windows Server 2019+) with .NET Framework 3.5 installed: https://www.microsoft.com/en-us/download/details.aspx?id=21
• Linux-compatible file system (NFS and SMB file systems are not supported)
• Web browser for accessing the Web Console:

Web Browser Supported version


Mozilla Firefox 100+
Google Chrome 72+
Opera 58+
MS Edge 44.18+
Apple Safari 14+

Note that you might need to install some additional software like database drivers depending on the target
database and operating system you use. For the full list of required components see the Prerequisites section of the
corresponding Admin Guide.

1.6 Useful Resources


Web resources:
• DataSunrise official web site: https://www.datasunrise.com/
• DataSunrise latest version download page: https://www.datasunrise.com/download
• DataSunrise Facebook page: https://www.facebook.com/datasunrise/
• Frequently Asked Questions: https://www.datasunrise.com/documentation/faq/
• Best practices: https://www.datasunrise.com/download-the-datasunrise-security-best-practices/
• Best practices (AWS): https://www.datasunrise.com/documentation/download-the-datasunrise-aws-security-best-practices/
• Best practices (Azure): https://www.datasunrise.com/documentation/download-the-datasunrise-azure-best-practices/
Documents (located in the doc folder within the DataSunrise's installation folder):
• DataSunrise Administration Guide for Linux (DataSunrise_Database_Security_Admin_Guide_Linux.pdf). Describes
installation and post-installation procedures, deployment schemes, includes troubleshooting subsection.
• DataSunrise Administration Guide for Windows (DataSunrise_Database_Security_Admin_Guide_Windows.pdf).
Describes installation and post-installation procedures, deployment schemes, includes troubleshooting
subsection
• DataSunrise User Guide (DataSunrise_Database_Security_User_Guide.pdf). Describes the Web Console's structure,
program management, etc
• Command Line Interface Guide (DataSunrise_Database_Security_CLI_Guide.pdf). Contains the CLI commands
description, use cases, etc
• Release Notes (DataSunrise_Database_Security_Release_Notes.pdf). Describes changes and enhancements made
in the latest DataSunrise version, known bugs and version history
• EULA (DataSunrise_EULA.pdf). Contains End User License Agreement.

2 Quick Start

2.1 Connecting to DataSunrise's Web Console

DataSunrise is provided with a comprehensive web-based interface (the Web Console) used to control all the
program's functions.
1. To enter the Web Console, do the following:
To connect to the Web Console using the HTTPS protocol (by default), open the following address in your web
browser:

https://<DataSunrise_ip_address>:11000

<DataSunrise_ip_address> is the IP address or the hostname of the server DataSunrise is installed on, 11000 is
the HTTPS port of the DataSunrise's Web Console. For example, if your DataSunrise is installed on your local PC,
the address should be the following:

https://localhost:11000

2. Your web browser may display an "Unsecure connection" prompt due to an untrusted SSL certificate. Follow your
browser's prompts to confirm a security exception for the DataSunrise's Web Console (refer to subs. Creating a
Certificate for the Web Console on page 41).
3. Enter your credentials and click Log in to enter the web interface. On the first startup, use admin as the user
name. Concerning the password, see the instruction below:
• Linux: use the password you received at the end of the installation process.
• Windows: use the password you set at the end of the installation process.
• AWS: use the Instance ID of your EC2 machine with the DS- prefix as the password. For example:

DS-i-05ad7f56124728269

• Microsoft Azure: leave the password field empty. You will be prompted to set a new password after logging in.
• In case the dictionary.db file was removed or a password wasn't set during the installation process, leave the
password field empty to set a new password.

2.2 Product Registration


The first time you start DataSunrise, you will be prompted to register it. If your License has expired, contact us at
[email protected] to buy a new license key; you can replace an existing License with a new one or add multiple
Licenses. While a License is expired, all DataSunrise Rules are disabled and all user queries go to the target
database directly, bypassing DataSunrise's proxies.
You can register your DataSunrise using the following methods:
1. To replace your expired License with a new one, navigate to System Settings → About → License Manager.
Click Add License and paste your license key into the Input the License Key field. Click Apply.
2. Paste a license key into the appfirewall.reg file located within the DataSunrise's installation folder and the file's
contents will be imported to DataSunrise. If this file doesn't exist, you can create it manually but we recommend
registering your DataSunrise via the License Manager as described above.

2.3 Creating a Database Profile on Startup (optional)

At first startup, you are prompted to create a target database profile (if there are no profiles existing). You can skip
this step to perform it later.
Before establishing protection of a certain database, you should specify this database in DataSunrise's settings.
To do this, you need to create a target database profile. The profile includes connection details which enable
DataSunrise to get your database's metadata. For an instruction on creating a database profile, refer to Creating a
Target Database Profile on page 58

2.4 Creating an SMTP Server (optional)


On the first startup you will be prompted to create an SMTP server for sending notifications to subscribers:
Refer to Configuring an SMTP Server on page 212.

3 DataSunrise Use Cases


The following demonstration includes four scenarios (database audit, database security, dynamic data masking, and
database limited access). Its aim is to show you how to configure DataSunrise Rules.
In this demo, we use a PostgreSQL database that includes the customers table created for the demo. The table
contains clients' personal data, ZIPs, addresses and credit card numbers. To query the test database, we use
PGAdmin utility.

Figure 4: Customers table displayed in PGAdmin

3.1 Creating a Target Database Profile and a Proxy

Before creating any rules, it's necessary to create a target database profile (i.e. to inform DataSunrise about your
target database). To do it, perform the following:
1. Navigate to Configurations → Databases.
2. Click Add Database to create a new database profile.
3. Enter required information about the target database (see notes below):

Note:
• The Logical Name field contains a logical name of the database profile. You can set any name
• In the Database Type drop-down list, PostgreSQL (target database type) is selected as an example
• In the Hostname field, the database's IP address is specified
• In the Port field, port number 5434 is specified, because the database listens on this port (example)
• Click Test Connection when done to check the connection between DataSunrise and your database.

4. To employ the database security and masking features, it is necessary to create a DataSunrise proxy for the target
database. To create a proxy, we click Add Proxy in the Capture Mode subsection, specify the proxy's IP address
in the Listen on IP Address drop-down list and assign the proxy's port number in the Listen on Port field.
The proxy's port number should differ from the database's port number (it is 54321 in this case). When done,
click Save to save the database profile.
5. To connect to the database through the proxy, it is necessary to create a new connection in PGAdmin with
DataSunrise proxy settings.

Note: In practice, a database is usually configured to listen on a non-standard port (54321 for example), and a
DataSunrise proxy is configured to use the port which client applications use to connect to the server. Thus, client
applications connect to the DataSunrise proxy instead of connecting to the database directly.
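
For example, with the proxy configured above to listen on port 54321, a command-line connection through the proxy (shown here with psql and a placeholder host name) could look like this:

psql -h datasunrise.example.com -p 54321 -U postgres -d testdb

The client connects to the proxy port rather than to the database's own port (5434 in this demo), so every query passes through DataSunrise.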

3.2 Scenario 1. Database Audit


In this scenario, we demonstrate how to configure DataSunrise to audit all queries directed to the target database.

3.2.1 Creating an Audit Rule


To audit our test database, it is necessary to create and configure an Audit Rule. In this case, the sequence of actions
is the following:
1. Go to the Data Audit → Rules subsection. Then click Add Rule to create a new Audit Rule.
2. Configure your Audit Rule to log all queries to the database (see notes below).

In the Main section subsection, the target database information is specified. It includes database type
(PostgreSQL), database instance (as the target database entry is named in the Configurations) and the Rule's
logical name.

By default, the "Audit" action is selected. It means that DataSunrise will audit user queries when the rule is
triggered. To log database responses (the output), the Log Query Results check box is checked. Since the current
scenario requires all user queries to be audited, Filter Sessions are left as by default. Thus, any query to the
database regardless of its source IP address will trigger the rule.

Filter Statements settings are as by default as well. Thus, the Rule will be triggered by all queries that contain
any DML statements.

3.2.2 Viewing Database Audit Results


This stage includes demonstration of auditing results. The Audit Rule which was created at the previous stage is
configured to be triggered by any incoming user query. Here's what happens when DataSunrise receives a user
query.
1. Let's send the following query via PGAdmin:

SELECT * FROM public.customers;

2. The database outputs the table contents:

3. Now let's check the auditing results in the Web Console. Navigate to the Audit → Transactional Trails
subsection.
4. To view detailed information about some event, click event's ID. In a new tab, the event's details will be displayed:
SQL of the query, basic information, session information and the database output.

3.3 Scenario 2. Database Security


In this scenario, we demonstrate how to configure a Data Security Rule to prevent unauthorized modification of DB
table's columns.

3.3.1 Creating a Security Rule


To prevent modification of the test table, it is necessary to create and configure a Security Rule. Here's the sequence
of actions:
1. Go to Data Security → Rules. Click Add Rule to create a new Rule
2. Configure the Security Rule to block attempts to modify the customers table (see notes below).

The target database is specified in General Settings.

The Block action, which blocks all queries that meet the current Rule's conditions, is set by default in the Action
Settings subsection.

Since the current scenario requires to prevent ALL table modification attempts, the Object Group filter is selected
in the Filter Statements subsection, INSERT, UPDATE and DELETE check boxes are checked. Thus, when the Rule
is triggered, DataSunrise will block all queries aimed at table modification. The Filtering settings also include the
customers table specified (Process Query to Database Objects subsection). Thus, the Rule can be triggered
only by the queries directed to the customers table. All actions aimed at other tables will be ignored.

3.3.2 Blocking Results


This stage includes demonstration of DataSunrise's Data Security results. Data Security Rule created earlier is
configured to be triggered by any attempts to modify the customers table (i.e. it is triggered by queries which
contain INSERT and UPDATE statements).
1. Let's query the database with PGAdmin. The query is aimed to change one entry of the Last Name column from
Wade to Burnwood:

UPDATE public.customers
SET "Last Name"='Burnwood'
WHERE "Last Name"='Wade';

2. As a result, the query is blocked. The blocking is performed in the form of a SQL error ("ERROR: The query is
blocked").

3. To view Data Security events and event details, go to Data Security → Events.

3.4 Scenario 3. Data Masking


This scenario demonstrates how to configure DataSunrise's Dynamic Data Masking to obfuscate the output of the
customers table column which contains client last names.

3.4.1 Creating a Masking Rule


The current scenario requires obfuscating the LastName column output. To do this, it is necessary to create and
configure a new Masking Rule:
1. Enter Masking → Dynamic Masking Rules. Click Add Rule to create a new Rule.
2. Configure a Rule to obfuscate the LastName column output (see the notes below):

Target database is specified in General Settings.

In the Action Settings subsection, Mask action is selected by default.

In the Columns to Mask subsection a column to be masked is specified (the LastName column of the customers
table). To select it, click Select and check it in the database objects tree. The Fixed string algorithm is selected
in the Masking Method drop-down list. Thus, the current Rule will be triggered by a query directed to the
LastName column and will obfuscate its contents in the database output. Other columns will be ignored.

3.4.2 Data Masking Results


This stage includes a demonstration of dynamic data masking results. The Masking Rule created at the previous stage
is configured to be triggered by any query directed to the customers table.
1. Let's query the target DB with PGAdmin:

SELECT * FROM public.customers;

2. As a result, the contents of the LastName column are obfuscated with a fixed string.
3. To view masking events, enter Data Masking → Dynamic Masking Events subsection. To view details of some
event, click the event's ID you're interested in.

3.5 Scenario 4. Limiting Access to a Database

This scenario demonstrates how to allow access to the test table while blocking access to other tables.
In this scenario we use an MS SQL Server database and two duplicate tables, customers and customers_copy, created
for this case.

3.5.1 Creating a Limited Access Rule


To allow working with the customers_copy table only, it is necessary to create and configure an Access Rule. It is
very similar to creating a Security Rule mentioned above:
1. Go to Security → Rules. Click Add Rule to create a new Rule.
2. Configure a Rule to allow modification of the customers_copy table (see notes below).

The target database is specified in the General Settings.



The Allow value is set in the Action Settings subsection so that queries matching the current Rule's conditions are
allowed to pass.

The current scenario requires approving table modifications, so the Object Group filter is selected in the Filter
Statements subsection and the INSERT, DELETE and UPDATE check boxes are checked. Thus, once the Rule is triggered,
DataSunrise will allow all queries aimed at table modification. The filtering settings also include the customers_copy
table (Process Query to Database Objects subsection), so the Rule can be triggered only by queries directed to the
customers_copy table. It is now necessary to create a Blocking Security Rule to prevent access to the
remaining tables.
3. Click Add Rule once again in the Security → Rules section.
4. Configure a Rule to block access to the database. Since the scenario requires preventing table modification
attempts, the Object Group filter is selected in the Filter Statements subsection and the INSERT, DELETE and UPDATE
check boxes are checked. Thus, DataSunrise will block these types of queries.
5. To prevent the Blocking Rule from blocking queries to the customers_copy table, it's necessary to give the Access
Rule a higher priority. In the Data Security → Rules section, right-click and select Priority Mode from the context menu. Then
drag and drop your Rule. Click Save Priority.

DataSunrise checks and executes Rules from the top of the list to the bottom. If an incoming query doesn't match the
first Rule's conditions, DataSunrise checks the second Rule and so on. Once a query matches a Rule's conditions,
DataSunrise performs that Rule's action and stops evaluating the lower-priority Rules. The closer a Rule is to
the top of the list, the higher its priority, so DataSunrise acts as the higher-priority Rule demands.

3.5.2 Limited Access Results


This stage includes a demonstration of DataSunrise's Limited Access results. The Access Rule created earlier is
configured to be triggered by any attempt to modify the customers_copy table (i.e. it is triggered by queries which
contain INSERT, UPDATE or DELETE statements), while any attempt to modify the customers table will trigger the Blocking Rule:
1. Let's query the database with Microsoft SQL Server Management Studio (SSMS). The query is aimed at changing
one entry of the LastName column from Wade to Burnwood in the customers_copy table:

UPDATE public.customers_copy
SET "LastName"='Burnwood'
WHERE "LastName"='Wade';

2. As a result, the query is allowed and the table will be successfully modified.
3. Now let’s query the customers table using the same command:

UPDATE public.customers
SET "LastName"='Burnwood'
WHERE "LastName"='Wade';

4. As a result, the Blocking Rule is triggered, and the query is blocked. The blocking is performed in the form of a
SQL error (it says "ERROR: The query is blocked").
5. To view Limited Access events and event details, go to Security → Events.

3.6 Creating a Dynamic Masking Rule for multiple instances

The current scenario covers creating a masking rule for several instances where the following objects exist: database
D, schema S, table T and column C (double type).
1.
Note: for MySQL, database and schema should be defined using the same name.

First, a function should be defined for each database on each instance. This function returns a random double
value for a column.

CREATE FUNCTION randomizer(val double) RETURNS double
begin
  return val * rand();
END
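
On MySQL/MariaDB, for instance, the same function could be created roughly as follows (a sketch: the DELIMITER switch and the NOT DETERMINISTIC NO SQL characteristics are typical MySQL requirements and are not part of the original example):

DELIMITER //
CREATE FUNCTION randomizer(val DOUBLE) RETURNS DOUBLE
NOT DETERMINISTIC NO SQL
BEGIN
  RETURN val * RAND();
END //
DELIMITER ;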

2. Configure a Rule to obfuscate your column: go to Masking → Dynamic Masking Rules and click the Add Rule
button. Scroll down to Masking Settings and click Select to add the column to be masked.
3. Click ADD REGEXP DATABASE, input ^D$ as the regular expression and then click Add. This regular expression
defines that the database name should be D exactly.
4. Once the regular expression for the database is added, hover your mouse cursor over it and the Add RegEx
Schema button will appear. Click Add RegEx Schema, input ^S$ as the regular expression and then click
Add. This regular expression defines that the schema name must be exactly S.
5. Once the regular expression for the schema is added, hover your mouse cursor over it and the Add RegEx Table
button will appear. Click Add RegEx Table, input ^T$ as the regular expression and then click Add.
This regular expression defines that the table name must be exactly T.
6. Once the regular expression for the table is added, hover your mouse cursor over it and the Add RegEx Column
button will appear. Click Add RegEx Column, input ^C$ as the regular expression and then click Add. This
regular expression defines that the column name must be exactly C. Click Done.
7. Finally, enter D.randomizer into the Function to Call field, where D is the database or schema name and
randomizer is the name of the function created earlier. Click Save Rule.

4 DataSunrise's Web Console

4.1 Structure of the Web Console


This User Guide section describes the Web Console's elements common for all the DataSunrise web interface's
sections.

Figure 5: Basic Web Console elements.

Each page of the DataSunrise's Web Console (fig. 5) is divided into three parts. The upper part (element group 1) is
common for all the Web Console's sections and subsections. It contains the Admin Panel.
The left part of the page (element group 2) is common for each Web Console's section. It includes the Navigation
Menu.
And the content part (element group 3) is different for each page.
See detailed description of all aforementioned elements below:
1. Admin Panel

Interface element Description


Calendar Current date
Clock Time at the DataSunrise's server
Task Manager Displays all running tasks (metadata update, Clean Audit, Dictionary backups etc)
Notifications link (bell) Available notifications
User link Current DataSunrise user and its settings

2. Navigation menu
Interface element Description
Dashboard link Dashboard access (refer to Dashboard on page 39)
Compliances link Compliance Manager access (Compliance Manager Overview on page 268)
Audit link Data Audit section access
Security link Data Security section access
Masking link Data Masking section access
Data Discovery link Data Discovery section access (Sensitive Data Discovery on page 243)
VA Scanner link Vulnerability Assessment section access (VA Scanner on page 263)
Monitoring link Monitoring section access (Diagrams of Internal Characteristics of DataSunrise
on page 55)
Reporting link Report Gen access (Reporting on page 259)
Resource Manager link Resource Manager access (Resource Manager on page 275)
Configuration link Configuration section access (DataSunrise Configurations on page 203)
System Settings link System Settings section access (System Settings)

Each section of the Navigation menu can be extended to access its subsections. It is used to navigate through
subsections of a current section.
3. Content area. It is used to display current subsection's content or tabs/pop-up windows.

4.2 Dashboard
The Dashboard is the starting page of the DataSunrise's Web Console. It displays general information about the
program operations.
The Dashboard's interface includes the following elements:

Figure 6: The Dashboard page

1. Proxies list. Available DataSunrise proxies. Right-click on a proxy name for a context menu which enables you to
do the following:
• Test Connection. Testing a connection between the selected DataSunrise's proxy and the target database
• Active Database Sessions. Displays details of database sessions in progress
• Disable Proxies. Makes the proxies inactive.
2. Last System Errors list. Displays DataSunrise system errors.
3. System Info list. Contains information about a computer DataSunrise is installed on.
List item Description
Server Current DataSunrise server
Current Dictionary Location of the current Dictionary database
License Type Type of the DataSunrise license
Backend UpTime DataSunrise Backend working time
Version DataSunrise version number
Node Name Computer name
OS Version DataSunrise server's operating system version
License Expiration Date Expiration date of the DataSunrise license

4. Top Blocked Queries per Day list. Displays a list of the most frequent user queries that were blocked by the
DataSunrise's Data Security module.
5. Current Throughput clickable diagram. Displays a number of user sessions and the number of executed
commands in respect of a target database. The diagram is refreshed every 10 seconds.
6. Active Audited Sessions list. Displays user sessions in progress. Also it enables you to close running sessions. To
do this, select a session of interest in the list and click Interrupt Session
7. Trail DB Audit Logs list. Displays a list of available Audit Trails (see Data Audit (Database Activity Monitoring) on
page 121)

4.3 SSL Certificates


4.3.1 Creating a Certificate for the Web Console
On the first start-up, the web browser used to access DataSunrise's Web Console may warn you about an insecure
connection and propose adding a security exception for the Web Console. This issue is caused by DataSunrise's
self-signed SSL certificate. To avoid it, use an SSL certificate signed by a certification authority. For example, you
can do the following:
• You can create the required certificate with the Let's Encrypt service. Refer to the following page for instructions
on obtaining a certificate from Let's Encrypt: https://www.datasunrise.com/blog/getting-an-ssl-certificate-with-lets-encrypt
• You can get a certificate from the Active Directory Certificate Services. Refer to the following page for
instructions: https://technet.microsoft.com/en-us/library/cc772393(v=ws.10).aspx
• To create a self-signed certificate with the OpenSSL tool, do the following:
• Generate a private key and a Certificate Signing Request (CSR) with the following command:

openssl req -out CSR.csr -new -newkey rsa:1024 -nodes -keyout privateKey.key

• Remove the Passphrase from the private key with the following command:

openssl rsa -in privateKey.key -out newPrivateKey.pem

• Generate a self-signed certificate with the following command:

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:1024 -keyout privateKey.key -out
certificate.crt

Paste the private key and the certificate you got into the appfirewall.pem file located in the DataSunrise
installation folder.
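
For example, on a default Linux installation the key and the certificate can be combined into appfirewall.pem like this (the /opt/datasunrise path is an assumption; adjust it to your setup):

cat privateKey.key certificate.crt > /opt/datasunrise/appfirewall.pem
chmod 600 /opt/datasunrise/appfirewall.pem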

4.3.2 Creating a Private Certification Authority


To create your own Certification Authority (CA) and generate a signed certificate for the DataSunrise's Web Console,
do the following. The example is given for Linux OS.
1. Creating your own CA using OpenSSL
a. Create the root key.

openssl genrsa -des3 -out rootCA.key 2048

b. Create a self-signed certificate.

openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem

2. Creating a certificate signed by the CA


a. Create a private key.

openssl genrsa -out datasunrise_gui.key 2048

b. Generate a certificate signing request (CSR).

openssl req -new -key datasunrise_gui.key -out datasunrise_gui.csr


When you are asked to specify “Common Name (eg, YOUR name)”, specify the host DataSunrise is
installed on. It is important to specify the domain name, as Redshift-ODBC requests it when performing
authentication in the verify-full SSL mode.
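
If you prefer to set the Common Name non-interactively, the same CSR can be generated with the -subj option (the host name below is a placeholder):

openssl req -new -key datasunrise_gui.key -out datasunrise_gui.csr -subj "/CN=datasunrise.mydomain.com"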
c. Sign the CSR using the CA root key.

openssl x509 -req -in datasunrise_gui.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out
datasunrise_gui.crt -days 500 -sha256

d. Copy the key and the certificate generated by the CA to the appfirewall.pem file.

cat datasunrise_gui.key > /opt/datasunrise/appfirewall.pem
cat datasunrise_gui.crt >> /opt/datasunrise/appfirewall.pem
chmod 600 /opt/datasunrise/appfirewall.pem
chown datasunrise:datasunrise /opt/datasunrise/appfirewall.pem

e. Restart the DataSunrise's Core. Navigate to System Settings → Servers, click the required server, then make
necessary changes and in the Core and Backend Process Manager → Actions click Restart Core. As a result,
clients will have to install the CA (rootCA.pem) certificate for full SSL authentication (verify-full mode).

4.4 Alternative User Authentication Methods


4.4.1 Configuring Active Directory Authentication to the Web Console
DataSunrise enables you to use your Active Directory credentials for logging into the Web Console. Both NTLM and
Kerberos authentication methods are available. For Kerberos, an SPN (Service Principal Name) should be created
(see description below), otherwise NTLM is used. To enable AD authentication, do the following:
1. Prepare your network environment.
• The server DataSunrise is installed on should be included in an AD domain
• The DATA_SUNRISE_SECURITY_SUITE system service should be started by the LocalSystem account
• A DNS should be configured and tested. A connection between the AD Domain Controller and the
DataSunrise server should be available
• A DNS name should be used to access the Web Console. For example:

https://<myserver>.<mydomain>:11000

• For Kerberos-based authentication, an SPN should be created with the following command (It should be
created again if the port number or hostname were changed):

setspn -S HTTP/<myserver>.<mydomain>:11000 <myserver>

2. Navigate to System Settings → General of the Web Console, select Kerberos in the Type of Authentication...
drop-down list.
3. Create a DataSunrise user (refer to subs. Creating a DataSunrise User on page 390). Name it as follows: <short
domain name>\<user name>. In other words, name the user similarly to the AD user you are going to log into
the Web Console as. For example:

DB\Administrator

4. Restart the DATA_SUNRISE_SECURITY_SUITE system service for the changes to take effect.
5. Enter the Web Console using port 11000. To bypass the Kerberos-based authentication mechanisms and log in to
the Web Console using regular DataSunrise user credentials, use port 12000.

4.4.2 Configuring LDAP Authentication to the Web Console


To configure LDAP-based authentication to DataSunrise's Web Console, do the following:
1. Add at least one LDAP server to System Settings → LDAP (LDAP on page 396)
2. Navigate to System Settings → General Settings and in the Type of Authentication to DataSunrise Web
Console, select LDAP
Two authentication modes are available:
• By user name. A user should exist (Access Control → Users) with Active Directory Authentication enabled.
At the login page, enter the user name and password stored in LDAP. DataSunrise's Backend will check all
available LDAP servers, try to connect to them using the provided credentials, and authenticate the user.
• By group. It is used when a user unknown to the system is trying to log in. Note that the Group Attribute
(System Settings → LDAP) should contain a correct attribute name and Access Control → Roles should
include an Active Directory Path. The Backend will try to connect to the available LDAP servers and retrieve the
attribute specified in the Group Attribute field. It will then create a user and grant it rights according to the Active
Directory Path names. Subsequent logins are authenticated as "By user name", because the user already exists.

4.4.3 Avoiding self-signed certificate problem by attaching a certificate

In order to attach a certificate to your load balancer, do the following:
1. In Route 53, create a Simple Routing Record for your hosted zone in order to create an Alias for the DataSunrise
load balancer.

2. In the EC2 service, create a Target Group with the TLS protocol pointing to the DataSunrise machines.

3. Create a new Listener for the DataSunrise Load Balancer for the TLS port 443, specifying the certificate:

4. In the web browser, navigate to https://dsloadbalancer.yourdomain (the alias you created in step 1).
You will be directed to the required port automatically, without having to approve a self-signed certificate in the
web browser. This can be automated for your environment in the CloudFormation template; for testing purposes
you can do it manually.

4.4.4 Configuring OAuth2 Authentication in the DataSunrise's Web Console (based on Okta)

OAuth2 is an authorization protocol that may be used to let a user access a resource without providing a login and password. OAuth2 may be used to get access to the execution of DataSunrise's RPCs.
Here are the steps you need to perform to authorize in DataSunrise using OAuth2 based on the Okta service:
1. Register your organization and user at https://okta.com
2. Navigate to the applications page using the following link: https://YOUR_ORGANIZATION_NAME-admin.okta.com/admin/apps/add-app. Create a new application:
• Platform: OAuth Service
• Sign on method: OAuth2
3. Save the Client ID and Client secret
4. Create your scope and save its name. Refer to the following page: https://developer.okta.com/docs/guides/customize-authz-server/create-scopes
5. In the DataSunrise's Web Console, navigate to Access Control and create a new user named oauth2
6. Navigate to Additional Parameters and set the OAuth2URLForJsonWebKeys parameter's value as the following: https://YOUR_ORGANIZATION_NAME.okta.com/oauth2/default/v1/keys?client_id=YOUR_CLIENT_ID_STEP3
7. Generate and save a Base64 code. Visit the https://www.base64encode.org/ web site and encode the string in the following format: YOUR_CLIENT_ID_STEP3_YOUR_CLIENT_SECRET_STEP3
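
If you prefer not to use a third-party web site, the same Base64 code can be generated locally, for example with the base64 utility on Linux (assuming the string format shown above):

echo -n "YOUR_CLIENT_ID_STEP3_YOUR_CLIENT_SECRET_STEP3" | base64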
8. Get a JSON Web Token (JWT) from okta.com. You can use the curl utility with the following command:

curl --request POST --url https://YOUR_ORGANIZATION_NAME.okta.com/oauth2/default/v1/token \
  --header "accept: application/json" --header "authorization: Basic YOUR_BASE64_CODE_STEP7" \
  --header "cache-control: no-cache" --header "content-type: application/x-www-form-urlencoded" \
  --data "grant_type=client_credentials&scope=SCOPE_NAME_STEP4"

Having executed the command, you will get a JSON string.
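
The exact contents of the JSON string depend on your Okta authorization server, but it normally includes an access_token field holding the JWT. If only that token is needed and jq is installed, it can be extracted directly, for example:

curl --request POST --url https://YOUR_ORGANIZATION_NAME.okta.com/oauth2/default/v1/token \
  --header "accept: application/json" --header "authorization: Basic YOUR_BASE64_CODE_STEP7" \
  --header "content-type: application/x-www-form-urlencoded" \
  --data "grant_type=client_credentials&scope=SCOPE_NAME_STEP4" | jq -r '.access_token'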


9. Pass the following parameter to DataSunrise's oauth2Connect RPC: JWT=JSON_OBJECT_STEP8. Note that you can use the Postman app to pass RPC requests to DataSunrise.
10. As a result of the aforementioned actions, your user will be authorized in DataSunrise and will be able to pass the session_id parameter obtained from the oauth2Connect RPC to the RPCs of interest.

4.4.5 Configuring Kerberos/NTLM Authentication in Internet Browsers
4.4.5.1 Mozilla Firefox
To enable Kerberos authentication in your Firefox browser, do the following:
1. Open the low-level configuration page by inputting the following in the URL bar and pressing Enter:

about:config

2. In the Search text field, enter the following and press Enter:

network.negotiate-auth.trusted-uris

3. Double-click the parameter's name or click Edit and enter the hostname or the domain of the server protected
by Kerberos. For example:

https://localhost:11000

4. Confirm the changes.

4.4.5.2 Microsoft Edge


To enable Kerberos authentication in your Microsoft Edge browser, do the following:
1. Click the gear icon for settings then select Browser Properties → Security.
2. Select Local Intranet zone and click Sites. Make sure that the following options are checked: Include all local
(intranet) sites not listed in other zones and Include all sites that bypass the proxy server.
3. Click Advanced and add the names of the Kerberos-protected domains to the list. Close the window and save
the changes.
4. Click Custom level.
5. Navigate to Scripting and enable Active Scripting.
6. Navigate to User Authentication/Logon. Check Automatic logon only in Intranet zone.
7. Open the Advanced tab.
8. In the Settings list, navigate to Security. Check Enable Integrated Windows Authentication and save the changes.

4.4.5.3 Google Chrome


To enable Kerberos authentication in your Chrome browser, do the following:
For Windows:
1. You need to configure your Edge browser because Chrome uses its settings. Refer to Microsoft Edge on page
45.
For Linux:
1. Run Chrome with the following parameter:

--auth-server-whitelist

and specify hostname or domain of the server protected by Kerberos.


For example:

> google-chrome --auth-server-whitelist="hostname/domain"

4.4.6 Single Sign-On in DataSunrise


The Single Sign-On (SSO) feature enables you to log in to the Web Console using your OpenID or SAML credentials. The examples included in this subsection describe SSO configuration using Okta as the service provider, but you can also use other SSO providers that support SAML and OpenID.

4.4.6.1 Configuring SSO Authentication Based on OpenID (Okta)


This example describes how to configure SSO authentication provided by Okta. To enable OpenID authentication to the DataSunrise's Web Console, do the following:
1. Register in the Okta service. Navigate to Dashboard → Add Applications → click Create New App.

Figure 7: Adding Application

2. On the Create a New Application tab, select Web as Platform, and OpenID Connect as Sign on Method

Figure 8: Creating a new App


3. On the next tab, set application name (any) and input the following URL:

https://<DataSunrise_IP_address>:11000/sso_endpoint

For example:

https://127.0.0.1:11000/sso_endpoint
https://localhost:11000/sso_endpoint

Figure 9: Connect integration

4. Navigate to Assign Applications and assign your application to your Okta user
5. Go to the following page: https://developer.okta.com/docs/api/resources/oidc#request-example-3. See Request Example. Copy the first part of the query (for example):

https://datasunriseantony.okta.com/oauth2/${authServerId}/.well-known/openid-configuration

And delete the middle part of it:

oauth2/${authServerId}

The query should look like the following:

https://datasunriseantony.okta.com/.well-known/openid-configuration

Open this query in your web browser to see the query results.


Note that you will need the following values from there:

authorization_endpoint
token_endpoint
jwks_uri
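
If you prefer the command line, the same values can be read from the discovery document with curl and jq (the organization name below is the example one from step 5; both utilities are assumed to be installed):

curl -s https://datasunriseantony.okta.com/.well-known/openid-configuration | \
  jq '{authorization_endpoint, token_endpoint, jwks_uri}'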

6. Go to Okta's Dashboard and navigate to Application → Your App → General → Client Credentials. Note the
Client ID and Client secret. You will need these parameters' values.

Figure 10: Client Credentials

7. Enter the DataSunrise's Web Console. Note that you need to specify the full IP address instead of just a host
name. For example:

https://127.0.0.1:11000

Navigate to System Settings → SSO, click Add SSO Service.


8. Input a logical name (any), select OpenID Connect in the SSO Service Type. Input the following values:
Parameter in the Web Console Corresponding parameters
Authorization Token Endpoint URL authorization_endpoint (see step 5)
Token Endpoint URL token_endpoint (see step 5)
Token Keys Endpoint URL jwks_uri (see step 5)
OIDC Client ID Client ID (see step 6)
OIDC Client Secret Client secret (see step 6)

Save the profile.


9. Navigate to Access Control → Your user (admin for example) → Single Sign-On Connections. In the Login
With drop-down list, select the SSO Service created in the previous steps and click Add Connection.
10. You will be redirected to the logon screen of the Web Console. Input OpenID credentials to be logged into the
Web Console.

4.4.6.2 Configuring SSO Authentication Based on SAML (Okta)


This example describes how to configure SAML-based SSO authentication provided by Okta. To enable SAML authentication to the DataSunrise's Web Console, do the following:
1. Register in the Okta service. Navigate to Dashboard → Add Applications → click Create New App.

Figure 11: Adding Application

2. On the Create a New Application tab, select Web as Platform, and SAML 2.0 as Sign on Method
3. On the next tab, set application name (any) and input the following URL into Single Sign on URL and Audience
URI (SP Entity ID):

https://<DataSunrise_IP_address>:11000/sso_endpoint

For example:

https://localhost:11000/sso_endpoint

Figure 12: SAML settings

4. Navigate to Assign Applications and assign your application to your Okta user. A new page will open. Note the
Identity Provider Single Sign-On URL. You will need this parameter's value.
5. Enter the DataSunrise's Web Console. Navigate to System Settings → SSO, click Add SSO Service.
6. Input a logical name (any), select SAML in the SSO Service Type. Input the "Identity Provider Single Sign-On
URL" (see step 4) into the Authorization Token Endpoint URL field. Save the profile.
7. Navigate to Access Control → Your user (admin for example) → Single Sign-On Connections. In the Login
With drop-down list, select the SSO Service created in the previous steps and click Add Connection.
8. You will be redirected to the logon screen of the Web Console. Input Okta credentials to be logged into the UI.

4.4.6.3 Configuring SSO Authentication Based on SAML (JumpCloud)


This example describes how to configure SSO authentication provided by JumpCloud. To enable SAML authentication to the DataSunrise's Web Console, do the following:
1. Register in the JumpCloud service. Navigate to Console → Users and create a JumpCloud user:

Figure 13: Creating a JumpCloud user

2. Create a User Group and assign your user to this group:

Figure 14: Creating a User Group

Figure 15: Adding a User to a User Group

3. Navigate to User Authentication → SSO and create a JumpCloud Application:



Figure 16: Creating an Application



Figure 17: Creating an Application

Figure 18: Creating an Application

• Provide IdP Entity ID (DataSunrise by default)


• Provide SP Entity ID (DataSunrise by default)
• Provide ACS URL: https://<DS_IP_Address>:11000/sso_endpoint
• Upload an SP Certificate (an example of generating one is shown after this list)
• Save the settings
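
If you do not yet have the SP Certificate to upload here (and the Private Key and Certificate you will paste into the DataSunrise SSO profile in step 5), a self-signed pair can be generated, for example, with OpenSSL; the file names, subject and validity period below are arbitrary examples:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout sp_private_key.pem -out sp_certificate.pem -subj "/CN=DataSunrise"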
4. Open your Application in the JumpCloud Console and click Export Metadata. Save the metadata file you got.
5. Open your DataSunrise Web Console and navigate to System Settings → SSO → Add SSO Service:

• Select SAML as the SSO Service type


• Upload the metadata file obtained from JumpCloud to DataSunrise
• Specify Entity ID and Endpoint similarly to the respective values in JumpCloud
• Paste your Private Key and Certificate into the corresponding fields
• Select an option for Users authenticated by SSO Service and save the settings.
6. Navigate to Access Control → Your user (admin for example) → Single Sign-On Connections. In the Login
With drop-down list, select the SSO Service created in the previous steps, provide JumpCloud User name and
click Add Connection.
7. Log out and you will be redirected to the logon screen of the Web Console. Input JumpCloud credentials to be
logged into the Web Console.

4.4.7 Configuring Email-Based Two-Factor Authentication


Two-factor authentication is an additional layer of security (in addition to login/password authentication) when accessing the Web Console. The second authentication factor is email.
To enable email-based two-factor authentication, do the following:
• Configure an SMTP server to send security letters
• Confirm your DataSunrise user's email
• Select the E-mail value in the Type of Two-Factor Authentication drop-down list.
To enable sending letters with confirmation codes, configure an SMTP server at Configuration → Subscribers → Add Server (refer to subs. Configuring an SMTP Server on page 212 for details) and enable the Send security emails from this server option for at least one server (otherwise the "Cannot send verification code (Cannot find any server to send security data)" error occurs).
To confirm an email, do the following:
• Configure an SMTP server to send security letters (check the Send security emails from the server check box)
• At System Settings → Access Control → your User, input a valid email address to the Email text field and in the
Confirm your Email subsection click Get Code
• Paste the code received from DataSunrise into the Confirmation Code text field and click Confirm Email
When two-factor authentication is enabled, a letter including a confirmation code is sent to the user when the user attempts to log in. The user needs to paste this code into the Confirmation Code text field. If a code hasn't been sent, you can request a new one by clicking Get Code.

Note: the session timeout is 10 minutes by default. Thus, a Confirmation code is valid for 10 minutes for a certain IP address. Once this time has elapsed, you need to request a new Confirmation code. You can configure the 2FA session timeout by changing the TfaLinksValidationTimeout parameter's value (see Additional Parameters on page 337)

4.4.8 Configuring OTP Two-Factor Authentication


Along with email-based 2FA, DataSunrise supports One-Time Password (OTP) authentication provided by the Google Authenticator app.
To enable OTP-based Two-Factor Authentication, do the following:
• Install Google Authenticator on your smartphone
• Enter the Web Console and navigate to System Settings → Access Control. Create a new user or use an existing
one.
• In the user's settings, select Time-based One-Time password in the Type of Two-Factor Auth drop-down list.
• Click Reset Secret to display a QR code. Scan the code with your Google Authenticator.
• When logging into the Web Console, click Get Code and input your one-time password provided by Google
Authenticator.

Note: the session timeout is 10 minutes by default. Thus, a Confirmation code is valid for 10 minutes for a certain IP address. Once this time has elapsed, you need to request a new Confirmation code. You can configure the 2FA session timeout by changing the TfaLinksValidationTimeout parameter's value (see Additional Parameters on page 337)

4.5 Monitoring
For viewing statistical information about DataSunrise operations, you can use the Monitoring section.

4.5.1 Viewing System Information


To display a list of system events, go to Monitoring → System Events link in the left pane. System Events include:
• DataSunrise configuration changes
• Successful and denied attempts of authentication to database through DataSunrise
• DataSunrise Core events
• DataSunrise Backend events
• Database metadata changes
To display a list of system events, perform the following:
1. Click Filter and select required filters in the Filter tab: click +add column and select a filter from the drop-down
list.
2. In the Column name, Filter and Value drop-down lists, specify parameters of events to be displayed. You can
filter events by importance level of an event (Level value), culprit of an event (Types value), contents of an event
message (Message value), and reporting time frame (To, From drop-down lists or the calendar icon).
3. After setting the filters, click Apply to apply the settings and display the required events.

4.5.2 Diagrams of Internal Characteristics of DataSunrise


To display graphs showing variations of DataSunrise’s internal characteristics, go to the Monitoring → Performance
and select a required parameter on the left panel. Below is the example of a graph of database traffic and
description of available characteristics.

Figure 19: Curves showing a variation of database traffic flow

• You can click the icon or the name of a graph to switch off a selected graph.
Below is the list of available characteristics. To display a graph, select a required parameter from the left panel and
specify the graph update speed and the server to view the information on.

Graph name Description


Query Recognizer subsection
Antlr Pool Size Reserved ANTLR pool size, bytes
Antlr Pool Used Used ANTLR pool size, bytes
Audit subsection
Processing Speed Processing speed of audit messages, messages/sec
Audit Queue Length Audit queue length
Traffic subsection
MH Server Traffic Throughput Message Handler Server throughput
MH Client Traffic Throughput Message Handler Client throughput
From DB to Client Traffic of queries from database to a client, bytes/sec
From Client to DB Traffic of queries from client to a database, bytes/sec
DB Operations Number of database operations
DB Executions Number of database executions
Free Space subsection
Audit Available space on the disk where audit.db files are stored, bytes
Logs Available space on the disk where DataSunrise logs are stored, bytes
Memory subsection
Core Virtual Memory Usage Memory usage of DataSunrise Core, bytes
Traffic Buffers subsection
Traffic Buffer Pool Balance Number of busy internal traffic buffers
Traffic Buffer Pool Free Objects Number of available internal traffic buffers
Queues subsection
Audit Queue Length Queue length of the audit data handler
Message Handlers Global Queue Length Global queue length of the Message Handler
Message Handlers Local Queue Length Local queue length of the Message Handler
Message Handlers OutOfOrder Queue Length Out-of-order queue length of the Message Handler
Query Cache Rate subsection
Rate Query cache rate
Audit Storage Info subsection
Interval in minutes between readings Frequency of Audit Storage reading

4.5.3 DataSunrise Throughput Reports


Throughput reports display traffic flow between a certain client and a certain database. To create a new throughput
diagram, perform the following:
1. Click Graph+ to create a new throughput diagram.
2. Enter the required information into the Specify a Client and a Database to Show Throughput between them
window.
3. Enter the required information into the Show Graphs subsection.
Interface element Description
Number of Operations/Sec check box Display a number of performed operations per second
Number of Executions/Sec check box Display a number of executed queries per second
Number of Sessions check box Display a number of sessions

4. Enter the required information into the Time Period subsection.


Interface element Description
Begin drop-down list Initial date of the reporting time frame
End drop-down list End date of the reporting time frame

5. Enter the required information into the Throughput From Client subsection.
Interface element Description
Host drop-down list Select a degree of conformity between an IP address specified in
the Host text field (see below) and the real IP addresses.
Host text field IP address client queries were sent from
Port text field Client application's port number
Login drop-down list Select a degree of conformity between a user name specified in
the Login text field (see below) and the real DB user name.
Login text field Database user name
Application drop-down list Select a degree of conformity between a client application name specified in the Application text field (see below) and the real client application name.
Application text field Client application name

6. Enter the required information into the Throughput to the Database subsection.
Interface element Description
Instance drop-down list Database instance
Interface drop-down list Database network interface
Proxy/Sniffer drop-down list DataSunrise proxy or sniffer used to process database traffic
Schema text field Database schema

7. When you're done with entering the required information, click Show Lines to create a diagram.
8. Click Clear Diagram to delete an existing diagram.

5 Database Configurations
This section of the User Guide contains database-related instructions such as:
• Creating a target database profile in the Web Console
• Creating target database users required for establishing a connection between DataSunrise and the target
database
• Proxy configuring
• Encrypted traffic processing
• Configuring Two-factor authentication (2FA) in a target database
• Creating database user profiles
• SSL Key Groups
• Database Encryption functionality

5.1 Databases
5.1.1 Creating a Target Database Profile
To be able to work with a target database, DataSunrise needs to be aware of the database it should protect.
Thus, it needs a target database profile to be created in DataSunrise's Configuration. This is the first thing you should
do before creating any Rules and establishing protection. To create a profile of a target database, do the following:

Note: if you need to create a target database profile of a MySQL version lower than 8, set TLSv1,TLSv1.1 as the value
of the MySQLConnectorAllowedTLSVersions additional parameter (Additional Parameters on page 337).

1. Click Databases. A list of existing database profiles will be displayed.


2. Click Add Database.
3. Input information about the target database according to the table below:

UI element Description
Logical Name text field Profile's logical name (it is used by DataSunrise as a
reference to the database)
Database Type drop-down list Target database type
Hostname/IP text field Target database's address (hostname or IP address)
Port text field Database's port number
Authentication Method drop-down list User authentication type (regular login/password or Active
Directory user authentication)
Instance text field (for Oracle database only) Oracle service name or SID
Default Login text field Database user name DataSunrise should use to connect to
the target database
Save Password drop-down list Method of saving the target database's password:
• No
• Save in DataSunrise
• Retrieve from CyberArk. In this case you should specify
CyberArk's Safe, Folder and Object to store the password
in (fill in the corresponding fields)
• Retrieve from AWS Secrets Manager. In this case you
should specify AWS Secrets Manager ID
• Retrieve from Azure Key Vault. You should specify Secret
Name and Azure Key Vault name to use this feature

Password text field Database user password that DataSunrise should use to
connect to the database

Important: DataSunrise needs user credentials only to get


metadata from the target database

Database text field (for all DB types except Oracle Name of the target DB. Required to get metadata from the
and MySQL) database
Encryption drop-down list (for Oracle only) Encryption method:
• No: no encryption
• SSL

Instance Type drop-down list (for Oracle only) A method which DataSunrise should use to connect to the
database:
• SID: using SID
• Service Name: using an Oracle service name
• You can specify multiple Service Names when
configuring an Instance. This enables you to add Primary
and Standby RAC clusters with different Service Names
to a single DataSunrise Instance. To add several Service
Names, separate them with a semicolon:

report_svc;oltp_svc

Advanced Settings
Kerberos Service Name field Service name for Kerberos-based connections

Custom Connection String field Specify a custom connection string for the database connection. An example connection string is shown after this table.

Important: The ODBC connection string should be used for all databases except Oracle, MySQL-based and PostgreSQL-based databases. For PG-based databases the LibPQ driver is used by default, and for MySQL-based databases the MySQL Connector is used by default. To switch the MySQL and PostgreSQL drivers to ODBC, disable the MySQLConnectorEnable and LibPQEnable options in the Additional Parameters (refer to subs. Additional Parameters on page 337). For examples of connection strings, refer to the following web site: https://www.connectionstrings.com/

Dynamic SQL Processing check box Enable processing of Dynamic SQL (see Dynamic SQL
Processing on page 20)
Environment Name field A dedicated database or schema used for employing some
masking methods while doing Dynamic or Static masking
(see Data Masking on page 164)

Automatically Create Environment check box Create an Environment automatically (see the entry above)
IP Version drop-down list IP protocol version to use for connection:
• Auto: define automatically
• IPv 4
• IPv 6

Note: for MongoDB, you should always select either IPv4 or IPv6, not Auto

Database keys drop-down list SSL Key Group that contains required keys for the database
(SSL Key Groups on page 106). Required for establishing
an SSL connection between the DataSunrise's proxy and the
target database.
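
For illustration only, a custom ODBC connection string for an MS SQL Server database might look like the following; the driver name, host, port and database name are example values, and the exact format for your database type can be checked at the connectionstrings.com web site mentioned above:

Driver={ODBC Driver 17 for SQL Server};Server=192.168.1.50,1433;Database=SalesDB;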

4. Click Test to check the connection between the target database and DataSunrise.
5. Specify a method of interaction between DataSunrise and the target database in the Capture Mode subsection:

UI element Description
Server drop-down list Select DataSunrise server (DS Instance) to open a proxy
or a sniffer on
Action drop-down list Select an operating mode DataSunrise should employ
to process requests to the target database (refer to subs.
DataSunrise Operation Modes on page 18):
• Proxy: Proxy Mode on page 19
• Sniffer: Sniffer Mode on page 18

Network Adapter drop-down list (for Sniffer mode Network controller DataSunrise should use to connect
only) to the target DB
IP Address drop-down list (for Proxy mode only) IP address of the proxy
Port text field (for Proxy mode only) Number of a network port DataSunrise should be
listening to
Accept Only SSL Connections check box (for Proxy Check to disable unencrypted connections
mode only)

6. Click Save to save the target DB profile.

5.1.2 Editing a Target Database Profile


To edit an existing database profile, do the following:

1. Click Databases in the main menu. A list of existing database profiles will be displayed.
2. Click profile name of a required database in the list.
3. Click Add Interface to add a new database interface and specify hostname, port number, database keys and IP
version (also SID or service name for Oracle Database). Click Save to apply new settings.
4. To add a new proxy for the target database, do the following:
a) Click Add Proxy.
b) Select a network interface for the database in the Interface drop-down list.
c) Select a DataSunrise server (node) to open a proxy on, in the Server drop-down list.
d) Select new database host in the Host drop-down list.
e) Specify proxy keys if necessary. The keys are needed to establish an SSL connection
f) Specify a port number for the DataSunrise proxy in the Port text field.
g) Check the Enabled check box to activate the proxy.
h) Click Save to apply new settings.
5. To add a new sniffer to the database, do the following:
a) Click Add Sniffer.
b) Select a required network interface in the Instance Interface drop-down list. An Interface has an IP address
and a port number on which a target server is listening. DataSunrise opens a proxy or a sniffer on an interface.
A database instance can include several interfaces.
c) Select a DataSunrise server (node) to open a sniffer on, in the Server drop-down list.
d) Select a required network device (network adapter) in the Device drop-down list.
e) Specify Sniffer keys if necessary. Sniffer key is a database server's private SSL key. It is used to decrypt the
traffic flowing between the client and the database.
f) Check the Enabled check box to activate a current sniffer.
g) Click Save to apply new settings.

Note: If a database server, database client and the firewall are installed on the same Windows-powered local
machine, the DataSunrise sniffer would not be able to capture network traffic.

6. Click Actions → Update Metadata to update the database's metadata.


Database's metadata contains information about database structure, tables' properties, etc. DataSunrise creates
database metadata copy in the dictionary.db file which is located within the program installation folder, and
uses it for processing of the database's traffic. DataSunrise keeps metadata copy up to date automatically, but
if some serious error occurs or the database has been updated directly (bypassing DataSunrise), you should
update the metadata manually by clicking Update Metadata. If you often update your target database directly, we recommend configuring a Periodic Metadata Update task and running it as often as you update the database (refer to subs. Update Metadata on page 224).
7. Click Actions → Test Connections to test a connection between DataSunrise server and the target database.
Enter required credentials in the Connection to Database window.

5.1.3 Displaying Database Properties


You can view properties of your target database which is useful for advanced users. To view the properties, do the
following:

1. Open your database profile to access its settings.


2. In the Actions drop-down list, select Show Properties. You will be redirected to a new tab.
3. Here you can see a table with database properties. Use the Property Type drop-down list to select various types of properties:
Property Type Description
Instance Properties Properties of your target database instance (database profile)
Database Properties Properties of databases included into your database instance
DBUsers DBLevel Properties Properties of users of various levels
DBUsers Properties Database users properties

5.1.4 Creating an MS SQL Sniffer


MS SQL 2005+ performs user authentication using SSL even if encryption (the Encrypt check box) is disabled.
Depending on the client application used, this dialogue can include three steps:
• The client sends the server a request for connection and sends the needSSL flag
• The server sends the client a response to the request for connection and sends the same needSSL flag.
• The client performs authentication according to the server’s response: if the server enables its needSSL flag, then
the SSL authentication process is performed, and if the needSSL is disabled, then authentication is performed
without SSL.
There are three options available for needSSL:
• No SSL is used
• SSL is used at the authentication stage only
• SSL is used for the complete connection:
The server can reject client’s request for SSL if the server doesn’t support SSL.
The server can force the client to enable SSL if forceSSL is enabled at the server’s side.
To set up a sniffer for an existing SQL Server database, do the following:
1. Create an SSL Key Group (refer to SSL Key Groups on page 106). Input a Private Key. Leave all other fields
empty
2. Create an SQL Server database profile or use an existing one and create a sniffer there (refer to Creating a Target Database Profile on page 58). Attach the SSL Key Group you created to your sniffer: open the Sniffer's settings and in the Sniffer Keys drop-down list select your SSL Key Group.

5.1.5 Troubleshooting Connection Failure


In case the connection between DataSunrise and the target database fails, perform the following:

1. Check the state of proxies using the DataSunrise's Web Console.


- Open DataSunrise's Web Console and navigate to Configuration → Databases.
- Check the target database and click Actions drop-down list.
- Click Test Connection.
- Click Test All.
If the status of all ports is OK, go to the next step.
2. Scan the host with Telnet Client.
Telnet (Terminal Network) is a network protocol that provides a command-line interface for communicating with a remote device.
Since Windows Vista, the Telnet client is no longer enabled by default on Windows operating systems.
- To enable Telnet client, run the command prompt with administrative privileges and execute the following
command:

dism /online /Enable-Feature /FeatureName:TelnetClient

- Wait until the operation is finished. You will have to restart your computer in order to implement the system
changes.
- Find Telnet application using the Windows search tool on your computer and run it. Use the o command with
the required hostname and port number as shown below:

o 192.168.1.71 3306

If the Telnet client cannot connect to the host, the issue is caused by your computer or network, not by DataSunrise. If the specified hostnames and port numbers are correct, check your network firewall or other conflicting security software that can block the network traffic.
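
On recent Windows versions you can also check the port without installing the Telnet client, for example with the PowerShell Test-NetConnection cmdlet (the host and port below are the same example values as above):

Test-NetConnection -ComputerName 192.168.1.71 -Port 3306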

5.2 Creating Database Users Required for Getting the Database's Metadata
DataSunrise interacts with a target database and receives all information required for operation through a user account of this database (the account, user name and password are specified in the target database profile in the Web Console). You can use the database administrator's account for the connection, but it is also possible to use any other user account with sufficient privileges.
This section describes the actions required to establish a connection between DataSunrise and various databases.

5.2.1 Creating an Oracle Database User


1. Connect to the Oracle target database using the SYS user account.
2. To create a new user, perform the following depending on your Oracle database version:
• Oracle 11g Release 2 or earlier:

CREATE USER <User_name> IDENTIFIED BY <Password>;

Having created a new user, grant the following privileges to the user:

Note: to provide these grants, connect locally as SYSDBA.

GRANT CONNECT TO <User_name>;


GRANT CREATE ANY TABLE TO <User_name>;
GRANT SELECT ON "SYS"."DBA_OBJECTS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_OBJECT_TABLES" to <User_name>;
GRANT SELECT ON "SYS"."DBA_TAB_COLUMNS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_TABLES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_TAB_COLS" to <User_name>;
GRANT SELECT ON "SYS"."DBA_NESTED_TABLES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_SYNONYMS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_USERS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_PROCEDURES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_TYPES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_TYPE_ATTRS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_DEPENDENCIES" TO <User_name>;
GRANT SELECT ON "SYS"."COLLECTION$" TO <User_name>;
GRANT SELECT ON "SYS"."V_$SERVICES" TO <User_name>;
GRANT SELECT ON "SYS"."V_$INSTANCE" TO <User_name>;
GRANT SELECT ON "SYS"."V_$DATABASE" TO <User_name>;
GRANT SELECT ON "SYS"."GV_$INSTANCE" TO <User_name>;
GRANT SELECT ON "SYS"."OBJ$" TO <User_name>;
GRANT SELECT ON "SYS"."COL$" TO <User_name>;
GRANT SELECT ON "SYS"."USER$" TO <User_name>;
GRANT SELECT ON "SYS"."COLTYPE$" TO <User_name>;
GRANT SELECT ON "SYS"."HIST_HEAD$" TO <User_name>;
GRANT SELECT ON "SYS"."TAB$" TO <User_name>;
GRANT SELECT ON "SYS"."DATABASE_PROPERTIES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_EDITIONS" TO <User_name>;
GRANT SELECT ON "SYS"."_CURRENT_EDITION_OBJ" to <User_name>;
GRANT SELECT ON "SYS"."V_$PARAMETER" TO <User_name>;


Note: For Dynamic masking of VIEWs, to get VIEW-related metadata, grant the following privileges:

GRANT CREATE ANY TABLE TO <User_name>;


GRANT CREATE ANY PROCEDURE TO <User_name>;
GRANT CREATE ANY VIEW TO <User_name>;
GRANT DROP ANY VIEW TO <User_name>;


Note: For Dynamic masking of functions, grant the following privileges:

GRANT DROP ANY PROCEDURE TO <User_name>;


GRANT CREATE ANY PROCEDURE TO <User_name>;

Note: to use the Hide Rows masking method and Data Preview, grant the privilege shown below. Note that
this privilege should be issued manually:

GRANT SELECT ANY TABLE TO <User_name>;

Note: if you use a container-based Oracle 12+ and your instance is in CDB (all containers), use the c## prefix
for your User_name. For example:

CREATE USER c##myuser IDENTIFIED BY mypassword;

More on Oracle common users: https://docs.oracle.com/database/121/ADMQS/GUID-DA54EBE5-43EF-4B09-B8CC-FAABA335FBB8.htm

• For container databases, additionally you need the following grants:

GRANT SELECT ON "SYS"."CDB_OBJECTS_AE" TO <User_name>;


GRANT SELECT ON "SYS"."CDB_EDITIONS" TO <User_name>;
GRANT SELECT ON "SYS"."CDB_PROPERTIES" TO <User_name>;
GRANT SELECT ON "SYS"."CDB_PLSQL_TYPES" TO <User_name>;
GRANT SELECT ON "SYS"."CDB_PLSQL_TYPE_ATTRS" TO <User_name>;

• For non-container databases or container databases where your Instance is located in a separate container,
not cdb$root:

GRANT SELECT ON "SYS"."DBA_OBJECTS_AE" TO <User_name>;

• Additional grants on all editions should be set in OracleDefaultEdition:

GRANT USE ON EDITION <Edition1 name> TO <User_name>;


GRANT USE ON EDITION <Edition2 name> TO <User_name>;
...
GRANT USE ON EDITION <EditionN name> TO <User_name>;

• Oracle 12c and later versions:

Note: If you're using Oracle 12c in the non-Multitenant mode, please refer to the GRANT list for Oracle 11g
provided above.

Starting from this version of Oracle Database, there is a possibility to create a user to get metadata either
from a particular container or from all containers at once.
To create a user for all containers (global user), execute the following queries:

ALTER SESSION SET CONTAINER = CDB$ROOT;


CREATE USER C##<User_name> IDENTIFIED BY <Password>;
GRANT CONNECT TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."V_$INSTANCE" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."V_$SERVICES" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."V_$DATABASE" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."V_$PDBS" to C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."DBA_OBJECTS" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."DBA_OBJECT_TABLES" to C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."DBA_TAB_COLS" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."CDB_USERS" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."CDB_OBJECTS" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."CDB_OBJECT_TABLES" to C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."CDB_TAB_COLUMNS" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."CDB_SYNONYMS" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."CDB_NESTED_TABLES" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."CDB_PROCEDURES" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."CDB_TYPES" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."CDB_TYPE_ATTRS" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."CDB_DEPENDENCIES" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."CDB_TABLES" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."COLLECTION$" TO C##<User_name> CONTAINER=ALL;
GRANT CREATE TABLE TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."OBJ$" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."COL$" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."USER$" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."COLTYPE$" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."HIST_HEAD$" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."TAB$" TO C##<User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."DATABASE_PROPERTIES" TO <User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."DBA_EDITIONS" TO <User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."_CURRENT_EDITION_OBJ" to <User_name> CONTAINER=ALL;
GRANT SELECT ON "SYS"."V_$PARAMETER" TO <User_name> CONTAINER=ALL;

To create a user for a particular container, execute the following queries:

ALTER SESSION SET CONTAINER = <Container_name>;


CREATE USER <User_name> IDENTIFIED BY <Password>;
GRANT CONNECT TO <User_name>;
GRANT SELECT ON "SYS"."V_$INSTANCE" TO <User_name>;
GRANT SELECT ON "SYS"."V_$SERVICES" TO <User_name>;
GRANT SELECT ON "SYS"."V_$DATABASE" TO <User_name>;
GRANT SELECT ON "SYS"."V_$PDBS" to <User_name>;
GRANT SELECT ON "SYS"."DBA_OBJECTS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_OBJECT_TABLES" to <User_name>;
GRANT SELECT ON "SYS"."DBA_TAB_COLS" TO <User_name>;
GRANT SELECT ON "SYS"."CDB_USERS" TO <User_name>;
GRANT SELECT ON "SYS"."CDB_OBJECTS" TO <User_name>;
GRANT SELECT ON "SYS"."CDB_OBJECT_TABLES" to <User_name>;
GRANT SELECT ON "SYS"."CDB_TAB_COLUMNS" TO <User_name>;
GRANT SELECT ON "SYS"."CDB_SYNONYMS" TO <User_name>;
GRANT SELECT ON "SYS"."CDB_NESTED_TABLES" TO <User_name>;
GRANT SELECT ON "SYS"."CDB_PROCEDURES" TO <User_name>;
GRANT SELECT ON "SYS"."CDB_TYPES" TO <User_name>;
GRANT SELECT ON "SYS"."CDB_TYPE_ATTRS" TO <User_name>;
GRANT SELECT ON "SYS"."CDB_DEPENDENCIES" TO <User_name>;
GRANT SELECT ON "SYS"."CDB_TABLES" TO <User_name>;
GRANT SELECT ON "SYS"."COLLECTION$" TO <User_name>;
GRANT CREATE TABLE TO <User_name>;
GRANT SELECT ON "SYS"."OBJ$" TO <User_name>;
GRANT SELECT ON "SYS"."COL$" TO <User_name>;
GRANT SELECT ON "SYS"."USER$" TO <User_name>;
GRANT SELECT ON "SYS"."COLTYPE$" TO <User_name>;
GRANT SELECT ON "SYS"."HIST_HEAD$" TO <User_name>;
GRANT SELECT ON "SYS"."TAB$" TO <User_name>;
GRANT SELECT ON "SYS"."DATABASE_PROPERTIES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_EDITIONS" TO <User_name>;
GRANT SELECT ON "SYS"."_CURRENT_EDITION_OBJ" to <User_name>;
GRANT SELECT ON "SYS"."V_$PARAMETER" TO <User_name>;

Warning: In most cases, it is preferable to use a common user for establishing connections with your target
databases because if you use a user created for one container, DataSunrise will not be able to work with other
containers.

• For Oracle 12.1+ users created for a particular container or without a container, additionally grant the
following privileges:

GRANT SELECT ON DBA_PLSQL_TYPES TO <User_name>;


GRANT SELECT ON DBA_PLSQL_TYPE_ATTRS TO <User_name>;
• If you are going to use the UseMetadataViewDDL and UseMetadataFunctionDDL additional parameters
(Additional Parameters on page 337), it is necessary to provide your user with the following privilege:

Oracle 12+:
GRANT SELECT_CATALOG_ROLE TO C##<User_name>;
GRANT SELECT ON "SYS"."DBA_VIEWS" TO C##<User_name>;

Oracle 11:
GRANT SELECT_CATALOG_ROLE TO <User_name>;
GRANT SELECT ON "SYS"."DBA_VIEWS" TO <User_name>;

• For Oracle Real Application Cluster (RAC), grant the following privilege:

GRANT SELECT ON "SYS"."GV_$INSTANCE" TO C##<User_name>;


Important: if you're not allowed to grant the CREATE TABLE privilege but want to use UseMetadataViewDDL,
you need to create a temporary table to be used for downloading metadata. To do this, execute the following
query:

CREATE GLOBAL TEMPORARY TABLE <User_name>.TEMP_DBA_VIEWS_V4 (OWNER VARCHAR2(30) NOT NULL,
VIEW_NAME VARCHAR2(30) NOT NULL, TEXT CLOB) ON COMMIT PRESERVE ROWS

• If you're going to use Oracle native encryption (the EnableOracleNativeEncryption additional parameter),
provide your user with the following grant:

GRANT SELECT ON "SYS"."USER$" TO C##<User_name>;

5.2.2 Creating an AWS RDS Oracle Database User


1. Connect to the RDS Oracle target database using the admin user account.
2.
Note: use uppercase to define all parameter values, unless you created a user with a case-sensitive identifier.

To create a new user, execute the following queries:


• Oracle 18 and Oracle 19:

CREATE USER <User_name> IDENTIFIED BY <Password>;


GRANT CREATE SESSION TO <User_name>;
GRANT select_catalog_role TO <User_name>;
GRANT CREATE TABLE TO <User_name>;
GRANT CREATE USER TO <User_name>;
GRANT CREATE PROCEDURE TO <User_name>;
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'AUD$',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;

Note that if you input User_name without quotes, the user name will be stored in upper case in Oracle's table of users. If you need your User_name to be stored in lower case, use double quotes: "User_name".
If you need to use Delete Processed Logs, grant the following privilege:

begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'AUD$',
p_grantee => '<User_name>',
p_privilege => 'DELETE');
end;

3. Grant your user the required privileges:

begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'OBJ$',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;

begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'COL$',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;

begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'USER$',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;

begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'COLTYPE$',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;

begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'HIST_HEAD$',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;

begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'TAB$',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;

begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'COLLECTION$',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'DBA_EDITIONS',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'CDB_PROPERTIES',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;
4. For downloading VIEW metadata (useMetadataViewDdl), you need the following grants:

begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'DBA_VIEWS',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;

5. For Dynamic masking of VIEWs, you need the following grants:

GRANT CREATE ANY VIEW TO <User_name>;


GRANT DROP ANY VIEW TO <User_name>;

6. For Dynamic masking of functions, you need the following grants:

GRANT DROP ANY PROCEDURE TO <User_name>;


GRANT CREATE ANY PROCEDURE TO <User_name>;

7. If you're going to use the Hide Rows masking method and Data Preview, you need the following grant:

GRANT SELECT ANY TABLE TO <User_name>;

5.2.3 Creating a PostgreSQL/Aurora PostgreSQL Database User
1. To create a PostgreSQL/Aurora PostgreSQL user, execute the following command:

CREATE USER <User_name> WITH PASSWORD '<Password>';

Note: The user should be able to get information about the database structure from the following system tables:
• pg_database
• pg_namespace
• pg_class
• pg_catalog
• pg_attribute
• pg_user
• pg_settings
• pg_db_role_setting

To do this, execute the following query:

GRANT SELECT ON
pg_catalog.pg_database,
pg_catalog.pg_namespace,
pg_catalog.pg_class,
pg_catalog.pg_attribute,
pg_catalog.pg_user,
pg_catalog.pg_settings,
pg_catalog.pg_db_role_setting
TO <User_name>;

2. For Dynamic masking of VIEWs, functions, "Hide Rows", and Data Preview, grant your user the following
privileges:
• PostgreSQL 14+:

GRANT pg_read_all_data TO <User_name>;

• PostgreSQL lower than 14:

ALTER USER <User_name> WITH SUPERUSER;

• Aurora PostgreSQL lower than 14:

GRANT rds_superuser TO <User_name>;

5.2.4 Creating a Netezza Database User


1. To create a new Netezza user, execute the following command:

CREATE USER <User_name> WITH PASSWORD '<Password>';

2. Grant all required privileges to the user. Connect to the SYSTEM database and execute the corresponding SQL
query:
• For Netezza 6.X:

GRANT LIST ON AGGREGATE, DATABASE, EXTERNAL TABLE, FUNCTION, GROUP, MANAGEMENT TABLE, MANAGEMENT
VIEW, PROCEDURE, SEQUENCE, SYNONYM, SYSTEM TABLE, SYSTEM VIEW, TABLE, USER, VIEW to <User_name>;

• For Netezza 7.X:

GRANT LIST ON AGGREGATE, DATABASE, EXTERNAL TABLE, FUNCTION, GROUP, MANAGEMENT TABLE, MANAGEMENT
VIEW, PROCEDURE, SCHEMA, SEQUENCE, SYNONYM, SYSTEM TABLE, SYSTEM VIEW, TABLE, USER, VIEW to
<User_name>;

5.2.5 Creating a MySQL/MariaDB/Aurora MySQL Database User (main method)
To create a new MySQL/MariaDB/Aurora MySQL user, do the following:
1. Execute the following query in MySQL's client to create a new user:

CREATE USER <User_name> IDENTIFIED BY '<Password>';

2. Grant the required privileges to the user:

GRANT SELECT ON <Database_name>.* TO <User_name>;


FLUSH PRIVILEGES;

Note: This method has a serious drawback. Granting the SELECT privilege to a user means that this user will be
able not only to get metadata but the database contents as well. Thus if it is not acceptable, use the alternative
method described below.
To use dynamic SQL processing, grant the following privileges:

GRANT SELECT, CREATE, DROP, INSERT, EXECUTE, CREATE ROUTINE, ALTER ROUTINE ON
`<Unique_Instance_ID>`.* TO <User_name>@'%';

You can find unique_instance_id in the Core logs when you try to mask dynamic SQL for the first time.

5.2.6 Creating a MySQL/MariaDB Database User (alternative method)
Use this method if you don't want to grant the excessive privileges that the first method described above requires. As a result, you will get a user with the minimum possible privileges.
To execute the script automatically:
• Add MySQL instance in Configuration → Databases → Add Database. Enter the credentials of a user having
sufficient privileges for SELECT from all tables and rights required for creating procedures and functions (root
user for example) and click Test.
• After successful test connection, select Via Stored Procedures in the Metadata Retrieval Method drop-down
list and click on Create stored procedures on database to generate the script.
• Click Execution to automatically execute the script's contents.
• After that, click Create user and enter the credentials. Select Via Stored Procedures in the Metadata Retrieval
Method drop-down list and click Create. Click Apply this user to change a user in the instance.
To execute the script manually:
• Connect to your MySQL database as a user having sufficient privileges for SELECT from all tables and rights
required for creating procedures and functions (root user for example).
• Generate the script in Configuration → Databases → Add Database. Choose MySQL or MariaDB in Database
Type. Select Via Stored Procedures in the Metadata Retrieval Method drop-down list and click on Create
stored procedures on database.
• Copy the script's contents and execute it in your MySQL client manually.
• Create a new MySQL user and grant this user the following privileges:

CREATE USER <User_name> IDENTIFIED BY '<Password>';


GRANT EXECUTE ON `dsproc_%`.* TO <User_name>;
FLUSH PRIVILEGES;

• Now you can create a new database profile in Configuration → Databases. Use details of the MySQL database
you've installed the procedure to earlier and the credentials of the newly created user. Select Via Stored
Procedures in the Metadata Retrieval Method drop-down list.

Note: For masking of procedures and functions and dynamic SQL processing, you need to grant your user the
following privilege (for both the manual and automatic cases):

GRANT CREATE ROUTINE ON *.* TO <User_name>@'%';

5.2.7 Creating a Greenplum Database User


To create a new Greenplum user, execute the following query:

CREATE USER <User_name> WITH PASSWORD '<Password>';


Execute the following query to provide the user with necessary privileges:

GRANT USAGE ON SCHEMA <Target_schema_name> TO <User_name>;

or

GRANT ALL ON SCHEMA <Target_schema_name> TO <User_name>;

5.2.8 Creating a Teradata Database User


1. To create a new Teradata user, execute the following query:

CREATE USER "<User_name>"
AS
PERM = 0
PASSWORD = "<Password>";

2. Grant the required privileges to the new user by executing the following query:

GRANT SELECT
ON "<Target_database_name>"
TO "<User_name>";

5.2.9 Creating an SAP HANA Database User


1. To create a new SAP HANA user, execute the following command:

CREATE USER <User_name> PASSWORD "<Password>" NO FORCE_FIRST_PASSWORD_CHANGE;

2. Providing the required privileges includes two stages: first, a role should receive privileges to access the schema's objects, and then the role is assigned to a user. To grant the required privileges, execute the following queries:

-- should be executed by SYSTEM
GRANT SAP_INTERNAL_HANA_SUPPORT TO <User_name>;

-- should be executed by <User_name>
GRANT SELECT ON SCHEMA <Schema_name> TO SYSTEM;


Note: this role includes privileges for read-only access to all metadata, the current system status, and the data of
the statistics server. Additionally, it includes the privileges for accessing low-level internal system views. Without
the SAP_INTERNAL_HANA_SUPPORT role this information can be selected only by the SYSTEM user.
It can only be granted to a limited number of users at the same time.
The maximum number of users the role can be granted to can be configured with the
"internal_support_user_limit" parameter in the "Authorization" section of the "indexserver.ini" configuration file.
The default value is 1.

5.2.10 Creating a Redshift Database User


Note: if you need to configure a cluster, you can refer to Connecting to an Amazon Redshift Database Using IAM
Authentication on page 86
1. To create a new Redshift user, execute the following command:

CREATE USER <User_name> PASSWORD '<Password>';

2. Grant required privileges to your user by executing the following query:

alter default privileges in schema <Schema_name> grant select on tables to <User_name>;

To be able to select Redshift External Tables and Schemas in DataSunrise, grant the following privilege:

grant usage on schema <External_schema_name> to <User_name>;

5.2.11 Creating a Vertica Database User


1. To create a new Vertica user, execute the following command:

CREATE USER <User_name> IDENTIFIED BY '<Password>';

2. Define an authentication type you will use:

GRANT AUTHENTICATION <Authentication_method_name> TO <User_name>;

3. Grant USAGE for all schemas:

GRANT USAGE ON SCHEMA <Schema_name> TO <User_name>;

4. Grant SELECTs:

GRANT SELECT ON ALL TABLES IN SCHEMA <Schema_name> TO <User_name>;


GRANT SELECT on V_INTERNAL.vs_users TO <User_name>;

Note: You can execute the following queries, which generate the GRANT statements for all required schemas:

select 'GRANT USAGE ON SCHEMA ' || schema_name || ' to <User_name>;' from v_catalog.schemata WHERE is_system_schema = false;

select 'GRANT ALL ON ALL TABLES IN SCHEMA ' || schema_name || ' to <User_name>;' from v_catalog.schemata WHERE is_system_schema = false;

5.2.12 Creating a DB2 Database User


To make DataSunrise work correctly with a DB2 database, it is necessary to provide your DB2 user with the privileges
required to select data from the following system views:
• syscat.schemata
• syscat.procedures
• syscat.functions
• syscat.tables
• syscat.columns
• syscat.sequences
• syscat.packages
1. In the operating system of the machine your DB2 is installed on, create an OS user to use as your DB2 user. DB2 will create a database user based on the OS user.
2. Grant the necessary user privileges by executing the query below. The query returns a GRANT statement for each table of the schema (DB2 can't grant the required rights for a complete schema at once). Execute the query, then copy the generated statements to your DB2 SQL Editor and execute them as a complete script.

SELECT DISTINCT
'GRANT Select ON TABLE '
|| rtrim (tabschema)
|| '.'
|| rtrim (tabname)
|| ' TO USER <User_name>;'
FROM syscat.tables
WHERE
tabschema = '<Schema_name>' AND
tabschema not like 'SYS%' AND
tabschema not IN ('SQLJ', 'NULLID')

5.2.13 Creating a Sybase Database User


To enable your Sybase user to get the target database's metadata, do the following:
1. Create Login by executing the following commands: https://www.datasunrise.com/doc/sybase_create_login.sql
2. Create User associated with your Login using the following script: https://www.datasunrise.com/doc/sybase_create_user.sql
3. To delete your User and Login, remove User first then Login. To remove your User, execute the following script: https://www.datasunrise.com/doc/sybase_delete_user.sql
Then you can delete your Login by executing the following commands: https://www.datasunrise.com/doc/sybase_delete_login.sql

5.2.14 Creating a MongoDB Database User


1. To create a new MongoDB user with minimum possible privileges, create a new Role:

Note: it's desirable to assign a "root" role. For other roles, some of the functionality (such as importing database
users to DS) will be unavailable.

use admin
db.createRole(
{
role: "<Role_name>",
privileges: [
{ resource: {cluster: true}, actions: [ "inprog" ] },
],
roles: [
{ role: "read", db: "admin" }
]
}
)

2. Create a new database user and assign the role you created before to the user:

use admin
db.createUser(
{
user: "<User_name>",
pwd: "<Password>",
roles: [ { role: "<Role_name>", db: "admin" } ]
}
)

5.2.15 Creating a Snowflake Database User


To make DataSunrise work correctly with a Snowflake database, do the following:
1. Create a new user and grant the user the required privileges:

USE ROLE accountadmin;


CREATE USER <User_name>;
ALTER USER <User_name> SET login_name='<User_name>';
ALTER USER <User_name> SET password='<password>';
CREATE ROLE <Role_name>;
ALTER USER <User_name> SET default_role=<Role_name>;
GRANT ROLE <Role_name> TO USER <User_name>;
GRANT OPERATE ON WAREHOUSE <Warehouse_name> TO ROLE <Role_name>;
GRANT USAGE ON WAREHOUSE <Warehouse_name> TO ROLE <Role_name>;
GRANT MONITOR ON WAREHOUSE <Warehouse_name> TO ROLE <Role_name>;
ALTER USER <User_name> SET default_warehouse=<Warehouse_name>;

2. Grant your user the following privileges for each database you want to see when setting up DataSunrise rules:

GRANT USAGE ON DATABASE <Database_name> TO ROLE <Role_name>;


GRANT USAGE ON SCHEMA <Database_name>.<Schema_name> TO ROLE <Role_name>;
GRANT SELECT ON ALL TABLES IN SCHEMA <Database_name>.<Schema_name> TO ROLE <Role_name>;
GRANT SELECT ON ALL TABLES IN DATABASE <Database_name> TO ROLE <Role_name>;
GRANT USAGE ON FUTURE SCHEMAS IN DATABASE <Database_name> TO ROLE <Role_name>;
GRANT SELECT ON FUTURE TABLES IN DATABASE <Database_name> TO ROLE <Role_name>;

GRANT USAGE ON PROCEDURE <Database_name>.<Schema_name>.<Procedure_name>() TO ROLE <Role_name>;


GRANT USAGE ON FUNCTION <Database_name>.<Schema_name>.<Procedure_name>() TO ROLE <Role_name>;

3. Grant the following privileges to enable user fetching from your database:

GRANT MANAGE GRANTS ON ACCOUNT TO ROLE <Role_name>;

You can revoke the privilege by executing the following query:

REVOKE MANAGE GRANTS ON ACCOUNT FROM ROLE <Role_name>;

5.2.16 Granting Necessary Privileges to a DynamoDB User


To make DataSunrise work correctly with a DynamoDB database, it is necessary to provide the privileges required for
testing connections and creating a database Instance.
To create a database Instance, use an IAM role with the following permissions:

dynamodb:ListTables
dynamodb:DescribeTable

Please visit the following page for more information: https://docs.aws.amazon.com/amazondynamodb/latest/
developerguide/api-permissions-reference.html
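For reference, a minimal sketch of an IAM policy containing only these permissions (scope Resource down to specific tables if required):

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:ListTables",
"dynamodb:DescribeTable"
],
"Resource": "*"
}
]
}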

5.2.17 Creating an Informix Database User


To make DataSunrise work correctly with an Informix database, do the following:
1. Enter your Informix server and execute the following commands:

onmode -wf USERMAPPING=BASIC


sudo useradd -d /home/<User_name> -s /bin/false <User_name>
sudo mkdir -p /etc/informix
sudo vim /etc/informix/allowed.surrogates

2. Add your user to the allowed.surrogates file (the USERS line below is the file's content), then assign the owner and
permissions to the file and refresh the surrogate cache:

USERS:<User_name>

sudo chown root:root /etc/informix/allowed.surrogates
sudo chmod 644 /etc/informix/allowed.surrogates
onmode -cache surrogates

3. Connect via ODBC and execute the following command:

CREATE USER <User_name> WITH PASSWORD '<Password>' PROPERTIES user <User_name>;

You can see the created user in your OS console in the following way:

echo "SELECT * FROM sysusermap" | dbaccess sysuser

4. Get a list of databases:

SELECT DISTINCT trim(DBS_DBSNAME) from informix.sysdbslocale;

5. Enable your user to connect to each database:

echo "GRANT CONNECT TO <User_name>" | dbaccess sysmaster


echo "GRANT CONNECT TO <User_name>" | dbaccess <DB1_name>
echo "GRANT CONNECT TO <User_name>" | dbaccess <DB2_name>
echo "GRANT CONNECT TO <User_name>" | dbaccess <DBn_name>
...

5.2.18 Creating an Amazon S3 Database User


To make DataSunrise work correctly with Amazon S3 buckets, do the following:
1. Create the following AWS IAM Policy:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:ListAllMyBuckets",
"sts:DecodeAuthorizationMessage"
],
"Resource": [
"*"
]
}
]
}
2. Attach the Policy to your IAM Role (Policies → Policy actions → Attach) and attach the Role to your DataSunrise
EC2 machine (EC2 machine → Instance Settings → Attach/Replace IAM Role).

5.3 Configuring an MS SQL Server


Connection
To establish a connection between DataSunrise and a SQL Server database, perform the following:
1. Run the SQL Server configuration manager utility (it is included in the SQL Server pack). Open SQL Server
Network Configuration → Protocols for (DB instance name)
2. Right-click on the TCP/IP protocol name and select Properties in the context menu
3. In the TCP/IP Properties window, Protocol tab, set the Enabled parameter to Yes. Then open the IP Addresses
tab, IPAll subsection, and set the TCP Port parameter value to 1433. Click OK to close the window.
4. Open the SQL Server Services subsection, right-click on the SQL Server (DB instance name) parameter to
open its context menu, and click Restart
5. If you're using a firewall application (including Windows Firewall), allow the following inbound
connections: TCP port 1433 and UDP port 1434 (see the example after this list)
6. Once the configuration is done, it is recommended to restart your PC.
7. Connect to the database server with the SQL Server Management Studio (SSMS). Note that SSMS's
Encrypt connection option forces encryption and a server certificate check on the client's side (except SSMS 2016
and higher). Thus, when this option is enabled, the client will not be able to connect to a DataSunrise proxy if
the certificate included in proxy.pem or dictionary.db does not include the proxy's hostname. If encryption
is enabled (it is disabled by default), it is necessary to have a properly signed SSL certificate; otherwise, disable
Encrypt connection. When configuring a database connection, specify the database server's hostname or IP
address instead of an SPN. It is recommended to use SQL Server authentication instead of Windows
authentication. Otherwise, refer to subs. Configuring Windows Authentication for Microsoft SQL Server on page
82.
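A sketch of allowing the ports from step 5 with the built-in Windows Firewall, run from an elevated command prompt (the rule names are arbitrary):

netsh advfirewall firewall add rule name="SQL Server TCP 1433" dir=in action=allow protocol=TCP localport=1433
netsh advfirewall firewall add rule name="SQL Browser UDP 1434" dir=in action=allow protocol=UDP localport=1434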

5.3.1 Configuring an MS SQL Server Connection with the SQL Browser Service

DataSunrise has an option for the SQL Browser Service. Instead of a proxy port, you need to know just the name
of the server, its INSTANCENAME, to be able to connect to it through DataSunrise proxy. To establish a connection
between DataSunrise and an SQL Server database with the SQL Server Browser Service, do the following:
1. Enable the SQLServerBrowserEnable parameter in System Settings → Additional Parameters
2. Run the SQL Server Management Studio (SSMS) environment. Open the File menu and select Connect Object
Explorer
3. Select Database Engine. There you will see the Connect to Server window.
4. In the Server name box, type the name of your MSSQL server: ds_server_name_or_IP\INSTANCENAME.
a) If DataSunrise and MSSQL are on the same machine, you can write: .\SQLEXPRESS
Also, you need to disable SQL Server Browser parameter and enable SQL Server parameter in the SQL Server
Configuration Manager.
If you do not know the name of the MSSQL server, you can check it with the SQL Server Configuration
Manager. For this, run the SQL Server Manager and expand the SQL Server Network Configuration node.
There you will see all instance names of the MSSQL server deployed on this machine.
Or, you can execute the following command in SQL Server Management Studio to find out the name of the
MSSQL server:

SELECT SERVERPROPERTY('InstanceName')

b) If DataSunrise is installed on a separate machine, type the IP address or Host name of the DataSunrise server
together with the MSSQL server name.
Example: 192.168.5.78\SQLEXPRESS or
JennyPC\SQLEXPRESS.
5. Input or choose a User name and password. Select Connect
Instead of SSMS, you can use Azure Data Studio with the same connection details as in Management Studio.

5.3.2 Granting Necessary Privileges to an MS SQL Server User (also an AD user)

To make DataSunrise work correctly with an SQL Server database whether it's a regular user or an AD user, you
should execute a simple script to grant the necessary privileges to your user.
1. Create a LOGIN on the server, a USER in each database, and grant them the following privileges (an illustrative sketch of
such statements follows this list). You can download this script at: https://www.datasunrise.com/doc/creating_mssql_user.sql
You can use the method below for an Active Directory user as well. In this case, you don't need to specify the
following variables (DataSunrise will just ignore them):
• @PWD - user password
• @USER - AD login name that is used as the @USER's value
• @SID - user/user group Security Identifier assigned by the Domain controller during the logon process
2. Delete the USER (if exists) and LOGIN from all databases. You can download this script at: https://
www.datasunrise.com/doc/deleting_mssql_user.sql
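For orientation only, a hypothetical sketch of the kind of statements the script from step 1 contains (the names and the exact set of grants are illustrative; use the official script from the link above for the actual privilege set):

CREATE LOGIN datasunrise_login WITH PASSWORD = '<Password>';
USE <Database_name>;
CREATE USER datasunrise_user FOR LOGIN datasunrise_login;
GRANT VIEW DEFINITION TO datasunrise_user;
GRANT VIEW DATABASE STATE TO datasunrise_user;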

5.3.3 MS Azure Specific


At this moment it is impossible to create a Backend User with minimum privileges for Azure SQL. You can use
only Server admin and Azure AD admin users. This is because only these two users have access to the [master].[sys].
[sql_logins] table, which is critical for getting a list of logins and associated databases.

5.4 Additional Proxy Configuration


5.4.1 Enabling "Regex replace" Data Masking for Netezza
IBM Netezza does not support regular expressions by default, so it is impossible to use "Regex Replace" out-of-the-
box. To enable Regex masking, it is required to install the additional Netezza package:
1. Obtain IBM Netezza SQL Extensions toolkit from here: https://www.ibm.com/support/knowledgecenter/
SSULQD_7.2.1/com.ibm.nz.sqltk.doc/r_sqlext_software_loc.html#r_sqlext_software_loc
2. Install and configure SQL Extension toolkit. Refer to this page for instructions: https://www.ibm.com/support/
knowledgecenter/SSULQD_7.2.1/com.ibm.nz.sqltk.doc/c_sqlext_install_and_setup.html.

5.4.2 Enabling "Regex Replace" Data Masking for Aurora


MySQL and MariaDB
MySQL lower than v.8, Aurora MySQL and MariaDB don't include the function required for Regex Replace, so you
can't use this masking method out-of-the-box. To enable Regex masking, you need to install the regexp_replace
function into your database. The function is located here: <DataSunrise_installation_folder>/scripts/Masking/MySQL/
regexp_replace.sql.
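Once the function is installed, you can verify it from any SQL client. A sketch, assuming the conventional (subject, pattern, replacement) argument order used by most regexp_replace implementations (check the installed script for the exact signature):

SELECT regexp_replace('4111-1111-1111-1111', '[0-9]', 'X');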

5.4.3 Using Custom Certificate Authority in Redshift Client Applications (JDBC)

When connecting to a Redshift database through a DataSunrise proxy, you can get the following error: "Amazon
600000 General SSL Engine problem".
Such a problem occurs because Java is not aware of DataSunrise's root certificate. To solve this issue, you need to
add the CA to the Java keystore using the Java Keytool. Execute the following command:

keytool -import -keystore <cacerts> -storepass <changeit> -file <CA.crt> -alias "redshift"

You can view the certificate by using the following command:

keytool.exe -list -keystore <cacerts> -storepass <changeit> -alias redshift

To delete a certificate:

keytool -delete -keystore <cacerts> -alias redshift

Important: you need to run the JDBC client with the Java installation that is aware of the keystore you've added the certificate to.

You can also specify this storage when starting a client:

-Djavax.net.ssl.trustStore=<cacerts>
-Djavax.net.ssl.trustStorePassword=<changeit>

Example:

"%JAVA_HOME%\bin\java" -Djavax.net.ssl.trustStore="%JAVA_HOME%\jre\lib\security\cacerts" -
Djavax.net.ssl.trustStorePassword=changeit -jar C:\sqlworkbench\sqlworkbench.jar

Bypassing forced CA usage


To bypass the forced CA usage, add the following parameters to the JDBC connection string:

ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory

5.4.4 Changing PostgreSQL's Port Number


When configuring a DataSunrise proxy, it may be necessary to change the database port number. This is required if
the DataSunrise proxy is configured to use the port number originally assigned to the database. To do this, perform the
following:
1. Open the postgresql.conf file which is located in the data subfolder of PostgreSQL installation folder.
2. In the CONNECTIONS AND AUTHENTICATION section, change the port parameter's value (5432 by default) to
a new port number.
3. Restart PostgreSQL for the changes to take effect.
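For example, to move the database to port 5433 so that the DataSunrise proxy can listen on the default 5432 (the values are illustrative):

# postgresql.conf - CONNECTIONS AND AUTHENTICATION
port = 5433        # changed from the default 5432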

5.4.5 Configuring Authorization of Local Users in PostgreSQL

If the DataSunrise proxy is deployed on the same host as the database, remote users that connect to the database
through the proxy will be treated by the database as local users, so they can get some preferences such as password-
free or simplified authorization. Therefore, if password-free authorization for local users is enabled in the database
settings, it must be disabled. For this, do the following:
1. Open the pg_hba.conf file which is located in the data subfolder of the PostgreSQL installation folder.
2. Edit the pg_hba.conf file in the following way:

# TYPE DATABASE USER ADDRESS METHOD


# IPv4 local connections:
host all all 127.0.0.1/32 md5
host all all all md5
# IPv6 local connections:
host all all ::1/128 md5

3. As a result, MD5 or Password authentication method will be assigned for all database connections.

5.4.6 Enabling "Regex Replace" Data Masking for SQL


Server
SQL Server does not support regular expressions but provides the possibility to use external add-ons. The
"Regex Replace" masking function is built as an add-on as well. DataSunrise installs the Regexp replace function
automatically when you create a corresponding masking Rule, but you need to perform some preliminary steps before using
this masking method:
1. Create a database user and grant this user the required privileges to enable Regex Replace masking
2. Get the following script: <DataSunrise_installation_folder>/scripts/Masking/SQL Server/regexp_replace.sql

Important: on Linux, you need to log into your SSMS as sysadmin to execute the required queries

3. Insert the corresponding parameter values into the script:


• @LOGIN: your Login
• @PWD: your Password
• @ALLOW_TO_CREATE_FUNCTIONS: set "1" as the parameter's value
4. Execute the script in your SSMS.

5.4.7 Configuring Kerberos on SQL Server Startup Under Domain Account

Let’s assume that we have a server, sqlsrv1.HAG.LOCAL with MSSQLSERVER instance running under HAG
\Administrator account. The following instruction describes how to make it work with DataSunrise.
1. To do this, it is necessary to change the account for MSSQLSERVER because it is impossible to delegate
authorization through a proxy server under an administrative account (even if the delegation is performed by the
administrator):
• Create a separate domain user, HAG\mssql-svc
• Run the SQL Server (MSSQLSERVER) system service under this account
2. After starting the service under the mssql-svc account, it is necessary to make sure that direct access
to it with Kerberos-type authorization is possible. To do this, check that the MSSQLSvc/
sqlsrv1.HAG.LOCAL:1433 SPN exists, that this SPN is bound to the mssql-svc account, and that there are no SPN
conflicts in the domain.
3. If such an SPN does not exist, create it:

setspn -A MSSQLSvc/sqlsrv1.HAG.LOCAL:1433 mssql-svc

Check for SPN conflicts:

setspn -X

Delete conflicting entries:

setspn -D MSSQLSvc/sqlsrv1.HAG.LOCAL:1433 conflicted_account

4. To check authorization, run SSMS on any other host of the domain, connect to sqlsrv1.HAG.LOCAL,1433 and
execute the following query:

select auth_scheme from sys.dm_exec_connections where session_id=@@spid

If everything is configured correctly, the query result will be:

KERBEROS

5. Having configured the authorization, configure DataSunrise (it should be installed on another host, for example,
test2008.HAG.LOCAL):
• Create an instance which can proxy to the sqlsrv1.HAG.LOCAL:1433 server, on port 1438 for example
• Create an MSSQLSvc/test2008.HAG.LOCAL:1438 SPN and assign it to the mssql-svc account
• Enable delegation for the mssql-svc account
6. If everything is configured correctly, when connecting to test2008.HAG.LOCAL:1438 with SSMS (from any other
host on the domain) and with enabled MSSQL tracing, there should be similar messages in the log:

conn#40396730420301: Client Negotiation Info : Kerberos (Microsoft Kerberos V1.0)


conn#40396730420301: Credentials Lifetime : 09/13/30828 02:48
conn#40396730420301: Context Lifetime : 04/12/2017 18:33
conn#40396730420301: Credentials User : [email protected]

conn#40396730420301: Using the SPN : MSSQLSvc/sqlsrv1.HAG.LOCAL:1433

conn#40396730420301: Proxy Negotiation Info : Kerberos (Microsoft Kerberos V1.0)


conn#40396730420301: Credentials Lifetime : 04/18/2017 23:50
conn#40396730420301: Context Lifetime : 04/12/2017 18:33
conn#40396730420301: Credentials User : [email protected]

The log shows the main information about the two connections: client → proxy and proxy → server. Both connections authorize the
user using KERBEROS.
All errors associated with KERBEROS are displayed in the log too.
For example:

conn#40396683210101: Client Negotiation Info : Kerberos (Microsoft Kerberos V1.0)


conn#40396683210101: Credentials Lifetime : 09/13/30828 02:48
conn#40396683210101: Context Lifetime : 04/12/2017 18:33
conn#40396683210101: Credentials User : [email protected]

conn#40396683210101: Using the SPN : MSSQLSvc/sqlsrv1.HAG.LOCAL:1433

conn#40396683210101: Proxy Negotiation Info : NTLM (NTLM Security Package)


conn#40396683210101: Credentials Lifetime : 04/12/2017 18:33
conn#40396683210101: Context Lifetime : 04/12/2017 18:23
conn#40396683210101: Credentials User : [email protected]

Here is the same connection but with delegation disabled: the first connection authorized the user using
KERBEROS because the MSSQLSvc/test2008.HAG.LOCAL:1438 SPN exists, and the second connection
authorized the user using NTLM because delegation is prohibited for the mssql-svc account.
If there is a problem with KERBEROS authorization on the client → proxy level, the log will contain something like
this:

conn#40216941060201: Client Negotiation Info : NTLM (NTLM Security Package)


conn#40216941060201: Credentials Lifetime : 09/13/30828 02:48
conn#40216941060201: Context Lifetime : 04/10/2017 16:27
conn#40216941060201: Credentials User : [email protected]

conn#40216941060201: NTLM User: HAG\test-user


conn#40216941060201: NTLM Workstation: TEST2008
conn#40216941060201: NTLM Version: 6.3 Build 9600 NLMPv15

5.4.8 Configuring Windows Authentication for Microsoft SQL Server

By default, SQL Server authorization is used to access the database. If it is required to use the Windows
Authentication, and DataSunrise, database server and client applications are installed on separate machines, it is
necessary to use the Active Directory (AD) service.
When working with AD, SSPI user authorization is used (based on NTLM or Kerberos protocols). Since Kerberos-
based authorization is preferable, it is necessary to perform the following to activate this protocol:
1. Enable delegation for the DataSunrise proxy's host account. Enter Active Directory Users and Computers and
find the profile of a machine DataSunrise is installed on. Open its properties → Delegation tab and enable the
Trust this computer for delegation to any service switch.
2. The proxy's address should match, or resolve to a name that matches, a registered SPN (more on SPNs here: https://
msdn.microsoft.com/en-en/library/ms191153.aspx) for the Kerberos connection (the MSSQLSvc service). To do this, use
the Setspn.exe tool, which is supplied with the Windows Server support tools, to register the two required SPNs for
the profile of the machine the delegation is enabled for:

setspn -A MSSQLSvc/<Proxy_host>:<Proxy_port> <Proxy_host>


setspn -A MSSQLSvc/<Full_FQDN_proxy_host>:<Proxy_port> <Proxy_host>

For example:

setspn -A MSSQLSvc/vsunr-03:1435 vsunr-03


setspn -A MSSQLSvc/vsunr-03.db.local:1435 vsunr-03

It's important to run setspn.exe as a domain administrator or as a domain user with the privilege of "Validated
write to service principal name" for AD object for which it is necessary to configure an SPN.

To grant this privilege, go to Active Directory Users and Computers, select the server the database is installed
on, open its properties → Security tab, add the required user and check the Validated write to service principal
name check box. More information here: https://technet.microsoft.com/en-en/library/cc731241(v=ws.10).aspx
Use the following command to get a list of all registered SPNs:

setspn -L <Proxy_host>

To delete an SPN, execute:

setspn -D MSSQLSvc/<Proxy_host>:<Proxy_port>

To check the authorization scheme, connect to the server and execute the following query:

select auth_scheme from sys.dm_exec_connections where session_id=@@spid

The query result will show the authorization scheme used by the database server (SQL, NTLM or Kerberos).

5.4.9 Getting Metadata with an AD User


To enable an Active Directory user to get the target database's metadata, it is necessary to create a user, login and
give this user required privileges:

CREATE LOGIN [<domain name>\<firewall's host name>$] FROM WINDOWS;
GO
CREATE USER [<domain name>\<firewall's host name>$] FROM LOGIN [<domain name>\<firewall's host name>$];
GO
GRANT CONNECT ANY DATABASE TO [<domain name>\<firewall's host name>$];
GO

5.4.10 Configuring Windows Authentication for Microsoft SQL Server on Linux

By default, SQL Server authorization is used to access the database. If it is required to use the Windows
Authentication, and DataSunrise, database server and client applications are installed on separate machines, it is
necessary to use the Active Directory (AD) service.
When working with AD, Kerberos-based authorization is preferable, so it is necessary to perform the following to
activate this protocol:
1. Create an AD user DataSunrise will be acting as:
• Log into the AD domain controller server, click Start → Administrative Tools and launch Active Directory
Users and Computers
• If it is not already selected, click the node for your domain (domain.com)
• Right-click Users, point to New and click User
• In the New Object → User dialog box, specify the parameters of the new user. It could be a regular user
because it's not required to provide the user with any additional privileges. The user account should be active
(the Account is disabled check box unchecked) and the password for the account should be perpetual (the
Password never expires check box checked).
2. Create an SPN using the FQDN of the machine DataSunrise runs on. The proxy's address should match, or
resolve to a name that matches, a registered SPN for the Kerberos connection (the MSSQLSvc service). To do this, use the
setspn.exe tool, which is supplied with the Windows Server support tools, to register the two required SPNs for the
profile of the machine the delegation is enabled for:

setspn -S MSSQLSvc/<Proxy_host>:<Proxy_port> <User_from_step_1>


setspn -S MSSQLSvc/<Full_FQDN_proxy_host>:<Proxy_port> <User_from_step_1>

For example:

setspn -S MSSQLSvc/vsunr-03:1435 dsunuser


setspn -S MSSQLSvc/vsunr-03.db.local:1435 dsunuser

It's important to run setspn.exe as a domain administrator or as a domain user with the privilege of "Validated
write to service principal name" for AD object for which it is necessary to configure an SPN.

To grant this privilege, navigate to Active Directory Users and Computers, select the server the database
is installed on, open its properties → Security tab, add the required user and check the Validated write
to service principal name check box. More information here: https://technet.microsoft.com/en-en/library/
cc731241(v=ws.10).aspx
Use the following command to get a list of all registered SPNs:

setspn -L <proxy's host>

To delete an SPN, execute:

setspn -D MSSQLSvc/<proxy's host>:<proxy's port>

3. Enable user delegation. On the domain controller machine, navigate to Active Directory Users and Computers,
locate the account of the user created in step 1.
• In the Properties section, go to the Delegation tab and select Trust this computer for delegation to
specified services only and click Add
• In the Users and Computers window, specify the user account that was used to launch the database or the
name of the server the database is installed on.
• Optionally, you can use Check names to check if the specified user or computer exists, then select the
required service and click OK.
4. Create a keytab by executing the following command:

ktpass -princ MSSQLSvc/<fqdn>:<proxy_port>@<domain> -mapuser <user_from_step_1> -pass
<password_from_step_1> -ptype KRB5_NT_PRINCIPAL -out datasunrise.keytab

5. Use the keytab you got in step 4 to configure Kerberos on the DataSunrise machine (you need to move
the keytab to DataSunrise's machine first). Refer to step 4 of the following guide for details: https://
www.datasunrise.com/blog/professional-info/configuring-kerberos-authentication-protocol/. Edit the krb5.conf file
and input the required parameter values:

[libdefaults]
default_realm = <domain_realm>
default_keytab_name = FILE:<path_to gsssvc.keytab>
default_client_keytab_name = FILE:<path_to gsssvc.keytab>
clockskew = 300
ticket_lifetime = 1d
forwardable = true
proxiable = true
dns_lookup_realm = true
dns_lookup_kdc = true
default_ccache_name = FILE:<path_to krb5cc>
verify_ap_req_nofail = false

[realms]
<DOMAIN_REALM> = {
kdc = <fqdn>
admin_server = <fqdn>
default_domain = <fqdn>
}

[domain_realm]
.<fqdn> = <DOMAIN_REALM>

[appdefaults]
pam = {
ticket_lifetime = 1d
renew_lifetime = 1d
forwardable = true
proxiable = true
retain_after_close = false
minimum_uid = 1
debug = false
}

For example:

[libdefaults]
default_realm = DB.LOCAL
#default_keytab_name = FILE:D:\fw_home\default.keytab
#default_keytab_name = FILE:D:\fw_home\oraproxy_gssapi.db.local.keytab
default_keytab_name = FILE:W:\krb\gsssvc.keytab
default_client_keytab_name = FILE:W:\krb\gsssvc.keytab
clockskew = 300
ticket_lifetime = 1d
forwardable = true
proxiable = true
dns_lookup_realm = true
dns_lookup_kdc = true
#allow_weak_crypto = true
default_ccache_name = FILE:W:\krb\krb5cc
verify_ap_req_nofail = false
#default_tkt_enctypes = arcfour-hmac
#default_tgs_enctypes = arcfour-hmac
#permitted_enctypes = arcfour-hmac
#kdc_req_checksum_type = -138 #1

[realms]
DB.LOCAL = {
kdc = dsun.db.local
admin_server = dsun.db.local
default_domain = dsun.db.local
}

[domain_realm]
.db.local = DB.LOCAL

[appdefaults]
pam = {
ticket_lifetime = 1d
renew_lifetime = 1d
forwardable = true
proxiable = true
retain_after_close = false
minimum_uid = 1
debug = false
}
#[plugins]
#clpreauth = { disable = yes }

6. Run DataSunrise. It doesn't matter what user you use to do this because the domain user you created in step 1
will be used anyway.
7. A client can establish a connection using the FQDN you used in step 2. Database server's FQDN should be
specified in your target database Instance's settings (Configuration → Databases) as Host.

5.4.11 Connecting to an Amazon Redshift Database Using IAM Authentication

Depending on the driver used, there are two options for establishing a connection to a Redshift database
through a DataSunrise proxy using the IAM authentication mechanism:
• Redshift ODBC. Specify IP address of the proxy your Redshift Instance is connected through in the ODBC driver's
host field. Specify Redshift's cluster's ID in ClusterID and Redshift's Region in Region. All other parameters
should be similar to the ones described in the following guide: https://docs.aws.amazon.com/en_us/redshift/
latest/mgmt/generating-iam-credentials-configure-jdbc-odbc.html
• Redshift JDBC. Create an alias for the proxy's address similar to the Redshift cluster's address (see the hosts-file sketch after this list).
For example, if Redshift cluster's address is

examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com

the alias may be the following:

examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com.example.com
When establishing a connection, specify proxy's alias instead of the cluster's host. For example:

examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com.example.com

All other parameters should be similar to the ones described in the following guide: https://
docs.aws.amazon.com/en_us/redshift/latest/mgmt/generating-iam-credentials-configure-jdbc-odbc.html
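One simple way to create such an alias is a DNS record (or, for testing, a hosts file entry) that resolves the alias to the DataSunrise proxy; a sketch with an illustrative IP address:

# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
10.0.0.15   examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com.example.com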

5.4.12 Connecting to an Amazon Elasticsearch Using IAM Authentication

This is how you can configure a connection to an Amazon Elasticsearch domain through a DataSunrise proxy using the IAM
authentication mechanism:
• Install DataSunrise on an EC2 machine
• Create an IAM role with the following policy (replace the Resource values with your own):

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"es:ESHttpDelete",
"es:ESHttpGet",
"es:ESHttpHead",
"es:ESHttpPost",
"es:ESHttpPut"
],
"Resource": "arn:aws:es:us-west-1:987654321098:domain/test-domain/*"
},
{
"Effect": "Allow",
"Action": [
"es:CreateElasticsearchDomain",
"es:DeleteElasticsearchDomain",
"es:DescribeElasticsearchDomain",
"es:DescribeElasticsearchDomainConfig",
"es:DescribeElasticsearchDomains",
"es:UpdateElasticsearchDomainConfig"
],
"Resource": "arn:aws:es:us-west-1:987654321098:domain/test-domain"
},
{
"Effect": "Allow",
"Action": [
"es:AddTags",
"es:DeleteElasticsearchServiceRole",
"es:DescribeElasticsearchInstanceTypeLimits",
"es:DescribeReservedElasticsearchInstanceOfferings",
"es:DescribeReservedElasticsearchInstances",
"es:ListDomainNames",
"es:ListElasticsearchInstanceTypeDetails",
"es:ListElasticsearchInstanceTypes",
"es:ListElasticsearchVersions",
"es:ListTags",
"es:PurchaseReservedElasticsearchInstanceOffering",
"es:RemoveTags"
],
"Resource": "*"
}
]
}

• Attach the role to the EC2 machine DataSunrise is installed on.



5.4.13 Connecting to an Amazon PostgreSQL/MySQL Database Using IAM Authentication

This is how you can configure a connection to a PostgreSQL/MySQL database through a DataSunrise proxy using the
IAM authentication mechanism:
• MySQL
• Create an RDS MySQL database. Select Password and IAM database authentication in the Database
authentication tab
• Log in to your MySQL database as admin and create a user which will be used for IAM authentication

CREATE USER <User_name> IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';

• Create an EC2 machine and attach an IAM role with minimum possible privileges to it. Navigate to the IAM
service and create a new Policy. Navigate to the JSON tab and input the following:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"rds-db:connect"
],
"Resource": [
"arn:aws:rds-db:us-east-2:1234567890:dbuser:db-ABCDEFGHIJKL01234/db_user"
]
}
]
}

You need to replace the required parameters' values in the Resource subsection with your own values:
• Replace "us-east-2" with your Region value
• Replace "1234567890" with your account's ID
• Replace "db-ABCDEFGHIJKL01234" with your RDS database's Resource Id
• Replace "db_user" with the name of your database user you will use for IAM authentication. You should
use something like this:

arn:aws:rds-db:us-east-1:042001279082:dbuser:<resid>/mysql_test_user

• Create a new Role ( https://console.aws.amazon.com/iam/home#/roles ) and attach your Policy to this Role.
• Attach the Role to your EC2 machine. Start EC2 and install the AWS CLI. You can download it here: https://
docs.aws.amazon.com/en_us/cli/latest/userguide/install-cliv1.html
• Install MySQL on your EC2 machine
• Check the installation: generate a token with the following command:

aws rds generate-db-auth-token --hostname <rds-host> --port 3306 --region us-east-1 --username
mysql_test_user

• Check the connection to the database:

mysql --host=<rds-host> --port=3306 --enable-cleartext-plugin --user=mysql_test_user --
password="token"

• Open the DataSunrise's Web Console and create a new MySQL Instance. Select IAM Role in the Authentication
Method.
• PostgreSQL. Actions to be done for RDS PostgreSQL are similar to the ones mentioned above except some
Postgres-specific actions:
• Install psql on your EC2 machine and create a database user with the following command:

CREATE USER <User_name> WITH LOGIN;
GRANT rds_iam TO <User_name>;

• You don't need to create a Parameters Group when deploying an RDS.

5.4.14 Setting up a Proxy or a Reverse Proxy for Amazon S3, Minio or Alibaba OSS

DataSunrise features Database Activity Monitoring and Dynamic Masking of JSON, CSV, XML and unstructured
files stored in Amazon S3 buckets or in any storage compatible with S3 protocol such as Alibaba OSS or Minio. To
accomplish these tasks, DataSunrise enables you to set up a proxy or a reverse proxy for your storage.
HTTP / HTTPS Proxy is an applicative proxy that redirects a connection to the host specified in the special packet
(CONNECT) when a session starts. The applicative proxy does not know the endpoint in advance. The client
knows about the proxy, so it should specify the HTTP proxy's host. This works only for clients that run over HTTP.
When working as a reverse proxy, DataSunrise becomes invisible to the client because the client app "thinks" that it is
connected to an Amazon S3 bucket directly. To use DataSunrise in reverse proxy mode, your DNS server should be
reconfigured so that DataSunrise's host name is resolved into your Amazon S3's IP address.
When deployed as a regular proxy, DataSunrise enables you to use both the DAM and Dynamic Masking features on
S3-stored files but requires reconfiguring the client application (S3 Browser in our case).
To use DataSunrise features on S3, a dedicated database profile should be created in the Web Console. You can find
some S3-related notes below:
1. Note the Protocol drop-down list in the S3 database profile of the Web Console. There are four options there:
HTTP/HTTPS Proxy and HTTP/HTTPS Reverse Proxy. Note that the only difference between the HTTP and HTTPS
options is that the HTTPS ones can establish an SSL-encrypted connection.
2. Set up a proxy. For proxy mode, open your S3 Browser application and in the Tools → Options → Connection,
select Use Proxy Settings Below and specify your DataSunrise proxy's IP address and port number. As a result,
you will be able to establish a connection between the S3 bucket and your S3 Browser using DataSunrise as a
proxy.
3. If you're using S3 Browser to connect to a Minio server, do the following:
• If your DataSunrise S3 proxy is set to HTTP/HTTPS proxy, navigate to Advanced S3-Compatible Storage
Settings of your S3 Browser and select Signature Version 4. Specify your Minio server endpoint in REST
Endpoint and specify your DataSunrise's proxy in the proxy settings of your S3 Browser.
• If your DataSunrise S3 proxy is set to HTTP/HTTPS reverse proxy, specify your DataSunrise proxy's endpoint as
REST Endpoint. Note that proxy usage should be disabled in your S3 Browser's settings.

5.4.15 Connecting to Athena through database connectors (DBC)

To configure DataSunrise and DBC drivers (ODBC and JDBC) to be able to connect to an Athena Instance through a
DataSunrise proxy, do the following:
Create an Athena Instance:
1. Navigate to Configuration → Databases and click Add Database
2. Input connection details for your Athena
3. In the Capture Mode section, select proxy. Select an SSL Key Group in the Proxy Keys drop-down list. Note that
you can generate a new SSL Key Group and attach it to your proxy by selecting Create New
4. Navigate to Configuration → SSL Key Groups
5. Locate your SSL Key Group in the list and open it, copy Certificate from the corresponding field. Paste the copied
certificate in a text file (for example, dsca.crt)
6. Depending on driver type and client application, do the following:
• JDBC:
• Add your certificate from dsca.crt to an existing key storage in your Java folder. For example:

keytool -importcert -trustcacerts -alias dsca -v -keystore "C:/Program Files/Java/jdk-17.0.1/
lib/security/cacerts" -file "dsca.crt" -storepass changeit

• To enable Java application to use the JKS file in Trust Store, add the following options:

java -Djavax.net.ssl.trustStore=<jks_file_path> -Djavax.net.ssl.trustStorePassword=<jks_file_password> -jar <jdbc_application>


Important: the path to your certificate storage should look like this:

C:/Program Files/Java/<your JDK folder>/lib/security/cacerts

• DBeaver:
• Locate file dbeaver.ini in your Dbeaver installation folder and open it with a text editor
• Add the following lines to the end of the file:

-Djavax.net.ssl.trustStore=<jks_file_path>
-Djavax.net.ssl.trustStorePassword=<jks_file_password>

For example:

-Djavax.net.ssl.trustStore=C:/Program Files/Java/jdk-11.0.1/lib/security/cacerts
-Djavax.net.ssl.trustStorePassword=changeit

• Configure a connection to your Athena from DataSunrise proxy by specifying proxy connection details
in DBeaver. At the Driver properties tab, set ProxyHost and ProxyPort according to your Athena proxy's
settings. Test the connection.
• You can face an error Unable to find valid certification path to requested target, caused by DataSunrise's
self-signed certificate. You can use a certificate from CA or do the following: run command line as
administrator and navigate to your DBeaver installation folder. Add your Athena certificate (see step 5) to
cacerts which is located in <DBeaver installation folder>\jre\lib\security\. For example:

keytool.exe -importcert -trustcacerts -alias dsca -v -keystore "C:/Program Files/DBeaver/jre/
lib/security/cacerts" -file "C:/athena/dsca.crt" -storepass changeit

• Run DBeaver with the following parameters (example):

dbeaver.exe -vm "C:\Program Files\DBeaver\jre\bin" -vmargs -Djavax.net.ssl.trustStore="C:
\Program Files\DBeaver\jre\lib\security\cacerts" -Djavax.net.ssl.trustStorePassword=changeit

• Connect to your Athena through DataSunrise proxy using DBeaver.


• ODBC:
• Download and install the ODBC driver: https://docs.aws.amazon.com/athena/latest/ug/connect-with-
odbc.html
• Enter DataSunrise's Web Console and create a new Athena Instance. When configuring a proxy, select
Create New in Proxy Keys (new SSL Key Group will be created automatically)
• Navigate to Configuration → SSL Key Groups and open your Group's settings. Copy the Certificate
• Locate the cacerts.pem file placed in your <odbc_folder>/lib/
• Add your certificate to the end of your cacerts.pem file
• Open ODBC Data Sources as administrator. Add new user DSN
• Configure DSN Setup, Authentication Options, Proxy Options. Test connection
• Create new connection in your ODBC client app with your DSN.
• AWS CLI Athena
• Install AWS CLI
• Add new environment variable:

setx HTTPS_PROXY <Your_proxy_server_Host_and_Port>

For example:

setx HTTPS_PROXY https://127.0.0.1:443

• Configure the CLI by executing the following command:

aws configure

An example of CLI settings:

Access key ID: AMDAXRKAB6KR5BTZKLNV


Secret access key: T1+b3OFGk4Y17YlT+zFKaRmCZz4LoD04LgLYXCAA
region: us-east-2
output: json

• Enter DataSunrise's Web Console and create a new Athena Instance. When configuring a proxy, select
Create New in Proxy Keys (new SSL Key Group will be created automatically)
• Navigate to Configuration → SSL Key Groups and open your Group's settings
• Copy Certificate and paste it in a text file (dsca.crt for example)
• Now you can query your Athena through the proxy like this:

aws athena start-query-execution --ca-bundle dsca.crt --query-execution-context
Database=tests --query-string "SELECT * FROM dev.simple limit 100;" --result-configuration
OutputLocation=s3://my-bucket/myathena/

5.4.16 Connecting to Snowflake through the .NET Snowflake Connector (Windows)

To configure DataSunrise and your Snowflake driver to connect to Snowflake Instance through a DataSunrise proxy,
do the following:

1. Navigate to Configuration → Databases and create a Snowflake database Instance (refer to Creating a Target
Database Profile on page 58). Configure a proxy
2. Navigate to Configuration → SSL Key Groups and open the Proxy default SSL key group for CA certificate Group
3. Copy the certificate from the CA field and paste it to a .pem file (root_ca.pem for example)
4. Add the certificate to Trusted Root Certification Authorities Store
5. Use the following connection string to establish a connection through your DataSunrise proxy:

conn.ConnectionString = "USEPROXY=true; PROXYHOST=<Proxy_host>; PROXYPORT=<Proxy_port_number>;
INSECUREMODE=true; HOST=<Snowflake_host>; account=<Snowflake_account>; user=<Snowflake_user>;
password=<Snowflake_password>; ROLE=<Your_Snowflake_Role>; db=<Database>; schema=<Schema>";

5.5 Processing Encrypted Traffic


This subsection describes how to configure processing of encrypted traffic.

5.5.1 Configuring SSL Encryption for DB2


To configure DataSunrise to process DB2's SSL-encrypted traffic, perform the following:
1. Prepare a DB2 server for working with SSL. You need to get a certificate the server delivers to a client during an
SSL connection (hereafter db2_server.crt). Refer to the following page for example: http://www.ibm.com/support/
knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.admin.sec.doc/doc/t0025241.html
2. Install the GSKit package on the DataSunrise server. Create a trusted certificate storage and a revoked
certificates storage. Place the server certificate (db2_server.crt) into the trusted certificate storage; a command-line sketch
follows this list. Refer to the following page for example: http://www.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/
com.ibm.db2.luw.admin.sec.doc/doc/t0053518.html
3. Specify the full path to the certificate storages in the Db2KeyStoragePath and Db2KeyStashPath parameters of
the System Settings → Additional.
4. Configure the client workstation for processing of DB2 traffic with DataSunrise. It is required to install the trusted
DB2 server certificate on the client side. Refer to the following page for example: http://www.ibm.com/support/
knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.admin.sec.doc/doc/t0053518.html
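As an illustration of step 2, a sketch of creating a key database and importing the server certificate with GSKit 8 (the key database name and password are placeholders; consult the IBM pages linked above for your GSKit version):

gsk8capicmd_64 -keydb -create -db "datasunrise.kdb" -pw "<password>" -stash
gsk8capicmd_64 -cert -add -db "datasunrise.kdb" -pw "<password>" -label "db2_server" -file db2_server.crt -format ascii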

5.5.2 Configuring SSL for Microsoft SQL Server


5.5.2.1 Enabling SSL Encryption for MS SQL Server
To configure DataSunrise to process SSL-encrypted traffic, perform the following:
1. Install the MakeCert utility (it is included in the Windows SDK). You can download the Windows SDK at this page:
https://www.microsoft.com/en-us/download/details.aspx?id=8279
2. Execute the following command to create a certificate:

makecert -r -pe -n "CN= SERVER_HOST" -b 01/01/2016


-e 01/01/2036 -eku 1.3.6.1.5.5.7.3.1 -ss my
-sr localMachine -sky exchange
-sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12

Replace SERVER_HOST with actual SQL Server host name and set required certificate lifetime.
3. Run the SQL Server Configuration Manager utility and select SQL Server Network Configuration → Protocols
for (DB instance_name).
4. Right-click on Protocols for... and select Properties.
5. On the Certificate tab, select the certificate generated in step 2 of this instruction.
6. On the Flags tab you may set the Force Encryption parameter to Yes to encrypt all TDS traffic. Or set it to No to
encrypt client authorization packet only.
7. Restart your SQL Server. To do this, select SQL Server Services → SQL Server (DB instance name) and click
Restart Service.
5.5.2.2 Generating an SSL Certificate with OpenSSL
To create an SSL certificate for SQL Server using OpenSSL, do the following:
1. Create a configuration file named config.cfg and replace SERVER_HOST with actual SQL Server's hostname:

[req]
distinguished_name = req_distinguished_name
prompt = no

[req_distinguished_name]
countryName = USA
stateOrProvinceName = Washington
localityName = Seattle
organizationName = DataSunrise
organizationalUnitName = IT
commonName = SERVER_HOST
emailAddress = [email protected]

[ext]
extendedKeyUsage = 1.3.6.1.5.5.7.3.1

2. Run the following script:

openssl genrsa -des3 -out key.pem 2048
openssl rsa -in key.pem -out key.pem
openssl req -config config.cfg -new -key key.pem -out req
openssl req -x509 -config config.cfg -extensions ext -days 365 -key key.pem -in req -out
certificate.cer
openssl pkcs12 -export -in certificate.cer -inkey key.pem -out certificate.pfx

When executing the first command you will need to enter some password twice. The second command resets the
password, but you will need to enter it once again. The third command creates a certificate request within the
req file. The fourth command generates a self-signed certificate within the certificate.cer file. The last command
packs the key and the certificate into the certificate.pfx file, protecting it with a password (enter the password
twice). Then you should import certificate.pfx via MMC console to the Personal container.
3. Install the certificate for your proxy (refer to subs. Installing an SSL Certificate for an MS SQL Server Proxy on page
95).

5.5.2.3 Generating a Signed SSL Certificate with OpenSSL


If certificate check is enabled (checked “Encrypt connection” check box for SSMS lower than 2016 and unchecked
“Trust server certificate” for SSMS 2016), you should use a signed SSL certificate.
To generate a certificate with OpenSSL, do the following.
1. Prepare a required infrastructure:

mkdir db
mkdir db\new
mkdir db\private
echo. 2>db\index
echo 01> ./db/serial
echo unique_subject = no> ./db/index.attr

2. Create a configuration file (named ca in this case):

[req]
distinguished_name = req_distinguished_name
prompt = no
RANDFILE = ./db/private/.rand

[req_distinguished_name]
countryName = US
stateOrProvinceName = Washington
localityName = Seattle
organizationName = DataSunrise
organizationalUnitName = IT
commonName = DataSunrise
emailAddress = [email protected]

3. Create a configuration file named cfg:

[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
RANDFILE = ./db/private/.rand

[req_distinguished_name]
countryName = US
stateOrProvinceName = Washington
localityName = Seattle
organizationName = ACME
organizationalUnitName = IT
emailAddress = [email protected]
commonName = 127.0.0.1

[v3_req]
subjectAltName = @alt_names

[alt_names]
DNS.1=127.0.0.1
DNS.2=192.168.3.2
DNS.3=10.0.8.22
DNS.4=FLAK-PC

[ext]
extendedKeyUsage = 1.3.6.1.5.5.7.3.1

[ca]
default_ca = CA_default

[CA_default]

dir = ./db # top dir


database = $dir/index # index file.
new_certs_dir = $dir/new # new certs dir

certificate = $dir/ca.cer # The CA cert


serial = $dir/serial # serial no file
private_key = $dir/private/ca.pem # CA private key
RANDFILE = $dir/private/.rand # random number file

default_days = 365 # how long to certify for


default_crl_days = 30 # how long before next CRL
default_md = sha384

policy = policy_any # default policy


email_in_dn = no # Don't add the email into cert DN

name_opt = ca_default # Subject name display option


cert_opt = ca_default # Certificate display option
copy_extensions = copy

[policy_any]
countryName = supplied
stateOrProvinceName = optional
organizationName = optional
organizationalUnitName = optional
commonName = supplied
emailAddress = optional

Important: subjectAltName includes all the hosts that should be covered by the certificate including the one in
commonName.

4. Generate a root certificate ./db/ca.cer and a key ./db/private/ca.pem:

@ECHO OFF
SET RANDFILE=./db/private/.rand
openssl genrsa -des3 -out ./db/private/ca.pem 2048
openssl rsa -in ./db/private/ca.pem -out ./db/private/ca.pem
openssl req -new -x509 -days 3650 -key ./db/private/ca.pem -out ./db/ca.cer -config ca
openssl x509 -noout -text -in ./db/ca.cer

5. Generate and sign a certificate for the server or proxy:

@ECHO OFF
SET friendlyName=CA-signed certificate for DSUNRISE
SET RANDFILE=./db/private/.rand
SET /P serial=<./db/serial
openssl genrsa -des3 -out ./db/private/%serial%.pem 2048
openssl rsa -in ./db/private/%serial%.pem -out ./db/private/%serial%.pem
openssl req -new -key ./db/private/%serial%.pem -nodes -config cfg -out req
openssl ca -config cfg -extensions ext -infiles req
openssl pkcs12 -export -in ./db/new/%serial%.pem -inkey ./db/private/%serial%.pem -name
"%friendlyName%" -out ./db/private/%serial%.pfx
MOVE .\db\new\%serial%.pem .\db\new\%serial%.cer

6. Generated certificates will be saved in the db\new folder. Generated keys and .pfx files (packed keys and
certificates) will be saved in the db\private folder.
The CN (common name) used in the certificate must be resolvable on the client/proxy side, and the client or proxy
must use this name to connect to the server/proxy. Otherwise, the certificate check will fail even if the client/proxy
recognizes the root certificate as trusted. You can achieve this by adding the CN to the hosts file or by adding a
corresponding entry to the DNS (if administering AD).

5.5.2.4 Installing an SSL Certificate for an MS SQL Server Proxy


To install an SSL certificate for an SQL Server proxy, do the following:
1. Run certmgr.msc (or add it via the Microsoft Management Console).
2. Locate an SSL certificate (Personal / Certificates folder).
3. Export the certificate with its private key to a *.pfx file.
4. Retrieve a private key from the *.pfx by executing the command below and replacing SERVER_HOST with the
actual SQL Server's host name:

openssl pkcs12 -in certname.pfx -nocerts -out key.pem -nodes

5. Add the key to DataSunrise: create a new group in Configuration → SSL Key Groups and insert the key into the
Private Key text field.
6. Link the created group to the Instance: navigate to Configuration → Databases → your database profile. Then
in the Capture Mode subsection, open your proxy's settings and select your SSL Key Group in the Proxy Keys
drop-down list.

5.5.2.5 Disabling Ephemeral Keys-Based Encryption


DataSunrise's sniffer does not support processing of traffic encrypted with [EC]DHE protocol based on ephemeral
keys.
To enable DataSunrise to process traffic of SQL Server 2014 or higher, it is required to disable [EC]DHE on the
database server by using IIS Crypto utility: https://www.nartac.com/Products/IISCrypto. Use the guide below.
Alternatively, you can disable [EC]DHE-based ciphers for SQL Server's crypto provider using the method described
here: https://support.microsoft.com/en-us/kb/245030
1. Run IIS Crypto.
2. Uncheck ECDH and Diffie-Hellman check boxes in the Key Exchanges subsection. Click Apply.

3. Restart DB's server for the changes to take effect.

5.6 Two-Factor Authentication (2FA)


Two-factor authentication (2FA) is an additional layer of security on top of standard login/password authentication
when accessing the target database. 2FA can be based on email or on one-time passwords (OTP).

5.6.1 Configuring 2FA Based on Emails


To enable Email-based 2FA, do the following:

Important: when working in High availability configuration, you need to specify your Load balancer's host name as
the LoadBalancerHost additional parameter's value (see Additional Parameters on page 337) to be able to get valid
authentication links in emails sent by DataSunrise. If this is not done, an internal IP will be used that can't be verified
on an external device.

1. To enable sending emails with confirmation links, it is necessary to configure an SMTP server at Configuration
→ Subscribers → Add Server (refer to subs. Configuring an SMTP Server on page 212 for details) and enable the
Send security emails from this server option for at least one server.
2. Open your target database profile (Configuration → Databases) and in the Advanced Settings, check the
Accept Only Two-factor Authentication Users check box if you need to block connection attempts of
unauthenticated users.
3. Navigate to Configuration → Database Users and select a user you will use to log in into your target database
as. Open this user's profile and select E-mail in the Type of Two-Factor Authentication drop-down list.
4. Now you can connect to your target database via some client application. You will get an email with a special link
which you should open to authenticate to the database. Note that the connection time is unlimited (there is no
timeout). After the connection is terminated, you can still connect to the database without using a confirmation
link during the next 10 minutes. A connection can be established only when using the user name and IP address
you've used for the previous connection. If more than 10 minutes have passed since the connection was terminated,
you should authenticate via email again.

5.6.2 Configuring 2FA Based on OTP


To enable one-time-password 2FA, do the following:
1. Install Google Authenticator on your smartphone. Open the profile of the database user you want to authenticate
as (Configuration → Database Users). In the user profile, select Time-based One-time Password in the Type
of Two-Factor Authentication drop-down list. Click Reset TOTP Key to display a QR code and scan it with your
Google Authenticator. You will get a secret code for your database account.
2. To execute the queries listed below, you should be able to connect to the service database. Since DataSunrise
will block access to the database, you should make it ignore some client apps' service queries to be able to
execute the query aimed at authentication. Expand Advanced Parameters of your target database profile
(Configuration → Databases) and select a query group in the Query Group list. DataSunrise provides pre-built
query groups for pgAdmin, SSMS, toad for Oracle and Oracle SQL developer, but you can also use your own
query groups for other client applications. Refer to subs. Query Groups on page 207 for details on creating
query groups.
3. Log into your database through DataSunrise proxy as the database user you want to configure the 2FA for and
execute the following query in your database's SQL editor:

set ds_verify_code = <secret code>

For MongoDB, use the following query:

db.ds.distinct('DS_VERIFY_CODE = <secret code>')

For SAP HANA, use the following query:

SET 'DS_VERIFY_CODE' = '<secret code>'

For DynamoDB, execute the following query in the AWS CLI:

aws dynamodb query --table-name='DS_VERIFY_CODE=<secret code>' --endpoint-url=https://
<PROXY_HOST>:<PROXY_PORT> --no-verify-ssl

After the connection is terminated, you can still connect to the database without using a secret code during the
next 10 minutes.
If you encounter a sort of "Unrecognizable parameter" error when executing the query, probably your client
doesn't allow SET commands. In this case, use the SELECT command. For example:

select ds_verify_code = <secret code>

5.7 Reconfiguring Client Applications


This subsection describes how to configure the most common client applications to accept proxy connections from
DataSunrise.

5.7.1 PGAdmin (PostgreSQL Client)


To enable PGAdmin to connect to a target database through the DataSunrise proxy, perform the following:
1. Run PGAdmin. Note the Object browser tab displaying server connections. By default, PGAdmin displays one or two
available connections (PostgreSQL 9.4 in this case).

Figure 20: Object browser tab

2. Click File → Add Server to add a new connection.

Figure 21: Adding a connection

3. Specify the connection details for the existing DataSunrise proxy.



Figure 22: Configuring a connection

Field | Description
Name | Logical name of the connection (any name)
Host | IP address or name of the DataSunrise proxy's host
Port | Port number of the DataSunrise proxy

4. After the configuration is completed, a new connection will appear.

Figure 23: List of existing connections/servers



5.7.2 SQL Server Management Studio (MS SQL Server Client)

To enable SSMS to connect to a target database through the DataSunrise proxy, perform the following:
1. Start SSMS, click File → Connect Object Explorer.

Figure 24: Starting Connect Object Explorer

2. In the Connect Object Explorer window, input DataSunrise proxy details. Use the IP address and the port
number of the proxy you've configured in the corresponding database profile.

Figure 25: Connect to Server window

Field | Description
Server name | IP address and port of the DataSunrise proxy, separated by a comma
Authentication | Select SQL Server Authentication, not Windows Authentication
Login | Database user name required for the database connection
Password | Password required for the database connection
3. You can also use tcp: prefix before the IP address, to enable TCP/IP for the connection.

Figure 26: Alternative method of enabling TCP/IP

4. Click Connect to connect to the proxy.

5.7.3 MySQL Workbench (MySQL Client)


To enable Workbench to connect to a target database through the DataSunrise proxy, perform the following:
1. Open Workbench. In the top left corner of the screen, click Plus to create a new connection.

Figure 27: New connection

2. Enter the connection details.


Figure 28: Connection details

Field Description
Connection Name Logical name of the connection (any name)
Connection Method Use Standard method (TCP/IP)
Hostname Specify your DataSunrise proxy's IP address
Port Port number of the DataSunrise proxy
Username Name of a database user to use for authentication

3. Click Test Connection to check if you've configured everything properly and click OK. A new connection will be
created.

Figure 29: New connection icon

4. Click on the connection icon to connect to the database.
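You can also check the same proxy connection with the mysql command-line client (placeholders shown below):

mysql -h <proxy_host> -P <proxy_port> -u <db_user> -p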



6 Database Users
Rule settings enable DataSunrise to filter traffic and process queries from specific database users (for
example, you can block queries issued by a certain user). To use this feature, you need to create database user
profiles so that DataSunrise is aware of these users.
The Database Users subsection enables you to perform the following actions:
• Creating and editing target DB user profiles (manually or using a .CSV file).
• Creating and editing target DB user groups.
It is also possible to create DB user profiles automatically using DataSunrise's self-learning functionality (refer to
Learning Mode Overview).

6.1 Creating a Target DB User Profile Manually

To add information about a new user of a target database whose queries should be processed by DataSunrise, do
the following:
1. Click Database Users.
A list of existing database user profiles will appear.
2. Click Add User to create a new user profile.
3. Enter the required information about the user according to the following table:
Interface element Description
Login text field Logical name of the user
Database Type drop-down list Type of the database the user belongs to
Instance drop-down list Database instance the user belongs to

4. Click Save to apply new settings.

6.2 Creating Multiple DB User Profiles Using a CSV or TXT File

If you need to create multiple DB user profiles fast, you can load them from a .CSV or .TXT file. To do this, perform
the following:
1. Prepare a .CSV or .TXT file which contains a list of database users to be added to DataSunrise.
Each line of the file should start with the "user;" keyword followed by a user name. Each line should contain a
single user entry only. UTF-8 encoding is preferred.
Example:

user;user_name1
user;user_name2
user;user_name3
You can also add DB Instance and DB type parameters by using the following lines:

user;<user name>;<db type>;<instance name>

For example:

user;myuser;postgresql;pg_local

You can add the following database types (should be written in lower case):
• any
• mssql for MS SQL Server
• oracle for Oracle Database
• db2 for IBM DB2. Note that for DB2 LUW you should use db2. For DB2 z/OS you should use db2zos
• postgresql for PostgreSQL
• netezza for IBM Netezza
• teradata for Teradata
• greenplum for Greenplum
• redshift for Amazon Redshift
• aurora for Amazon Aurora MySQL
• mariadb for MariaDB
• hive for Apache HIVE
• sap hana for SAP HANA
• vertica for Vertica
• mongodb for MongoDB
• aurorapgsql for Aurora PostgreSQL
• aurorapostgres for Aurora PostgreSQL
• dynamodb for DynamoDB
• elasticsearch for Elasticsearch
• cassandra for Cassandra
• impala for Impala
• snowflake for Snowflake
• informix for IBM Informix
• athena for Amazon Athena
• s3 for Amazon S3
• sybase for Sybase
2. If you want to specify <instance name>, ensure that a DB Instance entry with the same name already exists
in the list of Instances (Configuration → Databases). When specifying an Instance, just copy and paste your
Instance name. If the Instance name includes spaces or non-standard characters
(for example: DB2 Z/[email protected]:50000), paste it into your CSV or TXT file as is.
3. Click Actions → Import from file. The Import User page will open.
4. At the Import User page, drag-and-drop your file or click the corresponding link for the file browser and select
your file.
5. Click Attach to save new settings.

Note: If you try to import users that already exist in the list of DataSunrise's DB Users (Configuration →
Database Users), these users will be skipped.

Please note that uploading of a user list is a two-stage process. First, when you select a file, it is uploaded to the
DataSunrise server. And when you click Attach, the contents of the file are processed by DataSunrise.

6.3 Creating a User Group


To simplify managing multiple database user profiles while configuring traffic filtering, you can arrange
database users into groups. A User Group enables you to handle all DB User profiles included in it as a
single object. For example, if you create a Data Security Rule and need to block queries from multiple users, you can
single object. For example, if you create a Data Security Rule and need to block queries from multiple users, you can
specify a required User Group in the Rule's settings instead of specifying these Users one by one.
To create a new User Group, do the following:
1. Navigate to Database Users
A list of existing DB User profiles will be displayed.
2. Click Add Group to create a new Group.
3. Enter the required information about a new User Group according to the following table:
Parameter Description
Group Name text field User group's logical name (any name)
Database Type drop-down list Type of the database the users belong to
Instance drop-down list Database instance the users belong to

4. To add Database Users to a User Group, do the following:


a) Click Add User or Group in the Process requests from Database Users subsection.
A new window will open which contains a list of existing DB Users.
b) Check User profiles that should be added to the Group.
c) Click Add Items.
5. Click Save to save User Group.

7 SSL Key Groups


SSL Key Groups are used to store SSL certificates, private keys etc. and enable DataSunrise to conveniently refer to
all entities included in a group as a single entity.
DataSunrise supports database network traffic encryption using SSL for all supported databases. To utilize the
encryption, you need to configure SSL Key Groups and Database Instance settings. With SSL encryption, DataSunrise
acts as a man-in-the-middle by decrypting and encrypting database traffic.
Besides SSL, DataSunrise supports Oracle Native Network Encryption, which also enables you to encrypt database
connections (Enabling Oracle Native Encryption on page 107).
DataSunrise also includes pre-built SSL Key Groups that contain certificates for AWS and Microsoft Azure databases.
You can find these groups in Configuration → SSL Key Groups.

7.1 Creating an SSL Key Group


To create a new SSL Key Group, do the following:
1. Go to Configuration → SSL Key Groups and click Group+
2. Specify group's logical name in the Group Name field
3. Specify key type in the Type drop-down list (certificate for a client or for a server)
4. Insert required SSL certificate into the Certificate field
5. Insert required private key into the Private Key field
6. Insert required DH parameters into the Diffie Hellman Parameters field if necessary. Note that DH Parameters
are optional: they only need to be set if you're using Diffie-Hellman key exchange. If you haven't provided
DH Parameters, you will get a low-level warning in the DataSunrise logs, but your connection will still work
7. Insert required parameters into the Curve Diffie Hellman Parameters field if necessary
8. Click Save to save the group.
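If you don't have a certificate and key at hand, a self-signed pair and optional DH parameters can be generated with OpenSSL and the resulting files' contents pasted into the corresponding fields. This is only a sketch: the file names and the subject are arbitrary examples, not required values:

openssl req -x509 -newkey rsa:2048 -nodes -keyout proxy_key.pem -out proxy_cert.pem -days 365 -subj "/CN=datasunrise-proxy"
openssl dhparam -out dhparams.pem 2048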

7.2 Enabling SSL Encryption and Server Certificate Check for the Target Database

1. Open the SSL Key Groups subsection and add a new group containing a CA certificate which should be used
to verify the server certificate: Configuration → SSL Key Groups → Group+
2. Name the group: ServerCAGroup
3. Insert the contents of the CA certificate file (the server's one) into the CA field. Click Save
4. Open the SSL Key Groups subsection and add a new group containing a key pair for the proxy: Configuration
→ SSL Key Groups → Group+
5. Name the group: ProxyKeyPairGroup
6. Insert the contents of the public key certificate file (the proxy's one) into the Certificate field. Click Save. Note
that the proxy certificate is stored in the proxy.pem file located in the DataSunrise installation folder, so if you
need to replace the proxy certificate, you can replace it in proxy.pem and restart the DataSunrise system
service.
7. Create a new database profile (Configuration → Databases)
8. Specify the following connection parameters for the database:
• Host
• Port number
• Default login (DB user name)
• Password (DB user password)
9. In the Database keys drop-down list, select the ServerCAGroup created before and check the Verify CA check box.
Click Save
10. When in the Databases subsection, click Edit on your database profile's name to open the profile's settings.
11. Click Edit on the database's proxy to open the proxy's settings.
12. In the Proxy Keys drop-down list, select ProxyKeyPairGroup created before and check the Verify CA check box.
Click Save.
13. Now you can establish an SSL-encrypted connection to your database through the DataSunrise proxy. If you need
the client application to verify the proxy's certificate, specify in the client the CA certificate that was used to
sign the proxy's public key certificate.
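For example, a PostgreSQL client can verify the proxy's certificate by pointing sslrootcert at that CA certificate (host, port, user, database and file names below are placeholders):

psql "host=<proxy_host> port=<proxy_port> dbname=<database_name> user=<db_user> sslmode=verify-ca sslrootcert=proxy_ca.pem"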

7.3 Enabling Oracle Native Encryption


DataSunrise supports Oracle's native encryption mechanism which can be used to encrypt database connections
instead of SSL.
To enable Oracle native encryption, do the following:
1. Provide your Oracle user with the following grant:

GRANT SELECT ON SYS.USER$ TO <User_name>;

2. Go to System Settings → Additional Parameters and enable the EnableOracleNativeEncryption parameter


3. OPTIONAL. If encryption isn't enabled on the server, add the following lines to the sqlnet.ora file:

SQLNET.ENCRYPTION_CLIENT = REQUIRED
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES128)

Note: you can also use the AES256 encryption method here as a more secure method. If you omit the second
line, any available encryption method will be used.

4. Update the database's metadata and restart the DataSunrise Core.
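To check whether native encryption is actually in effect for your session, you can query the standard V$SESSION_CONNECT_INFO view; when encryption is active, the banner lines mention the negotiated encryption service (the exact output depends on your configuration):

SELECT network_service_banner FROM v$session_connect_info WHERE sid = SYS_CONTEXT('USERENV', 'SID');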



8 Encryptions
The Encryption feature enables data-at-rest encryption to be applied to your target database. Encryption greatly
reduces the risk of data leakage because it makes the data useless to attackers who manage to access
the database.
At the moment, DataSunrise supports data-at-rest encryption for PostgreSQL only. DataSunrise uses pgcrypto
module for PostgreSQL databases and AES-128 algorithm for encryption.
DataSunrise utilizes Transparent Data Encryption (TDE) technology. In other words, it encrypts data in the target
database and decrypts it only when a database user connects to the database through DataSunrise. Encryption
and decryption are performed transparently for client applications which means that you don’t need to modify any
clients. To see the actual database contents, you need to access your database through DataSunrise proxy.
As a result of the encryption process, a copy of a source table with encrypted data included is created. Then the
source table is renamed (as “table_original” for example) and replaced with the encrypted copy named as the source
table.
An encryption key is stored in DataSunrise (or in CyberArk or AWS Key Management Service optionally). For each
new connection, an encryption key is passed to the database server and stored in a temporary table. It is safe to
store an encryption key in a temporary table because the temporary table is created when the user connects
through the DataSunrise backend, and no one except that user can access it. DataSunrise uses a dedicated
secure algorithm for delivering encryption keys to the server, which prevents the key from being compromised
even if an attacker intercepts the packets used for key exchange.
Here’s the key exchange algorithm description (all steps are performed during the client connection):
• When establishing a connection between a client and a database, DataSunrise generates a pair of RSA keys, a
public key and a private key, and passes the public key to the database server.
• The database server generates a session key, encrypts it using the public key and passes the encrypted
session key to DataSunrise.
• DataSunrise receives the encrypted session key, decrypts it using the private key, encrypts the data key with
the session key and passes the encrypted data key to the database server.
• The database server gets the encrypted data key, decrypts it with its own session key and saves it in a temporary
table (“ds_local”). After that, the key exchange can be considered completed and the data key can be used
for encryption and decryption of data in the database. Currently, it is possible to use multiple encryption keys for
one database.
Once the key exchange is completed, encryption and decryption of data on the database server become possible.
DataSunrise lets you enable encryption for individual columns or for entire tables. When encryption is
enabled, the selected data is encrypted with a data key and can then be accessed through DataSunrise
only. DataSunrise can also create indexes for encrypted columns without decreasing query execution speed while
encryption is enabled.
DataSunrise offers three ways of processing indexes:
• Leaving index columns unencrypted. Query execution speed stays the same, but index column data will be
readable by everyone.
• Encrypting index columns but building the index on unencrypted data. The unencrypted index data cannot be
accessed by standard means, and for cloud databases it cannot be accessed at all. This is the default
behavior.
• Index-free. If building indexes on unencrypted data is not acceptable, indexes are not used at all. This is
the most secure option but also the slowest one.
DataSunrise encryption and decryption are transparent to the user. The user queries tables as usual, and
DataSunrise modifies the queries to encrypt or decrypt the data as needed.
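For illustration only: conceptually, the rewriting resembles wrapping encrypted columns in pgcrypto calls, roughly as sketched below. The table, column and key are made up, and the actual SQL that DataSunrise generates is internal and may differ:

-- What the client sends through the proxy:
SELECT card_number FROM customers;

-- A rough idea of what could reach the server for an encrypted column:
SELECT convert_from(decrypt(card_number, '\x0F9A4E6F0F9A4E6F0F9A4E6F0F9A4E6F'::bytea, 'aes'), 'UTF8') AS card_number FROM customers;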

Warning: if you're using Encryptions on a PostgreSQL database, make sure that nobody changes encrypted
database tables' contents directly (bypassing DataSunrise's proxy), because it makes the table undecryptable.

8.1 Using Encryptions


To use the Encryption feature, do the following:
1. Enable encryption settings by navigating to System Settings → Additional Parameters and enabling the
ShowEncryptionSettings parameter if it is disabled. Also set the flushTimeout parameter's value to "90". To keep
the "table_original" table, enable the SaveDsOriginalTable parameter (disabled by default).
2. Navigate to Configurations → Encryptions for encryption settings.
3. Click Add Encryption to create a new encryption task.
4. Specify a logical name for the task, choose database instance to enable encryption for, specify database
credentials (Login and Password) to connect to the target database (click Log On to connect to the DB).
5. Click Create Encrypted Columns and click Select to select column(s) to apply encryption to. In the Storage
Type, select a method of storing your encryption keys.
Storing method Description
DataSunrise Internal Storage Storing the keys in SQLite
CyberArk Storing the keys in CyberArk. Specify CyberArk's safe name, folder
name and object name.
AWS Key Management Service Using the AWS Key Management Service. Encryption keys are created
randomly and encrypted with an AWS master key. Specify the
Customer Master Key to encrypt the keys with.

6. Specify encryption keys to use in the Encryption Keys field.
Note: an encryption key can consist of up to 16 pairs of hexadecimal values (0-9 digits and ABCDEF characters).
For example, you can use something like "0F9A4E6F" as an encryption key. Note that you can use a unique
encryption key for each column.


7. Click Save to save the encryption task.
As a result, you will be able to access the original table contents only when connecting to the database through
the DataSunrise proxy; otherwise the contents will appear encrypted.
8. To delete your Encryption Task, navigate to the task list, locate your Task and click the red-X button. You will have
two options:
• Regular deletion. Deletes the Task and restores encrypted database tables to their original state.
• Forced deletion. Deletes the Task and leaves the tables encrypted. Note that this is irreversible.

9 DataSunrise Rules
DataSunrise's functionality is based on a system of policies (Rules) used to control data auditing, database firewall
and data masking capabilities: Data Audit Rules, Data Security Rules and Data Masking Rules respectively.
DataSunrise's self-learning system (the Learning Mode) is controlled with its own set of Rules — Learning Rules.
In fact, a Rule is a set of settings that defines when the Rule-related module should be activated and how it should act.
Depending on certain Rule's settings, DataSunrise can activate its functionality when the following events occur:
• A user query to any target DB or to a target DB of certain type was intercepted;
• A user query addressing certain target DB's elements (schemas, tables, columns) was intercepted;
• A query came from a certain IP address, network interface or socket;
• Queries issued by certain target DB's users or client applications;
• A query matches a certain SQL pattern;
• A query contains some signs of SQL injection attack.
Each Rule's settings entail a certain action DataSunrise should execute when the Rule is activated ("triggered").
Activation and deactivation of Rules can be done in their settings or via the context menu. Right-click Rule's name in
the Rules list and select Disable to deactivate a Rule or Enable to activate.
You can configure a Rule to be activated automatically at certain time and weekday (refer to Schedules on
page 219). You can also notify concerned parties (Subscribers) about activation of a Rule via Email or instant
messengers (refer to Subscriber Settings on page 212).

9.1 Execution Order of DataSunrise Rules


DataSunrise executes its Rules in the following order: Data Audit Rules → Data Security Rules → Data Masking
Rules. So each SQL query intercepted by DataSunrise goes through the following processing stages:
1. A query is examined for matching conditions defined by existing Data Audit Rules. If a query matches certain
Audit Rule's conditions, it undergoes data auditing.
2. Then the query is examined for matching conditions defined by existing Data Security Rules. If a certain Rule
matches, the firewall blocks or ignores the query depending on the Rule's settings.
3. If the query was not blocked at the previous stage, it is examined for matching conditions defined by existing
Masking Rules. If a Masking Rule matches, DataSunrise modifies the query's code according to the Rule's settings
and redirects the modified query to the target DB. Having received the modified query, the target DB edits its
response and outputs obfuscated ("masked") values instead of actual DB contents.
If multiple Rules of the same type exist (two Audit Rules, for example), DataSunrise executes them according to the
priority level of each Rule. For example, when creating a Data Audit Rule you can select one of the following actions:
• Audit
• Skip: DataSunrise skips auditing
This allows you to create an Audit Rule that audits access to a whole schema and then create a higher-priority Rule
that skips auditing of some tables. Security, Masking and Learning Rules feature similar settings which enable you to
create sophisticated Rule configurations.
Visually, you can tell a Rule's priority in the Rules list by how close it is to the top of the list (the closer to
the top, the higher the Rule's priority). To adjust the position of a Rule in the list, click Priority Mode
and arrange the Rules by drag-and-dropping them to set the execution priority. After that, click Save Priority
or Discard Changes. For example, in the picture below you can see that the "masking_rule" Rule will be executed
before the "mask_cons" Rule.

Figure 30: Changing Rule priority

Note: you can also apply other actions to your Rules by selecting them on a list and expanding the Actions menu.
Thus, you can arrange multiple Rules into groups of Rules (Group/Ungroup), create a duplicate of a Rule (Duplicate)
and add standalone Rules to existing Groups (Merge).

9.2 General Settings


This section is common for all types of Rules. It contains the following elements:
Interface element Description
Name text field Logical name of the Rule
Database Type drop-down list Target database type ("PostgreSQL" for example)
Instance drop-down list Target database instance. The Rule will process queries directed to
the selected instance. Select Any to monitor all databases known to
DataSunrise. If you want to add a new database instance, click "Plus" (+)
and input the required information
Comment text field Write your comments here (optional).

9.3 Filter Sessions


This section is common for all types of Rules. It enables you to define which queries (from which hosts, client
applications, users etc.) should trigger the rule. Traffic filtering is customizable, multiple conditions can be added to
the filter. Visually, the filter settings resemble a multi-level logical expression consisting of multiple clauses.
DataSunrise includes the following conditions:

Condition Description
Application Client application name (Creating a Client Application Profile on page 211)
Application RegEx Client application RegEx
Application User RegEx Client application User RegEx
Application User Client application User (Capturing of Application Users on page 406)
Application User Group Client application User Group
DB User Database User (Creating a Target DB User Profile Manually)
DB User Group Database User Group (Creating a User Group)
DB User RegEx Database User RegEx
Host Host: IP address or host name (Creating a Host Profile on page 209)
Host Group Host Group (Creating a Group of Hosts on page 210)
OS User Operating System User
OS User Group Operating System User Group
OS User RegEx Operating System User RegEx
Proxy DataSunrise proxy
Sniffer DataSunrise sniffer
Interface Network interface
Session Parameters The following parameter is applicable to Oracle only:
• AUTH_TYPE: user authentication type. It supports the following values:
• PASSWORD: login/password authentication
• KERBEROS: KERBEROS and KERBEROS5PRE-based authentication
• NTS: NTS-based authentication
• BEQ: BEQ-based authentication
• RADIUS: RADIUS-based authentication

Configuring filtering expression:

In the example above, the filtering expression includes two sub-clauses.


• The first sub-clause defines the following condition: a query must come either from "postgres" OR from "test"
• The second sub-clause defines the following condition: a query must come from the specified proxy.
As the main filter parameter is set to Match All, the full filtering expression defines the following condition: a query
must come from "postgres" OR "test", AND from the specified proxy.

Interface element (from left to right) Description


AND/OR drop-down list • AND: a query must match all the specified conditions of the clause.
• OR: a query must match any of the specified conditions.

Add Condition button Add filtering by certain parameters to the clause.


Add Group button Add a new branch to the filtering expression.

Using Regular expressions


You can also use regular expressions to specify the following entities:
• Application Regexp: Set a regular expression pattern for application's name.
• DB User Regexp: Set a regular expression pattern for database user's name.
• OS User Regexp: Set a regular expression pattern for operating system user's name.

Figure 31: Selecting stored procedures to be added to a group

Here are some examples of useful Regular expressions:

RegExp example Description


.* Search for any value
[a-z0-9_-]*@[a-z]*.[a-z]* Search for email addresses
[A-Za-z]* Search for any words that include a-z and A-Z characters
Kathy Search for the specified word (exact match)

9.4 Filter Statements


This section is common for Audit and Security types of Rules. It enables you to configure traffic filtering based on
SQL statements included in an incoming query.
• Object Group. Use this filter type to audit DML operations with schemas, tables and columns. This filter type
also enables performing auditing of stored procedure calls and function calls inside SQL statements and PL/SQL
scripts. Refer to Object Group Filter on page 115
• Query Group. Use this filter to audit custom (selected) SQL statements. Refer to Query Group Filter on page
117.
• Query Types. Use this filter to audit certain types of operations. Refer to Query Types Filter on page 117.
• SQL Injection. Use this filter to audit SQL injections. Refer to SQL Injection Filter on page 119.
• Session Events. Use this filter to log session events. Refer to Session Events Filter on page 117.
• User Blocking Filters. This filter is unique to Security Rules. It enables you to block users that fail to authenticate
properly. Refer to User Blocking Filters

9.4.1 Object Group Filter


Use this filter type to create a Rule to audit DML operations with selected schemas, tables, columns and stored
procedure calls.

Filter parameters Description


Process SQL Statements check boxes Check to monitor corresponding SQL statements.
Custom Functions check box Enables buttons and fields that are used to select
functions and procedures to be audited by the Rule.

Note:
DataSunrise can process queries directed to certain
functions, but some functions belong to the SQL
language rather than to the database itself, for example
current_catalog for PostgreSQL or current_user for
MySQL. If you cannot find the function you need
in the UI's function browser, that function belongs
to the SQL language and cannot be processed by
specifying it in Process SQL Statements to Functions.

Process Tables in drop-down list Source of tables to be processed by the Rule:


• Current Rule
• Object Group: group of objects (refer to Object
Groups on page 203). Use Choose Object Groups
drop-down list to define an Object Group to monitor.
Otherwise, proceed to Process SQL to Databases,
Schemas, Tables.

Choose Object Groups (for Process Tables in → Object Groups only) Groups of objects containing tables to be
processed by the Rule. Click "Plus" (+) to add a new group to the list.
Skip Tables in drop-down list (for Process Tables in → Object Groups only) Skip tables when processing.

Filter parameters Description


Process Query to Databases, Schemas, Tables, Columns (for Process Tables in → Current Rule only) Databases,
schemas, tables and/or columns to monitor.
• Click Select to select required objects manually (refer
to Adding Objects to an Object Group Manually on
page 204).

Note: when working with MongoDB, you cannot select Documents. Only Databases and Collections are available.

Note: when working with some databases like DynamoDB, only primary keys will be displayed in the database
object tree. To select columns other than primary keys, specify the names of these columns using the Add
Exact Name button.

• Click ADD REGEXP to select required objects using regular expressions (refer to Adding Objects to an
Object Group Using Regular Expressions on page 205). Regular expressions enable you to select certain
database elements; even if the database structure changes, the Rule will remain active.

Process Query to Procedures (for Process Tables in → Current Rule only and if Process SQL Statements to
Functions is enabled) Database functions to monitor.
• Click Select to select required functions manually (refer to Adding Stored Procedures to an Object Group
Manually on page 206).
• Click ADD REGEXP to select required functions using regular expressions (refer to Adding Stored
Procedures to an Object Group Using Regular Expressions on page 206).

Skip Tables in drop-down list Source of tables the rule should ignore:
• Current Rule
• Object Group: a group of objects

Skip Query to Databases, Schemas, Tables, Columns (for Skip Tables in → Current Rule only) Ignore selected
databases, schemas, tables and columns during monitoring.
• Click Select to select required objects manually (refer
to Adding Objects to an Object Group Manually on
page 204).
• Click ADD REGEXP to select required objects using
regular expressions (refer to Adding Objects to an
Object Group Using Regular Expressions on page
205).

Filter parameters Description


Skip Query to Procedures (for Skip Tables in → Current Rule only and if Custom Functions is enabled) Ignore
selected functions during monitoring.
• Click Select to select required functions manually
(refer to Adding Stored Procedures to an Object Group
Manually on page 206).
• Click ADD REGEXP to select required functions
using regular expressions (refer to Adding Stored
Procedures to an Object Group Using Regular
Expressions on page 206).

9.4.2 Query Group Filter


Select this filter type to create a Rule for auditing custom SQL statements.

Filter parameters Description


Process Group of Query drop-down list Name of a SQL group which contains SQL statements the Audit
Rule should process. Click "Plus" (+) to add a new group of SQL
statements to the drop-down list
Skip Group of Query drop-down list Name of a SQL group which contains SQL statements the Audit Rule
should ignore
Choose Object Groups drop-down list Select a group of objects an incoming query is directed at.

Note: Refer to subs. Query Groups on page 207 for details on creating SQL statements groups.

9.4.3 Query Types Filter


Select this filter type to create a Rule for auditing certain operations (also can be used for Data Security rules).
Specify one or more statements from the list to be audited.
To select certain query types for a given database type, click Add Query Type and select the required types of
queries. To add a Query type to the list of Query Types, use Queries Map (System Settings → Queries Map). Refer
to Queries Map on page 398

9.4.4 Session Events Filter


This filter type can be used both for Audit Rules and Security Rules. Although the filter's name is the same for both
types of Rules, the functionality is completely different.
Audit Rules
When used with Audit Rules, this filter type can be used for auditing session trails (session events). For a list of
captured session events, navigate to Audit → Session Trails
The filter's settings include a variety of clauses. The settings are self-explanatory, so we will not dwell on them.
Security Rules
When used with Security Rules, this filter type can be utilized to prevent DDOS attacks or attempts to guess or
brute-force the target database's password (the filter blocks a user by name or IP address when the number of failed
access attempts exceeds the specified value). To enable protection, check the Block DDOS or Brute Force check box
and create a clause.

Let's take a look at the example pictured above. The example's settings mean that DataSunrise will Block incoming
queries if the number of Failed Sessions (unsuccessful login attempts) exceeds 10 per minute, in other
words, if a user is trying to guess or brute-force a database password. As a result, DataSunrise will Block the
database user trying to access the target database Permanently by User name and IP address.

9.4.5 SQL Injection Filter


The filter's functionality is based on a system of penalty points ("penalties") assessed to a suspicious query which
has signs of an SQL injection in its code. When a suspicious query accumulates the predefined number of penalties,
DataSunrise blocks it.

Filter parameter (text field) Description


Warning level Number of penalties a query should accumulate to be considered suspicious.
When the sum of penalties for a given query exceeds this value, DataSunrise logs
the event with a warning but doesn't block the query.
Blocking level Number of penalties a query should accumulate to be considered an SQL-
injection query. If the sum of penalties for the query exceeds the specified value,
DataSunrise blocks it.
Comment penalty Number of penalties for comments in a query's code. For example:
SELECT * FROM Users WHERE username='Administrator' -- ' It's a comment

A Keyword in a Comment Penalty Number of penalties for comments containing one or multiple SQL keywords. For
example:
SELECT * FROM Users WHERE username='Administrator' -- ' AND pass='123'

Double Query Penalty Number of penalties for multiple SQL statements separated with semicolons. For
example:
SELECT * FROM Users; DROP TABLE Transactions

OR Penalty Number of penalties for "OR" statement. For example:


SELECT * FROM Users WHERE userid = 123 OR 1=1

Constant Expression Penalty Number of penalties for an expression which is always true. For example:
SELECT * FROM events WHERE rowid = '4' OR '1'='1';

Union Penalty Number of penalties for "UNION" statement. For example:


SELECT * FROM events WHERE rowid=1 UNION ALL SELECT null, null, null,
null,null, null, null

Suspicious Conversion: Blind Error attack A specific Blind SQL injection attack: the attacker tries to execute SQL
statements using standard database functions like CAST or CONVERT to analyze error messages and statement
result-sets of the database instance.

Suspicious Function Call A specific type of Blind SQL Injection attack. The attacker tries to use database-
specific functions like SLEEP or PG_SLEEP in SQL statements to analyze errors and
statement's result-sets.
Concatenation of Single Characters for Many Types of Attacks A specific Blind SQL injection attack: the attacker
sends SQL statements with character concatenations using CHR or CHAR built-in functions applicable for the
specific database type.
Suspicious Condition A specific Blind SQL injection attack: the attacker uses UNICODE, ORD, ASCII
or similar function in conjunction with a conversion from character to numeric to
analyze error codes and error messages.
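To make the arithmetic concrete, here is a hypothetical example (the threshold and penalty values below are made-up settings, not product defaults). Suppose Warning level = 8, Blocking level = 16, OR Penalty = 4 and Constant Expression Penalty = 6. The query below then accumulates 4 + 6 = 10 penalties: it exceeds the warning level, so the event is logged with a warning, but it stays below the blocking level, so the query is not blocked:

SELECT * FROM Users WHERE userid = 123 OR '1'='1'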

9.5 Response-Time Filter


This section is common both for Audit and Security types of Rules.
To use the Response-Time filter, set the Trigger the Rule only if the number of affected/fetched rows is not less
than value (Filter Statements → Object Group filter). Please note that this filter works with the Disconnect blocking
method only.
For SELECT-type operations, a connection with a target database will be interrupted if the number of rows in the
database output exceeds the Response-time filter's value. Whether the rows up to the configured limit are returned
to the client depends on the database driver.
For UPDATE and DELETE operations, DataSunrise can't determine the number of rows to be deleted or modified
before the query is executed. Once the query has been executed, DataSunrise will interrupt the connection if the
number of affected rows exceeds the Filter's threshold. In this case the transaction will be rolled back (if the query
was not included in an implicit transaction).

9.6 Rule Triggering Threshold


This section is common both for Audit and Security types of Rules. It enables you to trigger the Rule if the frequency
of user queries exceeds the specified threshold. Check the Enable check box of the Rule Triggering Threshold
section to set the threshold parameters:

Figure 32: Example: the Rule will be triggered only if there is a database user that performed more than 100
operations per hour

• Time Span: time period to set the counter for. Once a Rule is triggered, the counter is reset and the process
is repeated.
• Set Threshold on: threshold variable to set the counter for:
• Operations: all the operations specified in the Rule
• Rows: returned database rows
• Threshold Value:
• Set Threshold On = Operations: the number of operations to be executed before the Rule will be triggered.
The operations include all queries specified in the Rule's settings only, all other operations will be skipped
• Set Threshold On = Rows: the number of database rows to be returned before the Rule will be triggered
• Calculation per:
• Rule: only settings of the current Rule will be considered when setting up the threshold
• Database User: only database user queries will be considered when setting up the threshold
• OS User: only operating system user queries will be considered when setting up the threshold
• Application User: only client application user queries will be considered when setting up the threshold

Note: if the User value (either Database User or OS User or Application User) is selected, operations and rows of
each user will be calculated separately. If the Rule value is selected, the total number of operations and rows will
be calculated.

9.7 Data Filter


The Data Filter enables you to configure a Rule to be triggered only if the value of interest is included in the query
response. If bindings are used, Data Filter works without waiting for a response.
1. Check the Data Filter check box to activate the filtering. If Data Filter is enabled, Bindings Data will be checked as well
2. Select Reg Exp (regular expression) or Contains Value in the drop-down list. Contains Value enables you to
search for exact words across the target tables, so we will not dwell on that
3. The Reg Exp option enables you to use regular expressions to search for the required values. For example:

.*Kathy.*

means that the Rule will be triggered only if the response contains "Kathy". The following regular
expression can be used to trigger the Rule if email addresses are contained in the columns you're
searching across:

[a-z0-9_-]*@[a-z]*.[a-z]*

9.8 Creating DataSunrise Rules from Transactional Trails

DataSunrise enables creating Data Audit, Security and Dynamic masking Rules based on parameters captured by the
Data Audit functionality.
1. Navigate to Audit → Transactional Trails for the list of captured events.
2. Select an event in the list and check the corresponding check box to highlight the event.
3. Click Create Rule to create a new Rule. Select rule type in the Rule drop-down list and check objects to add to
the Rule in the Conditions to check object tree:

Parameter Description
Application Client application used to send the query
DB user Database user the query is sent by
Query type Query type (Filter Statements → Query Types)
Query match Add the query to an existing query group
Objects involved in the query Database objects addressed by the query

4. You will be redirected to a new Rule page. All parameters selected at the previous step will be added to the Rule.

9.9 Data Audit (Database Activity Monitoring)

The data auditing capability enables real-time database activity monitoring and logging the information about
queries reaching the database, such as database content modification, extraction or deletion. DataSunrise provides
real-time tracking of database user actions and also monitors changes in database configuration and system settings.
The audit logs are stored in the DataSunrise-integrated SQLite database or in an external database.
Logged data helps to comply with requirements of regulatory standards such as SOX, HIPAA, PCI DSS, and other
regulators and acts.
Data Audit function is available in Sniffer mode and in Proxy mode. You can create new Data Audit Rules or edit
existing ones in the Data Audit section. Rules can be set to audit transactions on a certain database or from certain
database users, IP addresses and client applications.

Figure 33: Database Activity Monitoring

9.9.1 Creating a Data Audit Rule


To perform auditing of a target database, it is necessary to create an Audit Rule. To do this, perform the following:

1. Go to the Data Audit → Rules subsection and click Add Rule.


2. Input the required information to the Main section subsection (General Settings on page 111)
3. Input the required information to the Actions subsection:

Interface element Description


Skip check box If checked, this subsection is skipped, except Schedule
and Notify a Subscriber, if the Rule is triggered
Log Event in Storage check box Save event info in the Audit Storage (refer to Audit
Storage Settings on page 383)
Log Unique Events Only check box Log only unique queries that triggered the Rule. This
option is available only if Log Event in Storage is
checked
Depersonalize Queries before Logging check box Hide sensitive data in user queries when displaying
them in the Transactional Trails subsection
Check Other Rules Even if This One Has Been Triggered check box Continue checking conditions established by
other existing Audit Rules
Syslog Configuration drop-down list Select a CEF group to use when exporting data through
Syslog (refer to Syslog Settings (CEF Groups) on page
222)
Max Row Count to Log Query Results/Bind Variables drop-down list If the Log Query Results check box is checked,
this parameter defines the maximum number of lines to log. By default, the number of lines to log is defined by the
MaxSaveRowsCount parameter in the Firewall settings section (refer to Additional Parameters on page 337)
Log Bind Variables check box Log usage of bind variables
Log Query Results check box Log query results

4. Input the required information to the Filter Sessions subsection (Filter Sessions on page 111).
5. Select the required traffic filter (Filter Statements on page 114)
6. Set Response-time filter if necessary (Response-Time Filter on page 120)
7. Configure Data Filter if necessary (see Data Filter on page 121).
8. Check the Enable check box of the Rule Triggering Threshold section to set threshold parameters (Rule
Triggering Threshold on page 120).
9. Input Tags if necessary (Tags on page 199).
10. Click Save to save the Rule's settings.

9.9.2 Using Audit Trail for auditing Amazon RDS Oracle database queries

This feature enables you to get auditing results collected by Oracle native audit tools. First and foremost, this feature
can be used on Amazon RDS Oracle databases because DataSunrise doesn't support sniffing on RDS databases.
Note that XML-based trailing is available for Oracle 12+ only and table-based trailing is available for Oracle 11+
only. Note that you need to configure Oracle's native auditing of connections and sessions - that's compulsory!
DataSunrise supports reading and processing of audit log files generated by the Oracle Database kernel when the
audit_trail init parameter is configured to generate such data. The audit configuration on the database side is done
using the AUDIT command, which provides many ways of shaping your database activity logging strategy. DataSunrise
supports the following configurations of the Oracle Audit Trail native auditing mechanism:
• audit_trail = DB,EXTENDED: the audit data is stored in the database files (SYS.AUD$ table or DBA_AUDIT_TRAIL
for read-only access)
• audit_trail = XML,EXTENDED: the audit data is stored in OS files of XML format outside the database
(V$XML_AUDIT_TRAIL for read-only access)
• audit_sys_operations = TRUE: audit actions performed by high-privileged users (including RDS instance master
user) like ones who connect with SYSASM, SYSBACKUP, SYSDBA, SYSDG, SYSKM, or SYSOPER roles.
The EXTENDED word in the audit_trail init parameter is compulsory for DataSunrise passive monitoring and
stands for “Record SQL statements and bind variables (including their values) as well”. Otherwise the Oracle
native audit is not so efficient and may not suit the regulatory requirements.
Both DB and XML modes are valid and have their own advantages and disadvantages. Configuration steps
depend on the choice between these two modes. AWS RDS Oracle native audit can be configured for any Edition
of the currently supported DBMS versions (11.2 - 19.0)
Below is the summarized difference chart for audit_trail modes:
Mode | Audited data location | DataSunrise access method | Authentication | DataSunrise's ability of removing processed logs | Check logged data in the database
DB | Database files (SYS.AUD$) | Directly from the SYS.AUD$ table | Database instance user | Yes | SYS.AUD$, DBA_AUDIT_TRAIL
XML | External OS files (XML) stored at RDS | RDS API calls | Access/Secret key or IAM Role | No | V$XML_AUDIT_TRAIL

Note that all the following actions should be performed as the admin user.
1. For the Oracle package, on Oracle 12+, before creating any objects you should select your PDB container and
create the objects in this container:

ALTER SESSION SET CONTAINER=<pdb container name>;

These are the actions that need to be done: https://oracle-base.com/articles/misc/list-files-in-a-directory-from-plsql-and-sql-dbms-backup-restore,
https://oracle-base.com/articles/9i/load-xmltype-from-file. Note that you need to
specify the sys.load_xml procedure and grant your user the following privileges:

GRANT EXECUTE ON sys.load_xml TO <User_name>;


GRANT SELECT ON SYS.DBA_TAB_PRIVS TO <User_name>;
GRANT SELECT ON V$PARAMETER TO <User_name>;

2. Prepare your Oracle RDS Instance (audit_trail = DB, EXTENDED):


• Create a new Parameter group for corresponding Oracle DBMS engine version
• Configure your Parameter group: audit_trail=DB,EXTENDED, audit_sys_operations=TRUE
• Modify existing Oracle RDS instances by replacing the standard option group with a new one
• Alternatively, you can enhance your Parameter group with the provided values
• Since this is a static parameter (see Apply type), the changes will be applied only after rebooting the
instance
• Check the RDS after reboot:
• The parameter group should be in the in-sync status
Prepare an Oracle RDS Instance (audit_trail = XML, EXTENDED):
• Prepare a parameter group as follows:

Name | Values | Allowed values | Modifiable | Source | Apply type | Description
audit_trail | XML,EXTENDED | DB, OS, NONE, XML, EXTENDED | true | user | static | Enables system auditing
audit_sys_operations | TRUE | TRUE, FALSE | true | user | static | Enables sys auditing
commit_logging | IMMEDIATE | IMMEDIATE, BATCH | true | user | dynamic | Transaction commit log writing behavior

• Apply the parameter group to the instance. Restart the instance for the changes to take effect
• Prepare an IAM Role for DataSunrise EC2 instance used for passive monitoring of the RDS instance with the
audit_trail=XML,EXTENDED mode configured. The IAM Role should include the following IAM policy:

{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action":


[ "rds:DownloadDBLogFilePortion", "rds:DescribeDBLogFiles", "rds:DownloadCompleteDBLogFile" ],
"Resource":"arn:aws:rds:<region>:<012345678901>:db:<db-instance-name>" } ] }

Substitute the values wrapped into <> symbols by the corresponding region, AWS account ID and instance
identifier
• Apply the resulting IAM Role with an IAM Policy to the EC2 Instance
3. Prepare Oracle Database Audit Trails
• Connect to the Oracle DB as instance master user (recommended). Note that master user is recommended as
you will have to provide access to the SYS catalog database objects
• You can also test the init parameters with a SQL client (e.g. SQL*PLUS) by executing the following command:

show parameters audit%r;

• Use the AUDIT statement to configure the auditing strategy for the native audit mechanism:
• Note that auditing logon/logoff (SESSION item) is compulsory
• You can configure object-based, user-based, statement-based-for-all-users AUDIT configuration. It is
recommended to configure AUDIT for particular statements and DB users in order to not cause too much
overhead for your DBMS engine
• Example configuration for AUDIT is shown below:

AUDIT SESSION BY BOB_DBA BY ACCESS;


AUDIT GRANT ANY OBJECT PRIVILEGE, GRANT ANY PRIVILEGE, GRANT ANY ROLE by BOB_DBA BY ACCESS;

• To disable configured audit, you can use the NOAUDIT command (example below):

NOAUDIT SESSION BY BOB_DBA;


NOAUDIT GRANT ANY OBJECT PRIVILEGE, GRANT ANY PRIVILEGE, GRANT ANY ROLE by BOB_DBA;

You can find more information on the AUDIT command structure in the official Oracle documentation:
https://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_4007.htm#SQLRF01107
• You can check existing audit policies by executing the following command:

select * from DBA_STMT_AUDIT_OPTS where User_name is not null


UNION ALL select * from dba_priv_audit_opts where User_name is not null;
• To check if Oracle native audit works, do the following:
• Execute a SQL query that complies with the configured AUDIT settings
• Check the SYS.AUD$ system table or DBA_AUDIT_TRAIL view. Use the database account that is
allowed to read from SYS.AUD$ or is allowed to read from DBA% views or is explicitly allowed to read
the DBA_AUDIT_TRAIL view. Note that you can use the WHERE clause to filter collected events by
SQL_TEXT(DBA_AUDIT_TRAIL) or SQLTEXT(SYS.AUD$) column or by any other column from the native audit
access objects
• In case the audit_trail=XML,EXTENDED mode is used, you can check the audit data in the
V$XML_AUDIT_TRAIL view (the same columns as in the DBA_AUDIT_TRAIL static data dictionary view)
4. Provide your database user with the following grants:

begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'AUD$',
p_grantee => 'User_name',
p_privilege => 'SELECT');
end;

• For DB, EXTENDED you need the following grants:

begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'AUD$', p_grantee => 'User_name', p_privilege => 'SELECT'); end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'AUD$', p_grantee => 'User_name', p_privilege => 'DELETE'); end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'V$SESSION', p_grantee => 'User_name', p_privilege => 'SELECT'); end;

• If you're going to use the Delete Processed Logs feature, you need the following grants:

begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'AUD$',
p_grantee => 'User_name',
p_privilege => 'DELETE');
end;

5. Prepare a DataSunrise user for getting instance metadata and reading native audit logs
• Create an Oracle database user for DataSunrise
• Due to platform specifics, AWS RDS Oracle instances are managed a bit differently than a self-maintained
(e.g. EC2-hosted) database. The most notable difference is that Oracle RDS provides its own package of
procedures and functions for accomplishing regular DBA tasks like database permissions provisioning.
• By default, the Oracle database saves and stores database user names in UPPER case. This means that in
every rdsadmin.rdsadmin_util.grant_sys_object procedure call you should pass your user name in UPPER
CASE as well, except for the cases when you enclose the user name in double quotes in the CREATE
USER command, which makes the DBMS create a case-sensitive user name.
Example:

CREATE USER test1 IDENTIFIED BY testuserpwd;

An example of a wrong user name case usage (lower case):

begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'AUD$', p_grantee => 'test1', p_privilege => 'SELECT'); end;
An example of a proper user name case usage (upper case):

begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'AUD$', p_grantee => 'TEST1', p_privilege => 'SELECT'); end;

Below is the adapted version of the permission required for DataSunrise Oracle v12 database user:

CREATE USER <User_name> IDENTIFIED BY <Password>;


GRANT CREATE SESSION TO <User_name>;
GRANT CREATE TABLE TO <User_name>;
GRANT SELECT ON V$DATABASE TO <User_name>;
GRANT SELECT ON DBA_USERS TO <User_name>;
GRANT SELECT ON GV$INSTANCE TO <User_name>;

begin rdsadmin.rdsadmin_util.grant_sys_object(p_obj_name => 'COLLECTION$',p_grantee => '<User_name>',p_privilege => 'SELECT');end;
begin rdsadmin.rdsadmin_util.grant_sys_object(p_obj_name => 'OBJ$',p_grantee =>
'<User_name>',p_privilege => 'SELECT');end;
begin rdsadmin.rdsadmin_util.grant_sys_object(p_obj_name => 'COL$',p_grantee =>
'<User_name>',p_privilege => 'SELECT');end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'USER$', p_grantee =>
'<User_name>', p_privilege => 'SELECT'); end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'COLTYPE$', p_grantee =>
'<User_name>', p_privilege => 'SELECT'); end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'HIST_HEAD$', p_grantee =>
'<User_name>', p_privilege => 'SELECT'); end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'TAB$', p_grantee =>
'<User_name>', p_privilege => 'SELECT'); end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'CDB_SYNONYMS', p_grantee =>
'<User_name>', p_privilege => 'SELECT'); end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'DBA_OBJECTS', p_grantee =>
'<User_name>', p_privilege => 'SELECT'); end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'V_$SERVICES', p_grantee =>
'<User_name>', p_privilege => 'SELECT'); end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'DBA_TYPES', p_grantee =>
'<User_name>', p_privilege => 'SELECT'); end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'DBA_TYPE_ATTRS', p_grantee =>
'<User_name>', p_privilege => 'SELECT'); end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'DBA_OBJECTS', p_grantee =>
'<User_name>', p_privilege => 'SELECT'); end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'DBA_OBJECT_TABLES', p_grantee =>
'<User_name>', p_privilege => 'SELECT'); end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'DBA_TABLES', p_grantee =>
'<User_name>', p_privilege => 'SELECT'); end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'DBA_NESTED_TABLES', p_grantee =>
'<User_name>', p_privilege => 'SELECT'); end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'CDB_OBJECTS', p_grantee =>
'<User_name>', p_privilege => 'SELECT'); end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'CDB_PROCEDURES', p_grantee =>
'<User_name>', p_privilege => 'SELECT'); end;
begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'V_$SESSION', p_grantee =>
'<User_name>', p_privilege => 'SELECT'); end;

• To work with the audit_trail=DB,EXTENDED mode, you should also provide the following permissions
through the rdsadmin_util.grant_sys_object procedure, to be able to access the session statistics (track down
sessions) and the native audit storage table:

begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'AUD$', p_grantee => '<User_name>', p_privilege => 'SELECT'); end;

(Optional, recommended). To delete processed events from a native audit storage table by DataSunrise:

begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'AUD$', p_grantee => '<User_name>', p_privilege => 'DELETE'); end;
In case you have an Oracle RDS of another version, refer to section 5.2
6. Add a new Oracle instance to DataSunrise using the Audit Trail option
• For audit_trail=DB,EXTENDED mode, open DataSunrise's Web Console and navigate to Configuration →
Databases. Open the required database instance details page where audit_trail was configured.
• At the bottom section of the page, click Trail DB Audit Logs
• Select the database interface and DataSunrise server the sync with audit_trail will be established on. Save the
changes
• You can enable the Delete Processed Logs option to save space on your Oracle server by emptying the
audit storage table
• To audit users with SYSASM, SYSBACKUP, SYSDBA, SYSDG, SYSKM or SYSOPER privileges, enable Audit
System Events check box. For this, additionally configure the trails (AWS, SMB, Local, Package) in the same
way as audit_trail XML mode.
• For the audit_trail=XML,EXTENDED mode:
• Select Format Type = XML
• Fill out the remaining fields according to your instance details (interface, region, DB identifier,
authentication method). Select IAM Role if you have the proper IAM Role configured using the steps
above. Alternatively, you can use AWS Regular authentication (Access Key + Secret Key)
• Configure an Audit Rule to capture data from Oracle using DataSunrise's Audit Trail mode. You can use
an empty Object Group or Query Types Rule to test Audit Trail. The first option will capture basic CRUD
operations issued towards any objects while the second option will capture queries of any type (as long as
they're registered using Oracle AUDIT statement)
• Connect to the Oracle database directly and execute queries registered using Oracle's AUDIT
statement
• Check the data at the following places:
• DataSunrise's Web Console: Audit → Transactional Trails
• Oracle database: DBA_AUDIT_TRAIL view for DB,EXTENDED or V$XML_AUDIT_TRAIL for XML,EXTENDED
modes
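For a quick check on the Oracle side in the DB,EXTENDED mode, a query along these lines can be used (the columns shown are a common subset of DBA_AUDIT_TRAIL):

SELECT username, timestamp, action_name, obj_name, sql_text FROM DBA_AUDIT_TRAIL ORDER BY timestamp DESC;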

9.9.3 Using Audit Trail for auditing on-prem Oracle database queries

Along with Amazon RDS, you can use Audit Trail to audit queries directed to an on-premises Oracle database. Note
that you need to configure Oracle's native auditing of connections and sessions - that's compulsory!

1. Prepare Oracle Database Audit Trails:


• Connect to your Oracle DB
• The options of Oracle Audit Trail should look as follows:
Name Value
audit_sys_operations TRUE
audit_trail DB, EXTENDED
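To verify the current values of these options, you can query V$PARAMETER, for example:

SELECT name, value FROM v$parameter WHERE name IN ('audit_trail', 'audit_sys_operations');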

• Modify the audit_trail option using the following query:

ALTER SYSTEM SET audit_trail=DB,EXTENDED scope=spfile;

• If you need to audit the SYS/SYSTEM operations or SYSASM, SYSBACKUP, SYSDBA, SYSDG, SYSKM or
SYSOPER roles activity, issue:

ALTER SYSTEM SET audit_sys_operations=true scope=spfile;


• Restart the Oracle database to apply the changes:

SHUTDOWN IMMEDIATE;
STARTUP;

• Use the AUDIT statement to enable auditing of SESSION (compulsory) and the statements of interest. Example
for the SYSTEM user:

AUDIT SESSION, SELECT TABLE, INSERT TABLE, DELETE TABLE, EXECUTE PROCEDURE BY SYSTEM BY ACCESS;

• You can check existing audit policies using the following statement:

select * from DBA_STMT_AUDIT_OPTS where user_name is not null UNION ALL select * from
dba_priv_audit_opts where user_name is not null;

You can find more information on AUDIT structure in the official Oracle documentation: https://
docs.oracle.com/cd/E11882_01/server.112/e41084/statements_4007.htm#SQLRF01107
2. Add new Oracle instance to DataSunrise using the Audit Trails option
• For an existing instance:
• Navigate to Configuration → Databases and open the required database instance details page where
audit_trail was configured
• At the bottom section of the page (Proxies and Sniffers), click Trail DB Audit Logs
• Select the database interface and DataSunrise server the sync with audit_trail was established on. Save the
changes
• You can enable the Delete processed logs option to save space on your Oracle server
3. Grant the following privileges to your Oracle user:

GRANT SELECT ON SYS.AUD$ TO <User_name>;


GRANT SELECT ON V_$SESSION TO <User_name>;
GRANT SELECT ON SYS.DBA_TAB_PRIVS TO <User_name>;
GRANT SELECT ON V$PARAMETER TO <User_name>;
GRANT SELECT ON SYS.DBA_STMT_AUDIT_OPTS TO <User_name>;

To be able to use Delete Processed Logs, grant the following privilege:

GRANT DELETE ON SYS.AUD$ TO <User_name>;

4. Enable LOGON and LOGOFF auditing in your Oracle database. To do it, execute the following
command:

AUDIT SESSION BY <User_name>

You can also specify exact operations to be audited. Example:

AUDIT SESSION, SELECT TABLE, INSERT TABLE, DELETE TABLE, EXECUTE PROCEDURE BY <User_name> BY ACCESS

To log queries, execute the following query:

ALTER SYSTEM SET audit_trail='db','extended' SCOPE=spfile;

To disable auditing, you can use the following command:

NOAUDIT
For example:

NOAUDIT SESSION, SELECT TABLE, INSERT TABLE, DELETE TABLE, EXECUTE PROCEDURE BY <User_name>

5. Connect to your DataSunrise's Web Console. Create a Database profile in the Configurations → Databases or
edit an existing profile. In the Capture Mode section, click Trail DB Audit Logs
• Fill out all the required fields:
Interface element Description
Server drop-down list DataSunrise server
Format Type drop-down list    Format of the file to store audit data in (use Database for local databases)
Delete processed logs check box    Enables you to delete auditing results stored in your Oracle's SYS.AUD$ table. The data is deleted from the table as soon as it has been processed by DataSunrise. Note that if DataSunrise has been inactive for a long period of time, this operation can take a while.
Audit System Events check box    To audit users with SYSASM, SYSBACKUP, SYSDBA, SYSDG, SYSKM or SYSOPER privileges, enable the Audit System Events check box. For this, additionally configure the trails (AWS, SMB, Local, Package) in the same way as in the audit_trail XML mode.

6. Navigate to Audit and create an audit Rule for your Database instance. For auditing results, navigate to Audit →
Transactional Trails.

9.9.4 Configuring Audit Trail for Oracle SMB


DataSunrise can download audit results collected by Oracle native auditing tools from a shared remote folder
(available for Oracle 12+). To configure it, do the following:

1. You need an operational SMB server, auditing to XML files enabled on the Oracle side, and a shared folder at
your SMB server to store the logs.
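As with the XML,EXTENDED mode described above, XML auditing on the Oracle side can be enabled, for example, as follows (a database restart is required afterwards):

ALTER SYSTEM SET audit_trail=XML,EXTENDED scope=spfile;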
2. Provide your database user with the following grants:

GRANT SELECT ON SYS.V_$SESSION TO <User_name>;


GRANT SELECT ON SYS.DBA_TAB_PRIVS TO <User_name>;
GRANT SELECT ON V$PARAMETER TO <User_name>;

3. Run DataSunrise's Web Console and create an Oracle database profile in Configuration → Databases.
4. Configure Trail DB Audit Logs:
Setting Required value
Type XML
Connection SMB
Hostname Host name or IP address of the machine your shared folder is located at
Login SMB server login
Password SMB server password
Path Path to the shared folder located at your SMB server
5. Navigate to Audit and create an audit Rule for your Database instance. For auditing results, navigate to Audit →
Transactional Trails.

9.9.5 Configuring Audit Trail for Oracle Package


DataSunrise can download audit results collected by Oracle native auditing tools from a folder located at your
Oracle server. You need to create the required procedures and tables first and then get the logs using SQL queries.
Note that you need to configure Oracle's native auditing of connections and sessions - that's compulsory!
You need an operational Oracle server, a database user with the required permissions, and all the required
procedures and tables. Note that you need to grant your user access to the sys.xml_tab table and the sys.load_xml
procedure. All the actions below should be performed as the sys user (as sysdba). For Oracle 12+, you need to
select the PDB container first and create all the required objects in this container.
1. Prepare your Oracle database server by executing the commands below. Note that there are two options,
Java and PL/SQL, so execute the required commands depending on your configuration. For the Java option,
open DataSunrise's Web Console, navigate to System Settings → Additional Parameters and enable the
TrailDBLogDownloaderJavaVersion parameter (it's disabled by default).

Common settings. Execute these commands regardless of Java or PL/SQL option chosen:

CREATE OR REPLACE DIRECTORY XML_DIR AS '<Oracle_xml-logs_directory>';
CREATE TABLE XML_TAB
(
file_name VARCHAR2(255),
length NUMBER(19),
last_mod NUMBER(19),
creation_time NUMBER(19)
);
CREATE TABLE CLOB_TAB
(
id NUMBER(10),
content CLOB
);
CREATE OR REPLACE PROCEDURE LOAD_XML (p_dir IN VARCHAR2,
p_filename IN VARCHAR2,
p_offset IN INTEGER) IS
l_clob CLOB;
l_bfile BFILE := BFILENAME(p_dir, p_filename);
l_dest_offset INTEGER := 1;
l_src_offset INTEGER := p_offset;
l_bfile_csid NUMBER := 0;
l_lang_context INTEGER := 0;
l_warning INTEGER := 0;
BEGIN
DBMS_LOB.CREATETEMPORARY(l_clob, FALSE, DBMS_LOB.CALL);
DBMS_LOB.fileopen(l_bfile, DBMS_LOB.file_readonly);
DBMS_LOB.loadclobfromfile (
dest_lob => l_clob,
src_bfile => l_bfile,
amount => DBMS_LOB.lobmaxsize,
dest_offset => l_dest_offset,
src_offset => l_src_offset,
bfile_csid => l_bfile_csid ,
lang_context => l_lang_context,
warning => l_warning);
DBMS_LOB.fileclose(l_bfile);
MERGE INTO sys.clob_tab dst
USING (SELECT 1 id FROM sys.dual) src
ON (dst.id = src.id)
WHEN MATCHED THEN UPDATE SET dst.content = l_clob
WHEN NOT MATCHED THEN INSERT (dst.id, dst.content)
VALUES (1, l_clob);
DBMS_LOB.FREETEMPORARY(l_clob);
COMMIT;
END;
In case you're using PL/SQL, execute the following commands:

Important: If you are using Oracle Database 11g, you need to exclude the following arguments: onlyfnm =>
TRUE, and normfnm => TRUE.

CREATE OR REPLACE TYPE sys.ob_varchar2_arr AS TABLE OF VARCHAR2(500);


CREATE OR REPLACE FUNCTION SYS.OB_GET_FILES (p_pattern IN VARCHAR2,
p_file_separator IN VARCHAR2 := '/')
RETURN sys.ob_varchar2_arr PIPELINED AS
l_pattern VARCHAR2(32767);
l_ns VARCHAR2(32767);
BEGIN
l_pattern := RTRIM(p_pattern, p_file_separator) || p_file_separator;
sys.DBMS_BACKUP_RESTORE.searchfiles(
pattern => l_pattern,
ns => l_ns,
onlyfnm => TRUE,
normfnm => TRUE);
FOR cur_rec IN (
SELECT fname_krbmsft
FROM sys.x$krbmsft
WHERE INSTR(SUBSTR(fname_krbmsft, LENGTH(l_pattern)+1), p_file_separator) = 0)
LOOP
PIPE ROW(SUBSTR(cur_rec.fname_krbmsft, LENGTH(l_pattern)+1));
END LOOP;
RETURN;
END;
CREATE OR REPLACE PROCEDURE UPDATE_XML (p_dir IN VARCHAR2) AS
l_dir VARCHAR2(224);
l_tmpdir VARCHAR2(224);
l_fexists BOOLEAN;
l_file_length NUMBER;
l_block_size BINARY_INTEGER;
BEGIN

SELECT directory_path INTO l_dir
FROM all_directories
WHERE upper(directory_name) = p_dir;
FOR list IN (
SELECT COLUMN_VALUE AS filename
FROM TABLE(SYS.OB_GET_FILES(l_dir))
) LOOP
UTL_FILE.FGETATTR(
location => p_dir,
filename => list.filename,
fexists => l_fexists,
file_length => l_file_length,
block_size => l_block_size
);
IF NOT l_fexists THEN
l_tmpdir := CONCAT(CONCAT(l_dir, '\'), REGEXP_SUBSTR(list.filename, '^(\w+\\)+'));
execute immediate 'create or replace directory TMP_DIR as ''' || l_tmpdir || '''';
UTL_FILE.FGETATTR(
location => 'TMP_DIR',
filename => list.filename,
fexists => l_fexists,
file_length => l_file_length,
block_size => l_block_size
);
END IF;
IF l_fexists THEN
MERGE INTO sys.xml_tab dst
USING (SELECT list.filename filename FROM sys.dual) src
ON (dst.file_name = src.filename)
WHEN MATCHED THEN UPDATE SET dst.length = l_file_length
WHEN NOT MATCHED THEN INSERT (dst.file_name, dst.length)
VALUES (src.filename, l_file_length);
END IF;
END LOOP;
COMMIT;
END;

If you're using Java, execute the following commands:

CREATE OR REPLACE TYPE file_rec_t AS OBJECT
(
file_name VARCHAR2(255),
length NUMBER(19),
last_mod NUMBER(19),
creation_time NUMBER(19)
);
CREATE OR REPLACE TYPE file_list_t AS TABLE OF sys.file_rec_t;

If you're using SQL*PLUS, execute the following commands:

SET DEFINE OFF;


CREATE OR REPLACE AND COMPILE JAVA SOURCE NAMED "filelist" as
import java.io.*;
import java.sql.*;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
public class FileList
{
public static void listFilesByExtension(String pathStr, String fileExtension)
throws IOException, SQLException
{
Connection conn = DriverManager.getConnection("jdbc:default:connection:");
boolean origAutoCommit = conn.getAutoCommit();
conn.setAutoCommit(false);
Path path = Paths.get(pathStr);
List<Path> result = findByExtension(path, fileExtension);
PreparedStatement pstmt = conn.prepareStatement(
"INSERT INTO XML_TAB (file_name, length, last_mod, creation_time)" +
" values (?,?,?,?)");
for (Path p : result) {
BasicFileAttributes attr = Files.readAttributes(p, BasicFileAttributes.class);
pstmt.setString(1, p.toString().substring(pathStr.length() + 1));
pstmt.setLong(2, attr.size());
pstmt.setLong(3, attr.lastModifiedTime().toMillis());
pstmt.setLong(4, attr.creationTime().toMillis());
pstmt.addBatch();
}
pstmt.executeBatch();
try {
conn.commit();
}
catch (SQLException e) {
conn.rollback();
throw e;
}
finally {
conn.setAutoCommit(origAutoCommit);
pstmt.close();
}
}
public static List<Path> findByExtension(Path path, String fileExtension)
throws IOException
{
List<Path> result;
if (fileExtension.length() > 0 && fileExtension.charAt(0) != '.')
fileExtension = "." + fileExtension;
final String ext = fileExtension;
result = Files.find(path, Integer.MAX_VALUE,
(filePath, basicFileAttributes) -> {
return Files.isRegularFile(filePath)
&& filePath.toString().endsWith(ext);
})
.collect(Collectors.toList());
return result;
}
}
CREATE OR REPLACE PROCEDURE GET_FILE_LIST_BY_EXTENSION (dirPath IN VARCHAR2,
extension IN VARCHAR2) AS LANGUAGE JAVA
NAME 'FileList.listFilesByExtension(java.lang.String, java.lang.String)';
CREATE OR REPLACE PROCEDURE UPDATE_FILE_LIST(dirPath IN VARCHAR2,
extension IN VARCHAR2) AS PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
EXECUTE IMMEDIATE 'TRUNCATE TABLE SYS.XML_TAB';
SYS.GET_FILE_LIST_BY_EXTENSION (dirPath, extension);
COMMIT;
END;
CREATE OR REPLACE FUNCTION GET_FILES_BY_EXTENSION(dirPath IN VARCHAR2,
extension IN VARCHAR2)
RETURN sys.file_list_t PIPELINED IS
BEGIN
SYS.UPDATE_FILE_LIST(dirPath, extension);
FOR cur_rec IN (SELECT * FROM SYS.XML_TAB)
LOOP
PIPE ROW(sys.file_rec_t(file_name => cur_rec.file_name,
length => cur_rec.length,
last_mod => cur_rec.last_mod,
creation_time => cur_rec.creation_time));
END LOOP;
RETURN;
END;

2. Provide your database user with the following grants:


Common grants:

GRANT SELECT ON SYS.XML_TAB TO <User_name>;


GRANT SELECT ON SYS.CLOB_TAB TO <User_name>;
GRANT SELECT ON SYS.DBA_TAB_PRIVS TO <User_name>;
GRANT SELECT ON V$PARAMETER TO <User_name>;
GRANT SELECT ON SYS.DBA_STMT_AUDIT_OPTS TO <User_name>;
GRANT EXECUTE ON SYS.LOAD_XML TO <User_name>;

In case you're using PL/SQL:

GRANT EXECUTE ON SYS.UPDATE_XML TO <User_name>;

In case you're using Java and the TrailDBLogDownloaderJavaVersion parameter is enabled (it's disabled by
default).:

GRANT EXECUTE ON SYS.GET_FILES_BY_EXTENSION TO <User_name>;

3. Open DataSunrise's Web Console and create an Oracle database profile in Configuration → Databases
4. Configure Trail DB Audit Logs:

Setting Required value


Format Type XML
Connection Oracle package
Path Path to the folder your Oracle uses to store logs

5. Navigate to Audit and create an audit Rule for your Oracle Database instance. For auditing results, navigate to
Audit → Transactional Trails.

9.9.6 Using Oracle Unified auditing for auditing Amazon RDS Oracle database queries

This feature enables you to get auditing results collected by Oracle's Unified Auditing functionality. First and
foremost, this feature can be used in the mixed mode on Amazon RDS Oracle databases. Note that you need to
configure Oracle's native auditing of connections and sessions - that's compulsory!
Note that Unified Auditing is available for Oracle 12+ only.
1. Grant your Oracle user the following privileges:

begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'UNIFIED_AUDIT_TRAIL',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;

begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'DBMS_AUDIT_MGMT',
p_grantee => '<User_name>',
p_privilege => 'EXECUTE');
end;

begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'DBA_TAB_PRIVS',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;

begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'V$PARAMETER',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;

or

GRANT SELECT ON SYS.UNIFIED_AUDIT_TRAIL TO <User_name>;


GRANT SELECT ON SYS.DBA_TAB_PRIVS TO <User_name>;
GRANT SELECT ON V$PARAMETER TO <User_name>;
GRANT EXECUTE ON SYS.DBMS_AUDIT_MGMT TO <User_name>;

You can enable the Delete Processed Logs option to save space on your Oracle server by emptying the audit
storage table (see step 5). In this case, provide your user with the following grant:

GRANT EXECUTE ON SYS.DBMS_AUDIT_MGMT TO <User_name>;


2. Create policies:

CREATE AUDIT POLICY <policy_name>
{ {<privilege_audit_clause> [<action_audit_clause>] [<role_audit_clause>]}
| { <action_audit_clause> [<role_audit_clause> ] }
| { <role_audit_clause> }
}
[WHEN <audit_condition> EVALUATE PER {STATEMENT|SESSION|INSTANCE}]
[CONTAINER = {CURRENT | ALL}];

The following policies are mandatory:


Logon policy:

CREATE AUDIT POLICY <Policy name>
ACTIONS LOGON;

You can also specify a certain user:

CREATE AUDIT POLICY <Policy name>
ACTIONS LOGON WHEN 'SYS_CONTEXT (''USERENV'', ''CURRENT_USER'') = ''<User_name>''' EVALUATE PER SESSION;

End of session auditing:

CREATE AUDIT POLICY <Policy name>
ACTIONS LOGOFF;

Having created the mandatory policies, you can specify the objects you want to audit. You can find the full list
in the Oracle official documentation: https://docs.oracle.com/database/121/SQLRF/statements_5001.htm, https://
docs.oracle.com/database/121/DBSEG/audit_config.htm#GUID-526A09B1-0782-47BA-BDF3-17E61E546174
For example:

CREATE AUDIT POLICY table_policy
PRIVILEGES CREATE ANY TABLE, DROP ANY TABLE;

CREATE AUDIT POLICY dml_policy
ACTIONS DELETE on OT.LOCATIONS,
INSERT on OT.LOCATIONS,
UPDATE on OT.LOCATIONS,
SELECT on OT.LOCATIONS;

CREATE AUDIT POLICY logon_policy
ACTIONS LOGON
WHEN 'INSTR(UPPER(SYS_CONTEXT(''USERENV'', ''CLIENT_PROGRAM_NAME'')), ''SQLPLUS'') > 0'
EVALUATE PER SESSION;

3. Enable auditing by specifying all policies created in step 2:

AUDIT POLICY <Policy name>;

You can stop auditing with the following command:

NOAUDIT POLICY <Policy_name>;

4. Prepare your Oracle RDS Instance:


• Prepare a parameter group as follows:

Name                  Values  Allowed values               Modifiable  Source  Apply type  Description
audit_trail           DB      DB, OS, NONE, XML, EXTENDED  true        user    static      Enables system auditing
audit_sys_operations  TRUE    TRUE, FALSE                  true        user    static      Enables sys auditing

• Apply the parameter group to your RDS instance. Restart the instance for the changes to take effect
5. Add a new Oracle instance to DataSunrise using the Unified auditing option
• Open DataSunrise's Web Console and navigate to Configuration → Databases. Open the required database
Instance details page where audit_trail was configured or create a new Instance.
• At the Capture Mode section of the page, in the Mode drop-down list, select Trailing the DB Audit Logs
• Select the database interface and DataSunrise server the sync with audit_trail will be established on. In the
Format Type drop-down list, select Unified auditing. Save the changes.
• You can enable the Delete Processed Logs option to save space on your Oracle server by emptying the
audit storage table
• To audit users with SYSASM, SYSBACKUP, SYSDBA, SYSDG, SYSKM or SYSOPER privileges, enable the Audit
System Events check box. For this, additionally configure the trails (AWS, SMB, Local, Package) in the same
way as in the audit_trail XML mode
• Fill out the remaining fields according to your instance details (interface, server, periodicity of requesting
data)
• Configure an Audit Rule to capture data from Oracle using DataSunrise's Audit Trail mode. You can use an
empty Object Group or Query Types Rule to test Audit Trail.
• To ensure that auditing works, check the data in the UNIFIED_AUDIT_TRAIL table:

select SESSIONID, DBUSERNAME, CLIENT_PROGRAM_NAME, EVENT_TIMESTAMP, OBJECT_SCHEMA, OBJECT_NAME,
SQL_TEXT from UNIFIED_AUDIT_TRAIL;

To display the complete table, execute:

SELECT * FROM UNIFIED_AUDIT_TRAIL;

To display a list of enabled policies, execute:

SELECT * FROM AUDIT_UNIFIED_ENABLED_POLICIES;

• For auditing results, navigate to Audit → Transactional Trails.

9.9.7 Using Audit Trail for auditing Amazon RDS PostgreSQL database queries

This feature enables you to get auditing results collected by PostgreSQL native audit tools. First and foremost, this
feature can be used on Amazon RDS databases because DataSunrise doesn't support sniffing on RDS databases.

1. You need to prepare your RDS PostgreSQL database first. Do the following:
• Create an RDS Parameter Group and set the following parameter values:

Parameter name Value to set


log_checkpoints 0
log_connections 1
log_destination csvlog
log_disconnections 1
pgaudit.log all
pgaudit.role rds_pgaudit
shared_preload_libraries pg_stat_statements, pgaudit

• Assign the Parameter group to your RDS Postgres database instance (RDS Instance → Configuration → Modify
→ database's Additional Configuration)
• Connect to your RDS Postgres database using some client and execute the following query to create a
database role named rds_pgaudit:

CREATE ROLE rds_pgaudit;

• Reboot your RDS Postgres database instance to apply the changes


• Ensure that pgaudit is initialized by executing the following command:

show shared_preload_libraries;

You should receive the following response:

shared_preload_libraries
--------------------------
rdsutils,pg_stat_statements,pgaudit

• Create the pgaudit extension by executing the following command

CREATE EXTENSION pgaudit;

• Ensure that the pgaudit.role is set to rds_pgaudit by executing the following command:

SHOW pgaudit.role;

You should receive the following response:

pgaudit.role
------------------
rds_pgaudit

2. Create an AWS IAM Policy using the following JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "rds:DownloadDBLogFilePortion",
        "rds:DescribeDBLogFiles",
        "rds:DownloadCompleteDBLogFile",
        "rds:DescribeDbClusters"
      ],
      "Resource": [
        "arn:aws:rds:us-east-2:012345678901:cluster:test-au-pg",
        "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-pg-node-1",
        "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-pg-node-2",
        ...
        "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-pg-node-n"
      ]
    }
  ]
}

• Attach the policy to your IAM Role (Policies → Policy actions → Attach) and attach the Role to your
DataSunrise EC2 machine (EC2 machine → Instance Settings → Attach/Replace IAM Role)
• In case of a cluster, it's necessary to list all nodes of the cluster in the resources
• In case of a cluster, it is necessary to use rds:DescribeDbClusters in Action
• In case of a cluster, it is required to configure only the cluster parameter group instead of each node
• To create a pgaudit role and pgaudit extension for Aurora PostgreSQL it is necessary to connect to Writer
Node
• In case of creating a Read Replica (Regular RDS) or a Reader node of Aurora Cluster it is not necessary to
create anything in them because all the settings (pgaudit role and pgaudit extension) will be replicated from
original instance or writer node.
3. Connect to your DataSunrise's Web Console. Create a Database profile in the Configuration → Databases. In
the Capture Mode drop-down list, select Trail DB Audit Logs. Note that you can also create a Trailing task for an
existing Database profile. For this, navigate to the Capture Mode section and click Trail DB Audit Logs.
• Fill out all the required fields:
Interface element Description
Server DataSunrise server
Format Type Format of the file to store audit data in
Region AWS Region your target database is located in
Identifier Database Instance name
Authentication method • IAM Role: use the attached IAM role for authentication
• Regular: authentication using AWS Access/Secret Key

Instance Interface DataSunrise EC2 machine's address

4. Navigate to Audit and create an audit Rule for your Database instance. For auditing results, navigate to Audit →
Transactional Trails.

9.9.8 Using Audit Trail for auditing standalone PostgreSQL database queries

This feature enables you to get auditing results collected by PostgreSQL native audit tools.
To configure Audit DB Trail for your local PostgreSQL database, do the following:
1. Download the pgaudit extension from here: https://github.com/pgaudit/pgaudit. Note that you may need to
install the following components:

sudo apt install postgresql-server-dev-<PG version>


sudo apt-get install libssl-dev
sudo apt-get install libkrb5-dev
2. Assemble pgaudit. First, specify the path to PG_CONFIG. You can pinpoint its location with the following
command:

pg_config | grep BINDIR

Then follow the instruction: https://github.com/pgaudit/pgaudit


You may need to install additional components:

sudo apt install gcc
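A typical PGXS-style build of pgaudit looks roughly like the following; the branch name and pg_config path are illustrative and depend on your PostgreSQL version and distribution:

git clone https://github.com/pgaudit/pgaudit.git
cd pgaudit
git checkout REL_12_STABLE   # pick the branch matching your PostgreSQL version
make install USE_PGXS=1 PG_CONFIG=/usr/lib/postgresql/12/bin/pg_config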

3. You can find the logs in the following folder by default:

cat /etc/postgresql/12/main/postgresql.conf | grep data_directory

Note: only super admins or file owners can read the logs. To enable other users to read the logs, you need to
save logs in another folder. You can solve this issue by doing something like the following:

mkdir /var/log/psql_logs
chmod 755 /var/log/psql_logs
chown postgres:postgres /var/log/psql_logs

Then edit postgresql.conf in the following way:

log_file_mode = 0755

So other users will be able to read the logs. Refer to the following page for details: https://www.postgresql.org/
docs/9.1/runtime-config-logging.html

4. Edit the PostgreSQL configuration. Locate the following file: /etc/postgresql/12/main/postgresql.conf and
uncomment and set the following settings:

log_destination ='csvlog'
logging_collector = on
log_directory = '/var/log/psql_logs'
log_file_mode = 0755
log_checkpoints = off
log_connections = on
log_disconnections = on
pgaudit.role = 'pgaudit_role' # may be another
pgaudit.log = all
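If shared_preload_libraries does not already include pgaudit, set it in the same file as well; otherwise the check in step 5 will not list pgaudit (pg_stat_statements is optional here):

shared_preload_libraries = 'pg_stat_statements, pgaudit'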

5. Connect to your PostgreSQL using some client and execute the following query to create a database role:

CREATE ROLE pgaudit_role;

Restart your PostgreSQL database instance to apply the changes


Ensure that pgaudit is initialized by executing the following command:

show shared_preload_libraries;

You should receive the following message:

shared_preload_libraries
--------------------------
pg_stat_statements,pgaudit
6. Create the pgaudit extension with the following command:

CREATE EXTENSION pgaudit;

Ensure that the pgaudit.role is set by executing the following command:

SHOW pgaudit.role;

You should receive the following response:

pgaudit.role
------------------
pgaudit_role

7. Navigate to the Configuration → Databases section of the Web Console and create a new PostgreSQL
database Instance.
8. In the Capture Mode section, select Local Folder in the Connection drop-down list; specify the path to the
folder PostgreSQL stores its logs.
9. If you need to delete PostgreSQL logs automatically depending on your settings, do the following:
• Grant your user (datasunrise here) the permission to delete logs (Linux):

sudo chmod -R 775 /var/log/psql_logs


sudo usermod -a -G postgres datasunrise
sudo service datasunrise restart

• In the Log Files Cleaning Options section of your Local Folder Trailings settings, set Limit Total Size of Log
Files (Mbytes) or/and Time Period to Store Log Files.

Note: set Limit Total Size of Log Files carefully since there is a chance of deleting the current file. This would
happen if you set Limit Total Size... to less than the default log file size. Therefore it's worth setting Limit Total
Size... to at least double the Default Log File Size.

10. Configure an Audit Rule to capture data from your PostgreSQL using DataSunrise's Audit Trail mode. For
auditing results, navigate to Transactional Trails section of the Web Console

9.9.9 Using Audit Trail for auditing MS SQL Server database queries

This feature enables you to get auditing results collected by MS SQL Server native audit tools. Note that you will
need to create a Server Audit specification and a Database Audit specification for that - It's compulsory!

DataSunrise follows the natural model for data auditing:


- A session is established over a TCP connection;
- Events (SQL statements or OPERATIONS) take place within the session;
Thus, it's a compulsory requirement to configure native audit to log connections
and/or sessions using native auditing options for each supported DBMS type.

1. First, you need to create a file to collect audit data in (audit target):

CREATE SERVER AUDIT <Server_name> TO FILE ( FILEPATH = '<Path_to_the_file>', MAXSIZE=500MB );

For example:
Windows:

CREATE SERVER AUDIT audi_1 TO FILE ( FILEPATH = 'C:\Program Files\Microsoft SQL Server\120\audit\',
MAXSIZE=500MB );

Linux:

CREATE SERVER AUDIT audi_1 TO FILE ( FILEPATH = '/var/opt/mssql/audit/', MAXSIZE=500MB );

2. Create Server Audit specification. It's compulsory. It includes events to log:

CREATE SERVER AUDIT SPECIFICATION <Audit_specification_name>
FOR SERVER AUDIT <Server_name>
ADD ( failed_login_group ),
ADD ( successful_login_group ),
ADD ( logout_group ),
ADD ( transaction_group )
WITH ( STATE = ON )

Example:

CREATE SERVER AUDIT SPECIFICATION audi_server_login_transaction
FOR SERVER AUDIT audi_1
ADD ( failed_login_group ),
ADD ( successful_login_group ),
ADD ( logout_group ),
ADD ( transaction_group )
WITH ( STATE = ON )

3. Create Database Audit specification. It's compulsory. It includes events to log:

CREATE DATABASE AUDIT SPECIFICATION <Database_Audit_specification_name>
FOR SERVER AUDIT <Server_name>
ADD ( DATABASE_CHANGE_GROUP ),
ADD ( DATABASE_LOGOUT_GROUP ),
ADD ( DATABASE_OBJECT_CHANGE_GROUP ),
ADD ( FAILED_DATABASE_AUTHENTICATION_GROUP ),
ADD ( SCHEMA_OBJECT_CHANGE_GROUP ),
ADD ( SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP ),
ADD ( SELECT, INSERT, UPDATE, DELETE, EXECUTE ON DATABASE::<Database name> BY public )
WITH ( STATE = ON )

Example:

CREATE DATABASE AUDIT SPECIFICATION audi_database_crud_t1
FOR SERVER AUDIT audi_1
ADD ( DATABASE_CHANGE_GROUP ),
ADD ( DATABASE_LOGOUT_GROUP ),
ADD ( DATABASE_OBJECT_CHANGE_GROUP ),
ADD ( FAILED_DATABASE_AUTHENTICATION_GROUP ),
ADD ( SCHEMA_OBJECT_CHANGE_GROUP ),
ADD ( SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP ),
ADD ( SELECT, INSERT, UPDATE, DELETE, EXECUTE ON DATABASE::WORK BY public )
WITH ( STATE = ON )

4. Run audit target:

ALTER SERVER AUDIT <Server_name> WITH (STATE = ON);


For example:

ALTER SERVER AUDIT audi_1 WITH (STATE = ON);

5. Now you can see captured audit events:

SELECT * FROM sys.fn_get_audit_file ('<Path_to_the_file>*', default, default)

For example:

SELECT * FROM sys.fn_get_audit_file ('/var/opt/mssql/audit/*', default, default)

6. Connect to your DataSunrise's Web Console. Create a Database profile in the Configurations → Databases. In
the Mode drop-down list, select Trail DB Audit Logs. Note that you can also create a Trailing task for an existing
Database profile. For this, navigate to the Capture Mode section and click Trail DB Audit Logs
7. Insert the following lines into the Log files path/url field:
• Regular MS SQL instance: <path to the file you store audit logs in>/*
• Amazon RDS: <according to the Amazon documentation>/*
• Microsoft Azure: <Azure URL>/*

Note: you can check what folder your MS SQL Server uses to store auditing results in the
sys.server_file_audits view. Refer to the following page for details: https://docs.microsoft.com/en-us/sql/relational-
databases/system-catalog-views/sys-server-file-audits-transact-sql?view=sql-server-ver15
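For example, a query along these lines lists the configured file audits and their target folders:

SELECT name, log_file_path, log_file_name FROM sys.server_file_audits;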

8. Navigate to Audit and create an audit Rule for your Database instance. Select an Instance interface and a
DataSunrise server to run the task on. For auditing results, navigate to Audit → Transactional Trails.

9.9.10 Using Audit Trail for auditing Amazon RDS MS SQL Server database queries

This feature enables you to get auditing results collected by MS SQL Server native audit tools.

1. First, you need to prepare your AWS RDS Instance. We recommend using Amazon Linux2 for hosting
DataSunrise.
2. Create a custom option group for your RDS MS SQL. Configure an AWS RDS Service Role required for MS
SQL's SQLSERVER_AUDIT option (there are two options):
• Create a new IAM Role for the AWS RDS service using the corresponding drop-down list in the IAM Role
subsection. You need to provide details on the S3 bucket where the generated logs will be stored, based on the
log Retention settings
• The AWS Account User should be authorized to create IAM Policies and Service Roles and to attach Policies to
Roles
• In case of insufficient privileges, ask your IAM Service Administrator to grant you the
missing privileges to create the IAM Service Role required for the MS SQL Native Audit option
• If you want to create the IAM Role in advance, use the following example policy with the RDS Service as a
trusted entity:
IAM Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [ "s3:GetBucketLocation", "s3:GetBucketACL", "s3:ListBucket" ],
      "Resource": [ "arn:aws:s3:::<S3_bucket_name>" ]
    },
    {
      "Effect": "Allow",
      "Action": [ "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload", "s3:PutObject" ],
      "Resource": [ "arn:aws:s3:::<S3_bucket_name>/<optional_prefix>" ]
    }
  ]
}
AssumeRolePolicyDocument:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": { "Service": "rds.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}

Note: the details on IAM Role policy and other topics can be found in the AWS official guide
on SQL Server Audit configuration: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/
Appendix.SQLServer.Options.Audit.html

3. Add SQL Server Audit option to the Option group:


4. Apply your Option group to your RDS MS SQL Instance by modifying it through the RDS Additional
Configuration menu
5. Prepare your MS SQL Server Audit and Database Audit specifications. Consider the following:
• Don't exceed the maximum number of supported server audits per Instance (50)
• Instruct SQL Server to write data to a binary file
• Don't use RDS_ as a prefix in the server audit name
• For FILEPATH, specify <D:\rdsdbdata\SQLAudit>\*
• For MAXSIZE, specify a size between 2 MB and 50 MB
• Don't configure MAX_ROLLOVER_FILES or MAX_FILES
• Don't configure SQL Server to shut down the DB Instance if it fails to write the audit record (By default it's
configured as CONTINUE (on failure)).
6. Connect to the MS SQL RDS using a high-privileged user and execute the following queries, taking into account
the notes from the previous step
• CREATE SERVER AUDIT dsunrs_audit TO FILE ( FILEPATH = 'D:\rdsdbdata\SQLAudit', MAXSIZE=50MB );

• CREATE SERVER AUDIT SPECIFICATION dsunrs_server_login_transaction
FOR SERVER AUDIT dsunrs_audit
ADD ( failed_login_group ),
ADD ( successful_login_group ),
ADD ( logout_group ),
ADD ( transaction_group )
WITH ( STATE = ON )

• CREATE DATABASE AUDIT SPECIFICATION dsunrs_database_crud_ops_dbname
FOR SERVER AUDIT dsunrs_audit
ADD ( DATABASE_CHANGE_GROUP ),
ADD ( DATABASE_LOGOUT_GROUP ),
ADD ( DATABASE_OBJECT_CHANGE_GROUP ),
ADD ( FAILED_DATABASE_AUTHENTICATION_GROUP ),
ADD ( SCHEMA_OBJECT_CHANGE_GROUP ),
ADD ( SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP ),
ADD ( SELECT, INSERT, UPDATE, DELETE, EXECUTE ON DATABASE::<database name> BY public ) WITH
( STATE = ON )

Note: you need to create a Database Audit Specification for each database that you need to monitor for
database activity. Note that one Database Audit Specification can be attached to one Server Audit
only. If you need to audit multiple databases, you have to create additional Server Audit units and
Database Audit Specifications. You can also reduce the number of audited events generated by SQL
Server by editing the example Audit Specifications provided above. Please refer to the Microsoft official
documentation for full coverage: https://docs.microsoft.com/en-us/sql/relational-databases/security/
auditing/sql-server-audit-database-engine?view=sql-server-ver15

7. Enable your Server Audit unit:

ALTER SERVER AUDIT dsunrs_audit WITH (STATE = ON);


8. Provide access to the rds_fn_get_audit_file table-valued function:
• Prerequisite: you need to create a DataSunrise user and map it on the corresponding database user. You can
find more information on SQL Server database user creation in 5.3.1 Granting Necessary Privileges to an MS
SQL Server User (also an AD user) section of DataSunrise User Guide
• You can provide access to the corresponding securable via the database Query/Administration tool (SSMS)
or by issuing the following query:

USE MSDB;
CREATE USER <DATASUNRISE_DATABASE_USER> FOR LOGIN <DATASUNRISE_SERVER_LOGIN>;
GRANT SELECT ON DBO.RDS_FN_GET_AUDIT_FILE TO <DATASUNRISE_DATABASE_USER>;
GO

9. Prepare your DataSunrise server and configure your RDS MS SQL Instance for passive logging:
• Install unixODBC on your DataSunrise server
• Connect to your DataSunrise's Web Console. Create a Database profile in the Configurations → Databases.
In the Capture Mode drop-down list, select Trail DB Audit Logs. Note that you can also create a Trailing task
for an existing Database profile. For this, navigate to the Capture Mode section and click Trail DB Audit
Logs

Note: you need to install Microsoft SQL Server ODBC Driver 17 (recommended). For installation procedure
on Linux refer to the following document: https://docs.microsoft.com/en-us/sql/connect/odbc/linux-mac/
installing-the-microsoft-odbc-driver-for-sql-server?view=sql-server-ver15#redhat17


Fill out all the required fields. You can leave Request data... at its default value. For Log files path/url, provide
the same path as in your SQL Server Audit specification combined with *.sqlaudit
(D:\rdsdbdata\SQLAudit\*.sqlaudit).
10. Navigate to Audit and create an audit Rule for your Database instance. Select an Instance interface and a
DataSunrise server to run the task on. For auditing results, navigate to Audit → Transactional Trails. Events
may be displayed with a slight lag due to SQL Server engine handling the audit event.

9.9.11 Using the MariaDB Audit Plugin for auditing MySQL/MariaDB database queries on AWS

Amazon RDS supports using the MariaDB Audit Plugin on MySQL database instances. The plugin records database
activity such as users logging on to the database, queries run against the database, and more. The record of
database activity is stored in a log file.

1. Add the MariaDB Audit Plugin to your DB instance.


• Create a new option group or use an existing one.
• Add the MARIADB_AUDIT_PLUGIN option to your option group and configure the option settings
• Apply the option group to a new or existing DB instance.
more info: https://docs.amazonaws.cn/en_us/AmazonRDS/latest/UserGuide/
Appendix.MySQL.Options.AuditPlugin.html
2. Configure the option settings (a CLI sketch follows this list):
• server_audit_query_log_limit - maximum length of an audited query. It is recommended to set it to 10485760
(instead of the default 1024) to allow the database to save the full text of queries.
• server_audit_file_path - keep the default value
• server_audit_file_rotate_size - log file size
• server_audit_file_rotations - number of rotated files
• server_audit_events - "CONNECT, QUERY" (the default) logs everything; CONNECT is mandatory, the others are
optional
• server_audit_incl_users - names of users that should be audited. All user actions are logged if not specified
• server_audit_excl_users - names of users that shouldn't be audited
• server_audit_logging - run the feature (ON)
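As a sketch only, the option and a setting from the list above might be added from the AWS CLI roughly as follows; the option group name is hypothetical and the remaining settings can be added the same way (verify the exact syntax against the AWS CLI documentation):

aws rds add-option-to-option-group \
    --option-group-name my-mysql-audit-options \
    --apply-immediately \
    --options "OptionName=MARIADB_AUDIT_PLUGIN,OptionSettings=[{Name=SERVER_AUDIT_QUERY_LOG_LIMIT,Value=10485760}]"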
3. Create an AWS IAM Policy using the following JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "rds:DownloadDBLogFilePortion",
        "rds:DescribeDBLogFiles",
        "rds:DownloadCompleteDBLogFile",
        "rds:DescribeDbClusters"
      ],
      "Resource": [
        "arn:aws:rds:us-east-2:012345678901:cluster:test-au-mysql",
        "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-mysql-node-1",
        "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-mysql-node-2",
        ...
        "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-mysql-node-n"
      ]
    }
  ]
}

• Attach the policy to your IAM Role (Policies → Policy actions → Attach) and attach the Role to your
DataSunrise EC2 machine (EC2 machine → Instance Settings → Attach/Replace IAM Role)
• In case of cluster it's necessary to list all nodes of the cluster in the resources
• In case of cluster it is necessary to use rds:DescribeDbClusters in Action
• In case of cluster it is required to configure only the cluster parameter group instead of each node
• To create a mysql role for Aurora MySQL, it is necessary to connect to Writer Node
• In case of creating a Read Replica (Regular RDS) or a Reader node of Aurora Cluster it is not necessary to
create anything in them because all the settings (the role) will be replicated from original instance or Writer
node.
4. Connect to your DataSunrise's Web Console. Create a Database profile in the Configuration → Databases. In
the Capture Mode drop-down list, select Trail DB Audit Logs. Note that you can also create a Trailing task for an
existing Database profile. For this, navigate to the Capture Mode section and click Trail DB Audit Logs.
5. Navigate to Audit and create an audit Rule for your Database instance. Select an Instance interface and a
DataSunrise server to run the task on. For auditing results, navigate to Audit → Transactional Trails.

9.9.12 Using Audit DB Trail General logs for auditing MySQL database queries on AWS

To get audit data in the MySQL General Logs format, do the following:

1. In your AWS MySQL Parameter group, set the following parameters as shown below:

general_log = 1
log_output=FILE
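If you prefer the AWS CLI, these parameters can be set on an existing parameter group roughly as follows (the parameter group name is hypothetical):

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mysql-params \
    --parameters "ParameterName=general_log,ParameterValue=1,ApplyMethod=immediate" \
                 "ParameterName=log_output,ParameterValue=FILE,ApplyMethod=immediate"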

2. Create an AWS IAM Policy using the following JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "rds:DownloadDBLogFilePortion",
        "rds:DescribeDBLogFiles",
        "rds:DownloadCompleteDBLogFile",
        "rds:DescribeDbClusters"
      ],
      "Resource": [
        "arn:aws:rds:us-east-2:012345678901:cluster:test-au-mysql",
        "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-mysql-node-1",
        "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-mysql-node-2",
        ...
        "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-mysql-node-n"
      ]
    }
  ]
}

• Attach the policy to your IAM Role (Policies → Policy actions → Attach) and attach the Role to your
DataSunrise EC2 machine (EC2 machine → Instance Settings → Attach/Replace IAM Role)
• In case of cluster it's necessary to list all nodes of the cluster in the resources
• In case of cluster it is necessary to use rds:DescribeDbClusters in Action
• In case of cluster it is required to configure only the cluster parameter group instead of each node
• To create a mysql role for Aurora MySQL, it is necessary to connect to Writer Node
• In case of creating a Read Replica (Regular RDS) or a Reader node of Aurora Cluster it is not necessary to
create anything in them because all the settings (the role) will be replicated from original instance or Writer
node.
3. Create an Option Group for your database and set the following parameters' values:
• SERVER_AUDIT_ROTATE_SIZE: not less than 1000000
• SERVER_AUDIT_FILE_ROTATIONS: not less than 10
4. Connect to your DataSunrise's Web Console. Create a Database profile in the Configuration → Databases. In
the Capture Mode drop-down list, select Trail DB Audit Logs. Note that you can also create a Trailing task for an
existing Database profile. For this, navigate to the Capture Mode section and click Trail DB Audit Logs. Select
General Log in the Format Type drop-down list.
5. Navigate to Audit and create an audit Rule for your Database instance. Select an Instance interface and a
DataSunrise server to run the task on. For auditing results, navigate to Audit → Transactional Trails.

9.9.13 Using Audit Trail for auditing standalone MySQL database queries

This feature enables you to get auditing results collected by MySQL native audit tools.

1. Add the MariaDB Audit Plugin to your DB instance.


• You can get it from a MariaDB distribution installed on your system or downloaded from the official web site
• Copy the MariaDB Audit Plugin file to the MySQL plugins folder:

sudo cp /usr/lib/x86_64-linux-gnu/mariadb19/plugin/server_audit.so /usr/lib/mysql/plugin/server_audit.so

2. Run the MySQL console as the root user:

mysql -u root -p

3. Install the plugin by executing the following command:

INSTALL PLUGIN SERVER_AUDIT SONAME 'server_audit.so';

4. Create a folder to store logs and make mysql:adm (user:group) its owner:

sudo mkdir /var/log/mysql/server_audit
sudo chown mysql:adm /var/log/mysql/server_audit

5. Open the /etc/my.cnf.d/server.cnf file and add the following lines to its [mysqld] section:

plugin_load_add = server_audit.so
server_audit_events = CONNECT,QUERY
server_audit_file_path = /var/log/mysql/server_audit/server_audit.log
server_audit_file_rotate_size = 1073741824
server_audit_file_rotations = 4
server_audit_logging = ON
server_audit_output_type = file
server_audit_query_log_limit = 8192

6. Restart your MySQL server:

sudo service mysql restart

7. Ensure that MySQL auditing works: connect to your database server with a client application and execute some
queries. If everything is OK, MySQL will create the following file: /var/log/mysql/server_audit/server_audit.log
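For example, a minimal check could look like this:

mysql -u root -p -e "SELECT 1;"
sudo tail -n 5 /var/log/mysql/server_audit/server_audit.log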
8. Learn what group the log files belong to (adm or mysql as a rule):

sudo ls -l /var/log/mysql/server_audit

9. Add datasunrise user to the group the log files belong to (see the previous step):

sudo usermod -aG adm datasunrise

Note that if you don't want to do it because of security concerns, use Samba.
10. Grant your user the privilege to read the logs:

sudo chmod -R 775 /var/log/mysql

11. Navigate to the Configuration → Databases section of the Web Console and create a new MySQL database
Instance.
12. In the Capture Mode section, select Local Folder in the Connection drop-down list; specify the path to the
folder MySQL stores its logs.
13. Configure an Audit Rule to capture data from your MySQL using DataSunrise's Audit Trail mode. For auditing
results, navigate to Transactional Trails section of the Web Console.

9.9.14 Using Audit Trail for auditing standalone MariaDB database queries

The following actions should be taken only if the target database and DataSunrise are installed on the same
machine. This feature enables you to get auditing results collected by MariaDB native audit tools.

Note: the Audit Plugin mentioned here is just an example, you can also use other methods (NFS for example) to get
audit data from MariaDB.

1. Install the MariaDB Audit Plugin by executing the following command:

INSTALL PLUGIN SERVER_AUDIT SONAME 'server_audit.so';

2. Create a folder to store logs and make mysql:adm (user:group) its owner:

sudo mkdir /var/log/mysql/server_audit
sudo chown mysql:adm /var/log/mysql/server_audit

3. Open the /etc/mysql/mariadb.conf.d/server_audit.cnf file (other possible locations are: /etc/my.cnf, /etc/my.cnf.d/
server.cnf, /etc/my.cnf.d/mariadb-server.cnf) and add the following lines to its [mariadb] section:

plugin_load_add = server_audit.so
server_audit_events = CONNECT,QUERY
server_audit_file_path = /var/log/mysql/server_audit/server_audit.log
server_audit_file_rotate_size = 1073741824
server_audit_file_rotations = 4
server_audit_logging = ON
server_audit_output_type = file
server_audit_query_log_limit = 8192

4. Restart your MariaDB server:

sudo service mysql restart

5. Ensure that MariaDB auditing works: connect to your database server with a client application and execute
some queries. If everything is OK, MariaDB will create the following file: /var/log/mysql/server_audit/
server_audit.log
6. Learn what group the log files belong to (adm or mysql as a rule):

sudo ls -l /var/log/mysql/server_audit

7. Add datasunrise user to the group the log files belong to (see the previous step):

sudo usermod -aG adm datasunrise

Note that if you don't want to do it because of security concerns, use Samba.
8. Grant your user the privilege to read the logs:

sudo chmod -R 775 /var/log/mysql

9. Navigate to the Configuration → Databases section of the Web Console and create a new MariaDB database
Instance.
10. In the Capture Mode section, select Local Folder in the Connection drop-down list; specify the path to the
folder MariaDB stores its logs.
11. Configure an Audit Rule to capture data from your MariaDB using DataSunrise's Audit Trail mode. For auditing
results, navigate to Transactional Trails section of the Web Console.

9.9.15 Using Audit Trail for auditing standalone MySQL/MariaDB database queries using Samba

This feature enables you to get auditing results collected by MySQL/MariaDB native audit tools. Note that you can
also use other methods (NFS for example) to accomplish that task.

1. Add the MariaDB Audit Plugin to your DB instance.


• You can get it from a MariaDB distribution installed on your system or downloaded from the official web site
• Copy the MariaDB Audit Plugin file to the MySQL plugins folder:

sudo cp /usr/lib/x86_64-linux-gnu/mariadb19/plugin/server_audit.so /usr/lib/mysql/plugin/server_audit.so
2. Run the MySQL console as the root user:

mysql -u root -p

3. Install the plugin by executing the following command:

INSTALL PLUGIN SERVER_AUDIT SONAME 'server_audit.so';

4. Create a folder to store logs and make mysql:adm (user:group) its owner:

sudo mkdir /var/log/mysql/server_audit
sudo chown mysql:adm /var/log/mysql/server_audit

5. Open the /etc/my.cnf.d/server.cnf file and add the following lines to its [mysqld] section:

plugin_load_add = server_audit.so
server_audit_events = CONNECT,QUERY
server_audit_file_path = /var/log/mysql/server_audit/server_audit.log
server_audit_file_rotate_size = 1073741824
server_audit_file_rotations = 4
server_audit_logging = ON
server_audit_output_type = file
server_audit_query_log_limit = 8192

6. Restart your MySQL server:

sudo service mysql restart

7. Ensure that MySQL auditing works: connect to your database server with a client application and execute some
queries. If everything is OK, MySQL will create the following file: /var/log/mysql/server_audit/server_audit.log
8. Learn what group the log files belong to (adm or mysql as a rule):

sudo ls -l /var/log/mysql/server_audit

9. Install samba (server and client)

sudo aptitude install samba smbclient

10. Configure samba by editing the /etc/samba/smb.conf file in the following way:

[global]
workgroup = WORKGROUP
security = user
map to guest = bad user
wins support = no
dns proxy = no
log file = /var/log/samba/log.%m
max log size = 65536
logging = file

[server_audit]
path = /var/log/mysql/server_audit/
valid users = smbuser
guest ok = no
browsable = yes

11. Restart samba:

sudo service smbd restart


12. Add a user named smbuser:

sudo useradd smbuser -M -s /sbin/nologin

13. Add smbuser to the group the logs belong to (see step 8):

sudo usermod -aG adm smbuser

14. Set a user password:

sudo smbpasswd -a smbuser

15. Ensure that everything works OK:

smbclient \\\\<host name or IP address>\\server_audit -U smbuser

16. Grant your user the privilege to read the logs:

sudo chmod -R 775 /var/log/mysql

17. Check samba and MySQL by getting a file list:

ls

If a file list is displayed, everything is configured correctly


18. Navigate to the Configuration → Databases section of the Web Console and create a new MySQL database
Instance.
19. In the Capture Mode section, select Local Folder in the Connection drop-down list; specify the path to the
folder MySQL stores its logs ( /server_audit ).
20. Configure an Audit Rule to capture data from your MySQL using DataSunrise's Audit Trail mode. For auditing
results, navigate to Transactional Trails section of the Web Console.

9.9.16 Using Audit Trail for auditing Snowflake database queries

This feature enables you to get auditing results collected by Snowflake native audit tools.
To configure Audit DB Trail for your Snowflake database, do the following:
1. Log in to your Snowflake as the ACCOUNTADMIN user or another user privileged to grant privileges. Create the
required VIEW by executing the following query:

CREATE DATABASE <DATABASE_NAME>;


USE <DATABASE_NAME>;
CREATE VIEW <VIEW_NAME> AS
SELECT DISTINCT
LOGIN_EVENT_ID,
SESSION_ID,
CREATED_ON,
CLIENT_APPLICATION_ID
FROM SNOWFLAKE.ACCOUNT_USAGE.SESSIONS;

2. Grant your Snowflake role the following privileges:

GRANT SELECT ON VIEW <DATABASE_NAME>.<SCHEMA_NAME>.<VIEW_NAME> TO ROLE <ROLE_NAME>;
GRANT USAGE ON DATABASE <DATABASE_NAME> TO ROLE <ROLE_NAME>;
GRANT USAGE ON SCHEMA <DATABASE_NAME>.<SCHEMA_NAME> TO ROLE <ROLE_NAME>;
Grant your Role the MONITOR privilege on all available database users to give your Role the ability to track all
user activity:

GRANT MONITOR ON USER <USER_NAME> TO ROLE <ROLE_NAME>;
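To confirm that the Role can read the view you created (placeholder names as above), you can run, for example:

USE ROLE <ROLE_NAME>;
SELECT * FROM <DATABASE_NAME>.<SCHEMA_NAME>.<VIEW_NAME> LIMIT 10;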

3. Connect to your DataSunrise's Web Console. Create a Database profile in the Configuration → Databases. In
the Capture Mode drop-down list, select Trail DB Audit Logs. Note that you can also create a Trailing task for an
existing Database profile. For this, navigate to the Capture Mode section and click Trail DB Audit Logs:
• Select the database interface and DataSunrise server the sync with audit trail will be performed on. Save the
changes
• Specify the database, schema, VIEW you created before and the Role that you granted the required privileges
before
4. Configure an Audit Rule to capture data from Snowflake using DataSunrise's Audit Trail mode. You can use an
empty Object Group or Query Types Rule to test Audit Trail. Note: wait for about 120 minutes for the audited
queries to be displayed at the Transactional Trails section of the Web Console. Such a delay is caused by
Snowflake itself because Snowflake refreshes session information every 120 minutes.

9.9.17 Using Audit Trail for auditing AWS S3 queries


This feature enables you to get auditing results collected by AWS native auditing functionality. First and foremost
this is useful for auditing AWS S3 bucket access.
To enable DataSunrise to get AWS S3 auditing results, do the following:
1. Prepare an S3 bucket to store the logs in:
• Navigate to your bucket's Properties → Server access logging, click Edit and enable Server access logging
• In the Target bucket field, specify a bucket to store audit logs in

Important: DO NOT select the S3 bucket you're going to audit as the one you will use for storing audit log
files. This will lead to auditing of unnecessary DataSunrise and Amazon activity and may pose a threat to your
S3 security.

• Save changes
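If you prefer the AWS CLI over the console, server access logging can be enabled roughly as follows (the bucket names and prefix are placeholders):

aws s3api put-bucket-logging \
    --bucket <audited-bucket-name> \
    --bucket-logging-status '{"LoggingEnabled": {"TargetBucket": "<log-bucket-name>", "TargetPrefix": "s3-access-logs/"}}'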
2. Add a new Amazon S3 instance to DataSunrise using the Audit Trail option:
• Open DataSunrise's Web Console and navigate to Configuration → Databases. Create new Amazon S3
database Instance or open an existing database instance details page where Audit Trail was configured
• At the bottom section of the page, click Trail DB Audit Logs
3. Configure an Audit Rule to capture data from S3 using DataSunrise's Audit Trail mode
• For auditing results, navigate to Audit → Transactional Trails.

9.9.18 Using Audit Trail for auditing standalone Neo4J database queries

This feature enables you to get auditing results collected by Neo4J native audit tools. Note that DataSunrise doesn't
support Neo4J installed on Windows so the following guide is applicable to Linux AMI.
To configure Audit DB Trail for your Neo4J database, do the following:
1. Open the following file: /etc/neo4j/neo4j.conf and uncomment the following lines in it:

dbms.connector.bolt.enabled=true
dbms.connector.bolt.tls_level=DISABLED
dbms.connector.bolt.listen_address=:7687
dbms.connector.bolt.advertised_address=:7687
and
dbms.logs.query.rotation.size=20k
dbms.logs.query.rotation.keep_number=7

2. Run your Neo4J server and set an initial password:

sudo systemctl start neo4j
bin/neo4j-admin set-initial-password <password>

Check password and authorization:

sudo neo4j-admin set-initial-password <password>

3. If necessary, execute queries you want to audit. You can find the logs in the following folder: /var/log/neo4j/
4. Navigate to the Configuration → Databases section of the Web Console and create a new Neo4J database
Instance.
5. In the Capture Mode section, select Local Folder in the Connection drop-down list; In the Mode drop-down list,
select Trailing the db audit logs; specify the path to the folder Neo4J stores its logs (/var/log/neo4j/ by default)
6. Configure an Audit Rule to capture data from your Neo4J using DataSunrise's Audit Trail mode. For auditing
results, navigate to Transactional Trails section of the Web Console.

9.9.19 Solving the Missing Grants Issue


You may face an issue caused by missing user permissions required for performing DB audit trailing from a local
folder. Let's assume that you need to configure local trailing for MySQL, the logs are stored in /var/log/mysql, and
the dsuser user is used to configure trailing. To solve this issue, do the following:
1. Determine the owner group of the folder where the logs are stored (the logs folder):

stat -c %G /var/log/mysql

Most probably it is mysql for Fedora and adm for Ubuntu.


2. Ensure that the logs folder owner group is permitted to read and traverse the logs folder:

stat -c %A /var/log/mysql

(r (read) and x (execute) privileges should be displayed)


Or:

stat -c %a /var/log/mysql

(the second number in the line displaying rights should be 5 or 7)


3. If the owner group is not granted the required privileges, grant them:

sudo chmod -R 755 /var/log/mysql

4. Ensure that dsuser is included in the logs folder owner group:

sudo groupmems -g mysql -l

If the user is not included in the owner group, add this user to the group:

sudo groupmems -g mysql -a dsuser


If you can't execute the command above, try to execute the following one before:

sudo usermod -a -G mysql dsuser

5. Ensure that the user is allowed to list the logs folder:

sudo -u dsuser ls -alh --color /var/log/mysql/

6. Ensure that the user is allowed to read the log files:

sudo -u dsuser cat /var/log/mysql/mysqld.log
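
The checks and fixes above can be combined into a short shell sequence. This is only a sketch for the MySQL example used in this section (logs in /var/log/mysql, owner group mysql, trailing user dsuser); adjust the path, group and user names for your environment.

LOG_DIR=/var/log/mysql     # folder where the database writes its logs
LOG_GROUP=mysql            # owner group of the logs folder (adm on Ubuntu)
DS_USER=dsuser             # OS user used to configure trailing

# Make sure the owner group can read and traverse the logs folder.
sudo chmod -R 755 "$LOG_DIR"

# Add the trailing user to the logs folder owner group.
sudo usermod -a -G "$LOG_GROUP" "$DS_USER"

# Verify: list the folder and read a log file as the trailing user.
sudo -u "$DS_USER" ls -alh "$LOG_DIR"
sudo -u "$DS_USER" cat "$LOG_DIR/mysqld.log"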

9.9.20 Configuring Audit Trail for auditing Redshift database queries


To configure DataSunrise to receive audit results collected by Redshift native audit tools, do the following:

Note: The number and size of Amazon Redshift log files in Amazon S3 depends heavily on the activity in your
cluster. If you have an active cluster that is generating a large number of logs, Amazon Redshift might generate the
log files more frequently. You might have a series of log files for the same type of activity, such as having multiple
connection logs within the same hour. Because Amazon Redshift uses Amazon S3 to store logs, you incur charges
for the storage that you use in Amazon S3. Before you configure logging, you should have a plan for how long you
need to store the log files. As part of this, determine when the log files can either be deleted or archived based
on your auditing needs. The plan that you create depends heavily on the type of data that you store, such as data
subject to compliance or regulatory requirements. For more information about Amazon S3 pricing, go to Amazon
Simple Storage Service (S3) Pricing.

1. Create a Redshift cluster on AWS.


2. Create an AWS S3 bucket and create a folder in it
3. Attach a Parameter group to the Redshift cluster (you can create a Parameter group in Config → Workload
management):
• Set enable_user_activity_logging to true in your Parameter group
• Configure access to your S3 bucket (Bucket name → Permissions → Bucket policy → Edit) by pasting the
following code and replacing <AccountID> with the corresponding Account ID (see the table below) and
<BucketName> with your S3 Bucket name:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Put bucket policy needed for audit logging",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<AccountID>:user/logs"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<BucketName>/*"
    },
    {
      "Sid": "Get bucket policy needed for audit logging",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<AccountID>:user/logs"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::<BucketName>"
    }
  ]
}

Table

Region name Region Account ID


US East (N. Virginia) Region us-east-1 193672423079
US East (Ohio) Region us-east-2 391106570357
US West (N. California) Region us-west-1 262260360010
US West (Oregon) Region us-west-2 902366379725
Africa (Cape Town) Region af-south-1 365689465814
Asia Pacific (Hong Kong) Region ap-east-1 313564881002
Asia Pacific (Mumbai) Region ap-south-1 865932855811
Asia Pacific (Osaka) Region ap-northeast-3 090321488786
Asia Pacific (Seoul) Region ap-northeast-2 760740231472
Asia Pacific (Singapore) Region ap-southeast-1 361669875840
Asia Pacific (Sydney) Region ap-southeast-2 762762565011
Asia Pacific (Tokyo) Region ap-northeast-1 404641285394
Canada (Central) Region ca-central-1 907379612154
Europe (Frankfurt) Region eu-central-1 053454850223
Europe (Ireland) Region eu-west-1 210876761215
Europe (London) Region eu-west-2 307160386991
Europe (Milan) Region eu-south-1 945612479654
Europe (Paris) Region eu-west-3 915173422425
Europe (Stockholm) Region eu-north-1 729911121831
Middle East (Bahrain) Region me-south-1 013126148197
South America (São Paulo) Region sa-east-1 075028567923

4. Navigate to Properties → Database Configurations → Edit → Edit audit logging and specify the bucket name and
prefix (the folder to store logs in). Steps 3 and 4 can also be performed with the AWS CLI; see the sketch after these steps
5. Enable Publicly accessible (Cluster name → Actions → Modify publicly accessible settings)
6. Open DataSunrise's Web Console. Navigate to Configuration → Databases and create a Redshift database
Instance
7. Click Trail DB Audit Logs and in the trailing settings, input your S3 bucket name and Prefix.
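
Steps 3 and 4 can also be performed with the AWS CLI instead of the console. The sketch below assumes placeholder names (my-redshift-cluster, my-redshift-logs) and that policy.json contains the bucket policy from step 3.

# Attach the bucket policy that allows the Redshift logging account to write to the bucket.
aws s3api put-bucket-policy --bucket my-redshift-logs --policy file://policy.json

# Enable audit logging for the cluster and point it at the bucket and prefix.
aws redshift enable-logging \
  --cluster-identifier my-redshift-cluster \
  --bucket-name my-redshift-logs \
  --s3-key-prefix audit/

# Check the current logging status.
aws redshift describe-logging-status --cluster-identifier my-redshift-cluster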

9.9.21 Getting Audit Events via AWS DAS (Database Activity Streams) for Aurora PostgreSQL


Database activity streams provide a near-real-time stream of the activity in your DB cluster. DAS requires the use of
Amazon Kinesis: Aurora pushes activities to an Amazon Kinesis data stream. From Kinesis, you can configure AWS
services such as Amazon Kinesis Data Firehose and AWS Lambda to consume the stream and store the data. Note that
DAS also requires the use of AWS Key Management Service (AWS KMS).
1. Create a KMS Key. Navigate to AWS KMS and create a key:
• Key type: Symmetric
• Key material origin: KMS
• Regionality: Single-region key
• Key policy: default
2. Create an RDS Aurora PostgreSQL cluster. Note that you need to use at least a db.r5.large or higher-grade instance
class for your database. In Additional configuration → Encryption → Enable encryption → AWS KMS Key, select
the key you created before
3. Navigate to the list of RDS databases and select your database in the list. Click Actions → Start database
activity stream (an AWS CLI alternative is sketched after these steps)
4. Configure DAS:
• Master key: your KMS key
• Database activity stream mode: Asynchronous
You can access the stream at the Configuration section of your RDS cluster's settings
5. Prepare an IAM Role for your Aurora PG database. Navigate to AWS IAM → Roles, create a Role and associate the
Role with the Policy below. You can also use an existing Role and associate it with the following Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "kinesis:DescribeStreamSummary",
        "kinesis:ListShards",
        "kinesis:GetShardIterator",
        "kinesis:GetRecords",
        "kms:Decrypt",
        "rds:DescribeDBClusters"
      ],
      "Resource": [
        "<ARN of your Kinesis stream>",
        "<ARN of your KMS key>",
        "<ARN of your RDS>"
      ]
    }
  ]
}

Associate the Role with your RDS database


6. Open DataSunrise's Web Console and navigate to Configuration → Databases. Create a new Aurora PG
Database Instance:
• Note that you need to specify the endpoint of your Aurora PG cluster in the Host field
• In the Capture Mode section, select Trailing the DB Audit Logs from the Mode drop-down list
• In the Format Type drop-down list, select Database activity stream
7. Navigate to Audit → Rules and create a data audit Rule for your Aurora PG Instance. For auditing results,
navigate to Audit → Transactional Trails. Note that results might appear with a delay of about 5-10 minutes
8. You can also adjust DAS-based trailing by changing the following Additional parameters (see Additional
Parameters on page 337):
• TrailDASIntervalTime
• TrailDASOffsetTime
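
Starting the activity stream (steps 3 and 4) can also be done with the AWS CLI. This is only a sketch: the ARNs and the cluster identifier are placeholders, and --mode async corresponds to the Asynchronous mode mentioned above.

# Start a database activity stream on the Aurora PostgreSQL cluster.
aws rds start-activity-stream \
  --resource-arn arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-pg \
  --mode async \
  --kms-key-id arn:aws:kms:us-east-1:123456789012:key/<your-kms-key-id> \
  --apply-immediately

# Confirm the stream is started and find the name of the Kinesis stream it writes to.
aws rds describe-db-clusters --db-cluster-identifier my-aurora-pg \
  --query "DBClusters[0].[ActivityStreamStatus,ActivityStreamKinesisStreamName]"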

9.9.22 Configuring Audit Trail for auditing MS Azure Synapse database queries


To configure DataSunrise to receive audit results of Synapse database activity, do the following:
1. Navigate to Synapse and create a new Synapse workspace
2. Navigate to your workspace → Azure SQL Auditing and enable auditing. Select Storage account to save your
audit data in
3. Navigate to Storage accounts and locate your account. Navigate to Data Storage (Containers) → your
container. Copy the URL of the container
4. Open DataSunrise's Web Console, navigate to Configuration → Databases and create a new MS SQL database
Instance
5. Specify your Synapse's SQL Endpoint as Host, port 1433, login and password for your Synapse workspace
6. Create a DB Trailing. In Log files path/url, input the URL of your storage container where audit logs reside. For
example:

https://trailtest.blob.core.windows.net/sqldbauditlogs/antony-test/master/SqlDbAuditing_ServerAudit/

7. Create an Audit Rule for your Synapse Instance. For auditing results, navigate to Audit → Transactional trails.

9.9.23 Configuring Audit Trail for MS Azure


DataSunrise can get audit results collected by MS Azure SQL database native auditing tools.

Important: you may experience some issues with opening and closing sessions, but all the database events will be
saved to DataSunrise without any problems. In case of Azure SQL Managed Instance, the configuration procedure is
similar to the one for MS SQL, with the following addition: https://docs.microsoft.com/en-us/azure/azure-sql/managed-
instance/auditing-configure#createspec

Do the following:
1. Create an Azure SQL database server. Navigate to the target database page or, in case you need to audit all
database logs, navigate to the database server page
2. Navigate to the Auditing section from the left panel (Azure SQL Auditing) and set Enable Azure SQL Auditing
to ON
3. Add a storage. Click Storage account → Configure required settings. At the next page, click Create new or
select an existing one
4. Add your Azure SQL Instance to DataSunrise (Configuration → Databases)
5. Configure Trail DB Audit Logs:
Setting Required value
Instance Interface Navigate to Azure Storage Explorer, select your Storage and Copy URL
to the blob container. Paste it here
Log files path/url URL of the Azure storage you created before

6. Navigate to Audit and create an audit Rule for your Azure SQL database Instance. For auditing results, navigate
to Audit → Transactional Trails.

9.9.24 Configuring Audit Trail for auditing MS Azure MySQL database queries


To configure DataSunrise to receive auditing results of Azure-hosted MySQL database activity, log in to the Azure
portal and do the following:
1. Navigate to App registrations → New registration and register your application. Note that Redirect URI
(optional) is not required
2. At the API permissions page, click Add a permission
3. In the Microsoft APIs tab, select Azure Storage then select Delegated permissions and user_impersonation.
Click Add permissions to apply the changes
4. Navigate to your app registration. Select Certificates and secrets
5. At Client secrets, click New client secret to create a new secret. Save the VALUE key somewhere
6. Navigate to Resource groups and create a Resource group
7. Navigate to Storage accounts and create an Account
8. Select your Resource group and specify a Storage account name. Leave all other settings at their defaults
9. Navigate to Access Control (IAM). Click Add role assignment
10. Select the Reader role and click Next
11. Click +Select members and select your App. Review and assign
12. Click Add role assignment again and select Storage Blob Data Reader
13. Click +Select Members and select your app. Review and assign
14. Navigate to All Services → Azure Database for MySQL servers and create a new server: Navigate to Create
→ Flexible server → Create
15. Provide all the required server details
16. At the Networking tab, check Allow public access... and add required firewall rules. Complete the deployment
17. Navigate to your MySQL database → Server Parameters and enable audit_log_enabled
18. Select event types to be logged by configuring audit_log_events
19. Add MySQL users to be included in or excluded from logging by configuring audit_log_include_users and
audit_log_exclude_users respectively. Save the settings (these server parameters can also be set with the Azure
CLI; see the sketch after these steps)
20. Navigate to Diagnostic settings → Add
21. Check MySqlAuditLogs and Archive to a storage account and select your Storage Account
22. Open DataSunrise's Web Console and create a MySQL Instance in Configuration → Databases. Select Trailing
the DB Audit Logs in the Instance's settings
23. Copy ClientID and TenantID from your Azure App
24. Client Secret is the VALUE mentioned in step 5
25. You can find Blob container name in Storage Accounts - Your account → Containers of your Azure settings
26. Create some Audit Rules to get the logs. For auditing results, navigate to Audit → Transactional trails
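
The server parameters from steps 17-19 can also be set with the Azure CLI for a flexible server. This is only a sketch: my-resource-group and my-mysql-server are placeholders, and the event list is just an example.

# Enable the MySQL audit log on the flexible server.
az mysql flexible-server parameter set \
  --resource-group my-resource-group --server-name my-mysql-server \
  --name audit_log_enabled --value ON

# Select the event types to be written to the audit log.
az mysql flexible-server parameter set \
  --resource-group my-resource-group --server-name my-mysql-server \
  --name audit_log_events --value "CONNECTION,GENERAL"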

9.9.25 Configuring Audit Trail for auditing MS Azure PostgreSQL database queries


To configure DataSunrise to receive audit results of Azure-hosted PostgreSQL database activity, do the following:
1. Navigate to App registrations → New registration and register your application. Note that Redirect URI
(optional) is not required
2. Navigate to your app registration. Select Certificates and secrets
3. At Client secrets, click New client secret to create a new secret. Save the VALUE key
4. At the API permissions page, select Delegated permissions, then user_impersonation, and click Add
permissions
5. Configure your PostgreSQL flexible server's settings (they can also be set with the Azure CLI; see the sketch after these steps):

log_checkpoints = OFF
log_destination = CSVLOG
pgaudit.log = ALL
shared_preload_libraries = PG_STAT_STATEMENTS, PGAUDIT
log_line_prefix = %t-%c-u"%u"u-

6. Enable diagnostic settings for your PostgreSQL server using either the Azure portal, CLI, REST API, or
PowerShell. The log category to select is PostgreSQLLogs. See the next step for a guide on using Azure portal
for that
7. In the Azure portal, navigate to Diagnostic settings of your PostgreSQL server. Click Add Diagnostic Setting. Fill
out all the required fields. Select the log type PostgreSQLLogs. In Destination details, select Archive to a storage
account and select your Storage account. Save the setting
8. Assign an Azure role for access to BLOB data (Reader and Storage BLOB Data Reader for the app created earlier)
9. Navigate to Access Control (IAM). Click Add role assignment
10. Select the Reader role and click Next
11. Click +Select members and select your app. Review and assign
12. Click Add role assignment again and select Storage Blob Data Reader
13. Click +Select Members and select your app. Review and assign
14. Open DataSunrise's Web Console and create a PostgreSQL Instance in Configuration → Databases. Select
Trailing the DB Audit Logs in the Instance's settings
15. Copy ClientID and TenantID from your Azure App
16. Client Secret is the VALUE mentioned in step 3
17. You can find Blob container name in Storage Accounts - your account → Containers of your Azure settings
18. Create some Audit Rules to get the logs. For auditing results, navigate to Audit → Transactional trails
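
The server parameters from step 5 can likewise be applied with the Azure CLI for a flexible server. This is only a sketch with placeholder names; the parameter values mirror step 5, and changing shared_preload_libraries requires a server restart.

# Load pgAudit and enable statement auditing on the flexible server.
az postgres flexible-server parameter set \
  --resource-group my-resource-group --server-name my-pg-server \
  --name shared_preload_libraries --value "PG_STAT_STATEMENTS,PGAUDIT"

az postgres flexible-server parameter set \
  --resource-group my-resource-group --server-name my-pg-server \
  --name pgaudit.log --value ALL

# Restart the server so that the new preload library takes effect.
az postgres flexible-server restart --resource-group my-resource-group --name my-pg-server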

9.9.26 Configuring Audit Trail for auditing Sybase database queries


DataSunrise supports the Audit Trail feature for Sybase 16.0+ only. To configure DataSunrise to receive audit results
of Sybase database activity, do the following:
1. Create at least two Devices for Audit Tables and a Device for Audit Transaction Log:
For details, refer to:
• https://help.sap.com/docs/SAP_ASE/2705a3b1e3df4514ab089cfedf87750d/
a94c9c45bc2b1014a51bc472e0f66dba.html?locale=en-US&version=16.0.3.3
• https://help.sap.com/docs/SAP_ASE/2705a3b1e3df4514ab089cfedf87750d/
a71393f7bc2b10149656b9268f8672ec.html?locale=en-US&version=16.0.3.3
2. Connect to your master database using a user granted with sso_role (user sa for example)
3. Switch auditing on by enabling the following parameter:

sp_configure "auditing", 1

4. Create a database for storing archived Audit Tables (aud_db for example)
5. Create an archive table with columns similar to those in sybsecurity Audit Tables:

use aud_db
go
select *
into audit_data
from sybsecurity.dbo.sysaudits_01
where 1 = 2

6. Create a threshold procedure in the sybsecurity database. Example for two Audit Tables:

create procedure audit_thresh as


declare @audit_table_number int
/*
** Select the value of the current audit table
*/
select @audit_table_number = scc.value
from master.dbo.syscurconfigs scc, master.dbo.sysconfigures sc
where sc.config=scc.config and sc.name = "current audit table"
/*
** Set the next audit table to be current.
** When the next audit table is specified as 0,
** the value is automatically set to the next one.
*/
exec sp_configure "current audit table", 0, "with truncate"
/*
** Copy the audit records from the audit table
** that became full into another table.
*/
if @audit_table_number = 1
begin
insert aud_db.dbo.audit_data
select * from sysaudits_01
truncate table sysaudits_01
end
else if @audit_table_number = 2
begin
insert aud_db.dbo.audit_data
select * from sysaudits_02
truncate table sysaudits_02
end
return(0)

7. Attach a stored procedure to audit segments. To see the information on segments, execute the following query
in the sybsecurity database

sp_helpsegment

Attach the stored procedure to audit segments by executing the following queries:

use sybsecurity
go
sp_addthreshold sybsecurity, aud_seg_01, 250, audit_thresh
go
sp_addthreshold sybsecurity, aud_seg_02, 250, audit_thresh
go

When sysaudits_01 is 250 pages from being full, the threshold procedure audit_thresh is triggered.
The procedure switches the current Audit Table to sysaudits_02 and SAP ASE starts writing new audit records
to sysaudits_02. The procedure also copies all audit data from sysaudits_01 to the audit_data archive table
located in the aud_db database. The rotation of the Audit Tables continues in this manner without any manual
intervention
8. Set auditing options. Having installed auditing, use sp_audit to set the following auditing options:

sp_audit <option>, <login_role_name>, <object_name> [,<settings>]

To audit login and logout, execute:

sp_audit "login", "all", "all", "pass"@ and @sp_audit "logout", "all", "all", "on"

For information on other options, refer to:


• https://help.sap.com/docs/SAP_ASE/29a04b8081884fb5b715fe4aa1ab4ad2/
ab54050ebc2b1014b5d9ca93507f4a1d.html?version=16.0.3.4&locale=en-US
9. Open DataSunrise's Web Console and create a Sybase Instance in Configuration → Databases. Select Trailing
the DB Audit Logs in the Instance's settings
10. Provide the required details on schema, database and table the audit data is copied to during the execution of
the threshold procedure.
11. Create some Audit Rules to get the logs. For auditing results, navigate to Audit → Transactional trails

9.9.27 Using Audit Trail for auditing Google Cloud BigQuery database queries


This feature enables you to get auditing results collected by GCloud BigQuery's native audit tools.
DataSunrise supports reading and processing of audit log files generated by BigQuery's engine. Before creating a
BigQuery Instance in DataSunrise you need to create a Service Account on Google Cloud.
1. Log into Google Cloud Platform Console. Expand the Options menu (three bars in the top left corner)
2. Navigate to IAM & Admin → Service Accounts. Click Create Service Account
3. Name your Service Account: Service Account ID will be generated based on the Account's name. Click Create
and Continue
4. Add the following roles to your Account:
• BigQuery Resource Viewer: view all BigQuery resources, with no option to make changes or purchasing
decisions
• BigQuery User: provides the permissions needed to execute queries, create datasets, read dataset
metadata, and list tables
• BigQuery Data Viewer: grants access to view datasets and their contents
• BigQuery Job User: enables you to run BigQuery jobs. Note that if you are going to access BigQuery
with software that runs jobs (DataEdo or DbSchema for example), you should add the BigQuery Job User role to
your Service Account.

Note: to make other projects visible in DataSunrise, apply the above roles to your Service Account for each
project.

5. Click Continue. The last step is to grant user access to your Service Account. This step is optional and may be
skipped
6. Now you need to create a Key File. Enter your Account's settings
7. Navigate to the KEYS tab. Click Add Key → Create new key
8. Select the JSON file type and click CREATE
9. At this point a Key File will be generated and you will receive a prompt to download it. Download the key file
10. Save the key file in a secure location as it contains the private key required for establishing a connection with
your BigQuery database. Note that it cannot be generated again if lost
11. Open DataSunrise's Web Console and navigate to Configuration → Databases. Click Add Database
12. Provide the following connection details:
• Logical Name: any
• Database Type: BigQuery
• Hostname or IP: leave the default value
• Port: default 443
• Authentication Method: Regular
• Service Account Email: use the Email address from the JSON file you downloaded while creating a Service
Account
• Save Secret Key: optional
• RSA Private Key: use the Private Key from the JSON file you downloaded while creating a Service Account
• Project ID: use the Project ID from the JSON file you downloaded while creating a Service Account
13. Select Trailing the DB Audit Logs in the Instance's settings. Test the connection between DataSunrise and your
database and save the settings
14. Create some Audit Rules to get the logs. For auditing results, navigate to Audit → Transactional trails.
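
Steps 2-10 (creating the Service Account, granting the roles and generating the JSON key) can also be scripted with the gcloud CLI. This is only a sketch: my-project and ds-audit are placeholders, and the role list mirrors the roles described above.

PROJECT_ID=my-project
SA_NAME=ds-audit
SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"

# Create the Service Account.
gcloud iam service-accounts create "$SA_NAME" --project "$PROJECT_ID"

# Grant the BigQuery roles listed above to the Service Account.
for ROLE in roles/bigquery.resourceViewer roles/bigquery.user \
            roles/bigquery.dataViewer roles/bigquery.jobUser; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member "serviceAccount:${SA_EMAIL}" --role "$ROLE"
done

# Generate and download the JSON key file used in the DataSunrise connection settings.
gcloud iam service-accounts keys create ds-audit-key.json --iam-account "$SA_EMAIL"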

9.10 Data Security


The Data Security component protects target databases against unauthorized queries and SQL injection attacks.
Data protection functionality is available in the Proxy mode only. Deployed as an intermediary between users and
databases, DataSunrise intercepts all incoming and outgoing queries. With the help of advanced SQL analysis
algorithms it detects unauthorized access attempts, SQL injection attacks and any kind of suspicious activity that
violates existing security policies.
In case of an attack attempt or excessive privilege abuse, apart from blocking malicious queries, DataSunrise sends
notifications about the activity, so database administrators can react to incidents in a timely manner and take
measures to prevent targeted attacks.
Data protection algorithms are controlled by means of dedicated Security Rules. The rules are highly adjustable for
various business needs. You can create and edit data security rules in the Data Security section of the DataSunrise's
Web Console.

Figure 34: DataSunrise database firewall

9.10.1 Creating a Data Security Rule


To create a new Security rule, perform the following:

1. Go to the Security → Rules subsection and click Add Rule.


2. Input the required information to the Main Section subsection (General Settings on page 111)
3. Input the required information to the Action subsection:

Parameter Description
Allow check box Ignore the incoming queries.
Log Event in Storage check box Save the event info in the Audit Storage (refer to Audit Storage
Settings on page 383).
Syslog Configuration drop-down list Select a CEF group to use when exporting data through Syslog
(refer to Syslog Settings (CEF Groups) on page 222).
Blocking Method drop-down list Method of blocking an SQL query when the rule is triggered:
• Query Error: query is blocked and an SQL error notification
is sent
• Disconnect: query is blocked and client application is
disconnected from the target database
• Empty Result Set: query is blocked and the client gets an
empty result set instead of actual data

Custom Blocking Message field A message DataSunrise displays when blocking a query. Can be
unique for each Rule. It enables you to address each Rule with
respective meaningful, context-aware message to keep your
users informed of the reasons behind their access limitations to
certain areas or even during certain time periods when using
along with the Schedule feature.

4. Input the required information to the Filter Sessions subsection (Filter Sessions on page 111).
5. Input the required information to the Filter Statements subsection (Filter Statements on page 114)
6. Set Trigger the Rule only if the number of affected/fetched rows is not less than: if necessary (Response-
Time Filter on page 120)
7. Configure User Blocking Filters if necessary. These filters can be used to prevent user attempts to reach the
protected database (a user is blocked by name or IP address when the number of prohibited operations exceeds
the specified value). Use the following elements to configure the blocking:
Filter parameters Description
User Block Options drop-down list • Don't block: ignore all user operations
• Block Temporarily: block a user for a certain period of time (see
Block User for a Period of Time)
• Block Permanently: block a user permanently

User Block Method drop-down list • By User name and Host: block all access attempts coming from
the specified User and IP address/Host
• By Host only: block access attempts coming from the specified IP
address/Host

Block User for a Period of Time (minutes) Specify a period of time to temporarily block a user for
field (for Block temporarily)
Trigger the Rule if the Number of Specify the number of prohibited operations intercepted to trigger
Prohibited Operations Reached field the Rule. When the number of operations exceeds this number, the
user will be blocked
Per (minutes) field Specify the number of prohibited operations intercepted per minute
to trigger the Rule (if necessary)

8. Configure Data Filter if necessary (Data Filter on page 121).


9. Check the Enable check box of the Rule Triggering Threshold section to set the threshold parameters (Rule
Triggering Threshold on page 120).
10. Specify Tags if necessary (Tags on page 199).
11. Click Save Rule to save the Rule's settings.

9.11 Data Masking


Data Masking is used to protect confidentiality of personally identifiable data, personal sensitive data or
commercially sensitive data when it is required for valid test cycles. DataSunrise obfuscates the real database
content and displays fake values thus preserving the genuine structure of the information. It is a helpful tool to mask
credit card numbers, phone numbers, email addresses, medical information, etc. from unauthorized users.
DataSunrise features both Static and Dynamic data masking. DataSunrise supports masking of table calls included in
functions and stored procedures.

Important: Random Email, Random string, Random from Lexicon, Random Credit Card Number and Regexp replace
(MS SQL only) masking methods (refer to Masking Methods on page 167) require creation of a dedicated schema
or database called DS_ENVIRONMENT (by default) to store tables and views needed to perform masking using the
aforementioned methods. This is applicable both to Dynamic and Static masking. You can change your Environment
name in Configuration → Databases → Your DB Instance → Advanced Settings → Environment Name.

9.11.1 Generating a Private Key Needed for Data Masking


To use Format-Preserving Masking methods (Format-Preserving Masking on page 177) and Random methods with
the Consistent masking option enabled (for Static Masking), you need to create an encryption key. For this, do the
following:
1. To create a new key, navigate to Masking → Masking Keys and click Add Key. Either generate a new key at the
Generate tab or navigate to the Insert tab and paste/upload an existing key.
2. You can find masking keys in the Masking → Masking Keys section. You can edit your keys but note that they
are of fixed length.

9.11.2 Dynamic Data Masking


Dynamic Masking is performed on the fly, so it doesn't require additional resources to store a copy of the database.
While constructing a response to a query, DataSunrise replaces actual values in the query results with random
values, predefined values or special characters.

Figure 35: Dynamic Data Masking


For relational databases, DataSunrise modifies the incoming query itself, making the target database construct
a response with obfuscated data inside. For NoSQL databases (DynamoDB, MongoDB, Elasticsearch), DataSunrise
modifies the database response before redirecting it to the client application.
An example of a SELECT query before masking applied (PostgreSQL)

SELECT * FROM public.customers ORDER BY "Email" ASC

An example of a SELECT query after masking applied (PostgreSQL, the "Email" column is being masked)

SELECT public.customers."Order", public.customers."FirstName", public.customers."LastName",
  public.customers."Address", public.customers."State", public.customers."ZIP",
  CAST(regexp_replace(trim(public.customers."Email"), '[[:alnum:]](?![[:alnum:]]*($| ))', '*', 'g') as character varying(30)) as "Email",
  public.customers."Card"
FROM public.customers
ORDER BY CAST(regexp_replace(trim("Email"), '[[:alnum:]](?![[:alnum:]]*($| ))', '*', 'g') as character varying(30)) ASC

In addition to relational and NoSQL databases, DataSunrise can also mask the contents of CSV files stored in Amazon S3
buckets. Masking is applied to the specified comma-separated fields.

Restriction: there is a limitation associated with using stored procedures for Dynamic Masking.
Let's assume that two masking Rules exist and each Rule is configured to be triggered when a certain column
is SELECTed: the first Rule is configured on "column1" and the second Rule is configured on "column2". If both
columns are SELECTed using a stored procedure, only the second Rule will be triggered.

Restriction: for AWS RDS-hosted MariaDB, dynamic masking inside functions and procedures doesn't work
because admin privileges required for masking inside routines can't be obtained on RDS databases.

Important: for Dynamic Masking using random-based methods, you need a dedicated schema (DS Environment) in
your database (see Configuring DataSunrise for Masking with random-based methods on page 177).

9.11.3 Creating a Dynamic Data Masking Rule


To create a new Rule, do the following:

1. Navigate to the Masking → Rules section and click Add Rule.


2. Input the required information to the Main section (General Settings on page 111).
3. Input the required information to the Action subsection. For parameters common for all Rules, refer to the Audit
Rules description:

Parameter Description
Keep Row Count check box Disable masking of columns included in GROUP BY,
HAVING, ORDER BY, WHERE clauses.
Mask SELECTs Only check box Mask only SELECT queries. For example, the following
query will not be masked:
UPDATE customers SET id = id RETURNING *

Action drop-down list Select an appropriate option from the list to block
certain queries aimed at modification of masked
columns. For example, such queries might be blocked
(the Email column is the masked column):
UPDATE test.customers SET "Order" = '1234' WHERE
"Email" = '[email protected]';

4. Input the required information to the Filter sessions subsection (Filter Sessions on page 111).
5. Input the required information to the Masking Settings subsection:
Parameter Description
Mask Data subsection Specify database columns to mask. Click Select to do it manually and select
the required columns in the objects tree. Click Select then ADD REGEXP to use
regular expressions.
Masking Method drop-down Data obfuscation algorithm. Refer to Masking Methods on page 167.
list (for Mask Data only)
Hide Rows subsection Hide table rows which don't match the specified Masking Value. Refer to Masking
Methods on page 167. Click Select to select a table to hide rows in.
Condition for Column Value A condition on the column value that determines which rows are shown (any
to Show Rows field (for Hide WHERE-type condition). For example, Age>25 means that the table rows where
Rows only) the Age column's value is 25 or less will be hidden.
More examples:
LastName = 'Smith'
LastName LIKE ('%Smi%')
EmployeeKey <= 500 EmployeeKey = 1 OR EmployeeKey = 8 OR EmployeeKey = 12
EmployeeKey <= 500 AND LastName LIKE '%Smi%' AND FirstName LIKE '%A%'
LastName IN ('Smith', 'Godfrey', 'Johnson')
EmployeeKey Between 100 AND 200

Note: if you select a column(s) associated with another column (linked with a primary key for example), you will
be notified that there are columns that contain related data. Click on this message to select the associated
columns. Once you select them, these columns will be added to the list of columns to be masked. More on
associations: Table Relations on page 400.

6. Check Disable Rule if you don't need it active.


7. Input Tags if necessary (Tags on page 199)
8. Click Save Rule to apply the new settings.

9.11.4 Masking Methods


When creating a Masking Rule, it is necessary to specify which data obfuscation algorithm DataSunrise should
employ. Use the Masking Method drop-down list to select one of the following algorithms:
Masking type Description DM SM
Default INT-type values are replaced with zeroes (0) and STRING-type values are + +
replaced with white spaces.
Fixed String STRING-type values are replaced with a predefined string. + +
Empty value STRING-type values are replaced with white spaces. + +
Function call Calling a user-created custom function for data obfuscation. You can pass + +
various parameters to a function by clicking Add Parameter and selecting
the required parameter in the drop-down list.
Email masking The user name and domain section of email addresses are replaced + +
with "*", except the first one and the last one in a row. For example:
a***@**.**m.
Email masking full The user name and domain sections of email addresses are replaced with "*", + +
except the "@" character and the top-level domain name. For example: ***@**.com.
Mask username of Masking the user name section of email addresses with "*". For example: + +
Email ***@datasunrise.com.
Credit card Masking credit card numbers. It displays the last four digits of a credit card + +
masking number, other characters are replaced with "X". For example: XXXX-XXXX-
XXXX-1234.
Mask last chars Masking a specified number (Character Count) of database entry's last + +
symbols.
Show last chars Showing a specified number (Character Count) of database entry's last + +
symbols.
Mask first chars Masking a specified number (Character Count) of database entry's first + +
symbols.
Show first chars Showing a specified number (Character Count) of database entry's first + +
symbols.
Show first and last Showing a specified number (Character Count) of database entry's first + +
chars and last symbols.
Mask first and last Masking a specified number (Character Count) of database entry's first + +
chars and last symbols.
Regexp replace Replacing regular expressions with a predefined string (specify Replace + +
By). You need to specify a pattern (Replacing Pattern) to search for
regular expressions in columns. Available for all the supported databases
except Sybase.

Important: Regexp Replace is not available for MS SQL Server databases hosted on cloud services such as AWS RDS
and MS Azure SQL Database

Mask URL Masking of URL addresses. + +


Unstructured Replacing potentially sensitive values with the "*" character. + +
masking
Masking with Lua Masking using a Lua script created by user. + +
script


FP Tokenization Format-preserving masking for emails. + +
Email
FP Tokenization Format-preserving masking for SSNs. + +
SSN
FP Tokenization Format-preserving masking for credit card numbers. Works with realistic + +
Credit Card credit card numbers only (the ones created using the Luhn algorithm).
FP Tokenization Format-preserving masking for STRING-type values. You can select an + +
String alphabet to use when encrypting data in the Alphabets drop-down list.

Important: if you're going to use this masking method, ensure that your
case sensitivity settings correspond to the case sensitivity settings of the
database server you're going to mask data at.

FP Tokenization Format-preserving masking for NUMBER-type values. + +


Number
FP Encryption FF3 Format-preserving encryption for emails using the FF3 encryption + +
Email algorithm.
FP Encryption FF3 Format-preserving encryption for SSNs using the FF3 encryption + +
SSN algorithm.
FP Encryption FF3 Format-preserving encryption for credit card numbers using the FF3 + +
Credit Card encryption algorithm. Works with realistic credit card numbers only (the
ones generated using the Luhn algorithm).
FP Encryption FF3 Format-preserving encryption for STRING-type values using the FF3 + +
String encryption algorithm. You can select an alphabet to use when encrypting
data in the Alphabets drop-down list.

Important: if you're going to use this masking method, ensure that your
case sensitivity settings correspond to the case sensitivity settings of the
database server you're going to mask data at.

FP Encryption FF3 Format-preserving encryption for NUMBER-type values using the FF3 + +
Number encryption algorithm.
Random US Phone Replacing of a US phone number with a random-generated phone number + +
Number in the following format : 1-555-XXX-XXXX. Available for MySQL, MariaDB,
Aurora MySQL, PostgreSQL, Aurora PostgreSQL, Redshift, TiDB, Greenplum,
Oracle and MS SQL Server.
NULL Value Replaces masked database entry with a NULL. + +
Substring Creating of a substring out of the original string. Starting position defines + +
the starting character of the resulting substring and String's Length
defines the substring length. Available for MySQL, MariaDB, Aurora MySQL,
Oracle, Redshift, PostgreSQL, Aurora PostgreSQL, TiDB, Greenplum, MS
SQL Server.
Random String Returns a random string of a random length (the string's length can be + +
defined with the Minimum Length and Maximum Length). Available for
MySQL, MariaDB, Aurora MySQL, Redshift, PostgreSQL, Aurora PostgreSQL,
Greenplum and Oracle.


Random US Social US SSN masking supports Numeric and String column types. Masks the + +
Security Number value with random numbers as follows: AAA-BB-CCCC (for String columns)
and AAABBCCCC (for Numeric columns). Available for MySQL, MariaDB,
Aurora MySQL, Oracle, PostgreSQL, Aurora PostgreSQL, Redshift, TiDB,
Greenplum, MS SQL Server.
Random Credit Replacing credit card numbers with random numbers created according to + +
Card the Luhn algorithm. Available for MySQL, MariaDB, Aurora MySQL, Amazon
Athena, Redshift, PostgreSQL, Aurora PostgreSQL, Greenplum, MS SQL
Server and Oracle. Supports String and Numeric column types.
Random Email Replacing emails with random characters like the following: + +
[email protected]. Available for MySQL, MariaDB, Aurora MySQL,
Redshift, PostgreSQL, Aurora PostgreSQL, Greenplum, MS SQL Server,
Oracle .
RandomValue Replaces masked database entry with a random entry from Lexicon. + +
From Lexicon Available for MySQL, Aurora MySQL, MariaDB, MS SQL Server, PostgreSQL,
Aurora PostgreSQL, Oracle. Note that you need some additional grants
to use this method. Refer to Creating a MySQL/Aurora MySQL/MariaDB
Database User on page 240.
US ZIP Code Masking of 5-digit ZIP codes according to the US De-Identification + +
Masking Standard. If the population of the area covered by the ZIP code being masked is
less than 20,000, all digits of the ZIP code are replaced with zeros. If the
population is more than 20,000, the first three digits are left intact and the
other two digits are replaced with zeros.
Fixed Number NUMBER-type and INT-type values are replaced with predefined values. + +
Random value like Database entry is replaced with random values. + +
current
Random from Numeric and String column types are replaced with values from the + +
interval specified range (specify the minimum value (Min) and the maximum value
(Max) of the range).
If the Decimal Numbers generation checkbox is enabled, it generates
random values in the part after the decimal position. The number of
characters after the decimal position will be equal or less than the value in
the Number of Decimal Digits field. Decimal numbers are generated only
for non-integer data types. For floating-point data types specified number
of decimal digits is not guaranteed.

Fixed date Replacing date values with a fixed value. Select date (fixed value) via (Date) + +
drop-down lists.
Fixed time Replacing time values with a fixed value. Select time (fixed value) via (Time) + +
drop-down lists.
Fixed datetime Replacing time values with a fixed value. + +
Random date Replacing date values with a random value from a predefined range. + +
interval Specify a range of dates to select a random value from, via Starting Date
and Ending Date drop-down lists.
Random time Replacing time values with a random value from a predefined range. + +
interval Specify a range of time to select a random value from, via Starting Time
and Ending Time drop-down lists.


Random date Replacing date values with random values from a predefined range. Specify + +
offset the maximum deviation (days) of the "masked" date from the initial date in
Max Dispersion, day field. Supports String column types.
Random time Replacing time values with random values from a predefined range. Specify + +
offset the maximum deviation (hours, minutes, seconds) of the "masked" time
from the initial time in Max Dispersion, hours field. Supports String
column types as well.
Random datetime Replacing time values with random values from a specified interval (you + +
interval need to set Starting Date/Time and Ending Date/Time values of the
interval)
Random datetime Replacing time values with random values from a predefined range. + +
offset Supports String column types as well.
Hide rows Hiding table rows which don't match your Masking Value (Condition for +
Column Value to Show Rows). For example, Age>25 means that the table
rows where the Age column's value is 25 or less will be hidden.
Random Date with Masking of dates according to HIPAA. All elements of dates (except year) + +
Constant Year for dates that are directly related to an individual, including birth date,
admission date, discharge date, death date, and all ages over 89 and all
elements of dates (including year) indicative of such age, except that such
ages and elements may be aggregated into a single category of age 90 or
older.
Random Datetime Masking of dates and time according to HIPAA. All elements of dates + +
with Constant Year (except year) for dates that are directly related to an individual, including
birth date, admission date, discharge date, death date, and all ages over
89 and all elements of dates (including year) indicative of such age, except
that such ages and elements may be aggregated into a single category of
age 90 or older.
Mask Data Masking with a permanent random value based on using a masking cache. + +
Consistently This option is available for Random String, Random From Lexicon, Random
Credit Card, Random Email. Select Use System Cache in the Mask Data
Consistently drop-down list or create a new cache for your Masking Rule.

Warning: Sometimes data masking will not work. For example, if the Show First and Last Chars algorithm you selected
is configured to show the first three and the last three characters of a DB column's entry, and the entry itself is six
characters long, the masking will not work. In such cases use other masking types or purpose-written functions.

Note: when masking entries that include strings of fixed length ("char", "varchar", "nchar", "nvarchar" data types
for example), the string obtained after masking may be longer than the original string. The following masking types may
cause an obfuscated entry to exceed the original string length:
• Fixed string
• Function call
• Regexp replace
9.11.4.1 Using a Custom Function for Masking
Along with prebuilt masking methods, you can use your own masking algorithms in the form of functions. To
employ custom function-based masking, do the following:
1. Create a function that will be used to mask your data. For example, here is a function for a PostgreSQL database
that replaces the login part of emails with random values (consisting of prefixes + mids + suffixes):

CREATE OR REPLACE FUNCTION public.get_random_fake_login()


RETURNS TEXT
AS $$
DECLARE
prefixes TEXT[] :=
'{bel,nar,gob,ab,ad,a,ac,as,ben,co,alm,cha,che,dea,kit,mac,par,ren,sie,sto}';
mids TEXT[] := '{adur,aes,ten,mar,sta,er,wa,le,kin,tow,han,an,tar,ou,eva,gag,urn,cac}';
suffixes TEXT[] := '{ux,ix,li,ci,cia,oth,wood,nen,oli,oir,ort,int,lin,ne,ns,si,hu,well}';
output TEXT := '';
BEGIN
output := prefixes[1+random()*(array_length(prefixes, 1)-1)] ||
mids[1+random()*(array_length(mids, 1)-1)] || suffixes[1+random()*(array_length(suffixes, 1)-1)];
IF random() > 0.5 THEN
output := output || trunc(random() * (90-70) + 70)::TEXT;
END IF;
RETURN output;
END;
$$ LANGUAGE PLPGSQL;

CREATE OR REPLACE FUNCTION public.get_masked_email_login(a text)


RETURNS TEXT
AS $$
SELECT regexp_replace(a, '.*(?=.*@.*)', public.get_random_fake_login());
$$ LANGUAGE SQL;

drop table if exists public.customers_names_map;


create table public.customers_names_map
(
src text PRIMARY KEY,
dst text
);

CREATE OR REPLACE FUNCTION public.hide_emails(


val text)
RETURNS text AS
$BODY$
DECLARE
res text;
sed float;
row_count integer;
rand_row integer;
BEGIN
--check in mapping tables
SELECT dst into res FROM public.customers_names_map WHERE src = val;
IF FOUND = FALSE THEN
res = public.get_masked_email_login(val);
INSERT INTO public.customers_names_map VALUES (val, res);
END IF;
return res ;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;

select "LastName", hide_emails("Email") from customers

This example consists of three functions and a service table.


• The get_random_fake_login function returns TEXT (output) consisting of prefixes+mids+suffixes listed in the
DECLARE subsection of the function.
• The get_masked_email_login function gets the output of the previous function (the random values) and
transforms them into email addresses.
• Then a service table is created: customers_names_map. This table maps masked entries to the real
email entries.
• A function named hide_emails is created. It outputs the masked values. This function should be called when
creating a masking Rule.
• The last line shows how a regular SELECT query issued by a client is transformed into a query that outputs the
masked values.
2. Install your function in your database. As a rule, to install a function, you need to execute its CREATE statements in
your database's client application.
3. Create a new Dynamic masking Rule.
4. Select columns to obfuscate and select the "Function Call" masking method.
5. Locate your function and select it. The name of your function should be displayed.
6. Click Save Rule to apply new settings. Then you can query your table to get the masked data.
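
Before attaching the function to a Rule, it can be useful to call it directly and check that it behaves as expected. A minimal sketch using psql is shown below; the connection parameters and the sample email are placeholders.

# Call the masking function directly and check the mapping table it maintains.
psql -h localhost -U postgres -d mydb \
  -c "SELECT public.hide_emails('[email protected]');" \
  -c "SELECT * FROM public.customers_names_map;"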

9.11.4.2 NLP Data Masking (Unstructured Masking)


The NLP (Natural Language Processing) Dynamic Data Masking feature enables you to obfuscate sensitive data
contained in database columns that contain non-structured data. Non-structured data can be stored in the target
database in binary format (BLOB for example). First you specify a column(s) you want to obfuscate the sensitive data
in and select the "Unstructured Masking" masking method to use. Then DataSunrise parses the column's contents
and finds the sensitive data to obfuscate and replaces the sensitive data with asterisks (*).
The NLP Data Masking engine supports the following file formats:
• Microsoft Word: DOC, DOCX, RTF, DOT, DOTX, DOTM, DOCM, FlatOPC, FlatOpcMacroEnabled, FlatOpcTemplate,
FlatOpcTemplateMacroEnabled
• OpenOffice: ODT, OTT
• WordprocessingML: WordML
• Web: HTML, MHTML
• Text: TXT
• PDF (MySQL, PostgreSQL; text only, images can't be masked). Note that the asterisk character which is used to
mask data is wider than some letters, which is why asterisks may overlap text in PDF files.
Example
Unmasked data:

Procedure Findings. The patient, Patrick Kelley, is a 39 year old male born on October 6, 1979. He has
a 6 mm sessile polyp that was found in the ascending colon and removed by snare, no cautery. Patrick's
address is 19 North Ave. Humbleton WA 02462. His SSN is 123-23-234. He experienced the polyp after
getting out of his blue Honda Accord with a license number of WDR-436. We were able to control the
bleeding. Moderate diverticulosis and hemorrhoids were incidentally noted. Recurrent GI bleed of
unknown etiology; hypotension perhaps secondary to this but as likely secondary to polypharmacy. He
reports first experiencing hypotension while eating queso at Chipotle.

Masked data:

Procedure Findings. The patient, **************, is a ** year old male born on


October *, ****. He has a * mm sessile polyp that was found in the ascending colon
and removed by snare, no cautery. *******'s address is ** ********** ************** *****.
His SSN is **********. He experienced the polyp after getting out of
his blue ************ with a license number of WDR-***. We were able to control
the bleeding. Moderate diverticulosis and hemorrhoids were incidentally noted.
Recurrent GI bleed of unknown etiology; hypotension perhaps secondary to this but
as likely secondary to polypharmacy. He reports first experiencing hypotension
while eating queso ***********.

Example 2
Unmasked data:

Dear Mark,I am writing you to enquire about the status of the task #18897 in TRACKME task manager
(https://cd.trackme.com/18897). As a manager of Customer Development department, it is your
responsibility to speed up this stuck task. As far as I know, it was assigned to Ellie Sanders,
junior customer relationship manager #056. Please speed this up, because Mr. Williams is expecting to
get some insights from your research for the sales campaign which will be kicked off on 2019-11-11.
You can email me at [email protected] call me. My phone no is 202-555-0181P.S. Please check
emails from Mrs. Martinez. She was looking for you to give you some details on your business trip to
Phoenix.Cheers,Mike

Masked data:

*********, I am writing you to enquire about the status of the task #***** in ******* task manager
*****************************). As a manager of ******************** department, it is your
responsibility to speed up this stuck task. As far as I know, it was assigned to *************, junior
customer relationship manager #***. Please speed this up, because ************ is expecting to get
some insights from your research for the sales campaign which will be kicked off on **********. You
can email me at ******************* *or call me. My phone no is ************ P.S. Please check emails
from *************. *** was looking for you to give you some details on your business trip to *******.
Cheers, ***

Note: you need to install Java 1.8+ to be able to use NLP Data Masking. If you're running DataSunrise on Linux, you
need to configure JVM as well (Configuring JVM on Linux on page 173). If you're experiencing some problems with
JVM on Windows, add the path to your JVM folder to the PATH environment variable (for example: C:\Program Files
\Java\jre1.8.0_301\bin\server).

For instructions on how to use Unstructured masking, refer to subs. Dynamic Data Masking on page 164
Configuring JVM on Linux
To utilize the NLP Data Masking, you need to configure a Java Virtual Machine (JVM). To do this, perform the
following:

1. Locate the JVM library by executing the following command:

sudo find / -name "libjvm.so"

2. Copy the path to your "libjvm.so"


3. Navigate to the location of configuration files that contain library paths:

cd /etc/ld.so.conf.d/

4. Create a configuration file that will be used to register your Java library:

sudo vim java.conf

5. Paste the path to your "libjvm.so" into the configuration file. For example:

/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.201.b09-0.amzn2.x86_64/jre/lib/amd64/server/

6. Update cache:

sudo ldconfig

7. Check if the system knows about the library:

/opt/datasunrise/JvmChecker
You should get something like this:

JVM load succeeded. Version 1.8

8. Restart DataSunrise's Core to apply the settings.

9.11.4.3 Static and Dynamic Masking Using Lua Script


DataSunrise enables you to use Lua Script for Static and Dynamic masking.
To use Lua Script for masking, do the following:
1. First, you need to prepare your scripts. Navigate to Configuration → Lua Scripts and click Create Lua Script
2. Set a logical name for the script and input the script into the Script field. Note that you can see Global variables
that can be used in the script (switch to the Global Variables... tab)
3. For the process of masking configuring, refer to Creating a Dynamic Data Masking Rule on page 165. While
selecting a masking method, select Masking with Lua script and select your script in the Lua Script drop-down
list
4. For Dynamic masking, two Global variables are used: batchRecords and dataFormat. When a user SELECTs
a table to be masked, all results are saved in the batchRecords variable. This variable is an array that contains
the selected column's contents. To mask your table's entries, you should modify the values contained in
batchRecords. The dataFormat variable indicates the format of the data included in batchRecords (0 - string, 1 -
binary). See an example of a script below:

for i = 1, #batchRecords do
  if dataFormat == 0 then
    batchRecords[i] = "masked"
  end
end

This script replaces all string values (dataFormat==0) in a table with "masked" string
5. For Static masking, the following Global variables are available:
• columnName (string) - column name
• fullColumnType (string) - actual column type
• columnValue (string) - value contained in the column
• columnType (number) - column data type (0 - number, 1- string, 2 - date, 3 - date and time, 4 - time, 5 - other)
For Static masking, DataSunrise returns table's contents by rows and columns. Thus, you can mask certain
columns with a script. You should use maskValue as the output parameter. See an example of a script below:

if (columnType == 0) then
maskValue = 1
elseif (columnType == 1) then
maskValue = "masked"
elseif (columnType == 2) then
maskValue = "2017.08.09"
elseif (columnType == 3) then
maskValue = "2017.08.09 12:00:00"
elseif (columnType == 4) then
maskValue = "12:00:00"
else
maskValue = "masked"
end

This script replaces values of different types (note columnType) with corresponding values (maskValue). For
example all columns of columnType==1 (string) will be masked by replacing the contents with "masked"
string.
9.11.4.4 Extending Lua Script Functionality
You can plug-in 3rd-party Lua modules to extend DataSunrise's Lua functionality.
To access the modules in the DataSunrise's Lua snippet, do the following:
1. Use 64-bit C-compiled modules only (.dll, .so).
2. Check the modules on dependencies with the Dependency Walker application before using them.
3. Place all the modules you're going to use (and the ones they depend on) into the DataSunrise's installation folder.
4. Example. Let's assume that we're going to use a custom "cjson" module. We open DataSunrise's Lua Script
editor and add the following lines to the script:

local cjson = require "cjson"


json_text = '[ true, { "foo": "bar" } ]'
value = cjson.decode(json_text)
json_text = cjson.encode(value)
print(json_text)

Note that we specify required modules as well.


5. You can also use simple scripts as modules. Here's an example of such a module called "mymodule.lua"

local mymodule = {}
function mymodule.foo()
print("Hello World!")
end
return mymodule

To call the function included in this module in your Lua script, add the following lines to the script:

local mymodule = require "mymodule"


mymodule.foo()

9.11.4.5 Conditional Masking


The Conditional Masking option enables you to obfuscate sensitive data according to different specified conditions.
Sensitive data will be filtered and masked according to the chosen condition.
Conditional Masking is available for the following databases:
• MySQL
• MariaDB
• PostgreSQL
• Oracle
• Aurora PostgreSQL
• Aurora MySQL
• Greenplum
• Redshift
• CockroachDB
• TiDB
• MS SQL Server
You can use the following types of Conditional Masking:
• Contains (available for string data types only). Checks if the column value contains the sequence of characters specified
in the Value field.
• Does not contain (available for string data types only). Checks if the column value does not contain the sequence of
characters specified in the Value field.
• Matches. Checks if the column value fully matches the value from the Rule.
• Does not match. Checks if the column value does not match the value from the Rule.
• RegEx (available for string data types only). Checks if the column value matches a Regex pattern.
• Custom condition. Checks any custom condition that returns true/false as a result of execution. This may include
checking other columns from the table.
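
For example, a Custom condition can reference other columns of the same table. The column names below ("Country", "Age") are hypothetical and only illustrate the kind of true/false expression expected; rows for which the expression evaluates to true are masked:

"Country" = 'US' AND "Age" >= 18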
Also, you can use Conditional Masking with the KeepNull option that prevails. It means that, Conditional Masking
will not obfuscate null values, even if these values match the condition.

Important: Conditional Masking is an additional optional parameter available for all masking methods except
for FP Masking methods, Unstructured Masking, and Masking with Lua script.

9.11.4.6 Consistent Masking (Dynamic Masking)


The Consistent Masking option enables you to mask identical table entries with the same random-generated values so
that the masked data looks consistent.
Consistent Masking is based on dedicated tables (a cache) that store original values as hashes together with the
corresponding random-generated values used to replace the original values during the masking process. Cache
tables are located in the DS_Environment schema/database within your target database. You can apply Consistent
Masking either to all objects involved in a certain Rule or to objects involved in multiple Rules (see the notes below).
Note that Consistent Masking works only inside the database that contains the DS_Environment schema. Thus, if you
need to perform Dynamic masking in another database, create a DS_Environment schema there first (Configuring
DataSunrise for Masking with random-based methods on page 177).
There are two cache types:
• System cache: applicable to a current Rule only
• Custom cache: can be applied to multiple Rules
Consistent Masking is available for the following databases:
• MySQL
• MariaDB
• PostgreSQL
• Oracle
• Aurora PostgreSQL
• Aurora MySQL
Consistent masking can be used in conjunction with the following masking methods (Masking Methods on page
167):
• Random String
• Random from Lexicon
• Random Credit Card
• Random Email
• Random Value Like Current
Using Consistent Masking (Dynamic Masking)
To use the Consistent Masking option in your Dynamic masking process, do the following:
1. Create the DS_ENVIRONMENT schema/database in your target database instance (Configuring DataSunrise for
Masking with random-based methods on page 177)
2. Create a Masking Rule for your target database: select database columns to mask in Masking Settings and
select a masking method compatible with Consistent Masking
3. In Mask Data Consistently, select either Use System Cache or your custom cache if it exists. Note that system
cache works for a current Rule only and can't be applied to multiple Masking Rules
4. To create a custom cache, click the Plus (+) button and name your cache
5. You can manage custom caches in Configuration → Masking Caches: you can either clean or delete a cache.
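
As an illustration (the table, column and generated values below are invented), Consistent Masking guarantees that identical original values always receive the same replacement while the cache entry exists:

-- both rows originally contain the same e-mail address john@example.com
SELECT "ID", "Email" FROM public.customers WHERE "ID" IN (1, 2);
-- 1 | kinadurli42@parwelni.com
-- 2 | kinadurli42@parwelni.com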
Using Consistent Masking (Static Masking)
Static Consistent Masking is based on the usage of Masking Keys (Generating a Private Key Needed for Data
Masking). It can be used in conjunction with random-based methods. To use the Consistent Masking option in
your Static masking process, do the following:
1. Create a Static Masking task: select columns to mask and choose for a certain column a masking method
compatible with Consistent Masking
2. Enable the Consistent Masking check box and select a Masking Key.
For Static Masking, use the same Masking Key to replace the original values of several columns with the same
random-generated values.

9.11.6 Configuring DataSunrise for Masking with random-based methods
For Masking using masking methods based on random-generated values, you need a dedicated schema/database
(DS_Environment) in your database. This schema is intended to be used as a storage for the database objects such as
functions and cache tables required for Masking with random-based methods.
A DS_Environment schema is required for the following masking methods:
• Random Email
• Random string
• Random from Lexicon
• Random Credit Card Number
• US ZIP Code
• Mask Data Consistently (see Consistent Masking (Dynamic Masking) on page 176)
• Regex Replace, Random Value Like Current (MS SQL only)
You can create a DS_Environment schema either automatically or manually:
1. To create a DS_Environment schema automatically, check the Automatically Create Environment check
box in your database Instance's settings (Configuration → Databases → <Your DB Instance> → Advanced
Parameters). Note that you need your database user created for DataSunrise (Creating Database Users Required
for Getting the Database's Metadata on page 63) to be granted with high privileges that may not be acceptable in
your situation.
2. The another option is to create a DS Environment schema manually by executing required queries (see
subsections below).

9.11.6.1 Creating a "DS_Environment" in PostgreSQL/Aurora PostgreSQL


To create a DS_Environment manually, execute the following queries in your target database:

CREATE SCHEMA IF NOT EXISTS "DS_ENVIRONMENT";


GRANT USAGE ON SCHEMA "DS_ENVIRONMENT" TO public;
GRANT ALL PRIVILEGES ON SCHEMA "DS_ENVIRONMENT" to <User_name>;
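
Optionally, you can verify that the schema is visible to the database user DataSunrise connects with. The following check is a sketch using the standard PostgreSQL catalog and is not required by DataSunrise:

SELECT schema_name FROM information_schema.schemata WHERE schema_name = 'DS_ENVIRONMENT';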

9.11.6.2 Creating a "DS_Environment" in Oracle


To create a DS_Environment manually, execute the following queries in the target database:

ALTER USER <User_name> QUOTA 50M ON USERS;


GRANT RESOURCE TO <User_name>;

9.11.6.3 Creating a "DS_Environment" in SQL Server


To create a DS_Environment manually, use the environment_permissions.sql script located in
<DataSunrise_installation_folder>\scripts\mssql\
9.11.6.4 Creating a "DS_Environment" in Redshift
To create a DS_Environment manually, execute the following queries in your target database:

CREATE SCHEMA "DS_ENVIRONMENT";

GRANT ALL ON SCHEMA "DS_ENVIRONMENT" TO <User_name>;
GRANT USAGE ON LANGUAGE plpythonu TO <User_name>;

9.11.6.5 Creating a "DS_Environment" in Greenplum


To create a DS_Environment manually, execute the following queries in the target database:

create schema if not exists "DS_ENVIRONMENT";


GRANT USAGE ON SCHEMA "DS_ENVIRONMENT" TO public;
GRANT ALL privileges ON SCHEMA "DS_ENVIRONMENT" TO <User_name>;

9.11.6.6 Creating a "DS_Environment" in MySQL/Aurora MySQL/MariaDB


To create a DS_Environment manually, execute the following queries in the target database:

CREATE DATABASE `DS_ENVIRONMENT`;


GRANT ALL ON `DS_ENVIRONMENT`.* to '<User_name>'@'%';

--for all database users


GRANT EXECUTE ON `DS_ENVIRONMENT`.* TO 'any_user'@'%';

Important: AWS RDS MariaDB 10.5+ doesn't support the GRANT ALL privilege. In case your database doesn't
support GRANT ALL, execute the following query:

create database `DS_ENVIRONMENT`;


GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES,
LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER ON
`DS_ENVIRONMENT`.* TO '<User_name>'@'%';

--for all database users:


GRANT EXECUTE ON `DS_ENVIRONMENT`.* TO 'any_user'@'%';

9.11.7 Masking XML, CSV, JSON and Unstructured Files Stored in Amazon S3 Buckets
DataSunrise can mask columns of CSV files, text inside XML elements of XML files, keys' values of JSON files, and
contents of unstructured files stored in Amazon S3 buckets or S3 protocol-compatible file storage services such as
Minio and Alibaba OSS. To do this, follow the steps listed below:
1. Create a Dynamic Masking Rule (Dynamic Data Masking on page 164).
2. In the Masking Settings subsection, select the required file type (CSV, XML or JSON) and input the required
information according to the table below:

Parameter Description
Full File Name text field Path to the file which should be masked. Note that it should start
with "/". For example: /mybucket/customers.xml
Fields/Columns text field Names of columns or numbers of columns to mask for CSV or text
inside tags to mask for XML. For example: first_name,last_name
Note that CSV column numbers start with 1 and not 0.
According to the following example, the first_name
and last_name columns of a CSV file will be masked:
first_name,last_name,middle_name masked,masked,Jonathan
masked,masked,Robert

And the text inside the <first_name> and <last_name> tags will be
masked for XML:
<first_name>masked</first_name> <last_name>masked</
last_name>

For XML, an abridged version of XPath is used. You should specify the
tag whose contents should be masked in the following way:
/root_tag/sub_tag1/sub_tag2/sub_tag.../target_tag

For your convenience, you can use "?" for one nesting level or "*" for
unknown number of nesting levels. For example:
/*/first_name means that the contents of all "first_name" tags in the
XML file will be masked.
JSON Path text field (for JSON only) Keys' values of a JSON file to be masked.
Here's a JSON example:
[ { "firstName": "John", "lastName" : "doe", "age" : 26, "address" :
{ "streetAddress": "Naist street", "city" : "Nara", "postalCode" :
"630-0192" }, ]

In this case, the JSON Path may look like: $..address..city


$.address.streetAddress
For your convenience, you can use "?" for one nesting level or "*" for
unknown number of nesting levels. For example:
/*/city /*/streetAddress
Filler text field A placeholder to replace the masked values with. For example:
masked
Additional Options subsection (CSV only) Configuring of special characters. Note that the Row Delimiter should
match the corresponding character in the CSV file to be masked: for
Unix/Linux it's "\n", for Windows it's "\r\n".

3. Click Save Rule to apply the new settings.

9.11.8 Informix Dynamic Masking Additional Info


By default, Informix doesn't include some functions required for Dynamic masking (EmailMasking, EmailMaskingFull,
EmailMaskingUserName, RegexpReplace, MaskUrl, RandomFromInterval, RandomValueLikeCurrent). DataSunrise
installs the required functions automatically when you create an Informix Instance in Configuration → Databases so
you don't need to do it manually.

Note: if needed, you can find the Informix masking scripts in the scripts/Masking folder of the DataSunrise
installation folder.

9.11.9 Cassandra Masking Additional Info


Cassandra masking relies on calling of dedicated functions on the server side. These functions are created during the
database Instance creation. By default, the usage of user-defined functions is disabled in the server's settings, so you
need to enable it before using the DataSunrise Masking features. To enable functions, do the following:
1. Locate the cassandra.yaml file on the Cassandra server. Open it.
2. Enable the enable_user_defined_functions parameter (change its value to true).
3. Restart the Cassandra server.
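
For reference, the relevant fragment of cassandra.yaml after the change should look as follows (only the parameter mentioned above is shown):

enable_user_defined_functions: true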

9.11.10 Enabling Dynamic Masking for Teradata 13


If you're going to use DataSunrise's masking on a Teradata 13 database, you should enable it by executing certain
queries in the Teradata client to install the required masking functions in the database. The DataSunrise distribution
includes required SQLs located in the <DataSunrise_installation_folder>\scripts\Masking\Teradata folder. Do the
following steps:
1. Navigate to the <DataSunrise_installation_folder>\scripts\Masking\Teradata folder and open the required *.c file
in a text editor.
2. Note the header subsection of the file which is commented out. Locate the EXTERNAL NAME string and
specify the location of your file in this line instead of "C:\Program Files\DataSunrise Database Security Suite
\teradata13MaskFunctions\*.c"
3. Execute the query included in the header of a file in your Teradata client application to install the function. Note
that you don't need to execute the *.c file itself.
4. Some DataSunrise masking methods use UDF functions. DataSunrise creates these functions automatically when
you create a masking Rule. To enable DataSunrise to create these functions, install the BTEQ 13.20-16.20 client.
UDF functions are required for the following masking methods:
• Mask first chars, Mask last chars, Mask first and last chars
• Show first chars, Show last chars, Show first and last chars
• Email masking, Email masking full, Mask username of email, Credit card masking.

9.11.5 Data Masking


Data Masking is used to protect confidentiality of personally identifiable data, personal sensitive data or
commercially sensitive data when it is required for valid test cycles. DataSunrise obfuscates the real database
content and displays fake values thus preserving the genuine structure of the information. It is a helpful tool to mask
credit card numbers, phone numbers, email addresses, medical information, etc. from unauthorized users.
DataSunrise features both Static and Dynamic data masking. DataSunrise supports masking of table calls included in
functions and stored procedures.

Important: Random Email, Random string, Random from Lexicon, Random Credit Card Number and Regexp replace
(MS SQL only) masking methods (refer to Masking Methods on page 167) require creation of a dedicated schema
or database called DS_ENVIRONMENT (by default) to store tables and views needed to perform masking using the
aforementioned methods. This is applicable both to Dynamic and Static masking. You can change your Environment
name in Configuration → Databases → Your DB Instance → Advanced Settings → Environment Name.
9.11.1 Generating a Private Key Needed for Data Masking
To use Format-Preserving Masking methods (Format-Preserving Masking on page 177) and Random methods with
the Consistent masking option enabled (for Static Masking), you need to create an encryption key. For this, do the
following:
1. To create a new key, navigate to Masking → Masking Keys and click Add Key. Either generate a new key at the
Generate tab or navigate to the Insert tab and paste/upload an existing key.
2. You can find masking keys in the Masking → Masking Keys section. You can edit your keys but note that they
are of fixed length.

9.11.2 Dynamic Data Masking


Dynamic Masking is performed on-the-fly, so it doesn't require additional resources to store the copied database.
While constructing a response to a query, DataSunrise replaces actual values in the query results with random
values, predefined values or special characters.

Figure 36: Dynamic Data Masking

For relational databases, DataSunrise modifies the incoming query itself, making the target database construct
a response with obfuscated data inside. For NoSQL databases (DynamoDB, Mongo, Elasticsearch), DataSunrise
modifies the database response before redirecting it to a client application.
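
For NoSQL databases, the effect is easiest to see on the returned documents. The following MongoDB-style illustration is invented (the document shape, values, and exact masked form depend on the chosen masking method):

// document returned without masking
{ "_id": 1, "name": "John", "email": "john@example.com" }
// the same document after DataSunrise modifies the response with an email-masking method
{ "_id": 1, "name": "John", "email": "j***@******.**m" }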
An example of a SELECT query before masking applied (PostgreSQL)

SELECT * FROM public.customers ORDER BY "Email" ASC

An example of a SELECT query after masking applied (PostgreSQL, the "Email" column is being masked)

SELECT public.customers."Order", public.customers."FirstName", public.customers."LastName",
public.customers."Address", public.customers."State", public.customers."ZIP",
CAST(regexp_replace(trim(public.customers."Email"), '[[:alnum:]](?![[:alnum:]]*($| ))', '*', 'g') as
character varying(30)) as "Email", public.customers."Card" FROM public.customers
ORDER BY CAST(regexp_replace(trim("Email"), '[[:alnum:]](?![[:alnum:]]*($| ))', '*', 'g') as
character varying(30)) ASC

Besides relational and NoSQL databases, DataSunrise can also mask the contents of CSV files stored in Amazon S3
buckets. The masking is done on selected comma-separated fields.

Restriction: there is a limitation associated with using stored procedures for Dynamic Masking.
Let's assume that two masking Rules exist and each Rule is configured to be triggered when a certain column
is SELECTed: the first Rule is configured on "column1" and the second Rule is configured on "column2". If both
columns are SELECTed using a stored procedure, only the second Rule will be triggered.

Restriction: for AWS RDS-hosted MariaDB, dynamic masking inside functions and procedures doesn't work
because admin privileges required for masking inside routines can't be obtained on RDS databases.

Important: for Dynamic Masking using random-based methods, you need a dedicated schema (DS_Environment) in
your database (see Configuring DataSunrise for Masking with random-based methods on page 177).

9.11.3 Creating a Dynamic Data Masking Rule


To create a new Rule, do the following:

1. Navigate to the Masking → Rules section and click Add Rule.


2. Input the required information to the Main section (General Settings on page 111).
3. Input the required information to the Action subsection. For parameters common for all Rules, refer to the Audit
Rules description:
Parameter Description
Keep Row Count check box Disable masking of columns included in GROUP BY,
HAVING, ORDER BY, WHERE clauses.
Mask SELECTs Only check box Mask only SELECT queries. For example, the following
query will not be masked:
UPDATE customers SET id = id RETURNING *

Action drop-down list Select an appropriate option from the list to block
certain queries aimed at modification of masked
columns. For example, such queries might be blocked
(the Email column is the masked column):
UPDATE test.customers SET "Order" = '1234' WHERE
"Email" = '[email protected]';

4. Input the required information to the Filter sessions subsection (Filter Sessions on page 111).
5. Input the required information to the Masking Settings subsection:

Parameter Description
Mask Data subsection Specify database columns to mask. Click Select to do it manually and select
the required columns in the objects tree. Click Select then ADD REGEXP to use
regular expressions.
Masking Method drop-down Data obfuscation algorithm. Refer to Masking Methods on page 167.
list (for Mask Data only)
Hide Rows subsection Hide table rows which don't match the specified Masking Value. Refer to Masking
Methods on page 167. Click Select to select a table to hide rows in.
Condition for Column Value Condition on the column value that determines which rows are shown (any
to Show Rows field (for Hide WHERE-type conditions). For example, Age>25 means that the table rows where
Rows only) the Age column's value is 25 or less will be hidden.
More examples:
LastName = 'Smith'
LastName LIKE ('%Smi%')
EmployeeKey <= 500
EmployeeKey = 1 OR EmployeeKey = 8 OR EmployeeKey = 12
EmployeeKey <= 500 AND LastName LIKE '%Smi%' AND FirstName LIKE '%A%'
LastName IN ('Smith', 'Godfrey', 'Johnson')
EmployeeKey Between 100 AND 200

Note: if you select a column(s) associated with another column (linked with a primary key, for example), you will
be notified that there are other columns that contain related data. Click on this message to select the associated
columns. Once you select them, these columns will be added to the list of columns to be masked. More on
associations: Table Relations on page 400.

6. Check Disable Rule if you don't need it active.


7. Input Tags if necessary (Tags on page 199)
8. Click Save Rule to apply the new settings.
9.11.4 Masking Methods
When creating a Masking Rule, it is necessary to specify which data obfuscation algorithm DataSunrise should
employ. Use the Masking Method drop-down list to select one of the following algorithms:
Masking type Description DM SM
Default INT-type values are replaced with zeroes (0) and STRING-type values are + +
replaced with white spaces.
Fixed String STRING-type values are replaced with a predefined string. + +
Empty value STRING-type values are replaced with white spaces. + +
Function call Calling a user-created custom function for data obfuscation. You can pass + +
various parameters to a function by clicking Add Parameter and selecting
the required parameter in the drop-down list.
Email masking The user name and domain section of email addresses are replaced + +
with "*", except the first one and the last one in a row. For example:
a***@**.**m.
Email masking full The user name and domain sections of email addresses are replaced with "*", except + +
the "@" character and top-level domain name. For example: ***@**.com.
Mask username of Masking the user name section of email addresses with "*". For example: + +
Email ***@datasunrise.com.
Credit card Masking credit card numbers. It displays the last four digits of a credit card + +
masking number, other characters are replaced with "X". For example: XXXX-XXXX-
XXXX-1234.
Mask last chars Masking a specified number (Character Count) of database entry's last + +
symbols.
Show last chars Showing a specified number (Character Count) of database entry's last + +
symbols.
Mask first chars Masking a specified number (Character Count) of database entry's first + +
symbols.
Show first chars Showing a specified number (Character Count) of database entry's first + +
symbols.
Show first and last Showing a specified number (Character Count) of database entry's first + +
chars and last symbols.
Mask first and last Masking a specified number (Character Count) of database entry's first + +
chars and last symbols.
Regexp replace Replacing regular expressions with a predefined string (specify Replace + +
By). You need to specify a pattern (Replacing Pattern) to search for
regular expressions in columns. Available for all the supported databases
except Sybase.

Important: Regexp Replace is not available for MS SQL Server databases
hosted on cloud services such as AWS RDS and MS Azure SQL Database

Mask URL Masking of URL addresses. + +


Unstructured Replacing potentially sensitive values with the "*" character. + +
masking
Masking with Lua Masking using a Lua script created by user. + +
script
FP Tokenization Format-preserving masking for emails. + +
Email
FP Tokenization Format-preserving masking for SSNs. + +
SSN
FP Tokenization Format-preserving masking for credit card numbers. Works with realistic + +
Credit Card credit card numbers only (the ones created using the Luhn algorithm).
FP Tokenization Format-preserving masking for STRING-type values. You can select an + +
String alphabet to use when encrypting data in the Alphabets drop-down list.

Important: if you're going to use this masking method, ensure that your
case sensitivity settings correspond to the case sensitivity settings of the
database server you're going to mask data at.

FP Tokenization Format-preserving masking for NUMBER-type values. + +


Number
FP Encryption FF3 Format-preserving encryption for emails using the FF3 encryption + +
Email algorithm.
FP Encryption FF3 Format-preserving encryption for SSNs using the FF3 encryption + +
SSN algorithm.
FP Encryption FF3 Format-preserving encryption for credit card numbers using the FF3 + +
Credit Card encryption algorithm. Works with realistic credit card numbers only (the
ones generated using the Luhn algorithm).
FP Encryption FF3 Format-preserving encryption for STRING-type values using the FF3 + +
String encryption algorithm. You can select an alphabet to use when encrypting
data in the Alphabets drop-down list.

Important: if you're going to use this masking method, ensure that your
case sensitivity settings correspond to the case sensitivity settings of the
database server you're going to mask data at.

FP Encryption FF3 Format-preserving encryption for NUMBER-type values using the FF3 + +
Number encryption algorithm.
Random US Phone Replacing of a US phone number with a random-generated phone number + +
Number in the following format : 1-555-XXX-XXXX. Available for MySQL, MariaDB,
Aurora MySQL, PostgreSQL, Aurora PostgreSQL, Redshift, TiDB, Greenplum,
Oracle and MS SQL Server.
NULL Value Replaces masked database entry with a NULL. + +
Substring Creating of a substring out of the original string. Starting position defines + +
the starting character of the resulting substring and String's Length
defines the substring length. Available for MySQL, MariaDB, Aurora MySQL,
Oracle, Redshift, PostgreSQL, Aurora PostgreSQL, TiDB, Greenplum, MS
SQL Server.
Random String Returns a random string of a random length (the string's length can be + +
defined with the Minimum Length and Maximum Length). Available for
MySQL, MariaDB, Aurora MySQL, Redshift, PostgreSQL, Aurora PostgreSQL,
Greenplum and Oracle.
Random US Social US SSN masking supports Numeric and String column types. Masks the + +
Security Number value with random numbers as follows: AAA-BB-CCCC (for String columns)
and AAABBCCCC (for Numeric columns). Available for MySQL, MariaDB,
Aurora MySQL, Oracle, PostgreSQL, Aurora PostgreSQL, Redshift, TiDB,
Greenplum, MS SQL Server.
Random Credit Replacing credit card numbers with random numbers created according to + +
Card the Luhn algorithm. Available for MySQL, MariaDB, Aurora MySQL, Amazon
Athena, Redshift, PostgreSQL, Aurora PostgreSQL, Greenplum, MS SQL
Server and Oracle. Supports String and Numeric column types.
Random Email Replacing emails with random characters like the following: + +
[email protected]. Available for MySQL, MariaDB, Aurora MySQL,
Redshift, PostgreSQL, Aurora PostgreSQL, Greenplum, MS SQL Server,
Oracle .
RandomValue Replaces masked database entry with a random entry from Lexicon. + +
From Lexicon Available for MySQL, Aurora MySQL, MariaDB, MS SQL Server, PostgreSQL,
Aurora PostgreSQL, Oracle. Note that you need some additional grants
to use this method. Refer to Creating a MySQL/Aurora MySQL/MariaDB
Database User on page 240.
US ZIP Code Masking of 5-digit ZIP codes according to the US De-Identification + +
Masking Standard. If the population of the area covered by the ZIP code to be masked is less
than 20000, then all the digits of the ZIP code will be replaced with zeros. If the
population is more than 20000, then the first three digits will be left intact and
the other two digits will be replaced with zeros.
Fixed Number NUMBER-type and INT-type values are replaced with predefined values. + +
Random value like Database entry is replaced with random values. + +
current
Random from Numeric and String column types are replaced with values from the + +
interval specified range (specify the minimum value (Min) and the maximum value
(Max) of the range).
If the Decimal Numbers generation checkbox is enabled, it generates
random values in the part after the decimal position. The number of
characters after the decimal position will be equal or less than the value in
the Number of Decimal Digits field. Decimal numbers are generated only
for non-integer data types. For floating-point data types specified number
of decimal digits is not guaranteed.

Fixed date Replacing date values with a fixed value. Select date (fixed value) via (Date) + +
drop-down lists.
Fixed time Replacing time values with a fixed value. Select time (fixed value) via (Time) + +
drop-down lists.
Fixed datetime Replacing time values with a fixed value. + +
Random date Replacing date values with a random value from a predefined range. + +
interval Specify a range of dates to select a random value from, via Starting Date
and Ending Date drop-down lists.
Random time Replacing time values with a random value from a predefined range. + +
interval Specify a range of time to select a random value from, via Starting Time
and Ending Time drop-down lists.
Random date Replacing date values with random values from a predefined range. Specify + +
offset the maximum deviation (days) of the "masked" date from the initial date in
Max Dispersion, day field. Supports String column types.
Random time Replacing time values with random values from a predefined range. Specify + +
offset the maximum deviation (hours, minutes, seconds) of the "masked" time
from the initial time in Max Dispersion, hours field. Supports String
column types as well.
Random datetime Replacing time values with random values from a specified interval (you + +
interval need to set Starting Date/Time and Ending Date/Time values of the
interval)
Random datetime Replacing time values with random values from a predefined range. + +
offset Supports String column types as well.
Hide rows Hiding table rows which don't match your Masking Value (Condition for +
Column Value to Show Rows). For example, Age>25 means that the table
rows where the Age column's value is 25 or less will be hidden.
Random Date with Masking of dates according to HIPAA. All elements of dates (except year) + +
Constant Year for dates that are directly related to an individual, including birth date,
admission date, discharge date, death date, and all ages over 89 and all
elements of dates (including year) indicative of such age, except that such
ages and elements may be aggregated into a single category of age 90 or
older.
Random Datetime Masking of dates and time according to HIPAA. All elements of dates + +
with Constant Year (except year) for dates that are directly related to an individual, including
birth date, admission date, discharge date, death date, and all ages over
89 and all elements of dates (including year) indicative of such age, except
that such ages and elements may be aggregated into a single category of
age 90 or older.
Mask Data Masking with a permanent random value based on using a masking cache. + +
Consistently This option is available for Random String, Random From Lexicon, Random
Credit Card, Random Email. Select Use System Cache in the Mask Data
Consistently drop-down list or create a new cache for your Masking Rule.

Warning: Sometimes data masking will not work. For example, if the Show First and Last algorithm you selected is
configured to show the first three and the last three characters of a DB column's entry, and the entry itself is six
characters long, nothing is actually hidden (for a six-character value such as "abc123", all characters stay visible). In
such cases, use other masking types or purpose-written functions.

Note: when masking entries that include strings of fixed length ("char", "varchar", "nchar", "nvarchar" data types
for example), the string obtained after masking may be longer than the original string. The following masking types may
cause an obfuscated entry to exceed the original string length:
• Fixed string
• Function call
• Regexp replace
9.11.4.1 Using a Custom Function for Masking
Along with prebuilt masking methods, you can use your own masking algorithms in the form of functions. To
employ custom function-based masking, do the following:
1. Create a function that will be used to mask your data. For example, here is a function for PostgreSQL database
supposed to replace logins of emails with random values (consisting of prefixes + mids + suffixes):

CREATE OR REPLACE FUNCTION public.get_random_fake_login()
RETURNS TEXT
AS $$
DECLARE
prefixes TEXT[] :=
'{bel,nar,gob,ab,ad,a,ac,as,ben,co,alm,cha,che,dea,kit,mac,par,ren,sie,sto}';
mids TEXT[] := '{adur,aes,ten,mar,sta,er,wa,le,kin,tow,han,an,tar,ou,eva,gag,urn,cac}';
suffixes TEXT[] := '{ux,ix,li,ci,cia,oth,wood,nen,oli,oir,ort,int,lin,ne,ns,si,hu,well}';
output TEXT := '';
BEGIN
output := prefixes[1+random()*(array_length(prefixes, 1)-1)] ||
mids[1+random()*(array_length(mids, 1)-1)] || suffixes[1+random()*(array_length(suffixes, 1)-1)];
IF random() > 0.5 THEN
output := output || trunc(random() * (90-70) + 70)::TEXT;
END IF;
RETURN output;
END;
$$ LANGUAGE PLPGSQL;

CREATE OR REPLACE FUNCTION public.get_masked_email_login(a text)
RETURNS TEXT
AS $$
SELECT regexp_replace(a, '.*(?=.*@.*)', public.get_random_fake_login());
$$ LANGUAGE SQL;

drop table if exists public.customers_names_map;
create table public.customers_names_map
(
src text PRIMARY KEY,
dst text
);

CREATE OR REPLACE FUNCTION public.hide_emails(val text)
RETURNS text AS
$BODY$
DECLARE
res text;
sed float;
row_count integer;
rand_row integer;
BEGIN
--check in mapping tables
SELECT dst into res FROM public.customers_names_map WHERE src = val;
IF FOUND = FALSE THEN
res = public.get_masked_email_login(val);
INSERT INTO public.customers_names_map VALUES (val, res);
END IF;
return res ;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;

select "LastName", hide_emails("Email") from customers

This example includes three functions, a service table, and a sample query:


• The get_random_fake_login function returns TEXT (output) consisting of prefixes+mids+suffixes listed in the
DECLARE subsection of the function.
• The get_masked_email_login function gets the output of the previous function (the random values) and
transforms them into email addresses.
• Then a service table is created: customers_names_map. This table contains masked entries mapped to real
email entries.
• A function named hide_emails is created. It outputs the masked values. This function should be called when
creating a masking Rule.
• The last line shows how a regular SELECT query issued by a client is transformed into a query that outputs the masked
values.
2. Install your function in your database. As a rule, to install a function, you need to execute its definition in your database's
client app.
3. Create a new Dynamic masking Rule.
4. Select columns to obfuscate and select the "Function Call" masking method.
5. Locate your function and select it. The name of your function should be displayed.
6. Click Save Rule to apply new settings. Then you can query your table to get the masked data.
9.11.4.2 NLP Data Masking (Unstructured Masking)
The NLP (Natural Language Processing) Dynamic Data Masking feature enables you to obfuscate sensitive data
in database columns that contain unstructured data. Unstructured data can be stored in the target
database in binary format (BLOB, for example). First, you specify the column(s) in which you want to obfuscate
sensitive data and select the "Unstructured Masking" masking method. DataSunrise then parses the column's
contents, finds the sensitive data, and replaces it with asterisks (*).
The NLP Data Masking engine supports the following file formats:
• Microsoft Word: DOC, DOCX, RTF, DOT, DOTX, DOTM, DOCM, FlatOPC, FlatOpcMacroEnabled, FlatOpcTemplate,
FlatOpcTemplateMacroEnabled
• OpenOffice: ODT, OTT
• WordprocessingML: WordML
• Web: HTML, MHTML
• Text: TXT
• PDF (MySQL, PostgreSQL. Text only, images can't be masked). Note that the asterisk character which is used to
mask data is wider than some letters. This is why asterisks may overlap the surrounding text in PDF files.
Example
Unmasked data:

Procedure Findings. The patient, Patrick Kelley, is a 39 year old male born on October 6, 1979. He has
a 6 mm sessile polyp that was found in the ascending colon and removed by snare, no cautery. Patrick's
address is 19 North Ave. Humbleton WA 02462. His SSN is 123-23-234. He experienced the polyp after
getting out of his blue Honda Accord with a license number of WDR-436. We were able to control the
bleeding. Moderate diverticulosis and hemorrhoids were incidentally noted. Recurrent GI bleed of
unknown etiology; hypotension perhaps secondary to this but as likely secondary to polypharmacy. He
reports first experiencing hypotension while eating queso at Chipotle.

Masked data:

Procedure Findings. The patient, **************, is a ** year old male born on
October *, ****. He has a * mm sessile polyp that was found in the ascending colon
and removed by snare, no cautery. *******'s address is ** ********** ************** *****.
His SSN is **********. He experienced the polyp after getting out of
his blue ************ with a license number of WDR-***. We were able to control
the bleeding. Moderate diverticulosis and hemorrhoids were incidentally noted.
Recurrent GI bleed of unknown etiology; hypotension perhaps secondary to this but
as likely secondary to polypharmacy. He reports first experiencing hypotension
while eating queso ***********.

Example 2
Unmasked data:

Dear Mark,I am writing you to enquire about the status of the task #18897 in TRACKME task manager
(https://cd.trackme.com/18897). As a manager of Customer Development department, it is your
responsibility to speed up this stuck task. As far as I know, it was assigned to Ellie Sanders,
junior customer relationship manager #056. Please speed this up, because Mr. Williams is expecting to
get some insights from your research for the sales campaign which will be kicked off on 2019-11-11.
You can email me at [email protected] call me. My phone no is 202-555-0181P.S. Please check
emails from Mrs. Martinez. She was looking for you to give you some details on your business trip to
Phoenix.Cheers,Mike

Masked data:

*********, I am writing you to enquire about the status of the task #***** in ******* task manager
*****************************). As a manager of ******************** department, it is your
responsibility to speed up this stuck task. As far as I know, it was assigned to *************, junior
customer relationship manager #***. Please speed this up, because ************ is expecting to get
some insights from your research for the sales campaign which will be kicked off on **********. You
can email me at ******************* *or call me. My phone no is ************ P.S. Please check emails
from *************. *** was looking for you to give you some details on your business trip to *******.
Cheers, ***

Note: you need to install Java 1.8+ to be able to use NLP Data Masking. If you're running DataSunrise on Linux, you
need to configure JVM as well (Configuring JVM on Linux on page 173). If you're experiencing some problems with
JVM on Windows, add the path to your JVM folder to the PATH environment variable (for example: C:\Program Files
\Java\jre1.8.0_301\bin\server).

For instructions on how to use Unstructured masking, refer to Dynamic Data Masking on page 164.
Configuring JVM on Linux
To utilize the NLP Data Masking, you need to configure a Java Virtual Machine (JVM). To do this, perform the
following:

1. Locate the JVM library by executing the following command:

sudo find / -name "libjvm.so"

2. Copy the path to your "libjvm.so"


3. Navigate to the location of configuration files that contain library paths:

cd /etc/ld.so.conf.d/

4. Create a configuration file that will be used to register your Java library:

sudo vim java.conf

5. Paste the path to your "libjvm.so" into the configuration file. For example:

/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.201.b09-0.amzn2.x86_64/jre/lib/amd64/server/

6. Update cache:

sudo ldconfig

7. Check if the system knows about the library:

/opt/datasunrise/JvmChecker
You should get something like this:

JVM load succeeded. Version 1.8

8. Restart DataSunrise's Core to apply the settings.


9.12 Learning Mode Overview


To make it easier to create Data Audit, Security and Masking Rules and to make them more effective, DataSunrise
features the Learning Mode.
The Learning Mode is a system of Learning Rules. While running Learning Rules, DataSunrise analyzes database
traffic and captures all the queries, database objects (schemas, tables, and columns these queries address),
applications, and function calls, and puts them into groups. These groups are formed while your Learning Rule is active.
You can use the collected data in your Audit, Security and Masking rules (in the “Filter Statements” and “Filter
Sessions” section of any rule depending on what type of data you want to use). The same group (database objects,
queries, applications, etc.) can be used for different Audit, Security and Masking rules or combined, which gives a
good opportunity to fine-tune your database protection.
You can activate the Learning Mode for a certain period of time in several ways:
• Set up your Learning Rule to run and capture the required data for a certain period of time and later switch it off
manually.
• Use the “Schedule” option in the settings of a Learning Rule and make it active only on certain days and at time
periods that suit you best.

9.12.1 Creating a Learning Rule


To create a Learning Rule, do the following:

1. Navigate to the Audit → Learning Rules section and click Add Rule
2. Enter the required information to General Settings (General Settings on page 111)
3. Input the required information to the Filter sessions subsection (Filter Sessions on page 111)
4. Input the required information to the Actions subsection
Interface element Description
Learn radio button Log incoming queries, database objects, database user names and client application names and add them to the predefined SQL groups
Skip radio button Ignore incoming queries
Keep Checking the List of Rules check box Check other existing Rules even if the current one is triggered
Schedule drop-down list See Creating a Schedule on page 219

5. Fill out the Filter SQL Statements subsection:


Interface element Description
SQL Statements tab
Save Statements in the Group drop-down list An SQL group DataSunrise should add logged SQL
statements to. Click "Plus" (+) to add a new group to
the list
Save Objects in the Group drop-down list An Object group DataSunrise should add logged objects
to. Click "Plus" (+) to add a new group to the list
Save Users in the Group drop-down list A User group DataSunrise should add logged users to.
Click "Plus" (+) to add a new group to the list
Save Applications check box Select Yes to create client application name entries (refer
to Creating a Client Application Profile on page 211)
SQL queries check boxes Select a query type to learn from
Table Relations tab
Process Tables from drop-down list Process queries either captured in a current Rule or
included in an Object Group
Process Query to Database Objects Process queries directed to selected DB objects
Skip Tables from drop-down list Skip queries either captured in a current Rule or
included in an Object Group
Skip Query to Database Objects Skip queries directed to selected DB objects
Save Relations in Table Relation drop-down list Table Relation to save detected table relations in (see
Table Relations on page 400)

6. Input tags if necessary (Tags on page 199)


7. Click Save Rule to save the Rule's settings.

9.13 Tags
You can assign certain tags to DataSunrise Rules. You can use these tags to quickly locate your Rule or Rules in a list
of Rules. This subsection is common for all types of Rules.
To create a tag for a Rule, do the following:
• Create or open an existing Rule and navigate to the Tags subsection of the Rule's settings
• Click Edit and enter the tag's Key (its logical name) and the tag's Value
• Click Save to save the tag. Once the tag is saved, you will be prompted to create a new tag. Create a new one or
click Close to close the Tags window and save the tags you've created.
Having created a tag, you will be able to see it in the Rule's list (Tags column). You can also click Edit Columns (gear
icon) and select your tag from the list of columns to display all Rules marked with this tag.
To filter Rules by tags, click Filter and select Tags to view.

9.14 Viewing Transactional Trails (Audit Events)
DataSunrise enables viewing detailed information about transactional trails (audit events). "Transactional trails" are
database user queries and query execution results that triggered existing Data Audit Rules and were logged by the
Data Audit functionality.
Though you can view transactional trails in manual mode, it is highly recommended to use third-party SIEM systems
for extensive analysis (refer to Syslog Integration Settings on page 389).
To view transactional trails in manual mode, do the following:
1. Click Transactional Trails.
2. Specify a date range to display. Use the From drop-down list to select the initial date and the To drop-down list
for the end date of the range. DataSunrise will display a list of transactional trails (in the form of a table).

Note: To display or hide certain columns, use the Options (gear wheel) button at the top right corner of the list.

Audit events list column Description


ID Event identifier (not used outside DataSunrise).
Database Type Database type.
Session Session.
Operation ID Operation ID.
Execution ID Execution ID.
Rule Link to an Audit Rule which was triggered by the SQL query.
Login Name of a database user that queried the database.
Application Name of a client application which was used to query the database.
Application User App user name.
Instance Database instance a logged query was directed to.
Query Query's code.
Time Time at which an SQL query was intercepted.
Rows Number of database rows affected by the intercepted queries.
Error Database error (if occurred).
Query Type Query type.
Transaction ID Transaction ID. Displays all queries included into transaction.
Transaction State Transaction state (Prepared, Committed, Rolled Back, Opened).

3. Select the ID of a required event in the list to view its details.


4. DataSunrise displays the query's SQL code (SQL Query), the query execution results (Query Results) if the Rule is
configured to log query results, and the database objects involved in the query (Database Objects Involved in
this Query). Basic information about the event and session information are displayed in the Basic&Session Info
subsection.
Basic&Session Info subsection

List element Description


Event ID Event identifier (not used outside DataSunrise).
Rules Name of an Audit Rule which was triggered by the SQL query.
Application User Name of client application user which was used to send the query.
Start Time Event start time.
End Time Event end time.
Affected Rows Number of database rows affected by the query.
Error Database error, if occurred (True or False).
Error Code Database error code.
Error Text Database error text.
Session ID Session identifier (not used outside DataSunrise).
Login Name of a database user who sent the query.
Application Name of a client application which was used to send the query.
Client Host Name of client application host.
Connect Time Session start time.
Disconnect Time Session end time.

9.15 Examples of Rules


9.15.1 Making a Database Read-Only
Let's assume that we need to make our database read-only (only SELECTS should be allowed). We have three
Security rules that:
1. Allow SELECT-type queries to our database ("allow_select" rule)
2. Block DDL queries to our database ("block_ddl" rule).
3. Block DML queries to our database ("block_dml" rule)

Note: You need to create an object group with the required database specified to add to the Rule which affects
DDL queries.

Important: When executing a SELECT query, some SQL clients send additional queries to the database, which results in
the SELECT query being blocked. In this case, you need to configure a Rule that allows SHOW queries. You can check
which query exactly caused the blocking in the Security → Events subsection.
As a result, the "allow_select" rule will allow only SELECT-type queries to the specified schema:

SELECT * from complex_table;

And the other Rule ("block_ddl") will block DDL queries:

CREATE table table_1;


CREATE table newTable as (SELECT col1, col2 from complex_table);

And the "block_dml" rule will block all other DML queries.

9.15.2 Making a Table Column Accessible


Let's assume that we need to make only one column of our table accessible and block access to all the other
columns. We have two Security rules that:
1. Allow queries to the col1 column of the complex_table element in our database ("allow_col1" rule).
2. Block queries to the whole table ("block_all_table" rule).

As the priority of the allowing rule is higher (it is located higher in the list), only queries to the col1 column will be
allowed and all other queries will be blocked. This query will be allowed:

SELECT col1 from complex_table;

These queries will be blocked:

SELECT col1,col2 from complex_table;


SELECT * from complex_table;

10 DataSunrise Configurations
In order to utilize its data auditing, protection and masking capabilities (refer to DataSunrise Functional Modules
on page 228), DataSunrise requires information about a target database as well as about its users and client
applications used to query this database.
The Configurations section enables you to accomplish the following tasks:
• Creating target DB profiles
• Creating target DB user profiles and profiles of client applications that interact with the target DB
• Entering information about IP addresses (hosts) the target DB is queried from, as well as creating groups of IP
addresses
• Arranging target DB objects into Object Groups
• Arranging user queries intercepted by the firewall into SQL groups
• Creating Schedules
• Configuring notifications on system events via email, instant messengers and Syslog messages.

10.1 Object Groups


DataSunrise enables you to arrange your target database's objects such as schemas, tables, columns and functions
into groups - Object Groups. Thus you can handle multiple objects a group consists of as a single object. When
selecting objects to be monitored by an Audit or Security Rule, you can specify just a group instead of all these
objects.
For example, if you're creating a Data Security Rule and need to restrict access to five different schemas, you
can arrange these schemas into an Object Group (for example "MySchemas") and specify this Group in the Filter
Statements subsection of the Rule's settings. As a result, all objects included in the "MySchemas" Group will be
protected from access by DataSunrise. Moreover, having created a Rule using an Object Group, you don't need to
worry about any possible modifications of these objects that would be done in the future — DataSunrise will restrict
access to all the objects included in the Group anyway.
Object groups can be formed either manually or automatically with the Learning Mode functionality (refer to
Learning Mode Overview).

10.1.1 Creating an Object Group


To create a new Object Group, do the following:

1. Navigate to Object Groups.


2. Click Create Object Group and enter a new group's logical name into the Enter name for new group text field.
3. Select the database instance whose objects should be added to the Object Group in the Instance drop-down list.
4. Select either Tables for databases, schemas, tables and columns, or Procedures for stored procedures. Click
Select and check the required objects. These objects will be added to the Group.

10.1.2 Adding Objects to an Object Group Manually


You can populate an existing Object group with target database objects manually. To do this, perform the following:

Figure 37: Object selection window

1. Click Select. Select a database network interface in the Interface drop-down list located in the Check Columns window.
2. Check the objects of interest in the object tree. Note that you can select a range of objects by clicking the name
of the first object of the range and then the name of the last object while holding the Shift key (the
selected items will be highlighted). Then click Select Multiple to check these objects.
3. Click Done to apply changes.

Note: You can search across the database object tree. Enter required DB element's name into the corresponding
text field and click Show:

Text field Description


Find Databases Display databases
Find Schemas Display database schemas
Find Tables Display database tables
Find Columns Display database columns

To be able to preview data in the Object Tree, make sure that the DataSunrise user is allowed the Reading
Database Data Web Console action and has the SELECT grant on the table (see Creating Database Users)

10.1.3 Adding Objects to an Object Group Using Regular Expressions
You can populate an existing Object group with target database objects using regular expressions. To do this,
perform the following:

Figure 38: Selecting database objects with regular expressions.

1. Click Add Regex


2. Input a regular expression

Note: Names of database elements marked with red asterisks are considered to be regular expressions and
not taken directly from the database. If a database connection is missing, all table and column names will be
considered as regular expressions.

3. Select an element to be added to the Object Group and click Add. In this way, you can add multiple objects one
by one.
4. When you're done adding objects, click Close to complete the operation.
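For step 2 above, a regular expression of the following form, for example, would match all tables whose names start with a given prefix (the emp_ prefix is purely illustrative):

^emp_.*

Any table matching the expression, such as emp_salary or emp_contacts, would then be covered by the Object Group.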

10.1.4 Adding Stored Procedures to an Object Group Manually
To populate an existing Object group with stored procedures manually, do the following:

Figure 39: Selecting stored procedures to be added to a group

1. Click Procedures, then click Select. Select a database network interface in the Interface drop-down list.
2. Check the required stored procedures in the database object tree
3. Click Done to apply changes.

Note: You can search across the database object tree. Enter the required element's name into the corresponding
text field and click Show:

Text field Description


Find Databases Display specified databases
Find Schemas Display specified database schemas
Find Function Display specified database functions

10.1.5 Adding Stored Procedures to an Object Group Using Regular Expressions
You can populate an existing Object group with stored procedures using regular expressions. To do this, perform the
following:

1. Click Reg Exp


2. Click Get Databases to connect to your target database
3. Select a directory from the corresponding drop-down lists. You can select a certain database element or use a
regular expression

Note: Names of database elements marked with red asterisks are considered to be regular expressions and
not taken directly from the database. If a database connection is missing, all table and column names will be
considered as regular expressions.

4. Click Close to complete the operation and Save to apply changes.

10.2 Query Groups


Audit and Security Rules' settings enable you to specify which SQL statements should trigger a Rule to perform the
defined actions. By default, you can do it manually, but to simplify managing multiple SQL statements
when creating Audit and Security Rules, DataSunrise enables you to arrange SQL queries into groups (Query
Groups).
Besides that, DataSunrise logs user queries, which can be used when configuring Audit and Security Rules as well.
You can create Query Groups and populate them with SQL statements either manually or automatically with the
Learning Mode functionality (refer to Learning Mode Overview).

Figure 40: Query Groups tab

Click Query Groups to access Query Groups' settings. A list of SQL statements previously logged by DataSunrise will
be displayed:

Interface element Description


List of existing query groups (left block) Select an existing Query Group for viewing and editing
Create Group button Create a new Query Group
Pencil button Edit the selected Query Group
Bucket button Delete the selected Query Group
Search in Query search field Search across the list of SQL queries

10.2.1 Creating a New Query Group


To create a new group of SQL queries, perform the following:

1. Click Create Group.


2. Assign a logical name for the new SQL group and click Save.
3. Add SQL queries to the group.
Note: You can specify the required queries in the following ways:
• Specify an exact query. Example:

select * from customers where customer_id=1;

Note that "customer_id" here has an exact value ("1")


• Specify a depersonalized template. Example:

select * from customers where customers_id=?;

Note that "?" value means any value


• Specify a query using a regular expression. Example:

select * from customers where customers_id=(\d)+

Note that "(\d)+" means any numeric value

To add a new SQL query to a group, click Actions → Add and enter the SQL query code in the Edit the Query
window. If you want to use a regular expression to match SQL statements, check the Regular Expression checkbox.

10.2.2 Populating a SQL Group with Statements Automatically Logged by DataSunrise
DataSunrise enables populating of SQL groups with SQL statements saved by Data Audit, Data Security and Data
Masking components. For this, do the following:

1. Navigate to the corresponding section of the Web Console: Data Audit (see Data Audit (Database Activity
Monitoring)), Data Security or Data Masking.
2. Navigate to the Transactional Trails or Events page, select a SQL statement you want to add to an existing
Group from the list and click its ID to view the query's code.
3. Click Add Query to the Group and select a Group you want to add the SQL query to from the Query Statement
group drop-down list. Click Apply.

10.3 IP Addresses
Rules' settings (DataSunrise Rules on page 110) enable DataSunrise to process queries coming from certain hosts, IP
addresses or networks. To use this feature, you need to create the corresponding host profiles so that DataSunrise is
aware of these IPs.
The Hosts subsection enables you to perform the following actions:
• Creating and editing of host profiles (either manually or using a .CSV file)
• Creating and editing of host groups.

Note: It is possible to create host profiles automatically using DataSunrise's self-learning functionality (refer to
Learning Mode Overview).

10.3.1 Creating a Host Profile


To create a new Host profile, do the following:
1. Click Hosts in the left pane. A list of existing Hosts will be displayed.
2. Click Add Host to add a new host.
3. Enter the required information about your host into the Add Host tab:
Interface element Description
Alias text field Host profile's logical name (any name)
Address Type drop-down list IP address type. The following values are available:
• Host: IP address or host name
• Range IPv4: range of IPv4 addresses
• Range IPv6: range of IPv6 addresses
• Network IPv6: network of IPv6 addresses
Address text field (for Host and Network IPv6 types only) Actual IP address
Network text field (for Network type only) Actual subnet mask
Starting IP Address text field (for Range IPv4 type only) Initial IP address of the range
Ending IP Address text field (for Range IPv4 type only) Ending IP address of the range
Network text field (for Range IPv6 type only) Subnet mask

4. Click Save to apply new settings.

10.3.2 Adding Multiple IP Addresses Using a CSV or TXT File
To upload a list of IP addresses to DataSunrise using a .CSV or .TXT file, perform the following:

1. Prepare a text file with a list of IP addresses that should be added to DataSunrise. Each line should start with
host;, followed by an IP address.
Example:

host;10.10.0.1
host;10.10.0.25
host;10.10.0.30

Important: Each line can contain a single host entry only.

2. Click Actions → Import Hosts. The Import Host page will open.
3. Drag and drop your file or click the corresponding link for the file browser and select your file.
If you need to upload a range of IP addresses, begin each line with the range key word (for IPv4 addresses) or
the range_ipv6 key word (for IPv6 addresses), then enter initial IP address and ending IP address of the range
separated with a semicolon:

range;10.0.0.1;10.0.0.100 (for IPv4)


range_ipv6;0:0:0:0:0:ffff:7f00:1;0:0:0:0:0:ffff:7f00:6 (for IPv6)

If you need to upload network settings, each line of your file should start with the network key word (for IPv4
addresses) or network_ipv6 key word (for IPv6 addresses):

network;10.0.0.1;255.255.255.0 (for IPv4)


network_ipv6;fe80:0:0:0:200:f8ff:fe21:67cf (for IPv6)

4. Click Attach to save new settings.

Note: Host list uploading is a two-stage process. First, when you drag and drop the file, it is uploaded to the
DataSunrise server. Then, when you click Attach, the contents of the file are processed by DataSunrise.

10.3.3 Creating a Group of Hosts


To arrange multiple hosts into a group, do the following:

Tip:
Host groups enable you to handle all IP addresses a group includes as a single object. For example, when creating
a Data Security Rule for blocking queries from multiple IP addresses, you can specify a required host group in the
Rule's settings instead of specifying these hosts one by one.

1. Click Hosts in the left pane. A list of existing Hosts will be displayed.
2. Click Add Group.
3. Enter logical name of a new Group of Hosts into the Group Name field (any name).
4. Check hosts which should be added into a new group in the Members of the Group window.
5. Click Save to save a new Group of Hosts.

10.4 Client Applications


Rules' settings (DataSunrise Rules on page 110) enable DataSunrise to process queries sent by certain client
applications. To use this feature, you need to create the corresponding client application profiles so that DataSunrise
is aware of the applications whose queries it should process.
The Applications subsection enables you to create and edit client application profiles.

Note: It is possible to create application profiles automatically using DataSunrise's self-learning functionality (refer
to Learning Mode Overview on page 197).

10.4.1 Creating a Client Application Profile


To add information about a client application whose queries should be processed by DataSunrise, do the
following:

1. Click the Applications link. A list of existing application profiles will be displayed.
2. Click Add Application to create a new application profile.
3. Enter application's name into the Enter Program Name text field.
4. Click Save to save the profile.

10.4.2 Creating Multiple Client Application Profiles Using a CSV or TXT File
If you need to create multiple client application profiles fast, you can load them from a .CSV or .TXT file. For this, do
the following:

1. Prepare a .CSV or .TXT file which contains a list of client applications to be added to DataSunrise.
Each line should start with the app; keyword, followed by an application name.
Example:

app;application_name1
app;application_name2
app;application_name3

Important: Each line can contain a single application entry only.

Important: UTF-8 encoding is preferred.

2. Click Actions → Import Applications. The Import Application page will open.
3. Drag and drop your file or click the corresponding link for the file browser and select your file.
4. Click Attach to save changes.

Note: Uploading an application list is a two-stage process. First, when you drag and drop a file, it is
uploaded to the DataSunrise server. Then, when you click Attach, the file's contents are processed by DataSunrise.

10.5 Subscriber Settings


DataSunrise is able to notify concerned parties (subscribers) via Email, SNMP or instant messages about activation of
Rules and about certain system events.
To establish a subscription, configure a mail server which should be used to send notifications to Subscribers. Then
create a profile for each Subscriber, where you should specify subscriber's email address and system events to send
notifications on.

Note: The Subscribers subsection can be used only to configure mail servers and to create subscriber profiles.
To subscribe to specific Rule events, go to the Rule's settings and add existing subscribers to the
Notifications list.

10.5.1 Configuring Servers


10.5.1.1 Configuring an SMTP Server
To configure a mail server which should be used to send SMTP notifications to Subscribers, do the following:

Figure 41: SMTP server settings example

1. Navigate to Subscribers.
2. Click Add Server.
3. Select SMTP in the Type drop-down list.
4. Enter the required data into the Server tab:

Interface element Default value Description
SSL drop-down list enabled SSL encryption:
• Disabled: disable SSL
• STARTTLS preferred: use Opportunistic TLS only if the server supports encryption
• STARTTLS required: use Opportunistic TLS and terminate the connection if the server does not support encryption
Verify Server Certificate check box disabled Verify the mail server's SSL certificate
Address text field - Outgoing mail server address or authorization token for messenger applications
Port text field - Mail server port number
Login text field - Email user name
Save Password drop-down list Save in DataSunrise Method of saving the SMTP server email password:
• Save in DataSunrise: DataSunrise inner storage
• Retrieve from CyberArk: in this case you should specify CyberArk's Safe, Folder and Object
• Retrieve from AWS Secrets Manager: in this case you should specify the AWS Secrets Manager ID
• Retrieve from Azure Key Vault: you should specify the Secret Name and the Azure Key Vault
Password text field - Email password
Mail From text field - Outgoing mail address
Test button - Test the connection with the SMTP server

5. Click Save to save the settings.

10.5.1.2 Configuring an SNMP Server


To configure a server which should be used to send SNMP notifications to subscribers, do the following:

1. Click Add Subscriber.


2. Select "SNMP" in the Server Type drop-down list.
3. Enter required data into the Address and Port text fields:
Interface element Default value Description
Address text field - IP address of an SNMP manager
Port text field 162 Port number of an SNMP manager

4. Click Save to save the settings.


10.5.1.3 Configuring an External Application Server
To configure a server used to send notifications to Subscribers via an external application (such as an instant
messenger), do the following:

Figure 42: Example of External app server settings (Slack Enterprise). The Command field contains the Slack
authorization token.

1. Navigate to Configurations → Subscribers and click Add Server.


2. Select External in the Type drop-down list.
3. Insert an appropriate command into the Command field:

Note: Refer to the following article: https://www.datasunrise.com/blog/sending_notifications-to-slack/

4. Click Save to save the settings.

10.5.1.4 Configuring a Slack (direct) Server


To configure a Slack server to send notifications to a Slack channel, perform the following:

1. Follow the link https://api.slack.com/ and create a Slack application for sending notifications. You can configure it
to send messages to a certain Slack channel or to certain Slack users.
2. In the DataSunrise's Web Console, navigate to Configuration → Subscribers and add a new Server (Add
Server).
3. Select Slack (direct), use port 443 (default).
4. Specify the Path with tokens. For example, to post to group "#random", the token should look like the following:

T1D93E7U6/BBPKEJWBB/cYJhcmidqsCuL8z9hQsgmeTN

5. Click Save to save the settings.


10.5.1.5 Configuring a Slack Legacy Token Server
This option enables you to send notifications to Slack channels via the REST API. You should get a token to use this
method.
1. You should get a Slack token first. You can do this by using the OAuth 2.0 framework or by using Slack's legacy token
generator (for more information, refer to the following link: https://api.slack.com/custom-integrations/legacy-
tokens). A token should look like the following:

xoxp-00000000000-000000000000-000000000000-00000000000000000000000000000000

2. In the DataSunrise's Web Console, navigate to Configuration → Subscribers and add a new server (Add Server).
3. Select Slack (token), use port 443 (default).
4. Paste the token into the Token field.
5. Input sender's name into the From field.
6. Click Save to save the settings.

10.5.1.6 Configuring a NetcatTCP/NetcatUDP Server


The Netcat mentioned here is the Unix utility, not a CMS. This method is not based on Netcat itself; it just works
similarly. NetcatTCP/NetcatUDP enables you to send messages over the network using the TCP or UDP protocol. To capture a
message, you need the genuine Netcat utility or a sniffer of some kind.
1. Navigate to Configuration → Subscribers and add a new Server (Add Server).
2. Select Netcat (TCP) or Netcat (UDP).
3. Specify the name of the host you're going to send messages to and port number (7777 for example).
4. Click Save to save the settings.
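For example, to verify that the notifications arrive, you could listen on the chosen port with the netcat utility on the receiving host (port 7777 is just an example; the exact flags depend on your netcat variant):

nc -l 7777      # TCP (OpenBSD netcat; traditional netcat uses: nc -l -p 7777)
nc -l -u 7777   # UDP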

10.5.1.7 Configuring a ServiceNow Server


This option enables you to send notifications to the ServiceNow system.
1. Navigate to Configuration → Subscribers and add a new server (Add Server).
2. Select ServiceNow, use port 443 (default).
3. Input your ServiceNow instance's address (you should get it during registration at ServiceNow) into the Host
field.
4. Specify ServiceNow user Login and Password.
5. Click Save to save the settings.

10.5.1.8 Configuring a Jira Server


This option enables you to send notifications to Jira system.

1. In the DataSunrise Web Console, navigate to Configuration → Subscribers and add a new server (Add Server).
2. Select Jira. Specify host, protocol (HTTP or HTTPS, HTTP by default). Specify port number (443 for HTTPS, 80 for
HTTP by default).
3. Input your email into the Login field and your password into the Password field.
4. Input your project key into the Project key field.
5. Click Save to save the settings.

10.5.1.9 Configuring a Syslog Server


To configure a Syslog server which should be used to send notifications to Subscribers, do the following:
1. Navigate to Subscribers.
2. Click Add Server.
3. Input a logical name and select Syslog on the Type drop-down list.
4. Input the required information:

Interface element Default value Description
Host text field - SIEM system's IP address (the IP address of the server to send logs to)
Port text field 514 SIEM system's port number
Protocol drop-down list - Protocol to use: RFC 3164 or RFC 5424

5. Click Save to save the settings.

10.5.2 Creating a Subscriber Profile


To create a new Subscriber profile, do the following:
1. Navigate to Configuration → Subscribers and click Add Subscriber.
2. Select type of the sending server on the Server Type drop-down list.
3. Select an outgoing mail server in the Gate drop-down list or specify IP address and port number for an SNMP
server.
4. Specify SMTP Subscriber's Email address, port number for SNMP, Slack channel name for Slack (token) or
additional parameters for an External server.
5. Select week days to send notifications on.
6. To notify a Subscriber about DataSunrise system events such as Configuration, Authentication, Core events,
Backend or Metadata events or Audit viewer Errors, check corresponding check boxes in the Events tree. For the
description of events, refer to DataSunrise System Events IDs on page 435
7. For an SNMP Subscriber, there is a list of additional characteristics for notification in the Send Current Indicator
Values Periodically subsection. Select required indicators.
As a result, DataSunrise sends the current value of a selected indicator every specified period of time. Below is
the list of additional indicators for SNMP Subscribers.
To indicate the required parameter in an SNMP client, navigate to System Settings → Additional Parameters
and check the corresponding object's ID from the table below.

Interface element Description Object ID for SNMP


Audit Queue Audit queue length SnmpAuditQueueLengthOIDSuffix
Average Operations Average amount of operations per second SnmpAverageOperationsOIDSuffix
Average Read Bytes Average throughput of read operations SnmpAverageReadBytesOIDSuffix
Average Write Bytes Average throughput of write operations SnmpAverageWriteBytesOIDSuffix
Audit Free Space Free space on the disc where audit files are SnmpAuditFreeSpaceOIDSuffix
stored
Log Free Space Free space on the disk where logs are SnmpLogsFreeSpaceOIDSuffix
stored
Mailer Queue Mailer queue length SnmpMailerQueueLengthOIDSuffix
Core Memory DataSunrise Core memory usage SnmpCoreMemoryOIDSuffix
Backend Memory DataSunrise Backend memory usage SnmpBackendMemoryOIDSuffix
Proxy Queue Proxy queue length SnmpProxyQueueLengthOIDSuffix
Sniffer Queue Sniffer queue length SnmpSnifferQueueLengthOIDSuffix
Average Executions Average amount of query executions per SnmpAverageExecutionsOIDSuffix
second

8. Click Save to apply the changes.
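As an illustration of step 7, once you know the full OID of an indicator (the base OID depends on your DataSunrise SNMP configuration; the host name and community string below are placeholders), you could poll its value with a standard SNMP client such as Net-SNMP:

snmpget -v2c -c public datasunrise-host.example.com <base_OID>.<AuditQueueLength_OID_suffix>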

10.5.3 Email Templates


The Email templates functionality enables you to change the templates of email messages sent by DataSunrise to
Subscribers. To access the template settings, click E-Mail Templates on the Subscribers page.
There are four types of templates you can select using the Type drop-down list: General, Event, Report and Security.
A General-type message is an HTML message that includes a list of Event-type messages arranged into one message.
Report-type messages are sent as separate emails.
Available options:
General messages
Parameter Description
${Server.Name} Name of the server the events occurred on
${HA.ClusterName} Cluster name
${Content} Body of the message; includes a list of Events formed with EmailTemplateType::etEvent

Event type messages


This type of message is formed when the following event occurs:
• ConfigurationChanges
• Authentication
• CoreEvent
• AuditError
• BackendEvents
• MetadataChanges
• RuleTrigger

Parameter Description
Body of message

${Event.Time} Time

${Rule.Type} Type of triggered Rule (Audit, Masking...)

${Rule.Description} Name of the Rule

${Operation.Id} Identifier of operation in the audit database

${Operation.SqlQuery} SQL query

Misc.

${Event.Time} Time

${Event.Name} Name of an event (Configuration changes, Authentication)

${Event.Description} Message
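As an illustration, a minimal hypothetical Event-type template body built from the parameters above could look like the following (the layout is an example only, not the default template shipped with DataSunrise):

${Event.Time}: rule "${Rule.Description}" (${Rule.Type}) was triggered.
Operation ${Operation.Id}: ${Operation.SqlQuery}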

Report type messages


This type of message is formed when the following report is created:
• Authentication Error
• Operations Report
• Sensitive Data Report
Parameter Description

${Server.Name} Name of the server the events occurred on

${HA.ClusterName} Name of the cluster

Message subject

${Email.Subject} Report header ("Authentication Error Report", "Operations Report"...)

Body of message

${Report.Name} Name of the report ("Auth Errors", "Audit Operations"...)

${Report.Time} Time

Security type messages


This type of message is formed when the following events occur:
• Sending a confirmation code for 2FA
• Sending an Email confirmation code
• Sending a password reset confirmation
• Sending a password
• Login attempts
Parameter Description
${Security.Code} Security code used for confirmation of some operations
Connection Details

${Server.Name} Name of the server the events occurred on

${HA.ClusterName} Name of the cluster

${Connection.ClientAddr} Client application address


${Event.Time} Time event occurred at
${WebConsole.Url} URL of the Web Console

10.6 Schedules
Schedules can be used to activate and deactivate DataSunrise Rules automatically at a predefined time.
Schedules don't control DataSunrise's overall behavior; they control the behavior of individual Rules. Thus, if you
need to set a Schedule for a certain Rule, you should specify it in that Rule's settings. You can use one Schedule to
control multiple Rules as well.
To create and edit Schedules, navigate to the Schedules subsection.

10.6.1 Creating a Schedule


To create a new Schedule, do the following:

Figure 43: Schedule example



Figure 44: Selecting a Schedule in Rule's settings

1. Navigate to Configuration → Schedules


A list of existing Schedules will be displayed.
2. Click Add Schedule to create a new Schedule.
3. Enter Schedule's logical name into the Schedule Name text field.
4. If you need a Schedule to activate and deactivate a related Rule only once (not periodically), specify its activity
period in the Active Period subsection.
4.1 Select the initial date of Schedule's active period from the From drop-down list.

Note: You can select an exact date via the date picker by clicking the "Calendar" icon to the right of From.

4.2 Select end date of the Schedule's active period from the To drop-down list.

Note: You can select an exact date via the date picker by clicking the "Calendar" icon to the right of To.

Important: A schedule-related Rule will be activated on the initial date of the active period and deactivated on
the end date.

5. If you need a Schedule to activate and deactivate a related Rule periodically (daily, weekly etc.), specify its activity
periods in the Time Intervals subsection.
5.1 Click Add Time Interval.
5.2 Select a day of the week on which the Schedule should activate the related Rule.
5.3 Specify the period of time during which the Schedule should activate the related Rule in the From and To fields.
Click Add Time Interval to add another activity interval to the Schedule's settings.

Important: You can create multiple time intervals for one Schedule (for example, for every day of week).

6. Click Save to save the Schedule.


7. To apply a Schedule to a certain Rule, navigate to the Rule's settings and select your schedule in the Schedule
subsection.

10.6.2 Examples of Schedules


10.6.2.1 Configuring Active Period of a Schedule
Let’s assume that we have a blocking Rule and we need to configure DataSunrise to block all incoming queries to a
certain database for the next two days.
1. Create a new Schedule. Name it
10 DataSunrise Configurations | 221
2. Set Active Period of the Rule. Since we need to block all queries for two days, we select the required time and
days via the Calendar
3. You should get something like that:

10.6.2.2 Configuring Active Days of a Schedule


This is a more complex scenario. Let's assume that we need a Schedule to work at night on weekdays and all day
long on weekend days (20.00-08.00 on Monday-Friday and 00.00-24.00 on Saturday and Sunday).
1. Create a new Schedule and name it.
2. Since we need the Rule to work on certain days at a certain time, use the Time Intervals settings.
3. Select a day of the week.
4. Click Add Time Interval to add a new time interval. Select the time period during which the Rule should be active.
5. You should get the following result:

10.7 Syslog Settings (CEF Groups)


DataSunrise can export data collected by the Data Audit module and information about target database system
events to external SIEM systems via Syslog. The Syslog Settings subsection enables you to create CEF (Common
Event Format) groups. You can specify a CEF group in Rule's settings and DataSunrise will transfer messages about
events included in the CEF group to your Syslog server. To pass information about DataSunrise system events to
Syslog, you should create a new Syslog-type Server in Configuration → Subscribers and set up a subscriber profile to
receive notifications about DataSunrise events.

UI element Description
Name text field Logical name of the CEF group
Enabled check box Enable the current group
Members subsection
Add CEF Item button Add new CEF entry. Click the button and you will be redirected to a new
page. Enter item's name, select type of message and enter CEF.
CEFs list Includes system events and corresponding CEF codes of messages
transferred to Syslog
Save button Save current CEF group

To add a new item to the list, click Add CEF Item:

UI element Description
Name text field Logical name of a CEF item
Type drop-down list Event type to report on
CEF field CEF code of an item. You can use the Parameters list as a reference
Enabled check box Enable the item
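For reference, CEF messages follow the standard header format CEF:Version|Device Vendor|Device Product|Device Version|Device Event Class ID|Name|Severity|[Extensions]. A hypothetical item for a rule-trigger event could therefore look similar to the line below; all field values here are illustrative, not the defaults shipped with DataSunrise, and the actual parameters available for substitution are listed in the Parameters list of the Add CEF Item page:

CEF:0|DataSunrise|Database Security|9.0|100|Audit Rule triggered|5|src=10.0.0.15 suser=dbuser msg=SELECT query audited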

10.8 Periodic Tasks


DataSunrise enables you to perform some tasks such as health check and metadata update periodically on schedule.
To access these tasks, navigate to Configuration → Periodic Tasks. The Periodic Tasks subsection includes the
following types of tasks:
• Backup Dictionary (Backup Dictionary Task on page 223)
• Clean Audit (Clean Audit Task on page 223)
• Health Check (Health Check on page 224)
• Update Metadata (Update Metadata on page 224)
• AWS Remove Unused Servers (AWS Remove Unused Servers Periodic Task on page 225)
• User Behavior (Periodic User Behavior on page 225)
• Query History Relation Learning (Database Query History Analysis on page 400)
• DDL Table Relation Learning (Periodic DDL Table Relation Learning Task on page 404)
• Database User Synchronization (Database User Synchronization on page 226)
• Azure Remove Unused Servers (Azure Remove Unused Servers Periodic Task on page 226)
• Transfer Audit to Elasticsearch (Integrating Elasticsearch and Kibana with DataSunrise on page 270)
• AWS S3 Crawler (AWS S3 Crawler on page 253)
• Kubernetes Remove Unused Servers (Kubernetes Remove Unused Servers Periodic Task on page 227)

Note: Each Periodic Task features a general subsection where you should specify the Task's logical name and the
DataSunrise server to start the Task on.

10.8.1 Backup Dictionary Task


The Backup Dictionary task enables you to make backups of your Dictionary database on schedule.

1. Create a new Task. Input the general information.


2. Configure Dictionary backing up in the Backup Dictionary subsection:
Parameter Description
Backup Name field Name of the backup file.

Note: The Backup Name means the date and time of the backup and, at the same time, the name of the folder the backup is saved in.
On Linux, to create new backups in different folders, use External Commands such as the following (see below):

bash -c 'gsutil cp "<backup_path>/dictionary.db" "gs://my_dict/$(date)/"'
bash -c 'gsutil cp "<backup_path>/dictionary.db" "gs://my_dict/$(date)/dict.bak"'
bash -c 'gsutil cp "./dictionaryBackup/back/dictionary.db" "gs://my_dict/$(date)/dict.bak"'

Backup Settings check box Include information about DataSunrise's settings in the backup
Backup Users check box Include information about DataSunrise's Users in the backup
Backup Configurations check box Include information about DataSunrise objects: servers, instances
(Interfaces, Proxies, Sniffers, metadata), database Users and Groups, Hosts,
Schedules, Applications, Static Masking tasks, Data Discovery tasks, Report
Generator reports, Query Groups, Subscribers settings, SSL key groups, Data
Discovery groups, CEF groups, Rules
External Command text field An arbitrary command.

3. Set a schedule for the Task in the Startup Frequency subsection


4. If required, keep results or remove old results.

10.8.2 Clean Audit Task


The Clean Audit task enables you to clean the Audit Storage on schedule.

1. Create a new Task. Input the general information.
2. Configure audit cleaning in the Clean Audit subsection:

Parameter Description
Archive Data to be Removed before Cleaning check box Save the audit data to a separate folder before removal. You can move this data to an Amazon S3 storage. The data will be saved in CSV format compatible with Athena.
Archive Folder field Folder to save the audit data in
Execute Command after Archiving field Execute a command or script to handle the data saved in the Archive Folder (see above). For example, you can move the data to an S3 storage with your script (see the example after this procedure).
Remove All Audited Data Older Than, Days field Self-explanatory. Delete outdated audit data.

3. Set a schedule for the Task in the Startup Frequency subsection.


4. If required, keep results or remove old results.
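For example, a hypothetical Execute Command after Archiving value could upload the archived CSV files to an S3 bucket with the AWS CLI (the folder path and bucket name below are placeholders; the AWS CLI must be installed and configured on the DataSunrise server):

bash -c 'aws s3 cp "/opt/datasunrise/audit_archive" "s3://my-audit-archive/$(date +%F)/" --recursive'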

10.8.3 Health Check


The Health Check task enables you to perform a DataSunrise health check on schedule. It checks the connection of
DataSunrise with a target database, and with proxies and load balancers if configured.

1. Create a new task. Input general information


2. Set a schedule for the task in the Frequency of Task Starting
3. If required, keep all Health check results or remove old results
4. Configure Health check parameters:
Parameter Description
Instances drop-down list Database instance to check connection with
Check Connection with External Resource check box Display external resource (Load balancer) connection
parameters to check connection between an external
resource (load balancer) and DataSunrise instance
Connection Details
AWS Endpoint, Public DNS or IP address text field Public IP, public DNS or virtual IP of external resource
Port text field External resource (Load balancer's) port number
Proxy Testing Method drop-down list • Local proxy — check proxy on a local system
• Web health check — check proxy on a remote
machine (DataSunrise sends a command to a remote
machine to check proxy)

10.8.4 Update Metadata


The Update Metadata task enables you to update a database's metadata on schedule.

1. Create a new Task. Input the general information.
2. Set a schedule for the Task in the Frequency of Task Starting subsection.
3. If required, keep all results or remove old results.
4. Configure Update Metadata parameters:

Parameter Description
Instances drop-down list Database instance to update metadata of

10.8.5 AWS Remove Unused Servers Periodic Task


This type of Periodic Task enables you to detect and remove inactive AWS servers from your configuration.
To create this type of Periodic task, do the following:
1. Navigate to Configuration → Periodic Tasks.
2. Click New Task to create a new Periodic Task
3. Select AWS Remove Unused Servers in the Task Type drop-down list
4. Select your DataSunrise server to execute the task on in the Start on Server drop-down list
5. Adjust Startup Frequency if necessary.

10.8.6 Periodic User Behavior


DataSunrise provides suspicious activity monitoring, a security solution based on data science. You can
gather more insights about database activity and detect anomalous behavior with the User Behavior analysis
(UB). The UB feature is an autonomous neural network based on perceptrons. No information leaves DataSunrise
instances: the training datasets are stored locally, in the home directory of the DataSunrise instance. Like any neural
network, UB has to be trained first.
Step 1. Training to learn regular user activity on the protected database instance.
For this purpose, DataSunrise administrators first have to set up one or more Audit Rules to collect events and client
connections reflecting regular client activity (Audit → Rules → New). These events and sessions should reflect
strictly usual user activity; the goal of these events in the Audit → Transactional Trails and Sessions of the DataSunrise
instance is to gather training datasets to build user behavior patterns in the UB module. As soon as the events and
sessions have been collected in the DataSunrise Audit Storage, DataSunrise administrators can set up a new Periodic
Task (Configuration → Periodic Tasks → New → select User Behavior).
Step 2. Setting up the User Behavior Periodic Task.
Once the UB is trained on the available criteria in the DataSunrise Audit Storage, it can start to analyze further
Audit Transactional Trails and Sessions against the trained neural network data. The UB data includes the following
criteria: operations and operation types, queries and query types, database objects affected, session information
(database user logins, IP addresses and host names), etc. The new UB Periodic Task uses the selected Training
Start Date and Training End Date parameters to build its original patterns of regular database user activity. Please
note that starting from the date following the Training End Date, the UB Periodic Task no longer updates its training
datasets; instead, the UB Periodic Task, started according to the Startup Frequency parameter, analyzes the Audit
Storage Transactional Trails and Sessions data against the collected neural network datasets to reveal suspicious user
activity. As soon as such activity is revealed, the UB task stores this information in the Results subsection of the
Periodic Task.
Step 3. Setting up Audit Rules to monitor possible suspicious user activity.
To detect suspicious user activity, some Audit Rules must be enabled: you can keep the Rule you created at
Step 1 or add extra Audit Rules to provide the UB Periodic Task with more data. When new Audit Rules are added,
the UB Periodic Task uses the neural network data collected in the DataSunrise instance configuration and
detects any suspicious user activity on the protected database instance from the Audit Transactional Trails. Periodically
review the UB Periodic Task Results to detect any suspicious user activity.
User Behavior uses existing audit data to create an allow list of user activity in the target database, so you should
have at least one Audit Rule and its auditing results to base the User Behavior training on.
To use the User Behavior feature, do the following:
1. Navigate to Configuration → Periodic Tasks and create a new User Behavior task.
2. Specify the period the existing auditing results cover in the User Behavior Training subsection, Training Start
Date and Training End Date. The auditing data of this period will be used for creating an allow list of user
activity.
3. Set a schedule for the task in the Startup Frequency. Note that you can start the training manually by selecting
Manual.
4. If required, keep all search results or remove obsolete search results.
5. Save the task. You will be redirected to the list of all Periodic Tasks. Select your task in the list to open its settings.
6. In the Results subsection, you will see a list of user actions that don't match the existing allow list. Select an
action and mark it as Suspicious or False Alarm. If you mark an action as False Alarm, it will be added to the
allow list of actions.

10.8.7 Database User Synchronization


The Database User Synchronization periodic task enables you to import database, LDAP, SAP ECC or Oracle EBS user
names into your DataSunrise settings.

1. Create a new Task. Input the general information.


2. In the Target subsection, select a database to import users from and a User Group to save these users in.

Note: before selecting your target database in the task's settings, make sure that this database's credentials are
saved in DataSunrise (refer to Creating a Target Database Profile on page 58).

Note: If you're using Oracle EBS, you need to grant the following permissions to your Oracle EBS user to be able
to perform user synchronization:

GRANT SELECT ON APPLSYS.FND_USER TO <User Name>;


GRANT SELECT ON APPLSYS.FND_RESPONSIBILITY TO <User name>;
GRANT SELECT ON APPLSYS.FND_RESPONSIBILITY_TL TO <User name>;
GRANT SELECT ON APPLSYS.FND_REQUEST_GROUPS TO <User name>;
GRANT SELECT ON APPS.FND_USER_RESP_GROUPS_ALL TO <User name>;

3. In the Source subsection, select the source of users to be imported: either Database for database users, or
LDAP, Oracle EBS or SAP ECC for the corresponding users. Note that you can filter by database login roles by
selecting such roles in the Roles drop-down list.
4. Configure other parameters of the task if necessary. Run the task.
5. For added users, navigate to Configuration → Database Users and locate the group specified in the task's
settings.

10.8.8 Azure Remove Unused Servers Periodic Task


This type of Periodic Task enables you to detect and remove inactive MS Azure servers from your configuration.
To create this type of Periodic task, do the following:
1. Navigate to Configuration → Periodic Tasks.
2. Click New Task to create a new Periodic Task
3. Select Azure Remove Unused Servers in the Task Type drop-down list
4. Select your DataSunrise server to execute the task on in the Start on Server drop-down list
5. Adjust Startup Frequency if necessary.

10.8.9 Kubernetes Remove Unused Servers Periodic Task


This task enables you to detect and remove unused Google Cloud Kubernetes servers. To use this task, do the
following:

1. Execute the following commands in Google Cloud Shell:

gcloud container clusters get-credentials <cluster_name> --zone <location> --project <project_name>
kubectl create rolebinding ca-test-view --clusterrole=view --serviceaccount=<namespace_name>:<service_account_name>

Note: If the namespace differs from the default one, execute the following command:

kubectl config set-context --current --namespace=<namespace_name>

2. Navigate to Configuration → Periodic Tasks, create a new task and name it.


3. Select a server to run the task on
4. In the Parameters section, provide the Namespace matching the following regular expression:

^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$

5. Set Startup Frequency. Select Manual for manual starting of the task
6. If required, keep all search results or remove old results by checking the corresponding check box.

10.8.10 Kubernetes Remove Unused Servers Periodic Task


This type of Periodic Task enables you to detect and remove inactive Kubernetes servers from your configuration.
To create this type of Periodic task, do the following:
1. Navigate to Configuration → Periodic Tasks.
2. Click New Task to create a new Periodic Task
3. Select Kubernetes Remove Unused Servers in the Task Type drop-down list
4. Select your DataSunrise server to execute the task on in the Start on Server drop-down list
5. Adjust Startup Frequency if necessary.

11 DataSunrise Functional Modules


This section describes DataSunrise functional modules: Data Audit, Data Security, Data Masking and Sensitive Data
Discovery.

11.1 Static Data Masking


The Static Masking feature enables you to create a fully functional copy of a production database that contains
obfuscated data instead of the original sensitive data. This enables you to obtain a proper testing or development
environment while preventing accidental data leaks to outsourcers or contractors by providing them with a fully
functional "dummy" database. There is no way to retrieve the original data, as the fake data has no connection to
the original content. Static masking is safer than dynamic masking, but it requires additional hardware resources to
store the database copy.

Important: for the list of supported databases, refer to Supported Databases and Features on page 13.

By default, DataSunrise performs Static Masking directly without using a proxy (a separate Core), but you can also
configure it to create a temporary proxy with a Rule processed by the engine which is used to perform Dynamic data
masking (old DataSunrise behavior). Note that it's not available for the In-Place option (In-Place Static Masking on
page 243).

Figure 45: Static Data Masking

Important: for Static Masking using random-based methods, you need a dedicated schema (DS Environment) in
your database (see Configuring DataSunrise for Masking with random-based methods on page 177).

We performed testing of Static Masking Rules on PostgreSQL, MySQL, Oracle and AWS Aurora PostgreSQL hosted
on AWS. During testing, DataSunrise was installed on an AWS EC2 machine, and the source and target databases were
hosted on AWS RDS. EC2 and RDS machines of the same class were used:
Table

Database type DB version Arithmetic average, MB/s Max speed, MB/s Configuration (RDS and EC2 class)
PostgreSQL 13 132 146 m5.2xlarge
MySQL 8 13 22 m5.2xlarge
Oracle 19 64 131 m5.2xlarge
AWS Aurora PostgreSQL 12.7 139 131 r5.4xlarge

Table

Machine CPUs RAM (GB) Storage type Provisioned IOPS
RDS M5.2xlarge 8 32 Provisioned IOPS SSD (I01) 3000
RDS R5.4xlarge 16 128
EC2 m5.2xlarge 8 32
EC2 R5.4xlarge 16 128

11.1.1 Creating a Static Masking task


Below, you can find a guide on Configuring a Static Masking task and some notes on database-specific issues.

Important: Like other databases, PostgreSQL-based databases (PostgreSQL, Greenplum, Redshift) feature data types
that support time zones (time with time zone, timestamp with time zone). However, Postgres-based databases convert
such data to UTC, which loses the information about the original time offset. The date/time value is then
returned according to the client's time zone. It means that the saved time/date can't be restored exactly as it was
entered when using Static Masking.
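A quick illustration of this PostgreSQL behavior (the table name and values are hypothetical):

CREATE TABLE tz_demo (ts timestamp with time zone);
INSERT INTO tz_demo VALUES ('2023-05-01 10:00:00+03');
-- The value is stored normalized to UTC; the original +03 offset is not preserved,
-- so the query below returns the timestamp converted to the client's TimeZone setting.
SELECT ts FROM tz_demo;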

Note: DataSunrise's CLI is more convenient for static masking of large amounts of database elements. Refer to the
"Static Masking" subsection of the DataSunrise's CLI Guide.
To create a copy of a certain database with masked columns inside, do the following:

1. Navigate to Masking → Static Masking


2. Click New to create a new Static Data Masking task.
3. In the Source and Target Instances, select a source database instance from the drop-down list, enter
credentials for the database user and click Log on.
Once the connection is established, select a database from the Database drop-down list.

Note: If your target DB doesn't support the "database" entity, the list will include only the [[master]] database.
4. Select a target database instance where the database with masked data will be stored, enter credentials for the
database user and click Log on.
Once the connection is established, select a database from the Database drop-down list.

Note: the structure of a target table should be similar to the structure of a source table (same data type, same
column names). If there are no tables of the required structure, DataSunrise will create the required table.

5. By default, Static Masking creates a target table and all its objects if they don't exist, but you can also adjust
DataSunrise's behavior by checking/unchecking the corresponding check boxes (most of them are self-
explanatory):
• Create tables if they don't exist
• Create unique constraints
• Create foreign keys: DataSunrise creates foreign keys after all the target tables have been created and the
masked data have been transferred
• Create indexes
• Create check constraints
• Create default constraints
• Apply Related Table Filters (see Table Relations on page 400)
• Automatically resolve relationships between related tables if there are undefined ones (see Table Relations
on page 400)
• Use Parallel Load: enables parallel data loading useful when processing large tables (see Additional
Parameters on page 337, StaticMaskingParallelLoadThreadsCount, see also Creating a MySQL/Aurora
MySQL/MariaDB Database User on page 240)
• Check for empty target table: DataSunrise checks if the target table is empty
• Truncate target tables: DataSunrise cleans the target table
• Disable triggers
• Drop foreign keys before truncating: DataSunrise deletes all foreign keys before wiping out the data then
transfers the masked data and creates new foreign keys (available for MySQL-like databases)

Note: If a primary key exists, it will always be created.

• Ignore column and reference checks


6. Click Select source tables to transfer... in the Transferred Tables subsection.
A window displaying the contents of the selected database will open.

Important: MS SQL Server-specific data types hierarchyid and sql_variant are not recognized by OTL,
which DataSunrise uses to get the table data. Thus, database columns that contain data of these types can't be
transferred to a target database.

Important: By default, DataSunrise uses the Direct Path Load mechanism to load data from Oracle Database
tables. This mechanism has a restriction: it cannot load XMLType data. To circumvent this restriction, use
standard ODBC instead of Direct Path Load: go to the System Settings → Additional subsection of
the Web Console and enable the LoadXMLTypeViaODBC parameter.

Important: When using "Empty" and "Default" masking algorithms for obfuscation of string-type values in
Oracle, you may get the following error:

Error: ORA-01400: cannot insert NULL into (column)

This error occurs because Oracle does not distinguish between an empty string and NULL, so it tries to
replace masked non-NULL values with NULLs.
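A minimal illustration of this Oracle behavior (the table and column below are hypothetical):

-- In Oracle, an empty string literal is treated as NULL, so inserting '' into a NOT NULL column fails
CREATE TABLE masking_demo (name VARCHAR2(10) NOT NULL);
INSERT INTO masking_demo (name) VALUES ('');  -- raises ORA-01400: cannot insert NULL into (column)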
7. Select columns to be masked and click Done.
You will be redirected to the Transferred Table subsection.
8. Select the required column, click Set Masking Method and select a masking method to use for this particular
column. Repeat for other columns if necessary.

Note: you can use the Filter feature for transferring source database columns according to the filter's
condition. The filtering is based on column value. For this, click Add Filter, input a condition and save it. As
a result, DataSunrise will transfer to the target table only those columns that match the filter's condition. For
example, for the "Age">25 condition only those entries will be transferred where the "Age" column's value is
higher than 25.
The filter Rows Count enables you to transfer a certain number of rows to the target database. The obligatory
parameter "Order by" works like SQL Order By, to identify the first and the last lines. In the parameters "Limit"
and "Offset" you can input any values. For example, to transfer 10 lines from the database, specify the "Offset" -
5 and "Limit" - 10. The lines from 6 to 15 will be transferred to your target database. This filter also works with
related tables by enabling the checkbox "Apply related tables filter". You can also apply this filter to multiple
tables in the task. If you input a function in the "Order by" filter, tables should not be related to each other.
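For orientation, the Rows Count filter corresponds roughly to a query of the following shape (a sketch assuming
PostgreSQL/MySQL-style LIMIT/OFFSET syntax; the table and column names are hypothetical, and the actual SQL is
generated by DataSunrise):

-- "Order by" = id, "Offset" = 5, "Limit" = 10: rows 6 through 15 of the ordered result are transferred
SELECT * FROM customers
ORDER BY id
LIMIT 10 OFFSET 5;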
You can also upload a list of columns to be masked using a CSV file. Click Import from CSV and upload a CSV
file with the following contents:

dbName,schemaName,tableName,columnName
<DB_name>,<schema_name>,<table_name>,<column_name>

For example:

dbName,schemaName,tableName,columnName
postgres,myschema,url,column1
postgres,myschema,names,id
postgres,myschema,url,id
postgres,myschema,names,name

9. Select a loader to use for static masking. Loaders differ from each other in limitations and performance (refer to
Static Masking Loaders on page 232).
We recommend using the default loader in most cases.
10. In Startup Frequency, set frequency of starting the task. Set Manual to start the task manually on demand.
11. You can also delete obsolete results of Static Masking tasks from your Dictionary by checking Remove Results
Older Than and specifying a time period. This prevents the Dictionary from being flooded with outdated data.

11.1.2 Static Masking Loaders


The table below contains some information on loaders that can be used for Static Masking. Note that we
recommend using the default loader in most cases.
Oracle
• defaultLoad: SELECT and INSERT are executed simultaneously
• DirectPath: First the data is SELECTed and uploaded to a file. Then the file's contents are uploaded to a target
table using the Oracle API. Note that the target table is locked while the Static Masking task is running. For
information on Oracle's DBLINK and Direct Path, refer to Creating an Oracle Database User on page 237
• DBLink: Data is uploaded with the Database Link mechanism using queries of the following type:
INSERT INTO … AS SELECT .... Note that the LONG data type is not supported. This loader can't be used with
response maskers (Lua scripts, FP/FPE methods). For information on Oracle's DBLINK and Direct Path, refer to
Creating an Oracle Database User on page 237

PostgreSQL-like / Vertica
• defaultLoad: SELECT and INSERT are executed simultaneously
• libPqCopy: First the data is SELECTed and uploaded to a file. Then the file's contents are uploaded to a target
table using the COPY query. This loader requires the database user used for masking to have the MD5 encryption
algorithm enabled. LibPq's ability to work with primary keys on Vertica is limited
• DBLink: If you're going to use the DBLink loader for PostgreSQL, you might need to install the dblink extension.
Create this extension in the target database, schema public. Refer to step 4 of Creating a PostgreSQL/Aurora
PostgreSQL Database User on page 238. For PostgreSQL, the DBLink loader can't be used for Unstructured masking,
FPE, FPT masking and Lua script based masking

MySQL-like
• defaultLoad: SELECT and INSERT are executed simultaneously
• loadDataInFile: First the data is SELECTed and uploaded to a file. Then the file is uploaded to a target table
using the LOAD DATA LOCAL INFILE mechanism

MS SQL Server
• defaultLoad: SELECT and INSERT are executed simultaneously
• BCPLoad: First the data is SELECTed and uploaded to RAM. Then the data is uploaded to a target table using the
BCP API included in MS SQL's ODBC driver

Teradata
• Auto-select operator: DataSunrise selects which loader to use automatically
• Load operator: TBuild Load operator-based loader, similar to FastLoad. Doesn't support the following column data
types: Long BLOB, Var Graphic, Byte, CLOB, JSON, XML, Period Date, Period Time, Period Time Tz, Period Timestamp,
Period Timestamp Tz
• Stream Operator: TBuild Stream operator-based loader, similar to the Tpump operator. Doesn't support the
following column data types: Long BLOB, Var Graphic, Byte, Period Date, Period Time, Period Time Tz,
Period Timestamp, Period Timestamp Tz
• Update Operator: TBuild Update operator-based loader. Doesn't support the following column data types:
Long BLOB, Var Graphic, Byte, Period Date, Period Time, Period Time Tz, Period Timestamp, Period Timestamp Tz
• Use load via tbuild utility: Teradata SQL Inserter Operator

Informix
• defaultLoad: First the data is SELECTed and uploaded to RAM. Then the data is uploaded to a target table using
Insert Cursor

Netezza
• defaultLoad: SELECT and INSERT are executed simultaneously
• ExternalTable: First the data is SELECTed and uploaded to a file. Then the file is uploaded to a target table
using the INSERT INTO … SELECT * FROM EXTERNAL FILENAME query

11.1.3 Batch Setup of Masking Methods for Database Columns


DataSunrise enables you to select multiple columns and set masking methods for them, using a CSV file with the
following column set and content. For example:

dbName,schemaName,tableName,columnName,maskType,maskValue
"postgres","public","addresses","user_id","fixed number","123"
"postgres","public","addresses","street","fixed string","masked"

Your CSV can also contain the following columns only:

dbName,schemaName,tableName,columnName

In such a case, the columns specified in your CSV file will be selected for masking, but you will need to assign
masking methods for the columns manually.
For columns which are not included in your CSV file, masking type will not be changed. You can assign masking type
by checking the required columns and selecting masking method from the Masking Method drop-down list.
To use a CSV file for specifying database columns and masking methods, do the following:
1. In the Select source tables to transfer and columns to mask subsection, click Import Columns from CSV and
select your CSV file to upload. Click Import. All columns included in your CSV file will be shown and masking
methods will be selected according to the CSV file.
2. After assigning masking types for all required columns, click Save.
List of available masking methods and masking values
Each masking method below is listed with an example of its masking value and, where applicable, notes.

• Default: ""
• Fixed number: "123"
• Fixed string: "abc". Notes: double quotes should be escaped, for example: "maskValue" : "ab\"c"
• Empty value: ""
• Random value like current: ""
• Random from interval: "{ \"minVal\":\"123\", \"maxVal\":\"1234\", \"decimals\":\"1\" }". Notes: minVal is the minimum value, maxVal is the maximum value, decimals is the number of digits after the decimal point.
• Function call: "{ \"function_name\":\"my_function\", \"arguments\": [ { \"type\":\"masked_column\", \"value\":\"\" }, { \"type\":\"user_name\", \"value\":\"\" } ] }"
• Email masking: ""
• Email masking full: ""
• Mask username of Email: ""
• Credit card masking: ""
• Mask last chars: "{ \"maskCount\":3, \"paddingText\":\"*\" }". Notes: maskCount is the character count, paddingText is the masking text.
• Show last chars: see "Mask last chars"
• Mask first chars: see "Mask last chars"
• Show first chars: see "Mask last chars"
• Show first and last chars: see "Mask last chars"
• Mask first and last chars: see "Mask last chars"
• Regexp replace: "{ \"replaceString\":\"*\", \"pattern\":\"qwe\" }". Notes: pattern is a regular expression, replaceString is the masking text.
• Fixed datetime: "2020-02-21 01:02:03". Format: "YYYY-MM-DD hh:mm:ss".
• Fixed date: "2020-02-21". Format: "YYYY-MM-DD".
• Fixed time: "01:02:03". Format: "hh:mm:ss".
• Random datetime interval: "{ \"minVal\":\"2020-02-21 01:02:03\", \"maxVal\":\"2020-02-22 01:02:03\", \"dateFormat\":\"\" }". Notes: minVal is mandatory, the start value of the interval; maxVal is mandatory, the end value of the interval. Value format: "YYYY-MM-DD hh:mm:ss". dateFormat is optional and is used only for DynamoDB, MongoDB, and Elasticsearch; if omitted, it is treated as an empty string. For DynamoDB, MongoDB and Elasticsearch this method returns the date formatted according to the format string specified in the dateFormat argument. The format string supports all substitutions found in the strftime() function from the standard C library. For example, if you want to get a value in the YYYY-MM-DD hh:mm:ss format, dateFormat should be '%Y-%m-%d %H:%M:%S'.
• Random date interval: "{ \"minVal\":\"2020-02-21\", \"maxVal\":\"2020-02-22\", \"dateFormat\":\"\" }". Notes: see Random datetime interval. Value format: "YYYY-MM-DD".
• Random time interval: "{ \"minVal\":\"01:02:03\", \"maxVal\":\"02:02:03\", \"dateFormat\":\"\" }". Notes: see Random datetime interval. Value format: "hh:mm:ss".
• Random datetime offset: "{ \"offset\":90000, \"dateFormat\":\"\" }". Notes: offset is mandatory, a time dispersion in seconds; dateFormat is mandatory, see Random datetime interval.
• Random date offset: "{ \"days\":\"180\", \"dateFormat\":\"\" }". Notes: days is a time dispersion in days; dateFormat is optional, see Random datetime interval.
• Random time offset: "{ \"offset\":3600, \"dateFormat\":\"\" }". Notes: see Random datetime interval.
• Mask url: ""
• Unstructured masking: ""
• Masking with Lua script: "{\"luaScriptId\":1}". Notes: luaScriptId is a Lua script ID.
• FP Tokenization Email: ""
• FP Tokenization SSN: ""
• FP Tokenization Credit Card: ""
• FP Tokenization Number: ""
• FP Tokenization String: ""
• FP Encryption FF3 Email: ""
• FP Encryption FF3 SSN: ""
• FP Encryption FF3 Credit Card: ""
• FP Encryption FF3 Number: ""
• FP Encryption FF3 String: ""
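To tie this list back to the CSV import format shown above, a row that assigns a JSON-valued masking method could
look roughly like the following (a hypothetical example: the database objects are made up, and the exact spelling of
the method name and the escaping expected by the importer should be verified against the table above):

dbName,schemaName,tableName,columnName,maskType,maskValue
"postgres","public","addresses","phone","mask last chars","{ \"maskCount\":3, \"paddingText\":\"*\" }"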

11.1.4 Creating Database Users Required for Static Masking


This section describes how to create a target database user with sufficient privileges to perform Static masking on
the target database. Such a user should be used to establish connection with the target database (the user's login/
password should be used in the target database's profile in the Web Console).

11.1.4.1 Creating an Oracle Database User


1. To create a new Oracle user, log into the database as the SYS user and execute the following query:

CREATE USER <User_name> IDENTIFIED BY <Password>;

2. To grant the new user the required privileges, execute the following query (being logged in as the SYS user):

GRANT CREATE SESSION, CREATE ANY TABLE, SELECT ANY TABLE, INSERT ANY TABLE, ALTER ANY TABLE,
SELECT_CATALOG_ROLE TO <User_name>;
GRANT EXECUTE ON dbms_metadata to <User_name>;
GRANT DROP ANY TABLE TO <User_name>;
GRANT RESOURCE TO <User_name>;
GRANT CREATE ANY INDEX TO <User_name>;
GRANT CREATE ANY PROCEDURE TO <User_name>;
GRANT CREATE ANY VIEW TO <User_name>;
GRANT CREATE ANY SEQUENCE TO <User_name>;

• If you get an error caused by insufficient privileges on accessing the "users" tablespace, execute the following
query:

ALTER USER <User_name> quota unlimited on <Tablespace_name>;

• To enable DataSunrise to download the database's metadata, it is necessary to grant your user the privileges
listed in Creating an Oracle Database User on page 64.

Important: Oracle offers two loaders for execution of a Static Masking task: DBLINK and Direct Path. Both
loaders require extra permissions:
• To use DBLINK, you need the following grant:

GRANT CREATE DATABASE LINK TO <User_name>;

• To use Direct Path, you need the following grant:

GRANT LOCK ANY TABLE TO <User_name>;

• If you're going to use random-based masking methods (Masking Methods on page 167) for Static masking,
you need to create a dedicated schema named DS_ENVIRONMENT in your source database (refer to
Configuring DataSunrise for Masking with random-based methods on page 177).
11.1.4.2 Creating a PostgreSQL/Aurora PostgreSQL Database User
1. To create a new PostgreSQL/Aurora PostgreSQL user, execute the following query:

CREATE USER <User_name> WITH PASSWORD '<Password>';

2. Execute the following queries to provide your user with the necessary privileges:

GRANT SELECT ON pg_catalog.pg_database, pg_catalog.pg_namespace, pg_catalog.pg_class,
 pg_catalog.pg_attribute, pg_catalog.pg_user, pg_catalog.pg_settings, pg_catalog.pg_db_role_setting
 TO <User_name>;
GRANT USAGE ON SCHEMA <Source_schema> TO <User_name>;
GRANT SELECT ON <Source_table> TO <User_name>;
GRANT CREATE ON SCHEMA <Target_schema> TO <User_name>;
GRANT INSERT ON ALL TABLES IN SCHEMA <Target_schema> TO <User_name>;
GRANT USAGE ON SCHEMA <Target_schema> TO <User_name>;
GRANT SELECT ON ALL TABLES IN SCHEMA <Target_schema> TO <User_name>;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA <Source_schema> TO <User_name>;

In case you're using a static function for masking, it is necessary to grant the privilege of executing that function (a hypothetical example of such a function is shown after these steps):

GRANT EXECUTE ON FUNCTION <Custom_function> TO <User_name>;

In case you're going to use Truncate Target Tables, grant the following privilege:

GRANT TRUNCATE ON <Target_table> TO <User_name>;

3. In case the source table is created by another user, execute the following query:

ALTER TABLE <Table_name> OWNER TO <User_name>;

4. If you're going to use the DBlink loader for masking, install the required extension:

CREATE EXTENSION dblink;

5. If you're going to use random-based masking methods (Masking Methods on page 167) for Static masking, you
need to create a dedicated schema named DS_ENVIRONMENT in your source database (refer to Configuring
DataSunrise for Masking with random-based methods on page 177).
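For reference, here is a minimal sketch of the kind of custom masking function referenced as <Custom_function> in
step 2 above, assuming PostgreSQL (the function name and logic below are hypothetical; use your own function):

-- Hypothetical masking function: hides the local part of an email address and keeps the domain
CREATE OR REPLACE FUNCTION mask_email(val text) RETURNS text AS $$
    SELECT '***' || substring(val FROM position('@' IN val));
$$ LANGUAGE sql;

-- Grant the masking user the right to execute it, as described in step 2
GRANT EXECUTE ON FUNCTION mask_email(text) TO <User_name>;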

11.1.4.3 Creating a Greenplum Database User


1. To create a new Greenplum user, execute the following query:

CREATE USER <User name> WITH PASSWORD '<Password>';

2. Execute the following query to provide the user with necessary privileges:

GRANT SELECT ON pg_database, pg_namespace, pg_class, pg_catalog, pg_attribute, pg_user, pg_settings,
 pg_db_role_setting TO <User name>;
GRANT SELECT ON <Source table> TO <User name>;
GRANT INSERT ON <Target table> TO <User name>;
GRANT CREATE ON SCHEMA <Schema name> TO <User name>;

3. In case you're using a static function for masking, it is necessary to grant the privilege to execute that
function:

GRANT EXECUTE ON FUNCTION <Custom_function> TO <User_name>;


4. If you're going to use random-based masking methods (Masking Methods on page 167) for Static masking, you
need to create a dedicated schema named DS_ENVIRONMENT in your source database (Configuring DataSunrise
for Masking with random-based methods on page 177)

11.1.4.4 Creating an SAP Hana Database User


1. To create an SAP Hana user for static masking, execute the following command:

CREATE USER <User_name> PASSWORD "<Password>" NO FORCE_FIRST_PASSWORD_CHANGE;

2. Granting the privileges required for static masking includes several stages (depending on how roles are managed
in your SAP HANA database):
a) Since the Static Masking feature requires selecting data from <Source_schema>.<Source table (optional)>
and further data insertion into <Target_schema>, log in as the <Source_schema> owner and grant the new user
the privileges required for access to the source schema.

GRANT SELECT ON SCHEMA <Source_schema> TO <User_name>

b) Log in as the <Target_schema> owner and grant the new user the privilege required for data insertion into
<Target_schema>.

GRANT INSERT ON SCHEMA <Target_schema> TO <User_name>

Note: It is important to grant privileges as object OWNER.

11.1.4.5 Creating a SQL Server Database User


1. Create a login and a user. To do this, execute the following query:

CREATE LOGIN <Login_name> WITH PASSWORD = '<Password>';
GO
USE <Source DB>
GO
CREATE USER <User_name> FOR LOGIN <Login_name>
GO
CREATE SCHEMA <Target_schema>
GO

2. Execute the following query to provide the user with necessary privileges:

GRANT SELECT ON OBJECT :: <Source_schema>.<Source_table> TO <User_name>;
GRANT INSERT ON SCHEMA :: <Target_schema> TO <User_name>;
GRANT ALTER ON SCHEMA :: <Target_schema> TO <User_name>;
GRANT CREATE TABLE TO <User_name>;
SET IDENTITY_INSERT <Target_schema>.<Target_table> ON;

3. In case you're using a static function for masking, it is necessary to grant the EXECUTE privilege for that function:

GRANT EXECUTE ON OBJECT :: <Custom_function> TO <User_name>
GO

4. If you're going to use random-based masking methods (Masking Methods on page 167) for Static masking, you
need to create a dedicated schema named DS_ENVIRONMENT in your source database (refer to Configuring
DataSunrise for Masking with random-based methods on page 177).
11.1.4.6 Creating a MySQL/Aurora MySQL/MariaDB Database User
1. To create a new MySQL/Aurora MySQL/MariaDB user, execute the following query:

CREATE USER <User_name> IDENTIFIED BY '<Password>';

2. Execute the following query to provide the user with necessary privileges:

GRANT SELECT ON <Source_schema>.* TO <User_name>;
GRANT CREATE, INSERT ON <Target_Schema>.* TO <User_name>;
GRANT ALTER ON <Target_Schema>.* TO <User_name>;
GRANT DROP ON <Target_Schema>.* TO <User_name>;
GRANT INDEX ON <Target_Schema>.* TO <User_name>;
GRANT REFERENCES ON <Target_Schema>.* TO <User_name>;
GRANT SELECT ON <Target_Schema>.* TO <User_name>;
FLUSH PRIVILEGES;

In case you're using a static function for masking, it is necessary to grant the privilege to execute that function:

GRANT EXECUTE ON FUNCTION <Custom function> TO <User_name>;

If the Use Parallel Load check box is enabled (Masking → Static Masking → Transferred Tables), it is necessary to
grant INSERT for all tables:

GRANT INSERT ON *.* TO <User_name>;

If you're going to use the Random from Lexicon masking method for dynamic or static masking, you need to
provide your user with the grants listed below. DataSunrise requires these grants to be able to create a schema:

GRANT USAGE ON SCHEMA <Target_Schema> TO <User_name>;
GRANT CREATE ON DATABASE <Target_DB> TO <User_name>;

3. If you're going to mask the contents of functions, grant your user the following privileges:
• MySQL 8.0, if the user specified in DEFINER doesn't have the system privileges:

GRANT SET_USER_ID ON *.* TO '<User_name>'@'%';

• MySQL 8.0, if the user specified in DEFINER has the system privileges (root, for example): provide the user with
the aforementioned grant. Additionally, grant the following privilege:

GRANT SYSTEM_USER ON <Database_name>.* TO '<User_name>'@'%';

• MySQL 5:

GRANT SUPER ON <Database_name>.* TO '<User_name>'@'%';

• For masking inside ROUTINES:

GRANT EXECUTE ON <Database_name>.* TO '<User_name>'@'%';

• For masking of data inside stored procedures and functions, you need to grant your user the following
privilege:

GRANT CREATE ROUTINE, ALTER ROUTINE ON <Database_name>.* TO <User_name>@'%';


4. If you're going to use random-based masking methods (Masking Methods on page 167) for Static masking, you
need to create a dedicated schema named DS_ENVIRONMENT (refer to Configuring DataSunrise for Masking with
random-based methods on page 177).

11.1.4.7 Creating a Netezza Database User


1. To create a new Netezza user, execute the following query:

CREATE USER <User_name> WITH PASSWORD '<Password>';

2. Execute the following query to provide the user with necessary privileges:

GRANT LIST ON AGGREGATE, DATABASE, EXTERNAL TABLE, FUNCTION, GROUP, MANAGEMENT TABLE, MANAGEMENT
VIEW, PROCEDURE, SEQUENCE, SYNONYM, SYSTEM TABLE, SYSTEM VIEW, TABLE, USER, VIEW to <User_name>;
GRANT SELECT ON <Source_table> TO <User_name>;
GRANT INSERT ON <Target_schema> TO <User_name>;
GRANT CREATE TABLE IN <Target_schema> TO <User_name>;

In case you're using a static function for masking, it is necessary to grant the privilege to execute that function:

GRANT EXECUTE ON FUNCTION <Custom_function> TO <User_name>;

11.1.4.8 Creating a Redshift Database User


1. To create a new Redshift user, execute the following query:

CREATE USER <User_name> PASSWORD '<Password>';

2. Execute the following query to provide the user with necessary privileges:

GRANT USAGE ON SCHEMA <Schema_name> TO <User_name>;
GRANT SELECT ON ALL TABLES IN SCHEMA <Schema_name> TO <User_name>;
GRANT INSERT ON TABLE <Table_name> TO <User_name>;
GRANT CREATE ON SCHEMA <Schema_name> TO <User_name>;

In case you're using a static function for masking, it is necessary to grant the privilege to execute that function:

GRANT EXECUTE ON FUNCTION <Function_name> TO <User_name>;

3. If you're going to use random-based masking methods (Masking Methods on page 167) for Static masking, you
need to create a dedicated schema named DS_ENVIRONMENT in your source database (refer to Configuring
DataSunrise for Masking with random-based methods on page 177).

11.1.4.9 Creating a Teradata Database User


1. To create a user for Static Masking, execute the following query:

CREATE USER "<User_name>" AS PERM =0 PASSWORD "<Password>"

2. Provide the new user with the required privileges using the following queries:

GRANT SELECT ON "<Source_table>" TO "<User_name>"


GRANT SELECT, INSERT ON "<Target_schema>" to "<User_name>"
3. If you're going to transfer UDT types (ARRAY-like custom types) using the Stream Operator loader or any other
loader similar to Tpump (refer to Static Masking Loaders on page 232), provide your user with the following
grants:

GRANT UDTUSAGE ON SYSUDTLIB TO <Target_Schema_name> WITH GRANT OPTION

4. To use Teradata tbuild loaders, do the following:


• Download and install Teradata Tools and Utilities (TTU) Base
• Run Suite Setup
• Select Teradata Parallel Transporter Base. It's important that TTU List Products displays Shared ICU Libraries for
Teradata.

11.1.4.10 Creating a Vertica Database User


1. To create a new Vertica user, execute the following query:

CREATE USER <User_name> IDENTIFIED BY '<Password>';

2. Execute the following query to provide the user with necessary privileges:

GRANT AUTHENTICATION <Authentication_method_name> TO <User_name>;
GRANT SELECT ON TABLE <Source_database>.<Schema_name>.<Source_table> TO <User_name>;
GRANT INSERT ON TABLE <Target_database>.<Schema_name>.<Target_table> TO <User_name>;
GRANT CREATE ON SCHEMA <Target_schema> TO <User_name>;

In case you're using a static function for masking, it is necessary to grant the privilege to execute that function:

GRANT EXECUTE ON FUNCTION <DB_name>.<Schema_name>.<Custom_function> TO <User_name>;

11.1.4.11 Creating a DB2 Database User


To create a DB2 user for static masking, do the following:
1. Create a DB2 user for getting the target database's metadata: Creating a DB2 Database User on page 73
2. Grant required privileges to your user:
a) Execute the following query:

GRANT CREATETAB ON DATABASE TO USER <User_name>;

b) Execute the following query for the tables to be masked:

GRANT SELECT ON TABLE <Source_database>.<Source_table> TO USER <User_name>;

11.1.4.12 Creating a MongoDB Database User


To create a MongoDB user for static masking, do the following:
1. Create a MongoDB user for getting the target database's metadata: Creating a MongoDB Database User on page
74
2. Grant the readWrite privilege to your user:

db.grantRolesToUser("<User_name>", ["readWrite"])

Note: you should grant the readWrite privilege for each database involved in the masking process, so before
granting the permission, switch to the corresponding database by executing the following command:

use <Source_DB>

or

use <Target_DB>

respectively

11.2 In-Place Static Masking


In some cases, when you have created a copy of your production database to be used by testers or outsourcers but
are not allowed to give access to the actual data the database contains, you can use the In-Place Masking feature to
mask data in the database without creating a copy of this database as regular Static Masking does.
In-Place masking utilizes DataSunrise’s Static Masking engine. Its peculiarity is that the database/schema/table
to be masked is the target and the source at the same time. During the masking process, another target table is created.
DataSunrise takes the data from the source table, masks it and inserts it into the target table. Then the source table is
removed and the target table is renamed to the source table's name. As a result, you get your source table masked.
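Conceptually, the flow resembles the following sequence of statements (a rough sketch only, assuming a
PostgreSQL-style database and a hypothetical mask_email() function; the actual statements are generated by
DataSunrise and differ per database):

-- 1. Create a temporary target table with the same structure as the source
CREATE TABLE customers_masked (LIKE customers INCLUDING ALL);
-- 2. Insert masked data taken from the source table
INSERT INTO customers_masked SELECT id, name, mask_email(email) FROM customers;
-- 3. Remove the source table and rename the masked copy to the source table's name
DROP TABLE customers;
ALTER TABLE customers_masked RENAME TO customers;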
For now, In-place masking is available for Oracle, PostgreSQL, MySQL, Greenplum, Redshift, Amazon Aurora, MS SQL
Server, MongoDB and MariaDB databases.

Important: please take into account that the data in your source table is replaced with masked values during the
in-place masking process and this process is irreversible. To avoid losing your valuable data, back the source table
up if necessary.

Note: for Oracle, MS SQL Server and MySQL-like databases (MySQL, MariaDB, Aurora MySQL) transferring of
existing triggers is enabled by default.

To utilize the In-Place Masking feature, do the following:


1. Navigate to Masking → Static Masking and create a new task.
2. Select your database instance in the Source Instance and select Mask in Place in the Target drop-down list.
3. Select a source database and database objects to mask, masking methods to use, etc (refer to Static Data
Masking on page 228).

11.3 Sensitive Data Discovery


The Data Discovery component enables you to search for database objects that contain sensitive data and quickly
create Rules for these objects. DataSunrise includes prebuilt search filters for various data types. You can create your
own filters as well.
DataSunrise uses regular expressions to perform sensitive data search by column names and their contents. If that is
not enough, you can use Lua scripting. The following variables are available inside the script:
• attributeID
• columnName
• fullColumnType
• columnSize
• columnValue
• columnType
The script should return a value other than 0 if the data is considered discovered.
To configure the Data Discovery functionality, dedicated objects called Information Types are used. Information Type
is a filter that defines certain attributes DataSunrise uses to search for sensitive data in the database.
By default, DataSunrise includes search filters (Information Types) for the following categories of data:
Each category below is listed with its associated security standards (in parentheses) and example Information Types.

• Personal Info (HIPAA, GDPR, KVKK, CCPA, ISO 27001, Nevada Privacy Law, PIPEDA, Digital Personal Data Protection Bill, APPs, APPI, Privacy Act NZ, LGPD): Names, ID numbers, age, birth date, etc.
• Geographical (Address) (HIPAA, GDPR, KVKK, CCPA, ISO 27001, Nevada Privacy Law, PIPEDA, Digital Personal Data Protection Bill, APPs): Addresses, post codes, geographical coordinates, geocoding, etc.
• Financial (GDPR, SOX, CCPA, ISO 27001, Nevada Privacy Law, PIPEDA, Digital Personal Data Protection Bill, APPs): Account numbers, Tax IDs, BAN, income, etc.
• Occupation (GDPR, SOX, KVKK, CCPA, ISO 27001, Nevada Privacy Law, PIPEDA, Digital Personal Data Protection Bill, APPs): Admission date, company name, etc.
• Medical (HIPAA, GDPR, KVKK, CCPA, ISO 27001, Nevada Privacy Law, PIPEDA, Digital Personal Data Protection Bill, APPs, Privacy Act NZ): Medical records-related (treatment, Health Insurance Number, etc.)
• Banking (GDPR, PCI DSS, SOX, KVKK, CCPA, ISO 27001, Nevada Privacy Law, PIPEDA, Digital Personal Data Protection Bill, APPs, Privacy Act NZ): Banking-related (credit card numbers, bank account, cardholder, etc.)
• Telephone/Fax (HIPAA, GDPR, KVKK, CCPA, ISO 27001, Nevada Privacy Law, PIPEDA, Digital Personal Data Protection Bill, APPs): Phone numbers, country calling codes, etc.
• Web/Network (GDPR, KVKK, CCPA, ISO 27001, Nevada Privacy Law, PIPEDA, Digital Personal Data Protection Bill, APPs): Internet-related (email address, host name, IP address, etc.)
• Numbers (HIPAA, GDPR, KVKK, CCPA, ISO 27001, Nevada Privacy Law, PIPEDA, Digital Personal Data Protection Bill, APPs): Number-based entries (certificate licence number, device serial number, etc.)
• Tax Numbers for Countries (GDPR, SOX, CCPA, Nevada Privacy Law, ISO 27001, PIPEDA, Digital Personal Data Protection Bill, APPs, APPI): Permanent account number
• VAT Numbers for Countries (GDPR, CCPA, Nevada Privacy Law, ISO 27001, PIPEDA, Digital Personal Data Protection Bill, APPs): VAT numbers
• Repeating Patterns: Number identification
• Unicode Definitions: Unicode-related (code, InitSet, Init, number, etc.)
• Special Regex: Regular expressions (alphanumeric, deletes short words, etc.)
• Date Formats (HIPAA, GDPR, CCPA, ISO 27001, Nevada Privacy Law, PIPEDA, Digital Personal Data Protection Bill, APPs): Dates (DD MM YY, dd/mm/yyyy, date open format, etc.)
• Conditional Abbreviations: Acronyms and abbreviations
• USA Connectors: -
• User Defined Accounts: -

11.3.1 Creating a New Information Type


Data Discovery enables you to search for columns containing sensitive data by column names and column contents,
using regular expressions and Lua scripts. These two methods can be used to define the structure of the data to
be discovered, such as data type, range of values, etc. To create and edit Information Type objects (search filters), go
to the Information Types subsection. A list of existing filters is displayed on this subsection's start page. You can
assign category and country tags to newly created Information Types. You can see grouping by tags in the Group by
Countries section.
To create an Information Type filter, do the following:
1. Click Add Information Type.
2. Input a filter name into the Logical Name text field.
3. Select security standards an information type belongs to, in the Sensitivity Labels drop-down list
4. Click Save to save the filter's settings.
5. Locate your information type in the list and click its Name to edit its settings. A list of filter attributes will be
displayed (a new filter doesn't have any attributes).
6. Click Add Attribute to create an attribute. A filter's attribute includes templates DataSunrise uses to search
data (by a column name and column contents). Each filter can include multiple attributes.
7. Fill in the required text fields in the Attribute window:

Interface element Description


Name field Logical name of an attribute
Column Name tab
Column Name template field A template used to search for a column by its name. Each template should
be placed on a new line.
Case Sensitive check box Use this check box to differentiate upper and lower case letters during the
search
Full Names check box Use this box to define the column's name as a full name: dbName
\.schemaName\..*tableName.*\..*columnName.*$
Column Data tab
Column Data Type drop-down list Set filters to search data inside database columns.
• Irrelevant: search for misc data.
• Numbers Only: set a number range to search inside a column.
• Strings Only: set a regular expression pattern to search inside columns
in Template for Column Contents.
• Dates Only: set a date range to search inside columns.
• Dates and Time: search for date and time values.
• Time Only: search for time values.

Search Method drop-down list Select a search method. This method depends on the Column Data Type
selected.
• Template (Strings Only). A template used to search inside columns. It
can be a regular expression.
• Unstructured text (Strings Only). Unstructured text
• NLP discovery (Strings Only). Refer to subs. NLP Data Discovery on
page 250
• Lua Script. Enable Lua scripting for searching. The Lua enables you to
create simple scripts which define the structure of the content you want
to search for.

Note: Example:
if (string.match(columnName, "first_name") and columnSize == 8) then
return 1 else return 0 end
In this case, if there is a string entry ("string" variable) in a column
named first_name and the entry is 8 characters long ("==" or "=",
but you can use ">=" or "<=" as well) then the script would "return
1" — DataSunrise will display the search results. Otherwise (return 0)
DataSunrise won't display any results.

• Lexicon (Strings Only). Use Lexicon for searching. See Discovering Sensitive Data Using Lexicon on page 251
• Range (for Numbers Only). Minimum and maximum value of a number
range for searching inside columns
• Range (for Dates Only). First date and last date of a time range to
search inside columns

Filename keyword validation check Search across files whose names include words from a specified Lexicon.
box (for Unstructured text) You need to select a Lexicon of interest in the Lexicon drop-down list
Negative filename keyword Exclude from the search files whose names include words from a specified
validation check box (for Lexicon. You need to select a Lexicon of interest in the Lexicon drop-down
Unstructured text) list
Keyword validation check box (for • Words list: Lexicon to search according to
Unstructured text) • Whole file search: search across a complete file
• Number of words: specify a number of words to search across. Used
together with the By number of words check box
• Direction: words direction to search at

Negative keyword validation check • Words list: Lexicon to exclude from the search
box (for Unstructured text) • Whole file search: search across a complete file
• Number of words: specify a number of words to exclude from the
search. Used together with the By number of words check box
• Direction: words direction to search at

Validation check box Validator: validation method to use. Luhn algorithm is available by default
but you can also use your Lua Script. Navigate to Configuration → Lua
Scripts and create the required Lua script. Then select your script in the
Validator drop-down list
Default Masking Method tab
Main Masking Method drop-down Masking algorithm to be used for a given type of data. Refer to subs.
list Masking Methods on page 167
Mask Value field Masking value
Alternative Masking Method drop- The masking method to be used if there's a relation between the discovered
down list column and other columns by foreign keys and the main masking method
can't be used

Note: The PCRE library is used for regular expressions, so PCRE syntax should be used when creating
templates. For example, the following expression is used to search for phone numbers in a database column:

^\+(?:[0-9] ?){6,14}[0-9]$

8. Click Save to save the attribute. Add additional attributes to the filter if necessary.
9. To view filter's settings, click the Information Types link in the left panel, select a filter from the list and click its
name.
10. Note that you can filter the list of Information Types by associated countries. To do that, turn the Group by
Countries switch on.

11.3.2 Periodic Data Discovery


Performing sensitive data search on schedule enables you to update Object groups containing sensitive objects
regularly.

1. Create a new task. Name it


2. Select a server to run the task on
3. If you need a report on discovery results, check the Generate Reports check box and select report file format
(CSV or PDF)
4. Configure the sensitive data in Search Parameters:

UI Element Description
Database Instance drop-down list Database instance to search sensitive data across
Credentials button User credentials used to connect to the target database
Save Search Results in an Object Group Select an Object Group to save the search results in (not obligatory)
drop-down list
SELECT strategy drop-down list • Select top rows: SELECT first rows of the target table defined by the
Number of analyzed rows value;
• Select random rows: SELECT random rows;
• Select all rows: SELECT an entire table.

Column match strategy drop-down list Column filtering type


Min percentage of match field Minimum percentage of rows in a column that match the search filter
conditions to consider the column as containing the required sensitive
data
Number of analyzed rows field Number of table rows to be SELECTed
Database Objects to Search Across
Database drop-down list Database to search sensitive data across
Schema drop-down list Schema to search sensitive data across
Exclude Objects from the Search Select Objects to skip when searching
Search Criteria
Search by drop-down list Search by:
• Security standards
• Existing Information types (see Sensitive Data Discovery on page
243)

Search for Attributes drop-down list Search by:


• Security standards
• Existing Information types (see Sensitive Data Discovery on page
243)

Startup Frequency
Frequency drop-down list Task running frequency. You can use Manual for manual starting.

Note: Data Discovery for Amazon S3 offers more settings than "regular" Data Discovery.

UI Element Description
Enable AWS S3 Inventory metastore mode check box Enable a Crawler task (see AWS S3 Crawler on page
253)
Enable statistics on data processing speed check box Display statistics on data processing speed
Enable statistics on attributes check box Display statistics on file attributes
Additional Metrics check box Display additional metrics
Task Mode drop-down list • Standard: standard Data Discovery
• Incremental: enable Incremental scanning (see
Incremental Data Discovery on page 254)
• Randomized: enable Randomized scanning (see
Randomized Data Discovery on page 254)

5. If required, keep all Data Discovery results or remove old results by checking the corresponding check box.

11.3.3 NLP Data Discovery


DataSunrise’s Data Discovery Module features NLP (Natural Language Processing) Data Discovery. This feature
enables you to search for sensitive data across database columns that contain unstructured data. For example, you
can locate email addresses in text. NLP Data Discovery now works with non-binary and binary data types.
NLP Data Discovery supports the following file formats:
• Microsoft Word: DOC, DOCX, RTF, DOT, DOTX, DOTM, DOCM, FlatOPC, FlatOpcMacroEnabled, FlatOpcTemplate,
FlatOpcTemplateMacroEnabled
• OpenOffice: ODT, OTT
• WordprocessingML: WordML
• Web: HTML, MHTML
• Text: TXT

Note: you need to install Java 1.8+ to be able to use NLP Data Discovery.

To use NLP Data Discovery, do the following:


1. Navigate to Data Discovery → Information Types
2. Create new Information Type (search filter), select Security Standards to apply and save it. Open your
Information Type and navigate to Attributes, click Add Attribute
3. Set Attribute name (any), select the Column Data tab
4. In Column Data Type, select Strings Only
5. In Search Method, select NLP discovery
6. Select data type to search for in the Data to Search... drop-down list
7. Select default masking method to apply to the data type you chose. Save the Attribute
8. Navigate to Data Discovery → Periodic Data Discovery and create a Periodic task. Fill out all the required
fields
9. In the Search Criteria section, select Information Types from the Search by drop-down list
10. Select your Attribute
11. Run the task.

11.3.4 Discovering Sensitive Data Using Lua Script


DataSunrise enables you to use Lua Script for Sensitive Data Discovery.
To use Lua Script for Discovery, do the following:
1. Navigate to Data Discovery → Information Types and create a new Type or use existing.
2. Click Add Attribute to create an Attribute.
3. Set a logical name for the Attribute, Navigate to Column Data and select Strings Only.
4. In the Search Method drop-down list, select Lua Script.
5. Click Edit Lua Script and input the script into the Script field. Note that you can see Global variables that can be
used in the script
6. For Data Discovery, the following Global variables can be used:
• attributeID (number) - Current Attribute ID
• columnName (string) - Column Name
• fullColumnType (string) - Actual Type of Database Column
• columnSize (number) - Column Size
• columnValue (number / string) - Data Contained in the Column
• columnType (number) - Column Type. Available Types:
• 0 - number
• 1 - string (current)
• 2 - date
• 3 - date and time
• 4 - time
• 5 - other
7. See an example of the script below:

if (string.match(columnName, "first_name") and columnSize == 8) then
    return 1
else
    return 0
end

This script searches for database columns named "first_name" which contain entries 8 characters long.

11.3.5 Discovering Sensitive Data Using Lexicon


DataSunrise enables you to use built-in dictionaries (Lexicon) for searching for sensitive data. These dictionaries
contain names, surnames, company names, postcodes, job positions etc.
To use the Lexicon for Discovery, do the following:
1. Navigate to Data Discovery → Information Types and create a new Type or use existing.
2. Click Add Attribute to create an Attribute.
3. Set a logical name for the Attribute, Navigate to Column Data and select Strings Only.
4. In the Search Method drop-down list, select Lexicon.
5. Select the required dictionary (Lexicon names are self-explanatory). DataSunrise will search for database entries
which contain entities from the selected dictionary.

11.3.5.1 Creating a Lexicon


You can create your own Lexicon to use for Data Discovery. To do it, perform the following:
1. Navigate to Data Discovery → Lexicons.
2. Click Add New Lexicon and input a logical name.
3. To import entries from a database and add them to your Lexicon, check the Import entries check box:
• Input a logical name for the task, choose the database instance, and specify a count of rows to import.
• Select columns to import entries from.
4. Locate your Lexicon in the list and open it.
5. Click Add to add an entry to the Lexicon or import CSV files with a list of entries in it.

11.3.6 Using Table Relations for Data Discovery


DataSunrise enables you to use Table Relations while doing a Sensitive Data Discovery search. As a result, Data
Discovery displays a list of columns associated with the discovered columns. For this, do the following:

1. First, you should have an existing Table Relation entry.


2. Create a new Data Discovery Search (refer to subs. Sensitive Data Discovery on page 243). Search for sensitive
data of interest and in the search results you will get a list of associated columns.
3. As a result you will be able to discover table relations across all databases included in your database server.

11.3.7 OCR Data Discovery


DataSunrise’s Data Discovery Module features OCR (Optical Character Recognition) Data Discovery. This feature
enables you to search for sensitive data such as personal data, credit card numbers, driver licenses, etc. pictured
in images. DataSunrise uses the Tesseract engine, which is based on neural network technology, for character
recognition. For now, OCR Data Discovery works with AWS S3 only.
As an additional functionality, DataSunrise employs Amazon Textract OCR to discover sensitive data in
images located in Amazon S3 buckets. To enable this functionality you need to activate the parameter
DataDiscoveryUseAmazonTextractOCR. The DataDiscoveryUseAmazonTextractS3Integration parameter enables
Amazon Textract to take and scan files according to the file path that DataSunrise sends to Amazon Textract.
If you want to convert formats that are not supported by AmazonTextract to supported formats, enable the
NeedConvertUnsupportTextractFormats parameter.

Note: this does not work with DataDiscoveryUseAmazonTextractS3Integration

If the NativeOCRHandlingOnExternalOCRError parameter is active, the file will be processed by the native OCR when
the external OCR fails to process the image or processes it with an error.
DataSunrise OCR Data Discovery supports the following file formats:
• JPEG
• JPEG 2000
• GIF (non-animated)
• PNG
• TIFF
• WebP
• BMP
• PNM
• PDF

Note: you need to install Java 1.8+ to be able to use OCR Data Discovery with NLP Data Discovery

Once you've started an OCR Data Discovery task, DataSunrise browses the contents of your S3 bucket for images.
The OCR DD engine's preprocessor prepares images for further processing by increasing their contrast and sharpness.
Then DataSunrise, with the help of the Tesseract OCR technology, recognizes the text pictured in the images and performs
Data Discovery on this text according to your DD Task's settings. As a result, you get the names and locations of
image files that contain sensitive data.
To use OCR Data Discovery, do the following:
1. Navigate to Data Discovery → Periodic Data Discovery
2. Create a Data Discovery task for your S3 bucket
3. Run the task and DataSunrise will perform OCR discovery automatically.

11.3.8 AWS S3 Crawler


The AWS S3 Crawler Periodic task enables you to use audit data collected by AWS S3 Inventory for Data discovery
purposes. AWS Inventory stores metadata of a source AWS S3 bucket in the form of an archived CSV file. To reduce
traffic consumption and operation cost, DataSunrise can get S3 metadata with the help of Inventory instead of using
AWS API calls.

1. Create a new task. Name it


2. Select a server to run the task on
3. Select your AWS S3 DB Instance
4. Expand Settings and fill out all the remaining fields:

Parameter Description
Template to crawl through settings field Specify the path to the S3 folder to browse with the Crawler:
<level1_folder_logical_name>/<level2_folder_logical_name>/
<levelN_folder_logical_name>. Only ASCII characters are allowed.
Note that each path segment should be enclosed in <> and separated
by a slash (see the example after these steps)
Allowed values fields Slash-separated exact values of the level (with no spaces before/after).
Note that AWS S3 bucket folder names may contain spaces

Upload/drag-and-drop a JSON file Self-explanatory


which contains Cross-Account-Role
ARN settings subsection
Startup Frequency
Frequency drop-down list Task running frequency. You can use Manual for manual starting.

5. If required, keep all Data Discovery results or remove old results by checking the corresponding check box.
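For illustration, a hypothetical crawl template and the matching Allowed values fields might look like this (the
folder names below are made up; adapt them to your own bucket layout):

Template to crawl through:        <region>/<department>/<year>
Allowed values for <region>:      us-east/eu-west
Allowed values for <department>:  hr/finance
Allowed values for <year>:        2022/2023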

11.3.9 Incremental Data Discovery


Incremental Data Discovery enables you to speed up an AWS S3 Data Discovery process by excluding already-scanned
files from the search. To be able to use the Incremental mode, you need to arrange existing incremental DD tasks
into groups so that they can share information about the timestamps of files in your S3 buckets. Having performed
a scan, a task shares information on the scanned files so that other tasks from the same group can use this
information to exclude files that haven't been updated since that scan from subsequent searches.
To create an Incremental Data Discovery task, do the following:
1. Create a new task (see Periodic Data Discovery on page 248)
2. Select your AWS S3 DB Instance
3. In the Search Parameters section, select Incremental in the Task mode drop-down list
4. In the Scan Groups section, select existing Incremental tasks to add to a current group. Note that you can do it
either with the help of the Add to Group button or just by drag-and-dropping a task into the task list.

11.3.10 Randomized Data Discovery


Randomized Data Discovery enables you to scan a large number of files contained in your AWS S3 bucket in portions,
by scanning not the complete bucket but just random files during one iteration. To be able to use the Randomized
mode, you need to specify the percentage of files in your S3 buckets that should be scanned in one iteration.
To create a Randomized Data Discovery task, do the following:
1. Create a new task (see Periodic Data Discovery on page 248)
2. Select your AWS S3 DB Instance
3. In the Search Parameters section, select Randomized in the Task mode drop-down list
4. In the The percentage of files should be scanned field, specify the percentage of files in your AWS S3 bucket
that should be scanned with Data Discovery task.
5. If necessary, in the Scan Groups section, select existing Randomized tasks to add to a current group. Note that
you can do it either with the help of the Add to Group button or just by drag-and-dropping a task into the task
list. You can also leave this section blank. Note that if you specify a Scan Group, DataSunrise uses the Group's
percentage settings rather than the current task's (The percentage of files should be scanned value).

11.3.11 Creating Database Users Required for Data Discovery


This section describes how to create a database user with sufficient privileges to perform the Discovery of sensitive
data on a target database. Such a user should be used to establish a connection with a target database.

11.3.11.1 Creating an Oracle Database User


To create a new Oracle user, connect as the SYS user and execute the following query:

CREATE USER <User_name> IDENTIFIED BY <Password>;

To grant the new user the required privileges, execute the following query (as the SYS user):

GRANT CONNECT TO <User_name>;


GRANT RESOURCE TO <User_name>;
GRANT SELECT_CATALOG_ROLE TO <User_name>;
GRANT CREATE ANY TABLE TO <User_name>;
GRANT SELECT ON "SYS"."DBA_OBJECTS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_OBJECT_TABLES" to <User_name>;
GRANT SELECT ON "SYS"."DBA_TAB_COLUMNS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_TABLES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_TAB_COLS" to <User_name>;
GRANT SELECT ON "SYS"."DBA_NESTED_TABLES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_SYNONYMS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_USERS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_PROCEDURES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_TYPES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_TYPE_ATTRS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_DEPENDENCIES" TO <User_name>;
GRANT SELECT ON "SYS"."COLLECTION$" TO <User_name>;
GRANT SELECT ON "SYS"."V_$SERVICES" TO <User_name>;
GRANT SELECT ON "SYS"."V_$INSTANCE" TO <User_name>;
GRANT SELECT ON "SYS"."V_$DATABASE" TO <User_name>;
GRANT SELECT ON "SYS"."GV_$INSTANCE" TO <User_name>;
GRANT SELECT ON "SYS"."OBJ$" TO <User_name>;
GRANT SELECT ON "SYS"."COL$" TO <User_name>;
GRANT SELECT ON "SYS"."USER$" TO <User_name>;
GRANT SELECT ON "SYS"."COLTYPE$" TO <User_name>;
GRANT SELECT ON "SYS"."HIST_HEAD$" TO <User_name>;
GRANT SELECT ON "SYS"."TAB$" TO <User_name>;

To specify only the table to perform Data Discovery tasks on, execute the following query:

GRANT SELECT ON <Target_schema> TO <User_name>;

To let the new user perform 'Data Discovery' tasks on all tables in a specific schema, execute the following script:

BEGIN
FOR t IN (SELECT * FROM all_tables WHERE OWNER = <Target schema>)
LOOP
EXECUTE IMMEDIATE 'GRANT SELECT ON ' || t.OWNER || '.' || t.TABLE_NAME || ' TO <User_name>';
END LOOP;
END;

11.3.11.2 Creating a PostgreSQL Database User


1. To create a new PostgreSQL/Aurora PostgreSQL user, execute the following query:

CREATE USER <User_name> WITH PASSWORD '<Password>';


2. Execute the following query to provide the user with necessary privileges:

GRANT SELECT ON ALL TABLES IN SCHEMA <Target_schema> TO <User_name>;
GRANT USAGE ON SCHEMA <Target_schema> TO <User_name>;

11.3.11.3 Creating a Greenplum Database User


1. To create a new Greenplum user, execute the following query:

CREATE USER <User_name> WITH PASSWORD '<Password>';

2. Execute the following query to provide the user with necessary privileges:

GRANT SELECT ON ALL TABLES IN SCHEMA <Target_schema> TO <User_name>;

3. For Greenplum lower than 6.0: grant your user the privilege of SELECT for each table you're going to search
across:

GRANT SELECT ON <Target_table> TO <User_name>;

11.3.11.4 Creating an SAP Hana Database User


1. To create a new SAP HANA user, execute the following query:

CREATE USER <User_name> PASSWORD "<Password>" NO FORCE_FIRST_PASSWORD_CHANGE;


2. Since the Data Discovery feature presumes only selection, provide the user with the necessary privileges by
executing the following query:

GRANT SELECT ON SCHEMA <Schema_name> TO <User_name>;

11.3.11.5 Creating an SQL Server Database User


You can download a dedicated script that can be used for creating a user and providing necessary privileges. The
script can be downloaded here: https://www.datasunrise.com/doc/creating_mssql_user_data_discovery.sql

11.3.11.6 Creating a MySQL/Aurora MySQL/MariaDB Database User


To create a new MySQL/Aurora MySQL/MariaDB user, execute the following query:

CREATE USER <User_name> IDENTIFIED BY '<Password>';

Execute the following query to provide the user with necessary privileges:

GRANT SELECT ON <Target_database>.* TO '<User_name>'; FLUSH PRIVILEGES;

11.3.11.7 Creating a Netezza Database User


To create a new Netezza user, execute the following query:

CREATE USER <User_name> WITH PASSWORD '<Password>';

Execute the following query to provide the user with necessary privileges:

GRANT LIST ON DATABASE TO <User_name>;
GRANT SELECT ON DATABASE TO <User_name>;
11.3.11.8 Creating a Redshift Database User
To create a new Redshift user, execute the following query:

CREATE USER <User_name> PASSWORD '<Password>';

Execute the following queries to provide the user with necessary privileges:

GRANT SELECT ON ALL TABLES IN SCHEMA <Schema_name> TO <User_name>;

11.3.11.9 Creating a Teradata Database User


Any user can SELECT from a database, so it is not required to create a special user.

11.3.11.10 Creating a Vertica Database User


To create a new Vertica user, execute the following query:

CREATE USER <User_name> IDENTIFIED BY '<Password>';

Execute the following query to provide the user with necessary privileges:

GRANT AUTHENTICATION <Authentication_method_name> TO <User_name>;


GRANT SELECT ON ALL TABLES IN SCHEMA <Schema_name> TO <User_name>;

11.3.11.11 Data Discovery with TDS 7.4 Always Encrypted


If you're trying to perform Data Discovery and get the following error:

SELECT EXCEPTION test.dbo.NewTable err [Incompatible data types in stream operation Column: 1<RAW>,
datatype in operator <</>>: CHAR. errCode = 32000], Query : SELECT [Column1], [Column3], [Column4],
[Column5], CAST([Column6] as char) AS [Column6], [Column7], [Column8], [Column9], [Column10] FROM
[dbo].[NewTable]
ORDER BY Column1
OFFSET 0 ROWS FETCH NEXT 100 ROWS ONLY

Do the following:
• Run SSMS as administrator (this is required to create a certificate for a global user)
• Create a master key stored on your local PC. Select an existing certificate or create a new one
• Create an encryption key that uses the master key
• Encrypt your column with this key

11.3.11.12 Enabling Data Discovery in Sybase


To use Data Discovery on a Sybase database, execute the following queries in your Sybase client application:

sp_configure 'enable monitoring', 1


go
sp_configure 'sql text pipe active', 1
go
sp_configure 'sql text pipe max messages', 100
go
sp_configure 'plan text pipe active', 1
go
sp_configure 'plan text pipe max messages', 100
go
sp_configure 'statement pipe active', 1
go
sp_configure 'statement pipe max messages', 100
go
sp_configure 'errorlog pipe active', 1
go
sp_configure 'errorlog pipe max messages', 100
go
sp_configure 'deadlock pipe active', 1
go
sp_configure 'deadlock pipe max messages', 100
go
sp_configure 'wait event timing', 1
go
sp_configure 'process wait events', 1
go
sp_configure 'object lockwait timing', 1
go
sp_configure 'SQL batch capture', 1
go
sp_configure 'statement statistics active', 1
go
sp_configure 'per object statistics active', 1
go
sp_configure 'max SQL text monitored', 255
go
sp_configure 'statement cache size', 5000
go
sp_configure 'lock timeout pipe active', 1
go
sp_configure 'lock timeout pipe max messages', 1000
go
sp_configure 'capture compression statistics' , 1
go
dbcc fix_text ( spt_jtext )

Important: if you've changed the Adaptive Server character set to a multibyte character set, upgrade the text values by executing the following query (the table must be in the current database):

dbcc fix_text (<table_name> | <table_id>)

11.3.12 Data Subject Access Request (DSAR)


The GDPR and CCPA regulations oblige organizations to provide data subjects, on request, with access to all of their PII stored in corporate databases. As a rule, this task is performed by lawyers or database operators. DataSunrise's DSAR simplifies this task by providing powerful search and reporting mechanisms. The DSAR functionality enables you to search across your databases and retrieve the personal data of interest in compliance with the GDPR and CCPA. This data can be downloaded from the database and displayed as a report.
The search process comprises two stages:
• The DataSunrise administrator configures the search parameters based on a Data Discovery task. The data to be requested should also be specified (name, surname, birth date, etc.)
• The operator uses the search filters configured by the administrator to initiate search tasks and get the data of interest.
To utilize the DSAR functionality, do the following:
1. Create a Periodic Data Discovery task (refer to Periodic Data Discovery on page 248). It will be used to search
the data of interest. You need to run this task at least once because DSAR gets the data from the latest available
Periodic task.
2. Navigate to Data Discovery → DSAR. Create a new Config. Enter a logical name and select the Periodic task you created before.
3. Click New Field to add a new search field. Enter a logical name and select one or more search filters to associate with the Field. These fields will be used at the next step to specify the values of interest. Save the task.
4. Open your task again, navigate to the Tasks subsection and click Start Searching to initiate a search. Select
output file format and specify the values to be discovered in the corresponding search fields. As a result,
DataSunrise will browse the database to find all columns associated with the specified values.
5. You can view the report as a table or as a diagram by clicking the corresponding icons, or you can download the
report.

11.4 Reporting
The Reporting section includes tools for creating reports on DataSunrise operations.

Note: the reporting component always displays depersonalized user queries, so no actual query content can leak.

11.4.1 Reports
Click Reports in the Event Monitor section.
To view a report, perform the following:
1. Select a report to view in the Report Type drop-down list:

Report type Description


Audited Applications Display applications whose queries were audited
Audited Hosts Display hosts whose queries were audited
Audited IP Display IP addresses whose queries were audited
Audited Users Display database users whose queries were audited
Blocked Applications Display applications whose queries were blocked
Blocked Hosts Display hosts whose queries were blocked
Blocked IP Display IP addresses whose queries were blocked
Blocked Users Display database users whose queries were blocked
Masked Applications Display applications whose queries were processed by a masking rule
Masked Hosts Display hosts whose queries were processed by a masking rule
Masked IP Display IP addresses whose queries were processed by a masking rule
Masked Users Display database users whose queries were processed by a masking rule
Failed Login Attempts Display failed user attempts to log into the database
Frequent operations Display a list of the most frequent operations
The Longest operations Display a list of the longest operations
Blocked operations Display blocked operations
Account Operations Display operations with DataSunrise user accounts
Change Passwords Display password changes
Audited Application Users Display app users whose queries were captured by the Data Audit component
Blocked Application Users Display app users whose queries were blocked by the Database firewall
Masked Application Users Display app users whose queries were obfuscated by the Dynamic Data masking
component.

2. Specify a database instance to make a report for via the Instance drop-down list.
3. Specify a reporting time frame using the From and To drop-down lists.
4. Click Refresh to refresh the operations list.
5. To change the method of displaying a report, click the corresponding link:
• Table — display report in the form of a table.
• Graph — display report in the form of an interactive chart.
6. You can also export a report to a PDF or CSV file. Select file format in the Format drop-down list and click
Export.

11.4.2 Creating Custom Reports with the Report Generator


The Report Generator (Report Gen) functionality creates and exports DataSunrise reports to external files (CSV or
PDF).
To create reports, navigate to the Reporting → Report Gen subsection. Here you can see a list of existing Report Generator tasks. To create a new Report Generator task, do the following:
1. Click New to create a new task and give it a name.
2. Select report type in the Report Type drop-down list:
• Audit: report on events captured by the Data Audit functionality
• Security: report on events blocked by the Data Security functionality
• Masking: report on database columns that were masked by the Data Masking functionality
• Session Report: report on unsuccessful database authentication attempts
• Direct Sessions Report: report on connections (sessions) established directly between a target database and client applications, bypassing DataSunrise proxies. Note that sessions originating from the DataSunrise host can't be displayed, so don't locate your client applications on the same host as DataSunrise
• Operation Errors: report on SQL errors
• System Events: report on system events
• Instances Status Report: report on proxy/Instance state.
3. Select the DataSunrise server to generate the report on in the Generate Report on Server drop-down list. This is useful when DataSunrise is deployed in High-Availability (HA) mode and you need to choose which server generates the report.
4. Select the output file format (CSV or PDF).
5. Specify Report Details:
Interface element: Description
Grouping Period field: Place queries captured within the specified period of time into the same string of a report
Data Filter field: Specify data to include in the report.

Note: use filter parameters from the Filter column of the Columns in Report subsection to specify the data to be included in the report (see Data Filter Values on page 262 for the full list of Filter values)

Instance and Object Group tab: Select a database instance to create a report for in the DB Instance drop-down list, or report on actions performed on the objects from the selected Object Group
Requests per Grouping Period drop-down list: If necessary, filter the data to include in the report by the total number of user queries per grouping period
Total Number of Returned Rows drop-down list: If necessary, filter the data to include in the report by the total number of returned rows
Query Types tab: If necessary, select query types to include in the report
Rules to Report on tab: If necessary, select existing Rules to report on. This enables you to create Report Generator reports on events captured by specific Rules
Include Operations with Error check box: Include failed operations in the report (operations with "error" status in the Transactional Trails)

6. Select columns to include in a report in the Columns in Report subsection. This list includes column names
and the corresponding parameters in the Filter columns. You can use these parameters to configure the Data
Filter. Refer to Data Filter Values on page 262.
7. In the Period drop-down list, specify a reporting time frame.
8. Specify a regularity of generating reports in the Frequency of report generation subsection:
Interface element: Description
Start From drop-down list: Initial date and time of the report generation period
Drop-down list: Frequency of report generation (once, hourly, daily, weekly, monthly).
9. Configure the additional settings in the Export Options:
Interface element: Description
Send to Subscribers drop-down list: Send the report file to a Subscriber (Subscriber Settings on page 212)
Query Length Limit field: Specify a query length limit
External Command text field: Send specified parameters to an external application.

Important: since it is difficult to predict what name a report will get, as it depends on the exact date and time the report is generated, you can use <report> to refer to the report currently being generated. DataSunrise replaces <report> with the actual report name automatically.
• You can copy a newly-created report via SCP on Linux. Use the following command: scp -i <report> user@remotehost:/some/remote/target_directory
• To copy a newly-created report to another folder, use the following command: cmd /c copy <report> "/some/local/target_directory"
• To pass the report location to an external application, use the following command: someprogramm.exe parameter_name=<report>

Write to Syslog check box: Export report data via Syslog

10. Click Save to save the task. You will be redirected to the task list.
11. Click Edit to view the task's details.
12. To generate a report, click Start Now. All reports are displayed in the Reports tab. To view a report, click Open.

11.4.2.1 Data Filter Values


You can use the listed parameters to specify data to include in a Report Generator report (the Data Filter text field described in the previous subsection). You can find all the parameters listed here in the Columns in Report subsection of the Report Generator page.
To select data to include in a report, check the required parameter in the Columns in Report subsection, then insert the required Filter string from the table below into the Data Filter text field and specify the Filter's value. Note that the filtering is based on the WHERE clause. For example, the following string can be used to report on all queries directed from localhost (127.0.0.1) to the target database:

connections.client_host='127.0.0.1'

Each entry below lists the filter, its description, and the allowed value format:

connections.instance_id: Unique identifier of an instance. Value: 1-9999999999
operations.sql_query: SQL query code. Value: string of unlimited length
connections.db_type: Database type. Value: 1 - MS SQL, 2 - Oracle, etc.
connections.proxy_id: Proxy unique identifier. Value: 1-9999999999
sessions.user_name: Database user name used for the connection. Value: string, 1024 chars max
connections.sniffer_id: Identifier of the DataSunrise sniffer used to receive the connection. Value: 1-9999999999
sessions.db_name: Name of the database connected to. Value: string, 1024 chars max
connections.server_port: Target DB port number. Value: 1-65535
sessions.service_name: SID or database service name granted access. Value: string, 1024 chars max
connections.client_port: Client application port number. Value: 1-65535
sessions.os_user: Operating system user name of the client app. Value: string, 1024 chars max
sessions.host_name: Client app host name. Value: string, 1024 chars max
connections.client_host: Client app IP address. Value: string, 1024 chars max
connections.server_host: Target DB host address. Value: string, 1024 chars max
tbl_objects.tbl_name: Name of the table mentioned in client application queries. Value: string, 1024 chars max
sessions.application: Client application name. Value: string, 1024 chars max
tbl_objects.sch_name: Name of the database schema used in client application queries. Value: string, 1024 chars max
operations.type: Query type. Value: -3 - Multi Statement, -2 - Undefined, ... 272 - Other
tbl_objects.db_name: Name of the database used in client application queries. Value: string, 1024 chars max
app_sessions.login: Client application's user name. Value: string, 1024 chars max
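
Because the Data Filter value is applied as a WHERE clause, filters from the table above can be combined with standard SQL operators. A hypothetical example (the user name and port below are placeholders) that limits a report to queries issued by one database user against one target port:

sessions.user_name='sales_app' AND connections.server_port=5432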

11.4.3 VA Scanner
This feature enables you to view all known vulnerabilities for the databases included in your DataSunrise configuration, according to the CVE database and the Security Guidelines by CIS and DISA. The feature also enables you to view recommendations on fixing these vulnerabilities. Note that you need an Internet connection to be able to update your vulnerability database.
CVE Guidelines are available for the following databases:
• Apache Hive
• Apache Cassandra
• Apache Impala
• Elasticsearch
• Greenplum
• IBM Informix Dynamic Server
• IBM Netezza
• MongoDB
• MS SQL Server
• MySQL
• MariaDB
• Oracle Database*
• PostgreSQL
• SAP Hana
• Sybase
• Teradata Express
• Vertica

Note: *Oracle Database supported versions:

• 1.0.2.2, 1.0.2.2 R1
• 3.0.1, 3.2, 3.2.0.00.27
• 4.0, 4.0.8, 4.0.8 R2, 4.1, 4.2.0, 4.2.1, 4.2.3
• 5.1
• 7, 7.0.2, 7.0.64, 7.1.3, 7.1.4, 7.1.5, 7.3, 7.3.3, 7.3.4
• 8, 8.0, 8.0.1, 8.0.2, 8.0.3, 8.0.4, 8.0.5, 8.0.5.1, 8.0.6, 8.0.6.3, 8.1, 8.1.5, 8.1.6, 8.1.7, 8.1.7 R1, 8.1.7.0.0, 8.1.7.4, 8.1.7.4
R3
• 9, 9.0, 9.0.1, 9.0.1.4, 9.0.1.5, 9.0.2.4, 9.0.2.4 R2, 9.0.4, 9.2, 9.2.0.1, 9.2.0.2, 9.2.0.3, 9.2.0.4, 9.2.0.5, 9.2.0.6, 9.2.0.6
R2, 9.2.0.7, 9.2.0.7 R2, 9.2.0.8, 9.2.0.8 R2, 9.2.0.8DV, 9.2.0.8DV R2, 9.2.1, 9.2.2, 9i
• 10, 10.1, 10.1.0.2, 10.1.0.3, 10.1.0.3, 10.1.0.3.1, 10.1.0.4, 10.1.0.4 R1, 10.1.0.4.2, 10.1.0.4.2 R2, 10.1.0.5, 10.1.0.5
R1, 10.1.8.3, 10.2, 10.2.0.0, 10.2.0.1, 10.2.0.1 R2, 10.2.0.2, 10.2.0.2 R2, 10.2.0.3, 10.2.0.3 R2, 10.2.0.4, 10.2.0.4.2,
10.2.0.5, 10.2.1, 10.2.1 R2, 10.2.2, 10.2.3, 10g
• 11, 11.1.0.6, 11.1.0.6.0, 11.1.0.7, 11.1.0.7.0, 11.1.0.7.3, 11.2.0.1, 11.2.0.1.0, 11.2.0.2, 11.2.0.3, 11.2.0.4, 11g, 11i
• 12.1.0.1, 12.1.0.2, 12.2.0.1, 12c
• 18, 18.1, 18.1.0.0, 18.2, 18c
• 19c
DISA Guidelines are available for the following databases:
• IBM DB2 version 10.5
• MongoDB 3.2, 3.4
• MS SQL Server 2005, 2012, 2014, 2016 Database
• MS SQL Server 2005, 2012, 2014, 2016 Instance
• Oracle Database 9i, 10g, 11g, 11.2g, 12c
CIS Guidelines are available for the following databases:
• IBM DB2 version 8, 9 & 9.5, 10
• MongoDB 3.2, 3.4, 3.6
• MS SQL Server 2008 R2
• MS SQL Server 2012, 2014, 2016, 2017, 2019
• Oracle Database 11.2g, 12c
• PostgreSQL 9.5, 9.6, 10, 11, 12

Important: your database instance's credentials should be saved in DataSunrise for VA Scanner to work.

First, you should create a dedicated periodic task that checks the availability of the vulnerabilities database and downloads the required files. If a new version of the database file is available and an Internet connection is present, DataSunrise downloads it from update.datasunrise.com and saves it in the AF_HOME folder. Then the periodic task browses DataSunrise's configuration and forms a list of vulnerabilities for each of the databases included in the periodic task. This information is saved in the task's results.

Important: to enable DataSunrise to download the vulnerabilities database, allow https://update.datasunrise.com/ on port 443 in your firewall's settings. Note that the connection is unidirectional.

To utilize the VA Scanner feature, do the following:


1. Navigate to VA Scanner → Scan Tasks and create a new task. Select the Server to start the task on. Then select the database instances to make a list of vulnerabilities for in the Choose Instances subsection. If necessary, specify Subscribers to notify and the startup frequency.
2. Navigate to VA Scanner → Dashboard for a report on vulnerabilities. This page displays a list of available
databases, the number of known vulnerabilities and Security Recommendations. Use the charts for details. See the
Results subsection for vulnerability description. You can also generate a report on the vulnerabilities in PDF or
CSV format.

11.4.4 VA Scanner grants


DataSunrise's VA Scanner supports the following databases:


DBMS DB version
PostgreSQL 9.5, 9.6, 10, 11, 12
MS SQL Server 2005, 2008 R2, 2012, 2014, 2016
Oracle Database 9, 10, 11, 11.2, 12
IBM DB2 8, 9, 10, 10.5
MongoDB 3.2, 3.4, 3.6

To be able to use VA Scanner, you need to provide your user with the grants listed below.
• Postgres 9.5, 9.6

The user should be a superuser to examine the following recommendations: 3.1.4, 3.1.5, 3.2 (the pg_read_all_settings role doesn't exist in 9.5 and 9.6)

• Postgres 10, 11, 12

GRANT pg_read_all_settings TO <User_name>;

• MS SQL Server 2005

EXEC('USE [master] GRANT ALTER TRACE TO [' + @LOGIN + ']')


IF NOT EXISTS(SELECT * FROM [master].[dbo].[sysdatabases] WHERE name = 'rdsadmin') -- RDS databases do not support these grants
BEGIN
EXEC('USE [master] GRANT EXECUTE ON OBJECT::[sys].[xp_loginconfig] TO [' + @USER + ']')
EXEC('USE [msdb] GRANT EXECUTE ON OBJECT::[msdb].[dbo].[sp_enum_proxy_for_subsystem] TO [' + @USER +
']')
EXEC('USE [msdb] GRANT SELECT ON OBJECT::[msdb].[dbo].[sysproxysubsystem] TO [' + @USER + ']')
EXEC('USE [msdb] GRANT SELECT ON OBJECT::[msdb].[dbo].[syssubsystems] TO [' + @USER + ']')
EXEC('USE [msdb] GRANT SELECT ON OBJECT::[msdb].[dbo].[sysproxies] TO [' + @USER + ']')
END
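
The MS SQL Server snippets in this subsection reference the @LOGIN and @USER variables but do not declare them. A minimal sketch of how you might declare them in the same batch before running the grants for your SQL Server version (the account name below is a hypothetical placeholder):

DECLARE @LOGIN SYSNAME
DECLARE @USER SYSNAME
SET @LOGIN = 'datasunrise_va'   -- server-level login used by DataSunrise
SET @USER  = 'datasunrise_va'   -- database user mapped to that login
-- Run the EXEC('...') grant statements for your version in the same batch as these declarations.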

• MS SQL Server 2008 R2, 2017, 2019

IF NOT EXISTS(SELECT * FROM [master].[dbo].[sysdatabases] WHERE name = 'rdsadmin') -- RDS databases do not support these grants
BEGIN
EXEC('USE [msdb] GRANT SELECT ON OBJECT::[dbo].[sysproxies] TO [' + @USER + ']')
EXEC('USE [msdb] GRANT SELECT ON OBJECT::[dbo].[sysproxylogin] TO [' + @USER + ']')
EXEC('USE [master] GRANT EXECUTE ON OBJECT::[sys].[xp_loginconfig] TO [' + @USER + ']')
END

• MS SQL Server 2012, 2014

EXEC('USE [master] GRANT ALTER TRACE TO [' + @LOGIN + ']')


IF NOT EXISTS(SELECT * FROM [master].[dbo].[sysdatabases] WHERE name = 'rdsadmin') -- RDS databases do not support these grants
BEGIN
EXEC('USE [master] GRANT ALTER SETTINGS TO [' + @LOGIN + ']')
EXEC('USE [master] GRANT EXECUTE ON OBJECT::[sys].[xp_loginconfig] TO [' + @USER + ']')
EXEC('USE [msdb] GRANT SELECT ON OBJECT::[dbo].[sysproxies] TO [' + @USER + ']')
EXEC('USE [msdb] GRANT SELECT ON OBJECT::[dbo].[sysproxylogin] TO [' + @USER + ']')
END

• MS SQL Server 2016

EXEC('USE [master] GRANT ALTER TRACE TO [' + @LOGIN + ']')


EXEC('USE [master] GRANT VIEW SERVER STATE TO [' + @LOGIN + ']')
IF NOT EXISTS(SELECT * FROM [master].[dbo].[sysdatabases] WHERE name = 'rdsadmin') -- RDS databases do not support these grants
BEGIN
EXEC('USE [master] GRANT ALTER SETTINGS TO [' + @LOGIN + ']')
EXEC('USE [master] GRANT EXECUTE ON OBJECT::[sys].[xp_loginconfig] TO [' + @USER + ']')
EXEC('USE [msdb] GRANT SELECT ON OBJECT::[dbo].[sysproxies] TO [' + @USER + ']')
EXEC('USE [msdb] GRANT SELECT ON OBJECT::[dbo].[sysproxylogin] TO [' + @USER + ']')
END

• Oracle 9, 10, 11

GRANT SELECT ON "SYS"."DBA_DB_LINKS" TO <User_name>;


GRANT SELECT ON "SYS"."DBA_SYS_PRIVS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_ROLE_PRIVS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_TAB_PRIVS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_PROFILES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_ROLES" TO <User_name>;
GRANT SELECT ON "SYS"."V_$LOG" TO <User_name>;
GRANT SELECT ON "SYS"."V_$PARAMETER" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_TS_QUOTAS" TO <User_name>;
GRANT SELECT ON "SYS"."V_$PWFILE_USERS" TO <User_name>;
GRANT SELECT ON "SYS"."STMT_AUDIT_OPTION_MAP" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_PRIV_AUDIT_OPTS" TO <User_name>;

• Oracle 11.2

GRANT SELECT ON "SYS"."DBA_DB_LINKS" TO <User_name>;


GRANT SELECT ON "SYS"."DBA_SYS_PRIVS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_ROLE_PRIVS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_TAB_PRIVS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_COL_PRIVS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_PROXIES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_USERS_WITH_DEFPWD" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_ROLES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_PROFILES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_LIBRARIES" TO <User_name>;
GRANT SELECT ON "SYS"."V_$LOG" TO <User_name>;
GRANT SELECT ON "SYS"."V_$PARAMETER" TO <User_name>;
GRANT SELECT ON "SYS"."OBJAUTH$" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_STMT_AUDIT_OPTS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_OBJ_AUDIT_OPTS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_REGISTRY_HISTORY" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_USERS_WITH_DEFPWD" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_REPCATLOG" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_TABLESPACES" TO <User_name>;

• Oracle 12

GRANT SELECT ON "SYS"."DBA_DB_LINKS" TO <User_name>;


GRANT SELECT ON "SYS"."DBA_SYS_PRIVS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_ROLE_PRIVS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_TAB_PRIVS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_COL_PRIVS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_PROXIES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_USERS" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_USERS_WITH_DEFPWD" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_ROLES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_PROFILES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_LIBRARIES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_TABLESPACES" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_TABLES" TO <User_name>;
GRANT SELECT ON "SYS"."V_$LOG" TO <User_name>;
GRANT SELECT ON "SYS"."V_$PARAMETER" TO <User_name>;
GRANT SELECT ON "SYS"."OBJAUTH$" TO <User_name>;
GRANT SELECT ON "SYS"."OBJ$" TO <User_name>;
GRANT SELECT ON "SYS"."USER$" TO <User_name>;
GRANT SELECT ON "SYS"."DBA_STMT_AUDIT_OPTS" TO <User_name>;
GRANT SELECT ON "SYS"."CDB_OBJ_AUDIT_OPTS" TO <User_name>;
GRANT SELECT ON "SYS"."AUDIT_UNIFIED_ENABLED_POLICIES" TO <User_name>;
GRANT SELECT ON "SYS"."AUDIT_UNIFIED_POLICIES" TO <User_name>;

• DB2 8

GRANT SELECT ON TABLE SYSIBM.SYSTABAUTH TO USER <User_name>;


GRANT SELECT ON TABLE SYSIBM.SYSTBSPACEAUTH TO USER <User_name>;
GRANT SELECT ON TABLE SYSCAT.TABLES TO USER <User_name>;

• DB2 9

GRANT SELECT ON TABLE SYSIBM.SYSTABAUTH TO USER <User_name>;


GRANT SELECT ON TABLE SYSIBM.SYSTBSPACEAUTH TO USER <User_name>;
GRANT SELECT ON TABLE SYSCAT.TABLES TO USER <User_name>;
GRANT SELECT ON TABLE SYSCAT.ROLEAUTH TO USER <User_name>;

• DB2 10

GRANT SELECT ON TABLE SYSIBM.SYSTABAUTH TO USER <User_name>;


GRANT SELECT ON TABLE SYSIBM.SYSTBSPACEAUTH TO USER <User_name>;
GRANT SELECT ON TABLE SYSCAT.TABLES TO USER <User_name>;
GRANT SELECT ON TABLE SYSCAT.ROLEAUTH TO USER <User_name>;
GRANT SELECT ON TABLE SYSCAT.AUDITPOLICIES TO USER <User_name>;
GRANT SELECT ON TABLE SYSIBMADM.DBMCFG TO USER <User_name>;
• DB2 10.5

GRANT SELECT ON TABLE SYSIBM.SYSTABAUTH TO USER <User_name>;


GRANT SELECT ON TABLE SYSIBM.SYSTBSPACEAUTH TO USER <User_name>;
GRANT SELECT ON TABLE SYSCAT.TABLES TO USER <User_name>;
GRANT SELECT ON TABLE SYSCAT.ROLEAUTH TO USER <User_name>;
GRANT SELECT ON TABLE SYSCAT.AUDITPOLICIES TO USER <User_name>;
GRANT SELECT ON TABLE SYSIBMADM.DBMCFG TO USER <User_name>;
GRANT SELECT ON TABLE SYSCAT.AUDITUSE TO USER <User_name>;
GRANT SELECT ON TABLE SYSIBMADM.PRIVILEGES TO USER <User_name>;
GRANT EXECUTE ON FUNCTION SYSPROC.AUTH_LIST_ROLES_FOR_AUTHID TO USER <User_name>;
GRANT EXECUTE ON FUNCTION SYSPROC.AUTH_LIST_AUTHORITIES_FOR_AUTHID TO USER <User_name>;

• MongoDB 3.2, 3.4, 3.6

use admin
db.createRole(
{
role: "getParamRole",
privileges: [ { resource: { cluster: true}, actions: [ "getParameter" ] } ],
roles: []
}
)
db.createUser(
{
user: "<User_name>",
pwd: "<Password>",
roles: [ { role: "readAnyDatabase", db: "admin" }, { role: "getParamRole", db: "admin" } ]
}
)

OR

use admin
db.createRole(
{
role: "dataSunriseRole",
privileges: [ { resource: { cluster: true}, actions: [ "getParameter" ] } ],
roles: [ { role: "readAnyDatabase", db: "admin" } ]
}
)
db.createUser(
{
user: "<User_name>",
pwd: "<Password>",
roles: [ { role: "dataSunriseRole", db: "admin" } ]
}
)
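
Optionally, you can verify that the new account authenticates before running the VA Scanner task. A sketch using the mongo shell (mongosh accepts the same flags):

mongo --host <Host> -u "<User_name>" -p "<Password>" --authenticationDatabase "admin"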

11.5 Compliance Manager


11.5.1 Compliance Manager Overview
The Compliance Manager feature enables you to quickly establish role-based protection for your database. This
is achieved by periodically searching for sensitive data across the target database and creating Data Security and
Masking rules to protect the sensitive columns.
Execution of a Compliance Manager task includes the following steps:
1. Performing an automatic (periodic) search for potentially sensitive data across the target database and saving information about database columns that contain this data in a dedicated Object Group (for more information about the Sensitive Data Discovery feature, refer to Sensitive Data Discovery on page 243).
2. Creating Data Audit rules to audit access to the columns that contain sensitive data, and creating Data Security and Data Masking rules to protect these columns.
3. Restricting access to the sensitive columns based on Roles (existing User Groups are assigned to the Compliance Roles for role-based access).
The table below lists the Roles included in DataSunrise, their privileges, and the types of Rules that the Compliance Manager creates for these Roles:

Role | Privileges | Audit Rules | Security Rules | Masking Rules
Super Admin | All privileges | + | - | -
Admin | Only admin queries and SELECT queries | + | + | -
Operator | Only INSERT, UPDATE, DELETE, SELECT queries | + | + | -
Tester | SELECT queries only (sensitive data is masked) | + | + | +

Note: all users that do not belong to any group will be blocked by the database firewall.

11.5.2 Configuring a Compliance Manager Task


To configure a Compliance Manager task, perform the following:
1. Specify required information in the General Criteria subsection:
Parameter: Description
Logical name field: Task's logical name. This name is used as a prefix in the names of all objects associated with the task
Database Instance drop-down list: Target database instance
DS Instance to be Executed on drop-down list: DataSunrise server to execute the task on

2. Input required information into the Data Discovery Parameters:



Parameter: Description
Search in Database drop-down list: Database instance to search for sensitive data across
Schema drop-down list: Schema to search for sensitive data across
Exclude Search in place, Skip Query: Exclude specified objects from the search
Analyzed Row Count field: Number of table rows to SELECT
Max Percentage of NULL field: SELECT the next "Analyzed Row Count" number of rows if the number of NULL-containing rows exceeds the "Max Percentage of NULL" value
Min Percentage of NULL field: Minimum percentage of rows in a column that match the search filter conditions to consider the column as containing the required sensitive data

3. Specify security standards or Information types (search filters) to use for searching the sensitive data in the Search Criteria subsection
4. Set the frequency of the sensitive data search. Click Next Step
5. At the next tab, select the database columns to mask and the masking algorithms to use. DataSunrise creates Dynamic Masking rules for the specified columns and uses the masking algorithms specified at this step. Click Next Step.
6. At the next tab, assign existing user groups to Compliance Roles. Click Next Step.
7. At the next step, specify the Report type and the frequency of reporting. DataSunrise creates a Report Gen task for the columns specified at the second step according to these settings. Click Finish Master.
As a result, DataSunrise creates a dedicated Object group which is used to store information about columns with
sensitive data found by Data Discovery. Also DataSunrise creates Data Audit, Data Security and Data Masking rules
to protect the columns with sensitive data. Report Gen tasks for the protected columns are created as well.

11.6 Integrating Elasticsearch and Kibana with DataSunrise

DataSunrise enables you to use the data analytics capabilities provided by Kibana and Elasticsearch. You can transfer audit data collected by DataSunrise to an Elasticsearch database and visualize it using Kibana in the Web Console.
Note that you need to configure Elasticsearch and connect it to Kibana before integrating them with DataSunrise. Elasticsearch and Kibana should be running. To integrate DataSunrise with Kibana and Elasticsearch, do the following:
1. If you haven't configured Elasticsearch and Kibana before, navigate to Audit → Analytics and perform the actions listed on the page:

Action: Description
Access to Elasticsearch index: Configure access to an Elasticsearch database to transfer your audit data to
Access to Kibana: Configure access to Kibana integrated with your Elasticsearch database
Transfer Audit to Elasticsearch periodic task: Create a periodic task to move old audit data to your Elasticsearch database and pass new audit data to that database

2. Configure Elasticsearch:

Parameter Description
Authentication Method Method of authentication in your Elasticsearch database:
• AWS Regular
• IAM Role
• Regular: login/password
• Without Authentication

Protocol Either HTTP or HTTPS


Port Database port number
Region AWS Region. Available for AWS authentication methods
Hostname or IP IP address or host name of the server the database is installed on
Index Elasticsearch Index. You can use your own Index name
Test Connection Test connectivity between DataSunrise and Elasticsearch database
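
If the Test Connection check fails, it can help to verify basic reachability from the DataSunrise host outside the Web Console. A minimal sketch using curl (assuming the default Elasticsearch port 9200 and no authentication; add credentials if your Authentication Method requires them):

curl -X GET "http://<Hostname_or_IP>:9200/_cluster/health?pretty"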

3. Configure Kibana:

Parameter Description
Authentication Method Method of authentication in your Kibana:
• Active Directory
• Regular: login/password
• Without Authentication

Protocol Either HTTP or HTTPS

Note: if you select HTTPS, you need to disable the KibanaVerifySSL additional
parameter in System Settings → Additional Parameters (refer to Additional
Parameters on page 337)

Port Kibana port number


Hostname or IP IP address or host name of the server Kibana is installed on
Test Connection Test connectivity between DataSunrise and Kibana
Advanced Parameters
Set Index Pattern ID Check to enable the Index Pattern ID field and input the Index Pattern ID manually
Manually check box

4. Create a Transfer Audit to Elasticsearch periodic task:


Parameter: Description
Name: Task logical name
Task Type: Select Transfer Audit to Elasticsearch
Start on Server: DataSunrise Server to run the task on
Reporting Period: Time period to report on
Quantity of transferable events per iteration: The number of events fetched during one iteration of the task
Frequency: Startup frequency. Schedule the task or select Manual to start the task execution manually
Remove Results Older Than: Delete obsolete results by the specified date

5. Run the periodic task to transfer audit data to your Elasticsearch database and display it by Kibana
6. For details on audited events, navigate to Audit → Analytics:

7. Note that you can access ElasticSearch and Kibana settings at System Settings → Audit Storage page's tabs.
8. To see the events transferred to your Elasticsearch in Kibana itself, open Kibana's Web Console, navigate to Discover and select the index you provided while configuring Elasticsearch in DataSunrise:

12 Resource Manager
This feature enables you to manage the DataSunrise configuration according to the principles of the Infrastructure as Code concept.
Resource Manager enables you to do the following:
• Manage your DataSunrise infrastructure through declarative templates rather than scripts
• Deploy, manage, and monitor all the DataSunrise resources as a group rather than handling these resources
individually
• Redeploy DataSunrise throughout the development lifecycle and have confidence your resources are deployed in
a consistent state
• Define dependencies between resources so they're deployed in the correct order
• Apply tags to resources to logically organize all the resources in your configuration.
For example, Resource Manager can be used to deploy a new complex DataSunrise configuration using a pre-created template. Another example is exporting the configuration of an existing DataSunrise Instance to other Instances.
Basic definitions:
• Resource: an entity (DataSunrise object such as Rule, DB Instance, proxy, etc.) to be included in a Resource Group
• Resource Group: the result of Template deployment. In other words, it's a configuration created according to a
Template
• Template: Resource group definition (DataSunrise objects' description and corresponding parameters' values) in the form of a JavaScript Object Notation (JSON) file
• Parameter: Resource's parameter/value pair. Parameters are included in a Template or in a dedicated JSON file
• Changeset: changes made to a deployed Resource group.

12.1 Template Structure


Resource Manager's functionality is based on the usage of declarative Templates in JSON format. When you deploy a Template, Resource Manager converts the Template into REST API operations. A Template includes information on such entities (Resources) as Rules, Periodic tasks, database Instances, etc. required for deployment of these Resources. In other words, a Template contains information on the Resource Group to be deployed. DataSunrise uses Templates to build a configuration according to the Template's definitions. Each Template comprises the following sections:
• Description (optional)
• ExternalResources
• Mappings
• Parameters
• Resources

{
"DSTemplateVersion" : "2020-03-10",
"ExternalResources" : {},
"Description" : "Template",
"Mappings" : {},
"Parameters" : {},
"Resources" : {}
}

See detailed description of the aforementioned sections below.


The easiest way to create a template and learn its structure is to export the corresponding resources of an existing DataSunrise configuration. Note that each resource type has its own parameters and, accordingly, its own template. Some resources, such as Rules and Instances, may have very complicated templates. Having created a template, you can edit it, which is often needed when transferring resources from a test environment to a production environment. In such a case the deployed template will create all resources defined in it in the production environment.

12.1.1 ExternalResources Section


The ExternalResources section of a Template contains references to objects that already exist in the infrastructure
you will be deploying your Template in. This might be DataSunrise servers or database users for example. This grants
you flexibility if you need to use some objects located outside your current infrastructure but don't want to recreate
them. An ExternalResources section might look like the following:

"ExternalResources" : {
"DbUser_1" : {
"Properties" : {
"Instance" : {
"Ref" : "Instance_1"
},
"Login" : "postgres"
},
"Type" : "DbUser"
},
"DbUser_2" : {
"Properties" : {
"Instance" : {
"Ref" : "Instance_1"
},
"Login" : "test"
},
"Type" : "DbUser"
},
"Server_1" : {
"Properties" : {
"Name" : "local"
},
"Type" : "Server"
}
},

To mark a resource as "external", when Exporting a Resource Group (Resource Manager → Resource Groups
→ Export) in the Export DataSunrise object to code window, select the resource of interest and check the
corresponding External check box.

12.1.2 Mappings Section


The Mappings section of a Template contains various constants. For example:

"Mappings" : {
"LocalServerID": "1",
"QueryGroups": {
"PgAdminQueries": "-102"
}
},

Constants can be addressed from the Resources section in the following way:

"IncludedQueryGroup" : {"FindInMap": ["QueryGroups", "PgAdminQueries"]}



12.1.3 Parameters Section


The Parameters section of a Template contains certain parameters that you want to define. For example:

"Parameters" : {
"Password_Instance_1" : {
"Description" : "",
"Type" : "String"
},
"Password_Instance_3" : {
"Description" : "",
"Type" : "String"
},
},

Parameters can be addressed from the Resources section in the following way:

"Password" : {
"Ref" : "Password_Instance_1"
},

12.1.4 Resources Section


The "Resources" section of a Template contains detailed description of Resources to be imported. For example:

"Resources" : {
"Instance_1" : {
"Properties" : {
"AcceptOnlyTFAUsers" : "False",
"AdditionOption" : "",
"AsSYSDBA" : "False",
"ConnectType" : "SID",
"CustomConnectionString" : "",
"DatabaseName" : "postgres",
"DatabaseType" : "PostgreSQL",
"EnableAgent" : "False",
"FullyQualifiedDomainName" : "",
"InstanceName" : "postgres",
"KerberosRealm" : "",
"KerberosServiceName" : "postgres",
"LoadingTableReferences" : "False",
"Login" : "postgres",
"LoginType" : "Regular",
"MetadataRetrievalMethod" : "Usual",
"Password" : {
"Ref" : "Password_Instance_1"
},
"PasswordVaultType" : "LocalDB",
"QueryGroupFilter" : "{\"groups_id\":[]}",
"ServerName" : "",
"UseConnectionString" : "False",
"VerifyCA" : "False"
},
"Type" : "Instance"
},
"Interface_1" : {
"Properties" : {
"AdditionOption" : "",
"CryptoType" : "Usual",
"Instance" : {
"Ref" : "Instance_1"
},
"InterfaceHost" : "localhost",
"InterfacePort" : "5432",
"IpVersion" : "Auto",
12 Resource Manager | 278
"ProtocolType" : "Other",
"SslKeyGroup" : "0",
"VerifyCA" : "False"
},
"Type" : "Interface"
},

12.2 "Parameters" File (optional)


One of the components that may be used when creating a template is the "parameters" JSON file. Such a file should
include all parameter names and their values listed in the "parameters" section of the corresponding template.
It enables you to redefine your parameter values without changing the template itself. You can find existing
"parameters" files in the Resource Manager → Parameters subsection.
Here's an example of a "parameters" file:

{
"PostgresHost": "10.0.14.168",
"PostgresDatabasePort": "54100",
"PostgresLogin": "postgres",
"PostgresPassword": "1234"
}

It corresponds to the following Parameters section of the template:

"Parameters" : {
"PostgresHost": {
"Description" : "Database host",
"Type": "String"
},
"PostgresDatabasePort": {
"Description" : "Database port",
"Type": "Integer"
},
"PostgresLogin": {
"Description" : "Login used to access the Postgres database",
"Type": "String"
},
"PostgresPassword": {
"Description" : "Password used to access the Postgres database",
"Type": "String"
}
},

12.3 Working with Templates


12.3.1 Creating a Template
To create a Template, do the following:

1. First, you need to prepare the JSON for your template. You can do it either manually or by exporting the
configuration of your existing DataSunrise instance (Exporting DataSunrise Configuration into Template on page
279).
2. Navigate to Resource Manager → Templates
3. Click Create
4. Input a logical name of the Template. Paste the template's JSON into the field.
5. Click Save to save the template.
You can also specify the Parameters file you want to use together with the template.

12.3.2 Exporting DataSunrise Configuration into Template


To convert your DataSunrise configuration into a template, do the following:

1. Navigate to Resource Manager → Resource Groups


2. Click Export
3. Input a template logical name, select template file type (JSON)
4. Check Resources to include in the template
5. Click Export
You can find the template you've exported in Resource Manager → Templates.

12.3.3 Deploying a Template


Let's assume that you have already created the required Template. To convert your template into a Resource Group, do the following:

1. Create the required Template (Creating a Template on page 278)


2. Navigate to Resource Manager → Templates
3. Open your template
4. If required, in the Parameters subsection, select the required parameters file ("Parameters" File (optional) on
page 278)
5. Click Deploy
As a result, all Resources defined in your Template will be created in your DataSunrise instance.

12.4 Resources Description


12.4.1 Resource Types
This subsection provides you with a list of Resources (DataSunrise objects) that can be exported using Resource
Manager.

Resource Type Description


Instance Database Instance (Creating a Target Database Profile on page 58)
Interface Database Instance interface (Creating a Target Database Profile on page 58)
Proxy Database Instance proxy (Creating a Target Database Profile on page 58)
Server DataSunrise server (Servers on page 398)
LdapServer LDAP server (LDAP on page 396)
SubscriptionServer Mail server (Subscriber Settings on page 212)
Sniffer Database Instance sniffer (Creating a Target Database Profile on page 58)
SsoService SSO Service (Single Sign-On in DataSunrise on page 46)
QueryBased Query-based Rule
Learning Learning Rule
DDL DDL-based Rule
Masking Masking Rule
SqlInjection SQL Injection (SQL Injection Filter on page 119)
ObjectBased Object-based
ErrorBased Error-based
DataModelLearning Data Model Learning Rule (Database Traffic Analysis on page 405)
SslKeyGroup SSL Key group (SSL Key Groups on page 106)
QueryGroup Query group (Query Groups on page 207)
ObjectGroup Database Object group (Object Groups on page 203)
CefGroup CEF group (Syslog) (Syslog Settings (CEF Groups) on page 222)
DbUsersGroup Database User Group (Database Users on page 103)
LexiconGroup Lexicon (Discovering Sensitive Data Using Lexicon on page 251)
DataDiscoveryGroup Information type (Sensitive Data Discovery on page 243)
HostGroup Group of Hosts (IP Addresses on page 209)
User DataSunrise user (DataSunrise User Settings on page 389)
DbUser Database user (Database Users on page 103)
Subscriber Subscriber (Subscriber Settings on page 212)
LicenseKey License key
DataModel Table Relations Data model (Table Relations on page 400)
Schedule Schedule (Schedules on page 219)
SecurityStandard Data Discovery Security Standard (Sensitive Data Discovery on page 243)
Application Client application (Client Applications on page 211)
Host Host
LuaScript Lua Script (Lua Script Parameters on page 299)
DSARConfig DSAR configuration (DSAR Config Parameters on page 299)
UserAccessRole User Access Role (User Roles on page 390)
QueriesMap Queries Map (Queries Map on page 398)
DataDiscoveryFilter Data Discovery Information type Attributes (Sensitive Data Discovery on page
243)
BackupDictionaryTask Backup Dictionary Periodic task (Backup Dictionary Task on page 223)
CleanAuditTask Clean Audit Periodic task (Clean Audit Task on page 223)
HealthCheckTask Health check Periodic task (Health Check on page 224)
StaticMaskingTask Static masking Periodic task
UpdateMetadataTask Update metadata Periodic task (Update Metadata on page 224)
AwsRemoveUnusedServersTask AWS Remove unused servers Periodic task
UserBehaviorTrainingTask User behavior Periodic task (Periodic User Behavior on page 225)
QueriesHistoryLearningTask Query History Learning Periodic task (Database Query History Analysis on page
400)
VulnerabilityAssessmentTask Vulnerability assessment Periodic task (VA Scanner on page 263)
DataDiscoveryTask Data Discovery Periodic task
DataDiscoveryReportTask Data Discovery Periodic task (Periodic Data Discovery on page 248)
OperationsReportTask Operations Report task (Creating Custom Reports with the Report Generator on
page 260)
SessionReportTask Session Report task (Creating Custom Reports with the Report Generator on page
260)
DirectSessionsReportTask Direct Sessions Report task (Creating Custom Reports with the Report Generator
on page 260)
OperationsErrorReportTask Operation Errors Report task (Creating Custom Reports with the Report Generator
on page 260)
SystemEventsReportTask System Events Report task (Creating Custom Reports with the Report Generator on
page 260)
InstancesStatusReportTask Instances Status Report task (Creating Custom Reports with the Report Generator
on page 260)
Settings System Settings (System Settings on page 328)
AuditRule Audit Rule (Creating a Data Audit Rule on page 122)
SecurityRule Data Security Rule (Creating a Data Security Rule on page 162)
LearningRule Learning Rule (Learning Mode Overview on page 197)
MaskingRule Dynamic Masking Rule (Creating a Dynamic Data Masking Rule on page 165)
Encryption Encryption (Encryptions on page 108)
ExternalDispatcherRule External Dispatcher Rule
RuleFptKey Masking Key (Generating a Private Key Needed for Data Masking on page 26)

12.4.2 Instance Parameters


This subsection includes a list of Instance parameters.

Parameter name Description


Instance Database Instance
DatabaseType Database Type
Login Database login
DatabaseName Database name
ConnectType Oracle-specific connection parameter: SID, Service name
AsSYSDBA Oracle-specific connection parameter: connect as SYSDBA
Password Database password
SslKeyGroup SSL Key group name
CustomConnectionString Custom connection string
UseConnectionString Use custom connection string
KerberosServiceName Kerberos service name
AdditionOption For Hive database
VerifyCA Verify CA
PasswordVaultType Place to store passwords at: request every time, store in DataSunrise, store in
CyberArk
LoginType Authentication method: without authentication, regular login/password, Active
Directory, IAM Role (AWS), AWS Regular (AWS)

Note: if IAM Role authentication method is required to be selected, enable the


AWSSDKLoggingEnable additional parameter and restart DataSunrise. This will
enable you to get detailed information about AWS errors

KerberosRealm Kerberos Realm


FullyQualifiedDomainName FQDN
ServerName Server name
AcceptOnlyTFAUsers Accept only Two-factor authentication users
MetadataRetrievalMethod Metadata retrieval method
QueryGroupFilter Query groups that don't request 2FA
EnableAgent Enable database agent
LoadingTableReferences Load table Relations
AwsRegion AWS Region
DefaultAppDataModel
InstanceName Name of the database instance

12.4.3 Interface Parameters


This subsection includes a list of Interface parameters.

Parameter name Description


Instance Database instance
InterfaceHost Interface IP address or range of IP addresses
InterfacePort Interface port number
IpVersion IP version (Auto, IPv4, IPv6)
CryptoType See protocolType below
SslKeyGroup SSL Key Group to use for securing the connection
VerifyCA Verify Certification Authority:
• Don't verify: 0
• Verify CA only: 1
• Verify CA and Identity: 2

ProtocolType • Hive:
• Regular: 0
• HTTP: 1
• S3
• HTTP: protocolType: 3, cryptoType: 0
• HTTPS: protocolType: 3, cryptoType: 1
• HTTP Reverse Proxy: protocolType: 1, cryptoType: 0
• HTTPS Reverse Proxy: protocolType: 1, cryptoType: 1
• Snowflake
• HTTP: protocolType: 3, cryptoType: 0
• HTTPS: protocolType: 3, cryptoType: 1
• Aurora MySQL, MySQL, MariaDb:
• C/S Protocol: 0
• X-Protocol: 2

AdditionOption
UpdateMetadata

12.4.4 Proxy Parameters


This subsection includes a list of Proxy parameters.

Parameter name Description


Interface Network interface to use for the proxy
ProxyHost IP address or host name of the proxy
ProxyPort Port number of the proxy
Server DataSunrise server to set up the proxy on
IsEnable Is the proxy enabled (TRUE) or not (FALSE)
SslKeyGroup SSL Key Group to use for securing the connection
VerifyCA Verify Certification Authority:
• Don't verify: 0
• Verify CA only: 1
• Verify CA and Identity: 2

AcceptSslConnectionsOnly Accept SSL connections only


EnableSNI Use Server Name Indication (SNI) extension in TLS
ForceUpdate Force update
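
For reference, a Proxy resource fragment assembled from the parameters above might look like the sketch below. All values (host, port, references) are hypothetical; export an existing Resource Group to see the exact format DataSunrise produces:

"Proxy_1" : {
  "Properties" : {
    "Interface" : {
      "Ref" : "Interface_1"
    },
    "ProxyHost" : "0.0.0.0",
    "ProxyPort" : "54321",
    "Server" : {
      "Ref" : "Server_1"
    },
    "IsEnable" : "True"
  },
  "Type" : "Proxy"
},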

12.4.5 Server Parameters


This subsection includes a list of Server parameters.

Parameter name Description


Name Server name
Host Server IP address
BackendPort Backend port number
CorePort Core port number
BackendHttps Backend HTTPS
CoreHttps Core HTTPS
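
As an illustration, a Server resource using these parameters might look like the following sketch (the name, host, and port values are hypothetical examples, not defaults):

"Server_2" : {
  "Properties" : {
    "Name" : "ds-node-2",
    "Host" : "10.0.0.12",
    "BackendPort" : "11000",
    "CorePort" : "11001",
    "BackendHttps" : "True",
    "CoreHttps" : "True"
  },
  "Type" : "Server"
},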

12.4.6 LDAP Server Parameters


This subsection includes a list of LDAP Server parameters.

Parameter name Description


Logical Name field Server logical name
Host field Server IP address
Port field LDAP Server port
SSL check box Use SSL (TRUE), don't use SSL (FALSE)
Login Attribute field LDAP user name attribute
Domain field Domain
Base DN field Database Distinguished Name. The database to search across
Group Base DN field Distinguished Names of databases to search across. If empty, the Base DN
parameter's value is used
User Filter field User Filter used to search for user attributes
Group Attribute field Group Attribute
Login Type drop-down list Login Type
Login Custom Format field Login Custom Format. Supported patterns: <name>, <domain>, <basedn>. For OpenLDAP, for example: cn=<name>,<basedn>
Email Attribute field Email attribute
Login field LDAP user name
Save Password drop-down list Method of saving passwords:
• Save in DataSunrise
• Retrieve from CyberArk
• Retrieve from AWS Secrets Manager
• Retrieve from Azure Key Vault

Password field LDAP password


Default check box Enabled - this server is the default one, disabled - not default
Case sensitive check box Enabled - LDAP server is case-sensitive to user names

12.4.7 Subscription Server Parameters


This subsection includes a list of Subscription Server parameters.

Parameter name Description


Type Server type:
• SMTP: 0
• SNMP: 1
• EXTERNAL: 2
• Slack (Direct): 3
• Slack (token): 4
• Jira: 6
• ServiceNow: 7
• Zendesk: 8
• NetcatTCP: 9
• NetcatUDP: 10
• Syslog: 14
• AWS Cloud Watch: 15

Host Host
Port Port
AuthType SSL (SMTP only):
• Disabled: 0
• Enabled: 1
• STARTTLS Preferred: 2
• STARTTLS Required: 3

Login Login
SslVerify Verify server SSL certificate
UseForSecurity Use the server for sending security emails
PasswordVaultSafe CyberArk Safe to store password in
PasswordVaultFolder CyberArk Folder to store password in
PasswordVaultObject CyberArk Object to store password in
Name Server logical name
ProtocolEx Syslog protocol (RFC 3164, RFC 5424)
AwsSecretID AWS Secrets Manager ID
Password Password
MailSender Send-from email address

12.4.8 Sniffer Parameters


This subsection includes a list of Sniffer parameters.

Parameter name Description


Interface Network interface to use for the sniffer
Name Logical name
NetworkDevice Network device to use for setting up a sniffer
IsEnable Is the sniffer enabled (TRUE) or not (FALSE)
Server DataSunrise server to set up the sniffer on
SslKeyGroup SSL Key Group to use for securing the connection
ForceUpdate Force update

12.4.9 SSO Service Parameters


This subsection includes a list of SSO Service parameters.

Parameter name Description


Name SSO Service logical name
ServiceType Service type:
• OpenID
• SAML

Data Data
AuthorizationUrl Authorization Token Endpoint URL
TokenUrl Token endpoint URL
TokenKeysUrl Token Keys Endpoint URL
OidcClientId OIDC Client ID
OidcClientSecret OIDC Client Secret
Endpoint Endpoint

12.4.10 Query Based Parameters


This subsection includes a list of Query Based parameters.

Parameter name Description


IncludedQueryGroup Processed Query Group
ExcludedQueryGroup Skipped Query Group
IncludeObjectGroups Included Object Groups

12.4.11 Learning Parameters


This subsection includes a list of Learning Rules parameters.

Parameter name Description


SaveQueriesInGroup Save Statements in Group
SaveUsersInGroup Save Users in Group
SaveObjectsInGroup Save Objects in Group
SaveApplications TRUE: save Applications

12.4.12 DDL Parameters


This subsection includes a list of DDL Rules parameters.

Parameter name Description


QueryTypes Query types to include in a Rule
IncludeObjectGroups Object Groups to include in a Rule

12.4.13 Masking Parameters


This subsection includes a list of Masking Rules parameters.

Parameter name Description


MaskType Masking method:
• Unknown
• Default
• FixedNumber
• FixedString
• EmptyValue
• RandomValueLikeCurrent
• RandomFromInterval
• FunctionCall
• RawReplace
• EmailMasking
• EmailMaskingFull
• EmailMaskingUserName
• CreditCardMasking
• MaskLastChars
• ShowLastChars
• MaskFirstChars
• ShowFirstChars
• ShowFirstLastChars
• MaskFirstLastChars
• RegexpReplace
• FixedDateTime
• FixedDate
• FixedTime
• RandomDateTimeInterval
• RandomDateInterval
• RandomTimeInterval
• RandomDateTimeOffset
• RandomDateOffset
• RandomTimeOffset
• MaskUrl
• UnstructuredDataMasking
• LuaScript
• FPTokenizationEmail
• FPTokenizationSSN
• FPTokenizationCreditCard
• FPTokenizationNumber
• FPTokenizationString
• FPEncryptionEmail
• FPEncryptionSSN
• FPEncryptionCreditCard
• FPEncryptionNumber
• FPEncryptionString
• FilterTable
• HideTable

MaskValue Value to replace your sensitive data with


KeepRowCount TRUE: disable masking of columns included into DISTINCT, GROUP BY, HAVING,
ORDER BY, WHERE clauses

MaskSelectOnly TRUE: mask only SELECT queries. For example, the following query will not be
masked: UPDATE customers SET id = id RETURNING *
DataChangeAction • Undefined
• NoAction
• BlockAll
• PreventUpdateAndBlockOtherActions

DmlFilter An array of DML filter values.
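
As an illustration only, the masking-specific properties above might appear in a template fragment like the sketch below. The Type string and the remaining required rule properties (target instance, column filters, etc.) are assumptions here; export an existing Dynamic Masking Rule to see the complete structure:

"MaskingRule_1" : {
  "Properties" : {
    "MaskType" : "FixedString",
    "MaskValue" : "CONFIDENTIAL",
    "KeepRowCount" : "False",
    "MaskSelectOnly" : "True",
    "DataChangeAction" : "NoAction"
  },
  "Type" : "MaskingRule"
},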

12.4.14 Masking Key Parameters


This subsection includes a list of Masking Key parameters.

Parameter name Description


Name Key name
KeyValue Key value (512 characters)

12.4.15 SqlInjection Parameters


This subsection includes a list of SQL Injection Rules parameters.

Parameter name Description


WarningLevel Warning level
ErrorLevel Blocking Level
CommentPenalti Comment penalty
KeywordInCommentPenalti A Keyword in a Comment Penalty
DoubleQueryPenalti Double Query Penalty
OrPenalti OR Penalty
ConstExprPenalti Constant Expression Penalty
NullUnionPenalti Union Penalty
SuspiciousCastPenalti Suspicious Conversion: Blind Error Attack
SuspiciousFunction Suspicious Function Call
ConcatenationCHR Concatenation of Single Characters for Many Types of Attacks
SuspiciousCondition Suspicious Condition to Checks for Boolean Blind Attack

12.4.16 Object Based Parameters


This subsection includes a list of Object Based Rules parameters.

Parameter name Description


DmlInsertEnabled TRUE: process INSERTs
DmlUpdateEnabled TRUE: process UPDATEs
DmlDeleteEnabled TRUE: process DELETEs
DmlSelectEnabled TRUE: process SELECTs
FuncCallEnabled TRUE: process function calls
DmlFilter An array of DML filter values.
DmlExcludes An array of database objects excluded from processing
FuncFilter An array of functions included in processing
FuncExcludes An array of functions excluded from processing
ApplySelectForWhereAndJoin TRUE: process SELECTs in WHERE & JOIN clauses
RowLimit Trigger the Rule Only if the Number of Affected/Fetched Rows is not Less than
ApplySelectWithoutFrom TRUE: process SELECT without FROM
IncludeObjectGroups An array of object groups included in processing
ExcludeObjectGroups An array of object groups excluded from processing

12.4.17 Error Based Parameters


This subsection includes a list of Error Based Rules parameters (the Session Events filter).

Parameter name Description


CheckOperationErrors TRUE: enable "Audit Operation Errors"
CheckSessionErrors TRUE: enable "Audit Sessions"
OnlyAuthorizeErrors TRUE: enable "Audit Sessions, Not authenticated"
AlertEnable TRUE: enable "Audit sessions, Unsuccessful"
AlertSessionLimit "If number of such sessions is greater than" value
AlertSessionPeriod "If number of such sessions is greater than, per" value
AlertOperationLimit "If number of such operations is greater than" value
AlertOperationPeriod "If number of such operations is greater than, per" value
CheckSuccessSession TRUE: enable "Audit sessions, All Success"
AlertQueryLenEnable TRUE: enable "Query Length is More Than"
AlertQueryLenSize "Query Length is More Than" value
AlertQueryTimeEnable TRUE: enable "Query Execution Takes Longer Than"
AlertQueryTimeValue "Query Execution Takes Longer Than" value
AlertSelectAsteriskEnable TRUE: Enable "Query Includes "SELECT *" Expression"

12.4.18 Data Model Parameters


This subsection includes a list of Data Model parameters.

Parameter name Description


Name Data Model logical name
Instance Database instance
TableRelations
RelationshipType Relationship type:
• Undefined
• OneToOne
• OneToMany
• ManyToOne

RelationshipName Relationship name


DatabaseName DB name
RefDatabaseName Related database name
SchemaName Schema name
RefSchemaName Related schema name
TableName Table name
RefTableName Related table name
Columns
Column Column name
RefColumn Related column name

12.4.19 SSL Key Group Parameters


This subsection includes a list of SSL Key Group parameters.

Parameter name Description


RestartProxy
Name Logical name
Keys An array which comprises KeyID, Type, clientHost, Key
KeyID Key ID
Key Key body
Type Type:
• Proxy: 1
• Sniffer: 2
• Interface: 3

ClientHost

12.4.20 Query Group Parameters


This subsection includes a list of Query Group parameters.

Parameter name Description


Name Logical name
DatabaseType Database type the queries belong to

12.4.21 Object Group Parameters


This subsection includes a list of Object Group parameters.

Parameter name Description


Name Logical name
Instance Database Instance
DmlFilter Tables filter
FuncFilter Procedures and Functions filter
ForceUpdate
Tags An array which comprises TagId, TagKey, TagValue
TagId
TagKey
TagValue

12.4.22 CEF Group Parameters


This subsection includes a list of CEF Group parameters.

Parameter name Description


Name Logical name
IsEnable Is the CEF group enabled (TRUE) or disabled (FALSE)
Session Open A session has been opened
Session Close A session has been closed
Operation Open An operation (an SQL query or a prepared statement) has been opened
Operation Close An operation has been closed
Operation Exec Start Execution of operation has been started
Operation Exec Stop Execution of operation has been stopped
Operation Data SELECT query's results
Operation Masking A masking operation was executed
Operation Blocking A query blocking operation was executed
Operation Meta SELECT query result's metadata (column names, column types, etc)
Session Failed Session failed
Operation Failed Operation execution has failed
Operation Rule An operation-capturing Rule has been triggered
Session Rule A session-capturing rule has been triggered
Execution Rule An execution-capturing Rule has been triggered

12.4.23 Database Users Group Parameters


This subsection includes a list of DB Users Group parameters.

Parameter name Description


Name Logical name
DatabaseType Database type
Instance Database Instance
NewGroupItems An array of new Users or User Groups which comprises the following fields:
• id: entity ID
• name: entity name
• type: 2 - User Group, 1 - User
• dbType: see DatabaseType
• dbInstanceID: Instance ID
• dbInstanceName
• enabled: always 1

OldGroupItems An array of Users or User Groups to be deleted from the group.


GroupItem
GroupItemType User or User Group

12.4.24 Data Model Lexicon Group Parameters


This subsection includes a list of Lexicon Group parameters.

Parameter name Description


Name Lexicon name
Entries Lexicon entries

12.4.25 Host Group Parameters


This subsection includes a list of Host Group parameters.

Parameter name Description


Name SSO Service logical name
DatabaseType Database type:
• Any
• MsSQL
• Oracle
• DB2
• PostgreSQL
• MySQL
• Netezza
• Teradata
• Greenplum
• Redshift
• AuroraMySQL
• MariaDB
• Hive
• Hana
• Vertica
• MongoDB
• AuroraPostgreSQL
• DynamoDB
• ElasticSearch
• Cassandra
• Impala
• Snowflake
• Informix
• Athena
• S3
• Sybase

Instance Database Instance


NewHosts
OldHosts
SubItems
Item
ItemType Item type
• Unknown
• User
• Group

12.4.26 User Parameters


This subsection includes a list of User parameters.

Parameter name Description


Login User name
AccessRoles Access role
Email User email address
AuthType Network Auth (AD, Kerberos, LDAP)
Password User password
AccountStateType Is the user confirmed or not:
• 0: Confirmed
• 1: Not confirmed
• 2: Waiting for confirmation

GeneratePassword Generate user password (user Email should be provided)


TwoFactorAuthType Two-Factor authentication:
• 0: Disabled
• 1: Email
• 2: OTP-based

IpRestrictions
CustomData

12.4.27 dbUser Parameters


This subsection includes a list of dbUser parameters.

Parameter name Description


Login DB user Login
DatabaseType Database type
Instance Instance ID
EnableTwoFactorAuth TRUE: enable Two-factor authentication
TwoFactorAuthType 2FA authentication type:
• None
• EMail
• TOTP
• Skip

Mail User email address

12.4.28 Schedule Parameters


This subsection includes a list of Schedule parameters.

Parameter name Description


Name Schedule logical name
StartTime Starting time
EndTime Ending time
Intervals
Day Day of week:
• Monday
• Tuesday
• Wednesday
• Thursday
• Friday
• Saturday
• Sunday

StartDayTime Interval starting time


EndDayTime Interval ending time

12.4.29 Security Standard Parameters


This subsection includes a list of Security Standard parameters.

Parameter name Description


SecurityStandards
StandardName Standard logical name
StandardType Standard type:
• Custom
• Embedded

InformationTypes
InformationType

12.4.30 Application Parameters


This subsection includes a list of Application parameters.

Parameter name Description


Name Application name

12.4.31 Host Parameters


This subsection includes a list of Host parameters.

Parameter name Description


HostAlias Alias
Host Host name or IP address
Mask Mask
Type Entity type:
• Host
• Range
• Network
• RangeIPV6
• NetworkIPV6

12.4.32 Lua Script Parameters


This subsection includes a list of Lua Script parameters.

Parameter name Description


Name Script logical name
Script Script body

12.4.33 DSAR Config Parameters


This subsection includes a list of DSAR Config parameters.

Parameter name Description


Name Config logical name
DataDiscoveryTask Data Discovery task ID
Fields Fields
Label Label
Attributes Attributes
Id ID

12.4.34 User Access Role Parameters


This subsection includes a list of User Access Role parameters.

Parameter name Description


Name Role name
ActiveDirectoryPath Active Directory path
Permissions

Parameter name Description


Name Object type
• Users
• DbDatabases
• DbInstances
• DbInterfaces
• DbInstanceUsers
• GroupsofDbUsers
• Rules
• Proxies
• Sniffers
• Settings
• FirewallServers
• GroupsOfTemplateQuery
• SubscriberServers
• Subscribers
• TemplateQuery
• MetadataColumn
• MetadataObject
• MetadataSchema
• Programms
• Hosts
• GroupsOfHosts
• Service
• ObjectGroups
• Flush
• Sessions
• Schedule
• SslKeyGroups
• SslKeys
• DataDiscoveryGroups
• DataDiscoveryFilters
• Tasks
• PeriodicTasks
• DbDatabaseUsers
• ObjectFilter
• SubItemFilter
• Terminate
• AccessRole
• CleanAudit
• StartFirewall
• CleanDictionary
• UpgradeFirewall
• CefGroups
• CefItems
• ManualDictionaryBackup
• ManualAuditRotate
• DictionaryRestore
• ActiveDirectoryMapping
• DbInstanceProperties
• DbInstanceUsersProperties
• DbDatabaseProperties
• DbDatabaseUsersProperties
• RulesLimit

Parameter name Description


AllowedActions Actions:
• List
• View
• Insert
• Edit
• Delete
• Execute

12.4.35 Queries Map Parameters


This subsection includes a list of Queries Map parameters.

Parameter name Description


Entries
DatabaseType Database type:
• Any
• MsSQL
• Oracle
• DB2
• PostgreSQL
• MySQL
• Netezza
• Teradata
• Greenplum
• RedShift
• AuroraMySQL
• MariaDB
• SQLite
• Hive
• Hana
• Vertica
• MongoDB
• AuroraPostgreSQL
• DynamoDB
• ElasticSearch
• Cassandra
• Impala
• Snowflake
• Informix
• Redis
• Athena
• S3
• Sybase

Parameter name Description


MainQuery Main query:
• MultiStmt
• Undefined
• Any
• None
• Merge
• Load
• Unload
• Delete
• Update
• Insert
• Replace
• Upsert
• SetVar
• Select
• Table
• Values
• SubJoin
• DoExpression
• SqlScript
• ExplainPlan
• ProfileStatement
• ExecuteCommand
• DDLNone
• AlterAggregate
• AlterConversion
• AlterExtension
• AlterForeignTable
• AlterForeignData
• AlterSession
• AlterInstanceUser
• AlterUserMapping
• AlterDatabase
• AlterSystem
• AlterSequence
• AlterServer
• AlterView
• AlterMaterializedView
• AlterMaterializedViewLog
• AlterTable
• AlterFunction
• AlterMethod
• AlterProcedure
• AlterPackage
• AlterSynonym
• AlterPublicSynonym
• AlterIndex
• AlterTrigger
• AlterType
• AlterTypeBody
• AlterLanguage
• AlterSchema
• AlterRule

Parameter name Description


Synonyms See the Main query list above

12.4.36 Backup Dictionary Parameters


This subsection includes a list of Backup Dictionary task parameters.

Parameter name Description


BackupName Backup name
BackupSettings TRUE: backup settings
BackupUsers Backup users
BackupObjects Backup configurations
ExternalCommand External command to be executed

12.4.37 Clean Audit Parameters


This subsection includes a list of Clean Audit task parameters.

Parameter name Description


cleanAuditType Cleaning method:
• Unknown
• UseDeleteClean
• UseDropClean
• UseCropByDate

RemoveDataOlderThan Remove audited data older than the specified value


IgnoreErrors TRUE: enable "Ignore Errors Associated with Unavailability of Servers when
Performing Clean Audit"
DeleteDataBy • FullPartitions
• Limits

Archive TRUE: enable "Archive Removed Data before Cleaning (to AWS Athena CSV
Format)"
ArchiveType • None
• AwsAthenaCSV

ArchiveFolder Archive folder


ExecuteCommand A command to be executed after archiving

12.4.38 Health Check Parameters


This subsection includes a list of Health Check task parameters.

Parameter name Description


Instance DB instance
HealthCheckType Health check type:
• Local
• Web

SendErrorsToEvents TRUE: enable "Send Error Messages to Event Monitor"


LoadBalancerAddress Load balancer's IP address
LoadBalancerPort Load balancer's port number

12.4.39 Static Masking Parameters


This subsection includes a list of Static Masking task parameters.

Parameter name Description


SourceInstance Source DB instance
TargetInstance Target DB instance
SourceAsSYSDBA Connect to the source database as SYSDBA (Oracle)
TargetAsSYSDBA Connect to the target database as SYSDBA (Oracle)
SourceLogin Source database login
TargetLogin Target database login
IsSourceDefaultCredentials Use default credentials for source database
IsTargetDefaultCredentials Use default credentials for target database
SourcePassword Source database password
TargetPassword Target database password
CreateTables TRUE: enable "Create Tables if They Do Not Exist"
CreateCheckConstraints TRUE: enable "Create Check Constraints"
CreateUniqueConstraints TRUE: enable "Create Unique Constraints"
CreateDefaultConstraints TRUE: enable "Create Default Constraints"
CreateForeignKeys TRUE: enable "Create Foreign Keys"
CreateIndexes TRUE: enable "Create Indexes"
TruncateTables TRUE: enable "Truncate Target Tables"
CheckTargetTablesIsEmpty TRUE: enable "Check for Empty Target Table"
DropFKBeforeTruncating TRUE: enable "Drop Foreign Keys before Truncating"
DisableTriggers TRUE: enable "Disable Triggers"
DropConstraints TRUE: enable ""
DisableLogging TRUE: enable ""
UseHints TRUE: enable ""
LoadType Loader type:
• Default
• DbLink
• Libpq
• DirectPath
• Libmysql
• Bcp
• S3
• Hdfs
• TbuildAuto
• TbuildLoad
• TbuildInserter
• TbuildStream
• TbuildUpdate
• ExternalTable
• DbAccess

UseDirectSourceConnect TRUE:

Parameter name Description


UseProxyForMasking TRUE: enable usage of a proxy for masking
IntTestsSlowdown TRUE:
UseParallelLoad TRUE: enable "Use Parallel Load"
ApplyRelatedTablesFilters TRUE: enable "Apply Related Tables Filters"
MaskInPlace TRUE: enable "Mask In Place"
ResolveUndefinedRelationshipTypes TRUE: enable "Automatically resolve relationship types between related tables if there are Undefined ones"
S3ConnectionType Connection method for AWS S3 buckets
BucketName S3 bucket name
BucketRegion S3 bucket region
AccessKey Access key
SecretKeyVaultType Secret key storage type
SecretKey Secret key
AwsSecretId AWS Secrets Manager ID
RoleARN ARN Role
CustomProxyPort Custom proxy port number
WorkerID Worker ID
ProxyID Proxy ID
ProxyHost Proxy host or IP address
Databases Databases
DatabaseSourceName Source database name
DatabaseTargetName Target database name
Schemas Schemas
SchemaSourceName Source schema name
SchemaTargetName Target schema name
Tables Table to transfer
TransferAllTables Transfer all tables
TableName Table name
ColumnsFilter Filtering by columns
Columns Columns to be masked

Parameter name Description


MaskType Masking method to be used:
• Unknown
• Default
• FixedNumber
• FixedString
• EmptyValue
• RandomValueLikeCurrent
• RandomFromInterval
• FunctionCall
• RawReplace
• EmailMasking
• EmailMaskingFull
• EmailMaskingUserName
• CreditCardMasking
• MaskLastChars
• ShowLastChars
• MaskFirstChars
• ShowFirstChars
• ShowFirstLastChars
• MaskFirstLastChars
• RegexpReplace
• FixedDateTime
• FixedDate
• FixedTime
• RandomDateTimeInterval
• RandomDateInterval
• RandomTimeInterval
• RandomDateTimeOffset
• RandomDateOffset
• RandomTimeOffset
• MaskUrl
• UnstructuredDataMasking
• LuaScript
• FPTokenizationEmail
• FPTokenizationSSN
• FPTokenizationCreditCard
• FPTokenizationNumber
• FPTokenizationString
• FPEncryptionEmail
• FPEncryptionSSN
• FPEncryptionCreditCard
• FPEncryptionNumber
• FPEncryptionString
• FilterTable
• HideTable

ColumnName Column name


MaskValue Masked value

Parameter name Description


FilterColumnName Filtering by column names
IgnoreColumnsAndReferencesChecks Ignore column and reference checks

12.4.40 AWS Remove Unused Servers Parameters


This subsection includes a list of AWS Remove Unused Servers task parameters.

Parameter name Description


No parameters

12.4.41 User Behavior Training Parameters


This subsection includes a list of User Behavior task parameters.

Parameter name Description


TrainStartDate Training starting date
TrainEndDate Training ending date

12.4.42 Queries History Learning Parameters


This subsection includes a list of Queries History Learning task parameters.

Parameter name Description


Instance Database instance
IsDefaultCredentials TRUE:
Login Database user login
Password Database user password
TargetModel Table Relation to save the detected relations in
DatabaseSpecificInfo
IncludeObjectGroups Object Group to include (Process tables in)
ExcludeObjectGroups Object Group to exclude (Skip tables in)
AuditDataDmlFilters An array of included DML filters
AuditSkipDmlFilters An array of skipped DML filters

12.4.43 Vulnerability Assessment Parameters


This subsection includes a list of Vulnerability Assessment task parameters.

Parameter name Description


Instances Database instances to check for vulnerabilities
Subscribers A list of Subscribers to notify about the latest vulnerabilities

12.4.44 Data Discovery Task Parameters


This subsection includes a list of Data Discovery Task parameters.

Parameter name Description


Instance Database Instance ID
SkipStandartObjects TRUE: Skip standard objects
GroupIds Group IDs
SensitivityLabels Sensitivity labels
SelectRowCount Select number of rows
ReSelectIfNulls TRUE: re-select if NULLs
ReSelectCountPercent
DataMatchPercent
DbFilter DB name
SchemaFilter Schema name
TableFilter Table name
ObjectGroup Object group
PageSize Page size
Columns Columns:
• Name: column name
• isRegex: column name is a regular expression

SkipObjects
Database Database name
DatabaseRegex TRUE: Database name is a regular expression
Schema Schema name
SchemaRegex TRUE: Schema name is a regular expression
Table Table name
TableRegex TRUE: Table name is a regular expression

12.4.45 Data Discovery Report Parameters


This subsection includes a list of Data Discovery Report task parameters.

Parameter name Description


Order Order of columns
ColumnType Column filter:
• FilterID
• FilterName
• FilterGroup
• DatabaseName
• SchemaName
• TableName
• ColumnName
• ColumnType
• RowID
• Standards

Title Title

12.4.46 Operations Report Task Parameters


This subsection includes a list of Operations Report Task parameters.

Parameter name Description


QueryTypes Query Types. See Queries Map parameters
ChainType Rule type:
• Audit
• Security
• Learning
• Masking
• Encryption
• ExternalDispatcher

OperationsWithError TRUE: report on operations resulted in errors


Columns
Order Column order
ColumnType Column type:
• Number
• String
• DateOnly
• DateTime
• TimeOnly
• Other

Title Column title

12.4.47 Operations Report Parameters


This subsection includes a list of Operations Report task parameters.

Parameter name Description


QueryTypes Query type:
• MultiStmt
• Undefined
• Any
• None
• Merge
• Load
• Unload
• Delete
• Update
• Insert
• Replace
• Upsert
• SetVar
• Select
• Table
• Values
• SubJoin
• DoExpression
• SqlScript
• ExplainPlan
• ProfileStatement
• ExecuteCommand
• DDLNone
• AlterAggregate
• AlterConversion
• AlterExtension
• AlterForeignTable
• AlterForeignData
• AlterSession
• AlterInstanceUser
• AlterUserMapping
• AlterDatabase
• AlterSystem
• AlterSequence
• AlterServer
• AlterView
• AlterMaterializedView
• AlterMaterializedViewLog
• AlterTable
• AlterFunction
• AlterMethod
• AlterProcedure
• AlterPackage
• AlterSynonym
• AlterPublicSynonym
• AlterIndex
• AlterTrigger
• AlterType
• AlterTypeBody
• AlterLanguage
• AlterSchema
• AlterRule

Parameter name Description


ChainType Rule type:
• Audit
• Security
• Learning
• Masking
• Encryption
• ExternalDispatcher

OperationsWithError TRUE: report on operations ended with errors


Columns
Order Column order
ColumnType Column contents:
• ServiceName
• DbUserName
• BeginTime
• OsUserName
• SqlQuery
• OperationType
• ClientHost
• ServerHost
• AppName
• InstanceID
• CurrentDbName
• DbType
• ProxyID
• InstanceName
• SnifferID
• TotalAccess
• ErrorText
• TouchedSchema
• TouchedTable
• TotalAffectedRows
• RowID
• UniqueID
• ClientHostName
• ClientPort
• ServerPort
• TouchedDb
• ApplicationUserName
• SessionID

Title Title

12.4.48 Session Report Parameters


This subsection includes a list of Session Report task parameters.

Parameter name Description


Instance Database instance
SessionReportType Session report type:
• AllSessions
• ErrorSessions
• AuthErrorOnly

12.4.49 Direct Sessions Report Task Parameters


This subsection includes a list of Direct Sessions Report Task parameters.

Parameter name Description


Instance Database Instance

12.4.50 Operations Error Report Task Parameters


This subsection includes a list of Operations Error Report Task parameters.

Parameter name Description


QueryTypes Query Types. See Queries Map parameters
Columns
Order Column order
ColumnType Column type:
• ServiceName
• DbUserName
• BeginTime
• OsUserName
• SqlQuery
• OperationType
• ClientHost
• ServerHost
• AppName
• InstanceID
• CurrentDbName
• DbType
• ProxyID
• InstanceName
• SnifferID
• TotalAccess
• ErrorText
• TouchedSchema
• TouchedTable
• TotalAffectedRows
• RowID
• UniqueID
• ClientHostName
• ClientPort
• ServerPort
• TouchedDb
• ApplicationUserName
• SessionID

Title Column title

12.4.51 System Events Report Task Parameters


This subsection includes a list of System Events Report Task parameters.

Parameter name Description


Columns
Order Column order
ColumnType Column type:
• ID
• EventID
• Time
• Level
• Type
• Server
• Message

Title Column title

12.4.52 Instances Status Report Task Parameters


This subsection includes a list of Instances Status Report Task parameters.

Parameter name Description


No parameters are available

12.4.53 Settings Parameters


This subsection includes a list of Settings parameters.

Parameter name Description


Settings
SettingName Name
SettingValue Value
Server Server ID

13 DataSunrise Authentication Proxy

13.1 DataSunrise Authentication Proxy Overview
To maintain a secure connection to databases or to the Web Console, DataSunrise can be used as an authentication
proxy. Once user mapping is configured, users will be able to connect to databases through the DataSunrise proxy
using their Active Directory credentials. DataSunrise supports the organizational authentication policies of
Microsoft Active Directory via the Kerberos and LDAP protocols. You can also configure Active Directory authentication for
users of DataSunrise's Web Console to enhance security and make role management for AD user groups easier.

Figure 46: Active Directory users can be mapped to one database user or each AD user can be mapped to a
separate database user, as shown in the figure. When a client connects to a database, DataSunrise connects
to AD services and ascertains rights of the user to connect to the database.

Prerequisites:
The machine to be configured must belong to an Active Directory domain. Follow the Microsoft instructions on
joining the Active Directory domain.
DataSunrise Authentication Proxy configuration scheme:
1. Creating an AD user and assigning principal names with encrypted keys on the domain controller machine.
2. Configuring DataSunrise to map AD users to DB users.

DataSunrise supports the following encryption algorithms:

Database Supported algorithms
Amazon Redshift MD5
Greenplum MD5
MySQL SHA-1, SHA-256
Netezza MD5, SHA-256, crypt
PostgreSQL MD5
Vertica MD5, SHA-512
SQL Server xor

• Redshift. Always uses MD5 hashing. You don't need to configure anything on the server side.
• PostgreSQL. Open the pg_hba.conf file and set the authentication method to "md5" (see the example below). Refer to the following page for details: https://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html
• Netezza. Depending on the password hashing used for mapping (MD5, SHA256, crypt), set the authentication method with the SET CONNECTION command. Refer to the following page for details: https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.dbu.doc/r_dbuser_set_connection.html
• Vertica. Depending on the password hashing used for mapping (MD5, SHA512), set SECURITY ALGORITHM with the ALTER USER command. Refer to the following page for details: https://my.vertica.com/docs/8.1.x/HTML/index.htm#Authoring/SQLReferenceManual/Statements/ALTERUSER.htm
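For example, a minimal pg_hba.conf entry that enforces md5 password authentication for connections coming from the DataSunrise proxy host might look as follows (the 10.0.0.5/32 address is a hypothetical DataSunrise host; adjust it to your environment):

# TYPE  DATABASE  USER  ADDRESS       METHOD
host    all       all   10.0.0.5/32   md5

After editing pg_hba.conf, reload the PostgreSQL configuration (for example, with pg_ctl reload or SELECT pg_reload_conf();) for the change to take effect.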

13.2 Integrating Active Directory with DataSunrise Proxy
Here are the steps you will need to perform to integrate DataSunrise with Active Directory (AD).
• Create an AD user (Linux)
• Create an SPN for DataSunrise server (Windows)
• Create a keytab and an SPN for the AD user (Linux)
• Enable delegation for the AD user (Windows and Linux)
• Configure krb5.conf using the keytab (Linux)
• Run DataSunrise as a service.
You can find the description of these steps in the following subsections.

13.2.1 Integration on Windows


13.2.1.1 Setting a Service Principal Name (SPN)
You need to set a Service Principal Name (SPN) for required services. Follow the steps given below to set an SPN.
1. Log into the AD domain controller server.
2. Set an SPN using the setspn tool:

setspn -S MSSQLSvc/<fqdn>:1433 <hostname>



Parameter Description
-S Adds the specified SPN after verifying that no duplicates exist
<service>/<fqdn> Specify an SPN and a Fully Qualified Domain Name (FQDN) in the following
format: service/<fqdn>@REALM
• Use vertica as a service name for HP Vertica
• Use postgres for Amazon Redshift, PostgreSQL or Greenplum
• Use netezza for IBM Netezza
• Use MSSQLSvc for MS SQL Server
• Use HTTP for DataSunrise GUI authentication

<hostname> DataSunrise server hostname
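For example, assuming a hypothetical DataSunrise host named DS-HOST01 with the fully qualified name ds-host01.example.com and an MS SQL Server proxy listening on port 1433, the command might look as follows:

setspn -S MSSQLSvc/ds-host01.example.com:1433 DS-HOST01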

13.2.1.2 Configuring Active Directory Delegation

1. On the domain controller machine, navigate to Active Directory Users and Computers, locate the account of
the machine DataSunrise is installed on.
2. In the Properties section, go to the Delegation tab and select Trust this computer for delegation to specified
services only and click Add.
3. In the Users and Computers window, specify the user account that was used to launch the database or the
name of the server the RDBMS is installed on.
4. Optionally, you can use Check names to check if a specified user or computer exists and click OK, then select a
required service and click OK.

13.2.2 Integration on Linux


13.2.2.1 Creating an Active Directory User (Linux)
To configure DataSunrise Proxy Authentication, you need to create an AD user (an existing one can be used as well).
Additionally, for Kerberos authentication you need to create a keytab file containing pairs of Kerberos principals and
encrypted keys. Follow the steps given below to create a new AD user.
1. Log into the domain controller server, click Start → Administrative Tools, and launch Active Directory Users
and Computers.
2. If it is not already selected, click the node for your domain (domain.com).
3. Right-click Users, point to New, and then click User.
4. In the New Object → User dialog box, specify the parameters of the new user. It can be a regular user; it is not
required to grant the user any additional privileges. The user account should be active (Account is disabled
check box unchecked), and the account's password should never expire (Password never expires check
box checked).

13.2.2.2 Setting a Service Principal Name (SPN)


You need to set a Service Principal Name (SPN) for required services. Follow the steps given below to set an SPN.
1. Log into the AD domain controller server.
2. Set an SPN using the setspn tool:

setspn -S <service>/<fqdn> <AD_username>



Parameter Description
-S Adds the specified SPN after verifying that no duplicates exist.
<service>/<fqdn> Specify an SPN and a Fully Qualified Domain Name (FQDN) in the following
format: service/<fqdn>@REALM
• Use vertica as a service name for HP Vertica
• Use postgres for Amazon Redshift, PostgreSQL or Greenplum
• Use netezza for IBM Netezza
• Use MSSQLSvc for MS SQL Server
• Use HTTP for DataSunrise GUI authentication.

<AD_username> AD user name.

13.2.2.3 Configuring Active Directory Delegation

1. On the domain controller machine, navigate to Active Directory Users and Computers, locate the account of
the machine DataSunrise is installed on.
2. In the Properties section, go to the Delegation tab and select Trust this computer for delegation to specified
services only and click Add.
3. In the Users and Computers window, specify the user account that was used to launch the database or the
name of the server the RDBMS is installed on.
4. Optionally, you can use Check names to check if a specified user or computer exists and click OK, then select a
required service and click OK.

13.2.2.4 Creating a keytab (Linux)


A keytab is a file which contains pairs of Kerberos principals and encrypted keys. You can use a keytab file to
authenticate to various remote systems via Kerberos without entering a password. On Linux, you can create a
keytab using the ktutil tool. Follow the steps given below to create a keytab.
1. Run ktutil and execute the following commands (enter the AD user's password when prompted after each addent command).

ktutil
ktutil: addent -password -p <service_name>/<pc_name>@<DOMAIN NAME> -k 1 -e aes128-cts-hmac-sha1-96
<AD user password>
ktutil: addent -password -p <service_name>/<pc_name>@<DOMAIN NAME> -k 1 -e aes256-cts-hmac-sha1-96
<AD user password>
ktutil: addent -password -p <service_name>/<pc_name>@<DOMAIN NAME> -k 1 -e arcfour-hmac
<AD user password>
ktutil: addent -password -p <service_name>/<pc_name>@<DOMAIN NAME> -k 1 -e des-cbc-md5
<AD user password>
ktutil: addent -password -p <service_name>/<pc_name>@<DOMAIN NAME> -k 1 -e des-cbc-crc
<AD user password>
ktutil: wkt postgres.keytab
ktutil: q

2. Place your keytab file in the DataSunrise installation folder.


3. Amend the krb5.conf file by adding your keytab to it. For example:

[libdefaults]
default_realm = DB.LOCAL
clockskew = 300
ticket_lifetime = 1d
forwardable = true
proxiable = true
dns_lookup_realm = true
dns_lookup_kdc = true
default_keytab_name = FILE:/opt/datasunrise/backend.keytab
default_ccache_name = FILE:/tmp/krb5cc_datasunrise

[realms]
DB.LOCAL = {
kdc = dsun.db.local
admin_server = dsun.db.local
default_domain = DB.LOCAL
}

[domain_realm]
.db.local = DB.LOCAL
db.local = DB.LOCAL

[appdefaults]
pam = {
ticket_lifetime = 1d
renew_lifetime = 1d
forwardable = true
proxiable = false
retain_after_close = false
minimum_uid = 0
debug = false
}

By default, the krb5.conf file is located in the /etc/ folder. If your krb5.conf file is in another folder, you need to
reset the KRB5_CONFIG environment variable's value:

export KRB5_CONFIG=<path to your krb5.conf>
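To verify that the keytab was created correctly and that the credentials stored in it are valid, you can list its entries and request a ticket with it. A minimal check, assuming MIT Kerberos client tools and the postgres.keytab file created above placed in /opt/datasunrise:

klist -k -t /opt/datasunrise/postgres.keytab
kinit -kt /opt/datasunrise/postgres.keytab <service_name>/<pc_name>@<DOMAIN NAME>
klist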

13.3 Configuring DataSunrise Authentication Proxy for Database Connections
To maintain a secure connection to a database, DataSunrise can be used as an authentication proxy. Once user
mapping is configured, users will be able to connect to databases through a DataSunrise proxy using Active
Directory user credentials. Authentication can be done using the Kerberos or LDAP protocols.
The Kerberos protocol is based on "tickets" and provides mutual authentication — both the user and the server
verify each other's identity. Kerberos protocol messages are protected against eavesdropping and replay attacks.
LDAP is an application protocol used as a central repository for user information and as an authentication service.
Compared to Kerberos, LDAP provides one-way authentication. Hence, it is not a single sign-on technology: users
have to log in to every service.

Important: the DataSunrise authentication proxy feature is available for Amazon Redshift, Greenplum, MySQL,
PostgreSQL, Netezza, SQL Server and Vertica databases.

13.3.1 LDAP Authentication for Database Connections


To configure DataSunrise to map Active Directory users to database users using LDAP, do the following.
1. To configure DataSunrise Proxy Authentication, you need to create an AD user or use an existing one (refer to
Creating an Active Directory User (Linux) on page 320).
2. Create an LDAP server. Navigate to System Settings → LDAP and click Add LDAP Server. Fill out the required
fields:

Parameter Description
Logical Name Logical name of the LDAP server's profile
Group Attribute A search filter used to filter user groups by attribute. Used for mapping of AD user
groups.
Host LDAP server’s host
Login Type Server type
Port Server’s port number
Login Custom Format If you want to know the format for an LDAP login, you need to replace the dots in
the DNS name with commas. For example, CN=Test.OU=Europe.O=Novell would become
CN=Test,OU=Europe,O=Novell. If you are not using Novell LDAP, it would become
CN=Test,OU=Europe,DC=Novell,DC=com, depending on the domain (DC) you use
to authenticate.
DataSunrise supports the following patterns: <name>, <domain>, <basedn>, which
are replaced automatically. For example:
• Active Directory: <domain>\<name>
• OpenLDAP: cn=<name>, <basedn>

SSL check box Use SSL for connection


Domain LDAP server domain name. Used to create an LDAP login.
Login LDAP user name. Needed for authentication and execution of queries by a privileged
account. Used for mapping groups and AD authentication in the Web Console
Base DN Distinguished Name (DN) of the database to search across; the point in the DIT (Directory
Information Tree) from which the data search starts

Save Password Method of saving an LDAP password:


• Save in DataSunrise
• Retrieve from CyberArk. In this case you should specify CyberArk's Safe, Folder
and Object (fill in the corresponding fields)
• Retrieve from AWS Secrets Manager. In this case you should specify AWS Secret
Manager ID (refer to Using AWS Secrets Manager for Storing Passwords on page
420)
• Retrieve from Azure Key Vault. You should specify Secret Name and Azure Key
Vault name to use this feature

Password (if an LDAP password is saved in DataSunrise) LDAP user password. Needed for authentication and execution of queries by a
privileged account. Used for mapping groups and AD authentication in the Web
Console
Is Default check box Use this LDAP server as a default one
User Filter An expression that defines the criteria for selecting catalog objects within the
search area defined by the "scope" parameter; that is, a search filter used to
search for user attributes

Note: if your system includes multiple LDAP servers, add all of their profiles to DataSunrise. The Base DN
should be different for each server, while the host name should be the same for all the associated servers.
DataSunrise looks for the user you're trying to log in as across all available LDAP servers every time you
authenticate via the DataSunrise authentication proxy, so all users should have unique names; otherwise
errors might occur.

3. Follow the mapping configuration instructions in Configuring User Mapping on page 324.
Important for MySQL users: there are two available methods of transferring passwords:
1. sha256_password: the recommended method of transferring passwords. Make sure that the
MySQLUseSHA256PasswordMethodForMapping parameter is enabled in the System Settings → Additional
subsection.
2. mysql_clear_password: use this method if your client application does not support the sha256_password
method. To enable this method, perform the following.
• Enable the Cleartext Authentication Plugin on the client side:

mysql --enable-cleartext-plugin -h <DataSunrise_hostname> --port=3307 -u <AD_user> --password=<password>

• Go to System Settings → Additional and uncheck the MySQLUseSHA256PasswordMethodForMapping
parameter (set its value to 0).

Important: when the Cleartext Authentication Plugin is used, the passwords will be sent unencrypted, which is
not safe unless you use an SSL-encrypted connection.

Important: if the MySQLUseSHA256PasswordMethodForMapping parameter is set to "0" and you get the
following error "Authentication with 'mysql_clear_password' method requires SSL encryption to transmit password
securely. This requirement can be disabled.", you should enable SSL both on the client side and on the server side.
Alternatively, you can disable the LdapMappingRequireClientSideSSL parameter (set it to "0").

Important: currently, DataSunrise doesn't support caching_sha2_password authentication for MySQL
8, so LDAP authentication can't be used with default MySQL settings. To enable authentication, change
default_authentication_plugin=caching_sha2_password to default_authentication_plugin=mysql_native_password
in my.ini (Windows) or my.cnf (Linux).
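For example, the relevant fragment of my.cnf (Linux) or my.ini (Windows) might look as follows; restart the MySQL server after changing it:

[mysqld]
default_authentication_plugin=mysql_native_password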

13.3.2 Kerberos Authentication for Database Connections


1. To configure DataSunrise Proxy Authentication, you need to create an AD user (an existing one can be used as
well) and set a principal name (SPN) for required services. For more information on configuring Kerberos, refer to
https://www.datasunrise.com/blog/professional-info/configuring-kerberos-authentication-protocol/
2. Follow the mapping configuration instructions in subs. 13.3.3, Configuring User Mapping on page 324.
Important for MySQL users: there are two available methods of transferring tokens:
1. auth_windows: to use this method, make sure that the MySQLUseAuthGSSAPIMethodForMapping parameter
is unchecked in the Settings → Additional subsection.
2. GSSAPIAuth: to use this method, go to Settings → Additional and check the
MySQLUseAuthGSSAPIMethodForMapping parameter.

13.3.3 Configuring User Mapping


1. Navigate to System Settings → General → Authentication Proxy and select LDAP in User Mapping Protocol
Type.
2. Navigate to Configuration → Databases and open an existing database profile.
3. In the profile's settings, click Actions → Authentication Proxy Settings
4. Click Enable to enable authentication proxy for the current database.
5. Select location of mapping configuration:
• DataSunrise Integrated: in a DataSunrise config file;
• CSV: in an external CSV file (inside the AF_HOME folder)
• External Database: in an external database. To use an external database to store mapping configuration, you
need to create a table there. Here's an example of a query that can be used to create such a table (SQLite
database):

CREATE TABLE "active_directory_mapping" (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  instance_id INTEGER NOT NULL,
  ad_group VARCHAR(1024),
  ad_user VARCHAR(1024),
  db_user VARCHAR(1024) NOT NULL,
  pass_hash VARCHAR(1024),
  pass_hash2 VARCHAR(1024),
  ldap_server_id INT
)

6. Click Mapping+ to create a new mapping task.


7. Fill out the required fields:
Parameter Description
AD Entity Single AD user or a group of AD users
AD Login Active Directory user name to map database user to
DB Login Database user name to map the AD user to
DB Password Database password
Hash Used by DB for Authentication Hash type (see Supported Encryption Algorithms, DataSunrise Authentication Proxy
Overview on page 318)
Admin Login (Vertica only) Admin login
Admin Password (Vertica only) Admin password
LDAP Server LDAP server to use for mapping

Click Save.

13.3.4 Mapping a Group of AD Users


To map a group of AD users to a database user, perform the following.
1. Specify an Active Directory user that has access to AD groups. To do this, change the following parameters via
the DataSunrise CLI, specifying the LdapUser parameter in the format of <domain_name>\<AD_username>.

executecommand.bat changeParameter -name LdapUser -value "DOMAIN\<AD_username>"


executecommand.bat changeParameter -name LdapPassword -value <AD_user_password>

To change the parameters via the DataSunrise's Web Console, navigate to System Settings → General and
specify an AD user name and a password in the corresponding text fields.
2. Perform all the steps from the previous section.
All the actions are the same except for adding the AD user mapping configuration (see Configuring User Mapping on
page 324).
Instead of an AD user name (the -adLogin parameter), use the name of the AD group (-adGroup):

executecommand.bat addDbUserMapping -instance vertica -adGroup <AD_group_name> -dbLogin <DB_user> -dbPassword <DB_password> -hashType MD5
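For comparison, a mapping for a single AD user uses the -adLogin parameter instead of -adGroup; a hypothetical example (the instance name, user names and password are placeholders):

executecommand.bat addDbUserMapping -instance vertica -adLogin <AD_username> -dbLogin <DB_user> -dbPassword <DB_password> -hashType MD5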

Note: make sure that the Group you're using for mapping is not the Primary group of the AD user you're going
to authenticate with. If it is, assign another Group as the Primary group for that User. You can do it on your AD Domain
Controller.

13.4 Configuring Mapping of AD Users to Database Users via the Web Console
1. Navigate to your target database profile, click Auth Proxy Settings. You will be redirected to the AD to DB User
Mappings page
2. Enable user mapping for your database. Click Enable and in the Enable User Mapping window select Database.
Specify the connection details of your target database and click Enable.
Note: you can use the Config option if the information about user mapping is stored in the database, or the File
option if the information about mapping is stored in an external file.
3. Click Mapping+ to create a new User Mapping
4. Fill out the required fields
UI element Description
AD Type drop-down list Select Login for a single AD user and Group for a group of AD users
AD Login field Active Directory user's name
DB Login field Name of the database user you want to map the AD user to
DB Password field Password of the database user
Hash Type field Hash type (MD5 or SHA-512)

5. Click Save.

13.4.1 LDAP Users Cache


Normally, when user mapping is configured, DataSunrise establishes connections with the AD Controller to get the AD
logins required for user mapping. When a large number of such connections is established, it can cause a performance
hit. In that case you can use the Cache of LDAP Users feature, which adds AD logins to a cache so that DataSunrise
uses this cache to get the AD logins required for user mapping. This cache exists temporarily (depending
on the caching settings, see below). To enable the caching, do the following:
1. Navigate to System Settings → Additional Parameters
2. Locate the LdapLoginCacheTimeout parameter (you can use the search feature to do that). This parameter's value
defines the number of seconds the LDAP user cache exists
3. Set LdapLoginCacheTimeout to 900, for example: LDAP logins will be kept in DataSunrise's memory for 15 minutes,
avoiding issues associated with the performance hit.
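If you prefer the command line, the same parameter can presumably be changed with the changeParameter CLI command used elsewhere in this chapter, for example:

executecommand.bat changeParameter -name LdapLoginCacheTimeout -value 900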

13.5 Customization of an LDAP Search String for Authentication Proxy
By default, DataSunrise uses fixed parameters to search for users on LDAP servers. It uses queries with the
"(&(objectCategory=User)(sAMAccountName=<name>))" filter and compares the required user name with the
"sAMAccountName" attribute returned. DataSunrise enables you to configure the search filters and the user attributes to
be returned.

13.5.1 Searching for Users


LdapMappingSearchFilterUser enables you to set a filter for searching for a user name on an LDAP server using the
following pattern: "(&...=<name>...)". The <name> placeholder is replaced with the required user name.
By default, the following filter is used: "(&(objectCategory=User)(sAMAccountName=<name>))"
DataSunrise will approve the authentication if one or more entities are returned.
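For example, to search by userPrincipalName instead of sAMAccountName, you could set the filter as follows. This is a sketch that reuses the changeParameter CLI command shown earlier; it assumes LdapMappingSearchFilterUser can be changed the same way as other Additional Parameters:

executecommand.bat changeParameter -name LdapMappingSearchFilterUser -value "(&(objectCategory=User)(userPrincipalName=<name>))"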

13.5.2 Searching for User Groups


For searching groups, DataSunrise uses a parameter similar to the one used for searching users:
LdapMappingSearchFilterGroup.
LdapMappingRetrieveAttributeGroup enables you to set an attribute to be returned (the group name) to check whether the
required user is a member of the group. By default, the "memberOf" attribute is used. The group name should be
included in the "memberOf" list for the user's authentication to be approved. To give the required permissions to your user,
do the following:
1. On your AD domain controller machine, open AD Users and Computers and navigate to your domain object
2. Right-click and navigate to Properties → Security → Advanced
3. Click Add and enter the user name to add
4. Click Properties and in Apply to, change the type to User objects
5. Check the Read Member Of check box. Click OK to save the settings.

14 System Settings
System Settings section provides access to DataSunrise system settings as follows:
• Messages the firewall displays when blocking access to a target DB
• Configuration of a database which DataSunrise uses to store data auditing results (Audit Storage)
• Logging settings. Access to logs
• Email notification settings
• User profiles and roles
• Syslog settings
• DataSunrise servers
• Configuration backups
To enter this section, click the System Settings link.

14.1 General Settings


Configuration Control subsection. Enables you to create backups of DataSunrise files

Interface element Description


Create Backup button Create a backup. Click Create Backup, then select objects to include in the
backup. The following objects can be selected to be backed up:
• Settings: firewall settings.
• Users: firewall Users, Roles, etc.
• Configurations: servers, instances (Interfaces, Proxies, Sniffers, metadata),
database Users and Groups, Hosts, Schedules, Applications, Static Masking
tasks, Data Discovery tasks, Report Generator reports, Query Groups,
Subscribers settings, SSL key groups, Data Discovery groups, CEF groups,
Rules.

Note: the backup name is the date and time of the backup and also the
name of the folder the backup is saved in

Table with available backups Displays available backups


Actions → Restore Recover the Dictionary. Select a backup that should be used for recovery in
the table and click Actions → Restore.
Actions → Remove Remove a backup. Select a backup to be removed in the table and click
Actions → Remove
Actions → Edit comments Add comments to a backup.

Advanced Dictionary Operations subsection. The settings enable you to clean your Dictionary (Clean Dictionary
in the Operation drop-down list) and encrypt all *.db files (including the Dictionary) with a custom encryption key
(Encryption of Configuration Files in the Operation drop-down list). The encryption operation is irreversible,
but you can encrypt the files as many times as you want. The key is stored in the crypt.pem file located in the
DataSunrise installation folder. Once the encryption is applied, the Core and Backend will be restarted.

Important: don't downgrade DataSunrise to a lower version when using Dictionary encryption. It will cause
"Error 607" (inability to use the old configuration) because the local_settings.db file of the former version of
DataSunrise was previously encrypted with another key.

Web Console Parameters subsection. Contains settings needed to configure authentication of Active Directory
users to the DataSunrise's Web Console and to configure mapping of AD users to DataSunrise users. These settings
are required to configure the Authentication Proxy as well (refer to subs. DataSunrise Authentication Proxy on page
318).

Interface element Description


Type of Authentication to DataSunrise UI drop-down list Authentication to the DataSunrise Web Console:
• Simple: by login/password
• Kerberos: by Kerberos
• LDAP: enables you to log in to the Web Console using LDAP
accounts.
To enable this feature, add at least one LDAP server to System
Settings → LDAP and in the System Settings → Type of
Authentication to DataSunrise UI select LDAP
Two authentication modes are available:
• By user name. A user should exist (Access Control →
Users) with Active Directory Authentication enabled. At the
login page, insert the name and password saved in LDAP.
DataSunrise’s backend checks all available LDAP servers and
tries to connect to them using the login and password and
authenticates the user.
• By group. It is used when a user unknown to the system tries to
log in. Note that the Group Attribute (System Settings → LDAP)
should include a correct attribute name and Access Control →
Roles should include Active Directory Path. The backend tries
to connect to available LDAP servers and gets the attribute
specified in the Group Attribute field. It creates a user and
grants it certain rights according to the Active Directory Path
names. Authentication will be performed as “By user name”
hereafter, because a user already exists.

Messages subsection

Interface element Description


The Message for Blocked Rules field Specify a message that DataSunrise displays when a query is
blocked by the Data Security functionality.

Authentication Proxy subsection (for more information on Authentication Proxy, refer to the Admin Guides)

Interface element Description


User Mapping Protocol Type drop-down list Authentication proxy type:
• Kerberos
• LDAP

Require Client Side SSL for LDAP Mapping check box Require usage of SSL at client side when doing LDAP Mapping
Accept Only Mapped Users check box When enabled, the connection of database users via DataSunrise's
proxy will be restricted. Only mapped AD users will be able to
connect to the database via DataSunrise's proxy.

HTTP Proxy Settings subsection. Contains proxy settings required for sending metrics to AWS from closed
networks (refer to Amazon CloudWatch Custom Metrics on page 418)

Interface element Description


Type drop-down list Proxy's connection protocol (HTTPS or HTTP)
Host field Proxy's host or IP address
Port field Proxy's default port
User field User name
Password field (when Change Credentials is enabled) User's password

Self-Service Access Request subsection. Contains SSAR settings. Refer to Self-Service Access Request on page
425.

14.2 Logging Settings


Navigate to System Settings → Logging and Logs → Logging to get to the Logging tab.
General subsection

Interface element Description


Path to the Directory that Contains DataSunrise Logs text field Define a path to the folder for keeping DataSunrise
log files. If you've changed the folder, restart either
the Backend or the DataSunrise system service.
For Linux: the owner of the folder should be the
datasunrise:datasunrise user. To grant this user the
required privilege, execute the following command:

chown -R datasunrise:datasunrise <folder path>

For example, if you changed the folder to /var/opt/datasunrise
from /opt/datasunrise/logs, the command should be the following:

chown -R datasunrise:datasunrise /var/opt/datasunrise/

File Name for the AppBackendService Log text field AppBackendService log file name
File Name for the AppFirewallCore Log text field AppFirewallCore log file name
Write Log Messages to the Existing Log File Before Creating a New One text field Number of hours to elapse before
DataSunrise creates a new log file (24 hours by default)
Time Period to Store Old Log Files (Days) text field Number of days DataSunrise keeps old log files (7 days
by default)
Limit Total Size of AppBackendService Log Files (MBytes) text field Maximum size of the Backend's log files
Limit Total Size of AppFirewallCore Log Files text field Maximum size of Core's log files
Maximum Size of a Single Log File (MBytes) text field Maximum size of a log file. (10 MB by default)

Statistics subsection

Interface element Description


Logging Interval (Seconds) text field Periodicity of logging events (10 seconds by default)
Time Interval to Store Statistics (Minutes) text field Number of minutes DataSunrise keeps statistical
information (20160 minutes by default)
Time Interval at Which Old Statistics are Deleted (Minutes) text field Periodicity of removing old statistical
information (60 minutes by default)

Debug Traces subsection



Checkbox Description
AgentServerTrace Agent server trace. Enabling this parameter will add additional
information about progress of agent server activity
AntlrCalcLines Enable/disable ANTLR lines calculation. When enabled, increases
CPU utilization. Be careful when enabling
AntlrIntTrace Enable/disable expanded ANTLR tracing (used in MS SQL Server)

AntlrLexTrace Enable/disable ANTLR lexer tracing


AntlrTrace Enable/disable ANTLR parser tracing
AppUserTrace Application User Capturing tracing. Queries are compared with a
marker specified in database Instance properties. If comparing is
successful, a ConnectionEx: Establish app session entry is added to a
log. Otherwise an entry about unsuccessful comparing is added to a
log
AuditConnectionPoolTrace When enabled, Audit connection pool traces will be printed in logs
AuditLoadTrace Tracing of Audit Journal's copy/load mechanism
AuditPartitionTrace Trace creating/deleting partitions
AuditReportTrace Enabling this parameter will add additional information about
progress of audit reports
AuditTrace Trace messages captured by the Data Audit function
AuthenticatorTrace Trace Authenticator
AutoLockerTrace
AwsIAMResolverTrace Enable/Disable additional traces for AWS (for example some API
calls)
AwsRemoverServersTaskTrace When enabled, Copying proxy processing traces will be printed in
logs
AzureRemoveServersTrace When enabled, Azure Remove Servers Task traces will be printed in
logs
BackendMainThreadTrace If checked, enables additional logs for the main thread of
AppBackendService process. Useful if it's required to see main
process function calls. Printed per minute/second. Be careful when
enabling
CoreObjectCounterTrace
DAOManagerPoolTrace If checked, additional logs are enabled to monitor Dictionary
connections. Useful if you want to see the creation, deletion, or
movement of free and used connections between their containers
DAOManagerPoolSizeTrace If enabled, additional logs are enabled to monitor counters of
dictionary connections
DatabaseClientVersionTrace If enabled, Database client version will be printed in logs
DSARTrace DSAR tracing (searching for any information associated with a
specified key. A key can be any InfoType). Includes a resulting query
which gets all the data associated with the specified key
DataDiscoveryFileDbTrace Provides more detailed information output than the one with
DataDiscoveryFileFinderTrace

Checkbox Description
DataDiscoveryFileFinderTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task is running:
processed file in each thread. Processed column in file
DataDiscoveryFilterTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task is running: match
of specific filter. Result of match (success/failure)
DataDiscoveryIncrementalTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task with incremental
search is running
DataDiscoveryInventoryTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task with inventory is
running
DataDiscoveryMultiProcTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task is running:
deployment in a multiprocessor environment
DataDiscoveryOCRTrace Enable/Disable OCR trace usage for Discovery Image processing,
outputs OCR result in logs
DataDiscoveryObjectFilterTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task is skipping objects
from processing
DataDiscoveryRPCTimeTrace Enable/Disable traces for RPC request Data Discovery (Count time
per action)
DataDiscoverySqlFinderTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task is running:
processed column in SQL databases
DataDiscoveryTaskMgrTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task is running: task
management
DataDiscoveryTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task is running:
Processed file in each thread. Processed column in file. Match of
specific filter. Result of match(success/failure)
DdlDataModelLearningTrace DDL data model learning log trace for periodic task
DnsSrvTrace
DynamicSqlProcessingTrace When enabled, Dynamic SQL processing traces will be printed in
logs
ErResolverTrace
FileBufferTrace
FlushTrace
FreezeDetectorTrace
HealthCheckerTrace Log Health Check Periodic tasks
HttpClientTrace This parameter is responsible for logging requests and responses of
the HttpClient component
HttpCurlDebug Log all information sent and received by the web server
HttpCurlVerbose Log debugging information about the web server

Checkbox Description
IACTrace Check if you want to see IAC trace in log file
IntTestsTrace Messages about integration tests processing. Used for automated
tests by the product developers
InterchangeTrace Tracing of data exchange between two Backends or between a
Backend and a Core (HTTP or HTTPS).
JniTrace
JsonTrace Trace JSON requests and responses between a Web Console and a
DataSunrise server
LDAPTrace Trace LDAP connection events (used in User mapping and LDAP
authentication to the Web Console)
LM_DEBUG Debug all messages in a thread
LM_TRACE Trace all messages in a thread
LastPacketsTrace Trace last packets
LeaksTrace
LicenseTrace
LogToStdout Output messages to the standard output stream. By default,
DataSunrise uses a text terminal or console as the standard output
MaskingTrace When enabled, Masking processing traces will be printed in logs
MetadataCacheFillStatementTrace Enable/disable extra traces for DbObjects::fillStatement calls
MetadataCacheTrace Log debugging information about operations with internal
metadata cache which contains database structure
MetadataCacheVerboseTrace Enable extra traces to collect metadata
MetadataTrace Enable traces for metadata diagnostics. Check to get extensive
information on metadata
MsSqlParserStateTrace MS SQL Server parser state tracing
MsSqlSubTrace Send multistatement subqueries to the log

MsgHandlerProcessPacketTrace Tracing of packets delivered to a MessageHandler for parsing


MsgHandlerSvcTrace Trace Message handler
PackageParseErrorTrace If checked, logging of Oracle's package parsing errors is enabled.
If a query invokes a package, DataSunrise parses and analyzes the
DDL query. If parsing of the package fails, DataSunrise displays a
corresponding message; this parameter controls whether that
message is logged. You can disable this logging, but in that case
you will not know whether any packages failed to parse. If certain
packages have not been parsed, Audit and Security Rules
associated with them will not work
ParserDebugMode Save traffic dump in debug.dump and packet.dump files
PcapProxyDebugTrace Debugging information tracing for the mechanism of generating
PCAP files from proxy traffic (PcapProxyTrace)
PcapProxyTrace Generate .pcaps (libpcap-devel for Linux / npcap for Windows) for
each inbound/outbound connection. The .pcaps are stored in the
(pcaps/worker-<worker id>) folder within the DataSunrise's
installation folder . This feature is useful for debugging
PcapTrace Trace Pcap-captured network traffic
PcapUnprocessedSegmentTrace Not used
ProcessManagerTrace
ProxyEventHookTrace Additional logs for proxy events (debugging)
ProxyTrace Trace proxy messages about volume of data received and sent
QueryHistoryLearningTrace Tracing of Table Relations mechanism
RecognizerParserTrace Log debugging information about the SQL parser
RecognizerTreeTrace
RulesTrace Trace DataSunrise Rules. Messages related to loading and checking
Rules in a proxy. The Core loads Rules from the Dictionary at
startup and when Rules are changed. In this case, all Rule
settings are displayed in the logs. When a query passes through
a proxy, it is checked against all the Rules. Information about the
progress of such a check is displayed in the logs. For example,
if a Rule is configured for a certain column, then the logs will
contain information about whether this column is included in the
request. With Dynamic Masking, the original and masked query
text is displayed in the logs. Additionally displays messages about
problems with licensing of target databases (for example, if the
wrong Oracle and/or MS SQL Server Instance is licensed)
SMUXCodeTrace Enable MARS proxy code tracing to troubleshoot the multiplexer
SMUXTrace Enable MARS proxy tracing.
SSLParserTrace
SSOTrace Advanced logging of SSO used for Web Console authentication
StaticMaskingTrace Static Masking and Inplace-masking logging (selected objects and
masking methods)
StaticMaskingTraceWithData
SyslogTrace Trace connections to a Syslog server and sending notifications to it
SystemBackupDictionaryTrace Messages about creating Dictionary backups
SystemTasksTrace Enable or disable System periodic task traces
TFATrace Logging of 2FA mechanism used for proxy and Web Console
authentication
TaskSchedulerTrace
ThriftParserTrace Trace Thrift protocol parser
TrafficBuilderTrace Trace Traffic builder
TrailAuditTrace Tracing of the Trailing the DB Audit Logs mechanism
TrailAuditVerboseTrace Enable/disable tracing queries and sessions in a trail db native audit
UpdateConfigTrace Trace config updates

Parser Traces subsection



Checkbox Description
Db2ParserTrace Trace DB2 parser (log parsing sequence)
DynamoParserTrace Trace DynamoDB parser (log parsing sequence)
ElasticSearchParserTrace Trace ElasticSearch parser (log parsing sequence)
FirebirdHandlerTrace Not used
FirebirdParserTrace Not used
HanaParserTrace Trace SAP Hana parser (log parsing sequence)
HiveParserTrace Trace Hive parser (log parsing sequence)
MongoParserTrace Trace Mongo parser (log parsing sequence)
MSSQLParserTrace Trace SQL Server parser (log parsing sequence)
MySQLParserTrace Trace MySQL parser (log parsing sequence)
NetezzaParserTrace Trace Netezza parser (log parsing sequence)
OracleParserTrace Trace Oracle parser (log parsing sequence)
PostgreHandlerTrace Trace PostgreSQL data handler
PostgreParserTrace Trace PostgreSQL parser (log parsing sequence)
S3HandlerTrace Trace Amazon S3 data handler
S3ParserTrace Trace Amazon S3 parser (log parsing sequence)
SnowflakeParserTrace Trace Snowflake parser (log parsing sequence)
TeradataParserTrace Trace Teradata parser (log parsing sequence)
VerticaHandlerTrace Trace Vertica data handler
VerticaParserTrace Trace Vertica parser (log parsing sequence)

14.2.1 Limiting Size of Logs


A large number of obsolete log files can flood your storage and slow down your operations. We recommend
setting a reasonable limit (in megabytes) on the total size of your log files to prevent flooding your storage,
while keeping old logs only for as long as they are relevant to your configuration or compliance requirements.

Note: the actual log size depends on available storage space and your security policies.

This is how you can do it:


1. Navigate to System Settings → Logging&Logs → Logging
2. In the General subsection, set the maximum total size of your Backend logs in the Limit total size of
AppBackendService log files field
3. In the General subsection, set the maximum total size of your Core logs in the Limit total size of AppFirewallCore
log files field
4. You can also limit the size of an individual log file in the Maximum size of a single log file field (a conceptual
sketch of how such limits typically work follows this list)
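
The limits above are enforced by DataSunrise itself; the following minimal Python sketch only illustrates the idea behind a total-size limit, where the oldest log files are removed first until the directory fits under the configured threshold. The directory path, the limit value and the function name are hypothetical, not DataSunrise internals or defaults.

from pathlib import Path

# Hypothetical values mirroring the Web Console settings described above
LOG_DIR = Path("/opt/datasunrise/logs")   # hypothetical log directory
TOTAL_LIMIT_MB = 1024                     # cf. the "Limit total size ..." settings

def enforce_total_size_limit(log_dir: Path, total_limit_mb: int) -> None:
    """Remove the oldest log files until the directory fits under the limit."""
    logs = sorted(log_dir.glob("*.log"), key=lambda p: p.stat().st_mtime)
    total = sum(p.stat().st_size for p in logs)
    limit = total_limit_mb * 1024 * 1024
    for oldest in logs:
        if total <= limit:
            break
        total -= oldest.stat().st_size
        oldest.unlink()                   # oldest files are dropped first

if __name__ == "__main__":
    enforce_total_size_limit(LOG_DIR, TOTAL_LIMIT_MB)

A single-file limit (the Maximum size of a single log file setting) works the same way in spirit: once a log file grows past the limit, writing continues into a new file.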

14.2.2 Advanced Dictionary Operations


This subsection enables you to execute some advanced operations with your Dictionary database. For access
to these settings, navigate to System Settings → General → Configuration Control → Advanced Dictionary
Operations. Note that these settings are not available for an SQLite-based Dictionary.
Advanced Dictionary Operations
Operation Description
Encryption of Configuration Encrypts Dictionary columns using pgcrypto (see the sketch after this table). Refer to
Encrypting the Dictionary Files (PostgreSQL) while DataSunrise instance is running on page 387
Clean Dictionary Deletes Dictionary tables. Note that all existing DataSunrise settings and entities
will be deleted in the process
Change Dictionary Changes database used as Dictionary
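
The encryption itself is performed by DataSunrise when you run the Encryption of Configuration operation; the short Python sketch below only illustrates what pgcrypto-style symmetric column encryption looks like in principle. The connection string and key are hypothetical, while pgp_sym_encrypt and pgp_sym_decrypt are standard functions of PostgreSQL's pgcrypto extension.

import psycopg2

# Hypothetical connection string and symmetric key, for illustration only
DSN = "dbname=dictionary user=postgres host=localhost"
KEY = "my-dictionary-key"

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        cur.execute("CREATE EXTENSION IF NOT EXISTS pgcrypto")
        # Encrypt a value with a symmetric key, as pgcrypto column encryption does
        cur.execute("SELECT pgp_sym_encrypt(%s, %s)", ("sensitive setting", KEY))
        ciphertext = cur.fetchone()[0]
        # Decrypt it back with the same key
        cur.execute("SELECT pgp_sym_decrypt(%s, %s)", (ciphertext, KEY))
        print(cur.fetchone()[0])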

14.3 Additional Parameters


This subsection contains system settings useful for experienced users as well as database-specific settings. To access
the additional settings, navigate to System Settings → Additional Parameters.
To change a parameter's value, locate the required parameter in the list (you can use the Search by the Name
feature), change its value and click Save. Note that you can also display settings applicable only to a certain
DataSunrise server (use the <All> drop-down list to select the server of interest).
Parameters list

Parameter Default value Description


AWSClientMax ErrorRetryCount 10 The number of retries allowed at the service
client level; the SDK retries the operation the
specified number of times before failing and
throwing an exception
AwsIAMUserResolver JSON path to a custom field in a CloudTrail
CustomJsonPath event. If the path contains a dot ( . )
symbol, then the name needs to be
enclosed in quotation marks (e.g.
"some.field.name".SecondField.EndField)
AWSSDKLoggingEnable Disabled If enabled, enables additional logs within
AWS C++ SDK. Useful if it's required to
resolve some AWS issues
AWSusesHTTPS Enabled AWS uses HTTPS
AcceptOnlyEncryptedSessions Enabled Accept only sessions encrypted with SSL
ActionAfterErrorInParsing Disabled Continue operation after encountering a
parsing error
AdditionalEnvironmentPaths The directory of DS executable file. It will be
added to the PATH environment variable
AgentHost 0.0.0.0 Not used
AgentPort 5566 Not used
AllowRequestsToVendorSite Enabled If enabled, allows requests to the vendor
site. For example, to check for a new version
of product or to update the vulnerability
assessment database
AllowUpdateMetadata Enabled Update metadata using queries that request
BunchMode all schema objects. For MySQL, use the
MySQLAllowUpdateMetadataBunchMode
property
AlwaysResendPacket Disabled Proxy will forward packets that were analyzed
AfterParsing
AlwaysResendPacketByQueue Disabled
AnltrTokenPoolMaxWarmSize 5
AntlrMaxConstantsInExpression 100 The number of constant values in the
expression up to which the SQL parser will
continue parsing. The rest of the query will be
skipped and left unparsed
AntlrMaxRowsInBulkInsert 10 The number of rows in a bulk insert
statement up to which the SQL parser will
continue parsing. The rest of the query will be
skipped and left unparsed
AppInterchangeThreadCount 1 Number of Core threads used for
ApplicationInterchange (Mongoose web
server)
AthenaMetadataInfo 5000 The connection timeout when receiving
ConnectionTimeout Athena metadata info
AthenaMetadataInfo 5000 The query timeout when receiving Athena
RequestTimeout metadata info
AuditArchiveFolderSizeLimit 1024 Used for audit archiving. Limits the size of the
folder to store audit data and defines the free
size of storage
AuditCleanerDeleteLimit 500000 If the value is greater than zero, restricts
the number of affected rows in DELETE SQL
queries when cleaning the Audit via a periodic
task or manually (only used for MariaDB
Cluster)
AuditConnectionsLoadInterval 1024 Upload the data to the Audit Storage having
reached the specified limit (bytes)
AuditDataMySQLUseLoad Enabled If disabled, store MySQL audited data using
INSERT statements. If enabled, store MySQL
audited data using LOAD statements
AuditDataPgSQLUseLoad Enabled If disabled, store PostgreSQL audited data
using INSERT statements. If enabled, store
PostgreSQL audited data using LOAD
statements
AuditDataScheduleTimer 10 Period of time (seconds) that should elapse
before data is uploaded to Audit Storage
AuditDiscFreeSpaceLimit 10240 The threshold for SQLite used as an Audit
Storage. If the available space on the disc
where audit.db files are stored is less than the
specified value (MB), the corresponding alert
wil be displayed in Event Monitor → System
Events
AuditExecutionRules 2048 Only for streaming audit events (MySQL
LoadInterval LOAD, Postgres COPY)
AuditFieldCryptEnabled Disabled
AuditFieldCryptOptions
AuditHighWaterMark 20000 Maximum number of messages in a thread to
be processed by DataAudit.
AuditJournalQueueFill 15 If the internal queue filling of the Audit
PercentWarning journal is more than the specified value (in
%), a corresponding alert will be displayed in
Event Monitor → System Events
AuditJournalThreadMax 3600
CycleGap
AuditLastDbVersion
AuditLastGmtOffset 0
AuditLoadDataFolderSizeLimit 5000 The maximum size of the folder where the
audit data in CSV format is temporarily
stored, before it can be fetched to the Audit
database
AuditLoadFilesSizeLimit 100000
AuditLobOperation Disabled Enable/disable auditing of LOB operation
(Oracle Database specific)
AuditLowWaterMark 5000 Minimum number of messages in a thread to
be processed by DataAudit before reaching
their limit number (AuditHighWaterMark).
AuditMaskPassword 1
AuditMaskPasswordValue ******

AuditMaxColumnDataSize 65536 Maximum size of a data column to be audited


AuditMySQLInnoDB 30
LockTimeout
AuditObjects Disabled Enable/disable auditing of DB objects
for each operation (required for filtering
operations by objects in reports)
AuditOperationData 65536 The size of data (bytes) to be reached before
LoadInterval being uploaded to the Audit Storage
AuditOperationDataset 1024 Upload operation data sets to the Audit
LoadInterval Storage after reaching the specified limit
(bytes)
AuditOperationExec 2048 Upload of operation calls to the Audit Storage
LoadInterval after reaching the specified limit (bytes)
AuditOperationGroups 1024
LoadInterval
AuditOperationMeta 8192 The size of metadata (bytes) to be reached
LoadInterval before being uploaded to the Audit Storage
AuditOperationRules 2048 The size of operation rules (bytes) to be
LoadInterval reached before being uploaded to the Audit
Storage
AuditOperations LoadInterval 1024 The size of operation logs (bytes) to be
reached before being uploaded to the Audit
Storage
AuditOtlLongSize 1024 The maximum buffer size for operations with
Large Objects: varchar_long, raw_long, CLOB
and BLOB. This function is to be called to
increase the buffer size
AuditOtlStreamBufferSize 50 OTL stream buffer size. Not used
AuditPartitionCount 5 Number of audit partitions created in
CreatedInAdvance advance
AuditPartitionFirstEnd Date/time of the first partition's end. For
DateTime 2018-01-01 00:00 example if you need the partition border
to be at 00:00 Monday, then specify any
Monday's midnight.
AuditPartitionFuture 60 Time to delete all future partitions at and to
RecreateTime create at least one new partition
AuditPartitionMode Enabled If enabled, enables Audit Partitions when
configuring new external Audit Storage (only
MySQL/PostgreSQL/MS SQL Server)
AuditPartitionShort Disabled If disabled, partition length is measured in
days. If enabled, partition length is measured
in minutes
AuditPortionSize 500000 Frequency of flush. The higher the parameter,
the quicker DataAudit gets the data, but the
lower the throughput
AuditPutThreadQueueWait 0
AuditRotationAgeThreshold 168 For how long to store a current audit.db file
before creating a new one.
AuditRotationMaxCount 1000 Maximum number of audit.db files to store.
AuditRotationMode Enabled If enabled, enables Audit Rotation when
initializing new internal Audit Storage (only
SQLite)
AuditRotationSizeThreshold 1024 Maximum size a current audit.db file can
reach before a new one is created
AuditRotationTotalSize 0
AuditRulesObjectDetail 1024
LoadInterval
AuditRulesStat LoadInterval 0
AuditSessionRules LoadInterval 1024
AuditSessions LoadInterval 2048 The size of session data (bytes) to be reached
before being uploaded to the Audit Storage
AuditSleepSeconds 0
AuditSqliteCacheSize 10000 Size of the cache of SQLite database which is
used to store audit data
AuditSqliteJournalMode 4 SQLite Rollback journal mode
AuditSqliteSynchronous Enabled Enable/disable 'pragma synchronous =
NORMAL' when opening a connection with a
database
AuditStartWaitTime 60 Timeout to wait before executing the "Start
audit" command (seconds)
AuditStopTime 300 Maximum timeout after getting a temporary
audit stop message.
AuditStopWaitTime 60 Timeout to wait before executing the "Stop
audit" command (seconds)
AuditSubQueryOperation 1024
LoadInterval
AuditThreads 5 Number of threads used inside the firewall's
Core for processing audit data
AuditThreadsPriority 0 Priority value for Audit Journal thread pool
AuditThroughputStat 2048 The size of data (bytes) to be reached before
LoadInterval being uploaded to the Audit Storage
AuditTrafficStat LoadInterval 0
AuditTransactions LoadInterval 2048 The size of operation transactions (bytes) to
be reached before being uploaded to the
Audit Storage
AuditTryRepairTimer 60 The time interval (seconds) between attempts
to exit emergency mode and to restore audit
data
AuroraErrorCode 5555 Aurora MySQL blocking error code
AuroraMySQLConnector Which transport layer encryption protocols
AllowedTLSVersions the Aurora MySQL connector permits
for encrypted connections. Example:
TLSv1,TLSv1.1,TLSv1.2,TLSv1.3
AuthorizationErrors 10 Period of time (minutes) that should elapse
CheckPeriod before the amount of failed database
login attempts is checked. Refer to the
AuthorizationErrorsLimit parameter
AuthorizationErrors Enable Disabled Enable/disable checking of failed database
login attempts
AuthorizationErrors Limit 5 Maximum amount of failed database login
attempts. When the specified amount is
exceeded, a corresponding message is
displayed in Event Monitor → System
Events
AwsIAMUserResolver A list of regions to search for CloudTrail
CloudTrailRegions events across when resolving temporary
credentials. Region names in the list should
be separated with a comma
AwsIAMUser ResolverUseProxy Disabled Enable/disable HTTP proxy usage for AWS
IAM User Resolver Task
AwsProductCode e4d3d3b6266ocd12it8gny7gh AWS product code

AzureEnableQueueDiscovery Enabled Specifies whether sensitive data should be
discovered in Azure Storage queues
BackendAudit 10 Number of Back end connections
ConnectionCount
BackendDynamic 5 This setting is used for Linux only.
MemoryArenas
If this parameter's value is not zero, it
defines a hard limit on the maximum number
of arenas that can be created. An arena
represents a pool of memory that can be
used by malloc (and similar) calls to service
allocation requests. Arenas are thread-safe
and therefore may have multiple concurrent
memory requests. The trade-off is between
the number of threads and the number
of arenas. The more arenas you have, the
lower per-thread contention, but the higher
memory usage.
The default value of this parameter is 0, which
means that the limit on the number of arenas
is determined by the system.
The lower the value (except zero), the lower
virtual memory consumption

BackendHttpSslCipherList ALL:!EXPORT:!LOW:!aNULL:!eNULL:!SSLv2:!RC4:!3DES:!SEED
List of SSL ciphers for the DataSunrise Backend service. SSL ciphers
specified here will be used for communication with the DataSunrise
Web Console only

BackendMainThread 3600
MaxCycleGap
BackendSigsegvDetail Enabled Provides detailed information on Backend
and system except stacktrace in the case of a
Backend failure
BackendSigsegvLogName BackendSigsegvLog Prefix of the log file used to store
information about a Backend failure

BackgroundAuditUpdater 300 Delay for the Backend to start updating the


DelayedStart audit database in the background
BackgroundInterManager 10 Delay for updating proxy statuses in the
UpdateDuration background process
CEF_AuthErrReportRow
CEF:${CEF.Version}|${CEF.DeviceVendor}|${CEF.DeviceProduct}|${CEF.DeviceVersion}|23|Session Report Row|5|report_id=${Report.Id} report_time=${Report.Time} row=${Report.Row}

CEF_DiscoveryReportHeader Pattern for the head of the table (for the Syslog Discovery report layout). Default value:
CEF:${CEF.Version}|${CEF.DeviceVendor}|${CEF.DeviceProduct}|${CEF.DeviceVersion}|28|Discovery Report Header|5|report_id=${Report.Id} report_time=${Report.Time} header=${Report.Header}

CEF_DiscoveryReportRow Pattern for the table string (for the Syslog Discovery report layout). Default value:
CEF:${CEF.Version}|${CEF.DeviceVendor}|${CEF.DeviceProduct}|${CEF.DeviceVersion}|29|Discovery Report Row|5|report_id=${Report.Id} report_time=${Report.Time} row=${Report.Row}

CEF_OperationReportHeader Pattern for the head of the table (for the Syslog report layout). Default value:
CEF:${CEF.Version}|${CEF.DeviceVendor}|${CEF.DeviceProduct}|${CEF.DeviceVersion}|20|Operation Report Header|5|report_id=${Report.Id} report_time=${Report.Time} header=${Report.Header}

CEF_OperationReportRow Pattern for the table string (for the Syslog report layout). Default value:
CEF:${CEF.Version}|${CEF.DeviceVendor}|${CEF.DeviceProduct}|${CEF.DeviceVersion}|21|Operation Report Row|5|report_id=${Report.Id} report_time=${Report.Time} row=${Report.Row}

CEF_SessionReportHeader Header template (column names from your CSV file) used for reporting by sessions for Syslog (Event Monitor → Report Gen). Default value:
CEF:${CEF.Version}|${CEF.DeviceVendor}|${CEF.DeviceProduct}|${CEF.DeviceVersion}|22|Session Report Header|5|report_id=${Report.Id} report_time=${Report.Time} header=${Report.Header}
CheckFirewallActivityPeriod 0 Period of time that should elapse before the
activity of proxies is checked. If there was no
activity during the specified period of time, a
corresponding message is displayed in Event
Monitor → System Events
CheckHackMsSQLRule 1
CheckLicenseExpired 14 Send prompts about a license which will
expire in the specified number of days
CleanAuditBySessionID Enabled When clearing the Audit, session_id is used,
which significantly speeds up the cleaning
process (currently only used in "Remove all
Events before the Date")
CollectCoreExitInfo 0
ConfigPrefix The parameter to assign a prefix for the
dictionary.db file. Can be used for launching
DataSunrise with test configurations
CopyTextFormatData 127
ChunkMaxSize
CoreAdditionalCommand Additional parameters for the firewall Core
LineArg command line
CoreDynamicMemory Arenas 5 See the BackendDynamic MemoryArenas
setting. Change this setting along with the
MessageHandler ProxyThreads setting

CoreHttpSslCipherList ALL:!EXPORT:!LOW:!aNULL:!eNULL:!SSLv2:!RC4:!3DES:!SEED
List of SSL ciphers for DataSunrise Core applications. SSL ciphers
specified here are used for communication between the Backend and
Core applications
CoreLoadTimeout 90
CoreMainThreadMax CycleGap 3600
CoreRPCAuthToken LiveTime 600 Live time of an authentication token for Core
RPC (seconds)
CoreTraceFilter Enable Core tracing only for the Instances
specified in the Value field. Note that you
may leave the complete Value tab empty (it
will not cause any effect on Log tracing)
CyberArkApplicationID DataSunriseDBSecurity CyberArk Application ID

DAOBulkInsertSize 100 The maximum amount of rows inserted with a


single bulk insert at once via DAO subsystem
DAOShortStatement Timeout 600 The parameter is used to limit the execution
time of the operation in DAO. Can be
applied only to those connections where
the parameter is explicitly used. 0 - use the
default session value
DataDiscovery 1 The maximum nesting depth of an archive
ArchiveNestingMax
DataDiscoveryChunkSize 50 Chunk size to be able to support chunk
processing
DataDiscoveryDirtyRead Disabled Dirty Read is used to avoid blocking while
running a Data Discovery Task. Supported by
MySQL-like databases
DataDiscoveryEnable Disabled Enable/disable identification of attribute
AttributesDuplicates duplicates when executing a Data Discovery
task
DataDiscoveryFiles 1500 Maximum number of files in basic threads
HighWatermark to be processed by Data Discovery in a File
database
DataDiscoveryFiles 1000 Minimum number of files in basic threads
LowWatermark to be processed by Data Discovery in a File
database
DataDiscoveryFilesNLP 25 Maximum number of files in NLP threads
HighWatermark to be processed by Data Discovery in a File
database
DataDiscoveryFilesNLP 20 Minimum number of files in NLP threads
LowWatermark to be processed by Data Discovery in a File
database
DataDiscoveryFilesNLP 1 The number of NLP threads to Data Discovery
ThreadPools in a File database. If no NLP attributes are
selected, these threads work as basic ones
DataDiscoveryHighWatermark 25 The maximum number of messages in basic
threads to be processed by Data Discovery in
SQL and NoSQL databases
DataDiscoveryInner 50 The maximum chunk size of inner archives
ArchiveChunkSize
DataDiscoveryLowWatermark 20 The minimum number of files in basic threads
to be processed by Data Discovery in SQL
and NoSQL databases
DataDiscoveryMaxFileSize 0 The entire file size to scan as a sum of chunks
ForChunkProcessing (DataSunrise processes chunks until this
parameter's value is reached)
DataDiscoveryMaxFileSize 50 The entire file size to scan that is
ToScanForUnsupported unsupported for chunk processing (the file
will be downloaded as a single whole object)
DataDiscoveryMax 1000 Maximum number of data snippets to save
NumberOfSnippetsToSave for every attribute of Data Discovery task (0:
means unlimited count)
DataDiscoveryMin 2 Minimum number of columns a file should
CSVColumnsCount contain to be detected as CSV by Data
Discovery
DataDiscoveryMin 3 Minimum number of rows a file should
CSVRowsCount contain to be detected as CSV by Data
Discovery
DataDiscoveryMultiprocess 100 Maximum number of available batches that
BatchCreationHighWaterMark can be created
DataDiscoveryMultiprocess 50 Minimum number of available batches that
BatchCreationLowWaterMark can be created
DataDiscoveryMultiprocess 120 Time while multiprocess task waits for new
ServersUpWaitingTime servers before ending with an error in case
there are no available servers left
DataDiscoveryNLP 25 Maximum number of files in NLP threads to
HighWatermark be processed by Data Discovery in SQL and
NoSQL databases
DataDiscoveryNLP 20 Minimum number of files in NLP threads to
LowWatermark be processed by Data Discovery in SQL and
NoSQL databases
DataDiscoveryNLP 1 The number of NLP threads used for Data
ThreadPoolSize Discovery in SQL and NoSQL databases. If
no NLP attributes are selected, these threads
work as basic ones
DataDiscoveryParquet 1000000 The number of rows DataSunrise processes
ChunkProcessingSelectCount with chunks per each iteration for Parquet
files
DataDiscoverySasChunk 100000 The number of rows DataSunrise processes
ProcessingSelectCount with chunks per each iteration for Sas files
DataDiscoverySave Disabled Enable/disable saving information about
UnmatchedFilesInfo unmatched files in the form of an error string
when executing a Data Discovery task
DataDiscoverySnippet 30 Number of border symbols near the value as
BorderLength a result of DataDiscovery task
DataDiscovery ThreadPoolSize 7 The number of basic threads used for Data
Discovery in SQL and NoSQL databases
DB2ErrorCode 11555 Code displayed when a DB2 error occurs
DB2PreparedStatement 0 • 0 - Spawning (if needed) a new prepared
MaskingMethod statement with new sqlIf
• 1 - Recompiling (if needed) a current
prepared statement with new SQL

DSARReportRowsLimit 1
DataBufferSize 64 Size of buffer for keeping network packages
DataDiscovery Disabled Enables filtering by empty column names in
CSVEnableFilterFor CSV files for Data Discovery. When enabled, it
EmptyHeaders may reduce the match count for such files
DataDiscoveryEnable Disabled Enables filtering by column names in
ColumnFilterForUnstructedFiles Unstructured files where data is kept in a
single <all file> column (e.g. PDF) for Data
Discovery
DataDiscoveryExecute Execute the command after completion of
Command any Data Discovery task related to a creation
of a report
DataDiscoveryFileHeader 4 File chunk of the specified size will be
SizeForAnalyze downloaded and file type will be analyzed
and column names obtained. This setting
affects CSV and unsupported formats
DataDiscoveryFileSize 1000 File size to download a complete file. If
ToFullDownload file size equals or is less, we download the
complete file. Otherwise, file is downloaded
to pnDataDiscoveryFileHeaderSizeForAnalyze
DataDiscoveryFilesHigh 25 Maximum number of files in basic threads
Watermark to be processed by Data Discovery in File
database
DataDiscoveryFilesLow 20 Minimum number of files in basic threads
Watermark to be processed by Data Discovery in File
database
DataDiscoveryFilesNLPHigh 25 Maximum number of files in NLP threads
Watermark to be processed by Data Discovery in File
database
DataDiscoveryFilesNLPLow 20 Minimum number of files in NLP threads
Watermark to be processed by Data Discovery in File
database
DataDiscoveryFilesNLP 1 The number of NLP threads to Data Discovery
ThreadPools in File database. If no NLP attributes are
selected, these threads will work as basic
DataDiscoveryFiles 9 The number of basic threads to Data
ThreadPools Discovery in File database
DataDiscoveryHigh Watermark 25 Maximum number of messages in basic
threads to be processed by Data Discovery in
SQL and NoSQL databases
DataDiscoveryLow Watermark 20 Minimum number of files in basic threads to
be processed by Data Discovery in SQL and
NoSQL databases
DataDiscovery 1 The strategy of saving Data Discovery
MatchesSaveStrategy occurrences in Dictionary:
• 0 – save only the first discovered desired
phrase and corresponding snippet and
display the actual number of occurrences
• 1 – save only unique discovered desired
phrases and corresponding snippets
as separate entries and display the actual
number of occurrences
• 2 – save all discovered desired phrases
and corresponding snippets as separate
entries

DataDiscoveryMultiprocess 2 The number of batch restarts in the Data


BatchRestartCount Discovery multi process logic
DataDiscoveryMultiprocess 10 The size of parts a batch is split to. It also
BatchSplitFactor determines whether auto splitting of files
works:
• > 1: auto splitting is ON
• 1: auto splitting is OFF

DataDiscoveryMultiprocess 1500 This parameter defines how multiprocess


MaxBatchSize Data Discovery distributes files across
servers. The parameter defines the
maximum amount of megabytes for a
single batch. Works in conjunction with
DataDiscoveryMultiprocessMaxFilesInBatch
DataDiscoveryMultiprocess 50 This parameter defines how multiprocess
MaxFilesInBatch Data Discovery distributes files across servers.
The parameter defines the maximum number
of files in a single batch. Works in conjunction
with DataDiscoveryMultiprocessMaxBatchSize
DataDiscoveryMultiprocess 500 The minimum size of a file batch that should
MinBatchSize be sent to child machines for processing
DataDiscoveryMultiprocess Disabled Enables operation using only one role for
UseUniqueRole multi-process Data Discovery tasks (Main or
deferred)
DataDiscoveryMultiprocess Enabled If enabled, a Multiprocess DD task waits in a
WaitInQueue queue for at least one available server to run
the task on: once at least one server is free,
the task is removed from the queue. If the
parameter's disabled, the task is aborted with
an error
DataDiscoveryNLPHigh 25 Maximum number of files in NLP threads to
Watermark be processed by Data Discovery in SQL and
NoSQL databases
DataDiscoveryNLPLow 20 Minimum number of files in NLP threads to
Watermark be processed by Data Discovery in SQL and
NoSQL databases
DataDiscoveryNLPThread 1 The number of NLP threads to Data Discovery
PoolSize in SQL and NoSQL databases. If no NLP
attributes are selected, these threads will
work as basic ones
DataDiscoveryResult 100000 The number of rows to insert during one
RowsPerTransaction transaction when saving Data Discovery
results to Dictionary
DataDiscoveryS3FilePart 50 Max file size (Mb) for Amazon S3 data
ToRead discovery
DataDiscoverySnippet 30 Amount of border symbols near the value of
BorderLength interest as a result of a DataDiscovery task
DataDiscoveryStorage 2000 Storage size of selected table data from
CacheSize databases (except Amazon S3)
DataDiscoveryThread PoolSize 9 The number of basic threads to Data
Discovery in SQL and NoSQL databases
DataDiscoveryUseAmazon Disabled Use Amazon Textract for OCR Data Discovery
TextractOCR across Amazon S3 instead of the native
algorithm. Note that extra billing from
Amazon may apply

DataDiscoveryUseAmazon Disabled Let Amazon Textract download file by itself


TextractS3Integration during OCR Data Discovery process for
Amazon S3. By default, a file is downloaded
by DataSunrise server first and then is loaded
to Amazon Textract.
DatabaseUpdaterHigh 10000 Used together with
WaterMark DatabaseUpdaterLowWaterMark for the
mechanism which saves metadata missing
in the Dictionary. The metadata is sent in a
queue of multiple threads and saved in a
separate thread which takes the metadata
from the queue. This parameter determines
how many bytes can be stored in the queue
before it's considered as full
DatabaseUpdaterLow 10000 Used together with
WaterMark DatabaseUpdaterHighWaterMark. Determines
how many bytes can be included in the queue
before the supplier threads are allowed to
add additional tasks to the queue
DatabaseUpdaterUsePutq 1
WithTimeout
Db2BackendLogin Timeout 10 Time the process has to connect to a DB2
back end before timeout (seconds)
Db2KeyStashPath Full path to retrieved DB2 certificates storage
Db2KeyStoragePath Full path to trusted DB2 certificates storage
Db2UseXmlAgg Function 0
DelayForMetadata Updating 0
DictionaryBackupFolder ./dictionaryBackup Dictionary backup folder

DictionaryOtlLongSize 256 The maximum buffer size for operations with


Large Objects (varchar_long) in an external
Dictionary database
DisableAudit 0
DisableAuditClean Notify 0
DisableMetadata Cache 0
DisablePacket Parsing 0
DiscoveryMaxBatch RowsCount 50000 Number of rows Data Discovery reads from
a table during one run. For example: if there
are 30 rows in a table and this parameter
is set to 10, Discovery will do three reading
runs. The parameter’s value impacts memory
consumption
DataDiscoveryStatement 120 Statements timeout (seconds). If the timeout
Timeout has expired, the statement will be aborted.
This parameter is used only in Data Discovery.
Isn't applicable to DB2, SAP Hana and Vertica
• 0: unlimited timeout
• -1: default server timeout

DistributeSessions ToThreads 1
DoubleRunGuard Enabled Enable protection from running multiple
DataSunrise instances with single
configuration
DsFunctionRunTimeout 10
DsTableCheckDurationMs 1000
DumpServerURL HTTP server for sending crash dumps
DynamoUserUpdateEnable Enabled Background refreshing of IAM user names list
and their accessKeyId in the Core
DynamoUserUpdatePeriod 5 Period of updating the IAM user names list
EDConnectionTimeout 30000
EDServerDefaultHost 0
EDServerDefaultPort 53002
ElasticSearchMax MaskedSize 500
EnableAWSMetrics Disabled Allows sending metrics to AWS
EnableDataConverter Enabled Convert binary data to text format
EnableHyperscan Disabled If enabled, the Hyperscan regular expression
library is used
EnableOracle NativeEncryption Disabled Enable connections encrypted with Oracle
native encryption
EnableRe2c Enabled If enabled, the Re2c regular expressions
library is used
EnterpriseOID 1.3.6.1.4.1.7777 DataSunrise's Enterprise OID

EventManagerFlushInterval 60 Interval (seconds) after which all subscribers


InSeconds will be flushed. By default, 60 seconds, i.e.
once a minute
ExceptionInAsserts Enabled DataSunrise's behavior when a critical parsing
FromRecognizer error occurs:
• Disabled - abort operation with an error
• Enabled - log the error message and
continue parsing SQL queries

ExternalJSONMetadata Metadata in JSON format for RPC


createNewInstanceByExternalConfig and
updateDbInstanceByExternalConfig
FileBufferAuto CreateBucket 0
FileBufferPartRead PerBlock 4 Size of masked data blocks (Mb) to be
gradually uploaded from an Amazon S3
bucket
FileBufferStoragePath ds-file-buffer Place to store temporary file buffers (Amazon S3 bucket)

FileBufferUseLocal Storage Enabled If enabled, DataSunrise uses your local disk


as the storage of temporary file buffers.
Otherwise an Amazon S3 bucket will be used
FilledMessageHandlerSet 1
ParsingError
FirewallCoreGenerateInfo Disabled If enabled, displays stacktrace and other
OnExit information when the Core is stopped
FirewallCoreSigsegv Detail Enabled Provides detailed information on Core and
system including stacktrace in the case of a
Core failure
FirewallCoreSigsegvLogName CoreSigsegvLog Prefix of a log file to store information
about a Core failure

FlushTimeout 30
ForceAudit Disabled A substitute of the outdated
DISABLE_SQL_RECOGNIZER. Enables you to
audit any queries
ForceFlushCoreLog Disabled Forces each line of traces to be flushed to the
log file
ForceFlushUserLog Enabled Forces each line of user traces to be flushed to
the log file
FunctionMasking 0
GenerateCrashDump 0 Create Core crash dump in TempPath:
• 0 - Disabled
• 1 - Normal dump
• 2 - Extended dump

GenerateCrashDumpBackend 0 Create Backend crash dump in TempPath:


• 0 - Disabled
• 1 - Normal dump
• 2 - Extended dump

GreenplumErrorCode 5555 Greenplum error code


GreenplumParserHandle 1
CoreObject
GreenplumParserRequire Enabled Greenplum parser requests session metadata
SessionMetadata
GreenplumParserSSLMode Enabled Enable Greenplum parser SSL mode
GreenplumParserTestMode 0
GreenplumProtocolReader 524288
BufferWatermark
HAClusterName DataSunrise

HanaErrorCode 5555 SAP Hana error code


HanaIsODataProxy Disabled Enable data auditing via the OData protocol
(SAP Hana)
HiveErrorCode 5555 Hive error code
HiveParserHandle CoreObject Enabled Enable data handling
HiveParserReaderBuffer 524288 Maximum size of Hive protocol reader packet
WaterMark buffer
HiveParserTestMode Disabled Hive parsing in test mode (debugging)
IgnoreSetApp Enabled If enabled, SQL operations used for
UserSqlOperation application user setting are ignored for any
Rules
ImageDataDiscovery Disabled When enabled, Data Discovery task will scan
images for sensitive data
KeepAliveInterval 1 Time interval to base the following
timers on: pnSSLResumeTimeout,
pnSslCacheCheckInterval, pnSMUXTimeout
KibanaProxyHost localhost Name of the host where the Kibana service is
located; required for DataSunrise to use
Kibana's API
KibanaProxyAwsRegion AWS Region of Kibana service required for
DataSunrise to use Kibana’s API
KibanaProxyAwsRoleArn ARN Role of Kibana service required for
DataSunrise to use Kibana’s API
KibanaProxyCryptoType 0 Kibana service protocol type required for
DataSunrise to use Kibana’s API
KibanaProxyIndexPatternID Kibana Index Pattern ID required for
DataSunrise to use Kibana’s API
KibanaVerifySSL Enabled Enable/disable SSL verification for Kibana.
Disable if you're going to use self-signed
certificates
KillCoreOnExit Enabled Stop DataSunrise Core process on exit
LastPacketsDump Disabled Enable/disable dumping of a last packet in
case of a parsing error
LastPacketsDump MaxSize 512 Maximum size of a last packets dump file
LdapAlwaysTrustServer 1
Certificate
LdapLoginCacheTimeout 0 Number of seconds an LDAP user cache entry exists
LdapNetworkTimeout 8 LDAP network timeout value
LicenseCheckPeriod 5 Time period (minutes) at which all traffic
sources request their availability
LoadBalancerHost Host name or IP address of a Load balancer
that is located in front of DS firewall
LoadOracleUsers 0 Load metadata associated with Oracle users.
This metadata includes user names, user
password hashes. This metadata is saved in
the Dictionary and is used to encrypt native
Oracle authentication packets
LoadSystemRoutine Disabled Whether to load system DDL
AndViewDDL if UseMetadataFunctionDDL,
UseMetadataViewDDL options are enabled
LogNumberTextWidth 6
LoginCheckIPChanges 0
LoginFailedAttemptsNumber 5 Number of attempts to enter an
invalid password or verification code
when entering the Web Console,
before the user will be banned for
LoginFailedAttemptsTimeoutSeconds.
LoginFailedAttempts 60 Number of seconds to ban the user for if
TimeoutSeconds they have run out of the login attempts defined by
LoginFailedAttemptsNumber
LoginFailedPeriodOfMsg 600 Time period (seconds) for sending a message
SendingSeconds about a banned user to administrators
LoginPassword ExpirationDays 0
LogsDiscFreeSpaceLimit 10240 If the available space on the disc where logs
are stored is less than a specified value (MB),
the corresponding alert is displayed in Event
Monitor → System Events.
MailAlternativeHostname dshost An alternate hostname when working with a
subscriber's mail. Not connected with sending
events (for example, when using 2FA)
MailCurlDebug 0 Enable/disable the additional logging level
for sending notifications
MailCurlVerbose 0 Increased logging verbosity when working
with an SMTP subscriber. Recommended to
be used in conjunction with MailCurlDebug
enabled
MailCustomUserAgent DataSunrise Database Security Suite A custom value for the User-Agent field.
Only relevant if MailUseUserAgent and
MailUseCustomUserAgent are enabled.

MailEnableTCPNoDelay Enabled Whether to use Nagle's algorithm or not. By


default, it is used i.e. the value of this variable
is FALSE
MailReplaceLForCRtoCRLF Enabled Whether to replace a lone CR or LF with a full
CRLF in messages for other SMTP senders
(Mail). See RFC 5321 for more information
MailUseAlternative Hostname Disabled Whether to use an alternate hostname when
working with a subscriber's mail. Not connected
with sending events (for example, when using 2FA)
MailUseCustomUserAgent Disabled Whether to use a custom User-Agent field
value or use an internal preset value. Relevant
only when MailUseUserAgent is enabled
MailUseUserAgent Enabled Whether to use the User-Agent field or not.
May be needed for some servers and can be
analyzed by them
MariaDBErrorCode 5555 MariaDB error code for blocking
MarkOracleSession Enabled If this parameter is set, parser will add prefix
'ds:' to processed columns in v$session
MaskingCSVMin 2 Minimum number of columns a file should
ColumnsCount contain to be detected as CSV by Masking
MaskingCSVMin RowsCount 2 Minimum number of rows a file should
contain to be detected as CSV by Masking
MaxBackendMemory 6105 The maximum volume of virtual memory that
can be used by the Backend. If exceeded, a
warning is displayed
MaxBackendMemory 12211 The maximum volume of virtual memory that
ForTerminate can be used by the Backend. If exceeded,
DataSunrise will try to allocate some memory
and if failed, the Backend will be shut down
MaxCoreMemory 6105 The maximum volume of virtual memory
that can be used by the Core. If exceeded, a
warning is displayed
MaxCoreMemory ForTerminate 12211 The maximum volume of virtual memory
that can be used by the Core. If exceeded,
DataSunrise will try to allocate some memory
and if failed, the Core will be shut down
MaxDeferredPackets 1000 The maximum number of deferred packets for
ForParsing parsing
MaxEventsIn SubscriberEmail 100 Maximum number of event notifications to be
included into one Email
MaxExecutionsAuditFor 10000 If the number of Executions for an operation
OperationPerMinute was more than <value>, then these
Executions will not be audited
MaxFirewallBackup Count 4 Maximum number of DataSunrise backups to
store, which are created when updating the
firewall via the Web Console. Other backups
will be deleted
MaxLogQuerySize 8192 The maximum allowed size of an unparsed
query in logs
MaxNotResended 1024
PacketInterval
MaxQuerySizeLimit 10485760 The maximum query size allowed for normal
processing. If the query exceeds this limit the
action specified by MaxQuerySizeLimitAction
parameter will be performed
MaxQuerySize LimitAction 0 The action to be performed when a query
is received that exceeds the size specified in
the parameter MaxQuerySizeLimit. Possible
actions:
• 0 - Skip
• 1 - Exception
• 2 - Disconnect

MaxSaveRowsCount 20
MaxUncommitted ProxyRead 1024 The maximum size of the buffer (for reading
data from proxy)
MessageHandlerConnections Enabled Distribute connections in the
DistibuteByHost MessageHandler threads by the client host
MessageHandlerMain 1 The number of threads used to process the
QueueUseThreads main queues. <n> - thread count
MessageHandler ProxyThreads 5 Number of threads used to process
database queries that pass through the
proxy. Change this setting along with the
CoreDynamicMemoryArenas setting
MessageHandlerQueue 15 If the internal queue filling of the Message
FillPercentWarning Handler is more than a specified value (in %),
a corresponding alert is displayed in Event
Monitor → System Events
MessageHandler SleepTime 0
MessageHandler 5 Number of threads used to process database
SnifferThreads queries that DataSunrise sniffer receives
MessageHandlerThread 3600
MaxCycleGap
MessageHandler 10485760 Message Handler thread stack size.
ThreadStackSize Applicable for Linux and Windows
MessageHandler 0 Set the priority level value for Message
ThreadsPriority Handler thread pool
MessageHandler 5 The number of threads used to process
TrailingThreads operations from the database by 'Trailing the
database logs' mode
MessageHandlers 30000 Limit number of messages in 'global queue'
GlobalQueueHighWaterMark to be processed with Message Handler. When
reached, the thread will wait for the queue to
drop to the minimum
MessageHandlers 29000 Limit number of messages in 'global queue'
GlobalQueueLowWaterMark to be processed with Message Handler
MessageHandlers 30000 Limit number of messages in 'local queue' to
LocalQueueHighWaterMark be processed with Message Handler. When
reached, the thread will wait for the queue to
drop to the minimum
MessageHandlers 29000 Limit number of messages in 'local queue' to
LocalQueueLowWaterMark be processed with Message Handler
MetadataInfoOtlLongSize 10480 Set the maximum buffer size for operations
with Large Objects: varchar_long, raw_long,
clob and blob. This function needs to be
called in order to increase the buffer size
(Currently use in Oracle MetadataInfo)
MetadataInfo 120 Statements timeout (seconds). If the timeout
StatementTimeout has expired, the statement will be aborted.
This parameter is used only in metadata
processing functions. Isn't applicable to DB2,
SAP Hana and Vertica
• 0: unlimited timeout
• -1: default server timeout

MongoBsonSizeThreshold 1024 Maximum size of BSON document


DataSunrise should process. If a BSON is
larger, DataSunrise will ignore the file when
parsing and converting it to JSON
MongoDbDataSearch Search nested depth limit for both Dynamic
RecursionLimit and Static Masking (MongoDB)
MongoDbStrictWrite Enabled Write block awaiting acknowledgement from
MongoDB. Acknowledged write concern
enables clients to catch network traffic,
duplicate key and other errors
MongoDbStaticRecursion Limit 10 MongoDB data search depth limit. Used for
Static Masking
MongoBulkOperationSize Data volume (number of documents)
transferred by one bulk insert operation. The
higher this parameter's value, the quicker
the insert operation but the higher memory
consumption (MongoDB)
MsSqlCipher Disabled Deprecated. Not used
MsSqlDelayedPacketsLimit 10 Maximum volume of packets sent by cached
SSL sessions which wait for decryption keys in
the parser
MsSqlMARSProxyEnable Enabled Enable/disable MARS proxy
MsSqlMarsConnections Read Enabled If enabled, the proxy will resend packets
ByFrame frame by frame for a connection with Multiple
Active Result Sets
MsSqlMetadata Disabled If this option is enabled, each database
SeparateLoading will be loaded with metadata in a separate
connection (like Azure SQL)
MsSqlMinFrameNo 0 Minimum number of PCAP frame to start
sending the SQL Server traces to the log
MsSqlRedirectsDisable Disabled Disable/enable creating of new interfaces and
proxies when redirecting
MsSqlQueueHeldEventsSize 10 Size of queue to contain "open session"
events in MS SQL audit trailing. Sessions with
only "open session" and "close session" will
not be audited. To enable/disable skipping,
see "MsSqlSkipEmptySessions".
MsSqlSSLDisable Disabled Enable/disable SSL encryption in proxy mode
MsSqlSkipEmptySessions Disabled Enable/disable skipping empty sessions
to avoid spamming with "open/close
session" events. To change size of queue
to contain "open session" events, see
"MsSqlQueueHeldEventsSize"
MsSqlSSLVersion SSL and TLS version used for connection
between DataSunrise proxy and your MS SQL
Server database:
• Empty - the highest available version
• SSLvX.X or TLSvX.X values are available.
Enforce selected version and reject other
versions. For example: 'SSLv3.0' and
'TLSv1.0'

MyDataDiscovery 31536000 Timeout specified for parameters


StatementTimeout net_read_timeout/net_write_timeout for Data
Discovery of MySQL databases, if the Use
"load data in file" statement Loader Type is
used
MyDataDiscovery Disabled Use cursor for select query for Data Discovery
UseCursorSelect across MySQL databases
MySQLAllowUpdate Disabled Update metadata using queries that request
MetadataBunchMode all schema objects. Disable this option if you
have a lot of objects in the database
MySQLAuthMapping {"mysql_clear_password": [{"_client_name": "MariaDB connector/J"}]}
MethodsResolver Method of password transfer to use for
MySQL clients during user mapping. Depends
on connection parameters received from the
clients

MySQLConnector Which transport layer encryption


AllowedTLSVersions protocols the MySQL connector permits
for encrypted connections. Example:
TLSv1,TLSv1.1,TLSv1.2,TLSv1.3
MySQLConnector Disabled If Enabled: use the mysql_clear_password
EnableClearTextPlugin plug-in in MySQL connector. Note that this
plug-in transfers password in unencrypted
form so it's disabled by default
MySQLConnector Disabled If Enabled: enable traffic compression in
UseCompression MySQL connector
MySQLDisable DCLHandling Enabled Disable/enable processing of Data Control
Language (DCL) queries (CREATE USER, DROP
USER, GRANT, REVOKE)
MySQLErrorCode 5555 MySQL blocking error message
MysqlLexiconBufferSize 2000 The buffer size for Lexicons in Static Masking
MySQLMetadataPrefetchRows 10000 Number of MySQL metadata rows which will
be prefetched in each iteration
MySQLMetadataTransaction 0 Transaction isolation level used for MySQL
IsolationLevel metadata loading:
• 0 - REPEATABLE READ (default)
• 1 - READ COMMITED
• 2 - READ UNCOMMITED
• 3 - SERIALIZABLE

MySQLUseAuthGSSAPI Disabled Use the auth_gssapi_client method to transfer


MethodForMapping passwords (Kerberos authentication for
MySQL)
MySQLUseSHA256Password Enabled Use the sha256_password method to
MethodForMapping transfer passwords (LDAP authentication
for MySQL). If this method is disabled, the
mysql_clear_password method will be used
MyStaticMasking 31536000 Timeout for parameters net_read_timeout/
StatementTimeout net_write_timeout for static masking of
MySQL databases, if the Use "load data in
file" statement Loader Type was used.
NativeOCRHandlingOnExternal Disabled If enabled: the file will be processed by the
OCRError native OCR when the external OCR fails to
process the image or processes it with an
error
NeedConvertUnsupport Disabled Enable this option if you want to convert
TextractFormats formats that are not supported by
AmazonTextract to supported formats.

Note: does not work with
DataDiscoveryUseAmazon
TextractS3Integration

NetezzaBackendLoginTimeout 10 Time the process has to connect to Netezza


back end before timeout, seconds
NetezzaErrorCode 5555 Netezza error code
NetezzaParserDataFlushPeriod 30 Periodicity of flushing Netezza parser cache,
rows
NetezzaParserKerberos Enabled
ContinueAuthWhenError
NetezzaParserPacket 512 Netezza parser buffer packet initial size, bytes
BufferInitSize
NetezzaParserPacket 524288 Netezza parser buffer packet maximum size,
BufferWatermark bytes
NetezzaParserSSLMode Enabled Disable/Enable SSL support for Netezza
connections
NetezzaParserSkipRequest Disabled
WhenParsingImpossible
NonViableThreadWarning 15 This value sets up a time interval for
Interval displaying a warning in the system monitor
which signals that the thread is hung. This
warning will be displayed when the half
of the time set as the ThreadMaxCycleGap
parameter's value has passed since the
hangup
OAuth2HttpHeaders Specify the headers of the http request
for receiving Json Web Keys from OAuth2
service. Format:"header1: value", "header2:
value"
OAuth2URLForJson WebKeys Full URL to OAuth2 service to get
Json Web Keys. Example: https://
myorg.okta.com/oauth2/default/v1/keys?
client_id=1234aaaabbbbb
OEMCodePage 866
OnDictionaryBackup
DoneCommand
OnOldLogDelete Command
OpenSSLCipherList DEFAULT A list of ciphers to be used by the DataSunrise
proxy for SSL-encrypted connections. Uses a
format similar to the OpenSSL cipher list parameter
OperationGroup CacheSize 2000 The cache size for depersonalized queries.
When the cache reaches its maximum size,
less-used queries are preempted from it
OperationGroup LenMax 64
OperationGroup MergePeriod 30 The interval (in minutes) at which the merge
of temporary depersonalized data will be
performed
OracleDefaultEdition Default Oracle Edition that the backend
switches to before updating metadata
OracleEnablePreparsing Disabled Enable Oracle preparsing so connections with
migratable sessions will be handled in the
same queue in the same thread
OracleErrorCode 5555 Oracle error code
OracleMetadata 10000 Buffered rows count for receiving Oracle
BufferRowCount metadata. The 10K is the optimal value for
remote Oracle RDS
OraclePrepared 1 • 0 - Spawning (if needed) a new prepared
StatementMaskingMethod statement with new sqllf set
• 1 - Recompiling (if needed) a current
prepared statement with new SQL

OracleUseDAF ObjectsTable Enabled Enable/disable using the DAF_OBJECT table


which can prevent some bugs in Oracle
ParserAssertAction 2 The action in case of a data parsing error:
• 0 - don't react
• 1 - kill the Core process
• 2 - turn off parsing for current connection

PcapBufferSize 100
PcapConversationFilter A regular expression for filtering
conversations traffic of which needs to be
traced in the sniffer mode.
Conversation format:

srcip:srcport->dstip:dstport

Filter example:

.*192\.168\.1\.1.*

PcapMaxOutOfOrderMonitor 1
PcapMaxOutOfOrder 5 Maximum number of messages following a
SegmentCount lost message. DataSunrise will not process a
lost message if this number is achieved
PcapMaxSessionIdleTime 7200 Idle time after which DataSunrise stops
processing messages in a thread
PcapProxyDirection 0 • 0 - capturing traffic from both directions
• 1 - capturing traffic only from client to
proxy
• 2 - capturing traffic only from proxy to
server

PcapShowOnlyFileName 0
PcapShowProgressBySize 0
PgFetchRowCount 1000 Row count to be used with the FETCH operation
for PostgreSQL databases for Static Masking.
The lower the value, the slower the
performance and the less RAM is used
PgMetadataSetUtf8 Enabled Overwrite current client_encoding value to
ClientEncoding UTF8 for all metadata connections
PgStaticMaskingIdleIn Disabled Terminate any session with an open
TransactionSessionTimeout transaction that has been idle for longer
than the specified duration in milliseconds.
This allows any locks held by that session to
be released and the connection slot to be
reused; it also allows tuples visible only to
this transaction to be vacuumed. Currently
used only in the static masking with the Copy
loader
PgStaticMaskingSetUtf8 Enabled Overwrite current client_encoding value to
ClientEncoding UTF8 for all static masking connections
PgStaticMasking 0 Abort any statement that takes more than
StatementTimeout the specified number of milliseconds, starting
from the time the command arrives at the
server from the client. Currently used only in
the static masking with the Copy loader
PgSupport PgBouncer Disabled If enabled, all connections to PostgreSQL will
be treated as connections to PgBouncer. Note
that when deploying a PostgreSQL instance
on Google Cloud, you need to enable this
setting
PostgreErrorCode 5555 PostgreSQL error code
PostgreParserHandleCoreObject 1 If disabled, DataSunrise doesn't create any
objects and performs protocol parsing only
without triggering existing Rules
PostgreParserPrepare 1 If disabled, enables debugging mode for
StmtBlockingFullBatch custom drivers. DataSunrise will block certain
packet groups in the pipeline while performing
packet blocking
PostgreParserRequire Enabled PostgreSQL parser requests session metadata
SessionMetadata
PostgreParserSSLMode 1 Enable/Disable SSL support. The following
values are available:
• 0 - without SSL
• 1 - with SSL
• 2 - always connect a client without SSL if
allowed by the server

PostgreParserTestMode 0 If enabled, the PostgreSQL parser treats protocol parsing errors more aggressively and parser recovery cannot be performed. This mode is used for tests when it's required not to miss parser errors
PostgreParserTransparent 0 Used for tests. If enabled, a transparent
KerberosAuthentication authentication attempt will be performed
(without proxying), and information about
connection (user, domain, realm) will be
extracted from the first token
PostgreProtocolReader 524288 Size of the buffer used for PostgreSQL
BufferWatermark protocol parsing
PreferSSLForOracle Disabled If enabled, DataSunrise first tries to connect
to your Oracle database using SSL. If a
connection is not established, then DS will
connect without SSL
PrintCoreObjectsOn 10000 Print used core objects when virtual memory
ReachingVirtualMemory reaches N. The frequency of the trace
depends on the PrintCoreObjectsTimer
parameter
PrintCoreObjectsTimer 600 Trace frequency of used core
objects. Depends on the
PrintCoreObjectsOnReachingVirtualMemory
parameter
ProactiveProxyThreads 5 The number of threads for a proactive proxy
ProactorThreadQueue 100000 Limit number of messages in a thread to be
HighWatermMark processed with Proactor
ProactorThreadQueue 800000 Minimum number of messages in a
LowWatermMark thread to be processed by Proactor
before reaching their limit number
(ProactorThreadQueueHighWatermMark)
ProtocolPacket 325 Packet builder buffer size
BuilderBufferSize
ProxyConnectingTimeout 15 If a connection wasn't established during this time, DataSunrise will perform a disconnect. Applicable both to HTTP proxy and to redirect
ProxyConnection CloseTimeout 8 On expiration, the connection will be forcibly
closed if the socket did not return a response
after the shutdown command
ProxyConnection Enabled Enable/disable Nagle's algorithm for proxy
UseTcpNoDelay connections. Used in reactive and proactive
proxies
ProxyForceStopTime 30 If proxies are being stopped, proxy server
sends shutdown commands to proxy sockets.
Then proxy waits for these connections to
close for ProxyForceStopTime seconds and if
this time is elapsed, stops proxies
ProxyInterfaceHostTimeToLive 5 Time after which the proxy re-resolves the interface host on client connection. If set to 0, frequent connections will slow down the proxy; if this parameter's value is too high, the proxy may try to establish connections with an old host (for example, if the DNS was updated)
ProxyListenBacklog 2000 Size of a queue of incoming connections to
DataSunrise proxy port
ProxyMaxConnections 5000 Number of connections a proxy can process
ProxyNoSessionNoticeTime 0 Displays information about connection absence in the proxy status during the specified time
ProxyPacketResendWarningTime 0 If a packet was not resent within the specified time, the following notification will be displayed in the Event Monitor:
"[DbInstance '%s']: Proxy: Packet was not resend for %zu ms duration { proxyId = %zu, connectionId = %zu, sessionId = %zu, userName = '%s', pktSize = %zu, inState = '%s', side = '%s' }"

ProxyThreadsPriority 0 Priority level value for the Reactive/Proactive proxy thread pool
QueryCacheEnable ForOracle Enabled Enables the usage of query caching for Oracle
QueryCacheIgnore Pattern When a SQL query matches this regex, then
the parsed details will not be added to the
cache
QueryCacheSize 128 Maximum number of queries the query cache
can hold. If 0, caching is disabled
RDSFreeSpaceLimitAlert 5120 Storage space threshold at which the System Monitor will show alert messages
RDSFreeSpaceLimitStop 1024 Free space threshold for the Audit database. When the free space left in your Audit database reaches this parameter's value, DataSunrise stops writing audited data to it
ReactiveProxy 16777216 Each message of Reactive proxy has its own
QueueHighWatermMark queue. Message is either a package to be
sent or a service command. This parameter
limits the maximum number of messages in a
queue
ReactiveProxy 4194304 See ReactiveProxyQueueHighWatermMark. If
QueueLowWatermMark a queue is full, new messages will be skipped
until the size of queue is limited to this
parameter's value
ReactiveProxyThreads 8 Number of threads used in the reactive proxy. More threads allow more Input/Output operations to be executed in parallel for more connections, but this requires more resources
ReactorTaskQueueFillPercentWarning 15 See ReactorThreadQueueHighWatermMark. If the fill level of a queue (in percent) is larger than this parameter's value, a warning about it will be reported
ReactorThreadQueue 1200000 Reactor has a dedicated queue of messages.
HighWatermMark In some cases, messages are processed
through this queue, for example, if it is
required to close a connection by connection
ID. This parameter limits the number of
messages in a queue
ReactorThreadQueue 1000000 See ReactorThreadQueueHighWatermMark. If
LowWatermMark a queue is full, new messages will be skipped
until the size of queue is limited to this
parameter's value
ReadOracleNamed TypeFields 0
RecognizerParserDetail Disabled Enable/disable detailed trace of the
SQLRecognizer parser
RedShiftBackend Disabled Use the PostgreSQL ODBC driver to connect
UsePostgreODBC to a Redshift database
RedShiftErrorCode 5555 Redshift error code
RedirectOnPublic 0
RedshiftParser 1
HandleCoreObject
RedshiftParser 1
RequireSessionMetadata
RedshiftParser SSLMode 1
RedshiftParser TestMode 0
RedshiftProtocol 524288
ReaderBufferWatermark
RedshiftDataDiscovery Disabled Use cursor for select query for Data Discovery
UseCursorSelect across Redshift databases
RemoveOldAudit 7200 Period of deletion of obsolete files that were
LoadFilesPeriod used for loading data to the Audit storage
ReportColumnMaxSize 32760
ReportsFolder ./reports The directory to save reports generated by the Report Generator
ReportsStatementTimeout 36000 Timeout value for statements (seconds). If the timeout has expired, the statement will be aborted:
• 0: unlimited statement timeout
• -1: default server timeout. This parameter is used only in Reports. Isn't applicable to DB2, SAP Hana and Vertica
RequredFreeSpaceForFirewallBackup 1024 Required free space for creation of a DataSunrise backup when updating the firewall via the Web Console
RequredFreeSpaceForUpdate 1024 Required free space for updating the firewall via the Web Console
ResetClientConnections Disabled Enable/disable closing of connection with a
client through the RST flag
ResetServerConnections Disabled Enable/disable closing of connection with a
server through the RST flag
ResolveRelationship Disabled If Enabled: enables relations type definition
TypesInQHL (1:M, 1:1) when executing a Query History
Data Model Learning task. This procedure
takes a lot of time, so it can be disabled if
necessary
RowCacheEnable Disabled Enable row cache
RowCacheFetchCount 3 Minimal fetch count of sequential rows for enabling the row cache on a SQL operation
RowCacheMaxRowCount 20 Maximal row count in each fetch for enabling the row cache on a SQL operation
RowCacheMaxSize 200 Count of rows that DataSunrise can place in a row cache (per SQL operation)
RowCacheMinRowCount 1 Minimal row count in each fetch for enabling a row cache on a SQL operation
RowCacheRequest RowCount 100 Number of rows that DataSunrise requests
from a server and adds them to a row cache
RpcStatement Timeout 120 Statements timeout (seconds). If the timeout
has expired, the statement will be aborted.
This parameter is used only in the RPC. Isn't
applicable to DB2, SAP Hana and Vertica
• 0: unlimited timeout
• -1: default server timeout

S3ClientMaxConnections 25 Maximum number of concurrent TCP connections for a single HTTP client to use
S3ClientConnectTimeout 1 Socket connection timeout. Unless you are very far away from the data center you are connecting to, 1 second is more than sufficient
S3ClientHttp RequestTimeout 0 For Linux only. The only option currently
applicable for Curl to set HTTP request
level timeout in seconds including possible
DNS lookup time, connection establishing
time, SSL handshake time and actual data
transmission time:
• Corresponding Curl option is
CURLOPT_TIMEOUT_MS
• If 0, no HTTP request level timeout
S3ClientRequestTimeout 3 Socket read timeout for HTTP clients on Windows. The default is 3 seconds and this should be more than adequate for most services. However, if you are transferring large amounts of data or may experience high latency, you should set it to something that makes more sense for your use case. For Curl, it's the low speed time, which contains the time in milliseconds that the transfer speed should stay below "lowSpeedLimit" for the library to consider it too slow and abort. Note that for Windows, when this config is 0, the behavior is not specified by Windows
S3ClientEnable TcpKeepAlive Enabled Enable TCP keep-alive. No-op for WinHTTP,
WinINet and IXMLHTTPRequest2 client
S3ClientTcpKeep AliveInterval 30 Interval to send a keep-alive packet over
connection at. Default is 30 seconds,
minimum is 15 seconds. WinHTTP & libcurl
support this option. Note that for Curl, this
value will be rounded to an integer with
second granularity. No-op for WinINet and
IXMLHTTPRequest2 client
S3ClientLowSpeedLimit 1 Average transfer speed in bytes per second
that the transfer should be below during
the request timeout interval for it to be
considered too slow and be aborted. Default
is 1 byte/second. Only for CURL client
currently
S3CrawlerRequestResendCount 0 The number of queries sent when using the S3 Crawler
S3DefaultVpcRegion Which region will be used as an endpoint for a VPC subnet in AWS S3
S3EnableResolve Enabled Temporary credentials provided by IAM role
TemporaryCredentials will be recognized as a Role name
S3InventoryBucketRegion Any Specific region of S3 Inventory bucket
location. Default value, Any, means that
region will be determined independently
S3InventoryPrefix Constant prefix for searching in an S3
Inventory config bucket. Example: bucket/ or
just bucket-name/folder
S3MaskingPartial Resend 0 This parameter influences the working speed
and RAM consumption when processing
large files while doing Dynamic masking
on S3 buckets. The following options are
available:
• 0 - Hold: collect processed packets
before sending them. If the size of data
to be masked is more than 10 Mb, then
a temporary storage is used (see the
FileBufferUseLocalStorage parameter);
• 1 - Force: send the processed packets
immediately after processing;
• 2 - ByUserAgent: Hold or Force depending
on the agent. If the agent checks
checksums, then Hold is used.

SMUXTimeout 20 Timeout to reveal frozen MARS proxy connections
SSLCtxMaxProtoVersion Set the maximum supported TLS protocol
version for OpenSSL's SSL_CTX. Currently
supported versions are:
• SSL3_VERSION
• TLS1_VERSION
• TLS1_1_VERSION
• TLS1_2_VERSION
Leaving this setting empty will enable
protocol versions up to the highest
version supported by the library. See the
SSL_CTX_set_max_proto_version() doc
SSLCtxMinProtoVersion Set the minimum supported TLS protocol
version for OpenSSL's SSL_CTX. Currently
supported versions are:
• SSL3_VERSION
• TLS1_VERSION
• TLS1_1_VERSION
• TLS1_2_VERSION
Leaving this setting empty will enable
protocol versions down to the lowest
version supported by the library. See the
SSL_CTX_set_min_proto_version() doc
SSLCtxOptions SSL_OP_ALL, SSL_OP_NO_TICKET Options of the proxy SSL context
SSLResumeTimeout 30 Maximum time a sniffer waits for traffic decryption keys
SapCustomAppUser APPLICATIONUSER Identifier customization for capturing and checking of an SAP ECC application user
SaveDsOriginalTable Disabled Save 'table_original' when using the
Encryptions feature
SavePassword Enabled When enabled, the instance settings provide
the option to save database credentials to
dictionary.db to avoid reentering credentials
when using some of DataSunrise features
SendDumpPeriod 3600 The time period (seconds) at which crash dumps are sent to the server where crash dumps are stored
SendDumpToServer Disabled Send the dump file to the server
SendSecurityMsgToAdmins Enabled Send security messages to administrators who have an e-mail address
SessionIdleDisconnect Time 0
SessionIdleWakeTime 0 Proxy will send packets with client
notifications every N seconds (Vertica,
PostgreSQL, Redshift, Greenplum, Netezza,
Oracle)
SessionIdleWarningTime 0
ShowEncryptionSettings Enabled Disable/enable the Encryption tab
ShowOldAuditEvents Disabled Enable this option if you want to see audited
events generated before DS 6.3
SingleThreadProcessing 0
SkipExecuteParsing Disabled If enabled, parsing of EXECUTE calls inside
InProcedure procedures will be skipped
SkipRoutineDdlQueryMasking Disabled Defines the behavior when masking queries that create or modify procedures and functions. If disabled, queries will be masked as usual. If enabled, queries will not be masked. To ensure that sensitive data is hidden, this property is only applied if the 'Mask Queries Included in Procedures and Functions' check box is checked in a masking rule
SkipUrgentTcp Packets Enabled If this parameter is set, then sniffer skips TCP
packets with the URG (urgent) flag
SnmpAuditFreeSpaceOIDSuffix .9 Object IDs for the SNMP protocol to indicate the corresponding health check parameter
SnmpAuditQueueLengthOIDSuffix .4
SnmpAverageExecutionsOIDSuffix .11
SnmpAverageOperationsOIDSuffix .6
SnmpAverageReadBytesOIDSuffix .7
SnmpAverageValueOIDSuffix .4
SnmpAverageWriteBytesOIDSuffix .8
SnmpBackendMemoryOIDSuffix .1
SnmpCoreMemoryOIDSuffix .0
SnmpLogsFreeSpaceOIDSuffix .10
SnmpMailerQueueLengthOIDSuffix .5
SnmpMemoryObjectOIDSuffix .1
SnmpObjectidOIDSuffix .2
SnmpProxyQueueLengthOIDSuffix .2
SnmpQueueLenOIDSuffix .3
SnmpSnifferQueueLengthOIDSuffix .3
SnowflakeMetadata Disabled Enable if you want to skip metadata loading
SkipInboundSharedDB for inbound shared databases (Snowflake)
SnowflakeUseODBC Disabled Enable if you want to use the Snowflake
ODBC driver instead of the native client
SpamEventHidingPeriod 300 Time interval during which repeating events
will not be displayed in the System Monitor
SpamLogDebugHidingPeriod 3600 Time interval during which repeating debug messages will not be written to the logs
SpamLogErrorHidingPeriod 3600 Time interval during which repeating error messages will not be written to the logs
SpamLogInfoHidingPeriod 3600 Time interval during which repeating info messages will not be written to the logs
SpamLogNoticeHidingPeriod 3600 Time interval during which repeating notice messages will not be written to the logs
SpamLogTraceHidingPeriod 3600 Time interval during which repeating trace messages will not be written to the logs
SpamLogWarningHidingPeriod 3600 Time interval during which repeating warning messages will not be written to the logs
SqlRecognizerQuietMode 2
SqlRecognizerUsesRecursion Enabled
SslCacheCheckInterval 60 Time interval that must elapse before the SSL cache is synchronized with the Dictionary
SslDBCacheTimeout 1200 Time an SSL session lives for in the Dictionary
SslMEMCacheTimeout 60 Time an SSL session lives for in the memory
before being moved to the Dictionary
StartDayOfWeek 0 Week starting day (0: Monday, 6: Sunday)
StaticMaskingDirPathDataTrimSize 0 Trim size for all data transferred via an Oracle Direct Path statement ("0" means disabled)
StaticMaskingDirPathRowBufferSize 1024 Buffer size for each row of an Oracle Direct Path statement
StaticMaskingOTL 1000 Number of lines to write to the buffer first
BufferRowsCount and then to send to the server. It helps to
speed up static masking
StaticMaskingOtl LongSize 1024
StaticMaskingParallel Number of threads used for parallel data
LoadThreadsCount transfer during Static masking
StaticMaskingTable 10 The maximum number of tables (threads) to
TransferThreads be transferred simultaneously during a Static
Masking session. One thread transfers one
table
StaticMasking -1 Timeout value for statements (seconds). If the
StatementTimeout timeout has expired, the statement will be
aborted
• 0: unlimited statement timeout
• -1: default server timeout. This parameter
is used only in Static Masking. Isn't
applicable to DB2, SAP Hana and Vertica

StaticMaskingParallelLoadThreadsCount 10 Enable parallel data loading for large tables while doing static masking
StaticMaskingTries ToCreateAll Enabled When static masking is applied to a table that
is absent in the target database, DataSunrise
creates a target table, then transfers the data
to it and then creates target table's objects
such as constraints, indexes and defaults. If
this parameter's enabled, then DataSunrise
will create all required objects despite errors
that can occur during this process. If this
parameter's disabled, DataSunrise will stop
the process of static masking if errors occur
StrictCheckRpc Disabled
StrictChildProcessCheck Enabled When enabled, the Backend controls the
name of the Core process (debugging)
SubscriberAWSCloudWatchEventTemplate (default: ${Event.Description}) This string is the template for messages of the AWS Event Subscriber (events only). Example: ${Event.Time}:${Event.Name}: ${Event.Description} on ${Server.Name}
SubscriberAWSCloudWatchGeneralTemplate (default: ${Content}) This string is the template for messages of the AWS Event Subscriber (general). Example: ${Content} on ${Server.Name}
SubscriberCommonErrorBasedRuleTriggeredPlainTemplate (default: Matched rule "${Rule.Description}" configured to audit operation errors for "Operation #${Operation.Id}") This string is the template for messages of all subscribers except email subscribers (Error-Based Rule Triggered)
SubscriberCommonErrorBasedRuleTriggeredRestApiTemplate (default: Matched rule "${Rule.Description}" configured to audit operation errors for "Operation #${Operation.Id}") This string is the template for messages of REST API subscribers (Error-Based Rule Triggered)
SubscriberCommonRuleTriggeredPlainTemplate (default: Matched the ${Rule.Type} rule "${Rule.Description}" for the "Operation #${Operation.Id}") This string is the template for messages of all subscribers except email subscribers (Rule Triggered)
SubscriberCommonRuleTriggeredRestApiTemplate (default: Matched the ${Rule.Type} rule "${Rule.Description}" for the "Operation #${Operation.Id}") This string is the template for messages of REST API subscribers (Rule Triggered)
SubscriberCommonSessionRuleTriggeredPlainTemplate (default: Matched the ${Rule.Type} rule "${Rule.Description}" for the "Session #${Session.Id}") This string is the template for messages of all subscribers except email subscribers (Session Rule Triggered)
SubscriberCommonSessionRuleTriggeredRestApiTemplate (default: Matched the ${Rule.Type} rule "${Rule.Description}" for the "Session #${Session.Id}") This string is the template for messages of REST API subscribers (Session Rule Triggered)
SubscriberExternal Debug Disabled Enable/disable external Subscribers
debugging
SubscriberExternal The path to the ERROR (2) file when working
ErrorFilename with an External Subscriber
SubscriberExternal The path to the INPUT (0) file when working
InputFilename with an External Subscriber
SubscriberExternal The path to the OUTPUT (1) file when working
OutputFilename with an External Subscriber
SubscriberGeneralRESTAPI Disabled Enable this parameter if you want to see
CurlDebug debug information in logs for 'General REST
API' subscriber
SubscriberGeneralRESTAPI Disabled Enable this parameter if you want to see more
CurlVerbose debug information in logs for 'General REST
API' subscriber
SubscriberGeneralRESTAPICustomUserAgent (default: DataSunrise Database Security Suite) If you want to use a particular string for 'User-Agent' when working with the 'General REST API' subscriber, specify it here. NOTE: the SubscriberGeneralRESTAPIUseUserAgent and SubscriberGeneralRESTAPIUseCustomUserAgent parameters should be enabled
SubscriberGeneralRESTAPIEnableTCPNoDelay Disabled Enable this parameter to disable Nagle's algorithm when working with the 'General REST API' subscriber
SubscriberGeneralRESTAPIUse Disabled If you want to use a particular string
CustomUserAgent for 'User-Agent' when working with
'General REST API' subscriber, enable
this parameter. NOTE: the parameter
SubscriberGeneralRESTAPIUseUserAgent
should be enabled
SubscriberGeneralRESTAPI Enabled If you want to send 'User-Agent' when
UseUserAgent working with 'General REST API' subscriber,
enable this parameter
SubscriberJiraCurlDebug Disabled Enables debugging of a Jira subscriber.
Recommended to be used in conjunction
with the SubscriberJiraCurlVerbose enable
SubscriberJiraCurlVerbose Disabled Increased level of information when working
with a Jira subscriber. Recommended
to be used in conjunction with the
SubscriberJiraCurlDebug enabled
SubscriberJiraCustomUserAgent Custom string including value User-Agent.
DataSunrise Database Only relevant if SubscriberJiraUseUserAgent
Security Suite and SubscriberJiraUseCustomUserAgent are
enabled
SubscriberJiraEnable Enabled Enables/disables using of Nagle's algorithm
TCPNoDelay
14 System Settings | 374

Parameter Default value Description


SubscriberJiraUse Disabled Whether to use a custom User-
CustomUserAgent Agent field value or use an internal
preset value? Relevant only when
SubscriberJiraUseUserAgent is enabled
SubscriberJiraUse UserAgent Enabled Whether to use the User-Agent field or not?
May be needed for some servers and can be
analyzed by them
SubscriberMax MsgSqlSize 4096 The maximum length of a SQL query
displayed in notifications
SubscriberRedmine CurlDebug Disabled Enables debugging of a Redmine subscriber.
Recommended to be used in conjunction
with SubscriberRedmineCurlVerbose enabled
SubscriberRedmine Disabled Increased level of information when
CurlVerbose working with a Redmine subscriber.
Recommended to be used in conjunction
with SubscriberRedmineCurlDebug enabled
SubscriberRedmine Custom string with value User-Agent. Only
CustomUserAgent DataSunrise Database relevant if SubscriberRedmineUseUserAgent
Security Suite and SubscriberRedmineUseCustomUserAgent
are enabled
SubscriberRedmineEnable Enabled Enable/disable using of Nagle's algorithm
TCPNoDelay
SubscriberRedmine Disabled Whether to use a custom User-
UseCustomUserAgent Agent field value or use an internal
preset value? Relevant only when
SubscriberRedmineUseUserAgent is enabled
SubscriberRedmine Enabled Whether to use the User-Agent field or not?
UseUserAgent May be needed for some servers and can be
analyzed by them
SubscriberSMTP An alternate hostname when working
AlternativeHostname dshost with a subscriber via SMTP. Used only if
SubscriberSMTPUseAlternativeHostname is
enabled
SubscriberSMTP CurlDebug Disabled Enables debugging of SMTP Subscriber.
Recommended to be used in conjunction
with SubscriberSMTPCurlVerbose enabled
SubscriberSMTP CurlVerbose Disabled Increased level of information when
working with an SMTP subscriber.
Recommended to be used in conjunction
with SubscriberSMTPCurlDebug enabled
SubscriberSMTP A string with user-defined User-Agent's value
CustomUserAgent DataSunrise Database when working with an SMTP subscriber
Security Suite

SubscriberSMTPEnableTCPNoDelay 1 Disable Nagle's algorithm (tcpnodelay) when working with SMTP subscribers
SubscriberSMTPReplaceLForCRtoCRLF Enabled Enable to replace a single CR or LF with CRLF in messages for SMTP subscribers. For more information, see RFC 5321
SubscriberSMTPUse Disabled Use an alternate hostname when working
AlternativeHostname with a subscriber via SMTP. SMTP sends
hostname when establishing a connection
with a mail server. A real name of the PC
is used as the hostname. Sometimes it can
cause inability to connect to the server.
The solution is the alternate hostname.
This parameter is used together with the
SubscriberSMTPAlternativeHostname
SubscriberSMTPUse Disabled Send User-Agent value, set in
CustomUserAgent DataSunrise or value defined in the
SubscriberSMTPCustomUserAgent. If enabled
- send SubscriberSMTPCustomUserAgent;
If disabled - send internal value set in
DataSunrise. This is associated with SMTP
subscribers.
SubscriberSMTPUse UserAgent Enabled Send User-Agent field when working with a
subscriber via SMTP
SubscriberSNMP This string is the template for messages of
EventTemplate ${Event.Description} SNMP Subscriber (events only)
Example:

${Event.Time}:${Event.Name}:
${Event.Description} on ${Server.Name}

SubscriberSNMP This string is the template for messages of


GeneralTemplate ${Content} SNMP Subscriber (general)
Example:

${Content} on ${Server.Name}

SubscriberService Disabled Enables debugging of a subscriber.


NowCurlDebug Recommended to be used in conjunction
with SubscriberServiceNowCurlVerbose
enabled
SubscriberService Disabled Increased level of information when
NowCurlVerbose working with a ServiceNow subscriber.
Recommended to be used in conjunction
with SubscriberServiceNowCurlDebug enabled
SubscriberService Custom string with value
NowCustomUserAgent DataSunrise Database User-Agent. Only relevant if
Security Suite SubscriberServiceNowUseUserAgent and
SubscriberServiceNowUseCustomUserAgent
are enabled
SubscriberService Enabled Enables/disables using of Nagle's algorithm
NowEnableTCPNoDelay
SubscriberService Disabled Whether to use a custom User-
NowUseCustom UserAgent Agent field value or use an internal
preset value? Relevant only when
SubscriberServiceNowUseUserAgent is enabled
SubscriberService Enabled Whether to use the User-Agent field or not?
NowUseUserAgent May be needed for some servers and can be
analyzed by them
SubscriberSlackDirect Disabled Enables debugging of a subscriber.
CurlDebug Recommended to be used in conjunction
with SubscriberSlackDirectCurlVerbose
enabled
SubscriberSlackDirect Disabled Increased level of information when
CurlVerbose working with a SlackDirect subscriber.
Recommended to be used in conjunction
with SubscriberSlackDirectCurlDebug enabled
SubscriberSlackDirect Custom string with value
CustomUserAgent DataSunrise Database User-Agent. Only relevant if
Security Suite SubscriberSlackDirectUseUserAgent and
SubscriberSlackDirectUseCustomUserAgent are
enabled
SubscriberSlackDirect Enabled Enable/disable using of Nagle's algorithm
EnableTCPNoDelay
SubscriberSlackDirect Disabled Whether to use a custom User-
UseCustomUserAgent Agent field value or use an internal
preset value? Relevant only when
SubscriberSlackDirectUseUserAgent is enabled
SubscriberSlackDirect Enabled Whether to use the User-Agent field or not?
UseUserAgent May be needed for some servers and can be
analyzed by them
SubscriberSlackToken Disabled Enables debugging of a SlackToken
CurlDebug subscriber. Recommended to
be used in conjunction with
SubscriberSlackTokenCurlVerbose enabled
SubscriberSlackToken Disabled Increased level of information when
CurlVerbose working with a SlackToken subscriber.
Recommended to be used in conjunction
with SubscriberSlackTokenCurlDebug enabled
SubscriberSlackToken Custom string with value
CustomUserAgent DataSunrise Database User-Agent. Only relevant if
Security Suite SubscriberSlackTokenUseUserAgent and
SubscriberSlackTokenUseCustomUserAgent are
enabled
SubscriberSlackToken Enabled Whether to use Nagle's algorithm or not? By
EnableTCPNoDelay default, it is used, i.e. the value of this variable
is TRUE
SubscriberSlackToken Disabled Whether to use a custom User-
UseCustomUserAgent Agent field value or use an internal
preset value? Relevant only when
SubscriberSlackTokenUseUserAgent is enabled
SubscriberSlackToken Enabled Whether to use the User-Agent field or not?
UseUserAgent May be needed for some servers and can be
analyzed by them
SubscriberSMTPAlternative 0 Alternative SMTP connection timeout
ConnectionTimeout (seconds). Default timeout is 0 which means
that it never times out during data transfer
SubscriberZendesk CurlDebug Disabled Enables debugging of a Zendesk subscriber.
Recommended to be used in conjunction
with SubscriberZendeskCurlVerbose enabled
SubscriberZendesk Disabled Increased level of information when
CurlVerbose working with a Zendesk subscriber.
Recommended to be used in conjunction
with SubscriberZendeskCurlDebug enabled
SubscriberZendesk Custom string with the value
CustomUserAgent DataSunrise Database User-Agent. Only relevant if
Security Suite SubscriberZendeskUseUserAgent and
SubscriberZendeskUseCustomUserAgent are
enabled
SubscriberZendesk Enabled Whether to use Nagle's algorithm or not. By
EnableTCPNoDelay default, it is used, i.e. the value of this variable
is TRUE
SubscriberZendeskUse Disabled Whether to use a custom User-Agent field
CustomUserAgent value or use an internal preset value. Relevant
only when SubscriberZendeskUseUserAgent is
enabled
SubscriberZendeskUse Enabled Whether to use the User-Agent field or not.
UserAgent May be needed for some servers and can be
analyzed by them
SybaseErrorCode 55555 Sybase blocking error code. If you see this error, you don't have access to this data
SybaseErrorProcName (default: TheQueryIsBlocked) Sybase blocking error procedure name. If you see this, you don't have access to this data and the procedure for processing it is not determined
SyslogLocalUse AppName Enabled If this parameter is enabled, DataSunrise will
send the name of application when using
local (UNIX) Syslog
SyslogLocalUsePID Enabled If this parameter is enabled, DataSunrise will
send PID when using local (UNIX) syslog
SyslogRemoteUse AppName Enabled If this parameter is enabled, DataSunrise will
send the name of application when using
remote Syslog
SyslogRemoteUsePID Enabled If this parameter is enabled, DataSunrise will
send PID when using remote Syslog
SystemBackup Enabled Create Dictionary backups. Such backup is
DictionaryEnable used when starting DataSunrise if the main
one is not available
SystemBackup 60 Periodicity of creating Dictionary backups
DictionaryTimeInterval
SystemCharset 1208 CCSID number
TaskGuardian CancelTimeout 10000 Tasks cancellation timeout. Applied during
cleanup/restoring of the Dictionary.
TaskManager ThreadCount 5 The number of threads used by the firewall's
Backend to process periodic tasks
TaskManager 3600
ThreadMaxCycleGap
TaskStopTime 300
TbuildMaxSessions 10
TbuildNoRestoreIfError 0
TbuildPreferUpdate 0
OperatorForAppend
TbuildSaveArtefacts 0
TempPath The folder to save temporary dump files in
TemporaryCached FileLiveTime 10
TemporaryProxy DefaultPort 12000 Default port of a temporary proxy
TemporaryProxyHost 127.0.0.1 Host of a temporary proxy
TeradataDebugSniffer 0
TeradataErrorCode 5555 Teradata error code
TfaIntervalBetweenMailRequest 600 The interval between emails for two-factor user authentication. Messages will be sent to a DS user no more often than defined by this parameter
TfaLinksValidationTimeout 600 Timeout for a generated 2FA email link. If the DS user hasn't activated the link within this time, the authentication will expire
TfaMailContentTemplate (default: Links for user validation: %s) The body of the email message sent to a user with two-factor authentication via email. The message should contain the special placeholder '%s', which will be replaced with a link for user validation
TfaMailSubject (default: Two-factor authentication for Datasunrise) The subject text of the email sent to the user with two-factor authentication via email

TfaValidatedLinksTimeout 600 Timeout for two-factor authentication. When the user has activated two-factor authentication, they can log in without re-authentication until the specified time has expired
Timeout 10 Timeout value of database connection and
timeout value of a query directed to the
database (2 in 1) in seconds (only for DBMSs
that use OTL). Set 0 for default timeout values
for each DBMS
TrafficBuffer PoolVolume 512 The maximum number of buffers in a pool for
reading data from a proxy
TrailDASIntervalTime 60 Period of time to get the list of events for (e.g
for last 5 minutes from latest record)
TrailDASOffsetTime 300 Time delay for getting events (necessary for
synchronization, seconds)
TrailDBLimit 500 The number of rows used when requesting
events from the log (select ... sys.aud$).
TrailDBLog 10480 The maximum buffer size for downloading
DownloaderBufferSize XML data that is stored in the table
TrailDBNumberOfLines 1000 The number of lines to use when
downloading a portion of an AWS log in DB
trailing (DownloadDBLogFilePortionRequest)
TrailDBSkipEvents FromDS Disabled Actions that were performed from
DataSunrise servers against the target
database will be skipped in audit events if
possible (everything that goes through the
proxy server too)
TrailDbPerformViaREST Enabled Logs from Amazon RDS will be downloaded via the REST endpoint using the downloadCompleteLogFile function; otherwise, DownloadDBLogFilePortion will be used
TrailDbSessionTimeout 24 Time period (in hours) after which a session
will be closed and auditing of this session will
be stopped for Trailing DB Audit Logs.
TrailDBSybase EventsInterval 120 The number of seconds that will be added
to last record to determine the upper limit
for reading events from sybsecurity..sysaudits
tables and table for archive of audited events.
Refer to Configuring Audit Trail for auditing
Sybase database queries on page 159
TrapAuditError OIDSuffix .4 Suffixes to add to Enterprise OID for SNMP
notifications on corresponding events
TrapAuthentication OIDSuffix .2
TrapBackendEvents OIDSuffix .5
TrapConfiguration .1
ChangesOIDSuffix
TrapCoreEvent OIDSuffix .3
TrapErrorOIDSuffix .2
TrapInfoOIDSuffix .3
TrapMetadataChanges .7
OIDSuffix
TrapRuleTrigger OIDSuffix .6
TrapWarning OIDSuffix .1
UIType 1
UpdateBlocked UsersPeriod 10
UpdateLimitsPeriod 10
UpdateMetadata Enabled
CacheByQueries
UpdateMetadata 5
CommitPeriod
UpdateMetadataCompletelyInTransaction Disabled Used when updating metadata if SQLite is used as the Dictionary. Metadata update works in the following way:
1. Getting new metadata
2. Reading old metadata from the Dictionary
3. Sorting old metadata
4. Merging old and new metadata and saving it in the Dictionary
If this parameter is disabled, a transaction in the Dictionary is created before step 4, which locks the Dictionary for a shorter time. If it is enabled, the transaction is created before step 2, which is a more reliable method
UpdateMetadataLoad Enabled Enables or disables loading system objects
SystemObjects metadata during "Update Metadata"
procedure
UpdateMetadata Enabled Enables or disables sending notifications
NotifyWorkers about metadata changes to active workers
during "Update Metadata" procedure
UpdateMetadataOn Enabled If disabled, disables updating of metadata
CreateNewInstance when creating a new Database Instance
UpdateMetadataSleep 50
AfterCommit
UseCurrentSession 0
AndDatabaseName
UseKerberosMapping Enabled Enables native Kerberos authentication:
UsingNativeAuthentication user name and password are passed to
local handshake on proxy side and the
authentication is performed with API (MS SQL
Server only)
UseMessageHandler Enabled If this parameter is enabled, then the
ThreadsForPacketParsing MessageHandler thread pool will be used to
parse data
UseMetadataFunctionDDL Disabled Enable/disable masking of stored procedures
and functions
UseMetadataInfoCache Disabled This parameter controls the caching of
metadata information in a web session.
• Enabled - saves information between
requests. It can increase efficiency, but it
can lead to artifacts when the metadata
changes frequently
• Disabled - more reliable, but may work
slower

UseMetadataViewDDL Disabled Enable/disable masking of VIEWs


UsePerSessionMetadata Enabled Enable/disable updating of the metadata cache with DDL queries, taking transactions into account
UseProactiveProxy Enabled
UseRDSAuditFree Enabled Enabling this option activates the checking of
SpaceMonitor free space in your AWS RDS Audit database.
If the space runs out, then warnings will
be displayed in the system monitor or
auditing will be disabled. This parameter
depends on the RDSFreeSpaceLimitAlert and
RDSFreeSpaceLimitStop parameters
UseSimpleQueries Disabled Simple queries will be used to clear the audit.
ToAuditClean (Currently works only for Postgres)
ValidateEncryption Functions Enabled Disable/enable encryption functions
validation for the Encryption tab
VerticaErrorCode 5555
VerticaParserHandle Enabled Enable data handling
CoreObject
VerticaParserTestMode Disabled Vertica parsing in test mode (for debugging)
VerticaProtocolReader 524288 Maximum size of Vertica protocol reader
BufferWatermark packet buffer
WebLoadBalancerEnabled 0
WebSessionTimeout 11 Number of minutes that must elapse since
last user activity detected in the Web Console
before the session times out
WebThreadCount 50 Number of threads used to process the Web
Console's requests
WorkersUpdatePeriod 3000 Period for checking states of proxies and Core processes. Proxies can be enabled or disabled. The Core process starts or stops depending on the proxy state
WriteLogDateTime TIME A log message timestamp format. Available values:
• TIME - a timestamp is 'hh:mm:ss.ssssss'
• DATE - a timestamp is 'YYYY-MM-DD hh:mm:ss.ssssss'
• <empty or any other value> - log messages will be displayed without a timestamp

14.4 ExternalJSONMetadata additional parameter
To provide better flexibility and handle cases where the expected field names do not match the ones automatically generated by the Secrets Manager, DataSunrise implements a special Additional Parameter for this use case. You can find it under the ExternalJSONMetadata name. This parameter maps the key JSON fields required for the RPC used for instance creation/update from the external configuration file.
Here's the list of mappable parameters:
• username: database user name used for establishing the connection to extract required database metadata
• password: database user password
• engine: database engine name
• host: hostname where the Database Instance is hosted
• port: database TCP port
• dbname: name of the database to connect to (required to build the connection string)
• asSysdba: Oracle-specific parameter
The ExternalJSONMetadata parameter's value contains a string in the form of a JSON object. For example:

{
"keyNames": {
"dbType" : "engine",
"pass" : "password",
"user" : "username"
},
"engineNames": {
"AWS Aurora Postgres" : "aurorapgsql",
"AWS Aurora MySQL" : "aurora",
"My Bill Gates Super DBMS" : "mssql"
}
}

The "keyNames" section of a mapping document allows you to provide synonyms for the key fields required in order
to create a DB instance in DataSunrise.
The "engineNames" section of a mapping doc allows you to add more synonyms for the database engine names.
For example, PostgreSQL can be called by multiple names (e.g pg, pgsql, postgres (AWS RDS favorite), postgresql,
PostgreSQL etc), so it would be a good idea to gather this data in advance and aggregate it into the mapping
14 System Settings | 383
parameter to ensure that the automated process will not fail due to unknown database engine name.. Parameter
values might be the following (note that database type names have corresponding synonyms):

{ "mssql", dtMsSQL },
{ "oracle", dtOracle },
{ "db2", dtDb2 },
{ "postgresql", dtPgSql },
{ "mysql", dtMySQL },
{ "netezza", dtNetezza },
{ "teradata", dtTeradata },
{ "greenplum", dtGreenplum },
{ "redshift", dtRedShift },
{ "aurora", dtAuroraMySQL },
{ "mariadb", dtMariaDB },
{ "hive", dtHive },
{ "sap hana", dtHana },
{ "vertica", dtVertica },
{ "mongodb", dtMongoDB },
{ "aurorapgsql", dtAuroraPgSql },
{ "aurorapostgres", dtAuroraPgSql },
{ "dynamodb", dtDynamoDB },
{ "elasticsearch", dtElasticSearch },
{ "cassandra", dtCassandra },
{ "impala", dtImpala },
{ "snowflake", dtSnowflake },
{ "informix", dtInformix },
{ "athena", dtAthena },
{ "s3", dtS3 },
{ "sybase", dtSybase },

You can assign multiple synonyms in both the keyNames and engineNames sections. If no match is found, DataSunrise checks whether the standard key names are used; otherwise, you will receive an error message in the response saying that such a key is not known or does not exist. Neither keyNames nor engineNames is compulsory, so you can use either one of them, both, or none.
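For illustration only, the mapping above would allow DataSunrise to consume a secret like the following hypothetical one (every key name and value below is made up for this sketch): keyNames translates dbType, user and pass into engine, username and password, and engineNames resolves the synonym "AWS Aurora Postgres" to aurorapgsql.

{
  "dbType": "AWS Aurora Postgres",
  "user": "metadata_reader",
  "pass": "example-password",
  "host": "aurora-cluster.example.internal",
  "port": 5432,
  "dbname": "sales"
}

The remaining keys (host, port, dbname) already match the standard names, so they need no entry in keyNames.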

14.5 Audit Storage Settings
This UI section enables configuring of the database DataSunrise uses to store auditing data (the Audit Storage). DataSunrise can use the integrated SQLite database or an external PostgreSQL, Aurora MySQL, MySQL or MS SQL Server database as the Audit Storage. Note that you can use a custom connection string for advanced configuration of the database connection.
Each database type has its own pros and cons, so the choice of the Audit Storage database type mostly depends on available licenses and your preferences. SQLite is a good choice for small systems with low network load, and SQLite is the only database that supports rotation of audit.db files. Nevertheless, we don't recommend using SQLite for systems with high network load; we recommend PostgreSQL or MySQL as the Audit Storage instead.
To increase performance, an Audit Storage database can be configured to use read replicas to offload read transactions from the primary database and increase the overall number of transactions. To do this, specify the host names of the read replicas in the corresponding field.
Audit Storage subsection
Interface element: Description
• Database Type drop-down list: Type of a database used to store auditing data
• Folder Name text field (for SQLite only): Location of the folder the SQLite DB is installed in
• Specify Connection Parameters radio button: Specify connection parameters for the Audit Storage database
• Database Host text field (if the Specify Connection Parameters radio button is activated): IP address of the database used as the Audit Storage
• Use Read Replicas check box: Enables secondary replicas settings
• Read replicas of the primary database hosts text field (if the Use Read Replicas check box is checked): IP addresses or hostnames of the read replicas of the primary database. Each hostname should be input on a new line (a line for each hostname or IP address)
• Database Port text field (if the Specify Connection Parameters radio button is activated): Database's port number
• Database name text field (if the Specify Connection Parameters radio button is activated): Database's name
• Schema: The schema to store the audit tables in
• Authentication method drop-down list: Method of authentication: Regular (login/password) or IAM Role
• Login text field (if the Specify Connection Parameters radio button is activated): User name used to access the database
• Save password drop-down list: Method of saving the database password: Save in DataSunrise; Retrieve from CyberArk (in this case you should specify CyberArk's Safe, Folder and Object in the corresponding fields); Retrieve from AWS Secrets Manager (you should specify the AWS Secrets Manager ID); Retrieve from Azure Key Vault (you should specify the Secret Name and Azure Key Vault name)
• Password text field: Password to access the Audit Storage database
• Specify a Custom Connection String radio button: Activate to enable a custom connection string to access the Audit Storage
• Custom Connection String text field (if the Specify a Custom Connection String radio button is activated): Input a custom connection string
• Test Connection button (if the Specify Connection Parameters radio button is activated): Click to test the connection with the Audit Storage database
• Save button: Save the Audit Storage's settings

Important: there is a risk that an external Audit Storage can become non-operational and audit data collected
at that time can be lost. For such cases DataSunrise includes the Emergency Audit feature. This feature enables
automatic saving and storing of audit data in an external file if a connection with the Audit Storage is lost. Once the
connection with the Audit Storage database is restored, DataSunrise uploads the data from that file to the Audit
Storage. Note that temporary audit data files are stored in the DataSunrise's installation folder, in separate folders
for each Audit Storage database available (for example, if you have three different Audit Storages, you will have
three folders. Note that only one Audit Storage can be used). Names of the folders that contain audit data files are
created using the base64 method.
You can configure the Emergency Audit by changing the following parameters in the DataSunrise's Additional
Parameters (System Settings → Additional Parameters):
• AuditOperationDataLoadInterval: size of operation data to be reached before being uploaded to the Audit Storage
• AuditOperationMetaLoadInterval: size of metadata to be reached before being uploaded to the Audit Storage
• AuditOperationDatasetLoadInterval: size of operation datasets to be reached before being uploaded to the Audit Storage
• AuditOperationRulesLoadInterval: size of Rules-related data to be reached before being uploaded to the Audit Storage
• AuditOperationExecLoadInterval: size of operation executions to be reached before being uploaded to the Audit Storage
• AuditSubQueryOperationLoadInterval: size of subquery operation data to be reached before being uploaded to the Audit Storage
• AuditOperationsLoadInterval: size of operation logs to be reached before being uploaded to the Audit Storage
• AuditSessionsLoadInterval: size of session data to be reached before being uploaded to the Audit Storage
• AuditTransactionsLoadInterval: size of operation transactions data to be reached before being uploaded to the Audit Storage
• AuditConnectionsLoadInterval: size of connection data to be reached before being uploaded to the Audit Storage
• AuditSessionRulesLoadInterval: size of session rules data to be reached before being uploaded to the Audit Storage
• AuditOperationGroupsLoadInterval: size of operation groups data to be reached before being uploaded to the Audit Storage
• AuditTrafficStatLoadInterval: size of traffic statistical data to be reached before being uploaded to the Audit Storage
• AuditRulesObjectDetailLoadInterval: size of object details data to be reached before being uploaded to the Audit Storage
• AuditRulesStatLoadInterval: size of Rules statistical data to be reached before being uploaded to the Audit Storage

Refer to Additional Parameters on page 337 for description of these parameters and the way to configure them.

14.5.1 Audit Storage Compression
If you are using SQLite as an Audit Storage, you can reduce the audit.db file size by compressing it. For this, do the
following:
1. Navigate to System Settings → Audit Storage
2. In the Audit Storage Compression subsection, click Compress and confirm the operation in the pop-up
window.

14.5.2 Rotation of audit.db Files
In case your SQLite Audit Storage has grown too large, you can create a new audit.db file and keep the possibility
to view the contents of old audit.db files. You can configure automatic creation of audit.db files or do it manually.
Note that only one audit.db file can be used by DataSunrise for writing the audit data in (the latest one). You can
read old auditing data from an older audit.db file but it stays accessible for reading during a current session only,
and then the latest file will become active. Note that the rotation is only applicable to the SQLite used as the Audit
Storage.

14.5.2.1 Configuring Automatic Rotation of audit.db Files
To schedule the automatic creation of new audit.db files, do the following:
1. Navigate to System Settings → Additional Parameters (refer to subs. Additional Parameters on page 337)
2. Configure automatic rotation by changing the following parameters' values:
Parameter: Description
• AuditRotationAgeThreshold: Time to store the current audit.db file before creating a new one
• AuditRotationMaxCount: Maximum number of audit.db files to store
• AuditRotationSizeThreshold: Maximum size the current audit.db file can reach before creating a new audit.db file
3. Click Save to save each parameter.

14.5.2.2 Manual Rotation of audit.db Files
To rotate the audit.db files manually, do the following:
1. Navigate to System Settings → Audit Storage
2. In the Rotated Files subsection, click Enable to convert audit.db to the new format (split it into two files)
3. Click Rotate to create a new audit.db file to write audit data to. All existing audit.db files will be displayed in the
table:
Column Description
ID The audit.db's counting number
Current for Is the audit.db active or not
Reason The reason why the audit.db file was created
Rotate Time Time at which the audit.db file was created
End Time of Audit Time at which the audit.db file became not active

4. You can use an audit file during a current DataSunrise user session only. When a session is closed, DataSunrise
automatically switches an active file to the latest available file.
Select a required audit.db in the table and click Switch to Selected to make the selected audit.db active.

14.5.2.3 Setting Limit for DataSunrise Rotated Audit Files
If you're using SQLite as the Audit Storage and Audit Rotation is enabled, we recommend you to limit the maximum
volume of audit_X.db files to be stored at your DataSunrise server. This is how you can do it:
1. Navigate to System Settings → Additional Parameters (refer to subs. Additional Parameters on page 337)
2. Configure the AuditRotationTotalSize parameter. It defines the maximum overall size of all audit_X.db files
stored at the server. By default, it's set to 0 which means that you can store audit_X.db files of unlimited size.
To prevent overflowing of the storage, we recommend you to limit the overall size of the files depending on the
available storage size, frequency of backing up and security policies of your company.

14.5.3 Clean Storage
This UI subsection enables cleaning of the database used as the Audit Storage.
Clean Storage subsection
Interface element: Description
• Radio button: The following values are available:
  • Clean tables using the DELETE operation: delete the contents of database tables that contain audit data with the DELETE operation
  • Drop and recreate tables. Then restart will be performed: delete database tables that contain audit data with the DROP operation
  • Remove all Events before the date: remove all audit events recorded before the specified date
• Clean button: Delete audit data in the Audit Storage database (DELETE or DROP depending on which one is selected)
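For illustration, the two cleaning modes differ in the SQL they run against the audit tables: the first issues DELETE statements (rows are removed, the table structure is kept), the second drops the tables so they can be recreated. The sketch below is not the exact SQL DataSunrise executes; operations and begin_time are hypothetical table and column names used only to show the difference:

-- Clean tables using the DELETE operation: rows are removed, the structure is kept
DELETE FROM operations;

-- Remove all Events before the date: a date-bounded variant of the same approach
DELETE FROM operations WHERE begin_time < '2023-01-01';

-- Drop and recreate tables: the table is removed entirely and recreated after the restart
DROP TABLE operations;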

14.5.4 Encrypting Audit Storage (PostgreSQL) while DataSunrise instance is running
You can encrypt audit data stored in your PostgreSQL-based Audit Storage database. Note that encryption is
IRREVERSIBLE. To enable Audit Storage encryption, do the following:
1. Navigate to System Settings → Additional Parameters and disable the AuditDataPgSQLUseLoad parameter if it's enabled. It is also necessary that the database you're planning to use as the Audit Storage is completely empty (you might want to create a completely new database)
2. Navigate to System Settings → Audit Storage and clean your audit tables using either DELETE or DROP (Clean
Audit section of the page)
3. Connect to your Audit Storage database as a root user and execute the following command:

CREATE EXTENSION IF NOT EXISTS pgcrypto;

4. Navigate to System Settings → Audit Storage → Encryption, select a place to store your encryption key at in
the Key storage drop-down list
5. Input an encryption key into the Key field and click Enable to start the encryption process
6. Restart the DataSunrise system service
7. To ensure that everything is OK, verify that operations.sql_query and operation_data.data in your Audit Storage
are encrypted.
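For step 7, a minimal check (a sketch only, assuming the table and column names referenced in step 7, operations.sql_query and operation_data.data) is to select a few rows and confirm that they no longer contain readable SQL text:

-- Values should look like encrypted binary data rather than readable SQL text
SELECT sql_query FROM operations LIMIT 5;
SELECT data FROM operation_data LIMIT 5;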

14.5.5 Encrypting the Dictionary (PostgreSQL) while DataSunrise instance is running
You can encrypt the data stored in your PostgreSQL-based Dictionary database. Note that encryption is
IRREVERSIBLE. To enable Dictionary encryption, do the following:
1. Connect to your Dictionary database as a root user and execute the following command:

CREATE EXTENSION IF NOT EXISTS pgcrypto;

2. Navigate to System Settings → General → Advanced Dictionary Operations and select Encryption of
Dictionary in the Operation drop-down list
3. In the Key Storage drop-down list, select a place to store the encryption key that should be used
4. Input your encryption key into the Key field
5. Click Enable to start an encryption process
6. Restart the DataSunrise system service and check your Dictionary columns.

14.5.6 Audit Storage Table Partitioning
Table partitioning enables you to split large tables into sections (smaller tables). It helps to increase performance when deleting old audit data. Partitions are created at initialization of the partition manager and on schedule. Partitions are created in advance until the number of pre-created partitions reaches the AuditPartitionCountCreatedInAdvance parameter's value.

14.5.6.1 Audit Storage Table Partitioning (PostgreSQL)
To enable partitioning for PostgreSQL, do the following:
1. Select a PostgreSQL database as the Audit Storage and save the profile.
2. Open the Audit Storage's profile and click Enable in the Partitions section of the page. Set duration of the
partitions in Partitions Length
3. You can also configure partitioning with the following parameters (System Settings → Additional Parameters):
Parameter Description
AuditPartitionCountCreatedInAdvance Number of partitions created in advance
AuditPartitionFirstEndDateTime Date/time of the end of a first partition. This time is used to define
partition borders
AuditPartitionFutureRecreateTime Time at which all future partitions are removed and at least one
partition is created
AuditPartitionShort Partition length (0 for days, 1 for minutes). Minutes used for
debugging purposes only
AuditPartitionTrace Enable Partitioning tracing

14.5.6.2 Audit Storage Table Partitioning (MySQL)
Partitioning for MySQL is configured in the same way as for PostgreSQL (see the instruction above).

14.5.6.3 Audit Storage Table Partitioning (MS SQL Server)
Note: Partitioning is supported only in MS SQL Enterprise Edition and Azure.
Partitioning for MS SQL Server is configured in the same way as for PostgreSQL (see the instruction above).

14.6 SQL Parsing Errors
This subsection enables you to view reports on errors that occurred during parsing of database user queries intercepted by DataSunrise. To enter this section, navigate to System Settings → Query Parsing Errors.
To view an SQL error report, do the following:
1. Select the initial date of the required date range by clicking From. A date and time chooser will appear.
2. Select the end date of the required date range by clicking To. A date and time chooser will appear.
3. To update the SQL errors list, click the Refresh button.

14.7 Syslog Integration Settings
DataSunrise can export data collected by the Data Audit module and DataSunrise's System Events to external SIEM systems via Syslog. For the Syslog settings, navigate to Configuration → Syslog settings.
The Syslog subsection contains the following settings:
• Syslog remote server settings
• CEF codes of messages to be transferred via Syslog. Refer to subs. Syslog Settings (CEF Groups) on page 222.
Header of Messages subsection.

UI element Description
Product text field Software program name to be included into the message header (DataSunrise
Database Security by default)
Vendor text field Vendor name to be included into the message header (DataSunrise by default)
Product Version text field Product version number to be included into the message header
CEF Version text field CEF protocol version number (this protocol is used to create a message string)
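
For reference, the Common Event Format combines these header fields into a single message prefix of the form
CEF:Version|Vendor|Product|Product Version|Event Class ID|Name|Severity|Extension. The line below is only an
illustration built from the default values mentioned above; the event class ID, event name, severity, and extension
parts are filled in per event, and the values shown for them here are placeholders.

CEF:0|DataSunrise|DataSunrise Database Security|9.0|<event class ID>|<event name>|<severity>|<extension key-value pairs>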

Remote Server Settings subsection.

UI element Description
Local Syslog/Remote Syslog radio button Syslog server to receive DataSunrise auditing data. The following
variants are available:
• Local Syslog
• Remote Syslog

Protocol Type drop-down list (if Remote Syslog Protocol that should be used to export data to a remote Syslog
server is selected) server. The following variants are available:
• RFC_3164
• RFC_5424

Remote Host text field (if Remote Syslog server is selected) Hostname of a remote Syslog server
Remote Port text field (if Remote Syslog server is selected) Port number of a remote Syslog server

14.8 DataSunrise User Settings


This subsection enables you to create new and edit existing DataSunrise user profiles. By default, there is only one
DataSunrise user — admin with administrative privileges. It is impossible to delete the admin profile.

Important: Do not confuse DataSunrise users with target database users (Database Users on page 103). A
DataSunrise user is a person with legitimate rights to access the DataSunrise's Web Console and manage its settings.

14.8.1 Creating a DataSunrise User


To create a new DataSunrise user profile, navigate to System Settings → Access Control → Users and do the
following:

1. Click Add User


2. Input required information into the Add User tab:
Parameter Description
Login User's logical name (any name)
Role A role to assign to the User (see description of Roles below)
Email User's email address
Network Auth Enable if you need to authenticate over network (AD, Kerberos, LDAP)
Generate Password Generate a random password
Must Change on Next Logon Prompt User to change the password on next log on

3. Set a user password (input it once again to confirm)


4. Click Save to save the profile.

14.8.2 User Roles


A User Role is a set of privileges over the DataSunrise Web Console's objects assigned to a DataSunrise
User. You can achieve segregation of duties by assigning different Roles to different DataSunrise users. To access
Role settings, navigate to System Settings → Access Control → Roles.
DataSunrise includes the following prebuilt Roles:
Role Description
Admin Default role. Has all possible privileges. Can't be edited
DS Admin DataSunrise administrator (has all privileges but cannot create, delete and edit users)
Operator A user with read-only access to the Web Console
Security Manager A user that can create and manage DataSunrise Users and is granted read-only access
to other elements of the Web Console.

14.8.3 Creating a Role


Along with prebuilt Roles, you can create your own custom user Roles:
1. Go to the Access Control → Roles subsection and click Add Role.
2. Enter a logical name of the Role into the Name field.
3. Enter an Active Directory group name into the Group DN field, if necessary.
This feature enables automatic creation of a DataSunrise user on the first successful AD login. When an AD user
logs in to DataSunrise for the first time and there is no associated DataSunrise user profile, such a profile will be created.
When an AD group name is specified in the Group DN field in the role’s settings, DataSunrise binds the current
role to the specified AD group.
For example, if an AD user is included in "SuperUsers" and "Developers" AD groups, and there are DataSunrise
user roles with "Example/SuperUsers" or "Example/Developers" specified in the Group DN field, these roles will
be assigned to the user. If the Group DN field is empty, DataSunrise leaves the role “as is”.

Note: On Windows, an AD group name should be specified in the following format: <DOMAIN>\<GROUP>.
Example: "DB\access_manager". On Linux, an AD group name should be specified in the following
format: <REALM>\<GROUP>. Example: DB.LOCAL\access_manager.

4. Specify Web Console objects and what a user can do with respect to these objects in the Objects subsection: select
an object in the list and check the privileges to grant.

Object Description
AI Detection of Users If DELETE is disabled, changing and cleaning of Audit Storage is not allowed
AWS S3 Inventory Items Getting S3 object's metadata by the means of S3 Inventory
Access Custom File If disabled, uploading of a file for creating a new Resource Group or backup
restoring is not allowed
Active Directory Mapping Authentication Proxy (Configuration → Databases → Actions →
Authentication Proxy settings)
Application Data Model Resource Manager Data Model
Application User Settings Application Users Capturing
Applications Applications
Audit Rules Audit Rules
Blocked Users Blocked Users
Compliance Manager Compliance Manager
DSAR Configuration DSAR Configuration
DSAR Field DSAR Fields
Data Discovery Filters Data Discovery filters
Data Discovery Groups DD Scan Groups
Data Discovery Incremental Data AWS S3 Data Discovery Incremental Scanning Mode
Data Discovery Incremental Group Data Discovery Incremental Group
Data Discovery Task Error Data Discovery Task errors
DataSunrise Servers Actions with DataSunrise Servers
Database Instance Users Actions with Database Users (Configuration → Database Users)
Database Instances Actions with DB Instances (Configuration → Databases)
Database Interfaces Actions with DB Interfaces (Configuration → Databases → DB Profile)
Database Properties (Displaying Database Properties on page 62)
Database Users Actions with Database Users
Databases Display a list of database properties. If disabled, DB properties (Configuration
→ Databases) are not displayed. If INSERT is not allowed, a new DB Instance
can't be created
Deferred Task Info Display information about deferred Data Discovery Tasks
Dynamic SQL Replacements Dynamic SQL (available for PostgreSQL, MySQL and MS SQL Server)
Encryptions Encryptions (Encryptions on page 108)
Entity Groups Lists of Audit, Security, Masking, Learning Rules. If disabled, a list of Rules is not
displayed
Function Replacements Data Masking inside functions
Groups of Database Users DB User groups (Database Users on page 103)
Groups of Hosts Creating a Group of Hosts on page 210
Hosts Creating a Host Profile on page 209
Instance Properties Creating a Target Database Profile on page 58
Instance Users Creating a DataSunrise User on page 390
LDAP Servers LDAP on page 396
Lexicon Groups Discovering Sensitive Data Using Lexicon on page 251
Lexicon Items Creating a Lexicon on page 251
License Keys License keys
Lua Script Discovering Sensitive Data Using Lua Script on page 251
Masking Rules Creating a Dynamic Data Masking Rule on page 165
Metadata Columns Access to DB Instance metadata columns
Metadata Objects Access to DB Instance metadata objects
Metadata Schemas Access to DB Instance metadata schemas
Object Filters Object Group Filter on page 115
ObjectGroups Object Groups on page 203
Pair of Associated Columns Table Relations on page 400
Periodic Tasks Periodic Tasks on page 222
Proxies Display Proxies
Queries Display Query Groups
Queries Map Queries Map Parameters on page 302
Query Groups Query Group Parameters on page 292
Resource Manager Deployment Resource Manager on page 275
Resource Manager Templates Template Structure on page 275
Results of VA Scanner VA Scanner on page 263
Roles DataSunrise Roles (System Settings → Access Control → Roles)
Routine Parameters Creation of replacement functions and views during data masking
Rule Subscribers Rule Subscribers
SSL Key Groups SSL Key Groups on page 106
SSL Key Connection encryption keys (Configuration → SSL Key Groups)
SSO Services Single Sign-On in DataSunrise on page 46
Schedules Schedules on page 219
Security Guidelines Available Security Guidelines (VA Scanner → Scan Tasks → New → Choose
Guidelines)
Security Rules Data Security Rules
Security Standards Security Standards for Data Discovery (Data Discovery → Security Standards)
Sessions Active database sessions
Sniffers Available Sniffers (Configuration → Databases → DB Instance → Sniffers)
Subscriber Servers Configuring an SMTP Server on page 212
Subscribers Subscriber Settings on page 212
Syslog Configuration Groups Syslog Settings (CEF Groups) on page 222
Syslog Configuration Item Syslog configuration (Configuration → Syslog Settings → Syslog Settings)
System Settings System Settings
Table Reference Actions with Table Relations
Tags Tags on page 199
Tasks Periodic Tasks on page 222
Temporary Files Temporary files
Trailing the Db Audit Logs Trailing the DB Audit Logs mode. Used for auditing (Configuration →
Databases → DB profile → Capture Mode → Trail DB Audit Logs)
Users DataSunrise Users (System Settings → Access Control → Users)

Privileges enable you to do the following:


Privilege Action
Delete Delete object entry
Edit Edit object's settings
Insert Create an object
List View a list of objects
View View object's settings

5. Specify Web Console actions a user can execute, in the Actions subsection:

Action Description
Audit Cleaning System Settings → Audit Storage → Clean Audit
Audit Storage Changing System Settings → Audit Storage → Audit Storage
Change Audit Storage Encryption Settings System Settings → Audit Storage → Database Type
Change Dictionary Encryption Settings System Settings → General → Advanced Dictionary Operations →
Encryption of Configuration Files
Change Password Settings System Settings → Access Control → User → Change Password
DataSunrise Starting DataSunrise Backend startup (System Settings → Servers → Your server →
Core and Backend Process Manager → Actions)
DataSunrise Stopping DataSunrise Backend stop (System Settings → Servers → Your server → Core
and Backend Process Manager → Actions)
DataSunrise Updating DataSunrise update (System Settings → About → System Info → Download
Latest)
Dictionary Cleaning System Settings → General → Advanced Dictionary Operations → Clean
Dictionary
Dictionary Restoring System Settings → General → Configuration Control → Upload Backup
Discovery Column Content Displaying Displaying matching snippets (sensitive data) in Data Discovery results
Flush Enable enforced synchronization of Backend and Core with the flush CLI
command. Used for testing purposes
Logs management Logging settings (System Settings → Logging and Logs)
Manual Audit Rotation System Settings → Audit Storage → Rotated Files → Rotate
Manual Dictionary Backing-up System Settings → General → Configuration Control → Create Backup
Original Query Displaying Needed to get audited events
Query Bindings Displaying Bind variables logging (Audit → Rules → Action → Log Bind Variables)
Query Results Displaying Query Results logging (Audit → Rules → Action → Log Query Results)
Reading Database Data The ability to preview data in the Object Tree during the creation of Rules, Tasks,
and Compliance.
View Dynamic Masking Events Masking → Dynamic Masking Events
View Event Description Masking → Dynamic Masking Events → Event Description
View Operation Group System Settings → Operation Group
View Query Parsing Errors System Settings → Query Parsing Errors
View Security Events Security → Events
View Session Description Audit → Session Trails → Session Details
View Session Trails Audit → Session Trails
View Top Blocked Queries Per Day Dashboard → Top Blocked Queries per Day
View Transaction Trails Audit → Transactional Trails

6. Click Save to save the Role.



14.8.4 Password Settings


Password Settings enable you to customize your DataSunrise user passwords. Such customization includes setting
password length, characters used, etc. To access Password settings, navigate to System Settings → Access Control
→ Password Settings.

Note: Password Settings can be edited only by DataSunrise users with the privilege of editing such settings
(System Settings → Access Control → Role → Edit Role → Actions → Change Password Settings).

UI element Description
Minimum Password Length field Minimum length of a password string
Maximum Password Length field Maximum length of a password string. Unlimited by default
Special Symbols field Special characters that may be used when setting a password
Use Letter... check boxes Self-explanatory
Old Password Storing Count Days field Number of days to store an old password for

14.8.5 Limiting Access to the Web Console by IP Addresses


DataSunrise enables you to restrict access for DataSunrise Users to the Web Console by IP addresses:
1. Navigate to the Access Control → Users subsection, locate an existing User of interest and click its name for
editing.
2. Navigate to the Control Access to the Web Console by IP subsection.
3. Add existing Hosts or Host Groups (refer to subs. IP Addresses on page 209) to Allow Access to enable access
for users connecting from these hosts, or add them to Deny Access to prohibit access for users connecting from
the added hosts.
4. Click Save to save the settings.

14.9 Logs
This tab enables you to view system logs of DataSunrise's modules. Navigate to System Settings → Logging and
Logs to get to the Logs tab.
Use the Log Type drop-down list to switch between logs and the Server drop-down list to select a DataSunrise
server to show a log for (if multiple servers exist).

14.10 LDAP
The LDAP subsection contains LDAP servers' settings. An LDAP server is required to configure the Authentication Proxy
(mapping of Active Directory users to database users). For more information on the Authentication Proxy, refer to the
DataSunrise Admin Guides.
To create a new LDAP server, do the following:
1. Navigate to LDAP and click Add LDAP Server to access the server's settings
2. Fill out the required fields:

Interface element Description


Logical Name Logical name of the LDAP server's profile (any name)
Group Attribute A search filter used to filter user groups by attribute. Used for mapping of AD user
groups
Host LDAP server's host
Login Type Server type
Port LDAP server's port number
Login Custom Format To build the format for an LDAP login, replace the dots in a DNS name with
commas. For example, CN=Test.OU=Europe.O=Novell would become
CN=Test,OU=Europe,O=Novell. If you are not using Novell LDAP, it would become
CN=Test,OU=Europe,DC=Novell,DC=com, depending on the domain (DC) you use for
authentication.
DataSunrise supports the following patterns: <name>, <domain>, <basedn>, which
are replaced automatically. For example:
• Active Directory: <domain>\<name>
• OpenLDAP: cn=<name>, <basedn>

SSL check box Use SSL for connection


Domain LDAP server domain name. Needed for creation of an LDAP login.
Login LDAP user name. Needed for authentication and execution of queries by a privileged
account. Used for mapping groups and AD authentication in the Web Console
Base DN Distinguished Name (DN) is a database to search across. DIT (Directory Information
tree) to start data search from
Save Password Method of saving an LDAP password:
• Save in DataSunrise
• Retrieve from CyberArk. In this case you should specify CyberArk's Safe, Folder and
Object (fill in the corresponding fields)
• Retrieve from AWS Secrets Manager. In this case you should specify AWS Secrets
Manager ID
• Retrieve from Azure Key Vault. You should specify Secret Name and Azure Key Vault
name to use this feature

Password (if an LDAP LDAP user password. Needed for authentication and execution of queries by a
password is saved in privileged account. Used for mapping groups and AD authentication in the Web
DataSunrise) Console
Is default check box Use the current LDAP server as the default one
User Filter Expression that defines the criteria for selecting catalog objects within the search
area defined by the "scope" parameter, i.e. a search filter used to search for user
attributes

3. Having configured an LDAP server, click Test to test the connection between DataSunrise and the server. Click
Save to save the server profile.

14.11 Servers
The System Settings → Servers subsection displays existing DataSunrise servers. For more information on
DataSunrise multiple servers, refer to the DataSunrise Admin Guide. To access Server settings, do the following:
1. Select a required server in the list and click its name to access the server's settings
2. Reconfigure a server if necessary:
Interface element Description
Main Settings
Logical Name Logical name of the DataSunrise server (instance)
Host IP address of the server the Instance is installed on
Backend Port DataSunrise Backend's port number (used to access the Web Console)
Core Port DataSunrise Core's port number
Use HTTPS for Backend Process Use HTTPS protocol to access the Backend
Use HTTPS for Core Processes Use HTTPS protocol to access the Core
Core and Backend Process Manager
Table with Core processes Each Proxy uses its own Core process. Select a process to take actions with
and use the Restart/Start/Stop buttons from the Actions drop-down list.
File Manager
Drop-down list with available Select the file of interest and use the Upload button to upload your local file
DataSunrise files to the current server. Or use the Download button to download the file of
interest from the current server
Server Info
Table (not configurable) Displays information about the current server (refer to About on page 399)

14.12 Operation Group


The Operation Group subsection contains a list of unique queries that were audited by DataSunrise's Data Audit
component (the contents of the operation_group table of the Audit Storage database, to be exact). Note that these
queries are for information only, i.e. you cannot include them in any group or use them elsewhere.

14.13 Queries Map


Queries Map enables you to create a list of DDL queries for a certain database. This Type can be used when
configuring Rules' Query Types filters. Note that some databases don't have a queries map, so you need to fill
it in yourself.
To add a new Query Type, do the following:
1. Navigate to System Settings → Queries Map, select a database to add a Type for and click Create
2. Select a Query Type from the list of DDL commands to add to the list of Types
3. You can also add Synonyms for your Query Types. To do it, add some Query Types to the list, then check a Query
Type of interest in the list and in the Actions menu click Add Synonym to associate the selected Query Type
with another Query Type from the list of Types. Note that the associated Query Type will be removed from the list
4. Now you can use your Query Types while configuring a Rule.

14.14 About
This subsection displays general information about DataSunrise and contains the License manager:
Parameter Description
License type DataSunrise license type
License Expiration Date DataSunrise license expiration date
Version DataSunrise version number
Backend UpTime Backend operating time
Server Time Current server time
Main Dictionary Default Dictionary database used (Dictionary location)
Current Dictionary Dictionary database currently used
Default Dictionary Version Default Dictionary database version number
Current Dictionary Version Current Dictionary database version number
OS Type DataSunrise server operating system type (Windows or Linux)
OS Version DataSunrise server operating system version
Machine DataSunrise server hardware information
Node Name DataSunrise server name (PC name)
Encoding Current encoding
Server DataSunrise server the license is applied to
Audit Version • For SQLite-based Audit Storage: main part version / rotated part
version
• For remote Audit Storage: audit version

License subsection Contains the License manager. Displays available licenses.


Add license button Adds a new license to the list
Remove button Deletes the selected license
Update button Displayed only if an update is available

15 Table Relations
The Table Relations feature enables DataSunrise to analyze database traffic and create associations between
database columns. "Associated columns" means that columns can be linked by integrity constraints or by JOIN and
WHERE clauses in queries. For access to Relations' settings, navigate to Configurations → Table Relations.
Associations are used:
• When configuring Dynamic and Static Data Masking, suggestions on possible associations may be given
when selecting columns to be masked. When selecting a column associated with another column, you will be
notified that such associations exist. You can then include an associated column in a Rule or a Static
Masking task.
• Columns associated with columns retrieved by Data Discovery tool will be shown too (refer to Periodic Data
Discovery on page 248)
DataSunrise builds associations using the following methods:
• Integrity constraints, such as foreign and primary keys. When creating an instance, the Search for Table
Relations... check box should be checked so that associations are analyzed during a metadata update too.
During the database metadata update, a default_model.<instance_name> Table Relation will appear.
It is a default database model with associations that is updated after every metadata update
• Analysis of JOIN and WHERE clauses in database traffic using a Learning Rule (Database Traffic Analysis on page
405)
• Analysis of JOIN and WHERE clauses in database query history using a dedicated Periodic Task (Database Query
History Analysis on page 400)
• Analysis of functions, views and procedures for JOIN and WHERE clauses included using a dedicated Periodic
Task, Periodic DDL Table Relation Learning Task on page 404
• If the above-mentioned actions were not sufficient, associations might be edited manually (Manual Editing of
Table Relations on page 405)

Important: all the associations exist inside DataSunrise only; no database tables are modified.

15.1 Database Query History Analysis


Associations between tables can be built using the database query history. Table relations are detected by means
of queries containing JOIN and WHERE clauses. For this, use a dedicated Periodic Task which should contain the
following data:
• A database instance whose query history should be analyzed.
• A list of database objects whose queries are of interest. Database objects are database schemas, tables, or columns.
• A Table Relation the revealed associations should be saved to.
• As with all Periodic Tasks, you need to choose the activation frequency of the task.
Once the Periodic Task has finished its work, the corresponding Table Relation displays all the associations built
between the tables. To get access to the query history, each database should be configured in a specific way (see
the descriptions below).

15.1.1 Preparing an Amazon Aurora MySQL Database


The configuration process for Amazon Aurora MySQL is similar to MySQL's, with some differences described
below.
1. You should enable the log_output, general_log and slow_query_log variables. To do this, you need to create a
new Parameters Group (if you're using the default Parameters Group) or edit your existing Parameters Group if
you're using a custom one.
2. Set the log_output, general_log and slow_query_log variables as shown below:

general_log: 1
slow_query_log: 1
log_output: TABLE

3. Note that if your Parameters Group was created from scratch, you will need to edit the Instance itself to avoid
using the default Parameters Group.

15.1.2 Preparing an Amazon Aurora PostgreSQL Database


The configuration process for Amazon Aurora PostgreSQL is similar to PostgreSQL's, with some differences
described below.

1. You should create a pg_stat_statements VIEW. To do this, you need to create a new Parameters Group (if you're
using the default Parameters Group) or edit your existing Parameters Group if you're using a custom one.
2. In the Group's settings, set the shared_preload_libraries parameter's value to pg_stat_statements
3. Restart the Instance.

15.1.3 Preparing a DB2 Database


To enable the extraction of query history from a DB2 database, do the following:

1. Create an EVENT MONITOR FOR STATEMENTS which writes the data to a local table.

Note: the DB2 user you're using for creating the Monitor should have rights required for reading from the table
created by the Monitor.

Execute the following query:

CREATE EVENT MONITOR DB2STATEMENTS FOR STATEMENTS WRITE TO TABLE AUTOSTART;

2. Start the monitor:

SET EVENT MONITOR DB2STATEMENTS STATE 1;

15.1.4 Preparing a MS SQL Server Database


To enable the extraction of query history from a MS SQL database, you should have the VIEW SERVER STATE
privilege.
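
For example, the privilege can be granted to the login DataSunrise connects with (the login name below is a
placeholder):

GRANT VIEW SERVER STATE TO [datasunrise_login];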

15.1.5 Preparing a MySQL Database


To enable the extraction of query history from a MySQL database, do the following:

1. You should enable the log_output, general_log and slow_query_log variables.


2. Enable the log_output, general_log and slow_query_log variables as shown below:

SET GLOBAL general_log = 'ON';


SET GLOBAL log_output = 'TABLE';
SET GLOBAL slow_query_log = 'ON';
3. Note that to set these variables, you must have the SUPER privilege.

15.1.6 Preparing a Netezza Database


There are two methods of extracting query history from a Netezza database. The first one is simpler; the second is
more complex but more effective.

1. Method 1. Query the following system VIEWs: _v_qryhist and _v_qrystat. To do this, you should have a user with
SELECT privileges on the aforementioned VIEWs. Execute the following query:

GRANT SELECT ON _v_qryhist, _v_qrystat TYPE SYSTEM VIEW TO <user name>;

2. Method 2. Based on using a history collection database (refer to https://www.ibm.com/support/knowledgecenter/SSULQD_7.2.1/com.ibm.nz.adm.doc/c_sysadm_qhist_collect_report.html).
To create such a database, we need two users: one user is the database owner, and the second one will be used
for working with the history collection database. To create these users, execute the following queries:

CREATE USER <owner user name> WITH PASSWORD '<password>';


CREATE USER <DS user name> WITH PASSWORD '<password>';

Then we should grant the permissions:

GRANT LIST ON <DS user> TO <owner user>;


GRANT CREATE DATABASE TO <owner user>;

Execute the following command to create the required database:

./nzhistcreatedb -d erhistorydb -t query -o <owner user> -p '<password>' -u <DS user> -v 1

Note: for a description of the parameters, refer to https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.adm.doc/r_sysadm_nzhistcreatedb_cmd.html.
If database configuration is performed from a remote machine (not the one hosting Netezza), pass that machine's
IP address with the {-n | --host} option.

Configure the history collection database (refer to https://www.ibm.com/support/knowledgecenter/SSULQD_7.2.1/com.ibm.nz.adm.doc/c_sysadm_creating_hist_config.html
and https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.dbu.doc/r_dbuser_create_history_configuration.html).
Here's an example of the configuration:

CREATE HISTORY CONFIGURATION "erhistconfig-1"


HISTTYPE QUERY -- QUERY type only
NPS '192.168.1.149' -- for remote configuring
DATABASE ERHISTORYDB -- name of the DB used in nzhistcreatedb
USER ERHISTUSR
PASSWORD 'erhist'
COLLECT QUERY -- QUERY type only
LOADINTERVAL 1
LOADMINTHRESHOLD 0
LOADMAXTHRESHOLD 0
INCLUDING ONLY SUCCESS
STORAGELIMIT 5
LOADRETRY 1
VERSION 1; -- should match the value defined in nzhistcreatedb

15.1.7 Preparing an Oracle Database


No additional actions are required to fetch query history from an Oracle database.

15.1.8 Preparing a PostgreSQL Database


If you're using a PostgreSQL database as your target database, you should create a required VIEW before
discovering table relations. To do this, perform the following:

1. Open the data folder in your PostgreSQL installation folder and locate the postgresql.conf file.
2. Add the following line to the end of the postgresql.conf file:

shared_preload_libraries = 'pg_stat_statements'

3. Restart the PostgreSQL system service.


4. Use PostgreSQL's client application to execute the following query and create the required VIEW:

CREATE EXTENSION pg_stat_statements;

5. Note that you should create the pg_stat_statements VIEW in your PostgreSQL server's MASTER database
(postgres as a rule) and NOT in the database you'll be browsing.

15.1.9 Preparing a Redshift Database


To extract data from a Redshift database, you need to be able to query the STL_QUERYTEXT and STV_RECENTS
system tables. For this, do the following:
Get unrestricted access to the system log by executing the following query:

ALTER USER <DataSunrise user> SYSLOG ACCESS UNRESTRICTED;

15.1.10 Preparing a Teradata Database


To extract data from a Teradata database, the DBQL (Database Query Logging) mechanism is used. To enable it, do
the following:
1. Grant the admin user the following privileges:

grant execute on DBC.DBQLAccessMacro to sysdba;

2. Start writing query logs:

Note: SQLTEXT's value should be big enough to store all the queries. By default, it's 200 characters.

begin query logging with sql LIMIT SQLTEXT=1024 on all;

3. If you don't need to write logs anymore, you can disable it:

end query logging on all;



15.1.11 Preparing a Vertica Database


No additional actions are required to fetch query history from a Vertica database.

15.1.12 Preparing a Greenplum Database


To enable the extraction of query history from a Greenplum database, you should install and activate the
performance monitor (gpperfmon). To do this, perform the following:

1. Obtain the superuser privileges if you don't have them:

$ su - gpadmin

2. Execute the command shown below. This command installs and starts gpperfmon, creates a service
database for it (gpperfmon), and creates a gpmon superuser with the <password> password.

$ gpperfmon_install --enable --password <password> --port 5432

3. Edit the gpperfmon configuration file in the way described below. The file is located here:
$MASTER_DATA_DIRECTORY/gpperfmon/conf/gpperfmon.conf

min_query_time = 0

4. Restart the Greenplum server:

$ gpstop -r

5. Grant the Greenplum user you specified in your database instance's settings (Configuration → Databases) the
privileges listed below. To do this, connect to the database as a user with admin privileges, select the gpperfmon
database as the current one, and execute the query:

grant select on table queries_history to <User name>

15.2 Periodic DDL Table Relation Learning Task


Associations between tables can be built by means of the analysis of functions, views, and procedures for JOIN
and WHERE clauses. For this, use a dedicated Periodic Task which should contain the following data:
• A database instance whose objects should be analyzed
• Type of target database objects to be analyzed: Stored Procedure and Function DDLs, View DDLs, or both
• A Table Relation the revealed associations should be saved to
• As with all Periodic Tasks, you need to choose the startup frequency of the task
Once the Periodic Task has finished its work, the corresponding Table Relation displays all the associations built
between the tables.

15.3 Database Traffic Analysis


Table associations may be built using real-time query traffic to a database. As with the analysis of query history,
table associations are built using queries containing JOIN and WHERE clauses. For that, a designated Learning Rule
should be used. After choosing a database instance, on the Table Relations tab of the Filter Statements section
you need to specify the following:
• A Table Relation where the built table associations will be saved to.
• A list of database objects whose queries are of interest. Database objects are understood as database
schemas, tables, or columns.
After sufficient traffic has been analyzed for learning, you can disable or remove the Rule.

15.4 Manual Editing of Table Relations


You can add and delete associations between your target database columns manually in two ways:
• Using the table of associations and the corresponding buttons above;
• Using the visual Diagram of associations. To add an association, highlight the required column with your mouse
cursor and drag-and-drop the arrow to the column you want to establish an association with (link the two
columns with an arrow). To remove an association, click on the arrow.

Note: if you're using Google Chrome, you need to enable Hardware Acceleration in your browser.

The association diagram shows the associated columns and, in the upper left corner, the Toolbar to work with them.
The toolbar enables you to:
• Add a new table to establish an association with;
• Remove highlighted association;
• Rebuild a graph in such a way that another table is in its root;
• Download an associations graph from another Table Relation;
• Select only the required tables to be shown in the diagram;
• Open a current graph in a new browser tab.

16 Capturing of Application Users


Thanks to advanced traffic filtering algorithms, the Data Audit, Data Security and Data Masking features can be
applied to specific queries, such as queries sent by certain database users or from certain hosts. DataSunrise
also has the ability to map client application users to database activity. In practice, this could be useful, for example,
to mask data or block database access for specific application users. This subsection describes how to implement
mapping of client application users and includes four examples.

Important: DataSunrise supports the SAP ECC App User Capturing method for SAP Hana, SAP Sybase, IBM DB2
and Oracle Database only.

Important: when using Oracle EBS based Application User Capturing, the database's password should be saved in
DataSunrise, CyberArk or AWS Secrets Manager. Otherwise DataSunrise will not be able to capture Oracle EBS users.

16.1 Markers Used to Identify Client Application Users

Sometimes it is necessary to identify the end users of client applications to know which user issued a particular query.
Detecting an end user is critical when you need to log queries from a particular user, block some users' access to a
database, or mask the results of queries issued by specific users.

Client application users interact with a target database through a database user or users they are mapped to. To
identify an app user, DataSunrise uses certain markers described below.
Information about a client application user can be contained:
• within the query's SQL
• within query results
• within bindings for prepared statements
Thus, DataSunrise uses these markers to identify the actual client application user name.
First, the DataSunrise administrator should define how an application user will be captured within the
database traffic. To identify an end application user, DataSunrise uses the following markers:
1. Query-based.
• Select id from appusers where username='([a-z]*)';
• SELECT '([a-z]*)' as for DataSunrise where ([a-z]*) is a template used by DataSunrise to find a user name.

2. ResultSet-based. DataSunrise can find a real application name in SELECT's results.

3. Bindings-based. DataSunrise can find user's name in bind variables for prepared statements.

Note: Column Index is the ordinal number of a bind variable in a query, counted from 0. In this
particular case, we will be searching for the u2 bind variable because Column Index = 1.

One of the existing queries (executed by an application), or a query added to an application to integrate it with
DataSunrise, can be defined by a DataSunrise user as a marker for an application user.
4. Session Parameter. DataSunrise can find the required information in the parameters of a session.

To learn these parameters, create an Audit Rule and audit a session. For parameters of your session, navigate to
Audit → Session Trails → Session of interest → Parameters tab:

Note: when enabling multiple capturing types, it is important to remember that Query and SAP ECC types are
applied first as they work on request, while other types work on response.

16.2 Creating a Rule Required for Application Users Capturing


After a query is captured (DataSunrise has detected a user name), all other queries in the database session will be
mapped to that application user while skipping other application users.
The next step is to create a Rule in the DataSunrise's Web Console with the Filter Sessions settings set to
"Application User". This enables you to pick out queries sent by certain application users.

16.3 App User Capturing Examples


16.3.1 Example 1: Masking a PostgreSQL Table for a Certain User
In this example, we use a PostgreSQL table named "Customers". We will mask the "Card" column of this table for
one application user only (for "firstappuser"). For other application users, the column will be left unmasked.
1. We navigate to Configurations → Databases → target database profile.
We open Actions → Application User and configure Application User Capturing as below:

• Selected Query in the Capture Type drop-down list. It is needed to search for application user's name in
query's SQL;
• Pasted the following expression into the Pattern field:

'DataSunriseEvent:AppUserSet="([a-zA-Z]+)"';

2. We create a Masking Rule and in the Filter Sessions we select Application user, as well as the application user
("firstappuser" here), whose queries we want to capture (to be able to do it, we should create the required user
in the Configurations → Database users first). Then we select the "Card" column to be obfuscated and select
a masking method to use ("Credit Card Number" here). Thus, all queries issued by "firstappuser" will result in
masking of the "Card" column.

3. To check the settings, we execute the following queries:

SELECT 'DataSunriseEvent:AppUserSet="firstappuser"';
SELECT * FROM customers;

The result of the query will be masked



Now we execute the following queries (for another application user):

SELECT 'DataSunriseEvent:AppUserSet="secondappuser"';
SELECT * FROM customers;

The result of this query will not be masked:

16.3.2 Example 2: Using a Dedicated Web Site as the Client Application

In this example, we use a web site that displays contents of a PostgreSQL table named "customers". We will mask
the table's contents for a certain application user. First, we log in to the web site as "[email protected]" user.
1. We create a Data Audit rule to log all queries
2. We query the database using the web site and navigate to the Events subsection of the Data Audit section of
the Web Console. Here you can see a list of events captured by Data Audit.
3. Then we search for a query that includes an application user name ([email protected]) across the captured events.
We use "Filters" functionality set to "SQL".
4. Then we open an event with a user name in it for details. In the Event details, we copy the query's SQL.

5. Then we go to Configurations → Databases → Target database profile.


And in the "Application users", we paste the query into User Name in SQL Pattern and create a regular
expression out of it.

6. Then we create a new user ("[email protected]") at Configurations → Database Users. It is required to create a
Rule for this user.
7. Now we can create a masking Rule for the application user.
In the Filter Sessions subsection, we select "Application user" and the user to whom we want to display
obfuscated data instead of actual data ("[email protected]"). We mask the "card" column with the "Credit Card
Number" masking algorithm. Thus, all queries issued by "[email protected]" will result in masking of the "card"
column.

8. To check the result, we query the table through the web site. As you can see, the values in the "Card" column are
obfuscated.
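
For illustration, if the web site identified its user with a lookup similar to the query-based marker from subsection
16.1, the regular expression built in step 5 might look like the following. The table name, column name, and
character class here are hypothetical and should be derived from the actual SQL you copied in step 4.

SELECT id FROM appusers WHERE username='([a-zA-Z0-9@._-]+)'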

16.3.3 Example 3: Changing Users in SQL Developer during One Session

In this example, we will be switching between two users in a single session. SQL Developer uses bindings in
particular, so we are going to use the Bind Variables type of capturing.
1. We create a Data Audit Rule to log all queries.
2. We will be using the dbms_SESSION.set_identifier('my_client') query, so our Rule should look like the following:
3. After that, we will execute several simple queries preceded by dbms_SESSION.set_identifier('my_client') in SQL
Developer. SQL Developer due to its specifics will create a bind parameter during the execution of this query:

4. As a result, in the Transactional Trails, we can see that two SELECTs in a single session were executed by two
different users:

16.3.4 Example 4: Masking a Table by Using ResultSet as the Capturing Type

In this example, we will be using a table of customers in our database. Make sure that a table named users with
ID and name columns exists in your database.
1. We create a Data Audit Rule to log all queries.
2. We create a new Capturing Setting using the ResultSet method:

3. Add a user you want to mask the values for in Configuration → Database Users.
4. When creating a Masking Rule, add the new user in the Filter Sessions section as an Application User:

5. Connect to the target database through a proxy and execute the query for the user first:

SELECT userName from test_base.test_table where id=1;

Then execute a SELECT query on the masked columns.


6. As a result, the table is masked for the selected user and the user is displayed in the Dynamic Masking Events:

17 Amazon Web Services (AWS)

17.1 Creating a Health Check


DataSunrise enables you to create a Health Check for cloud or local services to notify you when DataSunrise
instances fail. Health checking works in parallel mode, which increases check speed and stability when working with
multiple proxies and nodes. A separate task is created that runs independently of Health Check queries and saves
the check results in a cache; the Health Check then uses the cached results.
To create a Health Check, you can use the following URLs. Paste them into the field that defines the URL to be checked
(the Ping Path field when creating a Health Check):
1. This URL enables you to check all proxies for a specified instance:

https://<datasunrise server name>:11000/healthcheck/instance?inst_name=<instance name>[&db_name=<database name>&db_login=<database user>&db_password=<database password>]

Parameter Description
<datasunrise server IP address or host name of DataSunrise's server, 11000 is the port number of the
name> DataSunrise's Web Console
<instance name> Name of DataSunrise instance (for example, an AWS instance)
<database name> Name of the database configured to be used with DataSunrise (RDS database for
example)
<database user> Database user name (login) you can use to connect to your database
<database password> Database user password you can use to connect to your database
disabled_proxy_is_error Show an error if the checked proxy is disabled
force_interface_check If True (1), health checking ends with an error which was displayed when checking an
Interface

Note: If the login and password are saved for a certain instance, you can skip them in the URL; otherwise you will
get error 500 with a corresponding message. The server returns 200 on success.

When checking all instances, the health checker checks all DataSunrise proxies, and if a proxy does not respond, it
returns error 500 with a corresponding message.
2. You can use the following URL to check all proxies on all instances:

/healthcheck/all_instances

Note: if login/password are not saved in the instance's settings, this particular instance will not be checked.

3. General health check (checks all servers):

/healthcheck/general
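
To test a Health Check endpoint manually, you can call it with any HTTP client. Below is a sketch using curl; the
host name, instance name, and credentials are placeholders, and -k skips certificate verification if the Web Console
uses a self-signed certificate.

curl -k "https://ds.example.com:11000/healthcheck/instance?inst_name=my-instance&db_name=mydb&db_login=ds_user&db_password=secret"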

17.2 Amazon CloudWatch Custom Metrics


DataSunrise can send custom metrics to Amazon CloudWatch. This enables you to view DataSunrise-specific
parameters such as memory usage, performance level, etc. To do this, perform the following:

1. Create an AWS role that includes the required policies:


• Navigate to AWS Console → IAM → Policies. Create a new policy. Switch to JSON and paste the following code
into the JSON field:

{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"cloudwatch:PutMetricData",
"ec2:DescribeTags"
],
"Effect": "Allow",
"Resource": [
"*"
]
}
]
}

• Navigate to Roles. Create a new role → AWS EC2


• At the Attach permissions policies page, select your policy in the list (use Search). Save the Role
2. Enable the EnableAWSMetrics parameter (System Settings → Additional Parameters)
3. Navigate to EC2 Console → CloudWatch → Browse Metrics → DataSunrise to view the metrics:

Metric Description
AuditProcessingSpeed Speed of query processing by the audit journal (operations/sec)

AuditQueueLength Length of the audit journal queries queue (queries)

ProxyMessageHandlerQueueLength Length of the proxy queries queue (queries)


SnifferMessageHandlerQueueLength Length of the sniffer queries queue (queries)
CoreThreadCount Number of Core threads
TrafficBufferPoolFreeObjects Number of free blocks in the traffic buffer
TrafficBufferPoolBalance Number of used blocks in the traffic buffer
Antlr3ParserPoolSize The overall volume of the Antlr parser (kb)
Antlr3ParserPoolUsed Volume of the Antlr parser used (kb)
Antlr3TokensPoolSize Overall volume of the Antlr tokenizer (kb)
Antlr3TokensPoolUsed Volume of the Antlr tokenizer used (kb)
Antlr3StrPoolSize Overall volume of strings in the Antlr parser memory (b)

Antlr3StrPoolUsed Used volume of strings in the Antlr parser memory (b)

Antlr3ParserCommentsPoolSize Overall volume of the Antlr commentary parser (kb)

Antlr3ParserCommentsPoolUsed Volume of the Antlr commentary parser used (kb)

Antlr3TokensCommentsPoolSize Overall volume of the Antlr commentary tokenizer (kb)

Antlr3TokensCommentsPoolUsed Volume of the Antlr commentary tokenizer used (kb)

Antlr3StrCommentsPoolSize Overall volume of strings in the Antlr commentary parser


memory (b)

Antlr3StrCommentsPoolUsed Used volume of strings in the Antlr commentary parser memory


(b)

CoreVirtualMemoryUsage Memory volume used by the Core (Mb)


BackendVirtualMemoryUsage Memory volume used by the UI (Mb)
ProxyOperationsSpeed Number of database operations per second
ProxyReadTrafficSpeed Read speed (from DB to client)
ProxyWriteTrafficSpeed Write speed (from client to DB)
AuditDiscFreeSpace Volume of free space in the audit journal file system (Mb)

LogsDiscFreeSpace Volume of free space in the logs file system (Mb)

ProxyExecutionsSpeed Number of DB executions per second
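
Once the metrics are being published, you can also query them from the AWS CLI. The sketch below assumes the
namespace is DataSunrise (as suggested by the console path above); verify the namespace in your CloudWatch
console and adjust the metric name and time range.

# List the custom metrics DataSunrise publishes
aws cloudwatch list-metrics --namespace DataSunrise

# Fetch recent values of one metric (example time range and period)
aws cloudwatch get-metric-statistics --namespace DataSunrise \
  --metric-name AuditQueueLength --statistics Average \
  --start-time 2023-01-01T00:00:00Z --end-time 2023-01-01T01:00:00Z \
  --period 300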


17.3 Using AWS Secrets Manager for Storing Passwords

DataSunrise enables you to store your database, LDAP, Audit Storage and Subscriber servers' passwords in AWS
Secrets Manager. This is applicable only to services compatible with AWS. This is how you can save a password in
Secrets Manager:
1. Ensure that you have an IAM Role with proper security policies set for your EC2 instance.
2. Go to https://console.aws.amazon.com/secretsmanager/home and ensure that your current region matches
your EC2 Instance's region.
3. Click Store a new secret. Select Other type of secrets in the Select secret type section.
4. In Secret key/value, input the "password" key (without quotes) into the first field and your actual password
into the second field. Click Next.
5. Input Secret name and Description. Note that Secret name is your AWS Secret ID you should specify when
configuring password saving in the Web Console.
6. Now you can retrieve your password from the Secrets Manager: you should select Retrieve from AWS Secrets
Manager and specify your AWS Secrets Manager ID in the corresponding subsection of the Web Console.
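
Steps 3-5 can also be performed from the AWS CLI. A minimal sketch is shown below; the secret name is a
placeholder, and note that the JSON key must be "password", as described above.

aws secretsmanager create-secret \
  --name datasunrise/target-db-password \
  --description "Password for the DataSunrise target database" \
  --secret-string '{"password":"<your actual password>"}'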

17.4 How Load Balancing Works on Vertica


When a load balancer gets a query to activate load balancing, it selects a free Vertica cluster. The selected
Vertica cluster returns its IP address and port number that should be used by the client to connect to this cluster.
DataSunrise intercepts the cluster’s response and changes the cluster’s IP and port number to DataSunrise’s IP
and port number. Thus a client connects to DataSunrise server instead of connecting to the Vertica cluster directly.
DataSunrise does everything automatically, so you don’t need to configure anything manually.

18 Integration with the CyberArk AAM


DataSunrise can be integrated with the CyberArk's AAM to store database passwords in a CyberArk Vault and
retrieve them on demand.

18.1 AAM Installation


Refer to the "Credential Provider and ASCP Implementation Guide" for the CyberArk Credential Provider.
It is recommended to enable the AAM cache, as it increases performance when retrieving passwords from the local
cache. It is highly recommended to select a cache refresh interval of less than 5 minutes; otherwise, password
caching may be inaccurate.

18.2 AAM Configuration. Defining the Application ID (APPID) and Authentication Details

To define an application, refer to the following instruction or define it manually via CyberArk's PVWA (Password Vault
Web Access) Interface:
1. While logged in as a user that is allowed to manage applications, in the Applications tab, click Add
Application to open the Add Application page.
2. Specify the following details:
• In the Name field, specify a unique identifier of the application ("DataSunriseDBSecurity")
• In the Description, specify a short description of the application that will help you to identify it.
• In the Business owner section, specify contact information about the application's business owner.
• In the lowest section, specify the Location of the application in the Vault hierarchy. If the Location is not
specified, the application will be added to the same Location as the user which created this application.
3. Click Add and the application will be added and displayed at the Application Details page.
4. Specify the application's Authentication details. This information enables the Credential Provider to check certain
application characteristics before retrieving the application’s password. We recommend to specify the OS user
and application path. Refer to the “DataSunrise Database Security Suite – Administration Guide (Linux)” and
“DataSunrise Database Security Suite – Administration Guide (Windows)” respectively, “Program installation”
section. Default settings for Linux: OS user name is “datasunrise”, application path is “/opt/datasunrise”. Default
parameters for Windows: OS user name is “Local System” and application path is “C:\Program Files\DataSunrise
Database Security Suite”.
To enable the Credential Provider, check application’s Authentication details:
• In the Authentication tab, click Add; a drop-down list with authentication characteristics included will be
displayed.
• Select an authentication characteristic to specify.
5. Specify the OS user:
• Select OS user, the Add Operating System User Authentication window will be displayed.
• Specify name of an OS user which will run the application, then click Add; this OS user will be listed in the
Authentication tab.
6. Specify the application path:
• Select Path, the Add Path Authentication window will be displayed.
• Specify the path where the application will run.
• To indicate that the specified path is a folder, check Path is folder.
• To allow internal scripts to retrieve the application password for this application, select Allow internal scripts
to request credentials on behalf of this application ID.
• Click Add and the Path will be added as an authentication characteristic with the information that you
specified.
7. Specify a hash
• Run the AIMGetAppInfo utility to calculate the application’s unique hash.
• Copy the hash value that is returned by the utility.
• In the PVWA, select Hash; the Add Hash window will be displayed.
• In the Hash field, paste the application’s unique hash value, or specify multiple hash values separated with a
semi-colon. You can add comments by using “#” after the hash value. For example:

OE883B7OD5B6E3EE37D37198049C9507C8383DB6 #app2

• Click Add, the Hash will be added as an authentication characteristic with the information that you specified.
8. Specify the application’s Allowed Machines. This information enables the Credential Provider to ensure that only
applications that run from the specified machines can access their passwords.
• In the Allowed Machines tab, click Add, the Add allowed machine window will be displayed.
• Specify IP/hostname/DNS of the machine where the application will run and will request passwords, then click
Add, the IP address will be listed in the Allowed machines tab. Make sure the allowed servers include all
mid-tier servers or all endpoints the AAM Credential Providers are installed on.

18.3 Provisioning Accounts and Setting Permissions for Application Access

For the application to perform its functionality or tasks, it must have access to particular existing accounts, or to
new accounts to be provisioned in the CyberArk Vault (Step 1). Once the accounts are managed by CyberArk, make
sure to set up access for both the application and the CyberArk Application Password Providers serving the
application (Step 2).
1. In the “Password Safe”, provision the privileged accounts that will be required by the application. You can do this
either manually or automatically:
• Manually: add accounts manually, one at a time and specify all the account details.
• Automatically: add multiple accounts using the “Password Upload” feature. Note that for this step, you require
the "Add accounts" privilege in the Password Safe.

Note: for more information about adding and managing privileged accounts, refer to the “Privileged Account
Security Implementation Guide”.

2. Add the Credential Provider and application users as members of the Password Safes where the application
passwords are stored. This can either be done manually in the Safes tab, or by specifying the Safe names in a
CSV file if you want to add multiple applications.
3. Add the Provider user as a “Safe Member” with the following privileges:
• List accounts
• Retrieve accounts
• View Safe Members
4. Add the application (DataSunriseDBSecurity) as a Safe Member with the following authorizations:
• Retrieve accounts
5. If your environment is configured for dual control:
• In PIM-PSM environments (v7.2 and lower), if the Safe is configured to require confirmation from authorized
users before passwords can be retrieved, give the Provider user and the application the following permission:
Access Safe without Confirmation
• In Privileged Account Security solutions (v8.0 and higher), when working with dual control, the Provider user
has access without confirmation, thus, it is not required to set this permission.
6. If the Safe is configured for object level access, make sure that both the Provider user and the application have
access to the password(s) to be retrieved.

Note: for more information about configuring Safe Members, refer to the "Privileged Account
Security Implementation Guide".

18.4 DataSunrise Installation and Configuration

1. Configure a DataSunrise database instance at the Configuration → Databases subsection.
• For your database instance, specify CyberArk Vault's credentials to retrieve the specific password.
• In the Save Password drop-down list, choose the "Retrieve from CyberArk" option and specify the CyberArk Safe,
Folder and Object parameters the database password is stored under in the fields below.
2. Add the Credential Provider and application users as members of the Password Safes the application passwords
are stored in. This can either be done manually in the Safes tab or by specifying Safe names in a CSV file for
adding multiple applications.
3. Use the Save button to save vault access credentials in the DataSunrise configuration. When you click Save,
DataSunrise performs a test password retrieval with the specified vault parameters. If the test fails, DataSunrise
reports the error message: "Cannot retrieve password. Please make sure you have entered correct CyberArk
Vault parameters". Also make sure that AAM is installed properly, since DataSunrise depends on a correct AAM
Credential Provider installation.

18.5 Retrieving a Dictionary Password from CyberArk
When deploying DataSunrise in a High Availability configuration, you can store the password for the database used as the
Dictionary in CyberArk and retrieve it from there. To do this:
1. Stop DataSunrise's system service.
2. Remove the local_settings.db file.
3. Connect to your Dictionary database with a database client and delete the current DataSunrise server's entry
from the firewall_servers database table.
4. Activate password retrieval: set the path to the local_settings.db file as the value of the AF_CONFIG environment
variable and the path to the DataSunrise installation folder as the value of the AF_HOME environment variable.
• Windows: execute the AppBackendService.exe with the following parameters:

AppBackendService.exe DICTIONARY_APPLICATION_ID=<Set a Dictionary application ID>


DICTIONARY_TYPE=<Dictionary DB type>
DICTIONARY_HOST=<Dictionary IP address>
DICTIONARY_PORT=<Dictionary port number>
DICTIONARY_DB_NAME=<Dictionary DB name>
DICTIONARY_LOGIN=<Dictionary login>
DICTIONARY_PASS_QUERY=Safe=<CyberArk Safe name>;Folder=<CyberArk Folder name>;Object=<CyberArk
Object name>
FIREWALL_SERVER_HOST=<DataSunrise server IP address>
FIREWALL_SERVER_BACKEND_PORT=<DataSunrise Backend port number (11000 by default)>
FIREWALL_SERVER_CORE_PORT=<DataSunrise Core port number (11001 by default)>
FIREWALL_SERVER_NAME=<DataSunrise server name (any)>
FIREWALL_SERVER_BACKEND_HTTPS=1
FIREWALL_SERVER_CORE_HTTPS=1

• Linux: execute the AppBackendService with the following parameters:

export LD_LIBRARY_PATH=/opt/datasunrise
sudo ./AppBackendService DICTIONARY_APPLICATION_ID=<Dictionary application ID>
DICTIONARY_TYPE=<Dictionary DB type>
DICTIONARY_HOST=<Dictionary IP address>
DICTIONARY_PORT=<Dictionary port number>
DICTIONARY_DB_NAME=<Dictionary DB name>
DICTIONARY_LOGIN=<User name to access the Dictionary>
DICTIONARY_PASS_QUERY="Safe=<CyberArk Safe name>;Folder=<CyberArk Folder name>;Object=<CyberArk
Object name>"
FIREWALL_SERVER_HOST=<DataSunrise server IP address>
FIREWALL_SERVER_BACKEND_PORT=<DataSunrise Backend's port number (11000 by default)>
FIREWALL_SERVER_CORE_PORT=<DataSunrise Core's port number (11001 by default)>
FIREWALL_SERVER_NAME=<DataSunrise server name (any)>
FIREWALL_SERVER_BACKEND_HTTPS=1
FIREWALL_SERVER_CORE_HTTPS=1

Change the owner of the local_settings.db file to the datasunrise user.


5. Start DataSunrise's system service.

18.6 Retrieving an Audit Storage Password from CyberArk
DataSunrise can retrieve the database user passwords used to access an external Audit Storage for the following
database engine types: Aurora MySQL, MSSQL, MySQL, PostgreSQL, Redshift. To configure an external database used
as an Audit Storage with CyberArk Vault, do the following:
1. Navigate to System Settings → Audit Storage
2. Fill out all the required fields (see Audit Storage Settings on page 383)
3. In the Save password drop-down list, select "Retrieve from CyberArk"
4. Specify the required CyberArk parameters in the Safe, Folder and Object text fields.

19 Self-Service Access Request

19.1 Overview
The Self-Service Access Request (SSAR) functionality enables database users trying to access database objects
protected by DataSunrise to request access to these objects from DataSunrise administrators. A DataSunrise
administrator who has received a request can decide whether to approve or decline it. If a request is approved,
the database user that sent it is added to the allow list of the Rule that protects the requested database objects.

19.2 Using SSAR


To utilize the SSAR functionality, do the following:
1. SSAR is disabled by default, so you need to enable it first. Navigate to System Settings → General → Self-
Service Access Request and turn the SSAR switch on
2. Note that if your DataSunrise uses an SQLite-based Dictionary, you need to specify your server's host in
the Server settings: System Settings → Servers → Your Server → General Settings. Otherwise DataSunrise
will send a link with the user's localhost address (Step 5) instead of the DataSunrise server's host
3. If necessary, configure SSAR. All the settings' captions are self-explanatory. Save the settings
4. Create a Security Rule to protect the database objects of choice or use an existing one
5. If a database user tries to access the protected objects, they will get an error with a link to follow to
create an access request
6. The database user follows the link, fills out a request and sends it to a DataSunrise administrator
7. For available access requests, navigate to Security → Requests. Locate the request of interest in the list and
click Show
8. You can see general information about the request and the objects the user tried to access in the General Info section
9. In the bottom section of the page, you can manage available access requests by selecting the database objects
for which to grant Read-Only or Read/Write rights to the particular user
10. Having finished, approve or decline the request by clicking the corresponding button
11. Note that you can revoke access rights at any time by navigating to the settings of the request of choice and
clicking Revoke.

20 Frequently Asked Questions


This section describes the most common issues DataSunrise users face.

DataSunrise Updating
Q. I can't update my DataSunrise. I run a newer version of the DataSunrise installer, but the installation
wizard is not able to locate the old DataSunrise installation folder.
Run the DataSunrise installer in Repair mode. It removes the previous installation and updates your DataSunrise to the
newer version.
Q. I've updated DataSunrise and I get the following error:

PROCEDURE dsproc_<version>.initProcedures does not exist

DataSunrise now uses a new method of getting metadata. Do the steps described in Editing a Target Database
Profile on page 61
Q. I'm trying to enter the Web Console after DataSunrise has been updated, but it displays the following:

Internal System Error

Most likely, you kept the Web Console tab open in your browser while updating the firewall. Log out of the Web
Console if necessary and press Ctrl + F5 to refresh the page.

Databases
Q. When connecting to Aurora DB, the MySQL ODBC driver stops responding.
Most probably, you're using ODBC driver version 5.3.6, which is known to cause freezes from time to time. Install
MySQL ODBC driver version 5.3.4.
Q. I'm using DataSunrise in Sniffer mode and get the following messages in the Event Monitor:

"Crypto [<Network interface>]: <Error text> "


"Until the parameters of the crypto provider are properly configured, we can not identify the login/
user. "
"The GUEST account will be used as the current user. "
"Rules checks may not work correctly until this error is resolved. "
"Refer to '4.7.2 Configuring SSL for Microsoft SQL Server' section of the Administration Guide for
details.",

The current version of the DataSunrise sniffer supports TLS v.1.0 only. You need to downgrade the TLS version on the server
side. Create two keys in the registry:

[HKEY_LOCAL_MACHINE][System][CurrentControlSet][Control][SecurityProviders][SCHANNEL][Protocols][TLS
1.1][Server]
[HKEY_LOCAL_MACHINE][System][CurrentControlSet][Control][SecurityProviders][SCHANNEL][Protocols][TLS
1.2][Server]

and add two DWORD-type parameters:

DisabledByDefault=1
Enabled=0

Restart the server.


If DataSunrise has intercepted an SSL session with improper cryptoprovider settings, change your
cryptoprovider settings and reset the current SSL session. To reset a session, restart your SSMS (if you're using a
third-party app contacting the sniffed server, restart it as well).
You can also bypass resumed sessions by disabling caching of SSL sessions on the client side. To do this, on the
SSMS host, locate the following registry key:

[HKEY_LOCAL_MACHINE][System][CurrentControlSet][Control][SecurityProviders][SCHANNEL]

and add the ClientCacheTime parameter with a value of 0 to it. Then restart the server.


Q. I'm getting the following notification:

Reached the limit on delayed packets

This notification is displayed when the sniffer has captured a large amount of traffic on SSL sessions started before
the DataSunrise service was started. By default, the volume of captured traffic should not exceed 10 MB
(the pnMsSqlDelayedPacketsLimit parameter).
Sometimes this notification can also be displayed when there is a heavy load on the pcap driver, so the sniffer captures too
much delayed traffic. In this case, you need to increase the pnMsSqlDelayedPacketsLimit parameter's value.
Q. I need to use an SSL certificate for a database connection. What are my options?
You have the following options:
• Turn off certificate validation for the connection in your client application (for example, Sisense): you can check Trust
Server Certificate in your client software.
• Use a certificate for DataSunrise generated by your CA from the root certificate.
• Generate a self-signed certificate and copy it to your client system.
Q. I'm trying to establish a connection to a DataSunrise proxy created for an Amazon Redshift database, but
receive the following error:

[HY000][600000] [Amazon](600000) Error setting/closing connection: PKIX path building failed:


sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to
requested target

This issue is caused by DataSunrise's self-signed certificate, which is used by default to handle encrypted
connections. The problem is that some client applications perform a strict certificate check and don't accept self-
signed certificates.
You can solve this issue in the following ways:
• Allow usage of self-signed certificates in your client application
• Issue a certificate using your corporate Certification Authority and paste the certificate into the proxy.pem file
• Generate a self-signed certificate and allow usage of root certificates in your database connection string (e.g.
sslrootcert=/path/to/certificate.pem).
More on proxy certificates here: SSL Key Groups on page 106.
Q. I've configured Google-based two-factor authentication, but I can't authenticate in the target database.
Probably, your smartphone and database server are in different time zones. They should use the same time zone,
so synchronize the time zones and time.
Q. I can't create an SAP Hana Database instance in DataSunrise because of the following error:

ERROR: invalid byte sequence for encoding "UTF8": 0xdc


Use a custom connection string with the CHAR_AS_UTF8=true parameter. For example:

DRIVER=HDBODBC;SERVERNODE=192.168.1.244:39017;UID=SYSTEM;PWD=mawoi3Nu;DATABASENAME=SYSTEMDB;CHAR_AS_UTF8=true;

Q. I'm trying to establish a connection between DataSunrise and an Oracle database but get the following
error:

Warning| Could not connect to the database. Error: Couldn't load libclntsh.so.

Ensure that you have Oracle Instant Client installed (see the corresponding Admin Guide) and create a
corresponding .conf file:

sudo bash -c "echo /opt/oracle/instantclient_12_1 > /etc/ld.so.conf.d/oracle-instantclient.conf"


sudo ldconfig

Q. How to deal with putq and DS_32016W?

General Audit Queue In Thread #x' is filled for more than XX%. The current level is XX%

This message indicates that your Audit Storage database can't process events in a timely manner (AuditQueue is less
than AuditHighWaterMark). To get rid of these errors, you can do the following:
• Increase Audit Storage database performance: enlarge CPU and RAM, change HDD to SSD
• Decrease the amount of data to audit:
• Audit activity on business logic objects only (where PII data is stored)
• Audit only those queries you need to monitor
• Use Filter Sessions to specify conditions for logging events (for example, skip ETL/OLTP/service application activity)
• Adjust your Audit Storage parameters for better performance. Note that DataSunrise doesn't provide any
guidelines on how to do that.

Audit Rules
Q. If Local Syslog is enabled, where does log data get written to?
By default, AWS EC2 is configured to write to /var/log/messages. You have to enable the Syslog service in your
system if it's not enabled yet. For Local Syslog messages, you can select the default Syslog Configuration in your Rules'
settings.
Q. How can I audit DQL, DML, DDL and TCL queries?
In the DataSunrise Web Console, navigate to Audit → Rules. Create a new Rule and, in the Filter Statements
subsection, change the filter type to Admin Queries. Click Add Admin Query and select the queries to add to the filter.
Q. My query doesn't trigger the Rule I set up. What's wrong?
Before reaching out to our Support Team, please check the following:
• DataSunrise deployment scheme: Proxy, Trailing or Sniffer. Note that the Sniffer doesn't work with SSL/TLS-
encrypted connections except for MS SQL Server
• Basic checks:
• A valid license should be installed. DataSunrise with an expired license doesn't block/audit/mask queries but
just passes traffic without any processing
• Check your problematic Rule:
• Filter Sessions: if not empty, define what you're trying to achieve
• Filter Statements: if not empty, ensure that the actions/user/application match the list of SQL query types/
CRUD operations and/or the Objects (or Groups) selected
• You can try debugging: enable Log Event in Storage in your Rule's settings if disabled to see if a new entry
is generated in the corresponding Events list. You can also enable Rules Trace and check how your query is
processed
• DataSunrise specific:
• Proxy: ensure that your user is connecting through your DataSunrise Proxy
• Sniffer: check if SSL/TLS or any database-specific transport encryption (for example, Oracle Native
Encryption) is used. Note that only the MS SQL Server sniffer supports processing of encrypted traffic
• Trailing: check if Native Audit is configured to capture the expected actions
• Advanced checks:
• Check that there are no PARSER ASSERT messages in the Core log files of the problematic worker.
If none of the aforementioned helps, contact our Support Team.

Masking Rules
Q. When performing Dynamic masking with the Fixed String method, the target database returns the
original unmasked value instead of a masked string.
Most probably, the table being masked was created by a user connected to the database directly (not
through the DataSunrise proxy). You should update your database's metadata (Editing a Target Database Profile on
page 61) before creating a Data Masking Rule.
Q. I'm using Static Masking on an Oracle database and get the following error:

Error: ORA-01950: no privileges on tablespace 'USERS' / 0 processed rows.

Execute the following query:

ALTER USER C##ELL quota unlimited on users;

Q. I've created a Dynamic Masking Rule for Informix and have selected the Email masking method, but when
I try to execute a query I get the following error:

SQL Error [IX000]: Routine (ds_replace_regexp) can not be resolved

Informix doesn't include some functions required for email masking. Refer to Informix Dynamic Masking Additional
Info on page 179
Q. I'm hosting DataSunrise on Windows. I try to configure dynamic masking for Unstructured files but get
the following error:

Code: 10 The JVM was not initialized: Please check the documentation for setting up the JVM

If you're experiencing some problems with JVM on Windows, add the path to your JVM folder to the PATH
environment variable. For example:

C:\Program Files\Java\jre1.8.0_301\bin\server

Q. I'm trying to perform In-place Static Masking on my database and get the following error:

The last In-Place Static Masking task was performed unsuccessfully. Probably, database objects could be
left in an inconsistent state. It's recommended to restore your database from a backup copy.

This means that your database may contain duplicates (masked original tables that haven't been renamed; table
constraints may have been deleted or named in a different way).
Q. When I'm using the DBLink loader with PostgreSQL version 10, the static masking task ends with the following
error:

ERROR: function dblink(unknown, unknown)... does not exist LINE


1: ...E","LAST_NAME","EMAIL","GENDER","IP_ADDRESS" FROM dblink('db... ^ HINT: No function matches the
given name and argument types. You might need to add explicit type casts. / 0 processed rows.

DBLink must be located in the target database. For the extension to be found, it must be installed in the public schema.
To find out which schema the extension is located in, execute the following query:

SELECT e.extname AS "Name", n.nspname AS "Schema"


FROM pg_catalog.pg_extension e
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = e.extnamespace where e.extname='dblink';

To change the schema, execute the following query:

ALTER EXTENSION name SET SCHEMA new_schema
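
For example, assuming the dblink extension needs to be moved to the public schema as described above, the query would look like this:

ALTER EXTENSION dblink SET SCHEMA public;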

Other
Q. On Ubuntu, when creating a Server for Subscribers, if I select certificate type "Signed", I get an error:

error setting certificate verify locations: CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none

The problem is that the root certificate is placed in another location. Add the following string to the /etc/
datasunrise.conf file:

CURL_CA_BUNDLE=<location of the file that contains the root certificate>

For example, on Ubuntu the root certificate file is located here: /etc/ssl/certs/ca-certificates.crt
Q. My Dictionary and/or Audit Storage are located in the integrated SQLite database and I get the following
message:

SQLITE_BUSY

It's not an error! SQLite supports only one writer (Backend/Core thread) at a time, so when some process accesses the
DB file for a write operation, others have to wait and receive the SQLITE_BUSY message.
Let's take a look at two scenarios:
• Audit Storage: more than one proxy with Audit/Learning Rules and/or Security/Masking Rules with the Log Event
in Storage option enabled. In this case, you can check the Core log files for the SQLITE_BUSY message. Another
option is to check Monitoring → Queues → Audit queue length. You have a problem if the graph constantly
rises to the Watermark.
To solve this issue, disable Log Event in Storage in your Security/Masking Rules and disable your Audit/Learning
Rules.
• Dictionary: an Update Metadata task or a Table Relations task (any type of this task) is running.
To solve this issue, wait for the task to be completed.
Another solution is to transfer your Dictionary and/or Audit Storage to another database type supported by
DataSunrise.
Q. I'm getting the following warning:

The free disk space limit for audit is reached. The current disc space amount is XXX MB. The disk space
limit is 10240 MB

If you want to decrease the disk space threshold for this warning, navigate to System Settings → Additional and
change the "LogsDiscFreeSpaceLimit" parameter's value, for example, from 10240 to 1024 MB.
Q. I'm trying to decrypt a PostgreSQL table I encrypted before but getting the following error:

SQL Error [39000]: ERROR: decrypt error: Data not a multiple of block size
Where: PL/pgSQL function ds_decrypt(bytea,text) line 6 at RETURN

This means that somebody edited your encrypted table's contents directly, bypassing your DataSunrise proxy. This
process is irreversible and your encrypted table can't be decrypted.
Q. I'm trying to export a large number of resources to a Resource group with Resource Manager but get the
following error:

Input otl_long_string is too large to fit into the buffer Variable...

Navigate to System Settings → Additional Parameters. Locate the DictionaryAuditOtlLongSize parameter and set
its value to 8192.
Q. I'm trying to audit Oracle queries but get the following error:

can not get CCSID from oracle charsetId, charsetId: 0

This problem occurs on DataSunrise 6.3.1 when updated from version 5.7 and lower. Update your database's
metadata to get rid of that problem.
Q. I configured a MySQL database to be used as the Dictionary and Audit Storage. I get the following error:

The total number of locks exceeds the lock table size

In InnoDB, row-level locks are implemented by means of a special lock table located in the buffer pool: a small
record is allocated for each hash, and a bit can be set for each row locked on that page. If the pool size is exceeded, the
aforementioned error is thrown. The recommended value of the MySQL "innodb_buffer_pool_size" parameter is 3/4 of
your RAM size. To get rid of that error, execute the following command:

SET GLOBAL innodb_buffer_pool_size=402653184;

or edit the mysqld section of the my.cnf (Linux) or my.ini (Windows) file in the following way:

[mysqld]

innodb_buffer_pool_size = 2147483648

Q. I want to delete audit data manually from my Audit Storage database. Can I do it?
Yes, you can, but not for SQLite. For other databases, to delete audit data manually, you need to derive the
SESSION_ID corresponding to the date before which you want to remove all events. Use the following Python script to get
the SESSION_ID:

from datetime import datetime

BASE_TIME = 1451606400000
remove_before_date = "2022-10-19 10:15:20"
dt_obj = datetime.strptime(remove_before_date, '%Y-%m-%d %H:%M:%S')
timestamp = dt_obj.timestamp() * 1000
timestampWithDiff = timestamp - BASE_TIME
result = (timestampWithDiff / 10) * 10000
print(result)

Once you have your SESSION_ID value and are OK with the REMOVE_BEFORE_DATE value, execute the following queries in
your Audit Storage:

DELETE FROM sessions WHERE end_time IS NOT NULL AND end_time < '<remove_before_date>';
DELETE FROM operation_exec WHERE end_time IS NOT NULL AND end_time < '<remove_before_date>';
DELETE FROM transactions WHERE end_time IS NOT NULL AND end_time < '<remove_before_date>';
DELETE FROM operations WHERE end_time IS NOT NULL AND end_time < '<remove_before_date>';
DELETE FROM connections WHERE end_time IS NOT NULL AND end_time < '<remove_before_date>';
DELETE FROM app_sessions WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM long_data WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM session_rules WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM operation_sub_query WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM operation_rules WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM operation_meta WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM operation_dataset WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM operation_data WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM lob_operation WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM col_objects WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM tbl_objects WHERE session_id < <derived_session_id_as_a_number>;

Note: deleting data like that generates BLOAT. Consider running VACUUM FULL ANALYZE or configuring
autovacuum to run periodically to catch up with the changes done to storage due to DELETEs.
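
For example, assuming a PostgreSQL-based Audit Storage, the cleanup could be as simple as the following statement (note that VACUUM FULL takes exclusive locks on the tables it rewrites, so run it during a maintenance window):

VACUUM FULL ANALYZE;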

Q. How can I enable SSL/TLS for my Dictionary/Audit Storage connection?

By default, the DataSunrise Audit Storage and Dictionary connector logic uses the preferred SSL mode, which means that if
the Audit/Dictionary database server supports SSL, DataSunrise nodes establish an SSL/TLS-encrypted connection with it.
Otherwise, it falls back to an unencrypted connection.
If you're using a DBaaS like CloudSQL, AWS RDS or the MS Azure counterpart of this service, CSPs (Cloud Service
Providers) enable SSL/TLS encryption out of the box (unless you deliberately disable it, which is not recommended),
so the service connections to the DataSunrise-used database are always encrypted.
Q. I configured DataSunrise in the Trail DB Audit Logs mode but there are no events in Transactional Trails
First, do the following:
• Check the DB user used for your Instance or your IAM user permissions (AWS RDS)
• Check the native audit settings of the target database (it should be enabled and the required permissions should be
issued)
• Check the native audit logging policies (AUDIT statements in Oracle, Server/Database Audit Specification in MS SQL
Server)
• Check the native audit storage location to ensure it logs SQL statements
Then check things at the DataSunrise end:
• Audit Rules should be configured to capture queries directed to the required objects and the required query types. You can
use an empty Query Type Rule to capture ANY queries (note the prompt in the Web Console)
• Ensure you've configured Native Audit properly:
• Configuration details are specific to each DB type and platform (AWS RDS, for example)
• Please review the chapters on native audit configuration for your platform and configure it properly if
necessary
• If you're operating MS SQL Server or Oracle, ensure that you don't test native audit from the same session where
you configured it
• Check the repository of the native audit log on the target database side:
• Example 1: check the sys.aud$ table or the DBA_AUDIT_TRAIL view on Oracle with the audit_trail=db,extended Standard
Auditing mechanism to ensure audited statements are logged there (see the sketch after this list)
• Example 2: in case of MySQL/PostgreSQL RDS, ensure you can see audited data in the audit log files of your
RDS instance
• Example 3: for MS SQL Server, check if you can see the data by the means offered by this DBMS (using a
special function or the SSMS Audit logs viewer, which may not work well on AWS RDS)
• If you see audited data in the target DBMS audit log storage location, enable TrailAuditTrace and check the
corresponding Trails worker log files. Ensure the timestamps of the events are current (to confirm that Trailing is not just
too busy and unable to catch up with the flow of events)
• If your DataSunrise Audit Storage is the integrated SQLite and there are no entries in Transactional Trails, do the
following:
• Refresh the Audit Transactional Trails page of the Web Console
• Re-login to the Web Console and check the Transactional Trails again.
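
As a sketch of Example 1 and Example 3 above: the queries below assume Oracle Standard Auditing with audit_trail=db,extended and a file-based MS SQL Server audit; the audit file path is a placeholder that you need to replace with your actual audit file location.

-- Oracle: list the most recently audited statements
SELECT username, action_name, sql_text, timestamp
FROM dba_audit_trail
ORDER BY timestamp DESC;

-- MS SQL Server: read records from the audit files
SELECT event_time, server_principal_name, statement
FROM sys.fn_get_audit_file('C:\AuditLogs\*.sqlaudit', DEFAULT, DEFAULT)
ORDER BY event_time DESC;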

21 Appendix 1

21.1 Default OIDs


Each DataSunrise event is identified inside an SNMP application by a unique identifier, OID. Each SNMP message
includes an Enterprise OID (DataSunrise unique identifier) and OIDs of certain events.

Note: The default DataSunrise identifier (Enterprise OID) is 1.3.6.1.4.1.7777. Thus, the following table displays
events' OIDs based on the default Enterprise OID.

DataSunrise Event type OID Description

Objects
Warnings 1.3.6.1.4.1.7777.0.0.1 Warnings on operations with database objects
Errors 1.3.6.1.4.1.7777.0.0.2 Notifications on errors in operations with objects
Info 1.3.6.1.4.1.7777.0.0.3 Information on operations with objects

Notifications:

Configuration change events
Trap OID 1.3.6.1.4.1.7777.0.1.1 Notifications on changes in DataSunrise configuration

Authentication events
Trap OID 1.3.6.1.4.1.7777.0.1.2 Notifications on user authentication events (successful authentication, authentication errors)

Core events
Trap OID 1.3.6.1.4.1.7777.0.1.3 Notifications on DataSunrise Core events (start, stop, restart)

Audit error events
Trap OID 1.3.6.1.4.1.7777.0.1.4 Notifications on errors that occurred during data auditing

Backend events
Trap OID 1.3.6.1.4.1.7777.0.1.5 Notifications on DataSunrise Backend events

Rule triggering events. Activated when a Rule is triggered (matched)
Trap OID 1.3.6.1.4.1.7777.0.1.6 Notifications on triggered Rules

21.2 DataSunrise System Events IDs


Below is the full list of DataSunrise system events and their IDs.

Note: each ID consists of the "DS_" prefix (meaning "DataSunrise"), five digits and a postfix ("I" — info, "E" —
error, "W" — warning). The first digit of the ID defines the group of events (Configuration, Core etc.). The second digit
defines the level of the event (1 — error, 2 — warning, 3 — info). The last three digits are the event's number.

ID Title (used in the Web Console) Message


Configuration Events
DS_13001I Rule Priority Change User '<name>'. Priority for the rule with id '<id>'
was changed.
DS_13002I Rule Creation User '<name>'. The '<name>' rule was created. Id
<id> was assigned to the '<name>' rule.
DS_13003I Rule Updating User '<name>'. The '<name>' rule was updated.
DS_13004I Rule Enabled/Disabled Status Toggling User '<name>'. The rule with id '<id>' was <enabled
or disabled>
DS_13005I Rule Removal User '<name>'. The rule with id '<id>' was deleted.
DS_13006I Rule Schedule Creation User '<name>'. The schedule '<name>' was created.
DS_13007I Rule Schedule Update User '<name>'. The schedule '<name>' was
updated.
DS_13007I Rule Schedule Removal User '<name>'. The schedule '<name>' was
updated.
DS_13008I Rule Schedule Removal User '<name>'. The schedule with id '<id>' was
deleted.
DS_13009I Rule Subitem Removal User '<name>'. The Subitem with ID 'id' was deleted.
[Was used in object groups: <object groups list>]
[Was used in rules: <rules list].
DS_13010I Rule Object Removal User '<name>'. The Object with ID 'id' was deleted.
[Was used in object groups: <object groups list>]
[Was used in rules: <rules list].
DS_13011I Host Creation User '<name>'. The host item '<name>' was created.
DS_13012I Host Update User '<name>'. The host item '<name>' was
updated.
DS_13013I Host Removal User '<name>'. The host item '<id>' was deleted.
DS_13014I Host Group Creation User '<name>'. The group of hosts '<name>' was
created.
DS_13015I Host Group Update User '<name>'. The group of hosts '<name>' was
updated.
DS_13016I Host Group Removal User '<name>'. The group of hosts with id '<id>'
was deleted.
DS_13017I Proxy Creation User '<name>'. The proxy was created. ID '<id>'
was assigned to the proxy with the address '<ip
address>:<port>'.
DS_13018I Proxy Update User '<name>'. The proxy '<ip address>:<port>' (ID
= <id>) was updated.
DS_13019I Proxy Removal User '<name>'. The proxy with id '<id>' was deleted.
DS_13020I Sniffer Creation User '<name>'. The sniffer was created. ID '<id>'
was assigned to the sniffer for the interface
'<network device>'.
DS_13021I Sniffer Update User '<name>'. The sniffer '<network device>' (ID =
<id>) was updated.
DS_13022I Sniffer Removal User '<name>'. The sniffer with id '<id>' was
deleted.
DS_13023I Firewall Server Creation User '<name>'. A new server object was inserted.
ID '<id>' was assigned to the server with the host
'<host>'.
DS_13024I Firewall Server Update User '<name>'. The server '<host>' (ID = <id>) was
updated.
DS_13025I Firewall Server Removal User '<name>'. The server with id '<id>' was
deleted.
DS_13026I Database Interface Creation User '<name>'. The database interface with id '<id>'
was created.
DS_13027I Database Interface Update User '<name>'. The database interface with id '<id>'
was updated.
DS_13028I Database Interface Removal User '<name>'. The database interface with id '<id>'
was deleted.
DS_13029I Database Instance Creation User '<name>'. The instance was inserted into
configuration. ID '<id>' was assigned to the instance
'<name>'.
DS_13030I Database Instance Update User '<name>'. The instance '<name>' (ID = <id>)
was updated.
DS_13031I Database Instance Removal User '<name>'. The instance with id '<id>' was
deleted from configuration.
DS_13032I Database Instance Update after Application user detection is '<enable or disable>'.
Application Detection The instance (ID = <id>) was updated.
DS_13033I Subscribers Server Creation User '<name>'. The '<host>:<port>' subscribers
server was added. ID '<id>' was assigned to the
server.
DS_13034I Subscribers Server Update User '<host>'. The '<name>:<port>' subscribers
server (ID = <id>) was updated.
DS_13035I Subscribers Server Removal User '<name>'. The subscribers server with id '<id>'
was deleted.
DS_13036I Subscriber Creation User '<name>'. A new subscriber object was
inserted. ID '<id>' was assigned to the subscriber
with the client address '<subscriber address>'.
DS_13037I Subscriber Update User '<name>'. The subscriber '<subscriber
address>' (ID = <id>) was updated.
DS_13038I Subscriber Removal User '<name>'. The subscriber with id '<id>' was
deleted.
DS_13039I Event Subscriber Creation User '<name>'. The event subscriber with id '<id>'
was created.
DS_13040I Event Subscriber Update User '<name>'. The event subscriber with id '<id>'
was updated.
DS_13041I Event Subscriber Removal User '<name>'. The event subscriber with id '<id>'
was deleted.
DS_13042I Instance Level User Creation User '<name>'. A new database user object was
inserted. ID '<id>' was assigned to the database user
'<login>'.
DS_13043I Instance Level User Update User '<name>'. The database user '<login>' (ID =
<id>) was updated.
DS_13044I Instance Level User Removal User '<name>'. The database user with id '<id>' was
deleted.
DS_13045I Object Group Creation Object Group creation.
DS_13046I Object Group update. Object Group update.
DS_13047I Object Group removal. Object Group removal.
DS_13048I Periodic Task creation Periodic Task creation
DS_13049I Periodic Task removal Periodic Task removal
DS_13050I Worker creation Worker creation
DS_13051I Worker update Worker update
DS_13052I Worker removal Worker removal
DS_13053I Event email template creation Event email template creation
DS_13054I Event email template update Event email template update
DS_13055I Event email template removal Event email template removal
DS_13056I Email template creation Email template creation
DS_13057I Email template update Email template update
DS_13058I Email template removal Email template removal
Authentication Events
DS_21001E Web UI authentication failure The '<login>' user failed to connect to the
DataSunrise Web Console.
DS_22001W Database User Authentication Failure The user '<name>' tried to connect to the database
with an invalid login or password for <count> times
(the limit is <count>). <instance & proxy info>
DS_23001I Connection Authentication The '<login>' user was connected to the DataSunrise
Web Console.
DS_23002I Changed password Password has been changed
DS_23003I Generated password Password has been generated
DS_23004I Reset password Password has been reset
DS_23005I Password will be changed Password should be changed
Core events
DS_31001E License Error <license verifier error message>
DS_31002E Core Parser Error The Firewall accepted a wrong packet. Error: <error
message>.
DS_31003E Core Database Error Database exception in <function name>: <error
message> (<error code>)
DS_31004E Core Restarting Due to an Error The Firewall process terminated with exit code
<error code> and will be restarted.
DS_31005E Core Session Initialization Failure The session object cannot be correctly initialized.
DS_31006E Criteria Type Error The type of criteria (ID = <id>) cannot be obtained.
DS_31007E Criteria Loading Error The criteria (ID = <id>) cannot be loaded.
DS_31008E Main Criterion Loading Error The main criterion cannot be loaded (ID = <id>).
DS_31009E Masking Rule Loading Error The masking rule cannot be correctly loaded.
DS_31010E Interface Loading Error The database interface cannot be loaded to the
system.
DS_31011E Sniffer PCAP Library Error The sniffer cannot be initiated, because the PCAP
library was not loaded.
DS_31012E Sniffer Task Initialization Failure The task of the sniffer cannot be initialized.
DS_31013E Sniffer Initialization Failure The sniffer cannot be initialized.
DS_31014E TCP Session Closing Due To Packet Parsing The TCP session (<host>:<port> - <host>:<port>)
Failure was closed due to TCP packet parsing failure.
DS_31015E TCP Session Closing Due To Out Of The TCP session (<host>:<port> - <host>:<port>)
Ordered Segments Limit Reached was closed, because the limit of out-of-order
segments is reached.
DS_31016E TCP Session Closing Due to Duplicate The TCP session (<host>:<port> -> <host>:<port>)
Session was closed due to detection of a new session with
the same ports.
DS_31017E TCP Session Closing Due to the Maximum The TCP session (<host>:<port> -> <host>:<port>)
Idle Time was closed, because the maximum idle time was
exceeded.
DS_31018E Interface Location Failure The target interface was not found, create a new
instance and an interface for <host>:<port>.
DS_31019E Proxy Start Failure. Busy Port The port <port> is busy. The proxy '<id>' was not
started.
DS_31020E Proxy Server Connection Failure Failure to connect to "<name> (<host>:<port>)".
DS_31021E Proxy Location Failure The Proxy was not found, create a new proxy on port
number: <port>.
DS_31022E Proxy Connection Closing Connection with ID '<id>' was closed.
DS_31023E Proxy Parser Disabling for the Specified The Proxy parser is disabled for the connection with
Connection ID '<id>' (session ID '<id>').
DS_31024E Connectable IP Address Location Failure The connectable IP address for the proxy host '<ip
address>' cannot be found.
DS_31025E Audit Database Initialization Failure The Audit database initialization failure: <error
code> (<error message>).
DS_31026E Audit Database Error The Audit database error: <error code> (<error
message>).
DS_31027E Audit Database Error During Transaction The Audit database error while terminating the
Termination transaction: <error code> (<error message>)
DS_31028E Execution Transaction ID Mismatch An error occurred when saving open execution
information to the Audit Storage. Transaction ID
'<id>' doesn't correspond to transaction ID '<id>' of
the the current session.
DS_31029E Audit Invalid Operation Parameter Audit journal: invalid input parameter - operation.
DS_31030E Audit Invalid Execution Parameter Audit journal: Invalid input parameter - execution.
DS_31031E Rules Checker Switcher - Invalid Initialization of the Rules Checker failed. Wrong
Parameters parameters.
DS_31032E Rules Checker Switcher - Initialization The Rules Checker cannot be initiated.
Failure
DS_31033E Database Protocol Parsing Failure <database protocol parser error message>
DS_31034E Packet Blocking Failure The packet cannot be blocked.
DS_31035E SQL Query Masking Initialization Error Initialization of SQL Query Masking failed.
DS_31036E SQL Query Masking Error SQL Query Masking failed.
DS_31037E Need valid crypto settings error Need valid crypto settings error
DS_31038E Rule loading failure Rule loading failure
DS_31039E Configuration update failure Configuration update failure
DS_31040E Unstructured Data masking error Unstructured Data masking error
DS_32001W Core Restarting The Firewall process will be restarted.
DS_32002W Sniffer SSL Warning An SSL connection cannot be parsed in the sniffer
mode.
DS_32003W Database Service Appending Failure The service '<name>' cannot be added to the
database.
DS_32004W Failure of Database Service Appending to The service '<name>' cannot be added to the
the Specified Database database '<name>'.
DS_32005W Database Service Removal Failure The service '<name>' cannot be removed from the
database.
DS_32006W Failure of Removal the Database Service The service '<name>' cannot be removed from the
from the Specified Database database '<name>'.
DS_32007W Audit Free Disk Space Limit The free disk space limit for audit is reached. The
current disk space amount is <count> MB. The disk
space limit is <count> MB.
DS_32008W Logger Free Disk Space Limit The free disk space limit for logging is reached. The
current disk space amount is <count> MB. The disk
space limit is <count> MB.
DS_32009W Rule Type License Violation The <rule> rule with ID '<id>' (<name>) cannot be
added because it is not covered by the license and
cannot be added. Contact support.
DS_32010W Database Type License Violation The instance with ID '<id>' of '<database type>' is
not covered by the license and cannot be added.
Contact support.
DS_32011W Host Address License Violation The <proxy or sniffer> with ID '<id>' ('<database
type>' <host>:<port>) is not covered by the license
and cannot be added. Contact support.
DS_32012W Vertica Mapping Location Failure Vertica mapping: Mapping for a connecting user is
not found ('<name>'). The connection is passed.
DS_32013W PostgreSQL Mapping Location Failure Postgre mapping: Mapping for a connecting user is
not found ('<name>'). The connection is passed.
DS_32014W PostgreSQL Opened Statements Warning PostgreSQL opened statements warning
DS_32015W Rules Checker Switcher - License Violation Rules Checker update failed. New rules are no longer
covered by the license. The Core will work with old
rules now.
DS_32016W Messages/Audit Queue Limit Warning The '<name>' queue is filled for more than <upper
level>%. The current level is <current level>%.
DS_32017W SSL resume timeout warning SSL resume timeout warning
DS_32018I Allocated Core memory Allocated Core memory
DS_51023I Allocated Core memory more that allowed Allocated Core memory more that allowed
DS_32019W If the marsProxyDisable option is enabled, If the marsProxyDisable option is enabled, blocking
blocking and masking rules will not work and masking rules will not work
DS_32020W Mapping is already found Mapping has been already found
DS_32021W Insufficient information on login. Ensure Insufficient information on login. Ensure that the
that the metadata is up to date. metadata is up to date.
DS_32022W NTLM mapping is not supported NTLM mapping is not supported
DS_32023W Core dictionary switching Core dictionary switching
DS_33001I No Firewall Activity No firewall activity for a certain period (<period>
sec.).
DS_33002I MsSQL Route Rewriting Rewriting of the MsSQL route: <host>:<port> ->
<host>:<port>.
DS_33003I MsSQL Route Redirection MsSQL redirection: <host>:<port>.
DS_33004I SSL session restore warning SSL session restore warning
DS_33005I Session event matched Session event matched
DS_33006I No traffic activity at the proxy No traffic activity at the proxy
DS_33007I Proxy max connections error Proxy max connections error
DS_33008I Proxy connections closing freeze Proxy connections closing freeze
DS_33009I Proxy connection slow packet monitoring Proxy connection slow packet monitoring
Audit Viewer Errors
DS_41001E Database Connection Loss Loss of connection - <error message> (code =
<error code>).
DS_41002E Database Error The database error - <error message> (code =
<error code>).
DS_41003E Audit partition error Audit partition error
DS_41004E Audit put message to queue error Audit put message to queue error
Backend Events
DS_51001E Firewall Update Failure The Firewall instance cannot be updated. <error
message> <error description>.
DS_51002E DataSunrise Self-Updater Initialization The DataSunrise Self-Updater is not fully initialized.
Failure
DS_51003E Backend Database Error The backend database failure: <statement or
description> (<error code>).
DS_51004E Connection Error The backend connection loss: <statement or
description> (<error code>).
DS_51005E User Permission Denial The backend permission for the user <name> was
denied: <statement or description>.
DS_51006E Failure to Find Connection The backend database connection was not found:
<error message> (<statement or description>).
DS_51007E Backend Error The backend error: <error message> (<statement or
description>).
DS_51008E Backend Logic Error The backend logic error: <error message>.
DS_51009E Backend Runtime Error The backend runtime error: <error message>.
DS_51010E Backend Unknown Error The backend unknown error: <error message>.
DS_51011E PCAP Wrapper Error The PCAP wrapper error: <error message> (<error
description>).
DS_51012E MsSQL Database List Error The MsSQL database list reading error: the name is
empty.
DS_51013E MsSQL Load Schema Failure The MsSQL database '<name>' cannot be loaded:
<statement>.
DS_51014E Periodic Task Failure The task '<name>' (ID = <id>) error : <error
message>.
DS_51015E Task Load Failure The task load error : <error message>.
DS_51016E Task Error The task error : <error message>.
DS_51017E Task Failure The task (ID = <id>) error : <error message>.
DS_51018E Unknown Task Error Task (ID = <id>) failure : an unknown error.
DS_51019E Public address is not available Public address is not available
DS_51020E Proxy is not available Proxy is not available
DS_51021E Database is not available Database is not available
DS_51022E Healthcheck is OK Healthcheck is OK
DS_51025E Field crypto key error Field crypto key error
DS_51026E Backend invalid argument Backend invalid argument
DS_52001W Static Masking warning Static Masking warning
DS_52002W License Expiration Note The license '<customer name>' expires in <count>
hour(s).
DS_52003W PostgreSQL Search Path Setup Failure The search path cannot be set for a PostgreSQL
instance by the current user.
DS_52005W Time between the servers is not Time between the servers is not synchronized
synchronized
DS_52006W Need Set Masking Need set masking
DS_52007W Backend Dictionary Switching Backend dictionary switching
DS_53001I Firewall update success Firewall update success
DS_53002I Dictionary Backup Creation Backup #<id> was created { <transfer set> }
DS_53003I Dictionary Backup Restoring Backup #<id> was restored { <transfer set> }
DS_53004I Audit Journal cleaning Audit Journal cleaning
DS_53005I Firewall Core restarting Firewall Core restarting
DS_53006I Firewall Core stopping Firewall Core stopping
DS_53007I Firewall Core starting Firewall Core starting
DS_53008I Database cannot be loaded because it is in Database cannot be loaded because it is in invalid
invalid state state
DS_53009I Doesn't Have System Users No system users available
DS_53010I License Add Note License add note
DS_53011I Backend Life Cycle Backend life cycle
DS_53012I Dictionary was imported Dictionary has been imported successfully
DS_31044I Dictionary wasn't imported Dictionary hasn't been imported
DS_53013I User was added User has been added
DS_53014I User was deleted User has been deleted
DS_53015I User was updated User has been updated
DS_53016I User update IP restrictions User update IP restrictions
DS_53017I User update email confirmation status User update email confirmation status
DS_53018I Access role was added Access role has been added
DS_53019I Access role was deleted Access role has been deleted
DS_53020I Access role was updated Access role has been updated
DS_53021I The Database is available again The Database is available again
DS_53022I Public address is available again Public address is available again
DS_53023I Proxy is available again Proxy is available again
DS_52004I Allocated Backend memory Allocated Backend memory
DS_51024I Allocated Backend memory more than Allocated Backend memory more than allowed
allowed
DS_52008W Oracle PDB not open Oracle PDB is not open
DS_52009W Empty results of Report Gen Report Gen found no results
Metadata-related Events
DS_61001E Update Metadata error. Need restart Update Metadata error. Need restart
DS_63001I Database Creation Database object was created. ID '<id>' was assigned
to database '<name>'
DS_63002I Database Update Database object '<name>' (ID '<id>') was updated
DS_63003I Database Removal Database object with ID '<id>' was deleted
DS_63004I Database Level User Creation A new database user (database level) object was
created. ID '<id>' was assigned to database user
'<name>'
DS_63005I Database Level User Update Database user (database level) '<login>' (ID = <id>)
was updated
DS_63006I Database Level User Removal The database user (database level) object with id
'<id>' was deleted
DS_63007I Database Service Creation A new service object was inserted into configuration.
The id '<id>' was assigned to instance '<name>'
DS_63008I Database Service Update Service object '<name>' (ID = <id>) was updated
DS_63009I Database Service Removal Service object with id '<id>' was deleted
DS_63010I Database Object Property Creation A new database property '<name>' with id '<id>'
was created
DS_63011I Database Object Property Update Database property '<name>' with id '<id>' was
updated.
DS_63012I Database level LDAP user creation Database level LDAP user created
DS_63013I Database level LDAP user update Database level LDAP user updated
Other events that cause pop-ups to be displayed
DS_70000I Important user alerts Important user alerts
DS_74001I Settings alert Settings alert

21.3 Examples of Database Connection Strings
Below, you can find connection string templates for the majority of databases supported by DataSunrise:
• MS SQL Server:

DRIVER={<ODBC_DRIVER_NAME>};SERVER=<server_address,port_number>;DATABASE=<db_name>;UID=<login>;PWD=<password>

• Hive:

Driver=<ODBC_DRIVER_NAME>; Host=<server_address>; Port=10000; Schema=default; HiveServerType=2;


UserName=<user_name>;PWD=<password>; AuthMech=3

• Cassandra:

Host=<server>;Port=<port_number>;AuthMech=1;UID=<user_name>;PWD=<password>;

• IBM DB2:

Driver={<ODBC_DRIVER_NAME>};Database=<database>;Hostname=<server_address>;Port=1234;
[Uid=<user_name>;Pwd=<password>;[Hostname/IpAddress=val;]]
[Protocol=TCPIP;Authentication=KERBEROS;TargetPrinciple=val;]

• Impala:

Driver=<ODBC_DRIVER_NAME>; Host=<server_address>; Port=21050; Schema=<schema>; HiveServerType=2;


AuthMech=0;
• Informix:

DRIVER=<ODBC_DRIVER_NAME>;Host=<server_address>;Server=<server_name>;
Service=<port_number>;Protocol=olsoctcp;Database=<database>;Uid=<user_name>;Pwd=<password>;

• MongoDB:

mongodb://[<username>:<password>@]<host1>[:<port1>][,<host2>[:<port2>],... [,<hostN>[:<portN>]]][/
[<db_name>][?<property_name1>=<value>&<property_nameN>=<value>]]

• MySQL, X Protocol:

mysqlx://[<login>[:<password>]@][<hosts>[:<port>]][/<database>] [?
<property_name1>=<value>&<property_nameN>=<value>]

• Netezza:

DRIVER={<ODBC_DRIVER_NAME>};SERVERNAME=<server_address>; PORT=<port_number>;DATABASE=<db_name>;
USERNAME=<user_name>;PASSWORD=<password>;LOGINTIMEOUT=<connect_timeout_in_sec>;

• Oracle:

(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<server_address>) (PORT=<port_number>))
(CONNECT_DATA=(SERVER=DEDICATED)(ORACLE_SID=orcl)))

• PostgreSQL, Redshift, Amazon Aurora PostgreSQL, Greenplum:

host=<server_address> port=<port_number> dbname=<db_name> user=<user_name> [password=<password>]


[sslmode=prefer/require] [connect_timeout=<connect_timeout_in_sec>] [application_name=DataSunrise]

• SAP Hana:

DRIVER=<ODBC_DRIVER_NAME>;SERVERNODE=<server_address>:30013;
UID=<user_name>;PWD=<password>;DATABASENAME=<db_name>;

• Teradata:

Driver=<ODBC_DRIVER_NAME>;DBCName=<server_address>;Database=<db_name>;
Uid=<user_name>;Pwd=<password>;TDMSTPORTNUMBER=<port_number>;DATAENCRYPTION=y;

• Vertica:

Driver=<ODBC_DRIVER_NAME>;Server=<server_address>;Port=<port_number>;
Database=<db_name>;Uid=<user_name>;Pwd=<password>; ConnSettings=SET+SESSION+IDLESESSIONTIMEOUT
+'60+sec';SSLMode=prefer;
