DataSunrise Database Security User Guide
All brand names and product names mentioned in this document are trademarks, registered trademarks or service
marks of their respective owners.
No part of this document may be copied, reproduced or transmitted in any form or by any means, electronic,
mechanical, photocopying, recording, or otherwise, except as expressly allowed by law or permitted in writing by the
copyright holder.
The information in this document is subject to change without notice and is not warranted to be error-free. If you
find any errors, please report them to us in writing.
1 General Information
In this configuration, DataSunrise can be used only for "passive security" ("active security" features such as the database firewall or data masking are not supported in this mode). When deployed in Sniffer mode, DataSunrise can perform database activity monitoring only, because it cannot modify database traffic in this configuration. Running DataSunrise in Sniffer mode does not require any additional reconfiguration of databases or client applications. Sniffer mode can be used for data auditing purposes or for running DataSunrise in Learning mode.
Important: database traffic should not be encrypted. Check your database settings, as some databases encrypt traffic by default. If you're operating an SQL Server database, do not use ephemeral ciphers. DataSunrise deployed in Sniffer mode does not support connections redirected to a random port (as Oracle does). All network interfaces (the main one and the one the database is redirected to) should be added to DataSunrise's configuration.
Proxy mode is for "active protection". DataSunrise intercepts SQL queries sent to a protected database by database users, checks whether they comply with existing security policies, and audits, blocks or modifies the incoming queries or query results if necessary. When running in Proxy mode, DataSunrise supports its full functionality: database activity monitoring, database firewall, and both dynamic and static data masking are available.
Important: We recommend using DataSunrise in Proxy mode. It provides full protection, and in this mode DataSunrise supports processing of encrypted traffic and redirected connections (essential for Hana, Oracle, Vertica, MS SQL). For example, in SQL Server, redirects can occur when working with Azure SQL or an AlwaysOn Listener.
The target database performs auditing using its integrated auditing mechanisms and saves the auditing results in a dedicated database table or in a CSV or XML file, depending on the selected configuration. DataSunrise then establishes a connection with the database, downloads the audit data and passes it to the Audit Storage for further analysis.
First and foremost, this configuration is intended for Amazon RDS databases because DataSunrise doesn't support sniffing on RDS.
This operation mode has two main drawbacks:
• If the database admin has access to the database logs, they can delete them
• Native auditing has a negative impact on database performance.
Note: Dynamic SQL processing is available for PostgreSQL, MySQL and MS SQL Server
EXECUTE enables you to execute a query that is contained in a string or variable or is the result of an expression. For example:
...
EXECUTE "select * from users";
EXECUTE "select * from ” || table_name || where_part;
EXECUTE foo();
...
Here table_name and where_part are variables, and foo() is a function that returns a string. The second and third queries are dynamic because we can't tell in advance what query will be executed in the database.
Let's take a look at the following example:
SELECT run_query();
This function takes a random query from the queries table, executes it and returns some result. DataSunrise can't know beforehand which query will be executed because the exact query becomes known only when the following subquery is executed:
...
SELECT * FROM queries AS r(id, sql) ORDER BY id DESC LIMIT 1 INTO row;
...
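For illustration, run_query() might be defined roughly as follows. This is a minimal PL/pgSQL sketch only: the queries(id, sql) table and all names are assumptions based on the fragment above, not DataSunrise's own code.
CREATE OR REPLACE FUNCTION run_query() RETURNS text AS $$
DECLARE
    rec record;   -- the stored query picked at runtime
    res text;
BEGIN
    -- take the most recently stored query from the queries table
    SELECT id, sql INTO rec FROM queries ORDER BY id DESC LIMIT 1;
    -- execute it dynamically; the exact SQL is unknown until this point
    EXECUTE rec.sql INTO res;
    RETURN res;
END;
$$ LANGUAGE plpgsql;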
That's why DataSunrise wraps dynamic SQL in a special function, DS_HANDLE_SQL. As a result, the original call is modified as follows:
SELECT run_query();
SELECT DSDSNRBYLCBODMJOVNJLFJFH();
Inside the DS_HANDLE_SQL function, the database sends the dynamic SQL to DataSunrise's handler. The handler processes the query and audits, masks or blocks it accordingly. Thus,
...
EXECUTE row.sql into RESULT;
...
executes not the original query contained in the queries table but a modified one.
To enable dynamic SQL processing, enable the "Dynamic SQL Processing" option in the Advanced Settings when creating a database Instance. Then select the host and port of the dynamic SQL handler. This is the host of the machine DataSunrise is installed on; it should be reachable from your database because the database connects to this host when processing dynamic SQL.
Important: you must provide an external IP address of the SQL handler machine ("127.0.0.1" or "localhost" will not work).
For processing of dynamic SQL inside functions, enable the "UseMetadataFunctionDDL" parameter in the Additional Parameters and check "Mask Queries Included in Procedures and Functions" for Masking Rules or "Process Queries to Tables and Functions through Function Call" for Audit and Security Rules respectively.
You can also enable dynamic SQL processing in an existing Instance's settings and specify the host and port in the proxy's settings.
Note that you need to configure a handler for each proxy and select a free port number.
PostgreSQL
In PostgreSQL, dblink is used for processing of dynamic SQL. It enables sending arbitrary SQL queries to another remote PostgreSQL database.
Thus, the dynamic SQL handler uses a PostgreSQL emulator. The user database sends dynamic SQL to the handler via dblink. The emulator receives the new connection, performs the handshakes and makes the client database believe that it sends queries to a real database. Since it's necessary to pass the session ID and operation ID (to associate a query sent to the emulator with the original query), these parameters are transferred using dblink's connection string:
host=<handler_host> port=<handler_port>
dbname=<session id> user=<operation id> password=<connection id>
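For illustration only, a forwarded call of this kind might look like the following dblink invocation. The host, port and the numeric values standing in for dbname/user/password are placeholders; the real call is generated by the DS_HANDLE_SQL wrapper, not written by hand.
-- Requires the dblink extension (CREATE EXTENSION dblink) in the user database.
SELECT *
FROM dblink('host=10.0.0.5 port=5433 dbname=101 user=17 password=3',
            'SELECT 1')
     AS t(result text);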
MySQL
In MySQL, the FEDERATED storage engine extension is used for dynamic SQL processing. It also connects two remote databases, but in MySQL's case it works as an extended table that is created in one database while the data is stored in another. To create such a table, it's necessary to provide a connection string to the MySQL database.
During execution of the first dynamic SQL query, the HANDLE_SQL function and such an extended table are created in the DSDS***_ENVIRONMENT schema. This table's connection string points at the MySQL emulator. The table includes the following columns: query, connection_id, session_id, operation_id and action.
First, the function INSERTs all the required parameters. The emulator processes the query, modifies it and changes the action to block if necessary. After that, the function SELECTs the resulting query and returns it.
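As a rough sketch only, such an extended table could look like the following. The schema name, table name, column types and connection string are assumptions; the real objects are created by DataSunrise automatically.
CREATE TABLE `DSDS_EXAMPLE_ENVIRONMENT`.`ds_handle_sql` (
  query TEXT,
  connection_id BIGINT,
  session_id BIGINT,
  operation_id BIGINT,
  action VARCHAR(16)
) ENGINE=FEDERATED
CONNECTION='mysql://handler_user:handler_password@handler_host:handler_port/remote_db/ds_handle_sql';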
In MySQL, the following pair of statements is used for creation and execution of dynamic queries: prepare stmt from @var and execute stmt. Since the execution of the latter means that a prepared statement already exists in the database, we modify prepare. As a result, a complete query:
<stmt_name> in this case is the name of the statement from the user query. A separate procedure is created for every name and for every invocation. Information about these procedures is stored in PreparedStatementManager. @ds_sql_<stmt_name> is an output parameter of HANDLE_SQL into which the function puts the modified query.
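For reference, the client-side pattern being intercepted looks like this (the query text is illustrative):
SET @var = 'SELECT * FROM users WHERE id = 10';
PREPARE stmt FROM @var;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;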
Important: for dynamic SQL processing in MySQL, the FEDERATED engine should be enabled. To enable it, add the federated line to the [mysqld] section of the /etc/my.cnf file. Another method: connect to your MySQL/MariaDB with admin privileges, ensure that the FEDERATED engine is off, and enable it with the following queries:
show engines;
install plugin federated soname 'ha_federated.so';
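The configuration-file alternative mentioned above amounts to a line like this in /etc/my.cnf (restart the MySQL server afterwards):
[mysqld]
federated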
Note: the more proxies you open, the higher the RAM consumption you will experience.
Software requirements:
• Operating system: 64-bit Linux (Red Hat Enterprise Linux 7+, Debian 10+, Ubuntu 18.04 LTS+, Amazon Linux 2)
• 64-bit Windows (Windows Server 2019+) with .NET Framework 3.5 installed: https://www.microsoft.com/en-us/download/details.aspx?id=21
• Linux-compatible file system (NFS and SMB file systems are not supported)
• Web browser for accessing the Web Console:
Note that you might need to install additional software, such as database drivers, depending on the target database and operating system you use. For the full list of required components, see the Prerequisites section of the corresponding Admin Guide.
2 Quick Start
https://<DataSunrise_ip_address>:11000
<DataSunrise_ip_address> is the IP address or the hostname of the server DataSunrise is installed on, 11000 is
the HTTPS port of the DataSunrise's Web Console. For example, if your DataSunrise is installed on your local PC,
the address should be the following:
https://localhost:11000
2. Your web browser may display an "Unsecure connection" prompt due to an untrusted SSL certificate. Follow your
browser's prompts to confirm a security exception for the DataSunrise's Web Console (refer to subs. Creating a
Certificate for the Web Console on page 41).
3. Enter your credentials and click Log in to enter the web interface. On the first startup, use admin as the user
name. Concerning the password, see the instruction below:
• Linux: use the password you received at the end of the installation process.
• Windows: use the password you set at the end of the installation process.
• AWS: use the Instance ID of your EC2 machine with the DS- prefix as the password. For example:
DS-i-05ad7f56124728269
• Microsoft Azure: leave the password field empty. You will be prompted to set a new password after logging in.
• In case the dictionary.db file was removed or a password wasn't set during the installation process, leave the
password field empty to set a new password.
Note:
• The Logical Name field contains a logical name of the database profile. You can set any name
• In the Database Type drop-down list, PostgreSQL (target database type) is selected as an example
• In the Hostname field, the database's IP address is specified
• In the Port field, port number 5434 is specified, because the database listens on this port (example)
• Click Test Connection when done to check the connection between DataSunrise and your database.
4. To employ database security and masking features, it is necessary to create a DataSunrise proxy for the target database. To create a proxy, click Add Proxy in the Capture Mode subsection. Then specify the proxy's IP address in the Listen on IP Address drop-down list and assign the proxy's port number in the Listen on Port field. The proxy's port number should differ from the database's port number (it is 54321 in this case). When done, click Save to save the database profile.
5. To connect to the database through the proxy, it is necessary to create a new connection in PGAdmin with
DataSunrise proxy settings.
Note: In practice, a database is usually configured to listen on a non-standard port (54321 for example), and a
DataSunrise proxy is configured to use the port which client applications use to connect to the server. Thus, client
applications connect to the DataSunrise proxy instead of connecting to the database directly.
In the Main section, the target database information is specified. It includes the database type (PostgreSQL), the database Instance (the name under which the target database entry appears in the Configuration) and the Rule's logical name.
By default, the "Audit" action is selected. It means that DataSunrise will audit user queries when the Rule is triggered. To log database responses (the output), the Log Query Results check box is checked. Since the current scenario requires all user queries to be audited, the Filter Sessions settings are left at their defaults. Thus, any query to the database, regardless of its source IP address, will trigger the Rule.
The Filter Statements settings are left at their defaults as well. Thus, the Rule will be triggered by all queries that contain any DML statements.
3. Now let's check the auditing results in the Web Console. Navigate to the Audit → Transactional Trails
subsection.
4. To view detailed information about an event, click the event's ID. The event's details will be displayed in a new tab: the SQL of the query, basic information, session information and the database output.
The Block action, which blocks all queries that meet the current Rule's conditions, is set by default in the Action Settings subsection.
Since the current scenario requires preventing ALL table modification attempts, the Object Group filter is selected in the Filter Statements subsection and the INSERT, UPDATE and DELETE check boxes are checked. Thus, when the Rule is triggered, DataSunrise will block all queries aimed at table modification. The filtering settings also include the customers table (Process Query to Database Objects subsection). Thus, the Rule can be triggered only by queries directed to the customers table. All actions aimed at other tables will be ignored.
UPDATE public.customers
SET "Last Name"='Burnwood'
WHERE "Last Name"='Wade';
2. As a result, the query is blocked. The blocking is performed in the form of a SQL error ("ERROR: The query is
blocked").
3. To view Data Security events and event details, go to Data Security → Events.
In the Columns to Mask subsection, the column to be masked is specified (the LastName column of the customers table). To select it, click Select and check it in the database objects tree. The Fixed string algorithm is selected in the Masking Method drop-down list. Thus, the current Rule will be triggered by a query directed to the LastName column and will obfuscate its contents in the database output. Other columns will be ignored.
2. As a result, the contents of the LastName column are obfuscated with a fixed string.
3. To view masking events, go to the Data Masking → Dynamic Masking Events subsection. To view details of an event, click the ID of the event you're interested in.
The Allow value is set in the Action Settings subsection to ignore all queries that meet the current Rule's conditions.
The current scenario requires approving table modifications, so the Object Group filter is selected in the Filter Statements subsection and the INSERT, DELETE and UPDATE check boxes are checked. Thus, once the Rule is triggered, DataSunrise will allow all queries aimed at table modification. The filtering settings also include the customers table (Process Query to Database Objects subsection). Thus, the Rule can be triggered only by queries directed to the customers table. It is now necessary to create a blocking Security Rule to prevent access to the remaining tables.
3. Click Add Rule once again in the Security → Rules section.
4. Configure a Rule to block access to the database. Since the scenario requires preventing table modification attempts, the Object Group filter is selected in the Filter Statements subsection and the INSERT, DELETE and UPDATE check boxes are checked. Thus, DataSunrise will block these types of queries.
5. To prevent the Blocking Rule from blocking access to the customers table, it's necessary to give the Access Rule a higher priority. In the Data Security → Rules section, right-click and select Priority Mode from the context menu. Then drag and drop your Rule. Click Save Priority.
The Rules are checked and executed by DataSunrise from the top of the list to the bottom. If an incoming query doesn't match the first Rule's conditions, DataSunrise checks the second Rule and so on. But if a query matches a Rule's conditions, DataSunrise stops checking the lower-priority Rules. The closer a Rule is to the top of the list, the higher its priority. Thus DataSunrise does as the higher-priority Rule demands.
UPDATE public.customers
SET "LastName"='Burnwood'
WHERE "LastName"='Wade';
2. As a result, the query is allowed and the table will be successfully modified.
3. Now let’s query the customers table using the same command:
UPDATE public.customers
SET "LastName"='Burnwood'
WHERE "LastName"='Wade';
4. As a result, the Blocking Rule is triggered, and the query is blocked. The blocking is performed in the form of a
SQL error (it says "ERROR: The query is blocked").
5. To view Limited Access events and event details, go to Security → Events.
First, a function should be defined for each database on each instance. This function returns a random double value for a column (a sketch of such a function is given after this list).
2. Configure a Rule to obfuscate your column: go to Masking → Dynamic Masking Rules and click the Add Rule
button. Scroll down to the Masking Settings and click Select to add the column to be masked.
3. Click ADD REGEXP DATABASE, input ^D$ as the regular expression and then click Add. This regular expression
defines that the database name should be D exactly.
4. Once the regular expression for the database is added, hover your mouse cursor over it and the Add RegEx Schema button will appear. Click the Add RegEx Schema button, input ^S$ as the regular expression and then click Add. This regular expression defines that the schema name must be exactly S.
5. Once the regular expression for the schema is added, hover your mouse cursor over it and the Add RegEx Table button will appear. Click the Add RegEx Table button, input ^T$ as the regular expression and then click Add. This regular expression defines that the table name must be exactly T.
6. Once the regular expression for the table is added, hover your mouse cursor over it and the Add RegEx Column button will appear. Click Add RegEx Column, input ^C$ as the regular expression and then click Add. This regular expression defines that the column name must be exactly C. Click Done.
7. Finally, enter D.randomizer into the Function to Call field, where D is the database or schema name and randomizer is the name of the function created previously. Click Save Rule and that's all.
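A minimal sketch of the masking function mentioned in step 1, for a PostgreSQL target. The schema name D, the function name randomizer and the signature are assumptions; adapt them to your database.
-- Sketch only: ignores the input value and returns a random double.
-- Assumes schema D already exists in the target database.
CREATE OR REPLACE FUNCTION D.randomizer(val double precision)
RETURNS double precision AS $$
    SELECT random();
$$ LANGUAGE sql;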
Each page of the DataSunrise's Web Console (fig. 5) is divided into three parts. The upper part (element group 1) is
common for all the Web Console's sections and subsections. It contains the Admin Panel.
The left part of the page (element group 2) is common for each Web Console's section. It includes the Navigation
Menu.
And the content part (element group 3) is different for each page.
See detailed description of all aforementioned elements below:
1. Admin Panel
2. Navigation menu
Interface element Description
Dashboard link Dashboard access (refer to Dashboard on page 39)
Compliances link Compliance Manager access (Compliance Manager Overview on page 268)
Audit link Data Audit section access
Security link Data Security section access
Masking link Data Masking section access
Data Discovery link Data Discovery section access (Sensitive Data Discovery on page 243)
VA Scanner link Vulnerability Assessment section access (VA Scanner on page 263)
Monitoring link Monitoring section access (Diagrams of Internal Characteristics of DataSunrise
on page 55)
Reporting link Report Gen access (Reporting on page 259)
Resource Manager link Resource Manager access (Resource Manager on page 275)
Configuration link Configuration section access (DataSunrise Configurations on page 203)
System Settings link System Settings section access (System Settings)
Each section of the Navigation menu can be extended to access its subsections. It is used to navigate through the subsections of the current section.
3. Content area. It is used to display the current subsection's content or tabs/pop-up windows.
4.2 Dashboard
The Dashboard is the starting page of the DataSunrise's Web Console. It displays general information about the
program's operation.
The Dashboard's interface includes the following elements:
1. Proxies list. Available DataSunrise proxies. Right-click on a proxy name for a context menu which enables you to
do the following:
• Test Connection. Tests the connection between the selected DataSunrise proxy and the target database
• Active Database Sessions. Displays details of database sessions in progress
• Disable Proxies. Makes the proxies inactive.
2. Last System Errors list. Displays DataSunrise system errors.
3. System Info list. Contains information about the computer DataSunrise is installed on.
List item Description
Server Current DataSunrise server
Current Dictionary Location of the current Dictionary database
License Type Type of the DataSunrise license
Backend UpTime DataSunrise Backend working time
Version DataSunrise version number
Node Name Computer name
OS Version DataSunrise server's operating system version
License Expiration Date Expiration date of the DataSunrise license
4. Top Blocked Queries per Day list. Displays a list of the most frequent user queries that were blocked by the
DataSunrise's Data Security module.
5. Current Throughput clickable diagram. Displays the number of user sessions and the number of executed commands with respect to a target database. The diagram is refreshed every 10 seconds.
6. Active Audited Sessions list. Displays user sessions in progress. It also enables you to close running sessions. To do this, select a session of interest in the list and click Interrupt Session.
7. Trail DB Audit Logs list. Displays a list of available Audit Trails (see Data Audit (Database Activity Monitoring) on
page 121)
openssl req -out CSR.csr -new -newkey rsa:1024 -nodes -keyout privateKey.key
• Remove the Passphrase from the private key with the following command:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:1024 -keyout privateKey.key -out
certificate.crt
Paste the private key and the certificate you got into the appfirewall.pem file located in the DataSunrise installation folder.
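On Linux, for example, this can be done by concatenating the two files; the installation path below is an assumption:
cat privateKey.key certificate.crt > /opt/datasunrise/appfirewall.pem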
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem
openssl x509 -req -in datasunrise_gui.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out
datasunrise_gui.crt -days 500 -sha256
d. Copy the key and the certificate generated by the CA to the appfirewall.pem file.
e. Restart the DataSunrise Core. Navigate to System Settings → Servers, click the required server, make the necessary changes, and in Core and Backend Process Manager → Actions click Restart Core. As a result, clients will have to install the CA certificate (rootCA.pem) for full SSL authentication (verify-full mode).
https://<myserver>.<mydomain>:11000
• For Kerberos-based authentication, an SPN should be created with the following command (it should be created again if the port number or hostname changes):
2. Navigate to System Settings → General of the Web Console, select Kerberos in the Type of Authentication...
drop-down list.
3. Create a DataSunrise user (refer to subs. Creating a DataSunrise User on page 390). Name it as follows: <short
domain name>\<user name>. In other words, name the user similarly to the AD user you are going to log into
the Web Console as. For example:
DB\Administrator
4. Restart the DATA_SUNRISE_SECURITY_SUITE system service for the changes to take effect.
5. Enter the Web Console using port 11000. To bypass the Kerberos-based authentication mechanisms and log in to
the Web Console using regular DataSunrise user credentials, use port 12000.
2. In the EC2 service, create a Target Group with the TLS protocol pointing to the DataSunrise machines
3. Create a new Listener for the DataSunrise Load Balancer for TLS port 443, specifying the certificate:
4. In the web browser, try to proceed to https://dsloadbalancer.yourdomain (the alias you have created in step 1).
You will be directed to the required port automatically without having to approve any self-signed certificates in the browser. This can be automated for your environment in the CloudFormation template; for testing purposes you can do it manually.
about:config
2. In the Search text field, enter the following and press Enter:
network.negotiate-auth.trusted-uris
3. Double-click the parameter's name or click Edit and enter the hostname or the domain of the server protected
by Kerberos. For example:
https://localhost:11000
--auth-server-whitelist
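For Chromium-based browsers, this switch is typically passed on the command line at launch; the domain below is a placeholder:
chrome.exe --auth-server-whitelist="*.mydomain.local"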
2. On the Create a New Application tab, select Web as Platform, and OpenID Connect as Sign on Method
https://<DataSunrise_IP_address>:11000/sso_endpoint
For example:
https://127.0.0.1:11000/sso_endpoint
https://localhost:11000/sso_endpoint
4. Navigate to Assign Applications and assign your application to your Okta user
5. Go to the following page: https://developer.okta.com/docs/api/resources/oidc#request-example-3. See Request
Example. Copy the first part of the query (for example):
https://datasunriseantony.okta.com/oauth2/${authServerId}/.well-known/openid-configuration
oauth2/${authServerId}
https://datasunriseantony.okta.com/.well-known/openid-configuration
authorization_endpoint
token_endpoint
jwks_uri
6. Go to Okta's Dashboard and navigate to Application → Your App → General → Client Credentials. Note the
Client ID and Client secret. You will need these parameters' values.
7. Enter the DataSunrise's Web Console. Note that you need to specify the full IP address instead of just a host
name. For example:
https://127.0.0.1:11000
2. On the Create a New Application tab, select Web as Platform, and SAML 2.0 as Sign on Method
3. On the next tab, set application name (any) and input the following URL into Single Sign on URL and Audience
URI (SP Entity ID):
https://<DataSunrise_IP_address>:11000/sso_endpoint
For example:
https://localhost:11000/sso_endpoint
4. Navigate to Assign Applications and assign your application to your Okta user. A new page will open. Note the
Identity Provider Single Sign-On URL. You will need this parameter's value.
5. Enter the DataSunrise's Web Console. Navigate to System Settings → SSO, click Add SSO Service.
6. Input a logical name (any), select SAML in the SSO Service Type. Input the "Identity Provider Single Sign-On
URL" (see step 4) into the Authorization Token Endpoint URL field. Save the profile.
7. Navigate to Access Control → Your user (admin for example) → Single Sign-On Connections. In the Login
With drop-down list, select the SSO Service created in the previous steps and click Add Connection.
8. You will be redirected to the logon screen of the Web Console. Input Okta credentials to be logged into the UI.
Note: the session timeout is 10 minutes by default. Thus, a Confirmation code is valid for 10 minutes for a certain IP address. Once this time has elapsed, you need to request a new Confirmation code. You can configure the 2FA session timeout by changing the TfaLinksValidationTimeout parameter's value (see Additional Parameters on page 337).
4.5 Monitoring
To view statistical information about DataSunrise's operation, use the Monitoring section.
• You can click the icon or the name of a graph to switch the selected graph off.
Below is the list of available characteristics. To display a graph, select a required parameter from the left panel and
specify the graph update speed and the server to view the information on.
5. Enter the required information into the Throughput From Client subsection.
Interface element Description
Host drop-down list Select a degree of conformity between an IP address specified in
the Host text field (see below) and the real IP addresses.
Host text field IP address client queries were sent from
Port text field Client application's port number
Login drop-down list Select a degree of conformity between a user name specified in
the Login text field (see below) and the real DB user name.
Login text field Database user name
Application drop-down list Select a degree of conformity between a client application name
specified in the Application text field (see below) and the real
client application name.
Application text field Client application name
6. Enter the required information into the Throughput to the Database subsection.
Interface element Description
Instance drop-down list Database instance
Interface drop-down list Database network interface
Proxy/Sniffer drop-down list DataSunrise proxy or sniffer used to process database traffic
Schema text field Database schema
7. When you're done with entering the required information, click Show Lines to create a diagram.
8. Click Clear Diagram to delete an existing diagram.
5 Database Configurations
This section of the User Guide contains database-related instructions such as:
• Creating a target database profile in the Web Console
• Creating target database users required for establishing a connection between DataSunrise and the target
database
• Configuring a proxy
• Encrypted traffic processing
• Configuring Two-factor authentication (2FA) in a target database
• Creating database user profiles
• SSL Key Groups
• Database Encryption functionality
5.1 Databases
5.1.1 Creating a Target Database Profile
To be able to work with a target database, DataSunrise needs to be aware of the database it should protect. Thus, a target database profile needs to be created in DataSunrise's Configuration. This is the first thing you should do before creating any Rules and establishing protection. To create a profile of a target database, do the following:
Note: if you need to create a target database profile for a MySQL database older than version 8, set TLSv1,TLSv1.1 as the value of the MySQLConnectorAllowedTLSVersions additional parameter (Additional Parameters on page 337).
UI element Description
Logical Name text field Profile's logical name (it is used by DataSunrise as a
reference to the database)
Database Type drop-down list Target database type
Hostname/IP text field Target database's address (hostname or IP address)
Port text field Database's port number
Authentication Method drop-down list User authentication type (regular login/password or Active
Directory user authentication)
Instance text field (for Oracle database only) Oracle service name or SID
Default Login text field Database user name DataSunrise should use to connect to
the target database
Save Password drop-down list Method of saving the target database's password:
• No
• Save in DataSunrise
• Retrieve from CyberArk. In this case you should specify
CyberArk's Safe, Folder and Object to store the password
in (fill in the corresponding fields)
• Retrieve from AWS Secrets Manager. In this case you
should specify AWS Secrets Manager ID
• Retrieve from Azure Key Vault. You should specify Secret
Name and Azure Key Vault name to use this feature
Password text field Database user password that DataSunrise should use to
connect to the database
Database text field (for all DB types except Oracle and MySQL) Name of the target DB. Required to get metadata from the database
Encryption drop-down list (for Oracle only) Encryption method:
• No: no encryption
• SSL
Instance Type drop-down list (for Oracle only) A method which DataSunrise should use to connect to the
database:
• SID: using SID
• Service Name: using an Oracle service name
• You can specify multiple Service Names when
configuring an Instance. This enables you to add Primary
and Standby RAC clusters with different Service Names
to a single DataSunrise Instance. To add several Service
Names, separate them with a semicolon:
report_svc;oltp_svc
Advanced Settings
Kerberos Service Name field Service name for Kerberos-based connections
Custom Connection String field Specify a custom connection string for database connection.
Dynamic SQL Processing check box Enable processing of Dynamic SQL (see Dynamic SQL
Processing on page 20)
Environment Name field A dedicated database or schema used for employing some
masking methods while doing Dynamic or Static masking
(see Data Masking on page 164)
Automatically Create Environment check box Create an Environment automatically (see the entry above)
IP Version drop-down list IP protocol version to use for connection:
• Auto: define automatically
• IPv4
• IPv6
Database keys drop-down list SSL Key Group that contains required keys for the database
(SSL Key Groups on page 106). Required for establishing
an SSL connection between the DataSunrise's proxy and the
target database.
4. Click Test to check the connection between the target database and DataSunrise.
5. Specify a method of interaction between DataSunrise and the target database in the Capture Mode subsection:
UI element Description
Server drop-down list Select DataSunrise server (DS Instance) to open a proxy
or a sniffer on
Action drop-down list Select an operating mode DataSunrise should employ
to process requests to the target database (refer to subs.
DataSunrise Operation Modes on page 18):
• Proxy: Proxy Mode on page 19
• Sniffer: Sniffer Mode on page 18
Network Adapter drop-down list (for Sniffer mode only) Network controller DataSunrise should use to connect to the target DB
IP Address drop-down list (for Proxy mode only) IP address of the proxy
Port text field (for Proxy mode only) Number of a network port DataSunrise should be
listening to
Accept Only SSL Connections check box (for Proxy mode only) Check to disable unencrypted connections
1. Click Databases in the main menu. A list of existing database profiles will be displayed.
2. Click profile name of a required database in the list.
3. Click Add Interface to add a new database interface and specify hostname, port number, database keys and IP
version (also SID or service name for Oracle Database). Click Save to apply new settings.
4. To add a new proxy for the target database, do the following:
a) Click Add Proxy.
b) Select a network interface for the database in the Interface drop-down list.
c) Select a DataSunrise server (node) to open a proxy on, in the Server drop-down list.
d) Select new database host in the Host drop-down list.
e) Specify proxy keys if necessary. The keys are needed to establish an SSL connection
f) Specify a port number for the DataSunrise proxy in the Port text field.
g) Check the Enabled check box to activate the proxy.
h) Click Save to apply new settings.
5. To add a new sniffer to the database, do the following:
a) Click Add Sniffer.
b) Select a required network interface in the Instance Interface drop-down list. An Interface has an IP address
and a port number on which a target server is listening. DataSunrise opens a proxy or a sniffer on an interface.
A database instance can include several interfaces.
c) Select a DataSunrise server (node) to open a sniffer on, in the Server drop-down list.
d) Select a required network device (network adapter) in the Device drop-down list.
e) Specify Sniffer keys if necessary. Sniffer key is a database server's private SSL key. It is used to decrypt the
traffic flowing between the client and the database.
f) Check the Enabled check box to activate a current sniffer.
g) Click Save to apply new settings.
Note: If a database server, a database client and the firewall are installed on the same Windows-powered local machine, the DataSunrise sniffer will not be able to capture network traffic.
- Wait until the operation is finished. You will have to restart your computer in order to implement the system
changes.
- Find the Telnet application using the Windows search tool on your computer and run it. Use the o command with
the required hostname and port number as shown below:
o 192.168.1.71 3306
If the Telnet client cannot connect to the host, the issue is caused by your computer or network, not by DataSunrise. If the specified hostnames and port numbers are correct, check your network firewall or other conflicting security software that may block the network traffic.
Having created a new user, grant the following privileges to the user:
•
Note: For Dynamic masking of VIEWs, to get VIEW-related metadata, grant the following privilege:
•
Note: For Dynamic masking of functions, grant the following privilege:
Note: if you use a container-based Oracle 12+ and your instance is in CDB (all containers), use the c## prefix
for your User_name. For example:
• For non-container databases or container databases where your Instance is located in a separate container,
not cdb$root:
Note: If you're using Oracle 12c in the non-Multitenant mode, please refer to the GRANT list for Oracle 11g
provided above.
Starting from this version of Oracle Database, it is possible to create a user that gets metadata either from a particular container or from all containers at once.
To create a user for all containers (global user), execute the following queries:
Warning: In most cases, it is preferable to use a common user for establishing connections with your target
databases because if you use a user created for one container, DataSunrise will not be able to work with other
containers.
• For Oracle 12.1+ users created for a particular container or without a container, additionally grant the
following privileges:
Oracle 12+:
GRANT SELECT_CATALOG_ROLE TO C##<User_name>;
GRANT SELECT ON "SYS"."DBA_VIEWS" TO C##<User_name>;
Oracle 11:
GRANT SELECT_CATALOG_ROLE TO <User_name>;
GRANT SELECT ON "SYS"."DBA_VIEWS" TO <User_name>;
• For Oracle Real Application Cluster (RAC), grant the following privilege:
•
Important: if you're not allowed to grant the CREATE TABLE privilege but want to use UseMetadataViewDDL,
you need to create a temporary table to be used for downloading metadata. To do this, execute the following
query:
• If you're going to use Oracle native encryption (the EnableOracleNativeEncryption additional parameter),
provide your user with the following grant:
Note that if you input User_name without quotes, your user will be saved as User_name in Oracle's table of users. If you need your User_name to be in lower case, use double quotes: "User_name".
If you need to use Delete Processed Logs, grant the following privilege:
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'AUD$',
p_grantee => '<User_name>',
p_privilege => 'DELETE');
end;
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'OBJ$',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'COL$',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'USER$',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'COLTYPE$',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'HIST_HEAD$',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'TAB$',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'COLLECTION$',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'DBA_EDITIONS',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'CDB_PROPERTIES',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;
4. For downloading VIEW metadata (useMetadataViewDdl), you need the following grants:
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'DBA_VIEWS',
p_grantee => '%s',
p_privilege => 'SELECT');
end;
7. If you're going to use the Hide Rows masking method and Data Preview, you need the following grant:
Note: The user should be able to get information about the database structure from the following system tables:
• pg_database
• pg_namespace
• pg_class
• pg_catalog
• pg_attribute
• pg_user
• pg_settings
• pg_db_role_setting
GRANT SELECT ON
pg_catalog.pg_database,
pg_catalog.pg_namespace,
pg_catalog.pg_class,
pg_catalog.pg_attribute,
pg_catalog.pg_user,
pg_catalog.pg_settings,
pg_catalog.pg_db_role_setting
TO <User_name>;
2. For Dynamic masking of VIEWs, functions, "Hide Rows", and Data Preview, grant your user the following
privileges:
• PostgreSQL 14+:
2. Grant all required privileges to the user. Connect to the SYSTEM database and execute the corresponding SQL
query:
• For Netezza 6.X:
GRANT LIST ON AGGREGATE, DATABASE, EXTERNAL TABLE, FUNCTION, GROUP, MANAGEMENT TABLE, MANAGEMENT
VIEW, PROCEDURE, SEQUENCE, SYNONYM, SYSTEM TABLE, SYSTEM VIEW, TABLE, USER, VIEW to <User_name>;
GRANT LIST ON AGGREGATE, DATABASE, EXTERNAL TABLE, FUNCTION, GROUP, MANAGEMENT TABLE, MANAGEMENT
VIEW, PROCEDURE, SCHEMA, SEQUENCE, SYNONYM, SYSTEM TABLE, SYSTEM VIEW, TABLE, USER, VIEW to
<User_name>;
Note: This method has a serious drawback. Granting the SELECT privilege to a user means that this user will be able to get not only the metadata but the database contents as well. If this is not acceptable, use the alternative method described below.
To use dynamic SQL processing, grant the following privileges:
GRANT SELECT, CREATE, DROP, INSERT, EXECUTE, CREATE ROUTINE, ALTER ROUTINE ON
`<Unique_Instance_ID>`.* TO <User_name>@'%';
You can find unique_instance_id in the Core logs when you try to mask dynamic SQL for the first time.
• Now you can create a new database profile in Configuration → Databases. Use the details of the MySQL database you installed the procedures to earlier and the credentials of the newly created user. Select Via Stored Procedures in the Metadata Retrieval Method drop-down list.
Note: For masking of procedures and functions and dynamic SQL processing, you need to grant your user the
following privilege (for both the manual and automatic cases):
or
2. Grant the required privileges to the new user by executing the following query:
GRANT SELECT
ON "<Target_database_name>"
TO "<User_name>";
2. Providing the required privileges includes two stages: first, a role should receive privileges to access the schema's objects, and then the role is assigned to a user. To grant the required privileges, execute the following query:
To be able to select Redshift External Tables and Schemas in DataSunrise, grant the following privilege:
4. Grant SELECTs:
Note: You can execute the following queries, which generate GRANT statements for all required schemas:
select 'GRANT USAGE ON SCHEMA ' || <Schema_name> || ' to <User_name>;' from v_catalog.schemata WHERE
is_system_schema = false;
select 'GRANT ALL ON ALL TABLES IN SCHEMA ' || <Schema_name> || ' to <User_name>;' from
v_catalog.schemata WHERE is_system_schema = false;
SELECT DISTINCT
'GRANT Select ON TABLE '
|| rtrim (tabschema)
|| '.'
|| rtrim (tabname)
|| ' TO USER <User_name>;'
FROM syscat.tables
WHERE
tabschema = '<Schema_name>' AND
tabschema not like 'SYS%' AND
tabschema not IN ('SQLJ', 'NULLID')
Note: it's desirable to assign a "root" role. For other roles, some of the functionality (such as importing database
users to DS) will be unavailable.
use admin
db.createRole(
{
role: "<Role_name>",
privileges: [
{ resource: {cluster: true}, actions: [ "inprog" ] },
],
roles: [
{ role: "read", db: "admin" }
]
}
)
2. Create a new database user and assign the role you created before to the user:
use admin
db.createUser(
{
user: "<User_name>",
pwd: "<Password>",
roles: [ { role: "<Role_name>", db: "admin" } ]
}
)
2. Grant your user the following privileges for each database you want to see when setting up DataSunrise rules:
3. Grant the following privileges to enable user fetching from your database:
dynamodb:ListTables
dynamodb:DescribeTable
You can see the created user in your OS console in the following way:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:ListAllMyBuckets",
"sts:DecodeAuthorizationMessage"
],
"Resource": [
"*"
]
}
]
}
2. Attach the Policy to your IAM Role (Policies → Policy actions → Attach) and attach the Role to your DataSunrise
EC2 machine (EC2 machine → Instance Settings → Attach/Replace IAM Role).
SELECT SERVERPROPERTY('InstanceName')
b) If DataSunrise is installed on a separate machine, type the IP address or Host name of the DataSunrise server
together with the MSSQL server name.
Example: 192.168.5.78\SQLEXPRESS or
JennyPC\SQLEXPRESS.
5. Input or choose a User name and password. Select Connect
Instead of SSMS, you can use Azure Data Studio with the same connection details as in Management Studio.
keytool -import -keystore <cacerts> -storepass <changeit> -file <CA.crt> -alias "redshift"
To delete a certificate:
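A matching keytool invocation would look roughly like this (a sketch; it reuses the alias and keystore placeholders from the import command above):
keytool -delete -alias "redshift" -keystore <cacerts> -storepass <changeit>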
Important: you need to run the JDBC client with a Java installation that is aware of the store you've imported the certificate into.
-Djavax.net.ssl.trustStore=<cacerts>
-Djavax.net.ssl.trustStorePassword=<changeit>
Example:
"%JAVA_HOME%\bin\java" -Djavax.net.ssl.trustStore="%JAVA_HOME%\jre\lib\security\cacerts" -
Djavax.net.ssl.trustStorePassword=changeit -jar C:\sqlworkbench\sqlworkbench.jar
ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory
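Appended to a JDBC URL, these parameters look roughly as follows. This is an illustrative PostgreSQL-driver URL; the proxy host, port and database name are placeholders:
jdbc:postgresql://<proxy_host>:<proxy_port>/<database>?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory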
3. As a result, MD5 or Password authentication method will be assigned for all database connections.
Important: on Linux, you need to log into your SSMS as sysadmin to execute the required queries
setspn -X
4. To check authorization, run SSMS on any other host of the domain, connect to sqlsrv1.HAG.LOCAL,1433 and
execute the following query:
KERBEROS
5. Having configured the authorization, configure DataSunrise (it should be installed on another host, for example,
test2008.HAG.LOCAL):
• Create an instance which can proxy to the sqlsrv1.HAG.LOCAL:1433 server, on port 1438 for example
• Create an MSSQLSvc/test2008.HAG.LOCAL:1438 SPN and assign it to the mssql-svc account
• Enable delegation for the mssql-svc account
6. If everything is configured correctly, when connecting to test2008.HAG.LOCAL:1438 with SSMS (from any other
host on the domain) and with enabled MSSQL tracing, there should be similar messages in the log:
The log contains the main information about the two connections: client → proxy and proxy → server. Both connections authorize the user using KERBEROS.
All errors associated with KERBEROS are displayed in the log too.
For example:
Here is the same connection but with delegation disabled: the first connection authorized the user using
KERBEROS because the MSSQLSvc/test2008.HAG.LOCAL:1438 SPN exists, and the second connection
authorized the user using NTLM because delegation is prohibited for the mssql-svc account.
If there is a problem with KERBEROS authorization on the client → proxy level, the log will contain something like this:
For example:
It's important to run setspn.exe as a domain administrator or as a domain user with the "Validated write to service principal name" privilege for the AD object for which the SPN needs to be configured.
To grant this privilege, go to Active Directory Users and Computers, select the server the database is installed
on, open its properties → Security tab, add the required user and check the Validated write to service principal
name check box. More information here: https://technet.microsoft.com/en-en/library/cc731241(v=ws.10).aspx
Use the following command to get a list of all registered SPNs:
setspn -L <Proxy_host>
setspn -D MSSQLSvc/<Proxy_host>:<Proxy_port>
To check the authorization scheme, connect to the server and execute the following query:
The query result will show the authorization scheme used by the database server (SQL, NTLM or Kerberos).
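One commonly used query for this check looks like the following (a sketch only; not necessarily the exact query referred to above):
SELECT auth_scheme FROM sys.dm_exec_connections WHERE session_id = @@SPID;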
For example:
It's important to run setspn.exe as a domain administrator or as a domain user with the "Validated write to service principal name" privilege for the AD object for which the SPN needs to be configured.
To grant this privilege, navigate to Active Directory Users and Computers, select the server the database is installed on, open its properties → Security tab, add the required user and check the Validated write to service principal name check box. More information here: https://technet.microsoft.com/en-en/library/cc731241(v=ws.10).aspx
Use the following command to get a list of all registered SPNs:
3. Enable user delegation. On the domain controller machine, navigate to Active Directory Users and Computers,
locate the account of the user created in step 1.
• In the Properties section, go to the Delegation tab and select Trust this computer for delegation to
specified services only and click Add
• In the Users and Computers window, specify the user account that was used to launch the database or the
name of the server the database is installed on.
• Optionally, you can use Check names to check if the specified user or computer exists, then select the
required service and click OK.
4. Create a keytab by executing the following command:
5. Use the keytab you got in step 4 to configure Kerberos on the DataSunrise machine (you need to move the keytab to the DataSunrise machine first). Refer to step 4 of the following guide for details: https://www.datasunrise.com/blog/professional-info/configuring-kerberos-authentication-protocol/. Edit the krb.conf file and input the required parameter values:
[libdefaults]
default_realm = <domain_realm>
default_keytab_name = FILE:<path_to gsssvc.keytab>
default_client_keytab_name = FILE:<path_to gsssvc.keytab>
clockskew = 300
ticket_lifetime = 1d
forwardable = true
proxiable = true
dns_lookup_realm = true
dns_lookup_kdc = true
default_ccache_name = FILE:<path_to krb5cc>
verify_ap_req_nofail = false
[realms]
<DOMAIN_REALM> = {
kdc = <fqdn>
admin_server = <fqdn>
default_domain = <fqdn>
}
[domain_realm]
.<fqdn> = <DOMAIN_REALM>
[appdefaults]
pam = {
ticket_lifetime = 1d
renew_lifetime = 1d
forwardable = true
proxiable = true
retain_after_close = false
minimum_uid = 1
debug = false
}
For example:
[libdefaults]
default_realm = DB.LOCAL
#default_keytab_name = FILE:D:\fw_home\default.keytab
#default_keytab_name = FILE:D:\fw_home\oraproxy_gssapi.db.local.keytab
default_keytab_name = FILE:W:\krb\gsssvc.keytab
default_client_keytab_name = FILE:W:\krb\gsssvc.keytab
clockskew = 300
ticket_lifetime = 1d
forwardable = true
proxiable = true
dns_lookup_realm = true
dns_lookup_kdc = true
#allow_weak_crypto = true
default_ccache_name = FILE:W:\krb\krb5cc
verify_ap_req_nofail = false
#default_tkt_enctypes = arcfour-hmac
#default_tgs_enctypes = arcfour-hmac
#permitted_enctypes = arcfour-hmac
#kdc_req_checksum_type = -138 #1
[realms]
DB.LOCAL = {
kdc = dsun.db.local
admin_server = dsun.db.local
default_domain = dsun.db.local
}
[domain_realm]
.db.local = DB.LOCAL
[appdefaults]
pam = {
ticket_lifetime = 1d
renew_lifetime = 1d
forwardable = true
proxiable = true
retain_after_close = false
minimum_uid = 1
debug = false
}
#[plugins]
#clpreauth = { disable = yes }
6. Run DataSunrise. It doesn't matter what user you use to do this because the domain user you created in step 1
will be used anyway.
7. A client can establish a connection using the FQDN you used in step 2. The database server's FQDN should be specified as Host in your target database Instance's settings (Configuration → Databases).
examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com
examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com.example.com
When establishing a connection, specify the proxy's alias instead of the cluster's host. For example:
examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com.example.com
All other parameters should be similar to the ones described in the following guide: https://docs.aws.amazon.com/en_us/redshift/latest/mgmt/generating-iam-credentials-configure-jdbc-odbc.html
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"es:ESHttpDelete",
"es:ESHttpGet",
"es:ESHttpHead",
"es:ESHttpPost",
"es:ESHttpPut"
],
"Resource": "arn:aws:es:us-west-1:987654321098:domain/test-domain/*"
},
{
"Effect": "Allow",
"Action": [
"es:CreateElasticsearchDomain",
"es:DeleteElasticsearchDomain",
"es:DescribeElasticsearchDomain",
"es:DescribeElasticsearchDomainConfig",
"es:DescribeElasticsearchDomains",
"es:UpdateElasticsearchDomainConfig"
],
"Resource": "arn:aws:es:us-west-1:987654321098:domain/test-domain"
},
{
"Effect": "Allow",
"Action": [
"es:AddTags",
"es:DeleteElasticsearchServiceRole",
"es:DescribeElasticsearchInstanceTypeLimits",
"es:DescribeReservedElasticsearchInstanceOfferings",
"es:DescribeReservedElasticsearchInstances",
"es:ListDomainNames",
"es:ListElasticsearchInstanceTypeDetails",
"es:ListElasticsearchInstanceTypes",
"es:ListElasticsearchVersions",
"es:ListTags",
"es:PurchaseReservedElasticsearchInstanceOffering",
"es:RemoveTags"
],
"Resource": "*"
}
]
}
• Create an EC2 machine and attach an IAM role with the minimum possible privileges to it. Navigate to the IAM service and create a new Policy. Navigate to the JSON tab and input the following:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"rds-db:connect"
],
"Resource": [
"arn:aws:rds-db:us-east-2:1234567890:dbuser:db-ABCDEFGHIJKL01234/db_user"
]
}
]
}
You need to replace the required parameters' values in the Resource subsection with your own values:
• Replace "us-east-2" with your Region value
• Replace "1234567890" with your account's ID
• Replace "db-ABCDEFGHIJKL01234" with your RDS database's Resource Id
• Replace "db_user" with the name of your database user you will use for IAM authentication. You should
use something like this:
arn:aws:rds-db:us-east-1:042001279082:dbuser:<resid>/mysql_test_user
• Create a new Role ( https://console.aws.amazon.com/iam/home#/roles ) and attach your Policy to this Role.
• Attach the Role to your EC2 machine. Start EC2 and install the AWS CLI. You can download it here: https://docs.aws.amazon.com/en_us/cli/latest/userguide/install-cliv1.html
• Install MySQL on your EC2 machine
• Check the installation: generate a token with the following command:
aws rds generate-db-auth-token --hostname <rds-host> --port 3306 --region us-east-1 --username
mysql_test_user
• Open the DataSunrise's Web Console and create a new MySQL Instance. Select IAM Role in the Authentication
Method.
• PostgreSQL. The actions to be done for RDS PostgreSQL are similar to the ones mentioned above, except for some Postgres-specific actions:
• Install psql on your EC2 machine and create a database user with the following command:
• To enable a Java application to use the JKS file as a trust store, add the following options:
•
Important: the path to your certificate storage should look like this:
• DBeaver:
• Locate file dbeaver.ini in your Dbeaver installation folder and open it with a text editor
• Add the following lines to the end of the file:
-Djavax.net.ssl.trustStore=<jks_file_path>
-Djavax.net.ssl.trustStorePassword=<jks_file_password>
For example:
-Djavax.net.ssl.trustStore=C:/Program Files/Java/jdk-11.0.1/lib/security/cacerts
-Djavax.net.ssl.trustStorePassword=changeit
• Configure a connection to your Athena from DataSunrise proxy by specifying proxy connection details
in DBeaver. At the Driver properties tab, set ProxyHost and ProxyPort according to your Athena proxy's
settings. Test the connection.
• You may encounter an "Unable to find valid certification path to requested target" error caused by DataSunrise's
self-signed certificate. You can use a certificate issued by a CA or do the following: run the command line as
administrator and navigate to your DBeaver installation folder. Add your Athena certificate (see step 5) to the
cacerts file located in <DBeaver installation folder>\jre\security\.
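For example, assuming the certificate was saved as dsca.crt (see step 5), a keytool invocation similar to the
following can be used (the alias and the default changeit store password are assumptions):
keytool -importcert -alias datasunrise-athena -file dsca.crt -keystore "<DBeaver installation folder>\jre\security\cacerts" -storepass changeit -noprompt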
To configure AWS CLI credentials, run, for example:
aws configure
• Enter DataSunrise's Web Console and create a new Athena Instance. When configuring a proxy, select
Create New in Proxy Keys (new SSL Key Group will be created automatically)
• Navigate to Configuration → SSL Key Groups and open your Group's settings
• Copy Certificate and paste it in a text file (dsca.crt for example)
• Now you can query your Athena through the proxy like this:
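One way to test the proxy is with the AWS CLI, pointing the Athena endpoint at the DataSunrise proxy and trusting
the exported certificate. The command below is a sketch: the proxy host and port, database name and S3 output
location are placeholders:
aws athena start-query-execution \
  --query-string "SELECT 1" \
  --query-execution-context Database=<your_database> \
  --result-configuration OutputLocation=s3://<your-athena-results-bucket>/ \
  --endpoint-url https://<datasunrise-proxy-host>:<proxy-port> \
  --ca-bundle dsca.crt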
1. Navigate to Configuration → Databases and create a Snowflake database Instance (refer to Creating a Target
Database Profile on page 58). Configure a proxy
2. Navigate to Configuration → SSL Key Groups and open the Proxy default SSL key group for CA certificate Group
3. Copy the certificate from the CA field and paste it to a .pem file (root_ca.pem for example)
4. Add the certificate to Trusted Root Certification Authorities Store
5. Use the following connection string to establish a connection through your DataSunrise proxy:
Replace SERVER_HOST with actual SQL Server host name and set required certificate lifetime.
3. Run the SQL Server Configuration Manager utility and select SQL Server Network Configuration → Protocols
for (DB instance_name).
4. Right-click on Protocols for... and select Properties.
5. On the Certificate tab, select the certificate generated in step 2 of this instruction.
6. On the Flags tab you may set the Force Encryption parameter to Yes to encrypt all TDS traffic. Or set it to No to
encrypt client authorization packet only.
7. Restart your SQL Server. To do this, select SQL Server Services → SQL Server (DB instance name) and click
Restart Service.
5.5.2.2 Generating an SSL Certificate with OpenSSL
To create an SSL certificate for SQL Server using OpenSSL, do the following:
1. Create a configuration file named config.cfg and replace SERVER_HOST with actual SQL Server's hostname:
[req]
distinguished_name = req_distinguished_name
prompt = no
[req_distinguished_name]
countryName = US
stateOrProvinceName = Washington
localityName = Seattle
organizationName = DataSunrise
organizationalUnitName = IT
commonName = SERVER_HOST
emailAddress = [email protected]
[ext]
extendedKeyUsage = 1.3.6.1.5.5.7.3.1
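2. Generate the key, the certificate request, the self-signed certificate and the PFX container. A command sequence
consistent with the description below could look like this (the file names key.pem, req, certificate.cer and
certificate.pfx are illustrative):
openssl genrsa -des3 -out key.pem 2048
openssl rsa -in key.pem -out key.pem
openssl req -new -key key.pem -out req -config config.cfg
openssl x509 -req -days 365 -in req -signkey key.pem -out certificate.cer -extensions ext -extfile config.cfg
openssl pkcs12 -export -in certificate.cer -inkey key.pem -out certificate.pfx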
When executing the first command, you will need to enter a password twice. The second command removes the
password (you will need to enter it once more). The third command creates a certificate request in the req file. The
fourth command generates a self-signed certificate in the certificate.cer file. The last command packs the key and
the certificate into the certificate.pfx file, protecting it with a password (enter the password twice). Then import
certificate.pfx via the MMC console into the Personal container.
3. Install the certificate for your proxy (refer to subs. Installing an SSL Certificate for an MS SQL Server Proxy on page
95).
mkdir db
mkdir db\new
mkdir db\private
echo. 2>db\index
echo 01> ./db/serial
echo unique_subject = no> ./db/index.attr
[req]
distinguished_name = req_distinguished_name
prompt = no
RANDFILE = ./db/private/.rand
[req_distinguished_name]
countryName = US
stateOrProvinceName = Washington
localityName = Seattle
organizationName = DataSunrise
organizationalUnitName = IT
commonName = DataSunrise
emailAddress = [email protected]
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
RANDFILE = ./db/private/.rand
[req_distinguished_name]
countryName = US
stateOrProvinceName = Washington
localityName = Seattle
organizationName = ACME
organizationalUnitName = IT
emailAddress = [email protected]
commonName = 127.0.0.1
[v3_req]
subjectAltName = @alt_names
[alt_names]
DNS.1=127.0.0.1
DNS.2=192.168.3.2
DNS.3=10.0.8.22
DNS.4=FLAK-PC
[ext]
extendedKeyUsage = 1.3.6.1.5.5.7.3.1
[ca]
default_ca = CA_default
[CA_default]
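# The [CA_default] values below are an assumed example consistent with the db/
# directory layout created above; adjust the paths and defaults to your setup.
dir              = ./db
new_certs_dir    = $dir/new
database         = $dir/index
serial           = $dir/serial
certificate      = $dir/ca.cer
private_key      = $dir/private/ca.pem
default_days     = 365
default_md       = sha256
policy           = policy_any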
[policy_any]
countryName = supplied
stateOrProvinceName = optional
organizationName = optional
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
Important: subjectAltName includes all the hosts that should be covered by the certificate including the one in
commonName.
@ECHO OFF
SET RANDFILE=./db/private/.rand
openssl genrsa -des3 -out ./db/private/ca.pem 2048
openssl rsa -in ./db/private/ca.pem -out ./db/private/ca.pem
openssl req -new -x509 -days 3650 -key ./db/private/ca.pem -out ./db/ca.cer -config ca
openssl x509 -noout -text -in ./db/ca.cer
@ECHO OFF
SET friendlyName=CA-signed certificate for DSUNRISE
SET RANDFILE=./db/private/.rand
SET /P serial=<./db/serial
openssl genrsa -des3 -out ./db/private/%serial%.pem 2048
openssl rsa -in ./db/private/%serial%.pem -out ./db/private/%serial%.pem
openssl req -new -key ./db/private/%serial%.pem -nodes -config cfg -out req
openssl ca -config cfg -extensions ext -infiles req
openssl pkcs12 -export -in ./db/new/%serial%.pem -inkey ./db/private/%serial%.pem -name
"%friendlyName%" -out ./db/private/%serial%.pfx
MOVE .\db\new\%serial%.pem .\db\new\%serial%.cer
6. Generated certificates will be saved in the db\new folder. Generated keys and .pfx files (packed keys and
certificates) will be saved in the db\private folder.
The CN (Common Name) used in the certificate must be resolvable on the client/proxy side, and the client or proxy
must use this name to connect to the server/proxy. Otherwise the certificate check will fail even if the client/proxy
recognizes the root certificate as trusted. You can achieve this by adding the CN to the hosts file or by adding a
corresponding entry to the DNS (if administering AD).
5. Add the key to DataSunrise: create a new group in Configuration → SSL Key Groups and insert the key into the
Private Key text field.
6. Link the created group to the Instance: navigate to Configuration → Databases → your database profile. Then
in the Capture Mode subsection, open your proxy's settings and select your SSL Key Group in the Proxy Keys
drop-down list.
Important: when working in High availability configuration, you need to specify your Load balancer's host name as
the LoadBalancerHost additional parameter's value (see Additional Parameters on page 337) to be able to get valid
authentication links in emails sent by DataSunrise. If this is not done, an internal IP will be used that can't be verified
on an external device.
1. To enable sending emails with confirmation links, configure an SMTP server at Configuration → Subscribers →
Add Server (refer to subs. Configuring an SMTP Server on page 212 for details) and enable the Send security
emails from this server option for at least one server.
2. Open your target database profile (Configuration → Databases) and in the Advanced Settings, check the
Accept Only Two-factor Authentication Users check box if you need to block connection attempts of
unauthenticated users.
3. Navigate to Configuration → Database Users and select the user you will use to log in to your target database.
Open this user's profile and select E-mail in the Type of Two-Factor Authentication drop-down list.
4. Now you can connect to your target database via a client application. You will receive an email with a special link
which you should open to authenticate to the database. Note that the connection time is unlimited (there is no
timeout). After the connection is terminated, you can still connect to the database without using a confirmation
link during the next 10 minutes, but only with the same user name and IP address you used for the previous
connection. If more than 10 minutes have passed since the connection was terminated, you need to authenticate
via email again.
After the connection is terminated, you can still connect to the database without using a secret code during the
next 10 minutes.
If you encounter an "Unrecognizable parameter" error when executing the query, your client probably doesn't
allow SET commands. In this case, use the SELECT command instead. For example:
Field Description
Name Logical name of the connection (any name)
Host IP address or name of the DataSunrise proxy's host
Port Port number of the DataSunrise's proxy
2. In the Connect Object Explorer window, input DataSunrise proxy details. Use the IP address and the port
number of the proxy you've configured in the corresponding database profile.
Field Description
Server name IP address and port of DataSunrise proxy, separated by a
comma
Authentication Select SQL Server Authentication, not Windows
authentication
Login Database user name required for database connection
Password Password required for database connection
3. You can also use tcp: prefix before the IP address, to enable TCP/IP for the connection.
Field Description
Connection Name Logical name of the connection (any name)
Connection Method Use Standard method (TCP/IP)
Hostname Specify your DataSunrise proxy's IP address
Port Port number of the DataSunrise's proxy
Username Name of a database user to use for authentication
3. Click Test Connection to check if you've configured everything properly and click OK. A new connection will be
created.
6 Database Users
Rules' settings enable DataSunrise to filter traffic and process queries from certain database users (for
example, you can block a certain user's queries). To use this feature, you need to create database user profiles,
because DataSunrise must be aware of these users.
The Database Users subsection enables you to perform the following actions:
• Creating and editing of target DB user profiles (manually or using a .CSV file).
• Creating and editing of target DB user groups.
It is also possible to create DB user profiles automatically using DataSunrise's self-learning functionality (refer to
Learning Mode Overview).
user;user_name1
user;user_name2
user;user_name3
You can also add the DB type and DB Instance parameters by using lines in the following format:
user;<user name>;<DB type>;<instance name>
For example:
user;myuser;postgresql;pg_local
You can add the following database types (should be written in lower case):
• any
• mssql for MS SQL Server
• oracle for Oracle Database
• db2 for IBM DB2. Note that for DB2 LUW you should use db2. For DB2 z/OS users you should use db2zos
• postgresql for PostgreSQL
• netezza for IBM Netezza
• teradata for Teradata
• greenplum for Greenplum
• redshift for Amazon Redshift
• aurora for Amazon Aurora MySQL
• mariadb for MariaDB
• hive for Apache HIVE
• sap hana for SAP Hana
• vertica for Vertica
• mongodb for MongoDB
• aurorapgsql for Aurora PostgreSQL
• aurorapostgres for Aurora PostgreSQL
• dynamodb for DynamoDB
• elasticsearch for Elasticsearch
• cassandra for Cassandra
• impala for Impala
• snowflake for Snowflake
• informix for IBM Informix
• athena for Amazon Athena
• s3 for Amazon S3
• sybase for Sybase
2. If you want to specify <instance name>, ensure that the DB Instance's entry with the same name already exists
in the list of Instances (Configuration → Databases). When specifying an Instance, you don't need to change
anything - just copy and paste your Instance name. If Instance name includes spaces or non-standard characters
(for example: DB2 Z/[email protected]:50000), you should just paste it to your CSV or TXT file as is.
3. Click Actions → Import from file. The Import User page will open.
4. At the Import User page, drag-and-drop your file or click the corresponding link for the file browser and select
your file.
5. Click Attach to save new settings.
Note: If you try to import users that already exist in the list of DataSunrise's DB Users (Configuration →
Database Users), these users will be skipped.
Please note that uploading of a user list is a two-stage process. First, when you select a file, it is uploaded to the
DataSunrise server. And when you click Attach, the contents of the file are processed by DataSunrise.
SQLNET.ENCRYPTION_CLIENT = REQUIRED
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES128)
Note: you can also use the AES256 encryption method here as a more secure method. If you omit the second
line, any available encryption method will be used.
8 Encryptions
The Encryption feature enables data-at-rest encryption for your target database. Encryption greatly reduces the
risk of intentional data leakage because it makes the data useless to attackers who manage to access the database.
At the moment, DataSunrise supports data-at-rest encryption for PostgreSQL only. DataSunrise uses pgcrypto
module for PostgreSQL databases and AES-128 algorithm for encryption.
DataSunrise utilizes Transparent Data Encryption (TDE) technology. In other words, it encrypts data in the target
database and decrypts it only when a database user connects to the database through DataSunrise. Encryption
and decryption are performed transparently for client applications which means that you don’t need to modify any
clients. To see the actual database contents, you need to access your database through DataSunrise proxy.
As a result of the encryption process, a copy of a source table with encrypted data included is created. Then the
source table is renamed (as “table_original” for example) and replaced with the encrypted copy named as the source
table.
An encryption key is stored in DataSunrise (or, optionally, in CyberArk or AWS Key Management Service). For each
new connection, the encryption key is passed to the database server and stored in a temporary table. It is safe to
store the encryption key in a temporary table because the temporary table is created during the user's connection
to the DataSunrise backend and no one except that user can access it. We developed a dedicated secure algorithm
for delivering encryption keys to the server, which prevents disclosure of the key even if an attacker intercepts the
packets used for key exchange.
Here is a description of the key exchange algorithm (all steps are performed during the client connection):
• When establishing a connection between a client and a database, DataSunrise generates a pair of RSA keys, a
public key and a private key, and passes the public key to the database server.
• The database server generates a session key, encrypts it using the public key and passes the encrypted session
key back to DataSunrise.
• DataSunrise receives the encrypted session key, decrypts it using the private key, encrypts the data key with the
session key and passes the encrypted data key to the database server.
• The database server receives the encrypted data key, decrypts it with its own session key and saves it in a
temporary table ("ds_local"). After that, the key exchange is complete and the data key can be used for
encryption and decryption of data in the database. Currently, it is possible to use multiple encryption keys for
one database.
Once the key exchange is completed, encryption/decryption of data on the database server becomes possible.
DataSunrise enables you to turn on encryption for separate columns or for complete tables. When encryption is
enabled, the data is encrypted with a data key, and you can access this data only through DataSunrise.
DataSunrise can also create indexes for encrypted columns without decreasing query execution speed while
encryption is enabled.
DataSunrise offers three ways of processing indexes:
• Leaving index columns unencrypted. Query execution speed stays constant in this case, but index column data
remains readable by everyone.
• Encrypting index columns but building the index on unencrypted data. The unencrypted index data cannot be
accessed by standard means, and for cloud databases it cannot be accessed at all. This is the default behavior.
• No indexes. If building indexes on unencrypted data is not acceptable, indexes are not used at all. This is the
most secure option but also the slowest one.
DataSunrise encryption and decryption is transparent for the user. The user performs queries to tables and
DataSunrise modifies the queries to encrypt or decrypt the data if needed.
Warning: if you're using Encryptions on a PostgreSQL database, make sure that nobody changes the encrypted
tables' contents directly (bypassing DataSunrise's proxy), because this would make the tables impossible to decrypt.
6. Note: an encryption key can consist of up to 16 pairs of hexadecimal values (digits 0-9 and characters A-F).
For example, you can use something like "0F9A4E6F" as an encryption key. Note that you can use a unique
encryption key for each column.
9 DataSunrise Rules
DataSunrise's functionality is based on a system of policies (Rules) used to control data auditing, database firewall
and data masking capabilities: Data Audit Rules, Data Security Rules and Data Masking Rules respectively.
DataSunrise's self-learning system (the Learning Mode) is controlled with its own set of Rules — Learning Rules.
In fact, a Rule is a set of settings that defines when the corresponding module should be activated and how it should act.
Depending on certain Rule's settings, DataSunrise can activate its functionality when the following events occur:
• A user query to any target DB or to a target DB of certain type was intercepted;
• A user query addressing certain target DB's elements (schemas, tables, columns) was intercepted;
• A query came from a certain IP address, network interface or socket;
• A query was issued by certain target DB users or client applications;
• A query matches a certain SQL pattern;
• A query contains some signs of SQL injection attack.
Each Rule's settings entail a certain action DataSunrise should execute when the Rule is activated ("triggered").
Activation and deactivation of Rules can be done in their settings or via the context menu. Right-click Rule's name in
the Rules list and select Disable to deactivate a Rule or Enable to activate.
You can configure a Rule to be activated automatically at certain time and weekday (refer to Schedules on
page 219). You can also notify concerned parties (Subscribers) about activation of a Rule via Email or instant
messengers (refer to Subscriber Settings on page 212).
Note: you can also apply other actions to your Rules by selecting them on a list and expanding the Actions menu.
Thus, you can arrange multiple Rules into groups of Rules (Group/Ungroup), create a duplicate of a Rule (Duplicate)
and add standalone Rules to existing Groups (Merge).
Condition Description
Application Client application name (Creating a Client Application Profile on page 211)
Application RegEx Client application RegEx
Application User RegEx Client application User RegEx
Application User Client application User (Capturing of Application Users on page 406)
Application User Group Client application User Group
DB User Database User (Creating a Target DB User Profile Manually)
DB User Group Database User Group (Creating a User Group)
DB User RegEx Database User RegEx
Host Host: IP address or host name (Creating a Host Profile on page 209)
Host Group Host Group (Creating a Group of Hosts on page 210)
OS User Operating System User
OS User Group Operating System User Group
OS User RegEx Operating System User RegEx
Proxy DataSunrise proxy
Sniffer DataSunrise sniffer
Interface Network interface
Session Parameters The following parameter is applicable to Oracle only:
• AUTH_TYPE: user authentication type. It supports the following values:
• PASSWORD: login/password authentication
• KERBEROS: KERBEROS and KERBEROS5PRE-based authentication
• NTS: NTS-based authentication
• BEQ: BEQ-based authentication
• RADIUS: RADIUS-based authentication
Note: DataSunrise can process queries directed to certain functions, but there are functions that belong to the SQL
language, not to the database itself, for example, current_catalog for PostgreSQL or current_user for MySQL. Thus, if
you cannot find the function you need to process in the UI's function browser, this function belongs to the SQL
language, and you cannot process such functions by specifying them in Process SQL Statements to Functions.
Choose Object Groups (for Process Tables in → Object Groups only): groups of objects containing tables to be
processed by the Rule. Click "Plus" (+) to add a new group to the list.
Skip Tables in drop-down list (for Process Tables in → Object Groups only): skip tables when processing.
Skip Tables in drop-down list: source of the tables the Rule should ignore:
• Current Rule
• Object Group: a group of objects
Skip Query to Databases, Schemas, Tables, Columns (for Skip Tables in → Current Rule only): ignore selected
databases, schemas, tables and columns during monitoring.
• Click Select to select required objects manually (refer
to Adding Objects to an Object Group Manually on
page 204).
• Click ADD REGEXP to select required objects using
regular expressions (refer to Adding Objects to an
Object Group Using Regular Expressions on page
205).
Note: Refer to subs. Query Groups on page 207 for details on creating SQL statements groups.
Let's take a look at the example pictured above. The example's settings mean that DataSunrise will Block incoming
queries if the number of Failed Sessions (unsuccessful login attempts) exceeds 10 per minute, in other words, if a
user is trying to guess or brute-force a database password. As a result, DataSunrise will Block the database user
trying to access the target database Permanently, by User name and IP address.
A Keyword in a Comment Penalty: number of penalties for comments containing one or multiple SQL keywords. For
example:
SELECT * FROM Users WHERE username='Administrator' -- ' AND pass='123'
Double Query Penalty: number of penalties for multiple SQL statements separated with semicolons. For example:
SELECT * FROM Users; DROP TABLE Transactions
Constant Expression Penalty: number of penalties for an expression which is always true. For example:
SELECT * FROM events WHERE rowid = '4' OR '1'='1';
Suspicious Conversion (Blind Error attack): a specific Blind SQL injection attack: the attacker tries to execute SQL
statements using standard database functions like CAST or CONVERT to analyze error messages and statement
result sets of the database instance.
Suspicious Function Call: a specific type of Blind SQL injection attack: the attacker tries to use database-specific
functions like SLEEP or PG_SLEEP in SQL statements to analyze errors and statement result sets.
Concatenation of Single Characters (for many types of attacks): a specific Blind SQL injection attack: the attacker
sends SQL statements with character concatenations using CHR or CHAR built-in functions applicable to the
specific database type.
Suspicious Condition: a specific Blind SQL injection attack: the attacker uses UNICODE, ORD, ASCII or a similar
function in conjunction with a conversion from character to numeric to analyze error codes and error messages.
Figure 32: Example: the Rule will be triggered only if there is a database user that performed more than 100
operations per hour
• Time Span: the time period the counter applies to. Once a Rule is triggered, the counter is reset and the process
starts over.
• Set Threshold on: threshold variable to set the counter for:
• Operations: all the operations specified in the Rule
• Rows: returned database rows
• Threshold Value:
• Set Threshold On = Operations: the number of operations to be executed before the Rule will be triggered.
The operations include all queries specified in the Rule's settings only, all other operations will be skipped
• Set Threshold On = Rows: the number of database rows to be returned before the Rule will be triggered
• Calculation per:
• Rule: only settings of the current Rule will be considered when setting up the threshold
• Database User: only database user queries will be considered when setting up the threshold
• OS User: only operating system user queries will be considered when setting up the threshold
• Application User: only client application user queries will be considered when setting up the threshold
Note: if the User value (either Database User or OS User or Application User) is selected, operations and rows of
each user will be calculated separately. If the Rule value is selected, the total number of operations and rows will
be calculated.
.*Kathy.*
means that the Rule will be triggered only if the response contains "Kathy". Another regular expression, shown
below, can be used to trigger the Rule if email addresses are contained in the columns you're searching across:
[a-z0-9_-]*@[a-z]*.[a-z]*
Parameter Description
Application Client application used to send the query
DB user Database user the query is sent by
Query type Query type (Filter Statements → Query Types)
Query match Add the query to an existing query group
Objects involved in the query Database objects addressed by the query
4. You will be redirected to a new Rule page. All parameters selected at the previous step will be added to the Rule.
4. Input the required information to the Filter Sessions subsection (Filter Sessions on page 111).
5. Select the required traffic filter (Filter Statements on page 114)
6. Set Response-time filter if necessary (Response-Time Filter on page 120)
7. Configure Data Filter if necessary (see Data Filter on page 121).
8. Check the Enable check box of the Rule Triggering Threshold section to set threshold parameters (Rule
Triggering Threshold on page 120).
9. Input Tags if necessary (Tags on page 199).
10. Click Save to save the Rule's settings.
Note that all the following actions should be performed as the admin user.
1. For the Oracle package option (Oracle 12+), before creating any objects you should select your PDB container and
create the objects in this container:
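For example (the container name is illustrative):
ALTER SESSION SET CONTAINER = <your_pdb_name>;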
• Apply the parameter group to the instance. Restart the instance for the changes to take effect
• Prepare an IAM Role for DataSunrise EC2 instance used for passive monitoring of the RDS instance with the
audit_trail=XML,EXTENDED mode configured. The IAM Role should include the following IAM policy:
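A minimal policy of this kind, using the same actions as the RDS trailing policies shown later in this chapter, might
look as follows (the values in <> are placeholders to be substituted as described below):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"rds:DescribeDBLogFiles",
"rds:DownloadDBLogFilePortion",
"rds:DownloadCompleteDBLogFile"
],
"Resource": "arn:aws:rds:<region>:<account_id>:db:<instance_identifier>"
}
]
}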
Substitute the values wrapped in <> symbols with the corresponding region, AWS account ID and instance
identifier
• Apply the resulting IAM Role with an IAM Policy to the EC2 Instance
3. Prepare Oracle Database Audit Trails
• Connect to the Oracle DB as the instance master user (recommended), because you will have to provide access
to the SYS catalog database objects
• You can also test the init parameters with a SQL client (e.g. SQL*PLUS) by executing the following command:
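For example, in SQL*Plus a quick check of the audit parameters could be:
SHOW PARAMETER audit_trail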
• Use the AUDIT statement to configure the auditing strategy for the native audit mechanism:
• Note that auditing logon/logoff (SESSION item) is compulsory
• You can configure object-based, user-based, or statement-based-for-all-users AUDIT settings. It is
recommended to configure AUDIT for particular statements and DB users in order not to cause too much
overhead for your DBMS engine
• Example configuration for AUDIT is shown below:
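A typical configuration, with <User_name> as a placeholder for your database user, is:
AUDIT SESSION, SELECT TABLE, INSERT TABLE, DELETE TABLE, EXECUTE PROCEDURE BY <User_name> BY ACCESS;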
• To disable configured audit, you can use the NOAUDIT command (example below):
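For instance, mirroring the AUDIT example above (<User_name> is a placeholder):
NOAUDIT SESSION, SELECT TABLE, INSERT TABLE, DELETE TABLE, EXECUTE PROCEDURE BY <User_name>;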
You can find more information on the AUDIT command structure in the official Oracle documentation:
https://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_4007.htm#SQLRF01107
• You can check existing audit policies by executing the following command:
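For example, the same query used elsewhere in this chapter:
select * from DBA_STMT_AUDIT_OPTS where user_name is not null UNION ALL select * from dba_priv_audit_opts where user_name is not null;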
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'AUD$',
p_grantee => 'User_name',
p_privilege => 'SELECT');
end;
• If you're going to use the Delete Processed Logs feature, you need the following grants:
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'AUD$',
p_grantee => 'User_name',
p_privilege => 'DELETE');
end;
5. Prepare a DataSunrise user for getting instance metadata and reading native audit logs
• Create an Oracle database user for DataSunrise
• Due to platform specifics, AWS RDS Oracle instances are managed a bit differently than a self-maintained
(e.g. EC2-hosted) database. The most notable difference is that Oracle RDS provides its own package of
procedures and functions for accomplishing regular DBA tasks like database permissions provisioning.
• By default, Oracle stores database user names in UPPER case. This means that in every
rdsadmin.rdsadmin_util.grant_sys_object procedure call you should pass your user name in UPPER CASE as
well, except when you enclose the user name in double quotes in the CREATE USER command, which makes
the DBMS create a case-sensitive user name. Example:
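A minimal illustration (the user name and password below are placeholders):
CREATE USER datasunrise_user IDENTIFIED BY <password>;   -- stored as DATASUNRISE_USER (upper case)
CREATE USER "datasunrise_user" IDENTIFIED BY <password>; -- stored case-sensitively as "datasunrise_user"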
Below is the adapted version of the permission required for DataSunrise Oracle v12 database user:
• To work with the audit_trail=DB,EXTENDED mode, you should also provide the following permissions through
the rdsadmin_util.grant_sys_object procedure so that DataSunrise can access the session statistics (to track
down sessions) and the native audit storage table:
(Optional, recommended) To allow DataSunrise to delete processed events from the native audit storage table:
• If you need to audit the SYS/SYSTEM operations or SYSASM, SYSBACKUP, SYSDBA, SYSSDG, SYSKM or
SYSOPER roles activity, issue:
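-- Assumption: auditing of SYS operations is normally enabled by setting the
-- audit_sys_operations parameter before the restart shown below:
ALTER SYSTEM SET audit_sys_operations = TRUE SCOPE = SPFILE;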
SHUTDOWN IMMEDIATE;
STARTUP;
• Use the AUDIT statement to enable auditing of SESSION (compulsory) and the statements of interest. Example
for the SYSTEM user:
AUDIT SESSION, SELECT TABLE, INSERT TABLE, DELETE TABLE, EXECUTE PROCEDURE BY SYSTEM BY ACCESS;
• You can check existing audit policies using the following statement:
select * from DBA_STMT_AUDIT_OPTS where user_name is not null UNION ALL select * from
dba_priv_audit_opts where user_name is not null;
You can find more information on the AUDIT statement structure in the official Oracle documentation: https://
docs.oracle.com/cd/E11882_01/server.112/e41084/statements_4007.htm#SQLRF01107
2. Add new Oracle instance to DataSunrise using the Audit Trails option
• For an existing instance:
• Navigate to Configuration → Databases and open the required database instance details page where
audit_trail was configured
• At the bottom section of the page (Proxies and Sniffers), click Trail DB Audit Logs
• Select the database interface and DataSunrise server the sync with audit_trail was established on. Save the
changes
• You can enable the Delete processed logs option to save space on your Oracle server
3. Grant the following privileges to your Oracle user:
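At a minimum, the user typically needs to connect to the database and read the native audit table. The statements
below are an illustrative sketch; <User_name> is a placeholder, and the DELETE grant is needed only if you use the
Delete Processed Logs option:
GRANT CREATE SESSION TO <User_name>;
GRANT SELECT ON SYS.AUD$ TO <User_name>;
GRANT DELETE ON SYS.AUD$ TO <User_name>;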
4. First, you need to enable LOGON and LOGOFF auditing in your Oracle database. To do it, execute the following
command:
AUDIT SESSION, SELECT TABLE, INSERT TABLE, DELETE TABLE, EXECUTE PROCEDURE BY <User_name> BY ACCESS
To disable auditing, use the NOAUDIT statement.
For example:
NOAUDIT SESSION, SELECT TABLE, INSERT TABLE, DELETE TABLE, EXECUTE PROCEDURE BY <User_name>
5. Connect to your DataSunrise's Web Console. Create a Database profile in the Configurations → Databases or
edit an existing profile. In the Capture Mode section, click Trail DB Audit Logs
• Fill out all the required fields:
Interface element Description
Server drop-down list DataSunrise server
Format Type drop-down list Format of the file to store audit data in (use Database for local
databases)
Delete processed logs check box Note the Delete processed logs check box. It enables you to delete
auditing results stored in your Oracle's SYS.AUD$ table. The data is
deleted from the table as soon as it was processed by DataSunrise.
Note that if DataSunrise has been inactive for a long period of time
this operation can take a while.
Audit System Events check box To audit users with SYSASM, SYSBACKUP, SYSDBA, SYSDG, SYSKM
or SYSOPER privileges, enable Audit System Events check box. For
this, additionally configure the trails (AWS, SMB, Local, Package) in the
same way as audit_trail XML mode.
6. Navigate to Audit and create an audit Rule for your Database instance. For auditing results, navigate to Audit →
Transactional Trails.
1. You need an operational SMB server, enabled auditing to XML files and a shared folder at your SMB server to
store the logs.
2. Provide your database user with the following grants:
3. Run DataSunrise's Web Console and create an Oracle database profile in Configuration → Databases.
4. Configure Trail DB Audit Logs:
Setting Required value
Type XML
Connection SMB
Hostname Host name or IP address of the machine your shared folder is located
at
Login SMB server login
Password SMB server password
Path Path to the shared folder located at your SMB server
5. Navigate to Audit and create an audit Rule for your Database instance. For auditing results, navigate to Audit →
Transactional Trails.
Common settings. Execute these commands regardless of Java or PL/SQL option chosen:
Important: If you are using Oracle Database 11g, you need to exclude the following arguments: onlyfnm =>
TRUE, and normfnm => TRUE.
In case you're using Java and the TrailDBLogDownloaderJavaVersion parameter is enabled (it's disabled by
default):
3. Open DataSunrise's Web Console and create an Oracle database profile in Configuration → Databases
4. Configure Trail DB Audit Logs:
5. Navigate to Audit and create an audit Rule for your Oracle Database instance. For auditing results, navigate to
Audit → Transactional Trails.
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'UNIFIED_AUDIT_TRAIL',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'DBMS_AUDIT_MGMT',
p_grantee => '<User_name>',
p_privilege => 'EXECUTE');
end;
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'DBA_TAB_PRIVS',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'V$PARAMETER',
p_grantee => '<User_name>',
p_privilege => 'SELECT');
end;
or
You can enable the Delete Processed Logs option to save space on your Oracle server by emptying the audit
storage table (see step 5). In this case, provide your user with the following grant:
Having created the mandatory policies, you can specify the objects you want to audit. You can find the full list
in the Oracle official documentation: https://docs.oracle.com/database/121/SQLRF/statements_5001.htm, https://
docs.oracle.com/database/121/DBSEG/audit_config.htm#GUID-526A09B1-0782-47BA-BDF3-17E61E546174
For example:
• Apply the parameter group to your RDS instance. Restart the instance for the changes to take effect
5. Add a new Oracle instance to DataSunrise using the Unified auditing option
• Open DataSunrise's Web Console and navigate to Configuration → Databases. Open the required database
Instance details page where audit_trail was configured or create a new Instance.
• At the Capture Mode section of the page, in the Mode drop-down list, select Trailing the DB Audit Logs
• Select the database interface and DataSunrise server the sync with audit_trail will be established on. In the
Format Type drop-down list, select Unified auditing. Save the changes.
• You can enable the Delete Processed Logs option to save space on your Oracle server by emptying the
audit storage table
• To audit users with SYSASM, SYSBACKUP, SYSDBA, SYSDG, SYSKM or SYSOPER privileges, enable Audit
System Events check box. For this, additionally configure the trails (AWS, SMB, Local, Package) in the same
way as audit_trail XML mode
• Fill out the remaining fields according to your instance details (interface, server, periodicity of requesting
data)
• Configure an Audit Rule to capture data from Oracle using DataSunrise's Audit Trail mode. You can use an
empty Object Group or Query Types Rule to test Audit Trail.
• To ensure that auditing works, check the data in the UNIFIED_AUDIT_TRAIL table:
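For example, a simple check (the column selection is illustrative):
SELECT event_timestamp, dbusername, action_name, sql_text FROM unified_audit_trail ORDER BY event_timestamp DESC;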
1. You need to prepare your RDS PostgreSQL database first. Do the following:
• Create an RDS Parameter Group and set the following parameter values:
• Assign the Parameter group to your RDS Postgres database instance (RDS Instance → Configuration → Modify
→ database's Additional Configuration)
• Connect to your RDS Postgres database using some client and execute the following query to create a
database role named rds_pgaudit:
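The role name must match the pgaudit.role parameter value; for RDS this is typically:
CREATE ROLE rds_pgaudit;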
show shared_preload_libraries;
shared_preload_libraries
--------------------------
rdsutils,pg_stat_statements,pgaudit
• Ensure that the pgaudit.role is set to rds_pgaudit by executing the following command:
SHOW pgaudit.role;
pgaudit.role
------------------
rds_pgaudit
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"rds:DownloadDBLogFilePortion",
"rds:DescribeDBLogFiles",
"rds:DownloadCompleteDBLogFile",
"rds:DescribeDbClusters"
],
"Resource": "arn:aws:rds:us-east-2:012345678901:cluster:test-au-pg"
"Resource": "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-pg-node-1"
"Resource": "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-pg-node-2"
...
"Resource": "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-pg-node-n"
}
]
}
• Attach the policy to your IAM Role (Policies → Policy actions → Attach) and attach the Role to your
DataSunrise EC2 machine (EC2 machine → Instance Settings → Attach/Replace IAM Role)
• In case of cluster it's necessary to list all nodes of the cluster in the resources
• In case of cluster it is necessary to use rds:DescribeDbClusters in Action
• In case of cluster it is required to configure only the cluster parameter group instead of each node
• To create a pgaudit role and pgaudit extension for Aurora PostgreSQL it is necessary to connect to Writer
Node
• In case of creating a Read Replica (Regular RDS) or a Reader node of Aurora Cluster it is not necessary to
create anything in them because all the settings (pgaudit role and pgaudit extension) will be replicated from
original instance or writer node.
3. Connect to your DataSunrise's Web Console. Create a Database profile in the Configuration → Databases. In
the Capture Mode drop-down list, select Trail DB Audit Logs. Note that you can also create a Trailing task for an
existing Database profile. For this, navigate to the Capture Mode section and click Trail DB Audit Logs.
• Fill out all the required fields:
Interface element Description
Server DataSunrise server
Format Type Format of the file to store audit data in
Region AWS Region your target database is located in
Identifier Database Instance name
Authentication method • IAM Role: use the attached IAM role for authentication
• Regular: authentication using AWS Access/Secret Key
4. Navigate to Audit and create an audit Rule for your Database instance. For auditing results, navigate to Audit →
Transactional Trails.
Note: only super admins or file owners can read the logs. To enable other users to read the logs, you need to
save logs in another folder. You can solve this issue by doing something like the following:
mkdir /var/log/psql_logs
chmod 755 /var/log/psql_logs
chown postgres:postgres /var/log/psql_logs
log_file_mode = 0755
So other users will be able to read the logs. Refer to the following page for details: https://www.postgresql.org/
docs/9.1/runtime-config-logging.html
4. Configure the PostgreSQL configuration. Locate the following file: /etc/postgresql/12/main/postgresql.conf and
uncomment the following settings:
log_destination ='csvlog'
logging_collector = on
log_directory = '/var/log/psql_logs'
log_file_mode = 0755
log_checkpoints = off
log_connections = on
log_disconnections = on
pgaudit.role = 'pgaudit_role' # may be another
pgaudit.log = all
5. Connect to your PostgreSQL using some client and execute the following query to create a database role:
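The role name must match the pgaudit.role value set in postgresql.conf above (pgaudit_role in this example):
CREATE ROLE pgaudit_role;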
show shared_preload_libraries;
shared_preload_libraries
--------------------------
pg_stat_statements,pgaudit
6. Create the pgaudit extension with the following command:
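The standard statement for this is:
CREATE EXTENSION pgaudit;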
SHOW pgaudit.role;
pgaudit.role
------------------
pgaudit_role
7. Navigate to the Configuration → Databases section of the Web Console and create a new PostgreSQL
database Instance.
8. In the Capture Mode section, select Local Folder in the Connection drop-down list; specify the path to the
folder PostgreSQL stores its logs.
9. If you need to delete PostgreSQL logs automatically depending on your settings, do the following:
• Grant your user (datasunrise here) the permission to delete logs (Linux):
• In the Log Files Cleaning Options section of your Local Folder Trailings settings, set Limit Total Size of Log
Files (Mbytes) or/and Time Period to Store Log Files.
Note: set Limit Total Size of Log Files carefully since there is a chance of deleting a current file. This would
happen if you set the Limit Total Size... value lower than the default log file size. Therefore, it's worth setting
Limit Total Size... to at least double the Default Log File Size.
10. Configure an Audit Rule to capture data from your PostgreSQL using DataSunrise's Audit Trail mode. For
auditing results, navigate to Transactional Trails section of the Web Console
1. First, you need to create a file to collect audit data in (audit target):
For example:
Windows:
CREATE SERVER AUDIT audi_1 TO FILE ( FILEPATH = 'C:\Program Files\Microsoft SQL Server\120\audit\',
MAXSIZE=500MB );
Linux:
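On Linux the same statement can be used with a Linux file path, for example (the path is illustrative):
CREATE SERVER AUDIT audi_1 TO FILE ( FILEPATH = '/var/opt/mssql/audit/', MAXSIZE = 500MB );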
Example:
Example:
For example:
6. Connect to your DataSunrise's Web Console. Create a Database profile in the Configurations → Databases. In
the Mode drop-down list, select Trail DB Audit Logs. Note that you can also create a Trailing task for an existing
Database profile. For this, navigate to the Capture Mode section and click Trail DB Audit Logs
7. Insert the following lines into the Log files path/url field:
• Regular MS SQL instance: <path to the file you store audit logs in>/*
• Amazon RDS: <according to the Amazon documentation>/*
• Microsoft Azure: <Azure URL>/*
•
Note: you can check what folder your MS SQL Server uses to store auditing results in the
sys.server_file_audits. Refer to the following page for details: https://docs.microsoft.com/en-us/sql/relational-
databases/system-catalog-views/sys-server-file-audits-transact-sql?view=sql-server-ver15
8. Navigate to Audit and create an audit Rule for your Database instance. Select an Instance interface and a
DataSunrise server to run the task on. For auditing results, navigate to Audit → Transactional Trails.
1. First, you need to prepare your AWS RDS Instance. We recommend using Amazon Linux2 for hosting
DataSunrise.
2. Create a custom option group for your RDS MS SQL. Configure an AWS RDS Service Role required for MS
SQL's SQLSERVER_AUDIT option (there are two options):
• Create a new IAM Role for the AWS RDS service using the corresponding drop-down list in the IAM Role
subsection. You need to provide details on the S3 bucket where the generated logs will be stored based on the
logs Retention settings
• The AWS Account User should be authorized to create IAM Policies, Service Roles, attaching Policies to
Roles
• In case of insufficient privileges, please request your IAM Service Administrator to provide you with the
missing privileges to create the IAM Service Role required for the MS SQL Native Audit option
• If you want to create the IAM Role in advance, use the following example policy with the RDS Service as a
trusted entity:
IAM Policy:
Note: the details on IAM Role policy and other topics can be found in the AWS official guide
on SQL Server Audit configuration: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/
Appendix.SQLServer.Options.Audit.html
Note: you need to generate a Database Audit Specification for each database you need to monitor for
database activity. Please note that one Database Audit Specification can be attached to one Server Audit
only. If you need to audit multiple databases, you have to create more Server Audit units and
Database Audit Specifications. You can also reduce the amount of audited events generated by the SQL
Server by editing the example Audit Specifications provided above. Please refer to the Microsoft official
documentation for full coverage: https://docs.microsoft.com/en-us/sql/relational-databases/security/
auditing/sql-server-audit-database-engine?view=sql-server-ver15
USE MSDB;
CREATE USER <DATASUNRISE_DATABASE_USER> FOR LOGIN <DATASUNRISE_SERVER_LOGIN>;
GRANT SELECT ON DBO.RDS_FN_GET_AUDIT_FILE TO <DATASUNRISE_DATABASE_USER>;
GO
9. Prepare your DataSunrise server and configure your RDS MS SQL Instance for passive logging:
• Install unixODBC on your DataSunrise server
• Connect to your DataSunrise's Web Console. Create a Database profile in the Configurations → Databases.
In the Capture Mode drop-down list, select Trail DB Audit Logs. Note that you can also create a Trailing task
for an existing Database profile. For this, navigate to the Capture Mode section and click Trail DB Audit
Logs
Note: you need to install Microsoft SQL Server ODBC Driver 17 (recommended). For installation procedure
on Linux refer to the following document: https://docs.microsoft.com/en-us/sql/connect/odbc/linux-mac/
installing-the-microsoft-odbc-driver-for-sql-server?view=sql-server-ver15#redhat17
•
Fill out all the required fields. You can leave Request data... at its default value. For Log files path/url, provide
the same parameter as in your SQL Server Audit Specification (D:\rdsdbdata\SQLAudit\*.sqlaudit). Note that
the path should end with *.sqlaudit.
10. Navigate to Audit and create an audit Rule for your Database instance. Select an Instance interface and a
DataSunrise server to run the task on. For auditing results, navigate to Audit → Transactional Trails. Events
may be displayed with a slight lag due to SQL Server engine handling the audit event.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"rds:DownloadDBLogFilePortion",
"rds:DescribeDBLogFiles",
"rds:DownloadCompleteDBLogFile",
"rds:DescribeDbClusters"
],
"Resource": "arn:aws:rds:us-east-2:012345678901:cluster:test-au-mysql"
"Resource": "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-mysql-node-1"
"Resource": "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-mysql-node-2"
...
"Resource": "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-mysql-node-n"
}
]
}
• Attach the policy to your IAM Role (Policies → Policy actions → Attach) and attach the Role to your
DataSunrise EC2 machine (EC2 machine → Instance Settings → Attach/Replace IAM Role)
• In case of cluster it's necessary to list all nodes of the cluster in the resources
• In case of cluster it is necessary to use rds:DescribeDbClusters in Action
• In case of cluster it is required to configure only the cluster parameter group instead of each node
• To create a mysql role for Aurora MySQL, it is necessary to connect to Writer Node
• In case of creating a Read Replica (Regular RDS) or a Reader node of Aurora Cluster it is not necessary to
create anything in them because all the settings (the role) will be replicated from original instance or Writer
node.
4. Connect to your DataSunrise's Web Console. Create a Database profile in the Configuration → Databases. In
the Capture Mode drop-down list, select Trail DB Audit Logs. Note that you can also create a Trailing task for an
existing Database profile. For this, navigate to the Capture Mode section and click Trail DB Audit Logs.
5. Navigate to Audit and create an audit Rule for your Database instance. Select an Instance interface and a
DataSunrise server to run the task on. For auditing results, navigate to Audit → Transactional Trails.
1. In your AWS MySQL Parameter group, set the following parameters as shown below:
general_log = 1
log_output=FILE
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"rds:DownloadDBLogFilePortion",
"rds:DescribeDBLogFiles",
"rds:DownloadCompleteDBLogFile",
"rds:DescribeDbClusters"
],
"Resource": "arn:aws:rds:us-east-2:012345678901:cluster:test-au-mysql"
"Resource": "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-mysql-node-1"
"Resource": "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-mysql-node-2"
...
"Resource": "arn:aws:rds:us-east-2:012345678901:db:datasunrise-au-mysql-node-n"
}
]
}
• Attach the policy to your IAM Role (Policies → Policy actions → Attach) and attach the Role to your
DataSunrise EC2 machine (EC2 machine → Instance Settings → Attach/Replace IAM Role)
• In case of cluster it's necessary to list all nodes of the cluster in the resources
• In case of cluster it is necessary to use rds:DescribeDbClusters in Action
• In case of cluster it is required to configure only the cluster parameter group instead of each node
• To create a mysql role for Aurora MySQL, it is necessary to connect to Writer Node
• In case of creating a Read Replica (Regular RDS) or a Reader node of Aurora Cluster it is not necessary to
create anything in them because all the settings (the role) will be replicated from original instance or Writer
node.
3. Create an Option Group for your database and set the following parameters' values:
• SERVER_AUDIT_ROTATE_SIZE: not less than 1000000
• SERVER_AUDIT_FILE_ROTATIONS: not less than 10
4. Connect to your DataSunrise's Web Console. Create a Database profile in the Configuration → Databases. In
the Capture Mode drop-down list, select Trail DB Audit Logs. Note that you can also create a Trailing task for an
existing Database profile. For this, navigate to the Capture Mode section and click Trail DB Audit Logs. Select
General Log in the Format Type drop-down list.
5. Navigate to Audit and create an audit Rule for your Database instance. Select an Instance interface and a
DataSunrise server to run the task on. For auditing results, navigate to Audit → Transactional Trails.
mysql -u root -p
4. Create a folder to store logs and make the mysql:adm user the owner:
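For example (the path matches the server_audit_file_path value used in the next step):
sudo mkdir -p /var/log/mysql/server_audit
sudo chown mysql:adm /var/log/mysql/server_audit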
5. Open the /etc/my.cnf.d/server.cnf file and add the following lines to its [mysqld] section:
plugin_load_add = server_audit.so
server_audit_events = CONNECT,QUERY
server_audit_file_path = /var/log/mysql/server_audit/server_audit.log
server_audit_file_rotate_size = 1073741824
server_audit_file_rotations = 4
server_audit_logging = ON
server_audit_output_type = file
server_audit_query_log_limit = 8192
7. Ensure that MySQL auditing works: connect to your database server with a client application and execute some
queries. If everything is OK, MySQL will create the following file: /var/log/mysql/server_audit/server_audit.log
8. Learn what group the log files belong to (adm or mysql as a rule):
sudo ls -l /var/log/mysql/server_audit
9. Add datasunrise user to the group the log files belong to (see the previous step):
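For example, if the logs belong to the adm group:
sudo usermod -a -G adm datasunrise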
Note that if you don't want to do it because of security concerns, use Samba.
10. Grant your user the privilege to read the logs:
11. Navigate to the Configuration → Databases section of the Web Console and create a new MySQL database
Instance.
12. In the Capture Mode section, select Local Folder in the Connection drop-down list; specify the path to the
folder MySQL stores its logs.
13. Configure an Audit Rule to capture data from your MySQL using DataSunrise's Audit Trail mode. For auditing
results, navigate to Transactional Trails section of the Web Console.
Note: the Audit Plugin mentioned here is just an example, you can also use other methods (NFS for example) to get
audit data from MariaDB.
2. Create a folder to store logs and make the mysql:adm user the owner:
3. Open the /etc/mysql/mariadb.conf.d/server_audit.cnf file (other possible locations are: /etc/my.cnf, /etc/my.cnf.d/
server.cnf, /etc/my.cnf.d/mariadb-server.cnf) and add the following lines to its [mariadb] section:
plugin_load_add = server_audit.so
server_audit_events = CONNECT,QUERY
server_audit_file_path = /var/log/mysql/server_audit/server_audit.log
server_audit_file_rotate_size = 1073741824
server_audit_file_rotations = 4
server_audit_logging = ON
server_audit_output_type = file
server_audit_query_log_limit = 8192
5. Ensure that MariaDB auditing works: connect to your database server with a client application and execute
some queries. If everything is OK, MariaDB will create the following file: /var/log/mysql/server_audit/
server_audit.log
6. Learn what group the log files belong to (adm or mysql as a rule):
sudo ls -l /var/log/mysql/server_audit
7. Add datasunrise user to the group the log files belong to (see the previous step):
Note that if you don't want to do it because of security concerns, use Samba.
8. Grant your user the privilege to read the logs:
9. Navigate to the Configuration → Databases section of the Web Console and create a new MariaDB database
Instance.
10. In the Capture Mode section, select Local Folder in the Connection drop-down list; specify the path to the
folder MariaDB stores its logs.
11. Configure an Audit Rule to capture data from your MariaDB using DataSunrise's Audit Trail mode. For auditing
results, navigate to Transactional Trails section of the Web Console.
mysql -u root -p
4. Create a folder to store logs and make the mysql:adm user the owner:
5. Open the /etc/my.cnf.d/server.cnf file and add the following lines to its [mysqld] section:
plugin_load_add = server_audit.so
server_audit_events = CONNECT,QUERY
server_audit_file_path = /var/log/mysql/server_audit/server_audit.log
server_audit_file_rotate_size = 1073741824
server_audit_file_rotations = 4
server_audit_logging = ON
server_audit_output_type = file
server_audit_query_log_limit = 8192
7. Ensure that MySQL auditing works: connect to your database server with a client application and execute some
queries. If everything is OK, MySQL will create the following file: /var/log/mysql/server_audit/server_audit.log
8. Learn what group the log files belong to (adm or mysql as a rule):
sudo ls -l /var/log/mysql/server_audit
10. Configure samba by editing the /etc/samba/smb.conf file in the following way:
[global]
workgroup = WORKGROUP
security = user
map to guest = bad user
wins support = no
dns proxy = no
log file = /var/log/samba/log.%m
max log size = 65536
logging = file
[server_audit]
path = /var/log/mysql/server_audit/
valid users = smbuser
guest ok = no
browsable = yes
13. Add smbuser to the group the logs belong to (see step 8):
ls
3. Connect to your DataSunrise's Web Console. Create a Database profile in the Configuration → Databases. In
the Capture Mode drop-down list, select Trail DB Audit Logs. Note that you can also create a Trailing task for an
existing Database profile. For this, navigate to the Capture Mode section and click Trail DB Audit Logs:
• Select the database interface and DataSunrise server the sync with audit trail will be performed on. Save the
changes
• Specify the database, schema, VIEW you created before and the Role that you granted the required privileges
before
4. Configure an Audit Rule to capture data from Snowflake using DataSunrise's Audit Trail mode. You can use an
empty Object Group or Query Types Rule to test Audit Trail. Note: wait for about 120 minutes for the audited
queries to be displayed at the Transactional Trails section of the Web Console. Such a delay is caused by
Snowflake itself because Snowflake refreshes session information every 120 minutes.
Important: DO NOT select the S3 bucket you're going to audit as the one you will use for storing audit log
files. This will lead to auditing of unnecessary DataSunrise and Amazon activity and may pose a threat to your
S3 security.
• Save changes
2. Add a new Amazon S3 instance to DataSunrise using the Audit Trail option:
• Open DataSunrise's Web Console and navigate to Configuration → Databases. Create new Amazon S3
database Instance or open an existing database instance details page where Audit Trail was configured
• At the bottom section of the page, click Trail DB Audit Logs
3. Configure an Audit Rule to capture data from S3 using DataSunrise's Audit Trail mode
• For auditing results, navigate to Audit → Transactional Trails.
dbms.connector.bolt.enabled=true
dbms.connector.bolt.tls_level=DISABLED
dbms.connector.bolt.listen_address=:7687
dbms.connector.bolt.advertised_address=:7687
and
dbms.logs.query.rotation.size=20k
dbms.logs.query.rotation.keep_number=7
3. If necessary, execute queries you want to audit. You can find the logs in the following folder: /var/log/neo4j/
4. Navigate to the Configuration → Databases section of the Web Console and create a new Neo4J database
Instance.
5. In the Capture Mode section, select Local Folder in the Connection drop-down list; in the Mode drop-down list,
select Trailing the db audit logs; specify the path to the folder where Neo4J stores its logs (/var/log/neo4j/ by default)
6. Configure an Audit Rule to capture data from your Neo4J using DataSunrise's Audit Trail mode. For auditing
results, navigate to the Transactional Trails section of the Web Console.
stat -c %G /var/log/mysql
stat -c %A /var/log/mysql
stat -c %a /var/log/mysql
If the user is not included in the owner group, add this user to the group:
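For example, assuming the stat commands above report the mysql group and the user in question is datasunrise (both names are illustrative):
# Add the user to the owner group of the log folder
sudo usermod -aG mysql datasunrise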
Note: The number and size of Amazon Redshift log files in Amazon S3 depends heavily on the activity in your
cluster. If you have an active cluster that is generating a large number of logs, Amazon Redshift might generate the
log files more frequently. You might have a series of log files for the same type of activity, such as having multiple
connection logs within the same hour. Because Amazon Redshift uses Amazon S3 to store logs, you incur charges
for the storage that you use in Amazon S3. Before you configure logging, you should have a plan for how long you
need to store the log files. As part of this, determine when the log files can either be deleted or archived based
on your auditing needs. The plan that you create depends heavily on the type of data that you store, such as data
subject to compliance or regulatory requirements. For more information about Amazon S3 pricing, go to Amazon
Simple Storage Service (S3) Pricing.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Put bucket policy needed for audit logging",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<AccountId>:user/logs"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::<BucketName>/*"
},
{
"Sid": "Get bucket policy needed for audit logging ",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<AccountID>:user/logs"
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::<BucketName>"
}
]
}
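As an optional illustration, the policy above could also be attached to the bucket with the AWS CLI (the policy file name is hypothetical):
# Attach the audit-logging policy to the target bucket
aws s3api put-bucket-policy --bucket <BucketName> --policy file://audit-logging-policy.json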
4. Navigate to Properties → Database Configurations → Edit → Edit audit logging and specify bucket name and
prefix (folder to store logs in)
5. Enable Publicly accessible (Cluster name → Actions → Modify publicly accessible settings)
6. Open DataSunrise's Web Console. Navigate to Configuration → Databases and create a Redshift database
Instance
7. Click Trail DB Audit Logs and in the trailing settings, input your S3 bucket name and Prefix.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"Kinesis:DescribeStreamSummary",
"Kinesis:ListShards",
"Kinesis:GetShardIterator",
"Kinesis:GetRecords",
"Kinesis:DescribeStreamSummary",
"KMS:Decrypt",
"RDS:DescribeDBClusters"],
"Resource": ["<ARN of your Kinesis stream>",”<ARN of your KMS key>”,”<ARN of your RDS>”]
}
]
}
https://trailtest.blob.core.windows.net/sqldbauditlogs/antony-test/master/SqlDbAuditing_ServerAudit/
7. Create an Audit Rule for your Synapse Instance. For auditing results, navigate to Audit → Transactional trails.
Important: you may experience some issues with opening and closing sessions, but all the database events will be
saved to DataSunrise without any problems. In the case of Azure SQL Managed Instance, the configuration procedure is
similar to the one for MS SQL with the following addition: https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/auditing-configure#createspec
Do the following:
1. Create an Azure SQL database server. Navigate to the target database page or, in case you need to audit all
database logs, navigate to the database server page
2. Navigate to the Auditing section from the left panel (Azure SQL Auditing) and set Enable Azure SQL Auditing
to ON
3. Add a storage. Click Storage account → Configure required settings. At the next page, click Create new or
select an existing one
4. Add your Azure SQL Instance to DataSunrise (Configuration → Databases)
5. Configure Trail DB Audit Logs:
Setting Required value
Instance Interface Navigate to Azure Storage Explorer, select your Storage and Copy URL
to the blob container. Paste it here
Log files path/url URL of the Azure storage you created before
6. Navigate to Audit and create an audit Rule for your Azure SQL database Instance. For auditing results, navigate
to Audit → Transactional Trails.
log_checkpoints = OFF
log_destination = CSVLOG
pgaudit.log = ALL
shared_preload_libraries = PG_STAT_STATEMENTS, PGAUDIT
log_line_prefix = %t-%c-u"%u"u-
6. Enable diagnostic settings for your PostgreSQL server using either the Azure portal, CLI, REST API, or
PowerShell. The log category to select is PostgreSQLLogs. See the next step for a guide on using Azure portal
for that
7. In the Azure portal, navigate to Diagnostic settings of your PostgreSQL server. Click Add Diagnostic Setting. Fill
out all the required fields. Select the PostgreSQLLogs log type. In Destination details, select Archive to a storage
account and select your Storage account. Save the setting
8. Assign an Azure role for access to BLOB data (Reader and Storage BLOB Data Reader for the app created earlier)
9. Navigate to Access Control (IAM). Click Add role assignment
10. Select Reader, then click Next
11. Click +Select members and select your app. Review and assign
12. Click Add role assignment again and select Storage Blob Data Reader
13. Click +Select Members and select your app. Review and assign
14. Open DataSunrise's Web Console and create a PostgreSQL Instance in Configuration → Databases. Select
Trailing the DB Audit Logs in the Instance's settings
15. Copy ClientID and TenantID from your Azure App
16. Client Secret is the VALUE mentioned in step 3
17. You can find Blob container name in Storage Accounts - your account → Containers of your Azure settings
18. Create some Audit Rules to get the logs. For auditing results, navigate to Audit → Transactional trails
sp_configure "auditing", 1
4. Create a database for storing archived Audit Tables (aud_db for example)
5. Create an archive table with columns similar to those in sybsecurity Audit Tables:
use aud_db
go
select *
into audit_data
from sybsecurity.dbo.sysaudits_01
where 1 = 2
6. Create a threshold procedure in the sybsecurity database. Example for two Audit Tables:
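The procedure listing itself is not reproduced in this copy of the text; a sketch along the lines of the standard SAP ASE threshold-procedure example, adapted to the aud_db archive database created above, could look like this:
create procedure audit_thresh as
declare @audit_table_number int
/* Determine which audit table is currently in use */
select @audit_table_number = scc.value
from master.dbo.syscurconfigs scc, master.dbo.sysconfigures sc
where sc.config = scc.config and sc.name = "current audit table"
/* Switch auditing to the next audit table (0 means "next") and truncate it */
exec sp_configure "current audit table", 0, "with truncate"
/* Archive the records from the table that has just filled up */
if @audit_table_number = 1
    insert aud_db.dbo.audit_data select * from sysaudits_01
else if @audit_table_number = 2
    insert aud_db.dbo.audit_data select * from sysaudits_02
go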
7. Attach a stored procedure to audit segments. To see the information on segments, execute the following query
in the sybsecurity database
sp_helpsegment
Attach the stored procedure to audit segments by executing the following queries:
use sybsecurity
go
sp_addthreshold sybsecurity, aud_seg_01, 250, audit_thresh
go
sp_addthreshold sybsecurity, aud_seg_02, 250, audit_thresh
go
When sysaudits_01 is within 250 pages of being full, the threshold procedure audit_thresh is triggered.
The procedure switches the current Audit Table to sysaudits_02, and SAP ASE starts writing new audit records
to sysaudits_02. The procedure also copies all audit data from sysaudits_01 to the audit_data archive table
located in the aud_db database. The rotation of the Audit Tables continues in this manner without any manual
intervention
8. Set auditing options. Having installed auditing, use sp_audit to set the following auditing options:
sp_audit "login", "all", "all", "pass"
sp_audit "logout", "all", "all", "on"
Note: to make other projects visible in DataSunrise, apply the above roles to your Service Account for each
project.
5. Click Continue. The last step is to grant user access to your Service Account. This step is optional and may be
skipped
6. Now you need to create a Key File. Enter your Account's settings
7. Navigate to the KEYS tab. Click Add Key → Create new key
8. Select the JSON file type and click CREATE
9. At this point a Key File will be generated and you will receive a prompt to download it. Download the key file
10. Save the key file in a secure location as it contains the private key required for establishing a connection with
your BigQuery database. Note that it cannot be generated again if lost
11. Open DataSunrise's Web Console and navigate to Configuration → Databases. Click Add Database
12. Provide the following connection details:
• Logical Name: any
• Database Type: BigQuery
• Hostname or IP: leave the default value
• Port: default 443
• Authentication Method: Regular
• Service Account Email: use the Email address from the JSON file you downloaded while creating a Service
Account
• Save Secret Key: optional
• RSA Private Key: use the Private Key from the JSON file you downloaded while creating a Service Account
• Project ID: use the Project ID from the JSON file you downloaded while creating a Service Account
13. Select Trailing the DB Audit Logs in the Instance's settings. Test the connection between DataSunrise and your
database and save the settings
14. Create some Audit Rules to get the logs. For auditing results, navigate to Audit → Transactional trails.
Parameter Description
Allow check box Ignore the incoming queries.
Log Event in Storage check box Save the event info in the Audit Storage (refer to Audit Storage
Settings on page 383).
Syslog Configuration drop-down list Select a CEF group to use when exporting data through Syslog
(refer to Syslog Settings (CEF Groups) on page 222).
Blocking Method drop-down list Method of blocking an SQL query when the rule is triggered:
• Query Error: query is blocked and an SQL error notification
is sent
• Disconnect: query is blocked and client application is
disconnected from the target database
• Empty Result Set: query is blocked and the client gets an
empty result set instead of actual data
Custom Blocking Message field A message DataSunrise displays when blocking a query. It can be
unique for each Rule, which lets you give each Rule a meaningful,
context-aware message informing your users of the reasons behind
their limited access to certain areas, or during certain time periods
when used along with the Schedule feature.
4. Input the required information to the Filter Sessions subsection (Filter Sessions on page 111).
5. Input the required information to the Filter Statements subsection (Filter Statements on page 114)
6. Set Trigger the Rule only if the number of affected/fetched rows is not less than: if necessary (Response-
Time Filter on page 120)
7. Configure User Blocking Filters if necessary. These filters can be used to prevent user attempts to reach the
protected database (a user is blocked by name or IP address when the number of prohibited operations exceeds
the specified value). Use the following elements to configure the blocking:
Filter parameters Description
User Block Options drop-down list • Don't block: ignore all user operations
• Block Temporarily: block a user for a certain period of time (see
Block User for a Period of Time)
• Block Permanently: block a user permanently
User Block Method drop-down list • By User name and Host: block all access attempts coming from
the specified User and IP address/Host
• By Host only: block access attempts coming from the specified IP
address/Host
Block User for a Period of Time (minutes) Specify a period of time to temporarily block a user for
field (for Block temporarily)
Trigger the Rule if the Number of Specify the number of prohibited operations intercepted to trigger
Prohibited Operations Reached field the Rule. When the number of operations exceeds this number, the
user will be blocked
Per (minutes) field Specify the time window, in minutes, within which the prohibited
operations are counted (if necessary)
Important: Random Email, Random string, Random from Lexicon, Random Credit Card Number and Regexp replace
(MS SQL only) masking methods (refer to Masking Methods on page 167) require creation of a dedicated schema
or database called DS_ENVIRONMENT (by default) to store tables and views needed to perform masking using the
aforementioned methods. This is applicable both to Dynamic and Static masking. You can change your Environment
name in Configuration → Databases → Your DB Instance → Advanced Settings → Environment Name.
An example of a SELECT query after masking applied (PostgreSQL, the "Email" column is being masked)
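As a purely illustrative sketch (the table, column names, and masked values below are assumptions, not taken from the guide), the effect looks roughly like this:
-- Query sent by the client application
SELECT "Name", "Email" FROM customers;
-- Result returned through the DataSunrise proxy with the "Email" column masked
--  Name  | Email
--  Alice | ******@*******.***
--  Bob   | ******@*******.***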
Besides relational and NoSQL databases, DataSunrise can also mask the contents of CSV files stored in Amazon S3
buckets. Masking is applied to the specified comma-separated fields.
Restriction: there is a limitation associated with using stored procedures for Dynamic Masking.
Let's assume that two masking Rules exist and each Rule is configured to be triggered when a certain column
is SELECTed: the first Rule is configured on "column1" and the second Rule on "column2". If both
columns are SELECTed using a stored procedure, only the second Rule will be triggered.
Restriction: for AWS RDS-hosted MariaDB, dynamic masking inside functions and procedures doesn't work
because admin privileges required for masking inside routines can't be obtained on RDS databases.
Important: for Dynamic Masking using random-based methods, you need a dedicated schema (DS Environment) in
your database (see Configuring DataSunrise for Masking with random-based methods on page 177).
Parameter Description
Keep Row Count check box Disable masking of columns included into GROUP BY,
HAVING, ORDER BY, WHERE clauses.
Mask SELECTs Only check box Mask only SELECT queries. For example, the following
query will not be masked:
UPDATE customers SET id = id RETURNING *
Action drop-down list Select an appropriate option from the list to block
certain queries aimed at modification of masked
columns. For example, such queries might be blocked
(the Email column is the masked column):
UPDATE test.customers SET "Order" = '1234' WHERE
"Email" = '[email protected]';
4. Input the required information to the Filter sessions subsection (Filter Sessions on page 111).
5. Input the required information to the Masking Settings subsection:
Parameter Description
Mask Data subsection: Specify database columns to mask. Click Select to do it manually and select the required columns in the objects tree. Click Select, then ADD REGEXP to use regular expressions.
Masking Method drop-down list (for Mask Data only): Data obfuscation algorithm. Refer to Masking Methods on page 167.
Hide Rows subsection: Hide table rows which don't match the specified Masking Value. Refer to Masking Methods on page 167. Click Select to select a table to hide rows in.
Condition for Column Value to Show Rows field (for Hide Rows only): Condition for the value of the column whose rows should be hidden (any WHERE-type condition). For example, Age>25 means that the table rows where the Age column's value is not greater than 25 will be hidden.
More examples:
LastName = 'Smith'
LastName LIKE ('%Smi%')
EmployeeKey <= 500
EmployeeKey = 1 OR EmployeeKey = 8 OR EmployeeKey = 12
EmployeeKey <= 500 AND LastName LIKE '%Smi%' AND FirstName LIKE '%A%'
LastName IN ('Smith', 'Godfrey', 'Johnson')
EmployeeKey Between 100 AND 200
Note: if you select a column (or columns) associated with another column (linked by a primary key, for example), you will
be prompted that there are columns that contain related data. Click this message to select the associated
columns. Once you select them, these columns will be added to the list of columns to be masked. More on
associations: Table Relations on page 400.
Important: if you're going to use this masking method, ensure that your
case sensitivity settings correspond to the case sensitivity settings of the
database server you're going to mask data at.
Important: if you're going to use this masking method, ensure that your
case sensitivity settings correspond to the case sensitivity settings of the
database server you're going to mask data at.
FP Encryption FF3 Number: Format-preserving encryption for NUMBER-type values using the FF3 encryption algorithm. + +
Random US Phone Number: Replaces a US phone number with a randomly generated phone number in the following format: 1-555-XXX-XXXX. Available for MySQL, MariaDB, Aurora MySQL, PostgreSQL, Aurora PostgreSQL, Redshift, TiDB, Greenplum, Oracle and MS SQL Server. + +
NULL Value: Replaces the masked database entry with a NULL. + +
Substring: Creates a substring out of the original string. Starting Position defines the starting character of the resulting substring and String's Length defines the substring length. Available for MySQL, MariaDB, Aurora MySQL, Oracle, Redshift, PostgreSQL, Aurora PostgreSQL, TiDB, Greenplum, MS SQL Server. + +
Random String: Returns a random string of a random length (the string's length can be defined with Minimum Length and Maximum Length). Available for MySQL, MariaDB, Aurora MySQL, Redshift, PostgreSQL, Aurora PostgreSQL, Greenplum and Oracle. + +
Fixed date: Replaces date values with a fixed value. Select the date (fixed value) via the (Date) drop-down lists. + +
Fixed time: Replaces time values with a fixed value. Select the time (fixed value) via the (Time) drop-down lists. + +
Fixed datetime: Replaces datetime values with a fixed value. + +
Random date interval: Replaces date values with a random value from a predefined range. Specify a range of dates to select a random value from via the Starting Date and Ending Date drop-down lists. + +
Random time interval: Replaces time values with a random value from a predefined range. Specify a range of time to select a random value from via the Starting Time and Ending Time drop-down lists. + +
Warning: Sometimes data masking will not work. For example, if the Show First and Last algorithm you selected is
configured to show the first three and last three characters of a DB column's entry, and the entry itself is only six
characters long, masking will not be applied. In such cases, use other masking types or purpose-written functions.
Note: when masking entries that include strings of fixed length ("char", "varchar", "nchar", "nvarchar" data types,
for example), the string obtained after masking may be longer than the original string. The following masking types may
cause an obfuscated entry to exceed the original string length:
• Fixed string
• Function call
• Regexp replace
9.11.4.1 Using a Custom Function for Masking
Along with prebuilt masking methods, you can use your own masking algorithms in the form of functions. To
employ custom function-based masking, do the following:
1. Create a function that will be used to mask your data. For example, here is a function for a PostgreSQL database
intended to replace the login parts of emails with random values (consisting of prefixes + mids + suffixes):
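The function listing is not reproduced in this copy of the text; a minimal sketch of such a function (the mask_email name and the word lists are illustrative assumptions) could look like this:
CREATE OR REPLACE FUNCTION mask_email(email text) RETURNS text AS $$
DECLARE
    prefixes text[] := ARRAY['alex', 'sam', 'chris'];
    mids     text[] := ARRAY['.green', '.lake', '.stone'];
    suffixes text[] := ARRAY['01', '42', '99'];
BEGIN
    -- Replace the login part with prefix + mid + suffix and keep the original domain
    RETURN prefixes[1 + floor(random() * 3)::int]
        || mids[1 + floor(random() * 3)::int]
        || suffixes[1 + floor(random() * 3)::int]
        || substring(email FROM position('@' IN email));
END;
$$ LANGUAGE plpgsql;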
Procedure Findings. The patient, Patrick Kelley, is a 39 year old male born on October 6, 1979. He has
a 6 mm sessile polyp that was found in the ascending colon and removed by snare, no cautery. Patrick's
address is 19 North Ave. Humbleton WA 02462. His SSN is 123-23-234. He experienced the polyp after
getting out of his blue Honda Accord with a license number of WDR-436. We were able to control the
bleeding. Moderate diverticulosis and hemorrhoids were incidentally noted. Recurrent GI bleed of
unknown etiology; hypotension perhaps secondary to this but as likely secondary to polypharmacy. He
reports first experiencing hypotension while eating queso at Chipotle.
Masked data:
Example 2
Unmasked data:
Dear Mark,I am writing you to enquire about the status of the task #18897 in TRACKME task manager
(https://cd.trackme.com/18897). As a manager of Customer Development department, it is your
responsibility to speed up this stuck task. As far as I know, it was assigned to Ellie Sanders,
junior customer relationship manager #056. Please speed this up, because Mr. Williams is expecting to
get some insights from your research for the sales campaign which will be kicked off on 2019-11-11.
You can email me at [email protected] call me. My phone no is 202-555-0181P.S. Please check
emails from Mrs. Martinez. She was looking for you to give you some details on your business trip to
Phoenix.Cheers,Mike
Masked data:
*********, I am writing you to enquire about the status of the task #***** in ******* task manager
*****************************). As a manager of ******************** department, it is your
responsibility to speed up this stuck task. As far as I know, it was assigned to *************, junior
customer relationship manager #***. Please speed this up, because ************ is expecting to get
some insights from your research for the sales campaign which will be kicked off on **********. You
can email me at ******************* *or call me. My phone no is ************ P.S. Please check emails
from *************. *** was looking for you to give you some details on your business trip to *******.
Cheers, ***
Note: you need to install Java 1.8+ to be able to use NLP Data Masking. If you're running DataSunrise on Linux, you
need to configure JVM as well (Configuring JVM on Linux on page 173). If you're experiencing some problems with
JVM on Windows, add the path to your JVM folder to the PATH environment variable (for example: C:\Program Files
\Java\jre1.8.0_301\bin\server).
For instructions on how to use Unstructured masking, refer to subs. Dynamic Data Masking on page 164
Configuring JVM on Linux
To utilize the NLP Data Masking, you need to configure a Java Virtual Machine (JVM). To do this, perform the
following:
cd /etc/ld.so.conf.d/
4. Create a configuration file that will be used to register your Java library:
5. Paste the path to your "libjvm.so" into the configuration file. For example:
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.201.b09-0.amzn2.x86_64/jre/lib/amd64/server/
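For example, assuming java.conf as the configuration file name (the guide does not prescribe a particular name), steps 4 and 5 could be performed as follows:
# Create the configuration file and write the libjvm.so path into it
echo "/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.201.b09-0.amzn2.x86_64/jre/lib/amd64/server/" | sudo tee /etc/ld.so.conf.d/java.conf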
6. Update cache:
sudo ldconfig
/opt/datasunrise/JvmChecker
You should get something like this:
for i = 1, #batchRecords do
if dataFormat==0
then
batchRecords[i]= "masked"
end
end
This script replaces all string values (dataFormat==0) in a table with "masked" string
5. For Static masking, the following Global variables are available:
• columnName (string) - column name
• fullColumnType (string) - actual column type
• columnValue (string) - value contained in the column
• columnType (number) - column data type (0 - number, 1- string, 2 - date, 3 - date and time, 4 - time, 5 - other)
For Static masking, DataSunrise returns table's contents by rows and columns. Thus, you can mask certain
columns with a script. You should use maskValue as the output parameter. See an example of a script below:
if (columnType == 0) then
maskValue = 1
elseif (columnType == 1) then
maskValue = "masked"
elseif (columnType == 2) then
maskValue = "2017.08.09"
elseif (columnType == 3) then
maskValue = "2017.08.09 12:00:00"
elseif (columnType == 4) then
maskValue = "12:00:00"
else
maskValue = "masked"
end
This script replaces values of different types (note columnType) with corresponding values (maskValue). For
example all columns of columnType==1 (string) will be masked by replacing the contents with "masked"
string.
9.11.4.4 Extending Lua Script Functionality
You can plug in third-party Lua modules to extend DataSunrise's Lua functionality.
To access the modules in the DataSunrise Lua snippet, do the following:
1. Use 64-bit C-compiled modules only (.dll, .so).
2. Check the modules for dependencies with the Dependency Walker application before using them.
3. Place all the modules you're going to use (and the ones they depend on) into the DataSunrise installation folder.
4. Example. Let's assume that we're going to use a custom "cjson" module. We open the DataSunrise Lua Script
editor and add the following lines to the script:
local mymodule = {}
function mymodule.foo()
print("Hello World!")
end
return mymodule
To call the function included in this module in your Lua script, add the following lines to the script:
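These lines are not shown in this copy of the text; assuming the module above is saved as mymodule.lua (the file name is an assumption), the call could look like this:
-- Load the module and call its function from the masking script
local mymodule = require("mymodule")
mymodule.foo()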
Important: Conditional Masking is an additional optional parameter available for all masking methods except
for FP Masking methods, Unstructured Masking, and Masking with Lua script.
Important: AWS RDS MariaDB 10.5+ doesn't support the GRANT ALL privilege. In case your database doesn't
support GRANT ALL, execute the following query:
Parameter Description
Full File Name text field Path to the file which should be masked. Note that it should start
with "/". For example: /mybucket/customers.xml
Fields/Columns text field Names of columns or numbers of columns to mask for CSV or text
inside tags to mask for XML. For example: first_name,last_name
Note that CSV column numbers start with 1 and not 0.
According to the following example, the first_name
and last_name columns of a CSV file will be masked:
first_name,last_name,middle_name
masked,masked,Jonathan
masked,masked,Robert
And the text inside the <first_name> and <last_name> tags will be
masked for XML:
<first_name>masked</first_name> <last_name>masked</last_name>
For XML, an abridged version of XPath is used. You should specify the
tag whose contents should be masked in the following way:
/root_tag/sub_tag1/sub_tag2/sub_tag.../target_tag
For your convenience, you can use "?" for one nesting level or "*" for
unknown number of nesting levels. For example:
/*/first_name means that the contents of all "first_name" tags in the
XML file will be masked.
JSON Path text field (for JSON only) Keys' values of a JSON file to be masked.
Here's a JSON example:
[ { "firstName": "John", "lastName" : "doe", "age" : 26, "address" :
{ "streetAddress": "Naist street", "city" : "Nara", "postalCode" :
"630-0192" }, ]
Note: just in case, you can find the Informix masking scripts in the DataSunrise installation folder, scripts/Masking
folder
Important: Random Email, Random string, Random from Lexicon, Random Credit Card Number and Regexp replace
(MS SQL only) masking methods (refer to Masking Methods on page 167) require creation of a dedicated schema
or database called DS_ENVIRONMENT (by default) to store tables and views needed to perform masking using the
aforementioned methods. This is applicable both to Dynamic and Static masking. You can change your Environment
name in Configuration → Databases → Your DB Instance → Advanced Settings → Environment Name.
9.11.1 Generating a Private Key Needed for Data Masking
To use Format-Preserving Masking methods (Format-Preserving Masking on page 177) and Random methods with
the Consistent masking option enabled (for Static Masking), you need to create an encryption key. For this, do the
following:
1. To create a new key, navigate to Masking → Masking Keys and click Add Key. Either generate a new key at the
Generate tab or navigate to the Insert tab and paste/upload an existing key.
2. You can find masking keys in the Masking → Masking Keys section. You can edit your keys but note that they
are of fixed length.
For relational databases, DataSunrise modifies the incoming query itself, making the target database construct
a response with obfuscated data inside. For NoSQL databases (DynamoDB, Mongo, Elasticsearch), DataSunrise
modifies the database response before redirecting it to the client application.
An example of a SELECT query before masking applied (PostgreSQL)
An example of a SELECT query after masking applied (PostgreSQL, the "Email" column is being masked)
Besides relational and NoSQL databases, DataSunrise can also mask the contents of CSV files stored in Amazon S3
buckets. Masking is applied to the specified comma-separated fields.
Restriction: there is a limitation associated with using stored procedures for Dynamic Masking.
Let's assume that two masking Rules exist and each Rule is configured to be triggered when a certain column
is SELECTed: the first Rule is configured on "column1" and the second Rule on "column2". If both
columns are SELECTed using a stored procedure, only the second Rule will be triggered.
Restriction: for AWS RDS-hosted MariaDB, dynamic masking inside functions and procedures doesn't work
because admin privileges required for masking inside routines can't be obtained on RDS databases.
Important: for Dynamic Masking using random-based methods, you need a dedicated schema (DS Environment) in
your database (see Configuring DataSunrise for Masking with random-based methods on page 177).
Action drop-down list Select an appropriate option from the list to block
certain queries aimed at modification of masked
columns. For example, such queries might be blocked
(the Email column is the masked column):
UPDATE test.customers SET "Order" = '1234' WHERE
"Email" = '[email protected]';
4. Input the required information to the Filter sessions subsection (Filter Sessions on page 111).
5. Input the required information to the Masking Settings subsection:
Parameter Description
Mask Data subsection: Specify database columns to mask. Click Select to do it manually and select the required columns in the objects tree. Click Select, then ADD REGEXP to use regular expressions.
Masking Method drop-down list (for Mask Data only): Data obfuscation algorithm. Refer to Masking Methods on page 167.
Hide Rows subsection: Hide table rows which don't match the specified Masking Value. Refer to Masking Methods on page 167. Click Select to select a table to hide rows in.
Condition for Column Value to Show Rows field (for Hide Rows only): Condition for the value of the column whose rows should be hidden (any WHERE-type condition). For example, Age>25 means that the table rows where the Age column's value is not greater than 25 will be hidden.
More examples:
LastName = 'Smith'
LastName LIKE ('%Smi%')
EmployeeKey <= 500
EmployeeKey = 1 OR EmployeeKey = 8 OR EmployeeKey = 12
EmployeeKey <= 500 AND LastName LIKE '%Smi%' AND FirstName LIKE '%A%'
LastName IN ('Smith', 'Godfrey', 'Johnson')
EmployeeKey Between 100 AND 200
Note: if you select a column (or columns) associated with another column (linked by a primary key, for example), you will
be prompted that there are columns that contain related data. Click this message to select the associated
columns. Once you select them, these columns will be added to the list of columns to be masked. More on
associations: Table Relations on page 400.
Important: if you're going to use this masking method, ensure that your
case sensitivity settings correspond to the case sensitivity settings of the
database server you're going to mask data at.
Important: if you're going to use this masking method, ensure that your
case sensitivity settings correspond to the case sensitivity settings of the
database server you're going to mask data at.
FP Encryption FF3 Number: Format-preserving encryption for NUMBER-type values using the FF3 encryption algorithm. + +
Random US Phone Number: Replaces a US phone number with a randomly generated phone number in the following format: 1-555-XXX-XXXX. Available for MySQL, MariaDB, Aurora MySQL, PostgreSQL, Aurora PostgreSQL, Redshift, TiDB, Greenplum, Oracle and MS SQL Server. + +
NULL Value: Replaces the masked database entry with a NULL. + +
Substring: Creates a substring out of the original string. Starting Position defines the starting character of the resulting substring and String's Length defines the substring length. Available for MySQL, MariaDB, Aurora MySQL, Oracle, Redshift, PostgreSQL, Aurora PostgreSQL, TiDB, Greenplum, MS SQL Server. + +
Random String: Returns a random string of a random length (the string's length can be defined with Minimum Length and Maximum Length). Available for MySQL, MariaDB, Aurora MySQL, Redshift, PostgreSQL, Aurora PostgreSQL, Greenplum and Oracle. + +
Fixed date: Replaces date values with a fixed value. Select the date (fixed value) via the (Date) drop-down lists. + +
Fixed time: Replaces time values with a fixed value. Select the time (fixed value) via the (Time) drop-down lists. + +
Fixed datetime: Replaces datetime values with a fixed value. + +
Random date interval: Replaces date values with a random value from a predefined range. Specify a range of dates to select a random value from via the Starting Date and Ending Date drop-down lists. + +
Random time interval: Replaces time values with a random value from a predefined range. Specify a range of time to select a random value from via the Starting Time and Ending Time drop-down lists. + +
Warning: Sometimes data masking will not work. For example, if the Show First and Last algorithm you selected is
configured to show the first three and last three characters of a DB column's entry, and the entry itself is only six
characters long, masking will not be applied. In such cases, use other masking types or purpose-written functions.
Note: when masking entries that include strings of fixed length ("char", "varchar", "nchar", "nvarchar" data types,
for example), the string obtained after masking may be longer than the original string. The following masking types may
cause an obfuscated entry to exceed the original string length:
• Fixed string
• Function call
• Regexp replace
9.11.4.1 Using a Custom Function for Masking
Along with prebuilt masking methods, you can use your own masking algorithms in the form of functions. To
employ custom function-based masking, do the following:
1. Create a function that will be used to mask your data. For example, here is a function for a PostgreSQL database
intended to replace the login parts of emails with random values (consisting of prefixes + mids + suffixes):
Procedure Findings. The patient, Patrick Kelley, is a 39 year old male born on October 6, 1979. He has
a 6 mm sessile polyp that was found in the ascending colon and removed by snare, no cautery. Patrick's
address is 19 North Ave. Humbleton WA 02462. His SSN is 123-23-234. He experienced the polyp after
getting out of his blue Honda Accord with a license number of WDR-436. We were able to control the
bleeding. Moderate diverticulosis and hemorrhoids were incidentally noted. Recurrent GI bleed of
unknown etiology; hypotension perhaps secondary to this but as likely secondary to polypharmacy. He
reports first experiencing hypotension while eating queso at Chipotle.
Masked data:
Example 2
Unmasked data:
Dear Mark,I am writing you to enquire about the status of the task #18897 in TRACKME task manager
(https://cd.trackme.com/18897). As a manager of Customer Development department, it is your
responsibility to speed up this stuck task. As far as I know, it was assigned to Ellie Sanders,
junior customer relationship manager #056. Please speed this up, because Mr. Williams is expecting to
get some insights from your research for the sales campaign which will be kicked off on 2019-11-11.
You can email me at [email protected] call me. My phone no is 202-555-0181P.S. Please check
emails from Mrs. Martinez. She was looking for you to give you some details on your business trip to
Phoenix.Cheers,Mike
Masked data:
*********, I am writing you to enquire about the status of the task #***** in ******* task manager
*****************************). As a manager of ******************** department, it is your
responsibility to speed up this stuck task. As far as I know, it was assigned to *************, junior
customer relationship manager #***. Please speed this up, because ************ is expecting to get
some insights from your research for the sales campaign which will be kicked off on **********. You
can email me at ******************* *or call me. My phone no is ************ P.S. Please check emails
from *************. *** was looking for you to give you some details on your business trip to *******.
Cheers, ***
Note: you need to install Java 1.8+ to be able to use NLP Data Masking. If you're running DataSunrise on Linux, you
need to configure JVM as well (Configuring JVM on Linux on page 173). If you're experiencing some problems with
JVM on Windows, add the path to your JVM folder to the PATH environment variable (for example: C:\Program Files
\Java\jre1.8.0_301\bin\server).
For instructions on how to use Unstructured masking, refer to subs. Dynamic Data Masking on page 164
Configuring JVM on Linux
To utilize the NLP Data Masking, you need to configure a Java Virtual Machine (JVM). To do this, perform the
following:
cd /etc/ld.so.conf.d/
4. Create a configuration file that will be used to register your Java library:
5. Paste the path to your "libjvm.so" into the configuration file. For example:
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.201.b09-0.amzn2.x86_64/jre/lib/amd64/server/
6. Update cache:
sudo ldconfig
/opt/datasunrise/JvmChecker
You should get something like this:
for i = 1, #batchRecords do
if dataFormat==0
then
batchRecords[i]= "masked"
end
end
This script replaces all string values (dataFormat==0) in a table with "masked" string
5. For Static masking, the following Global variables are available:
• columnName (string) - column name
• fullColumnType (string) - actual column type
• columnValue (string) - value contained in the column
• columnType (number) - column data type (0 - number, 1- string, 2 - date, 3 - date and time, 4 - time, 5 - other)
For Static masking, DataSunrise returns table's contents by rows and columns. Thus, you can mask certain
columns with a script. You should use maskValue as the output parameter. See an example of a script below:
if (columnType == 0) then
maskValue = 1
elseif (columnType == 1) then
maskValue = "masked"
elseif (columnType == 2) then
maskValue = "2017.08.09"
elseif (columnType == 3) then
maskValue = "2017.08.09 12:00:00"
elseif (columnType == 4) then
maskValue = "12:00:00"
else
maskValue = "masked"
end
This script replaces values of different types (note columnType) with corresponding values (maskValue). For
example all columns of columnType==1 (string) will be masked by replacing the contents with "masked"
string.
9.11.4.4 Extending Lua Script Functionality
You can plug in third-party Lua modules to extend DataSunrise's Lua functionality.
To access the modules in the DataSunrise Lua snippet, do the following:
1. Use 64-bit C-compiled modules only (.dll, .so).
2. Check the modules for dependencies with the Dependency Walker application before using them.
3. Place all the modules you're going to use (and the ones they depend on) into the DataSunrise installation folder.
4. Example. Let's assume that we're going to use a custom "cjson" module. We open the DataSunrise Lua Script
editor and add the following lines to the script:
local mymodule = {}
function mymodule.foo()
print("Hello World!")
end
return mymodule
To call the function included in this module in your Lua script, add the following lines to the script:
9.11.4.5 Conditional Masking
The Conditional Masking option enables you to obfuscate sensitive data according to different specified conditions.
Sensitive data will be filtered and masked according to the chosen condition.
Conditional Masking is available for the following databases:
• MySQL
• MariaDB
• PostgreSQL
• Oracle
• Aurora PostgreSQL
• Aurora MySQL
• Greenplum
• Redshift
• CockroachDB
• TiDB
• MsSQL
You can use the following types of Conditional Masking:
• Contains (available for string data types only). Checks if the condition meets the sequence of characters specified
in the Value field.
• Does not contain (available for string data types only). Checks if the condition mismatches the sequence of
characters specified in the Value field.
• Matches. Checks if the column value fully matches the value from the Rule.
• Does not match. Checks if the column value does not match the value from the Rule.
• RegEx (available for string data types only). Checks if the column value matches a Regex pattern.
• Custom condition. Checks any custom condition that returns true/false as a result of execution. This may include
checking other columns from the table.
Also, you can use Conditional Masking with the KeepNull option, which takes precedence: Conditional Masking
will not obfuscate NULL values, even if these values match the condition.
Important: Conditional Masking is an additional optional parameter available for all masking methods except
for FP Masking methods, Unstructured Masking, and Masking with Lua script.
Important: AWS RDS MariaDB 10.5+ doesn't support the GRANT ALL privilege. In case your database doesn't
support GRANT ALL, execute the following query:
9.11.7 Masking XML, CSV, JSON and Unstructured Files Stored in Amazon S3 Buckets
DataSunrise can mask columns of CSV files, text inside XML elements of XML files and keys' values of JSON files and
contents of unstructured files stored in Amazon S3 buckets or S3 protocol compatible file storage services such as
Minio and Alibaba OSS. To do this, follow the steps listed below:
1. Create a Dynamic Masking Rule (Dynamic Data Masking on page 164).
2. In the Masking Settings subsection, select the required file type (CSV, XML or JSON) and input the required
information according to the table below:
Parameter Description
Full File Name text field Path to the file which should be masked. Note that it should start
with "/". For example: /mybucket/customers.xml
Fields/Columns text field Names of columns or numbers of columns to mask for CSV or text
inside tags to mask for XML. For example: first_name,last_name
Note that CSV column numbers start with 1 and not 0.
According to the following example, the first_name
and last_name columns of a CSV file will be masked:
first_name,last_name,middle_name
masked,masked,Jonathan
masked,masked,Robert
And the text inside the <first_name> and <last_name> tags will be
masked for XML:
<first_name>masked</first_name> <last_name>masked</last_name>
For XML, an abridged version of XPath is used. You should specify the
tag whose contents should be masked in the following way:
/root_tag/sub_tag1/sub_tag2/sub_tag.../target_tag
For your convenience, you can use "?" for one nesting level or "*" for
unknown number of nesting levels. For example:
/*/first_name means that the contents of all "first_name" tags in the
XML file will be masked.
JSON Path text field (for JSON only) Keys' values of a JSON file to be masked.
Here's a JSON example:
[ { "firstName": "John", "lastName" : "doe", "age" : 26, "address" :
{ "streetAddress": "Naist street", "city" : "Nara", "postalCode" :
"630-0192" }, ]
Note: just in case, you can find the Informix masking scripts in the DataSunrise installation folder, scripts/Masking
folder
1. Navigate to the Audit → Learning Rules section and click Add Rule
2. Enter the required information to General Settings (General Settings on page 111)
3. Input the required information to the Filter sessions subsection (Filter Sessions on page 111)
4. Input the required information to the Actions subsection
Interface element Description
Learn radio button Log incoming queries, database objects, database user names and
client application names and add them to the predefined SQL groups
Skip radio button Ignore incoming queries
Keep Checking the List of Rules check Check other existing Rules even if the current one is triggered
box
Schedule drop-down list See Creating a Schedule on page 219
9.13 Tags
You can assign certain tags to DataSunrise Rules. You can use these tags to quickly locate your Rule or Rules in a list
of Rules. This subsection is common for all types of Rules.
To create a tag for a Rule, do the following:
• Create or open an existing Rule and navigate to the Tags subsection of the Rule's settings
• Click Edit and enter the tag's Key (its logical name) and the tag's Value
• Click Save to save the tag. Once the tag is saved, you will be prompted to create a new tag. Create a new one or
click Close to close the Tags window and save the tags you've created.
Having created a tag, you will be able to see it in the Rule's list (Tags column). You can also click Edit Columns (gear
icon) and select your tag from the list of columns to display all Rules marked with this tag.
To filter Rules by tags, click Filter and select Tags to view.
Specify a date range to display. Use the From drop-down list to select an initial date and the To drop-down list
for an end date of the date range. DataSunrise will display a list of transactional trails (in the form of a table).
Note: You need to create an object group with the required database specified to add to the Rule which affects
DDL queries.
Important: When executing a SELECT query, some SQL clients send additional queries to the database, resulting in
blocking of the SELECT query. In this case, you need to configure a rule which allows SHOW queries. You can check
which query exactly caused the blocking in the Security → Events subsection.
As a result, the "allow_select" rule will allow only SELECT-type queries to the specified schema:
And the "block_dml" rule will block all other DML queries.
As the priority of the allowing rule is higher (it is located higher in the list), only queries to the col1 column will be
allowed and all other queries will be blocked. This query will be allowed:
10 DataSunrise Configurations
In order to utilize its data auditing, protection and masking capabilities (refer to DataSunrise Functional Modules
on page 228), DataSunrise requires information about a target database as well as about its users and client
applications used to query this database.
Configurations section enables you to accomplish the following tasks:
• Creating target DB profiles
• Creating target DB user profiles and profiles of client applications that interact with the target DB
• Entering information about IP addresses (hosts) the target DB is queried from, as well as creating groups of IP
addresses
• Arranging target DB's objects into Object groups
• Arranging user queries intercepted by the firewall into SQL groups
• Creating Schedules
• Configuring notifications on system events via email, instant messengers and Syslog messages.
1. Click Select.
Select a database network interface in the Interface drop-down list located in the Check Columns window.
2. Check the objects of interest in the object tree. Note that you can select a range of objects by clicking the name
of the first object of the range and the name of the last object of the range while holding the Shift key (the
selected items will be highlighted). Then click Select Multiple to check these objects.
3. Click Done to apply changes.
Note: You can search across the database object tree. Enter required DB element's name into the corresponding
text field and click Show:
To be able to preview data in the Object Tree, make sure that the DataSunrise user has the Reading
Database Data Web Console action and the SELECT table grant (see Creating Database Users).
Note: Names of database elements marked with red asterisks are considered to be regular expressions and
not taken directly from the database. If a database connection is missing, all table and column names will be
considered as regular expressions.
3. Select an element to be added to the object group and click Add. In this way, you can add multiple objects one
by one
4. When you're done with adding objects, click Close to complete the operation.
Note: You can search across the database object tree. Enter the required element's name into the corresponding
text field and click Show:
Note: Names of database elements marked with red asterisks are considered to be regular expressions and
not taken directly from the database. If a database connection is missing, all table and column names will be
considered as regular expressions.
Click Query Groups to access Query Groups' settings. A list of SQL statements previously logged by DataSunrise will
be displayed:
To add a new SQL query to a group, click Actions → Add and enter the SQL query code in the Edit the Query
window. If you want to use regular expressions to select SQL statements, check the Regular Expression checkbox.
1. Navigate to the corresponding section of the Web Console: Data Audit (Database Activity Monitoring), Data
Security, or Data Masking.
2. Navigate to the Transactional Trails or Events page, select a SQL statement you want to add to an existing
Group from the list and click its ID to view the query's code.
3. Click Add Query to the Group and select a Group you want to add the SQL query to from the Query Statement
group drop-down list. Click Apply.
10.3 IP Addresses
Rules' settings (DataSunrise Rules on page 110) enable DataSunrise to process queries coming from certain hosts, IP
addresses or networks. To use this feature, you need to create the corresponding host profiles so that DataSunrise is
aware of these IPs.
Hosts subsection enables you to perform the following actions:
• Creating and editing of host profiles (either manually or using a .CSV file)
• Creating and editing of host groups.
Note: It is possible to create host profiles automatically using DataSunrise's self-learning functionality (refer to
Learning Mode Overview).
Address text field (for Host and Network IPv6 types Actual IP address
only)
Network text field (for Network type only) Actual Subnet mask
Starting IP Address text field (for Range IPv4 type Initial IP address of the range
only)
Ending IP Address text field (for Range IPv4 type only) Ending IP address of the range
Network text field (for Range IPv6 type only) Subnet mask
1. Prepare a text file with a list of IP addresses that should be added to DataSunrise. Each line should start with
host;, followed by an IP address.
Example:
host;10.10.0.1
host;10.10.0.25
host;10.10.0.30
2. Click Actions → Import Hosts. The Import Host page will open.
3. Drag and drop your file or click the corresponding link for the file browser and select your file.
If you need to upload a range of IP addresses, begin each line with the range key word (for IPv4 addresses) or
the range_ipv6 key word (for IPv6 addresses), then enter initial IP address and ending IP address of the range
separated with a semicolon:
If you need to upload network settings, each line of your file should start with the network key word (for IPv4
addresses) or network_ipv6 key word (for IPv6 addresses):
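The original examples are not reproduced here; by analogy with the host lines above, range and network entries could look like this (the exact field layout of the network line is an assumption based on the description above):
range;10.10.0.1;10.10.0.100
network;10.10.0.0;255.255.255.0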
Note: The host list uploading is a two-stage process. First, when you drag and drop, the file you choose
is uploaded to the DataSunrise server. Then, when you click Attach, the contents of the file are processed by
DataSunrise.
Tip:
Host groups enable you to handle all IP addresses a group includes as a single object. For example, when creating
a Data Security Rule for blocking queries from multiple IP addresses, you can specify a required host group in the
Rule's settings instead of specifying these hosts one by one.
Note: It is possible to create application profiles automatically using DataSunrise's self-learning functionality (refer
to Learning Mode Overview on page 197).
1. Prepare a .CSV or .TXT file which contains a list of client applications to be added to DataSunrise.
Each line should start with the app; keyword, followed by an application name.
Example:
app;application_name1
app;application_name2
app;application_name3
2. Click Actions → Import Applications. The Import Application page will open.
3. Drag and drop your file or click the corresponding link for the file browser and select your file.
4. Click Attach to save changes.
Note: uploading of an Applications list is a two-stage process. First, when you drag and drop a file, it is
uploaded to the DataSunrise server. And when you click Attach, the file's contents is processed by DataSunrise.
Note: The Subscribers subsection can be used only to configure mail servers and to create subscriber profiles.
To establish subscription on specific Rule events, go to the Rule settings and add existing subscribers to the
Notifications list.
1. Navigate to Subscribers.
2. Click Add Server.
3. Select SMTP in the Type drop-down list.
4. Enter the required data into the Server tab:
Figure 42: Example of External app server settings (Slack Enterprise). The Command field contains the Slack
authorization token.
1. Follow the link https://api.slack.com/ and create a Slack application for sending notifications. You can configure it
to send messages to a certain Slack channel or to certain Slack users.
2. In the DataSunrise's Web Console, navigate to Configuration → Subscribers and add a new Server (Add
Server).
3. Select Slack (direct), use port 443 (default).
4. Specify the Path with tokens. For example, to post to group "#random", the token should look like the following:
T1D93E7U6/BBPKEJWBB/cYJhcmidqsCuL8z9hQsgmeTN
xoxp-00000000000-000000000000-000000000000-00000000000000000000000000000000
2. In the DataSunrise's Web Console, navigate to Configuration → Subscribers and add a new server (Add Server).
3. Select Slack (token), use port 443 (default).
4. Paste the token into the Token field.
5. Input sender's name into the From field.
6. Click Save to save the settings.
1. In the DataSunrise Web Console, navigate to Configuration → Subscribers and add a new server (Add Server).
2. Select Jira. Specify host, protocol (HTTP or HTTPS, HTTP by default). Specify port number (443 for HTTPS, 80 for
HTTP by default).
3. Input your email into the Login field and password into the Password.
4. Input your project key into the Project key field.
5. Click Save to save the settings.
Parameter Description
Body of message
${Event.Time} Time
Misc.
${Event.Time} Time
${Event.Description} Message
Message subject
Body of message
${Report.Time} Time
10.6 Schedules
Schedules can be used to activate and deactivate DataSunrise rules automatically at predefined time.
In fact, Schedules don't control overall DataSunrise's behavior but control the behavior of separate Rules. Thus if you
need to set a Schedule for a certain Rule, you should specify it in the Rule's settings. You can use one Schedule to
control multiple Rules as well.
To create and edit Schedules, navigate to the Schedules subsection.
Note: You can select an exact date via the date picker by clicking the "Calendar" icon to the right of From.
4.2 Select end date of the Schedule's active period from the To drop-down list.
Note: You can select an exact date via the date picker by clicking the "Calendar" icon to the right of To.
Important: Thus a schedule-related Rule will be activated at the initial date of the active period and deactivated at
the end date.
5. If you need a Schedule to activate and deactivate a related Rule periodically (daily, weekly etc.), specify its activity
periods in the Time Intervals subsection.
5.1 Click Add Time Interval.
5.2 Select a day of week the Schedule should activate the related Rule on.
5.3 Specify a period of time the Schedule should activate the related Rule at, in the From and To.
Click Add Time Interval to add another activity interval to the Schedule's settings.
Important: You can create multiple time intervals for one Schedule (for example, for every day of week).
UI element Description
Name text field Logical name of the CEF group
Enabled check box Enable the current group
Members subsection
Add CEF Item button Add new CEF entry. Click the button and you will be redirected to a new
page. Enter item's name, select type of message and enter CEF.
CEFs list Includes system events and corresponding CEF codes of messages
transferred to Syslog
Save button Save current CEF group
UI element Description
Name text field Logical name of a CEF item
Type drop-down list Event type to report on
CEF field CEF code of an item. You can use the Parameters list as a reference
Enabled check box Enable the item
Note: each Periodic Task features a general subsection where you should specify Task's logical name and
DataSunrise server to start the Task on.
Note: Backup Name means the date and time of backing up and at the
same time the name of the folder the backup is saved in.
On Linux, to create new backups in different folders, use the following
External Commands (see below):
Backup Settings check box Include information about DataSunrise's settings in the backup
Backup Users check box Include information about DataSunrise's Users in the backup
Backup Configurations check box Include information about DataSunrise objects: servers, instances
(Interfaces, Proxies, Sniffers, metadata), database Users and Groups, Hosts,
Schedules, Applications, Static Masking tasks, Data Discovery tasks, Report
Generator reports, Query Groups, Subscribers settings, SSL key groups, Data
Discovery groups, CEF groups, Rules
External Command text field An arbitrary command.
Parameter Description
Archive Data to be Removed before Cleaning check box Save the audit data to a separate folder before removal. You can move this data to an Amazon S3 storage. The data will be saved in CSV format compatible with Athena.
Archive Folder field Folder to save the audit data in
Execute Command after Archiving field Execute a command or script to handle the data saved in the Archive Folder (see above). For example, you can move the data to an S3 storage with your script.
Remove All Audited Data Older Than, Days field Self-explanatory. Delete outdated audit data.
Parameter Description
Instances drop-down list Database instance to update metadata of
Note: before selecting your target database in the task's settings, make sure that this database's credentials are
saved in DataSunrise (refer to Creating a Target Database Profile on page 58).
3. In the Source subsection, select a source of users to be imported. It's either the database for database users, or
LDAP, Oracle EBS or SAP ECC for the corresponding users. Note that you can use filtering by database login roles by
selecting such roles in the Roles drop-down list.
Note: if you're using Oracle EBS, you need to grant the following permissions to your Oracle EBS user to be able
to perform user synchronization:
4. Configure other parameters of the task if necessary. Run the task.
5. For added users, navigate to Configuration → Database Users and locate the group specified in the task's
settings.
Note: if the namespace differs from the default one, execute the following command:
^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
5. Set Startup Frequency. Select Manual for manual starting of the task
6. If required, keep all search results or remove old results by checking the corresponding check box.
Important: for the list of supported databases, refer to Supported Databases and Features on page 13.
By default, DataSunrise performs Static Masking directly without using a proxy (a separate Core), but you can also
configure it to create a temporary proxy with a Rule that is processed by the Dynamic Data Masking engine (the old
DataSunrise behavior). Note that this option is not available for the In-Place option (In-Place Static Masking on
page 243).
Important: for Static Masking using random-based methods, you need a dedicated schema (DS Environment) in
your database (see Configuring DataSunrise for Masking with random-based methods on page 177).
We performed testing of Static Masking Rules on PostgreSQL, MySQL, Oracle and AWS Aurora PostgreSQL hosted
on AWS. During testing, DataSunrise was installed on an AWS EC2 machine, and the source and target databases were
hosted on AWS RDS. EC2 and RDS machines of the same class were used:
11 DataSunrise Functional Modules | 229
Database type DB version Arithmetic average, MB/s Max speed, MB/s Configuration (RDS and EC2 class)
PostgreSQL 13 132 146 m5.2xlarge
MySQL 8 13 22 m5.2xlarge
Oracle 19 64 131 m5.2xlarge
AWS Aurora PostgreSQL 12.7 139 131 r5.4xlarge
Important: like other databases, PostgreSQL-based databases (Postgres, Greenplum, Redshift) feature data types
which support time zones (time with time zone, timestamp with time zone). However, Postgres-based databases
convert such data to UTC, which causes loss of information about the original time shift. Thus, the date/time value
is extracted according to the client's time zone, which means that the saved time/date can't be properly restored
when using Static Masking.
Note: DataSunrise's CLI is more convenient for static masking of large amounts of database elements. Refer to the
"Static Masking" subsection of the DataSunrise's CLI Guide.
To create a copy of a certain database with masked columns inside, do the following:
Note: If your target DB doesn't support the "database" entity, the list will include only the master database.
11 DataSunrise Functional Modules | 230
4. Select a target database instance where the database with masked data will be stored, enter credentials for the
database user and click Log on.
Once the connection is established, select a database from the Database drop-down list.
Note: the structure of a target table should be similar to the structure of a source table (same data type, same
column names). If there are no tables of the required structure, DataSunrise will create the required table.
5. By default, Static Masking creates a target table and all its objects if they don't exist. You can also adjust
DataSunrise's behavior by checking/unchecking the corresponding check boxes (most of them are self-
explanatory):
• Create tables if they don't exist
• Create unique constraints
• Create foreign keys: DataSunrise creates foreign keys after all the target tables have been created and the
masked data have been transferred
• Create indexes
• Create check constraints
• Create default constraints
• Apply Related Table Filters (see Table Relations on page 400)
• Automatically resolve relationships between related tables if there are undefined ones (see Table Relations
on page 400)
• Use Parallel Load: enables parallel data loading useful when processing large tables (see Additional
Parameters on page 337, StaticMaskingParallelLoadThreadsCount, see also Creating a MySQL/Aurora
MySQL/MariaDB Database User on page 240)
• Check for empty target table: DataSunrise checks if the target table is empty
• Truncate target tables: DataSunrise cleans the target table
• Disable triggers
• Drop foreign keys before truncating: DataSunrise deletes all foreign keys before wiping out the data then
transfers the masked data and creates new foreign keys (available for MySQL-like databases)
Important: MS SQL Server-specific data types hierarchyid and sql_variant are not recognized by OTL,
which DataSunrise uses to get the table data. Thus, database columns that contain such type of data can’t be
transferred to a target database.
Important: By default, DataSunrise uses the Direct Path Load mechanism to load data from Oracle Database
tables. This mechanism has a restriction: it cannot load XMLType data. To circumvent this restriction, use
standard ODBC instead of Direct Path Load: go to the System Settings → Additional subsection of
the Web Console and enable the LoadXMLTypeViaODBC parameter.
Important: When using "Empty" and "Default" masking algorithms for obfuscation of string-type values in
Oracle, you may get the following error:
This error occurs because Oracle doesn't distinguish between an empty value and NULL, so it tries
to replace masked non-NULL values with NULLs.
11 DataSunrise Functional Modules | 231
7. Select columns to be masked and click Done.
You will be redirected to the Transferred Table subsection.
8. Select the required column, click Set Masking Method and select a masking method to use for this particular
column. Repeat for other columns if necessary.
Note: you can use the Filter feature to transfer source data according to a filter condition. The filtering is
based on column values. For this, click Add Filter, input a condition and save it. As a result, DataSunrise will
transfer to the target table only those rows that match the filter's condition. For example, with the "Age">25
condition, only the entries where the "Age" column's value is higher than 25 will be transferred.
The Rows Count filter enables you to transfer a certain number of rows to the target database. The obligatory
"Order by" parameter works like SQL ORDER BY and identifies the first and the last rows. You can input any values
into the "Limit" and "Offset" parameters. For example, to transfer 10 rows from the database, specify "Offset" = 5
and "Limit" = 10: rows 6 to 15 will be transferred to your target database. This filter also works with related
tables if you enable the "Apply related tables filter" check box, and you can apply it to multiple tables in the
task. If you input a function in the "Order by" filter, the tables should not be related to each other.
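In SQL terms, the Offset/Limit example above roughly corresponds to the following query (an illustrative sketch only, assuming a PostgreSQL-style LIMIT/OFFSET clause and placeholder table and column names):
SELECT * FROM <source_table>
ORDER BY <order_column>   -- the obligatory "Order by" parameter
LIMIT 10 OFFSET 5;        -- "Limit" = 10, "Offset" = 5: rows 6 through 15 are transferred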
You can also upload a list of columns to be masked using a CSV file. Click Import from CSV and upload a CSV
file with the following contents:
dbName,schemaName,tableName,columnName
<DB_name>,<schema_name>,<table_name>,<column_name>
For example:
dbName,schemaName,tableName,columnName
postgres,myschema,url,column1
postgres,myschema,names,id
postgres,myschema,url,id
postgres,myschema,names,name
9. Select a loader to use for static masking. Loaders differ from each other in limitations and performance (refer to
Static Masking Loaders on page 232).
We recommend using the default loader in most cases.
10. In Startup Frequency, set frequency of starting the task. Set Manual to start the task manually on demand.
11. You can also delete obsolete results of Static Masking tasks from your Dictionary by checking Remove Results
Older Than and specifying a time period. This prevents the Dictionary from being flooded with outdated data.
11 DataSunrise Functional Modules | 232
Loader Description
Load operator TBuild Load operator-based loader. Similar to FastLoad. Doesn't support the following
column data types:
• Long BLOB
• Var Graphic
• Byte
• CLOB
• JSON
• XML
• Period Date
• Period Time
• Period Time Tz
• Period Timestamp
• Period Timestamp Tz
Stream Operator TBuild Stream operator-based loader. Similar to Tpump operator. Doesn't support the
following column data types:
• Long BLOB
• Var Graphic
• Byte
• Period Date
• Period Time
• Period Time Tz
• Period Timestamp
• Period Timestamp Tz
Update Operator TBuild Update operator-based loader. Doesn't support the following column data
types:
• Long BLOB
• Var Graphic
• Byte
• Period Date
• Period Time
• Period Time Tz
• Period Timestamp
• Period Timestamp Tz
dbName,schemaName,tableName,columnName,maskType,maskValue
"postgres","public","addresses","user_id","fixed number","123"
"postgres","public","addresses","street","fixed string","masked"
dbName,schemaName,tableName,columnName
If your CSV file contains only these columns (without maskType and maskValue), the columns specified in the file
will be selected for masking, but you will need to assign masking methods for the columns manually.
For columns which are not included in your CSV file, masking type will not be changed. You can assign masking type
by checking the required columns and selecting masking method from the Masking Method drop-down list.
To use a CSV file for specifying database columns and masking methods, do the following:
1. In the Select source tables to transfer and columns to mask subsection, click Import Columns from CSV and
select your CSV file to upload. Click Import. All columns included in your CSV file will be shown and masking
methods will be selected according to the CSV file.
2. After assigning masking types for all required columns, click Save.
11 DataSunrise Functional Modules | 235
List of available masking methods and masking values
Masking method name Masking value example Notes
Default ""
Fixed number "123"
Fixed string "abc" Double quotes should be escaped. For example: "maskValue" : "ab\"c"
Empty value ""
Random value like current ""
Random from interval "{ \"minVal\":\"123\", \"maxVal\":\"1234\", \"decimals\":\"1\" }" minVal - minimum value. maxVal - maximum value. decimals - a number of values after the decimal position.
Function call "{ \"function_name\":\"my_function\", \"arguments\": [ { \"type\":\"masked_column\", \"value\":\"\" }, { \"type\":\"user_name\", \"value\":\"\" } ] }"
Email masking ""
Email masking full ""
Mask username of Email ""
Credit card masking ""
Mask last chars "{ \"maskCount\":3, \"paddingText\":\"*\" }" maskCount - character count. paddingText - masking text.
Show last chars See "Mask last chars" See "Mask last chars"
Mask first chars See "Mask last chars" See "Mask last chars"
Show first chars See "Mask last chars" See "Mask last chars"
Show first and last chars See "Mask last chars" See "Mask last chars"
Mask first and last chars See "Mask last chars" See "Mask last chars"
Regexp replace "{ \"replaceString\":\"*\", \"pattern\":\"qwe\" }" pattern - a regular expression. replaceString - masking text.
Fixed datetime "2020-02-21 01:02:03" Format: "YYYY-MM-DD hh:mm:ss".
Fixed date "2020-02-21" Format: "YYYY-MM-DD".
Fixed time "01:02:03" Format: "hh:mm:ss".
11 DataSunrise Functional Modules | 236
2. To grant the new user the required privileges, execute the following query (being logged in as the SYS user):
GRANT CREATE SESSION, CREATE ANY TABLE, SELECT ANY TABLE, INSERT ANY TABLE, ALTER ANY TABLE,
SELECT_CATALOG_ROLE TO <User_name>;
GRANT EXECUTE ON dbms_metadata to <User_name>;
GRANT DROP ANY TABLE TO <User_name>;
GRANT RESOURCE TO <User_name>;
GRANT CREATE ANY INDEX TO <User_name>;
GRANT CREATE ANY PROCEDURE TO <User_name>;
GRANT CREATE ANY VIEW TO <User_name>;
GRANT CREATE ANY SEQUENCE TO <User_name>;
• If you get an error caused by insufficient privileges when accessing the "users" tablespace, execute the
corresponding query (see the sketch below).
• To enable DataSunrise to download the database's metadata, it is necessary to grant your user the privileges
listed in Creating an Oracle Database User on page 64.
Important: Oracle offers two loaders for execution of a Static Masking task: DBLINK and Direct Path. Both
loaders require extra permissions:
• To use DBLINK, you need an additional grant (see the sketch below).
• If you're going to use random-based masking methods (Masking Methods on page 167) for Static masking,
you need to create a dedicated schema named DS_ENVIRONMENT in your source database (refer to
Configuring DataSunrise for Masking with random-based methods on page 177).
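The queries mentioned in the two items above are not quoted here. As a rough, non-authoritative sketch (placeholder user name, default users tablespace, standard Oracle syntax; verify the exact privileges your environment requires), they typically look like this:
-- Grant a quota on the "users" tablespace if you get the insufficient-privileges error:
ALTER USER <User_name> QUOTA UNLIMITED ON USERS;
-- Grant typically needed to use the DBLINK loader:
GRANT CREATE DATABASE LINK TO <User_name>;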
11 DataSunrise Functional Modules | 238
11.1.4.2 Creating a PostgreSQL/Aurora PostgreSQL Database User
1. To create a new PostgreSQL/Aurora PostgreSQL user, execute the corresponding query (see the sketch below).
2. Execute the following queries to provide your user with the necessary privileges:
In case you're using a static function for masking, it is necessary to grant the privilege of executing that function:
In case you're going to use Truncate Target Tables, grant the following privilege:
3. In case the source table is created by another user, execute the following query:
4. If you're going to use the DBlink loader for masking, install the required extension:
5. If you're going to use random-based masking methods (Masking Methods on page 167) for Static masking, you
need to create a dedicated schema named DS_ENVIRONMENT in your source database (refer to Configuring
DataSunrise for Masking with random-based methods on page 177).
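The queries referenced in the steps above are not quoted here. The following is a rough, non-authoritative sketch of what steps 1, 2, 4 and 5 typically involve (placeholder names, standard PostgreSQL syntax; the exact set of privileges DataSunrise requires may differ):
-- Step 1: create the user
CREATE USER <User_name> WITH PASSWORD '<Password>';
-- Step 2: typical privileges for reading the source schema and writing to the target schema
GRANT USAGE ON SCHEMA <Source_schema> TO <User_name>;
GRANT SELECT ON ALL TABLES IN SCHEMA <Source_schema> TO <User_name>;
GRANT USAGE, CREATE ON SCHEMA <Target_schema> TO <User_name>;
-- If you're going to use Truncate Target Tables:
GRANT TRUNCATE ON ALL TABLES IN SCHEMA <Target_schema> TO <User_name>;
-- Step 4: the DBlink loader relies on the dblink extension
CREATE EXTENSION IF NOT EXISTS dblink;
-- Step 5: schema used by random-based masking methods
CREATE SCHEMA DS_ENVIRONMENT;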
2. Execute the following query to provide the user with necessary privileges:
3. In case you're using a static function for masking, it is necessary to grant the privilege of execution of that
function:
2. Granting required privileges for static masking includes several stages (depends on SAP Hana database roles
management):
a) Since the Static Masking feature requires selecting data from <Source_schema>.<Source_table> (optional)
and further data insertion into <Target_schema>, grant the privileges required for access to the source
schema to the new user while logged in as the <Source_schema> owner.
b) While logged in as the <Target_schema> owner, grant the new user the privilege required for data insertion
into <Target_schema>.
2. Execute the following query to provide the user with necessary privileges:
3. In case you're using a static function for masking, it is necessary to grant the EXECUTE privilege for that function:
4. If you're going to use random-based masking methods (Masking Methods on page 167) for Static masking, you
need to create a dedicated schema named DS_ENVIRONMENT in your source database (refer to Configuring
DataSunrise for Masking with random-based methods on page 177).
11 DataSunrise Functional Modules | 240
11.1.4.6 Creating a MySQL/Aurora MySQL/MariaDB Database User
1. To create a new MySQL/Aurora MySQL/MariaDB user, execute the corresponding query (see the sketch below).
2. Execute the following query to provide the user with necessary privileges:
In case you're using a static function for masking, it is necessary to grant the privilege to execute that function:
If the Use Parallel Load check box is enabled (Masking → Static Masking → Transferred Tables), it is necessary to
grant INSERT for all tables:
If you're going to use the Random from Lexicon masking method for dynamic or static masking, you need to
provide your user with the grants listed below. DataSunrise requires these grants to be able to create a schema:
3. If you're going to mask the contents of functions, grant your user the following privileges:
• MySQL 8.0, if the user specified in DEFINER doesn't have the system privileges:
• MySQL 8.0, if the user specified in DEFINER has the system privileges (root, for example): provide the user with
the aforementioned grant. Additionally, grant the following privilege:
• MySQL 5:
• For masking of data inside stored procedures and functions, you need to grant your user the following
privilege:
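The queries referenced in steps 1 and 2 above are not quoted here. A rough, non-authoritative sketch (placeholder names, standard MySQL syntax; the exact privilege list DataSunrise requires, including the DEFINER and stored-procedure cases, may differ):
-- Step 1: create the user
CREATE USER '<User_name>'@'%' IDENTIFIED BY '<Password>';
-- Step 2: typical privileges on the source and target schemas
GRANT SELECT ON <Source_schema>.* TO '<User_name>'@'%';
GRANT SELECT, INSERT, CREATE, ALTER, INDEX, DROP ON <Target_schema>.* TO '<User_name>'@'%';
-- If a static function is used for masking:
GRANT EXECUTE ON FUNCTION <Schema_name>.<Function_name> TO '<User_name>'@'%';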
2. Execute the following query to provide the user with necessary privileges:
GRANT LIST ON AGGREGATE, DATABASE, EXTERNAL TABLE, FUNCTION, GROUP, MANAGEMENT TABLE, MANAGEMENT
VIEW, PROCEDURE, SEQUENCE, SYNONYM, SYSTEM TABLE, SYSTEM VIEW, TABLE, USER, VIEW to <User_name>;
GRANT SELECT ON <Source_table> TO <User_name>;
GRANT INSERT ON <Target_schema> TO <User_name>;
GRANT CREATE TABLE IN <Target_schema> TO <User_name>;
In case you're using a static function for masking, it is necessary to grant the privilege to execute that function:
2. Execute the following query to provide the user with necessary privileges:
In case you're using a static function for masking, it is necessary to grant privileges for execution of that function:
3. If you're going to use random-based masking methods (Masking Methods on page 167) for Static masking, you
need to create a dedicated schema named DS_ENVIRONMENT in your source database (refer to Configuring
DataSunrise for Masking with random-based methods on page 177).
2. Provide the new user with the required privileges using the following queries:
2. Execute the following query to provide the user with necessary privileges:
In case you're using a static function for masking, it is necessary to grant the privilege to execute that function:
db.grantRolesToUser("<User_name>", ["readWrite"])
11 DataSunrise Functional Modules | 243
Note: you should grant the readWrite privilege for each database involved in the masking process, so before
granting the permission, execute the following command:
use <Source_DB>
or
use <Target_DB>
respectively
Important: please take into account that the data in your source table is replaced with masked values during the
in-place masking process, and this process is irreversible. To avoid losing your valuable data, back up the source
table if necessary.
Note: for Oracle, MS SQL Server and MySQL-like databases (MySQL, MariaDB, Aurora MySQL), the transfer of
existing triggers is enabled by default.
Search Method drop-down list Select a search method. This method depends on the Column Data Type
selected.
• Template (Strings Only). A template used to search inside columns. It
can be a regular expression.
• Unstructured text (Strings Only). Unstructured text
• NLP discovery (Strings Only). Refer to subs. NLP Data Discovery on
page 250
• Lua Script. Enable Lua scripting for searching. Lua enables you to
create simple scripts that define the structure of the content you want
to search for.
Note: Example:
if (string.match(columnName, "first_name") and columnSize == 8) then
return 1 else return 0 end
In this case, if a column's name matches first_name and the column size equals 8 ("==" or "=",
but you can use ">=" or "<=" as well), the script returns 1 and DataSunrise displays the column
in the search results. Otherwise (return 0), DataSunrise won't display any results.
Filename keyword validation check box (for Unstructured text) Search across files whose names include words from a specified Lexicon. You need to select a Lexicon of interest in the Lexicon drop-down list
11 DataSunrise Functional Modules | 248
Negative keyword validation check box (for Unstructured text) • Words list: Lexicon to exclude from the search
• Whole file search: search across a complete file
• Number of words: specify a number of words to exclude from the search. Used together with the By number of words check box
• Direction: the direction in which to search for the words
Validation check box Validator: validation method to use. Luhn algorithm is available by default
but you can also use your Lua Script. Navigate to Configuration → Lua
Scripts and create the required Lua script. Then select your script in the
Validator drop-down list
Default Masking Method tab
Main Masking Method drop-down list Masking algorithm to be used for a given type of data. Refer to subs. Masking Methods on page 167
Mask Value field Masking value
Alternative Masking Method drop-down list The masking method to be used if there's a relation between the discovered column and other columns by foreign keys and the main masking method can't be used
Note: The PCRE library is used for regular expressions, so PCRE syntax should be used when creating
templates. For example, the following expression is used to search for phone numbers in a database column:
^\+(?:[0-9] ?){6,14}[0-9]$
8. Click Save to save the attribute. Add additional attributes to the filter if necessary.
9. To view filter's settings, click the Information Types link in the left panel, select a filter from the list and click its
name.
10. Note that you can filter the list of Information Types by associated countries. To do that, turn on the Group by
Countries switch.
UI Element Description
Database Instance drop-down list Database instance to search sensitive data across
Credentials button User credentials used to connect to the target database
Save Search Results in an Object Group Select an Object Group to save the search results in (not obligatory)
drop-down list
SELECT strategy drop-down list • Select top rows: SELECT first rows of the target table defined by the
Number of analyzed rows value;
• Select random rows: SELECT random rows;
• Select all rows: SELECT an entire table.
Startup Frequency
Frequency drop-down list Task running frequency. You can use Manual for manual starting.
Note: Data Discovery for Amazon S3 offers more settings than "regular" Data Discovery.
11 DataSunrise Functional Modules | 250
UI Element Description
Enable AWS S3 Inventory metastore mode check box Enable a Crawler task (see AWS S3 Crawler on page
253)
Enable statistics on data processing speed check box Display statistics on data processing speed
Enable statistics on attributes check box Display statistics on file attributes
Additional Metrics check box Display additional metrics
Task Mode drop-down list • Standard: standard Data Discovery
• Incremental: enable Incremental scanning (see
Incremental Data Discovery on page 254)
• Randomized: enable Randomized scanning (see
Randomized Data Discovery on page 254)
5. If required, keep all Data Discovery results or remove old results by checking the corresponding check box.
Note: you need to install Java 1.8+ to be able to use NLP Data Discovery.
This script searches for database columns named "first_name" which contain entries 8 characters long.
If the NativeOCRHandlingOnExternalOCRError parameter is active, the file will be processed by the native OCR when
the external OCR fails to process the image or processes it with an error.
DataSunrise OCR Data Discovery supports the following file formats:
• JPEG
• JPEG 2000
• GIF (non-animated)
• PNG
• TIFF
• WebP
• BMP
• PNM
• PDF
Note: you need to install Java 1.8+ to be able to use OCR Data Discovery with NLP Data Discovery
11 DataSunrise Functional Modules | 253
Once you've started an OCR Data Discovery task, DataSunrise browses the contents of your S3 bucket for images.
The OCR Data Discovery engine's preprocessor prepares images for further processing by increasing their contrast
and sharpness. Then, using the Tesseract OCR technology, DataSunrise recognizes the text pictured in the images and
performs Data Discovery on this text according to your DD Task's settings. As a result, you get the names and
locations of image files that contain sensitive data.
To use OCR Data Discovery, do the following:
1. Navigate to Data Discovery → Periodic Data Discovery
2. Create a Data Discovery task for your S3 bucket
3. Run the task and DataSunrise will perform OCR discovery automatically.
Parameter Description
Template to crawl through settings field Specify path to S3 folder to browse with the Crawler:
<level1_folder_logical_name>/<level2_folder_logical_name>/
<levelN_folder_logical_name>. Only ASCII characters are allowed.
Note that each path segment should be enclosed in <> and separated
by a slash
Allowed values fields Slash-separated exact values of the level (with no spaces before/after).
Note that AWS S3 bucket folder names may contain spaces
5. If required, keep all Data Discovery results or remove old results by checking the corresponding check box.
To grant the new user the required privileges, execute the following query (as the SYS user):
To specify only the table to perform a 'Data Discovery' tasks on, execute the following query:
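As an illustration only (not necessarily the exact query DataSunrise requires; standard Oracle grant syntax and placeholder names are assumed), such a per-table grant looks like this:
GRANT SELECT ON <Target_schema>.<Table_name> TO <User_name>;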
To let the new user perform 'Data Discovery' tasks on all tables in a specific schema, execute the following script:
BEGIN
FOR t IN (SELECT * FROM all_tables WHERE OWNER = '<Target_schema>')
LOOP
EXECUTE IMMEDIATE 'GRANT SELECT ON ' || t.OWNER || '.' || t.TABLE_NAME || ' TO <User_name>';
END LOOP;
END;
2. Execute the following query to provide the user with necessary privileges:
3. For Greenplum lower than 6.0: grant your user the privilege of SELECT for each table you're going to search
across:
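A minimal sketch of such a per-table grant (standard PostgreSQL/Greenplum syntax, placeholder names; the exact objects to cover depend on your search scope):
GRANT SELECT ON <Schema_name>.<Table_name> TO <User_name>;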
Execute the following query to provide the user with necessary privileges:
Execute the following query to provide the user with necessary privileges:
Execute the following queries to provide the user with necessary privileges:
Execute the following query to provide the user with necessary privileges:
SELECT EXCEPTION test.dbo.NewTable err [Incompatible data types in stream operation Column: 1<RAW>,
datatype in operator <</>>: CHAR. errCode = 32000], Query : SELECT [Column1], [Column3], [Column4],
[Column5], CAST([Column6] as char) AS [Column6], [Column7], [Column8], [Column9], [Column10] FROM
[dbo].[NewTable]
ORDER BY Column1
OFFSET 0 ROWS FETCH NEXT 100 ROWS ONLY
Do the following:
• Run SSMS as the administrator (it is required to create a certificate for a global user)
• Create a master key stored on your local PC. Name the certificate or create a new one
• Create an encryption key that uses the master key
• Encrypt your column with this key
Important: if you've changed an Adaptive Server character set to a multibyte charset, upgrade text values by
executing the following query (the table should be in the current database):
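The query itself is not quoted here. Assuming Adaptive Server's standard command for upgrading text values after a character set change (verify against your Sybase documentation), it is run for each affected table, for example:
dbcc fix_text(<Table_name>)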
11.4 Reporting
The Reporting section includes tools for creating reports on DataSunrise operations.
Note: the reporting component always displays depersonalized user queries, so no actual query text can leak.
11.4.1 Reports
Click Reports in the Event Monitor section.
To view a report, perform the following:
1. Select a report to view in the Report Type drop-down list:
11 DataSunrise Functional Modules | 260
2. Specify a database instance to make a report for via the Instance drop-down list.
3. Specify a reporting time frame using the From and To drop-down lists.
4. Click Refresh to refresh the operations list.
5. To change the method of displaying a report, click the corresponding link:
• Table — display report in the form of a table.
• Graph — display report in the form of an interactive chart.
6. You can also export a report to a PDF or CSV file. Select file format in the Format drop-down list and click
Export.
Note: use filter parameters from the Columns in Report subsection, Filter
column to specify the data to be included in a report (see Data Filter Values
on page 262 for the full list of Filter values)
Instance and Object Group tab Select a database instance to create a report for, in the DB Instance drop-
down list or report on actions in respect of the objects from the selected
Object group
Requests per Grouping Period drop-down list If necessary, select data to include in a report by total number of user queries per grouping period or by total number of rows returned to a query
Total Number of Returned Rows drop-down list If necessary, select data to include in a report by total number of returned rows
Query Types tab If necessary, select Query types to include in a report
Rules to Report on tab If necessary, select existing Rules to report on. This enables you to create
Report Generator reports on events captured by certain Rules
Include Operations with Error check box Include failed operations in a report (operations with "error" status in the Transactional Trails)
6. Select columns to include in a report in the Columns in Report subsection. This list includes column names
and the corresponding parameters in the Filter columns. You can use these parameters to configure the Data
Filter. Refer to Data Filter Values on page 262.
7. In the Period drop-down list, specify a reporting time frame.
8. Specify a regularity of generating reports in the Frequency of report generation subsection:
Interface element Description
Start From drop-down list Initial date and time of the report generating period
Drop-down list Frequency of report generation (once, hourly, daily, weekly, monthly).
11 DataSunrise Functional Modules | 262
9. Configure the additional settings in the Export Options:
Interface element Description
Send to Subscribers drop-down list Send report file to a Subscriber (Subscriber Settings on page 212)
Query Length Limit field Specify query length limit
External Command text field Send specified parameters to an external application
10. Click Save to save the task. You will be redirected to the task list.
11. Click Edit to view the task's details.
12. To generate a report, click Start Now. All reports are displayed in the Reports tab. To view a report, click Open.
connections.client_host='127.0.0.1'
11 DataSunrise Functional Modules | 263
sessions.user_name Database user name used for connection String, 1024 chars max
sessions.service_name SID or database service name granted access String, 1024 chars max
sessions.os_user Operating system name of the client app String, 1024 chars max
tbl_objects.tbl_name Name of the table mentioned in client application queries String, 1024 chars max
tbl_objects.sch_name Name of the database schema used in client application queries String, 1024 chars max
tbl_objects.db_name Name of the database used in client application queries String, 1024 chars max
11.4.3 VA Scanner
This feature enables you to view all known vulnerabilities for the databases included in your DataSunrise's
configuration according to the CVE database and according to the Security Guidelines by CIS and DISA. The
feature also enables you to view recommendations on fixing these vulnerabilities. Note that you need an Internet
connection to be able to update your vulnerability database.
CVE Guidelines are available for the following databases:
• Apache Hive
11 DataSunrise Functional Modules | 264
• Apache Cassandra
• Apache Impala
• Elasticsearch
• Greenplum
• IBM Informix Dynamic Server
• IBM Netezza
• MongoDB
• MS SQL Server
• MySQL
• MariaDB
• Oracle Database*
• PostgreSQL
• SAP Hana
• Sybase
• Teradata Express
• Vertica
• 1.0.2.2, 1.0.2.2 R1
• 3.0.1, 3.2, 3.2.0.00.27
• 4.0, 4.0.8, 4.0.8 R2, 4.1, 4.2.0, 4.2.1, 4.2.3
• 5.1
• 7, 7.0.2, 7.0.64, 7.1.3, 7.1.4, 7.1.5, 7.3, 7.3.3, 7.3.4
• 8, 8.0, 8.0.1, 8.0.2, 8.0.3, 8.0.4, 8.0.5, 8.0.5.1, 8.0.6, 8.0.6.3, 8.1, 8.1.5, 8.1.6, 8.1.7, 8.1.7 R1, 8.1.7.0.0, 8.1.7.4, 8.1.7.4
R3
• 9, 9.0, 9.0.1, 9.0.1.4, 9.0.1.5, 9.0.2.4, 9.0.2.4 R2, 9.0.4, 9.2, 9.2.0.1, 9.2.0.2, 9.2.0.3, 9.2.0.4, 9.2.0.5, 9.2.0.6, 9.2.0.6
R2, 9.2.0.7, 9.2.0.7 R2, 9.2.0.8, 9.2.0.8 R2, 9.2.0.8DV, 9.2.0.8DV R2, 9.2.1, 9.2.2, 9i
• 10, 10.1, 10.1.0.2, 10.1.0.3, 10.1.0.3, 10.1.0.3.1, 10.1.0.4, 10.1.0.4 R1, 10.1.0.4.2, 10.1.0.4.2 R2, 10.1.0.5, 10.1.0.5
R1, 10.1.8.3, 10.2, 10.2.0.0, 10.2.0.1, 10.2.0.1 R2, 10.2.0.2, 10.2.0.2 R2, 10.2.0.3, 10.2.0.3 R2, 10.2.0.4, 10.2.0.4.2,
10.2.0.5, 10.2.1, 10.2.1 R2, 10.2.2, 10.2.3, 10g
• 11, 11.1.0.6, 11.1.0.6.0, 11.1.0.7, 11.1.0.7.0, 11.1.0.7.3, 11.2.0.1, 11.2.0.1.0, 11.2.0.2, 11.2.0.3, 11.2.0.4, 11g, 11i
• 12.1.0.1, 12.1.0.2, 12.2.0.1, 12c
• 18, 18.1, 18.1.0.0, 18.2, 18c
• 19c
DISA Guidelines are available for the following databases:
• IBM DB2 version 10.5
• MongoDB 3.2, 3.4
• MS SQL Server 2005, 2012, 2014, 2016 Database
• MS SQL Server 2005, 2012, 2014, 2016 Instance
• Oracle Database 9i, 10g, 11g, 11.2g, 12c
CIS Guidelines are available for the following databases:
• IBM DB2 version 8, 9 & 9.5, 10
• MongoDB 3.2, 3.4, 3.6
• MS SQL Server 2008 R2
• MS SQL Server 2012, 2014, 2016, 2017, 2019
• Oracle Database 11.2g, 12c
• PostgreSQL 9.5, 9.6, 10, 11, 12
11 DataSunrise Functional Modules | 265
Important: your database instance's credentials should be saved in DataSunrise for VA Scanner to work.
First, you should create a dedicated periodic task to check the availability of the vulnerabilities database and
download the required files. If a new version of the database file is available and an Internet connection is
present, DataSunrise downloads it from update.datasunrise.com and saves it in the AF_HOME folder. Then the periodic
task browses DataSunrise's configuration and forms a list of vulnerabilities for each of the databases included in
the periodic task. This information is saved in the task's results.
DBMS DB version
PostgreSQL 9.5, 9.6, 10, 11, 12
MS SQL Server 2005, 2008 R2, 2012, 2014, 2016
Oracle Database 9, 10, 11, 11.2, 12
IBM DB2 8, 9, 10, 10.5
MongoDB 3.2, 3.4, 3.6
To be able to use VA Scanner, you need to provide your user with the grants listed below.
• Postgres 9.5, 9.6
• Oracle 9, 10, 11
• Oracle 11.2
• Oracle 12
• DB2 8
• DB2 9
• DB2 10
use admin
db.createRole(
{
role: "getParamRole",
privileges: [ { resource: { cluster: true}, actions: [ "getParameter" ] } ],
roles: []
}
)
db.createUser(
{
user: "<User_name>",
pwd: "<Password>",
roles: [ { role: "readAnyDatabase", db: "admin" }, { role: "getParamRole", db: "admin" } ]
}
)
OR
use admin
db.createRole(
{
role: "dataSunriseRole",
privileges: [ { resource: { cluster: true}, actions: [ "getParameter" ] } ],
roles: [ { role: "readAnyDatabase", db: "admin" } ]
}
)
db.createUser(
{
user: "<User_name>",
pwd: "<Password>",
roles: [ { role: "dataSunriseRole", db: "admin" } ]
}
)
Note: all users that do not belong to any group will be blocked by the database firewall.
Parameter Description
Search in Database drop-down list Database instance to search for sensitive data across
Schema drop-down list Schema to search for sensitive data across
Exclude Search in place. Skip Query Exclude specified objects from the search
Analyzed Row Count field Number of table rows to SELECT
Max Percentage of NULL field SELECT the next "Analyzed Row Count" number of rows if the number of NULL-containing rows exceeds the "Max percentage of NULL" value
Min Percentage of NULL field Minimum percentage of rows in a column that match the search filter conditions to consider the column as containing the required sensitive data
3. Specify security standards or Information types (search filters) to use for searching the sensitive data in the Search
Criteria
4. Set frequency of performing the sensitive data search. Click Next Step
5. At the next tab, select database columns to mask and select masking algorithms to use. DataSunrise creates
Dynamic masking rules for the columns specified and uses the masking algorithms specified at this step. Click Next
Step.
6. At the next tab assign existing user groups to Compliances Roles. Click Next Step.
7. At the next step specify Report type and frequency of reporting. DataSunrise creates a Report Gen task for the
columns specified at the second step according to these settings. Click Finish Master.
As a result, DataSunrise creates a dedicated Object group which is used to store information about columns with
sensitive data found by Data Discovery. Also DataSunrise creates Data Audit, Data Security and Data Masking rules
to protect the columns with sensitive data. Report Gen tasks for the protected columns are created as well.
Action Description
Access to Elasticsearch Configure access to an Elasticsearch database to transfer your audit data to
index
Access to Kibana Configure access to Kibana integrated with your Elasticsearch database
Transfer Audit to Create a periodic task to move old audit data to your Elasticsearch database and
Elasticsearch periodic task pass new audit data to that database
2. Configure Elasticsearch:
11 DataSunrise Functional Modules | 272
Parameter Description
Authentication Method Method of authentication in your Elasticsearch database:
• AWS Regular
• IAM Role
• Regular: login/password
• Without Authentication
3. Configure Kibana:
11 DataSunrise Functional Modules | 273
Parameter Description
Authentication Method Method of authentication in your Kibana:
• Active Directory
• Regular: login/password
• Without Authentication
Note: if you select HTTPS, you need to disable the KibanaVerifySSL additional
parameter in System Settings → Additional Parameters (refer to Additional
Parameters on page 337)
5. Run the periodic task to transfer audit data to your Elasticsearch database and display it by Kibana
6. For details on audited events, navigate to Audit → Analytics:
7. Note that you can access ElasticSearch and Kibana settings at System Settings → Audit Storage page's tabs.
8. To see the events transferred to your Elasticsearch in Kibana itself, open Kibana's Web Console, navigate to
Discovery and select the index you provided while configuring Elasticsearch in DataSunrise:
12 Resource Manager | 275
12 Resource Manager
This feature enables you to manage the DataSunrise configuration according to Infrastructure as Code (IaC)
principles.
Resource Manager enables you to do the following:
• Manage your DataSunrise infrastructure through declarative templates rather than scripts
• Deploy, manage, and monitor all the DataSunrise resources as a group rather than handling these resources
individually
• Redeploy DataSunrise throughout the development lifecycle and have confidence your resources are deployed in
a consistent state
• Define dependencies between resources so they're deployed in the correct order
• Apply tags to resources to logically organize all the resources in your configuration.
For example, Resource Manager can be used to deploy a new complex DataSunrise configuration using a pre-
created template. Another example is exporting the configuration of an existing DataSunrise Instance to other
Instances.
Basic definitions:
• Resource: an entity (DataSunrise object such as Rule, DB Instance, proxy, etc.) to be included in a Resource Group
• Resource Group: the result of Template deployment. In other words, it's a configuration created according to a
Template
• Template: Resource group definition (DataSunrise objects' description and corresponding parameters' values) in
the form of a JavaScript Object Notation (JSON) file
• Parameter: Resource's parameter/value pair. Parameters are included in a Template or in a dedicated JSON file
• Changeset: changes made to a deployed Resource group.
{
"DSTemplateVersion" : "2020-03-10",
"ExternalResources" : {},
"Description" : "Template",
"Mappings" : {},
"Parameters" : {},
"Resources" : {}
}
"ExternalResources" : {
"DbUser_1" : {
"Properties" : {
"Instance" : {
"Ref" : "Instance_1"
},
"Login" : "postgres"
},
"Type" : "DbUser"
},
"DbUser_2" : {
"Properties" : {
"Instance" : {
"Ref" : "Instance_1"
},
"Login" : "test"
},
"Type" : "DbUser"
},
"Server_1" : {
"Properties" : {
"Name" : "local"
},
"Type" : "Server"
}
},
To mark a resource as "external", when Exporting a Resource Group (Resource Manager → Resource Groups
→ Export) in the Export DataSunrise object to code window, select the resource of interest and check the
corresponding External check box.
"Mappings" : {
"LocalServerID": "1",
"QueryGroups": {
"PgAdminQueries": "-102"
}
},
Constants can be addressed from the Resources section in the following way:
"Parameters" : {
"Password_Instance_1" : {
"Description" : "",
"Type" : "String"
},
"Password_Instance_3" : {
"Description" : "",
"Type" : "String"
},
},
Parameters can be addressed from the Resources section in the following way:
"Password" : {
"Ref" : "Password_Instance_1"
},
"Resources" : {
"Instance_1" : {
"Properties" : {
"AcceptOnlyTFAUsers" : "False",
"AdditionOption" : "",
"AsSYSDBA" : "False",
"ConnectType" : "SID",
"CustomConnectionString" : "",
"DatabaseName" : "postgres",
"DatabaseType" : "PostgreSQL",
"EnableAgent" : "False",
"FullyQualifiedDomainName" : "",
"InstanceName" : "postgres",
"KerberosRealm" : "",
"KerberosServiceName" : "postgres",
"LoadingTableReferences" : "False",
"Login" : "postgres",
"LoginType" : "Regular",
"MetadataRetrievalMethod" : "Usual",
"Password" : {
"Ref" : "Password_Instance_1"
},
"PasswordVaultType" : "LocalDB",
"QueryGroupFilter" : "{\"groups_id\":[]}",
"ServerName" : "",
"UseConnectionString" : "False",
"VerifyCA" : "False"
},
"Type" : "Instance"
},
"Interface_1" : {
"Properties" : {
"AdditionOption" : "",
"CryptoType" : "Usual",
"Instance" : {
"Ref" : "Instance_1"
},
"InterfaceHost" : "localhost",
"InterfacePort" : "5432",
"IpVersion" : "Auto",
"ProtocolType" : "Other",
"SslKeyGroup" : "0",
"VerifyCA" : "False"
},
"Type" : "Interface"
},
{
"PostgresHost": "10.0.14.168",
"PostgresDatabasePort": "54100",
"PostgresLogin": "postgres",
"PostgresPassword": "1234"
}
"Parameters" : {
"PostgresHost": {
"Description" : "Database host",
"Type": "String"
},
"PostgresDatabasePort": {
"Description" : "Database port",
"Type": "Integer"
},
"PostgresLogin": {
"Description" : "Login used to access the Postgres database",
"Type": "String"
},
"PostgresPassword": {
"Description" : "Password used to access the Postgres database",
"Type": "String"
}
},
1. First, you need to prepare the JSON for your template. You can do it either manually or by exporting the
configuration of your existing DataSunrise instance (Exporting DataSunrise Configuration into Template on page
279).
2. Navigate to Resource Manager → Templates
3. Click Create
4. Input a logical name of the Template. Paste the template's JSON into the field.
12 Resource Manager | 279
5. Click Save to save the template
You can also specify the Parameters file you want to use together with the template.
ProtocolType • Hive:
• Regular: 0
• HTTP: 1
• S3
• HTTP: protocolType: 3, cryptoType: 0
• HTTPS: protocolType: 3, cryptoType: 1
• HTTP Reverse Proxy: protocolType: 1, cryptoType: 0
• HTTPS Reverse Proxy: protocolType: 1, cryptoType: 1
• Snowflake
• HTTP: protocolType: 3, cryptoType: 0
• HTTPS: protocolType: 3, cryptoType: 1
• Aurora MySQL, MySQL, MariaDb:
• C/S Protocol: 0
• X-Protocol: 2
AdditionOption
UpdateMetadata
Host Host
Port Port
AuthType SSL (SMTP only):
• Disabled: 0
• Enabled: 1
• STARTTLS Preferred: 2
• STARTTLS Required: 3
Login Login
SslVerify Verify server SSL certificate
UseForSecurity Use the server for sending security emails
PasswordVaultSafe CyberArk Safe to store password in
PasswordVaultFolder CyberArk Folder to store password in
PasswordVaultObject CyberArk Object to store password in
Name Server logical name
ProtocolEx Syslog protocol (RFC 3164, RFC 5424)
AwsSecretID AWS Secrets Manager ID
Password Password
MailSender Send-from email address
Data Data
AuthorizationUrl Authorization Token Endpoint URL
TokenUrl Token endpoint URL
TokenKeysUrl Token Keys Endpoint URL
OidcClientId OIDC Client ID
OidcClientSecret OIDC Client Secret
Endpoint Endpoint
ClientHost
IpRestrictions
CustomData
InformationTypes
InformationType
Archive TRUE: enable "Archive Removed Data before Cleaning (to AWS Athena CSV
Format)"
ArchiveType • None
• AwsAthenaCSV
UseDirectSourceConnect TRUE:
12 Resource Manager | 308
SkipObjects
Database Database name
DatabaseRegex TRUE: Database name is a regular expression
Schema Schema name
SchemaRegex TRUE: Schema name is a regular expression
Table Table name
TableRegex TRUE: Table name is a regular expression
Title Title
Title Title
Figure 46: Active Directory users can be mapped to one database user or each AD user can be mapped to a
separate database user, as shown in the figure. When a client connects to a database, DataSunrise connects
to AD services and ascertains rights of the user to connect to the database.
Prerequisites:
The machine to be configured must belong to an Active Directory domain. Follow the Microsoft instructions on
joining the Active Directory domain.
DataSunrise Authentication Proxy configuration scheme:
1. Creating an AD user and assigning principal names with encrypted keys on the domain controller machine.
2. Configuring DataSunrise to map AD users to DB users.
Database Supported password hashing
Amazon Redshift MD5
Greenplum MD5
MySQL SHA-1, SHA-256
Netezza MD5, SHA-256, crypto
PostgreSQL MD5
Vertica MD5, SHA-512
SQL Server xor
• Redshift. Always uses MD5 hashing. You don't need to configure anything at the server's side.
• PostgreSQL. Open pg_hba.conf file and set authentication type to "md5". Refer to the following page for details:
https://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html
• Netezza. Depending on password hashing used for mapping (MD5, SHA256, crypt) set authentication method
with the SET CONNECTION command. Refer to the following page for details: https://www.ibm.com/support/
knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.dbu.doc/r_dbuser_set_connection.html
• Vertica. Depending on password hashing used for mapping (MD5, SHA512), set SECURITY ALGORITHM with
the ALTER USER command. Refer to the following page for details: https://my.vertica.com/docs/8.1.x/HTML/
index.htm#Authoring/SQLReferenceManual/Statements/ALTERUSER.htm
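For example, for Vertica a hedged sketch of such a statement (placeholder user name; verify the exact syntax against your Vertica version) is:
ALTER USER <User_name> SECURITY_ALGORITHM 'MD5';
-- The user's password typically has to be reset afterwards so it is re-hashed with the new algorithm.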
Parameter Description
-S Adds the specified SPN after verifying that no duplicates exist
<service>/<fqdn> Specify an SPN and a Fully Qualified Domain Name (FQDN) in the following
format: service/<fqdn>@REALM
• Use vertica as a service name for HP Vertica
• Use postgres for Amazon Redshift, PostgreSQL or Greenplum
• Use netezza for IBM Netezza
• Use MSSQLSvc for MS SQL Server
• Use HTTP for DataSunrise GUI authentication
1. On the domain controller machine, navigate to Active Directory Users and Computers, locate the account of
the machine DataSunrise is installed on.
2. In the Properties section, go to the Delegation tab and select Trust this computer for delegation to specified
services only and click Add.
3. In the Users and Computers window, specify the user account that was used to launch the database or the
name of the server the RDBMS is installed on.
4. Optionally, you can use Check names to check if a specified user or computer exists and click OK, then select a
required service and click OK.
Parameter Description
-S Adds the specified SPN after verifying that no duplicates exist.
<service>/<fqdn> Specify an SPN and a Fully Qualified Domain Name (FQDN) in the following
format: service/<fqdn>@REALM
• Use vertica as a service name for HP Vertica
• Use postgres for Amazon Redshift, PostgreSQL or Greenplum
• Use netezza for IBM Netezza
• Use MSSQLSvc for MS SQL Server
• Use HTTP for DataSunrise GUI authentication.
1. On the domain controller machine, navigate to Active Directory Users and Computers, locate the account of
the machine DataSunrise is installed on.
2. In the Properties section, go to the Delegation tab and select Trust this computer for delegation to specified
services only and click Add.
3. In the Users and Computers window, specify the user account that was used to launch the database or the
name of the server the RDBMS is installed on.
4. Optionally, you can use Check names to check if a specified user or computer exists and click OK, then select a
required service and click OK.
ktutil
ktutil: addent -password -p <service_name>/<pc_name>@<DOMAIN NAME> -k 1 -e
aes128-cts-hmac-sha1-96
<ADuser password>
ktutil: addent -password -p <service_name>/<pc_name>@<DOMAIN NAME> -k 1 -e
aes256-cts-hmac-sha1-96
<ADuser password>
ktutil: addent -password -p <service_name>/<pc_name>@<DOMAIN NAME> -k 1 -e
arcfour-hmac
<AD user password>
ktutil: addent -password -p <service_name>/<pc_name>@<DOMAIN NAME> -k 1 -e
des-cbc-md5
<AD user password>
ktutil: addent -password -p <service_name>/<pc_name>@<DOMAIN NAME> -k 1 -e
des-cbc-crc
<AD user password>
ktutil: wkt postgres.keytab
ktutil: q
[libdefaults]
default_realm = DB.LOCAL
clockskew = 300
ticket_lifetime = 1d
forwardable = true
proxiable = true
dns_lookup_realm = true
dns_lookup_kdc = true
default_keytab_name = FILE:/opt/datasunrise/backend.keytab
default_ccache_name = FILE:/tmp/krb5cc_datasunrise
[realms]
DB.LOCAL = {
kdc = dsun.db.local
admin_server = dsun.db.local
default_domain = DB.LOCAL
}
[domain_realm]
.db.local = DB.LOCAL
db.local = DB.LOCAL
[appdefaults]
pam = {
ticket_lifetime = 1d
renew_lifetime = 1d
forwardable = true
proxiable = false
retain_after_close = false
minimum_uid = 0
debug = false
}
By default, the krb5.conf file is located in the /etc/ folder. If your krb5.conf file is in another folder, you need to
reset the KRB5_CONF variable’s value:
Important: the DataSunrise authentication proxy feature is available for Amazon Redshift, Greenplum, MySQL,
PostgreSQL, Netezza, SQL Server and Vertica databases.
Password (if an LDAP password is saved in DataSunrise) LDAP user password. Needed for authentication and execution of queries by a privileged account. Used for mapping groups and AD authentication in the Web Console
Is Default check box Use this LDAP server as a default one
User Filter Expression that defines criteria of selection of catalog objects included into the search area defined by the "scope" parameter. Thus, it is a search filter used to search for user attributes
Note: if your system includes multiple LDAP servers, you should add all these server profiles to DataSunrise.
Note that the Base DN should be different for each server and the host name should be similar for all
the associated servers. The point is that DataSunrise looks for the user you're trying to log in as across all
available LDAP servers every time you're trying to authenticate via the DataSunrise's authentication proxy.
That's why all users should have unique names or some errors might occur.
3. Follow the mapping configuration instructions in Configuring User Mapping on page 324.
Important for MySQL users: There are two available methods of password transferring:
1. sha256_password: Recommended method of password transferring. Make sure that the
MySQLUseSHA256PasswordMethodForMapping parameter is enabled in the System Settings → Additional
subsection.
2. mysql_clear_password: use this method if your client application does not support the sha256_password
method. To enable this method, perform the following:
• Enable the Cleartext Authentication Plugin on the client side:
Important: when the Cleartext Authentication Plugin is used, the passwords will be sent unencrypted, which is
not safe unless you use an SSL-encrypted connection.
Important: if the MySQLUseSHA256PasswordMethodForMapping parameter is set to "0" and you get the
following error "Authentication with 'mysql_clear_password' method requires SSL encryption to transmit password
securely. This requirement can be disabled.", you should enable SSL both on the client side and on the server. Or you
can disable the LdapMappingRequireClientSideSSL parameter (set "0" value).
Click Save.
To change the parameters via the DataSunrise's Web Console, navigate to System Settings → General and
specify an AD user name and a password in the corresponding text fields.
2. Perform all the steps from the previous section.
All the actions are the same except for adding the AD user mapping configuration (see Configuring User Mapping on
page 324).
13 DataSunrise Authentication Proxy | 326
Instead of an AD user name (-adLogin parameter) use the name of the AD group (-adGroup).
Note: make sure that the Group you're using for mapping is not the Primary group of the AD user you're going
to authenticate with. If it is, assign another Group as the Primary one for the User. You can do it on your AD Domain
Controller.
Enable user mapping for your database. Click Enable and in the Enable User Mapping window select Database.
Specify the connection details of your target database and click Enable
3. Click Mapping+ to create a new User Mapping
4. Fill out the required fields
UI element Description
AD Type drop-down list Select Login for a single AD user and Group for a group of AD users
AD Login field Active Directory user's name
DB Login field Name of the database user you want to map the AD user to
DB Password field Password of the database user
Hash Type field Hash type (MD5 or SHA-512)
5. Click Save.
14 System Settings
System Settings section provides access to DataSunrise system settings as follows:
• Messages the firewall displays when blocking access to a target DB
• Configuration of a database which DataSunrise uses to store data auditing results (Audit Storage)
• Logging settings. Access to logs
• Email notification settings
• User profiles and roles
• Syslog settings
• DataSunrise servers
• Configuration backups
To enter this section, click the System Settings link.
Note: the backup name is the date and time of the backup, which is also the name of the folder the backup is
saved in
Advanced Dictionary Operations subsection. The settings enable you to clean your Dictionary (Clean Dictionary
in the Operation drop-down list) and encrypt all *.db files (including the Dictionary) with a custom encryption key
(Encryption of Configuration Files in the Operation drop-down list). The encryption operation is irreversible,
but you can encrypt the files as many times as you want. The key is stored in the crypt.pem file located in the
DataSunrise installation folder. Once the encryption is applied, the Core and Backend will be restarted.
14 System Settings | 329
Important: don't downgrade DataSunrise to a lower version when Dictionary encryption is in use. This will cause
"Error 607" (inability to use the old configuration) because the local_settings.db file of the former DataSunrise
version was encrypted with a different key.
Web Console Parameters subsection. Contains settings needed to configure authentication of Active Directory
users to the DataSunrise's Web Console and to configure mapping of AD users to DataSunrise users. These settings
are required to configure the Authentication Proxy as well (refer to subs. DataSunrise Authentication Proxy on page
318).
Messages subsection
Authentication Proxy subsection (for more information on Authentication Proxy, refer to the Admin Guides)
14 System Settings | 330
Require Client Side SSL for LDAP Require usage of SSL at client side when doing LDAP Mapping
Mapping check box
Accept Only Mapped Users check box When enabled, the connection of database users via DataSunrise's
proxy will be restricted. Only mapped AD users will be able to
connect to the database via DataSunrise's proxy.
HTTP Proxy Settings subsection. Contains proxy settings required for sending metrics to AWS from closed
networks (refer to Amazon CloudWatch Custom Metrics on page 418)
Self-Service Access Request subsection. Contains SSAR settings. Refer to Self-Service Access Request on page
425.
File Name for the AppBackendService Log text field AppBackendService log file name
File Name for the AppFirewallCore Log text field AppFirewallCore log file name
Write Log Messages to the Existing Log File Before Number of hours to elapse before DataSunrise creates a
Creating a New One text field new log file (24 hours by default)
Time Period to Store Old Log Files (Days) text field Number of days DataSunrise keeps old log files (7 days
by default)
Limit Total Size of AppBackendService Log Files Maximum size of Backend's log files
(MBytes) text field
Limit Total Size of AppFirewallCore Log Files text field Maximum size of Core's log files
Maximum Size of a Single Log File (MBytes) text field Maximum size of a log file. (10 MB by default)
Statistics subsection
Checkbox Description
AgentServerTrace Agent server trace. Enabling this parameter will add additional
information about progress of agent server activity
AntlrCalcLines Enable/disable ANTLR lines calculation. When enabled, increases
CPU utilization. Be careful when enabling
AntlrIntTrace Enable/disable expanded ANTLR tracing (used in MS SQL Server)
Checkbox Description
DataDiscoveryFileFinderTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task is running:
processed file in each thread. Processed column in file
DataDiscoveryFilterTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task is running: match
of specific filter. Result of match (success/failure)
DataDiscoveryIncrementalTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task with incremental
search is running
DataDiscoveryInventoryTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task with inventory is
running
DataDiscoveryMultiProcTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task is running:
deployment in a multiprocessor environment
DataDiscoveryOCRTrace Enable/Disable OCR trace usage for Discovery Image processing,
outputs OCR result in logs
DataDiscoveryObjectFilterTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task is skipping objects
from processing
DataDiscoveryRPCTimeTrace Enable/Disable traces for RPC request Data Discovery (Count time
per action)
DataDiscoverySqlFinderTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task is running:
processed column in SQL databases
DataDiscoveryTaskMgrTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task is running: task
management
DataDiscoveryTrace Enabling this parameter will help you see the following information
in DataSunrise log files when Data Discovery task is running:
Processed file in each thread. Processed column in file. Match of
specific filter. Result of match (success/failure)
DdlDataModelLearningTrace DDL data model learning log trace for periodic task
DnsSrvTrace
DynamicSqlProcessingTrace When enabled, Dynamic SQL processing traces will be printed in
logs
ErResolverTrace
FileBufferTrace
FlushTrace
FreezeDetectorTrace
HealthCheckerTrace Log Health Check Periodic tasks
HttpClientTrace This parameter is responsible for logging requests and responses of
the HttpClient component
HttpCurlDebug Log all information sent and received by the web server
HttpCurlVerbose Log debugging information about the web server
IACTrace Check if you want to see IAC trace in log file
IntTestsTrace Messages about integration tests processing. Used for automated
tests by the product developers
InterchangeTrace Tracing of data exchange between two Backends or between a Backend and a Core (HTTP or HTTPS)
JniTrace
JsonTrace Trace JSON requests and responses between a Web Console and a
DataSunrise server
LDAPTrace Trace LDAP connection events (used in User mapping and LDAP
authentication to the Web Console)
LM_DEBUG Debug all messages in a thread
LM_TRACE Trace all messages in a thread
LastPacketsTrace Trace last packets
LeaksTrace
LicenseTrace
LogToStdout Output messages to the standard output stream. By default, DataSunrise uses a text terminal or a console as standard output
MaskingTrace When enabled, Masking processing traces will be printed in logs
MetadataCacheFillStatementTrace Enable/disable extra traces for DbObjects::fillStatement calls
MetadataCacheTrace Log debugging information about operations with internal
metadata cache which contains database structure
MetadataCacheVerboseTrace Enable extra traces to collect metadata
MetadataTrace Enable traces for metadata diagnostics. Check to get extensive
information on metadata
MsSqlParserStateTrace MS SQL Server parser state tracing
MsSqlSubTrace Send multistatement subqueries to the log
PcapTrace Trace Pcap-captured network traffic
PcapUnprocessedSegmentTrace Not used
ProcessManagerTrace
ProxyEventHookTrace Additional logs for proxy events (debugging)
ProxyTrace Trace proxy messages about volume of data received and sent
QueryHistoryLearningTrace Tracing of Table Relations mechanism
RecognizerParserTrace Log debugging information about the SQL parser
RecognizerTreeTrace
RulesTrace Trace DataSunrise Rules. Messages related to loading and checking
rules in a proxy. The Core loads Rules from the Dictionary at
startup and when Rules were changed. In this case, all Rule
settings are displayed in the logs. When a query passes through
a proxy, it is checked against all the rules. Information about the
progress of such a check is displayed in the logs. For example,
if a Rule is configured for a certain column, then the logs will
contain information about whether this column is included in the
request. With dynamic masking, the original and masked request text is displayed in the logs. Additionally displays messages about problems with licensing of target databases (for example, if a wrong Oracle and/or MS SQL instance is licensed)
SMUXCodeTrace Enable MARS proxy code tracing to troubleshoot the multiplexer
SMUXTrace Enable MARS proxy tracing.
SSLParserTrace
SSOTrace Advanced logging of SSO used for Web Console authentication
StaticMaskingTrace Static Masking and Inplace-masking logging (selected objects and
masking methods)
StaticMaskingTraceWithData
SyslogTrace Trace connections to a Syslog server and sending notifications to it
SystemBackupDictionaryTrace Messages about creating Dictionary backups
SystemTasksTrace Enable or disable System periodic task traces
TFATrace Logging of 2FA mechanism used for proxy and Web Console
authentication
TaskSchedulerTrace
ThriftParserTrace Trace Thrift protocol parser
TrafficBuilderTrace Trace Traffic builder
TrailAuditTrace Tracing of the Trailing the DB Audit Logs mechanism
TrailAuditVerboseTrace Enable/disable tracing queries and sessions in a trail db native audit
UpdateConfigTrace Trace config updates
Db2ParserTrace Trace DB2 parser (log parsing sequence)
DynamoParserTrace Trace DynamoDB parser (log parsing sequence)
ElasticSearchParserTrace Trace ElasticSearch parser (log parsing sequence)
FirebirdHandlerTrace Not used
FirebirdParserTrace Not used
HanaParserTrace Trace SAP Hana parser (log parsing sequence)
HiveParserTrace Trace Hive parser (log parsing sequence)
MongoParserTrace Trace Mongo parser (log parsing sequence)
MSSQLParserTrace Trace SQL Server parser (log parsing sequence)
MySQLParserTrace Trace MySQL parser (log parsing sequence)
NetezzaParserTrace Trace Netezza parser (log parsing sequence)
OracleParserTrace Trace Oracle parser (log parsing sequence)
PostgreHandlerTrace Trace PostgreSQL data handler
PostgreParserTrace Trace PostgreSQL parser (log parsing sequence)
S3HandlerTrace Trace Amazon S3 data handler
S3ParserTrace Trace Amazon S3 parser (log parsing sequence)
SnowflakeParserTrace Trace Snowflake parser (log parsing sequence)
TeradataParserTrace Trace Teradata parser (log parsing sequence)
VerticaHandlerTrace Trace Vertica data handler
VerticaParserTrace Trace Vertica parser (log parsing sequence)
Note: the actual log size depends on available storage space and your security policies.
BackendMainThreadMaxCycleGap 3600
BackendSigsegvDetail Enabled Provides detailed information on the Backend and the system (except the stacktrace) in the case of a Backend failure
BackendSigsegvLogName BackendSigsegvLog Prefix of the log file that stores information about a Backend failure
DSARReportRowsLimit 1
DataBufferSize 64 Size of buffer for keeping network packages
DataDiscoveryCSVEnableFilterForEmptyHeaders Disabled Enables filtering by empty column names in CSV files for Data Discovery. When enabled, may reduce match count for such files
DataDiscoveryEnableColumnFilterForUnstructedFiles Disabled Enables filtering by column names in unstructured files where data is kept in a single <all file> column (e.g. PDF) for Data Discovery
DistributeSessionsToThreads 1
DoubleRunGuard Enabled Enable protection from running multiple
DataSunrise instances with single
configuration
DsFunctionRunTimeout 10
DsTableCheckDurationMs 1000
DumpServerURL HTTP server for sending crash dumps
DynamoUserUpdateEnable Enabled Background refreshing of IAM user names list
and their accessKeyId in the Core
DynamoUserUpdatePeriod 5 Period of updating the IAM user names list
EDConnectionTimeout 30000
EDServerDefaultHost 0
EDServerDefaultPort 53002
ElasticSearchMaxMaskedSize 500
EnableAWSMetrics Disabled Allows sending metrics to AWS
EnableDataConverter Enabled Convert binary data to text format
EnableHyperscan Disabled If enabled, the Hyperscan regular expression
library is used
EnableOracleNativeEncryption Disabled Enable connections encrypted with Oracle native encryption
EnableRe2c Enabled If enabled, the Re2c regular expressions
library is used
EnterpriseOID 1.3.6.1.4.1.7777 DataSunrise's Enterprise OID
FlushTimeout 30
ForceAudit Disabled A substitute for the outdated DISABLE_SQL_RECOGNIZER. Enables you to audit any queries
ForceFlushCoreLog Disabled Forces each line of traces to be flushed to the log file
ForceFlushUserLog Enabled Forces each line of user traces to be flushed to the log file
FunctionMasking 0
GenerateCrashDump 0 Create Core crash dump in TempPath:
• 0 - Disabled
• 1 - Normal dump
• 2 - Extended dump
MaxSaveRowsCount 20
MaxUncommittedProxyRead 1024 The maximum size of the buffer (for reading data from proxy)
MessageHandlerConnectionsDistibuteByHost Enabled Distribute connections in the MessageHandler threads by the client host
MessageHandlerMainQueueUseThreads 1 The number of threads used to process the main queues. <n> - thread count
MessageHandlerProxyThreads 5 Number of threads used to process database queries that pass through the proxy. Change this setting along with the CoreDynamicMemoryArenas setting
MessageHandlerQueueFillPercentWarning 15 If the internal queue filling of the Message Handler is more than a specified value (in %), a corresponding alert is displayed in Event Monitor → System Events
MessageHandlerSleepTime 0
MessageHandlerSnifferThreads 5 Number of threads used to process database queries that the DataSunrise sniffer receives
MessageHandlerThreadMaxCycleGap 3600
MessageHandlerThreadStackSize 10485760 Message Handler thread stack size. Applicable for Linux and Windows
MessageHandlerThreadsPriority 0 Set the priority level value for the Message Handler thread pool
MessageHandlerTrailingThreads 5 The number of threads used to process operations from the database in 'Trailing the database logs' mode
MessageHandlersGlobalQueueHighWaterMark 30000 Limit on the number of messages in the 'global queue' to be processed by the Message Handler. When reached, the thread waits for the queue to drop to the minimum
MessageHandlersGlobalQueueLowWaterMark 29000 Limit on the number of messages in the 'global queue' to be processed by the Message Handler
PcapBufferSize 100
PcapConversationFilter A regular expression for filtering conversations whose traffic needs to be traced in the Sniffer mode.
Conversation format:
srcip:srcport->dstip:dstport
Filter example:
.*192\.168\.1\.1.*
(see the matching sketch after this parameter table)
PcapMaxOutOfOrderMonitor 1
PcapMaxOutOfOrderSegmentCount 5 Maximum number of messages following a lost message. DataSunrise will not process a lost message if this number is reached
PcapMaxSessionIdleTime 7200 Idle time after which DataSunrise stops
processing messages in a thread
PcapProxyDirection 0 • 0 - capturing traffic from both directions
• 1 - capturing traffic only from client to
proxy
• 2 - capturing traffic only from proxy to
server
PcapShowOnlyFileName 0
PcapShowProgressBySize 0
PgFetchRowCount 1000 Row count to be used with the FETCH operation for PostgreSQL databases for Static Masking. The lower the value, the slower the performance and the less RAM is used
PgMetadataSetUtf8ClientEncoding Enabled Overwrite the current client_encoding value to UTF8 for all metadata connections
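For illustration, the following Python sketch (not DataSunrise code; the conversation strings are made up) shows how a filter such as .*192\.168\.1\.1.* matches conversation strings in the srcip:srcport->dstip:dstport format:

import re

# The filter value from PcapConversationFilter
conversation_filter = re.compile(r".*192\.168\.1\.1.*")

# Hypothetical conversation strings in the srcip:srcport->dstip:dstport format
conversations = [
    "192.168.1.1:51230->10.0.0.5:1433",
    "172.16.0.7:40000->10.0.0.5:1433",
]

for conversation in conversations:
    traced = bool(conversation_filter.search(conversation))
    print(conversation, "traced" if traced else "skipped")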
${Event.Time}:${Event.Name}:
${Event.Description} on ${Server.Name}
${Content} on ${Server.Name}
${Event.Time}:${Event.Name}:
${Event.Description} on ${Server.Name}
${Content} on ${Server.Name}
{
"keyNames": {
"dbType" : "engine",
"pass" : "password",
"user" : "username"
},
"engineNames": {
"AWS Aurora Postgres" : "aurorapgsql",
"AWS Aurora MySQL" : "aurora",
"My Bill Gates Super DBMS" : "mssql"
}
}
The "keyNames" section of a mapping document allows you to provide synonyms for the key fields required in order
to create a DB instance in DataSunrise.
The "engineNames" section of a mapping doc allows you to add more synonyms for the database engine names.
For example, PostgreSQL can be called by multiple names (e.g pg, pgsql, postgres (AWS RDS favorite), postgresql,
PostgreSQL etc), so it would be a good idea to gather this data in advance and aggregate it into the mapping
14 System Settings | 383
parameter to ensure that the automated process will not fail due to unknown database engine name.. Parameter
values might be the following (note that database type names have corresponding synonyms):
{ "mssql", dtMsSQL },
{ "oracle", dtOracle },
{ "db2", dtDb2 },
{ "postgresql", dtPgSql },
{ "mysql", dtMySQL },
{ "netezza", dtNetezza },
{ "teradata", dtTeradata },
{ "greenplum", dtGreenplum },
{ "redshift", dtRedShift },
{ "aurora", dtAuroraMySQL },
{ "mariadb", dtMariaDB },
{ "hive", dtHive },
{ "sap hana", dtHana },
{ "vertica", dtVertica },
{ "mongodb", dtMongoDB },
{ "aurorapgsql", dtAuroraPgSql },
{ "aurorapostgres", dtAuroraPgSql },
{ "dynamodb", dtDynamoDB },
{ "elasticsearch", dtElasticSearch },
{ "cassandra", dtCassandra },
{ "impala", dtImpala },
{ "snowflake", dtSnowflake },
{ "informix", dtInformix },
{ "athena", dtAthena },
{ "s3", dtS3 },
{ "sybase", dtSybase },
You can assign multiple synonyms in both the keyNames and engineNames sections. If there is no match, DataSunrise
checks whether the standard key names are used. Otherwise, the response will contain an error message saying that
such a key is not known or does not exist. Neither keyNames nor engineNames is compulsory, so you can use either
one of them, both, or none.
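As an illustration only, the Python sketch below shows how such a mapping document could be applied to normalize an incoming parameter set before creating a DB instance. The normalize function is hypothetical (not part of DataSunrise), and it assumes that keyNames maps a standard key (left) to its synonym (right), while engineNames maps an engine-name synonym to the canonical engine name:

import json

# The mapping document shown above (trimmed)
mapping = json.loads("""
{
  "keyNames":    {"dbType": "engine", "pass": "password", "user": "username"},
  "engineNames": {"AWS Aurora Postgres": "aurorapgsql", "AWS Aurora MySQL": "aurora"}
}
""")

# Invert keyNames so that an incoming synonym can be looked up directly
synonym_to_standard = {syn: std for std, syn in mapping.get("keyNames", {}).items()}
engine_synonyms = mapping.get("engineNames", {})

def normalize(params):
    """Translate synonym keys and engine-name synonyms into the standard ones."""
    result = {}
    for key, value in params.items():
        std_key = synonym_to_standard.get(key, key)    # fall back to the key as given
        if std_key == "dbType":
            value = engine_synonyms.get(value, value)  # fall back to the value as given
        result[std_key] = value
    return result

print(normalize({"engine": "AWS Aurora Postgres", "username": "admin", "password": "secret"}))
# -> {'dbType': 'aurorapgsql', 'user': 'admin', 'pass': 'secret'}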
Login text field (if the Specify Connection Parameters radio button is activated) User name used to access the database
Save password drop-down list Method of saving the database password:
• Save in DataSunrise
• Retrieve from CyberArk. In this case you should
specify CyberArk's Safe, Folder and Object (fill in the
corresponding fields)
• Retrieve from AWS Secrets Manager. In this case you
should specify AWS Secrets Manager ID
• Retrieve from Azure Key Vault. You should specify
Secret Name and Azure Key Vault name to use this
feature
Important: there is a risk that an external Audit Storage can become non-operational and audit data collected
at that time can be lost. For such cases DataSunrise includes the Emergency Audit feature. This feature enables
automatic saving and storing of audit data in an external file if a connection with the Audit Storage is lost. Once the
connection with the Audit Storage database is restored, DataSunrise uploads the data from that file to the Audit
Storage. Note that temporary audit data files are stored in DataSunrise's installation folder, in separate folders
for each available Audit Storage database (for example, if you have three different Audit Storages, you will have
three folders; note that only one Audit Storage can be used at a time). Names of the folders that contain audit data
files are base64-encoded.
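A quick illustration of that naming scheme; the string being encoded below is only a placeholder, since the exact input DataSunrise uses is an internal detail:

import base64

# Placeholder Audit Storage identifier; the real input string is internal to DataSunrise
audit_storage_name = "postgresql://audit-db.example.com:5432/audit"

# Base64-encoded folder name for the Emergency Audit files of this Audit Storage
folder_name = base64.b64encode(audit_storage_name.encode("utf-8")).decode("ascii")
print(folder_name)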
You can configure the Emergency Audit by changing the following parameters in the DataSunrise's Additional
Parameters (System Settings → Additional Parameters):
• AuditOperationDataLoadInterval: size of operation data to be reached before being uploaded to the Audit Storage
• AuditOperationMetaLoadInterval: size of metadata to be reached before being uploaded to the Audit Storage
• AuditOperationDatasetLoadInterval: size of operation datasets to be reached before being uploaded to the Audit Storage
• AuditOperationRulesLoadInterval: size of Rules-related data to be reached before being uploaded to the Audit Storage
• AuditOperationExecLoadInterval: size of operation executions to be reached before being uploaded to the Audit Storage
• AuditSubQueryOperationLoadInterval: size of subquery operation data to be reached before being uploaded to the Audit Storage
• AuditOperationsLoadInterval: size of operation logs to be reached before being uploaded to the Audit Storage
• AuditSessionsLoadInterval: size of session data to be reached before being uploaded to the Audit Storage
• AuditTransactionsLoadInterval: size of operation transactions data to be reached before being uploaded to the Audit Storage
• AuditConnectionsLoadInterval: size of connection data to be reached before being uploaded to the Audit Storage
• AuditSessionRulesLoadInterval: size of session rules data to be reached before being uploaded to the Audit Storage
• AuditOperationGroupsLoadInterval: size of operation groups data to be reached before being uploaded to the Audit Storage
• AuditTrafficStatLoadInterval: size of traffic statistical data to be reached before being uploaded to the Audit Storage
• AuditRulesObjectDetailLoadInterval: size of object details data to be reached before being uploaded to the Audit Storage
• AuditRulesStatLoadInterval: size of Rules statistical data to be reached before being uploaded to the Audit Storage
Refer to Additional Parameters on page 337 for description of these parameters and the way to configure them.
AuditRotationAgeThreshold Time to store the current audit.db file before creating a new one
AuditRotationSizeThreshold Maximum size the current audit.db file can reach before creating a
new audit.db file
4. You can use an audit file during the current DataSunrise user session only. When a session is closed, DataSunrise
automatically switches the active file to the latest available file.
Select a required audit.db in the table and click Switch to Selected to make the selected audit.db active.
Clean button Delete audit data in the Audit Storage database (DELETE or DROP depending
on which one is selected).
4. Navigate to System Settings → Audit Storage → Encryption and select a place to store your encryption key in
the Key Storage drop-down list
5. Input an encryption key into the Key field and click Enable to start the encryption process
6. Restart the DataSunrise system service
7. To ensure that everything is OK, verify that operations.sql_query and operation_data.data in your Audit Storage
are encrypted.
2. Navigate to System Settings → General → Advanced Dictionary Operations and select Encryption of
Dictionary in the Operation drop-down list
3. In the Key Storage drop-down list, select a place to store the encryption key that should be used
4. Input your encryption key into the Key field
5. Click Enable to start an encryption process
6. Restart the DataSunrise system service and check your Dictionary columns.
Partitioning for MS SQL Server is configured similarly to PostgreSQL (see the instruction above)
UI element Description
Product text field Software program name to be included into the message header (DataSunrise
Database Security by default)
Vendor text field Vendor name to be included into the message header (DataSunrise by default)
Product Version text field Product version number to be included into the message header
CEF Version text field CEF protocol version number (this protocol is used to create a message string)
UI element Description
Local Syslog/Remote Syslog radio button Syslog server to receive DataSunrise auditing data. The
following variants are available:
• Local Syslog
• Remote Syslog
Protocol Type drop-down list (if Remote Syslog server is selected) Protocol that should be used to export data to a remote Syslog server. The following variants are available:
• RFC_3164
• RFC_5424
Remote Host text field (if Remote Syslog server is selected) Hostname of the remote Syslog server
Remote Port text field (if Remote Syslog server is selected) Port number of the remote Syslog server
Important: Do not confuse DataSunrise users with target database users (Database Users on page 103). A
DataSunrise user is a person with legitimate rights to access the DataSunrise's Web Console and manage its settings.
Note: On Windows, an AD group name should be specified in the following format: <DOMAIN>\<GROUP>.
Example: "DB\access_manager". On Linux, an AD group name should be specified in the following
format: <REALM>\<GROUP>. Example: DB.LOCAL\access_manager
4. Specify Web Console objects and what a user can do in respect of these objects in the Objects subsection: Select
an object in the list and check privileges to grant.
Object Description
AI Detection of Users If DELETE is disabled, changing and cleaning of Audit Storage is not allowed
AWS S3 Inventory Items Getting S3 object's metadata by the means of S3 Inventory
Access Custom File If disabled, uploading of a file for creating a new Resource Group or backup
restoring is not allowed
Active Directory Mapping Authentication Proxy (Configuration → Databases → Actions →
Authentication Proxy settings)
Application Data Model Resource Manager Data Model
Application User Settings Application Users Capturing
Applications Applications
Audit Rules Audit Rules
Blocked Users Blocked Users
Compliance Manager Compliance Manager
DSAR Configuration DSAR Configuration
DSAR Field DSAR Fields
Data Discovery Filters Data Discovery filters
Data Discovery Groups DD Scan Groups
Data Discovery Incremental Data AWS S3 Data Discovery Incremental Scanning Mode
Data Discovery Incremental Group Data Discovery Incremental Group
Data Discovery Task Error Data Discovery Task errors
DataSunrise Servers Actions with DataSunrise Servers
Database Instance Users Actions with Database Users (Configuration → Database Users)
Database Instances Actions with DB Instances (Configuration → Databases)
Database Interfaces Actions with DB Interfaces (Configuration → Databases → DB Profile)
Database Properties (Displaying Database Properties on page 62)
Database Users Actions with Database Users
Databases Display a list of database properties. If disabled, DB properties (Configuration
→ Databases) are not displayed. If INSERT is not allowed, a new DB Instance
can't be created
Deferred Task Info Display information about deferred Data Discovery Tasks
Dynamic SQL Replacements Dynamic SQL (available for PostgreSQL, MySQL and MS SQL Server)
Encryptions Encryptions (Encryptions on page 108)
Entity Groups Lists of Audit, Security, Masking, Learning Rules. If disabled, a list of Rules is not
displayed
Function Replacements Data Masking inside functions
Groups of Database Users DB User groups (Database Users on page 103)
Groups of Hosts Creating a Group of Hosts on page 210
Hosts Creating a Host Profile on page 209
Instance Properties Creating a Target Database Profile on page 58
Instance Users Creating a DataSunrise User on page 390
LDAP Servers LDAP on page 396
Lexicon Groups Discovering Sensitive Data Using Lexicon on page 251
Lexicon Items Creating a Lexicon on page 251
License Keys License keys
Lua Script Discovering Sensitive Data Using Lua Script on page 251
Masking Rules Creating a Dynamic Data Masking Rule on page 165
Metadata Columns Access to DB Instance metadata columns
Metadata Objects Access to DB Instance metadata objects
Metadata Schemas Access to DB Instance metadata schemas
Object Filters Object Group Filter on page 115
ObjectGroups Object Groups on page 203
Pair of Associated Columns Table Relations on page 400
Periodic Tasks Periodic Tasks on page 222
Proxies Display Proxies
Queries Display Query Groups
Queries Map Queries Map Parameters on page 302
Query Groups Query Group Parameters on page 292
Resource Manager Deployment Resource Manager on page 275
Resource Manager Templates Template Structure on page 275
Results of VA Scanner VA Scanner on page 263
Roles DataSunrise Roles (System Settings → Access Control → Roles)
Routine Parameters Creation of replacement functions and views during data masking
Rule Subscribers Rule Subscribers
SSL Key Groups SSL Key Groups on page 106
SSL Key Connection encryption keys (Configuration → SSL Key Groups)
SSO Services Single Sign-On in DataSunrise on page 46
Schedules Schedules on page 219
Security Guidelines Available Security Guidelines (VA Scanner → Scan Tasks → New → Choose
Guidelines)
Security Rules Data Security Rules
Security Standards Security Standards for Data Discovery (Data Discovery → Security Standards)
Sessions Active database sessions
Sniffers Available Sniffers (Configuration → Databases → DB Instance → Sniffers)
Subscriber Servers Configuring an SMTP Server on page 212
Subscribers Subscriber Settings on page 212
Syslog Configuration Groups Syslog Settings (CEF Groups) on page 222
Syslog Configuration Item Syslog configuration (Configuration → Syslog Settings → Syslog Settings)
System Settings System Settings
Table Reference Actions with Table Relations
Tags Tags on page 199
Tasks Periodic Tasks on page 222
Temporary Files Temporary files
Trailing the Db Audit Logs Trailing the DB Audit Logs mode. Used for auditing (Configuration →
Databases → DB profile → Capture Mode → Trail DB Audit Logs)
Users DataSunrise Users (System Settings → Access Control → Users)
5. Specify Web Console actions a user can execute, in the Actions subsection:
Action Description
Audit Cleaning System Settings → Audit Storage → Clean Audit
Audit Storage Changing System Settings → Audit Storage → Audit Storage
Change Audit Storage Encryption Settings System Settings → Audit Storage → Database Type
Change Dictionary Encryption Settings System Settings → General → Advanced Dictionary Operations → Encryption of Configuration Files
Change Password Settings System Settings → Access Control → User → Change Password
DataSunrise Starting DataSunrise Backend startup (System Settings → Servers → Your server →
Core and Backend Process Manager → Actions)
DataSunrise Stopping DataSunrise Backend stop (System Settings → Servers → Your server → Core
and Backend Process Manager → Actions)
DataSunrise Updating DataSunrise update (System Settings → About → System Info → Download
Latest)
Dictionary Cleaning System Settings → General → Advanced Dictionary Operations → Clean
Dictionary
Dictionary Restoring System Settings → General → Configuration Control → Upload Backup
Discovery Column Content Displaying Displaying matching snippets (sensitive data) in Data Discovery results
Flush Enable enforced synchronization of Backend and Core with the flush CLI
command. Used for testing purposes
Logs management Logging settings (System Settings → Logging and Logs)
Manual Audit Rotation System Settings → Audit Storage → Rotated Files → Rotate
Manual Dictionary Backing-up System Settings → General → Configuration Control → Create Backup
Original Query Displaying Needed to get audited events
Query Bindings Displaying Bind variables logging (Audit → Rules → Action → Log Bind Variables)
Query Results Displaying Query Results logging (Audit → Rules → Action → Log Query Results)
Reading Database Data The ability to preview data in the Object Tree during the creation of Rules, Tasks,
and Compliance.
View Dynamic Masking Events Masking → Dynamic Masking Events
View Event Description Masking → Dynamic Masking Events → Event Description
View Operation Group System Settings → Operation Group
View Query Parsing Errors System Settings → Query Parsing Errors
View Security Events Security → Events
View Session Description Audit → Session Trails → Session Details
View Session Trails Audit → Session Trails
View Top Blocked Queries Per Day Dashboard → Top Blocked Queries per Day
View Transaction Trails Audit → Transactional Trails
Note: Password Settings can be edited only by DataSunrise users with the privilege of editing such settings
(System Settings → Access Control → Role → Edit Role → Actions → Change Password Settings).
UI element Description
Minimum Password Length field Minimum length of a password string
Maximum Password Length field Maximum length of a password string. Unlimited by default
Special Symbols field Special characters that may be used when setting a password
Use Letter... check boxes Self-explanatory
Old Password Storing Count Days field Number of days to store an old password
14.9 Logs
This tab enables you to view system logs of DataSunrise's modules. Navigate to System Settings → Logging and
Logs to get to the Logs tab.
Use the Log Type drop-down list to switch between logs and the Server drop-down list to select a DataSunrise
server to show a log for (if multiple servers exist).
14.10 LDAP
The LDAP subsection contains LDAP servers' settings. An LDAP server is required to configure the Authentication Proxy
(mapping of Active Directory users to database users). For more information on the Authentication Proxy, refer to the
DataSunrise Admin Guides.
To create a new LDAP server, do the following:
1. Navigate to LDAP and click Add LDAP Server to access the server's settings
2. Fill out the required fields:
Password (if an LDAP password is saved in DataSunrise) LDAP user password. Needed for authentication and execution of queries by a privileged account. Used for mapping groups and AD authentication in the Web Console
Is default check box Use the current LDAP server as the default one
User Filter Expression that defines criteria of selection of catalog objects included into the search
area defined by the “scope” parameter. Thus, it is a search filter used to search for user
attributes
3. Having configured an LDAP server, click Test to test the connection between DataSunrise and the server. Click
Save to save the server profile.
14.11 Servers
The System Settings → Servers subsection displays existing DataSunrise servers. For more information on
DataSunrise multiple servers, refer to the DataSunrise Admin Guide. To access Server settings, do the following:
1. Select a required server in the list and click its name to access the server's settings
2. Reconfigure a server if necessary:
Interface element Description
Main Settings
Logical Name Logical name of the DataSunrise server (instance)
Host IP address of the server the Instance is installed on
Backend Port DataSunrise Backend's port number (used to access the Web Console)
Core Port DataSunrise Core's port number
Use HTTPS for Backend Process Use HTTPS protocol to access the Backend
Use HTTPS for Core Processes Use HTTPS protocol to access the Core
Core and Backend Process Manager
Table with Core processes Each Proxy uses its own Core process. Select a process to take actions with
and use the Restart/Start/Stop buttons from the Actions drop-down list.
File Manager
Drop-down list with available DataSunrise files Select the file of interest and use the Upload button to upload your local file to the current server, or use the Download button to download the file of interest from the current server
Server Info
Table (not configurable) Displays information about the current server (refer to About on page 399)
14.14 About
This subsection displays general information about DataSunrise and contains the License manager:
Parameter Description
License type DataSunrise license type
License Expiration Date DataSunrise license expiration date
Version DataSunrise version number
Backend UpTime Backend operating time
Server Time Current server time
Main Dictionary Default Dictionary database used (Dictionary location)
Current Dictionary Dictionary database currently used
Default Dictionary Version Default Dictionary database version number
Current Dictionary Version Current Dictionary database version number
OS Type DataSunrise server operating system type (Windows or Linux)
OS Version DataSunrise server operating system version
Machine DataSunrise server hardware information
Node Name DataSunrise server name (PC name)
Encoding Current encoding
Server DataSunrise server the license is applied to
Audit Version • For SQLite-based Audit Storage: main part version / rotated part
version
• For remote Audit Storage: audit version
15 Table Relations
The Table Relations feature enables DataSunrise to analyze database traffic and create associations between
database columns. "Associated columns" means that columns can be linked by integrity constraints or by JOIN and
WHERE clauses in queries. For access to Relations' settings, navigate to Configurations → Table Relations.
Associations are used:
• When configuring Dynamic and Static Data Masking, suggestions on possible associations may be given
when selecting columns to be masked. When you select a column associated with another column, you will be
prompted that such associations exist. You can include an associated column in a Rule or a Static Masking task.
• Columns associated with columns retrieved by Data Discovery tool will be shown too (refer to Periodic Data
Discovery on page 248)
DataSunrise builds associations using the following methods:
• Integrity constraints, such as foreign and primary keys. When creating an instance, the Search for Table
Relations... check box should be checked so that associations are analyzed during metadata updates as well.
In this case, during a database metadata update, a default_model.<instance_name> Table Relation will appear.
It is a default database model with associations that is updated after every metadata update
• Analysis of JOIN and WHERE clauses in database traffic using a Learning Rule (Database Traffic Analysis on page
405)
• Analysis of JOIN and WHERE clauses in database query history using a dedicated Periodic Task (Database Query
History Analysis on page 400)
• Analysis of functions, views and procedures for JOIN and WHERE clauses included using a dedicated Periodic
Task, Periodic DDL Table Relation Learning Task on page 404
• If the above-mentioned actions were not sufficient, associations might be edited manually (Manual Editing of
Table Relations on page 405)
Important: all the associations work inside DataSunrise only; no database tables are modified.
general_log: 1
slow_query_log: 1
log_output: TABLE
3. Note that if your Parameters Group was created from scratch, you will need to edit the Instance itself to avoid
using the default Parameters Group.
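Once the Parameters Group is attached and the instance rebooted, you can verify the settings from any MySQL client. A minimal sketch using the PyMySQL driver (connection details are placeholders):

import pymysql

# Placeholder connection details for the RDS MySQL instance
conn = pymysql.connect(host="mydb.example.rds.amazonaws.com",
                       user="admin", password="secret", database="mysql")

with conn.cursor() as cur:
    for variable in ("general_log", "slow_query_log", "log_output"):
        cur.execute("SHOW GLOBAL VARIABLES LIKE %s", (variable,))
        # Expected: general_log=ON, slow_query_log=ON, log_output=TABLE
        print(cur.fetchone())

conn.close()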
1. You should create a pg_stat_statements VIEW. To do this, you need to create a new Parameters Group (if you're
using the default Parameters Group) or edit your existing Parameters Group if you're using a custom one.
2. In the Group's settings, set the shared_preload_libraries parameter's value to pg_stat_statements
3. Restart the Instance.
1. Create an EVENT MONITOR FOR STATEMENTS which writes the data to a local table.
Note: the DB2 user you're using for creating the Monitor should have rights required for reading from the table
created by the Monitor.
1. Method 1. Query the following system VIEWs: _v_qryhist and _v_qrystat. To do this, you should have a user with
SELECT privileges on the aforementioned VIEWs. Execute the following query:
1. Open your PostgreSQL installation folder, data folder and locate the postgresql.conf file.
2. Add the following line to the end of the postgresql.conf file:
shared_preload_libraries = 'pg_stat_statements'
5. Note that you should create the pg_stat_statements VIEW in your PostgreSQL server's MASTER database
(postgres as a rule) and NOT in the database you'll be browsing.
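After the restart, the pg_stat_statements extension still has to be created in the master database. A minimal sketch using psycopg2 (connection details are placeholders; the exact statements your DBA runs may differ):

import psycopg2

# Connect to the master database (usually "postgres"), not to the database you will browse
conn = psycopg2.connect(host="db.example.com", port=5432,
                        dbname="postgres", user="postgres", password="secret")
conn.autocommit = True

with conn.cursor() as cur:
    # Creates the pg_stat_statements extension (and its view) if it is not there yet
    cur.execute("CREATE EXTENSION IF NOT EXISTS pg_stat_statements;")
    # Quick sanity check: the view should now be queryable
    cur.execute("SELECT query, calls FROM pg_stat_statements LIMIT 5;")
    for row in cur.fetchall():
        print(row)

conn.close()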
Note: SQLTEXT's value should be big enough to store all the queries. By default, it's 200 characters.
3. If you don't need to write logs anymore, you can disable it:
$ su - gpadmin
2. Execute the command shown below. This command installs and starts gpperfmon, creates a service
database for it (gpperfmon), and creates a gpmon superuser with the <password> password.
3. Edit the gpperfmon configuration file in the way described below. The file is located here:
$MASTER_DATA_DIRECTORY/gpperfmon/conf/gpperfmon.conf
min_query_time = 0
$ gpstop -r
5. Grant the Greenplum user you specified in your database instance's settings (Configuration → Databases), the
privileges listed below. To do this, connect to the database as a user with admin privileges, select the gpperfmon
database as current and execute the query:
Note: if you're using Google Chrome, you need to enable Hardware Acceleration in your browser.
The association diagram shows the associated columns and the Toolbar to work with them in the left upper corner.
The toolbar enables you to:
• Add a new table to establish an association with;
• Remove highlighted association;
• Rebuild a graph in such a way that another table is in its root;
• Download an associations graph from another Table Relation;
• Select only the required tables to be shown in the diagram;
• Open a current graph in a new browser tab.
16 Capturing of Application Users
Important: DataSunrise supports the SAP ECC App User Capturing method for SAP Hana, SAP Sybase, IBM DB2
and Oracle Database only.
Important: when using Oracle EBS based Application User Capturing, the database's password should be saved in
DataSunrise, CyberArk or AWS Secrets Manager. Otherwise DataSunrise will not be able to capture Oracle EBS users.
Client application users interact with a target database through a database user or users they are mapped to. To
identify an app user, DataSunrise uses certain markers described below.
Information about a user of client application can be contained:
• within query's SQL
• within query results
• within bindings for prepared statements
Thus, DataSunrise uses some markers to identify the actual client app user name.
First, the DataSunrise administrator should define how an application user will be captured within the
database traffic. To identify an end application user, DataSunrise uses the following markers:
1. Query-based.
• Select id from appusers where username='([a-z]*)';
• SELECT '([a-z]*)' as for DataSunrise where ([a-z]*) is a template used by DataSunrise to find a user name.
3. Bindings-based. DataSunrise can find user's name in bind variables for prepared statements.
Note: Column Index is the ordinal number of a bind variable in a query; counting starts from 0. In this
particular case we will be searching for the u2 bind variable because Column Index = 1 (a small illustration of bind
indexes is shown after this list).
One of existing queries (executed by an application) or queries added to an application to integrate it with
DataSunrise can be defined by a DataSunrise user as a marker for an application user.
4. Session Parameter. DataSunrise can find the required information in the parameters of a session.
To learn these parameters, create an Audit Rule and audit a session. For parameters of your session, navigate to
Audit → Session Trails → Session of interest → Parameters tab:
Note: when enabling multiple capturing types, it is important to remember that Query and SAP ECC types are
applied first as they work on request, while other types work on response.
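To make the bind-variable indexing concrete, here is a tiny illustration; SQLite is used only because it ships with Python, and the table and values are made up:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE appusers (app TEXT, username TEXT)")

# Prepared statement with two bind variables:
#   Column Index 0 -> 'billing_app', Column Index 1 -> 'u2'
conn.execute("INSERT INTO appusers (app, username) VALUES (?, ?)", ("billing_app", "u2"))
conn.commit()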
• Selected Query in the Capture Type drop-down list. It is needed to search for application user's name in
query's SQL;
• Pasted the following expression into the Pattern field:
'DataSunriseEvent:AppUserSet="([a-zA-Z]+)"';
2. We create a Masking Rule and in the Filter Sessions we select Application user, as well as the application user
("firstappuser" here), whose queries we want to capture (to be able to do it, we should create the required user
in the Configurations → Database users first). Then we select the "Card" column to be obfuscated and select
a masking method to use ("Credit Card Number" here). Thus, all queries issued by "firstappuser" will result in
masking of the "Card" column.
SELECT 'DataSunriseEvent:AppUserSet="firstappuser"';
SELECT * FROM customers;
SELECT 'DataSunriseEvent:AppUserSet="secondappuser"';
SELECT * FROM customers;
6. Then we create a new user ("[email protected]") at Configurations → Database Users. It is required to create a
Rule for this user.
7. Now we can create a masking Rule for the application user.
In the Filter Sessions subsection, we select "Application user" and the user to whom we want to display
obfuscated data instead of actual data ("[email protected]"). We mask the "card" column with the "Credit Card
Number" masking algorithm. Thus, all queries issued by "[email protected]" will result in masking of the "card"
column.
8. To check the result, we query the table through the web site. As you can see, the values in the "Card" column are
obfuscated.
3. After that, we will execute several simple queries preceded by dbms_SESSION.set_identifier('my_client') in SQL
Developer. SQL Developer due to its specifics will create a bind parameter during the execution of this query:
4. As a result, in the Transactional Trails, we can see that two SELECTs in a single session were executed by two
different users:
3. Add a user you want to mask the values for, in Configuration → Database Users
4. When creating a Masking Rule, add the new user in the Filter Sessions section as an Application User:
5. Connect through a proxy to the target database, execute the query for the user first:
Parameter Description
<datasunrise server name> IP address or host name of the DataSunrise server; 11000 is the port number of DataSunrise's Web Console
<instance name> Name of DataSunrise instance (for example, an AWS instance)
<database name> Name of the database configured to be used with DataSunrise (RDS database for
example)
<database user> Database user name (login) you can use to connect to your database
<database password> Database user password you can use to connect to your database
disabled_proxy_is_error Show an error if the checked proxy is disabled
force_interface_check If True (1), health checking ends with the error that was displayed when checking an
Interface
Note: If a login and password are saved for a certain instance, you can skip them in the URL; otherwise you will get
"error 500" with a corresponding message. The server returns "warning 200" on success.
When checking all instances, the health checker checks all DataSunrise proxies, and if a proxy does not respond, it
returns "error 500" with a corresponding message.
2. You can use the following URL to check all proxies on all instances:
/healthcheck/all_instances
Note: if login/password are not saved in the instance's settings, this particular instance will not be checked.
/healthcheck/general
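A simple way to poll these endpoints from a monitoring script is shown below; the host, port and HTTP scheme are placeholders, 11000 being the default Web Console port:

import requests

BASE_URL = "http://datasunrise.example.com:11000"  # placeholder DataSunrise server

for path in ("/healthcheck/all_instances", "/healthcheck/general"):
    response = requests.get(BASE_URL + path, timeout=10)
    # 200 means the checked proxies responded; 500 comes with a message describing the failure
    print(path, response.status_code, response.text[:200])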
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"cloudwatch:PutMetricData",
"ec2:DescribeTags"
],
"Effect": "Allow",
"Resource": [
"*"
]
}
]
}
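To sanity-check that the role attached to the DataSunrise host actually allows these actions, you can push a test metric with boto3; the namespace, metric name and region below are arbitrary test values, not the ones DataSunrise itself uses:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

# Succeeds only if the attached policy grants cloudwatch:PutMetricData
cloudwatch.put_metric_data(
    Namespace="HealthCheckTest",
    MetricData=[{"MetricName": "PolicyProbe", "Value": 1.0, "Unit": "Count"}],
)
print("PutMetricData allowed")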
Metric Description
AuditProcessingSpeed Speed of query processing by the audit journal (operations/sec)
OE883B7OD5B6E3EE37D37198049C9507C8383DB6 #app2
• Click Add, the Hash will be added as an authentication characteristic with the information that you specified.
8. Specify the application’s Allowed Machines. This information enables the Credential Provider to ensure that only
applications that run from the specified machines can access their passwords.
• In the Allowed Machines tab, click Add, the Add allowed machine window will be displayed.
• Specify IP/hostname/DNS of the machine where the application will run and will request passwords, then click
Add, the IP address will be listed in the Allowed machines tab. Make sure the allowed servers include all
mid-tier servers or all endpoints the AAM Credential Providers are installed on.
Note: for more information about adding and managing privileged accounts, refer to the “Privileged Account
Security Implementation Guide”.
2. Add the Credential Provider and application users as members of the Password Safes where the application
passwords are stored. This can either be done manually in the Safes tab, or by specifying the Safe names in a
CSV file if you want to add multiple applications.
3. Add the Provider user as a “Safe Member” with the following privileges:
• List accounts
• Retrieve accounts
• View Safe Members
4. Add the application (DataSunriseDBSecurity) as a Safe Member with the following authorizations:
• Retrieve accounts
To enable the Credential Provider, check application’s Authentication details:
• In the Authentication tab, click Add; a drop-down list with authentication characteristics included will be
displayed.
• Select an authentication characteristic to specify.
5. If your environment is configured for dual control:
• In PIM-PSM environments (v7.2 and lower), if the Safe is configured to require confirmation from authorized
users before passwords can be retrieved, give the Provider user and the application the following permission:
Access Safe without Confirmation
• In Privileged Account Security solutions (v8.0 and higher), when working with dual control, the Provider user
has access without confirmation, thus, it is not required to set this permission.
Note: for more information about configuring Safe Members, refer to the “Privileged Account
Security Implementation Guide”.
6. If the Safe is configured for object level access, make sure that both the Provider user and the application have
access to the password(s) to be retrieved.
export LD_LIBRARY_PATH=/opt/datasunrise
sudo ./AppBackendService DICTIONARY_APPLICATION_ID=<Dictionary application ID>
DICTIONARY_TYPE=<Dictionary DB type>
DICTIONARY_HOST=<Dictionary IP address>
DICTIONARY_PORT=<Dictionary port number>
DICTIONARY_DB_NAME=<Dictionary DB name>
DICTIONARY_LOGIN=<User name to access the Dictionary>
DICTIONARY_PASS_QUERY="Safe=<CyberArk Safe name>;Folder=<CyberArk Folder name>;Object=<CyberArk
Object name>"
FIREWALL_SERVER_HOST=<DataSunrise server IP address>
FIREWALL_SERVER_BACKEND_PORT=<DataSunrise Backend's port number (11000 by default)>
FIREWALL_SERVER_CORE_PORT=<DataSunrise Core's port number (11001 by default)>
FIREWALL_SERVER_NAME=<DataSunrise server name (any)>
FIREWALL_SERVER_BACKEND_HTTPS=1
FIREWALL_SERVER_CORE_HTTPS=1
19.1 Overview
The Self-Service Access Request (SSAR) functionality enables database users trying to access database objects
protected by DataSunrise to request access to these objects from DataSunrise administrators. A DataSunrise
administrator, having received a request, can decide whether to approve or decline it. If a request is approved,
the database user that sent it is added to the allow list of the Rule that is protecting the requested database objects.
6. The database user follows the link, fills out a request and sends it to a DataSunrise administrator
7. For available access requests, navigate to Security → Requests. Locate the request of interest in the list and
click Show
8. You can see general information about the request and the objects the user tried to access in the General Info section
9. In the bottom section of the page you can manage available access requests by selecting database objects to
grant Read-Only or Read/Write rights on to the particular user
10. Having finished, approve or decline the request by clicking the corresponding button
11. Note that you can revoke access rights any time by navigating to the settings of the request of choice and
clicking Revoke.
20 Frequently Asked Questions
DataSunrise Updating
Q. I can't update my DataSunrise. I run a newer version of the DataSunrise installer, but the installation
wizard is not able to locate the old DataSunrise installation folder.
Run the DataSunrise installer in Repair mode. It removes the previous installation and updates your DataSunrise to a
newer version.
Q. I've updated DataSunrise and I get the following error:
Now DataSunrise uses a new method of getting metadata. Do the steps mentioned here: Editing a Target Database
Profile on page 61
Q. I'm trying to enter the Web Console after DataSunrise has been updated, but it displays the following:
Most likely, you kept the Web Console tab open in your browser while updating the firewall. Log out of the Web
Console if necessary and press Ctrl + F5 to refresh the page.
Databases
Q. When connecting to Aurora DB, the MySQL ODBC driver stops responding.
Most probably, you're using ODBC driver version 5.3.6, which is known to cause freezes from time to time. Install
MySQL ODBC driver version 5.3.4.
Q. I'm using DataSunrise in Sniffer mode and get the following messages in the Event Monitor:
The current version of the DataSunrise sniffer supports TLS v.1.0 only. You need to downgrade the TLS version on the
server side. Create two keys in the registry:
[HKEY_LOCAL_MACHINE][System][CurrentControlSet][Control][SecurityProviders][SCHANNEL][Protocols][TLS
1.1][Server]
[HKEY_LOCAL_MACHINE][System][CurrentControlSet][Control][SecurityProviders][SCHANNEL][Protocols][TLS
1.2][Server]
DisabledByDefault=1
Enabled=0
[HKEY_LOCAL_MACHINE][System][CurrentControlSet][Control][SecurityProviders][SCHANNEL]
This notification is displayed when a sniffer captured a large amount of traffic on SSL sessions started before
the DataSunrise service had been started. By default, the volume of captured traffic should not exceed 10 MB
(pnMsSqlDelayedPacketsLimit parameter).
Sometimes this notification can be displayed if there is a huge load on the pcap driver, so the sniffer can capture too
much delayed traffic. In this case you need to increase the pnMsSqlDelayedPacketsLimit parameter's value.
Q. I need to use an SSL certificate for database connection. What are my options?
Turn off certificate validation for the connection in your client application (Sisense). For example, you can check Trust
Server Certificate in your client software.
In your environment, you can use a certificate for DataSunrise generated by your CA from the root certificate.
Generate a self-signed certificate and copy it to your client system.
Q. I'm trying to establish a connection to a DataSunrise proxy created for an Amazon Redshift database, but
receive the following error:
This issue is caused by DataSunrise's self-signed certificate which is used by default to handle encrypted
connections. The problem is that some client applications perform strict certificate check and don't accept self-
signed certificates.
You can solve this issue with the following methods:
• Allow usage of self-signed certificates in your client application
• Issue a certificate using your corporate Certification Authority and paste the certificate into the proxy.pem file
• Generate a self-signed certificate and allow usage of root certificates in your database connection string (e.g.
sslrootcert=/path/to/certificate.pem).
More on proxy certificates here: SSL Key Groups on page 106.
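For the third option, a connection sketch with psycopg2 is shown below; the host, credentials and certificate path are placeholders:

import psycopg2

# Trust the DataSunrise proxy certificate explicitly instead of disabling validation
conn = psycopg2.connect(
    host="datasunrise-proxy.example.com", port=5439,
    dbname="dev", user="awsuser", password="secret",
    sslmode="verify-ca",
    sslrootcert="/path/to/proxy.pem",  # the proxy's certificate (or the CA that issued it)
)

with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())

conn.close()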
Q. I've configured Google-based two-factor authentication, but I can't authenticate in the target database.
Probably, your smartphone and the database server are working in different time zones. They should work in the
same time zone, so synchronize the time zones and time.
Q. I can't create an SAP Hana Database instance in DataSunrise because of the following error:
DRIVER=HDBODBC;SERVERNODE=192.168.1.244:39017;UID=SYSTEM;PWD=mawoi3Nu;DATABASENAME=SYSTEMDB;CHAR_AS_UTF8=true;
Q. I'm trying to establish a connection between DataSunrise and an Oracle database but get the following
error:
Warning| Could not connect to the database. Error: Couldn't load libclntsh.so.
Ensure that you have Oracle Instant Client installed (see the corresponding Admin Guide) and create a
corresponding .conf file:
General Audit Queue In Thread #x' is filled for more than XX%. The current level is XX%
This message indicates that your Audit Storage database can't process events in a timely manner (AuditQueue is less
than AuditHighWaterMark). To get rid of these errors, you can do the following:
• Increase Audit Storage database performance: enlarge CPU, RAM, change HDD to SSD
• Decrease the amount of data to audit:
• Audit activity on business logic objects (where PII data is stored)
• Audit only those queries you need to monitor
• Use Filter Sessions to specify conditions to log events (skip ETL/OLTP/service applications activity, for example)
• Adjust your Audit Storage parameters for better performance. Note that DataSunrise doesn't provide any
guidelines on how to do that.
Audit Rules
Q. If Local Syslog is enabled, where does log data get written to?
By default, AWS EC2 is configured to write to /var/log/messages. You have to enable the Syslog service in your
system if it's not done yet. For Local Syslog messages you can select the default Syslog Configuration in your Rules'
settings.
Q. How can I audit DQL, DML, DDL and TCL queries?
In the DataSunrise's Web Console, navigate to Audit → Rules. Then create a new Rule and in the Filter Statements
subsection, change filter type to Admin Queries. Click Add Admin Query and select queries to add to the filter.
Q. My query doesn't trigger the Rule I set up. What's wrong?
Before reaching our Support Team, please check the following:
• DataSunrise deployment scheme: Proxy, Trailing or Sniffer. Note that the Sniffer doesn't work with SSL/TLS
encrypted connections except MS SQL Server
• Basic checks:
• A valid license should be installed. DataSunrise with an expired license doesn't block/audit/mask queries but
just passes traffic without any processing
• Check your problematic Rule:
• Filter Sessions: if not empty, make sure it matches what you're trying to achieve
• Filter Statements: if not empty, ensure the action/user/application matches the list of SQL query types/
CRUD operations and/or the Objects (or Groups) selected
• You can try debugging: enable Log Event in Storage in your Rule's settings if disabled to see if a new entry
is generated in the corresponding Events list. You can also enable Rules Trace and check how your query is
processed
• DataSunrise specific:
• Proxy: ensure that your user is connecting through your DataSunrise Proxy
• Sniffer: check if SSL/TLS is used or any database-specific transport encryption (for example, Oracle Native
Encryption). Note that only the MS SQL Server Sniffer supports encrypted traffic processing
• Trailing: check if Native Audit is configured to capture expected actions
• Advanced checks
• Check if there are no PARSER ASSERT messages in the Core log files of the problematic worker.
If none of the aforementioned helps, contact our Support Team.
Masking Rules
Q. When performing Dynamic masking with the Fixed String method, the target database returns the
original unmasked value instead of a masked string.
Most probably, the table which is being masked was created by a user connected to the database directly (not
through the DataSunrise proxy). You should update your database's metadata (Editing a Target Database Profile on
page 61) before creating a Data Masking Rule.
Q. I'm using Static Masking on an Oracle database and get the following error:
Q. I've created a Dynamic Masking Rule for Informix and have selected the Email masking method, but when
I try to execute a query I get the following error:
Informix doesn't include some functions required for email masking. Refer to Informix Dynamic Masking Additional
Info on page 179
Q. I'm hosting DataSunrise on Windows. I try to configure dynamic masking for Unstructured files but get
the following error:
Code: 10 The JVM was not initialized: Please check the documentation for setting up the JVM
If you're experiencing some problems with JVM on Windows, add the path to your JVM folder to the PATH
environment variable. For example:
C:\Program Files\Java\jre1.8.0_301\bin\server
Q. I'm trying to perform In-place Static Masking on my database and get the following error:
The last In-Place Static Masking task was performed unsuccessfully. Probably, database objects could be
left in an inconsistent state. It's recommended to restore your database from a backup copy.
It means that your database may contain duplicates (masked original tables that haven't been renamed, table
constraints may be deleted or named in a different way).
Q. When I'm using loader DBLink for PostgreSQL 10 version, the static masking task ends with the following
error:
DBLink must be located in the target database. For the extension to be found, it must be installed in the public
schema. To find out in which schema the extensions are located, execute the following query:
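One way to list installed extensions together with their schemas is the catalog query shown in this psycopg2 sketch (connection details are placeholders; your DBA may prefer a different query):

import psycopg2

conn = psycopg2.connect(host="target-db.example.com", dbname="targetdb",
                        user="postgres", password="secret")

with conn.cursor() as cur:
    # Lists every installed extension together with the schema it lives in
    cur.execute("""
        SELECT e.extname, n.nspname AS schema
        FROM pg_extension e
        JOIN pg_namespace n ON n.oid = e.extnamespace
        ORDER BY e.extname;
    """)
    for extname, schema in cur.fetchall():
        print(extname, schema)  # dblink should report the "public" schema

conn.close()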
Other
Q. On Ubuntu, when creating a Server for Subscribers, if I select certificate type "Signed", I get an error:
The problem is that the root certificate is placed in a different location. Add the following string to the /etc/
datasunrise.conf file:
For example, on Ubuntu the root certificate file is located at /etc/ssl/certs/ca-certificates.crt
Q. My Dictionary and/or Audit Storage are located in the integrated SQLite database and I get the following
message:
SQLITE_BUSY
It's not an error! SQLite supports only one writer (a Backend/Core thread) at a time, so when one process accesses
the DB file for a write operation, the others have to wait and receive the SQLITE_BUSY message.
Let's take a look at two scenarios:
• Audit Storage: more than one proxy with Audit/Learning Rules and/or Security/Masking Rules with the Log event
in Storage option enabled. In this case, you can check the Core log files for the SQLITE_BUSY message. Another
option is to check Monitoring → Queues → Audit queue length. You have a problem if the graph constantly rises
to the Watermark.
To solve this issue, disable Log events in storage in your Security/Masking Rules and disable your Audit/Learning
Rules.
• Dictionary: an Update Metadata task or a Table Relations task (of any type) is running.
To solve this issue, wait for the task to be completed.
Another solution is to transfer your Dictionary and/or Audit Storage to another database type supported by
DataSunrise.
Q. I'm getting the following warning:
The free disk space limit for audit is reached. The current disc space amount is XXX MB. The disk space
limit is 10240 MB
If you want to decrease the disk space threshold for this warning, navigate to System Settings → Additional and
change the "LogsDiscFreeSpaceLimit" parameter's value, for example from 10240 to 1024 MB.
Q. I'm trying to decrypt a PostgreSQL table I encrypted before but getting the following error:
SQL Error [39000]: ERROR: decrypt error: Data not a multiple of block size
Where: PL/pgSQL function ds_decrypt(bytea,text) line 6 at RETURN
This means that somebody edited your encrypted table's contents directly, bypassing your DataSunrise proxy. This
change is irreversible and your encrypted table can't be decrypted.
Q. I'm trying to export a big number of resources to a Resource group with Resource Manager but get the
following error:
Navigate to System Settings → Additional Parameters. Locate the DictionaryAuditOtlLongSize parameter and set
its value to 8192.
Q. I'm trying to audit Oracle queries but get the following error:
This problem occurs on DataSunrise 6.3.1 when it has been updated from version 5.7 or lower. Update your
database's metadata to get rid of the problem.
Q. I configured a MySQL database to be used as the Dictionary and Audit Storage. I get the following error:
In InnoDB, row-level locks are implemented with a special lock table located in the buffer pool: a small record is
allocated for each locked page, and a bit is set in it for each row locked on that page. If the buffer pool overflows,
the aforementioned error is thrown. The MySQL "innodb_buffer_pool_size" parameter's recommended value is 3/4 of
your RAM size. To get rid of that error, execute the following command:
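For instance, on MySQL 5.7.5 and later, where innodb_buffer_pool_size can be changed at runtime, a statement along
these lines should work (the 2 GB value is only an example and matches the my.cnf snippet below):
SET GLOBAL innodb_buffer_pool_size = 2147483648;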
or edit the mysqld section of the my.cnf (Linux) or my.ini (Windows) file in the following way:
[mysqld]
innodb_buffer_pool_size = 2147483648
Q. I want to delete audit data manually from my Audit Storage database. Can I do it?
Yes, you can, but not for SQLite. For other databases, to delete audit data manually, you need to derive the
SESSION_ID corresponding to the date before which you want to remove all events. Use the following Python script
to get the SESSION_ID:
from datetime import datetime

# Base offset used by the script: 1451606400000 ms corresponds to 2016-01-01 00:00:00 UTC
BASE_TIME = 1451606400000
# All audit events recorded before this date will be removed
remove_before_date = "2022-10-19 10:15:20"
dt_obj = datetime.strptime(remove_before_date, '%Y-%m-%d %H:%M:%S')
timestamp = dt_obj.timestamp() * 1000
timestampWithDiff = timestamp - BASE_TIME
# Derived SESSION_ID threshold to use in the DELETE queries below
result = (timestampWithDiff / 10) * 10000
print(result)
Once you have your SESSION_ID value and are satisfied with the REMOVE_BEFORE_DATE value, execute the following
queries in your Audit Storage:
DELETE FROM sessions WHERE end_time IS NOT NULL AND end_time < '<remove_before_date>';
DELETE FROM operation_exec WHERE end_time IS NOT NULL AND end_time < '<remove_before_date>';
DELETE FROM transactions WHERE end_time IS NOT NULL AND end_time < '<remove_before_date>';
DELETE FROM operations WHERE end_time IS NOT NULL AND end_time < '<remove_before_date>';
DELETE FROM connections WHERE end_time IS NOT NULL AND end_time < '<remove_before_date>';
DELETE FROM app_sessions WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM long_data WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM session_rules WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM operation_sub_query WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM operation_rules WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM operation_meta WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM operation_dataset WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM operation_data WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM lob_operation WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM col_objects WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM tbl_objects WHERE session_id < <derived_session_id_as_a_number>;
Note: deleting data like that generates bloat. Consider running VACUUM FULL ANALYZE or configuring
autovacuum to run periodically to catch up with the changes made to the storage by the DELETEs.
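For instance, assuming a PostgreSQL-based Audit Storage, some of the tables from the DELETE list above could be
vacuumed manually (a sketch; extend the list to the remaining tables as needed):
VACUUM (FULL, ANALYZE) sessions;
VACUUM (FULL, ANALYZE) operations;
VACUUM (FULL, ANALYZE) operation_exec;
VACUUM (FULL, ANALYZE) operation_data;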
21 Appendix 1
Note: The default DataSunrise identifier (Enterprise OID) is 1.3.6.1.4.1.7777. Thus, the following table displays
events' OIDs based on the default Enterprise OID.
Notifications:
• Configuration change events: Trap OID 1.3.6.1.4.1.7777.0.1.1. Notifications on changes in DataSunrise configuration
• Authentication events: Trap OID 1.3.6.1.4.1.7777.0.1.2. Notifications on user authentication events (successful
authentication, authentication errors)
• Core events: Trap OID 1.3.6.1.4.1.7777.0.1.3. Notifications on DataSunrise Core events (start, stop, restart)
• Backend events: Trap OID 1.3.6.1.4.1.7777.0.1.5. Notifications on DataSunrise Backend events
Note: each ID number consists of the "DS_" prefix (meaning "DataSunrise"), five digits and a postfix ("I" for info, "E" for
error, "W" for warning). The first digit of the ID defines the group of events (Configuration, Core, etc.). The second digit
defines the level of the event (1 for error, 2 for warning, 3 for info). The last three digits are the event's number.
DRIVER={<ODBC_DRIVER_NAME>};SERVER=<server_address,port_number>;DATABASE=<db_name>;UID=<login>;PWD=<password>
• Hive:
• Cassandra:
Host=<server>;Port=<port_number>;AuthMech=1;UID=<user_name>;PWD=<password>;
• IBM DB2:
Driver={<ODBC_DRIVER_NAME>};Database=<database>;Hostname=<server_address>;Port=<port_number>;
[Uid=<user_name>;Pwd=<password>;[Hostname/IpAddress=val;]]
[Protocol=TCPIP;Authentication=KERBEROS;TargetPrinciple=val;]
• Impala:
DRIVER=<ODBC_DRIVER_NAME>;Host=<server_address>;Server=<server_name>;
Service=<port_number>;Protocol=olsoctcp;Database=<database>;Uid=<user_name>;Pwd=<password>;
• MongoDB:
mongodb://[<username>:<password>@]<host1>[:<port1>][,<host2>[:<port2>],... [,<hostN>[:<portN>]]][/
[<db_name>][?<property_name1>=<value>&<property_nameN>=<value>]]
• MySQL, X Protocol:
mysqlx://[<login>[:<password>]@][<hosts>[:<port>]][/<database>] [?
<property_name1>=<value>&<property_nameN>=<value>]
• Netezza:
DRIVER={<ODBC_DRIVER_NAME>};SERVERNAME=<server_address>; PORT=<port_number>;DATABASE=<db_name>;
USERNAME=<user_name>;PASSWORD=<password>;LOGINTIMEOUT=<connect_timeout_in_sec>;
• Oracle:
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<server_address>) (PORT=<port_number>))
(CONNECT_DATA=(SERVER=DEDICATED)(SID=orcl)))
• SAP Hana:
DRIVER=<ODBC_DRIVER_NAME>;SERVERNODE=<server_address>:30013;
UID=<user_name>;PWD=<password>;DATABASENAME=<db_name>;
• Teradata:
Driver=<ODBC_DRIVER_NAME>;DBCName=<server_address>;Database=<db_name>;
Uid=<user_name>;Pwd=<password>;TDMSTPORTNUMBER=<port_number>;DATAENCRYPTION=y;
• Vertica:
Driver=<ODBC_DRIVER_NAME>;Server=<server_address>;Port=<port_number>;
Database=<db_name>;Uid=<user_name>;Pwd=<password>; ConnSettings=SET+SESSION+IDLESESSIONTIMEOUT
+'60+sec';SSLMode=prefer;