Ciop Img v2024
Implementation Guide
Adaptive UX
70-3349-2024
Adaptive UX 2024
March 2024
This document contains proprietary information that is protected by copyright and other intellectual property laws. No part
of this document may be reproduced, translated, or modified without the prior written consent of QAD Inc. The information
contained in this document is subject to change without notice.
QAD Inc. provides this material as is and makes no warranty of any kind, expressed or implied, including, but not limited
to, the implied warranties of merchantability and fitness for a particular purpose. QAD Inc. shall not be liable for errors
contained herein or for incidental or consequential damages (including lost profits) in connection with the furnishing,
performance, or use of this material whether based on warranty, contract, or other legal theory.
This document contains trademarks owned by QAD Inc. and other companies.
QAD Inc.
100 Innovation Place
Santa Barbara, California 93108
Phone (805) 566-6000
https://www.qad.com
Table of Contents
QAD Adaptive UX Implementation Guide
    Adaptive UX Installation
    Recommended Web Browsers
    Security Configuration
    Role Permissions Configuration
    Quality Orders - Hiding the Specification and Specification Detail Fields
    Browse Performance Controls
    Custom Browses and Drill Downs
    Action Centers
        Logi Analytics Technical Overview
        Query Service Technical Overview
        Action Center Key Components
        Action Center Installation
        Action Center Security
        Action Center Configuration
        Action Center Maintenance and Troubleshooting
        Action Center Disaster Recovery
        Logi Composer Migration Guide
            Migration Overview
            Prepare for Migration
            Run Automated Migration
            Review and Repair Migration Gaps
            Review and Repair Migration Errors
            Clean Up Action Centers and Visuals
            Complete the Migration Process
            Special Migration Procedures
    Global Order Management Distribution Processing
    TAM Conversion and Implementation
    QAD CRM Calendar Integration
    QAD CRM Email Integration
    Implementation FAQ
        Operating System Sizing
        Operating System Configuration
        Installing Adaptive UX
Adaptive UX Implementation Guide
Before proceeding with any installation or implementation tasks, be sure to read the release notes.
Adaptive UX Installation
For QAD Adaptive ERP with Adaptive UX, QAD performs and manages the installation. For on-premise installations of
Adaptive UX, see the QAD Adaptive ERP On-Premise Installation Guide, available for early adopters in the QAD
Document Library.
Recommended Web Browsers
To make sure you have the latest security updates, set your Chrome or Safari browser to receive automatic updates from
Google or Apple.
For tablet use, the user interface is only supported on iPad Pro (or newer equivalent) with the Safari web browser.
Although other tablets can be used, you may experience differing levels of performance and user experience.
Security Configuration
QAD Enterprise Platform includes comprehensive security features. For more information about security, see the QAD Security Administration Guide, located in the QAD Document Library.
Role Permissions Configuration
For more information about user and role configuration, read the QAD Security Administration Guide, Users and Roles
(chapter 6), located in the QAD Document Library.
Quality Orders - Hiding the Specification and Specification Detail Fields
To hide the specification fields in every browse and grid in Quality Orders, you will need to use Role Permissions and
the Display Specification for Results Entry options in Inventory Control and Quality Control. See the following table for
more information:
Role Permissions
    Applies to: Quality Orders (Lot and Quality Type)
        Quality Orders > Attributes panel (grid and details form)
        Quality Orders > Test Records > Test Attributes panel (grid and details form)
    Effect: The Specification and Specification Details fields are hidden for the specified roles that do not have read access.

Inventory Control > Display Specification for Results Entry field
    Applies to: Quality Orders (Lot Type)
        Quality Orders > Attributes Details screen > Attributes browse
    Effect: The values in the Specification and Specification Details fields are blank for all users, regardless of role.

Quality Control > Display Specification for Results Entry field
    Applies to: Quality Orders (Quality Type)
        Quality Orders > Test Records Details screen > Test Attributes browse
        Quality Orders > Attributes Details screen > Attributes browse
    Effect: The values in the Specification and Specification Details fields are blank for all users, regardless of role.
1. In Role Permissions, select the role for which the specification fields will be hidden.
2. In the Search field, enter the URI for the components that will be affected:
Quality Orders > Attributes panel
    URI: urn:fg:com.qad.quality.qualityorders.IQualOrderAttrV2:mfg-SPECIFICATION
    Effect: Hides the Specification and Specification Details fields in the Attributes panel (grid and details form) for quality orders (lot and quality type).

Quality Orders > Test Records > Test Attributes panel
    URI: urn:fg:com.qad.quality.qualityorders.IQualOrderTestAttr:mfg-SPECIFICATION
    Effect: Hides the Specification and Specification Details fields in the Test Attributes panel (grid and details form) for quality orders (quality type).
3. To hide both fields, select Deny for Read and Write for the two URIs listed in the table.
4. Select Save. The specification fields will now be hidden in the Quality Orders > Attributes panel and Quality
Orders > Test Records > Test Attributes panel.
1. Hide the specification fields in the Attributes panel for quality orders (lot and quality type):
a. In Quality Orders, select a quality order (lot or quality type) and open the Attribute details screen.
b. From the More drop-down menu, select Permissions.
c. In the Role Permissions screen, select the desired role.
d. Then select Quality Attribute Details > Field Groups > Specification. The Specification and Specification
Details fields are grouped together under the Specification field group. To hide both fields, select the
Specification field group and then select Deny.
e. Select Save.
f. For the specified role, the Specification and Specification Details fields will now be hidden in the Quality
Orders > Attributes panel.
2. Hide the specification fields in the Test Attributes panel for quality order (quality type):
a. In Quality Orders, select a quality order (quality type) and open the Test Records > Test Attributes
details screen.
b. From the More drop-down menu, select Permissions.
c. In the Role Permissions screen, select the desired role.
d. Then select Quality Attribute Details > Field Groups > Specification. The Specification and Specification
Details fields are grouped together under the Specification field group. To hide both fields, select the
Specification field group and then select Deny.
e. Select Save.
f. For the specified role, the Specification and Specification Details fields will now be hidden in the Quality
Orders > Test Records > Test Attributes panel.
Browse Performance Controls
qad-qracore.bcbrowse.timeouts=
This property consists of a comma-separated list of timeouts for particular BC browses, in the format browseURI[timeout], where the timeout is specified in milliseconds.
Browse-specific timeouts override the default timeout. If you set this property for an individual browse to a value less than
the system default, the browse will time out at the lower value regardless of the system setting.
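For example, a hypothetical setting giving two browses their own timeouts might look like the following. The browse URIs here are placeholders; use the URIs of the actual BC browses in your environment:

```properties
# Hypothetical example: 30-second and 60-second timeouts for two BC browses
qad-qracore.bcbrowse.timeouts=urn:browse:example-one[30000],urn:browse:example-two[60000]
```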
Custom Browses and Drill Downs
When defining a custom browse, make sure that the name of the browse (the label) is unique to avoid user confusion. The browse naming convention in the QAD .NET UI is for the browse name to end with "Browse," while in the QAD Web UI the menu item type is indicated by the menu type icon, so the name should not end with "Browse." To ensure consistency, the Menu System Maintenance Label field setting should match the Browse Maintenance Description Term field, without "Browse" at the end of the QAD Web UI string.
For the drill-down links to be included on a QAD Web UI hybrid view, you first need to identify the browse used by the
hybrid view of interest. To identify the browse used by a hybrid view, locate the browse on Role Permission Maintain's
Secured Resources tab and get the browse identifier. For example, the browse used with the Sales Orders hybrid view is
identified as "so803.p".
With the browse identifier, you can then define the drill-down links for it using Drill-Down/Lookup Maintenance. Note that in
Drill-Down/Lookup Maintenance, in the Calling Procedure field, the browse identifier includes "br". For example, for "so803.
p", in the Calling Procedure field, you enter "sobr803.p". You can then specify the Procedure to Execute.
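The identifier transformation can be sketched as follows. Note that the only example given in this guide is "so803.p" becoming "sobr803.p"; the general rule of inserting "br" between the alphabetic prefix and the numeric portion is an assumption and should be verified against your environment:

```python
import re

def to_calling_procedure(browse_id: str) -> str:
    """Derive the Drill-Down/Lookup Maintenance Calling Procedure value
    from a browse identifier, e.g. "so803.p" -> "sobr803.p".
    Assumed rule: insert "br" between the letter prefix and the rest."""
    match = re.match(r"^([A-Za-z]+)(\d.*)$", browse_id)
    if not match:
        raise ValueError(f"unrecognized browse identifier: {browse_id}")
    prefix, rest = match.groups()
    return f"{prefix}br{rest}"

print(to_calling_procedure("so803.p"))  # sobr803.p
```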
Once the link has been defined in Drill-Down/Lookup Maintenance, the change is included on the QAD Web UI after an
administrator runs the following YAB commands:
Action Centers
This section describes the architecture and components supporting the Action Centers, along with instructions for
installing, configuring, and troubleshooting them.
It does not comprehensively cover installation or administration, but highlights points specific to the Action Centers and
refers to other guides as needed.
Logi Analytics Technical Overview
For releases prior to September 2021, Logi Info is provided, but is enabled for read-only use so that existing Action
Centers provided by QAD or built by the users can still be displayed. For the September 2019 through March 2022
releases, all KPIs created in the KPI screen of the Web UI automatically use Logi Platform Services for creating and
displaying visuals. The visuals for existing KPIs that were created using Logi Info can be displayed but not
changed. Instead, the KPI must be copied to a new KPI that uses Logi Platform Services and the visuals re-created using
the Logi Platform Services functionality.
As of the September 2021 release, Logi Info is not supported and KPIs created using Logi Info can no longer be displayed.
As of the September 2022 release, Composer is the primary analytics framework and the only one supported for new AUX
installations. Customers upgrading from earlier releases have the option to continue using Logi Platform Services,
although Composer is the default option. When upgrading an AUX environment to Composer, the customer's existing
Action Centers and visuals are automatically converted for use with Composer, although some manual rework is still
required. Once AUX has been installed with Composer, it is not possible to revert the environment back to Logi Platform
Services. QAD plans to retire Logi Platform Services in a future AUX release.
The technical overview describes the high-level architecture of Composer, Logi Platform Services, and Logi Info.
Composer
Composer has a multi-tiered architecture built around a suite of microservices implemented in Java, using a PostgreSQL
database to persist all configurations.
The web-based user interface supports the HTML5 standard for display and the WebSockets communications protocol, and is implemented using ReactJS and native JavaScript. The use of WebSockets provides faster transmission of analytics data to and from the browser client than the approaches used in earlier Logi frameworks.
The services tier is made up of an extensible mesh of microservices, from which QAD has selected the ones needed to
support current AUX Action Center capabilities: Query Engine, Configuration, and Service Discovery. Other microservices
may be added in future releases.
Composer offers a wide set of data connectors, also configured as separate microservices, to access different databases
and other data sources. For AUX, two data connectors are used: PostgreSQL (required) and SparkSQL. The SparkSQL
data connector reads analytics data directly from the Apache Spark ThriftServer through SQL queries.
Logi Platform Services (LogiPS)
The LogiPS architecture comprises four components: the Front End, Web Tier, Data Services, and Back End.
LogiPS provides system administrators and content creators with an easily secured and configured system that can be
embedded in web sites and applications without the use of iFrames. Visualizations, dashboards, and reports are created
and stored in libraries, and the required code for embedding them is generated on request.
LogiPS uses an advanced data retrieval technology based on the Dataview. A Dataview is defined by a JSON document
that specifies connection information, query details, and data enrichment options. This Dataview Definition (DVD) is stored
in a system database and executed at runtime.
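As an illustration only, a Dataview Definition might carry information along these lines. The field names below are invented for this sketch; the actual DVD schema is not documented in this guide:

```json
{
  "name": "sales-orders-dataview",
  "connection": {
    "type": "jdbc",
    "url": "jdbc:postgresql://host:5432/source"
  },
  "query": {
    "statement": "SELECT region, amount FROM sales_orders"
  },
  "enrichment": {
    "derivedFields": []
  }
}
```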
When executed at the server, a Dataview retrieves data, caches it in a self-tuning columnar data store, and makes it available for use with visualizations.
For Action Centers, the data is retrieved from the Query Service with SQL queries, as referenced in later sections of this
document.
Logi Info
Logi Info is a framework for developing and displaying analytical data embedded inside a host application, in this case
QAD Adaptive Applications. The resulting functionality is packaged and deployed as a web application, rendering
visualizations and serving them to the browser as HTML pages.
A Logi application consists of web page sources known as 'report definitions.' Some of these pages provide robust self-
service features that allow end users to define their own charts and cross-tabulation grids interactively, given the source
data and metadata. These visuals can then be published and added to user-defined dashboards (Action Centers).
Logi Info applications separate the development, data access, and presentation processes, as shown in the graphic.
1. Report Definitions are text files that contain the information describing report layout and contents, stored as XML
documents. While it is possible to create and edit definitions with any text editor, Logi Studio provides an
integrated development environment with tools and helpful wizards that do much of the coding for you, reducing
development time and effort.
2. When a report page is requested by a user, the Logi Server Engine, on the web server, processes the report
definition and accesses whatever data sources are required. A wide variety of data sources are supported and
data caching is used to speed up performance.
3. The Logi Server Engine formats the retrieved data and presentation details based on the definition and
accompanying style sheets, generates HTML and JavaScript, and returns the report page to the user's browser
for viewing.
There are two main data sources accessed by Logi Info to populate the Action Centers.
1. Browses: The same kinds of browses that can be defined by end users and displayed from the standard menu
can also be used to retrieve data for the Action Centers through APIs called by Logi Info. The data sets
consumed by Logi Info consist of the records returned by the browses.
2. Financials Report Writer (FRW) KPIs: The Financials Report Writer allows financial users to define key
performance indicators using the enterprise's financial data, chart of accounts, and reporting structure. This
information is also provided to Logi Info through APIs.
Query Service Technical Overview
Key Concepts
Reasons for Change
Previous Action Center releases had performance limitations when summarizing and displaying large amounts of source
data, principally the very large result sets returned by some browses. In particular, performance degraded rapidly as the
number of records returned by a browse increased. To mitigate this problem, the source data that could be included in
Action Centers was limited to 5,000 records. The performance limitations have several causes:
The overhead of processing browse requests in Progress AppServer agents, which often take a long time to
complete and have high CPU usage. AppServer-based solutions are therefore difficult to scale up for high-
volume environments.
The high memory usage and slow processing time required in the Action Center web application to load and
display very large data sets. The Logi Info software powering the Action Centers is optimized for self-service
visualization and rendering, not high-volume data grouping and aggregation.
To address these issues, the architecture supporting data queries, post-processing, and retrieval was implemented using
a Query Service.
Query Service
The original browse data retrieval engine was supplemented and partially replaced by a new infrastructure known as the
Query Service. The Query Service incorporates several concepts and third-party components to bring required source
data into the Action Centers more quickly and in a more compact, usable form.
In the case of BC browses, infrastructure is provided to retrieve the OpenEdge source data through JDBC connections,
rather than ABL code running in a Progress AppServer. This approach significantly reduces the CPU overhead and
processing time to retrieve the browse data. It also leverages open SQL and JDBC standards, as opposed to the more
closed and proprietary Progress AppServer architecture.
While current QAD-defined KPIs are based on MFG and FIN browses that rely on the older AppServer-based browse
engine, in future releases this is expected to change as user-defined business components are more widely used and
legacy browses are migrated to the newer infrastructure with SQL-based retrieval.
Data Lake
Data feeding the Action Centers is copied from its OpenEdge sources into a separate data lake repository built on Apache
Cassandra. Cassandra stores the information in de-normalized form, where the relational data is joined and flattened
before being written. Cassandra's columnar data structure is optimized for immutable (read-only) storage and fast
retrieval, unlike relational databases such as OpenEdge, which are designed to support general-purpose CRUD
operations.
For Action Centers, the data stored in the data lake is refreshed from its OpenEdge source overnight by default, when
scheduled refresh is enabled both at the system level and for individual KPIs. As previously mentioned, the retrievals are
processed using a combination of JDBC-based queries and Progress AppServer agents, depending on the browses
needed. Refreshes may also be requested manually by end users for individual KPIs in Action Center panels.
Beginning with the September 2020 release, the data lake is also used to store historical snapshots of KPIs, for those
KPIs that are defined as 'historical' instead of 'current.' This functionality allows KPI results from different points in time to
be presented together for comparison and trend analysis purposes. The frequency of the snapshots and the total number
allowed in the data lake are limited in order to manage disk space effectively, but are configurable by KPI. Unlike the
scheduled refreshes of current KPI data mentioned previously, the historical snapshots are not re-createable from the
OpenEdge sources. For historical KPI snapshots, the data lake is therefore the system of record.
In future releases, the data lake is planned to support other functions outside of Action Centers, including historical archiving, reporting, and business intelligence. In the current release, however, its role is to support the Action Centers.
In-Memory Post-Processing
Browse data stored in Cassandra is cached in memory using Apache Spark. Spark is a fast, scalable, in-memory data
processing layer that can respond to queries in a flexible way, using any database fields as indexes. It also applies
filtering, grouping, aggregation, cross-tabulation, and deduplication on demand much faster and more efficiently than
could be done by OpenEdge ABL code running in a Progress AppServer.
At runtime when an Action Center is displayed, the data can be pre-grouped and pre-aggregated by Spark based on the
definition of the KPIs associated with that Action Center. The data returned to the Action Center is much more compact as
a result with fewer records, allowing the Action Centers to overcome the 5,000-record limit for source data prescribed in
previous releases.
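The effect of pre-grouping can be illustrated with plain SQL. In this sketch, Python's built-in sqlite3 stands in for the Spark SQL layer, and the table and column names are invented for the example; the point is that a GROUP BY collapses many detail records into a few summary rows before they reach the visualization layer:

```python
import sqlite3

# In-memory database standing in for the Spark-cached browse data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales_orders VALUES (?, ?)",
    [("EMEA", 100.0), ("EMEA", 250.0), ("EMEA", 50.0),
     ("APAC", 300.0), ("APAC", 120.0), ("NA", 80.0)],
)

# Pre-aggregation: six detail rows reduce to three summary rows.
rows = conn.execute(
    "SELECT region, COUNT(*), SUM(amount) FROM sales_orders "
    "GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('APAC', 2, 420.0), ('EMEA', 3, 400.0), ('NA', 1, 80.0)]
```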
Beginning with the September 2018 release, the Logi integration was enhanced to take advantage of the superior performance of the new Query Service. As end users design visuals and their data contents using the Action Center's self-service capabilities, metadata defining the group-by categories and numeric aggregations needed to support those visuals is automatically extracted from Logi and sent to the Query Service. The Query Service then applies the maximal amount of grouping and pre-aggregation that Logi can consume in order to render the requested charts. In summary, the Logi integration optimizes its request to obtain exactly the required data in pre-processed form, with the result set made much smaller through Query Service grouping.
Beginning with the September 2019 release and the introduction of Logi Platform Services (LogiPS), the Logi integration
was re-engineered again to allow Logi to retrieve the Query Service data using SQL through a JDBC connection, which is
supported inside the Query Service by the Spark ThriftServer. This approach allows Logi to retrieve the data as though
from a relational database, which better leverages the built-in capabilities of LogiPS to group and aggregate the data
automatically to suit the needs of each visual.
Action Center Key Components
Logi Composer
Logi Composer replaces Logi Platform Services as the analytics framework used to maintain and display Action
Centers. Unlike previous Logi frameworks, it is built as a set of loosely coupled microservices that can be selectively
installed depending on the capabilities required. These microservices are implemented in Java using Spring Boot, and
managed in a single service mesh. Many of its components use the internal name "zoomdata".
Query Engine
The Composer Query Engine is responsible for processing and dispatching queries to retrieve Action Center data, sitting between the Web UI and the data connectors that directly access the data sources. It has the following primary tasks:
1. Deconstructs and converts user query requests into distributed execution plans.
2. Optimizes the execution plans based on data platform capabilities, in-memory cached results, and the Query
Engine capabilities.
3. Communicates with Composer data connectors to execute push-down queries. In the case of AUX, the
SparkSQL data connector is used.
4. Uses in-memory processing to combine, append, or manipulate one or more data sets to produce only the values
needed to fulfill the request.
Data Connectors
Composer offers a wide set of data connectors, each configured as a separate microservice, to access the sources of
analytics data. While many kinds of databases are supported, AUX uses the SparkSQL data connector to read Action
Center data from the Apache Spark ThriftServer, where it is cached in memory. The PostgreSQL data connector is also
required, in order to access the Composer configurations stored in its native PostgreSQL database. In addition, it is
possible to create custom data connectors for other data sources.
Service Discovery
The Service Discovery microservice helps manage the mesh of microservices by integrating Composer with Consul, an open source tool that supports secure access to and communications across microservices.
Other Microservices
Various other microservices are available in Composer but not yet used by AUX. It is possible that some of these may be
included in future AUX releases.
Configuration
The Composer Configuration microservice is packaged with the Spring Cloud Configuration server, which allows
Composer to easily integrate with its Spring-based microservices. It provides the mechanism by which Composer property
settings can be maintained using the Service Monitor, another Composer microservice. The property settings are
persisted in a supported PostgreSQL data store or in a GitHub repository.
Service Monitor
The Service Monitor microservice provides administrators with real-time views of microservice health, configuration, and logs. The Service Monitor is not installed as part of a default Composer installation, so it must be installed and configured before it can be used.
Data Writer
Composer offers a multi-purpose Data Writer microservice that writes data to a relational database for an enriched
analytic experience. Its current uses are to:
Persist user-uploaded flat text or CSV files for reuse, performance, and functional improvements such as push-
down processing and derived fields.
Simplify landing or persisting streaming data into a high-performance data platform for subsequent analysis.
Store user-generated keysets for “set analysis” to be used in single and multisource data environments.
Uploaded data is stored in the separate "zoomdata-upload" PostgreSQL DB that can follow the same backup/restore
process as the main zoomdata database.
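For example, a basic backup and restore of the upload database might look like the following, assuming the standard PostgreSQL client tools; hosts, ports, and credentials are omitted and will differ per environment:

```shell
# Back up the zoomdata-upload database in compressed custom format
pg_dump -Fc -f zoomdata-upload.dump zoomdata-upload

# Recreate and restore it
createdb zoomdata-upload
pg_restore -d zoomdata-upload zoomdata-upload.dump
```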
Screenshot
The Screenshot microservice allows users to view snapshots of saved dashboards.
Logi Platform Services
Starting with the September 2022 release, Logi Composer is the only option available to install in new AUX environments
and the default option for existing AUX environments. While Logi Platform Services can still be used in older AUX
environments that have been upgraded to September 2022, it is deprecated and will be retired in a future AUX release.
Logi Platform Services (LogiPS) uses an n-tier, service-oriented architecture that layers presentation services, application services, and data services into distinct sets that can be deployed logically and physically at different tiers in an environment.
The LogiPS architecture comprises four components: the front end, web tier, data service, and back end.
In a front end client, a developer embeds the desired Logi widgets into an HTML web page or application. Widgets are
configurable, client-side components that are used to create visualizations. Multiple widgets can be embedded in a web
page and they can communicate with one another. Content authors and users will use the client to interact with
LogiPS. The complex widget used to build visuals is called a Thinkspace view.
The front end runs on Angular.js, which supports the model-view pattern on the client. Angular uses Restangular, a
service that handles REST API requests, to communicate with Logi application and data services. Responses are
delivered with JSON data.
The web or application service tier passes the HTTP requests to the appropriate handler on the server. It uses Node.js, an embedded runtime environment for server-side web applications that handles all server-side processing at the application level. Node.js uses the Express framework, a lightweight web server with full REST support.
The LogiPS data service processes and queues requests from the application service using the ActiveMQ message
broker, communicating via the STOMP protocol. ActiveMQ passes the request to the Logi Transport service, which
interprets it and makes a request to the Logi Data Engine for retrieval of the requested data.
LogiPS uses the back end via JDBC to retrieve and process the appropriate Dataview (stored in the Platform Database),
which in turn retrieves the appropriate data from an external data source or cache.
The server-side components are run within two separate processes, which are started and stopped separately.
Data Service: Includes the data services tier, containing the data engine and controlling the LogiPS Platform
Database (PDB).
Application Service: Includes the web tier and the REST APIs called from clients to maintain the contents of the
PDB.
The self-service capabilities of Logi Platform Services are enhanced to consume larger volumes of source data in a more
performant way using the Query Service's other components, which are described in the following sections.
Starting with the September 2021 release of AUX, Logi Info and KPIs created using Logi Info are no longer supported.
Apache Cassandra
In small- to medium-sized environments, Cassandra will be deployed on the same node as the core Enterprise Application
Infrastructure. In larger environments, it may reside on a dedicated server as part of a larger data lake infrastructure. In
very large, multi-site enterprise environments, Cassandra could be deployed as a decentralized cluster with multiple
nodes.
While Cassandra is a columnar database, many of the key concepts with which application developers and DBAs are
familiar also apply to Cassandra. There are tables, records, and fields. However, data is internally organized into columns
rather than rows, unlike relational databases such as OpenEdge and Oracle. This makes Cassandra well suited to high-
volume queries and analytics, where users often drill down by columns (for example, "Show me all the data for this item or
this customer") rather than transactions, as in traditional ERP (for example, "Create an invoice for this customer").
Columnar databases support a flexible schema model and much improved performance for analytics, but do not support
join operations or secondary indexes very well. In implementations that require the flexibility of user-defined joins and
filters, these actions should be done in complementary frameworks such as Apache Spark (see below).
High availability
Distributed table storage
Tunable consistency
Fault tolerance
High performance
Low latency
Linear scalability
Table-specific tuning
Data model flexibility
Dynamic schema controlled by queries
Management and monitoring via JMX and SQL queries
Open source licensing
Terminology
The following basic terms are necessary to understand how Cassandra works.
Cassandra is a distributed database running nodes in a cluster. The nodes communicate in a peer-to-peer,
master-less fashion.
Cassandra rows are stored in tables, where each table has a mandatory primary key.
A keyspace groups tables as a logical entity, similar to a schema in relational databases. In the Query Service, all
browse result sets are stored in the "browses" keyspace.
Data is accessed via CQL, an SQL-like query language.
Cassandra writes to a commit log on disk first, similar to the OpenEdge before-image file. It then writes to an in-memory
cache inside the JVM heap, called memtables, before flushing to disk as SSTables.
Cassandra housekeeps the data on disk, compacting it and discarding tombstones, which are markers placed
inside obsoleted data to mark it for later physical deletion.
The following table cross-references Cassandra terms to similar concepts from OpenEdge.
Cassandra OpenEdge
Cluster N/A
Column Field
CQL ABL
Keyspace Database
Node Server
Table Table
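The terms above can be illustrated with a short CQL session. This is a hypothetical sketch only; the actual tables in the "browses" keyspace are created and managed by the Query Service, and the keyspace options, table name, and columns below are invented for illustration.

```sql
-- Hypothetical CQL sketch; names and columns are illustrative only.
CREATE KEYSPACE IF NOT EXISTS browses
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

CREATE TABLE IF NOT EXISTS browses.example_browse (
  domain text,       -- partition key: distributes rows across nodes
  row_id uuid,       -- clustering column: orders rows within a partition
  item   text,
  qty    decimal,
  PRIMARY KEY (domain, row_id)  -- every Cassandra table requires a primary key
);

SELECT item, qty FROM browses.example_browse WHERE domain = '10USA';
```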
After many deletes, the resulting tombstones can grow to consume a significant amount of disk space and slow
Cassandra processing. However, Cassandra deletes tombstones automatically in its compaction runs, which are triggered
every few minutes. By default, tombstones older than 10 days are deleted during compaction. While this time period can
be configured if needed, in the case of Action Centers this should not be necessary. The browse data stored in Cassandra
is only deleted during a scheduled or manual refresh, and the refresh causes the affected Cassandra tables to be
truncated and re-populated. Hence, individual rows are not deleted and tombstones will not accumulate in the database.
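For reference, the 10-day default corresponds to the per-table gc_grace_seconds setting (10 x 86400 = 864000 seconds). The following CQL fragment, using a hypothetical table name, shows where this setting would be tuned if it were ever necessary.

```sql
-- 864000 seconds = 10 days; the table name is illustrative only.
ALTER TABLE browses.example_browse WITH gc_grace_seconds = 864000;
```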
Cassandra performs best with local low-latency SSD or SAS storage, which is also cheaper to provision than relatively
high-latency network storage. It uses an efficient log-structured engine that converts updates into sequential I/O.
Cassandra's storage engine does not read or rewrite existing data when processing updates, but only
appends the updated data. This approach allows updates to be processed very fast. However, updates and deletes can
be expensive and generate tombstones, which can affect query performance.
Core Tools
Several native Cassandra tools are useful for system administrators and DBAs. Basic database start, stop, remove,
rebuild, and other functions are controlled through YAB and not listed here. To see a description of the YAB commands for
managing Cassandra, use the command 'yab help cassandra-'.
SSTable: Table utilities such as dump, print metadata, split table, list tables.
nodetool: Comprehensive utility used to monitor and manage a cluster. This tool can also be started using the
'yab cassandra-default-nodetool' command.
cqlsh: Command-line utility for connecting to a Cassandra database and executing CQL commands.
cassandra-stress: Stress testing tool.
Some of these tools are implemented in Python, but Cassandra in general is entirely Java-based.
reported by O/S tools, such as top, accordingly. However, on 64-bit systems virtual address space is effectively unlimited,
so it is seldom a concern.
A key point is that for a mmap’d file, there is never a need to retain the data in physical memory. Thus, whatever is in
physical memory is there only as a temporary cache, in the same way that normal I/O will cause the kernel page cache to
retain data that has been read or written.
The major difference between normal I/O and mmap is that in the mmap case, the memory is mapped to the process, thus
affecting the virtual size as reported by top. The main argument for using mmap instead of standard I/O is that read
actions only need to access memory; there is no page fault, therefore no kernel overhead to perform context switching.
According to the CAP theorem, a distributed database can provide at most two of the following three guarantees.
Consistency: All nodes see the same data at the same time.
Availability: Every request receives a response about whether it succeeded or failed.
Partition tolerance: The system continues to operate despite arbitrary message loss or component failure.
All databases can be categorized as CP, AP, or CA, based on which of the guarantees are supported. For example,
OpenEdge and other relational databases are CA databases, while MongoDB and ElasticSearch are CP databases.
Cassandra is an AP database. The advantages of an AP database are greatest in environments where short periods of
data inconsistency are preferred over short periods of database unavailability.
Cassandra satisfies a weaker consistency requirement by adopting the BASE standard, which is a modified version of the
ACID (Atomicity, Consistency, Isolation, Durability) properties satisfied by most relational databases.
Basically Available: The system guarantees the availability of data in the sense that it will respond to any request.
However, the response could be a failure to obtain the requested data, or a data set in an inconsistent or
changing state.
Soft: The state of the system is always “soft” in the sense that eventual consistency, described below, may cause
changes in the system state at any given time.
Eventually Consistent: The system will eventually become consistent once it stops receiving new data inputs. As
long as the system is receiving inputs, it does not check the consistency of each transaction before it moves to
the next transaction.
Full consistency has a negative effect on cost-effective horizontal scaling. If the database needs to check the consistency
of every transaction continuously, a database with billions of transactions will incur a significant cost to perform all the
checks. Strict consistency is therefore not practical in a large distributed database. It is the principle of eventual consistency
that has allowed Google, Twitter, and Amazon, among others, to interact with millions of their global customers, keeping
their systems available by supporting partition tolerance. Without the principle of eventual consistency, today's systems
could not support the exponential rise of data volumes caused by cloud computing, social networking, and related trends.
Limitations
Cassandra is in some ways very restrictive compared to relational databases like OpenEdge, as summarized below.
Joins: Cassandra does not allow joins, and is therefore not suitable for representing normalized data models.
Joins must be implemented in a separate component such as Apache Spark, which is described in the following
section. The data stored in Cassandra should be self-describing documents (for example, an invoice object in
XML or JSON form) or de-normalized, flattened views designed for query purposes.
Transactions: Cassandra supports only "lightweight" transactions, essentially only existence checks, without the
ACID compliance common in relational databases.
Secondary Indexes: Cassandra does not fully support secondary indexes, as it imposes heavy restrictions on
which fields can be included in a secondary index.
Text Search: Cassandra allows full text search with advanced features, but only on specific fields with a text
search index.
Tombstones: Cassandra does not delete and update data in real time like a relational database. In order to
ensure good read performance, it simply marks the affected data with a tombstone. Future queries automatically
skip over the tombstones. However, tombstones can build up quickly in tables with heavy delete/update activity,
even to the point where there are more tombstones than actual data. In such cases serious performance
problems can result, as reading tombstones can cause excessive latency, timeouts, and even exceptions.
Tombstones also consume disk storage unnecessarily.
Apache Spark
Apache Spark is a general purpose in-memory data processing engine and cluster computing framework that can perform
Extract, Transform, and Load (ETL) operations, ad-hoc queries, machine learning, and graph processing on large volumes
of data at rest (batch processing) or in motion (stream processing).
It supports native APIs for manipulating and querying data in the following programming languages: Scala, Java, Python,
R. In addition, it provides libraries that allow the same data to be accessed through more specialized and advanced
languages and protocols.
SQL: A Spark module for structured data processing. It provides a programming abstraction called DataFrames
to access data organized into named columns, like a relational table, and can act as a distributed SQL query
engine.
Streaming: Spark Streaming is a scalable fault-tolerant streaming system, receiving data streams and chopping
them into batches. Spark then processes those batches and pushes out the result. Besides working directly with
files and sockets, it integrates with a variety of popular data sources, including HDFS, Flume, Kafka, and Twitter.
MLlib: Built on top of Spark, MLlib is a scalable machine learning library that provides high-quality algorithms
performing at high speeds. The library is usable in Java, Scala, and Python.
GraphX: A graph computation engine built on top of Spark that enables users to interactively build, transform,
and reason about graph structured data at scale. It comes with a library of common algorithms.
Architecture
Spark applications run as independent sets of processes on a cluster, coordinated by the SparkContext object in a main
program called the "driver" program. The Query Service has its own driver program and SparkContext instance, from
which all the in-memory browse data can be accessed.
Spark can run standalone where all the necessary components are loaded at run time and jobs are executed. However,
this method is (a) slow to instantiate, because of the need to load the components; and (b) difficult to manage, because
each process has its own Java heap memory and resource requirements. In the Action Center context, the Query Service
driver program connects to the Spark Cluster Manager, which accepts job requests and allocates resources across
applications. Once connected, Spark acquires executors on nodes in the cluster, which are processes that run
computations and store data for the application. Next, it sends the application code packaged in JAR or Python files to the
executors. Finally, the SparkContext dispatches tasks to the executors to be run.
ThriftServer
The Query Service originally used Spark's native Java API to read and manipulate the browse data. With the introduction
of Logi Platform Services in the Sep 2019 release, the data residing in the Query Service is retrieved directly by Logi as
an SQL data source. This is accomplished by a component of Spark called the ThriftServer.
The ThriftServer is a server interface within Spark that enables remote clients to execute SQL queries and retrieve the
results through a JDBC connection. In essence, it exposes the tables and views within Spark as an SQL database,
supporting multi-client concurrency and authentication.
The ThriftServer is embedded inside the Query Service, which runs inside Tomcat. It requires the configuration of an
additional port for Logi to connect through JDBC, but does not run inside a separate process.
Spark serves as the in-memory cache where the data retrieved from browses is stored for on-demand retrieval. Before
returning the browse results, it groups the data, pre-aggregates the numeric values within each group, and applies filters
in order to return the minimum amount of detail needed to support Action Center display. Because grouping, aggregation,
and filtering requirements can be changed by Action Center users at any time, Spark's combination of flexibility and speed
is critical.
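The kind of work Spark performs here can be pictured as an SQL query over the cached browse data. The table and column names below are invented for illustration; the actual browse table names and columns are generated by the Query Service.

```sql
-- Illustrative only: group, pre-aggregate, and filter a cached browse
-- result set, returning the minimum detail an Action Center needs.
SELECT region, item, SUM(order_qty) AS total_qty
FROM   sales_order_browse
WHERE  order_date >= '2024-01-01'
GROUP BY region, item;
```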
Cassandra serves as the data lake where the browse data is persisted, generally before it is needed. On a scheduled
basis, browses that are configured for use in the Action Centers are refreshed from the operational OpenEdge database
tables through one of two query mechanisms.
1. SQL with JDBC connections: Business Component browses are processed as SQL queries directly against the
OpenEdge database. This approach is preferred, as it is significantly faster than the AppServer-based approach.
2. AppServer-based browse engine: Other browses are processed using the existing browse engine, which runs
inside Progress AppServer agents and reads the OpenEdge data using ABL code.
End users can also request refreshes of a specific browse from the Action Centers, which causes the data to be refreshed
in both Cassandra and Spark. In addition, beginning with the Sep 2020 release, historical snapshots of some KPIs are
automatically created, stored in Cassandra, and cached in Spark. In future releases, browse results may be pushed into
Cassandra more continuously as the source data is updated in Enterprise Applications through transaction processing
activity.
The current KPI browse results extracted from the OpenEdge databases are stored in tables that reside in the Cassandra
keyspace "browses." For most browses, there is a single table for each KPI and combination of browse and domain or
browse and entity, depending on whether the browse was defined to access financial data, which are generally associated
with financial entities, or operational data, which are generally associated with domains. The contents of these tables are
then cached in Spark for online retrieval by the Action Centers.
The historical KPI snapshots are stored in tables that reside in the Cassandra keyspace "historical_kpi". In this keyspace,
there is a single table for each KPI snapshot. Because the number of historical snapshots per KPI can vary widely
depending on KPI configuration, different historical KPIs can have different numbers of snapshot tables in the keyspace.
The Installation section describes useful details about how Action Centers and the related Logi and Query Service
infrastructures are installed. It is not a comprehensive, step-by-step guide, as the installation process is largely automated
through the use of the YAB tool. Action Centers are not installed in isolation, but as part of an overall release as
documented in the Adaptive UX On-Premise Installation Guide. However, this section describes some installation steps in
greater depth that are specific to Action Centers, referring to YAB command details and other guides as needed for
context. It also covers important steps that must be completed before the automated installation is run.
This document assumes that the reader is familiar with basic Linux system administration and YAB. For more details on
YAB, see the latest QAD Configuration and Administration Guide for YAB, available on the QAD Document Library.
System Requirements
Memory
Spark and Cassandra are fast because they do considerable amounts of in-memory processing. Logi Platform Services
and Logi Info also require memory to render the visuals. The minimum memory requirement for running production
systems with the Action Centers and Query Service is 16GB.
CPU
Systems running the Action Centers and Query Service should have a minimum of four cores.
Software
In particular, Java 8 and Python 2.7 are required to run Cassandra and Spark, and Java 11 is required to run Logi
Composer. See the latest version of the QAD Adaptive UX On-Premise Installation Guide, available on the QAD
Document Library, for other software prerequisites.
If the batch user is not registered for the PLA license using this procedure, the YAB installation/update will fail and
corrective action will be needed as described later in this section.
Log into the .NET UI as an admin user, search for the License Registration screen, and open it.
Scroll down the list of licenses, checking for the PLA license.
If the PLA license is already present in the list, skip to the procedure 'Registering Users for PLA License' below.
If the PLA license is not present in the list, obtain the correct PLA license codes to use for the environment.
Customer codes are obtained from QAD Global Customer Administration (GCA).
Navigate to the Add button and press Enter.
Enter the license codes for the PLA license using the .NET UI License Registration screen.
Log into the .NET UI as an admin user, search for the License Registration screen, and open it.
Scroll down the list of licenses, checking for the PLA license.
If the PLA license is not present in the list, follow the previous procedure, 'Adding PLA License and License
Codes.'
Select the PLA license from the list.
Navigate to the OK button, and press Enter to access the list of registered users.
To register a single user, in particular the 'batch user,' to use the PLA license, press Enter to see the list of users.
Navigate to the user to be registered, and press Space to select the user. An asterisk will be displayed to the left
of the user.
Press the GO key to process the change, and confirm it when prompted by the UI.
Alternatively, to register all users in the environment to use the PLA license, enter "All," press the GO key
(usually F1), and confirm the action when prompted by the UI.
Display the Users screen, select the user to register for the PLA license, and navigate to the Applications panel
of the screen.
Press the New button to add a new line to the Applications grid. Press the lookup icon in the Application field of
the new line to display a list of applications for which the user can be registered.
Find and select the PLA license from the list, and press OK.
Press the Save button to register the user for the license.
181)
at com.qad.build.java.tasks.http.HttpCallCommand.invoke(HttpCallCommand.java:47)
at com.qad.yab.qra.QraWebuiApiClient.submit(QraWebuiApiClient.java:115)
at com.qad.yab.qra.QraLpsArtifactUpdateProcess.execute(QraLpsArtifactUpdateProcess.java:
171)
...
The previous example references the 'assetmgmt' app, but the specific app raising the error in other environments could
be different.
This error is triggered by missing permissions for the batch user, caused by the fact that the user has not been registered
for the PLA license. To correct the problem, do the following.
Complete the steps in the section 'Adding PLA license and license codes,' if the PLA license and codes do not
already exist in the environment.
Register the batch user for the PLA license as described in the section 'Registering Users for PLA License in
NetUI' or 'Registering Users for PLA License in WebUI.'
Run the following YAB command to re-execute the YAB command(s) that failed in the first attempt.
To install a September 2022 AUX environment enabled to use Logi Platform Services rather than Logi Composer, set the
following YAB property before running the installation.
qad-analytics-core.composer.enabled=false
When Logi Composer is enabled, existing Action Centers not provided by QAD that were created with Logi Platform
Services are automatically migrated to Logi Composer during the installation. This migration process and the required
manual steps are covered in a separate section of the current guide: Logi Composer Migration Guide. This guide also
contains more information about how to defer the migration to Logi Composer during the initial upgrade to AUX
September 2022, and complete it sometime later.
Logi Licenses
To use Action Centers, a license is required from Logi Analytics, in addition to the PLA license required by
QAD. Regardless of the Logi product being used or the AUX version, as of Aug 2022 a current Logi license must be
installed in production and non-production environments, or else Action Centers and visuals cannot be used. This license
is an OEM license that is automatically installed with the following AUX releases.
For AUX environments using any of these releases, no special action is required to obtain the correct Logi license.
For environments using older AUX releases, a separate Logi License Replacement Utility must be run to update the
license. For Cloud customers, this utility is run by QAD Cloud personnel. On-premise customers can download the utility
with instructions from this link in the QAD Store: https://store.qad.com/content/logi-license-replacement-utility-0.
However, after installing the new license into older environments, the following steps must be excluded from the YAB
update process to ensure that the new license is not automatically reverted during a subsequent YAB update.
logi-platform-services-default-license-unassign
logi-platform-services-default-license-assign
logi-platform-services-default-license-import
This can be done by adding the above lines to the <AUX home>/build/config/etc/process-ignore file.
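As a sketch, the lines can be appended from the shell; the AUX home path below is a placeholder that must be adjusted to the local installation.

```shell
# Placeholder path: set this to the actual AUX home of the environment.
AUX_HOME=/path/to/aux

# Append the three Logi license steps so subsequent 'yab update' runs skip them.
cat >> "$AUX_HOME/build/config/etc/process-ignore" <<'EOF'
logi-platform-services-default-license-unassign
logi-platform-services-default-license-assign
logi-platform-services-default-license-import
EOF
```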
When upgrading an existing QAD Adaptive UX September 2019 environment to a service pack that includes a newer
version of Logi Platform Services, several additional steps are required before running the full YAB update.
This section applies only to existing September 2019 installations with Logi Platform Services that are upgrading to a newer
Logi Platform Services service pack. It does not apply to installations that did not previously contain Logi Platform Services,
such as new installations or upgrades of environments that are older than September 2019. It also does not apply to QAD
Adaptive UX installations of the March 2020 release or later.
yab stop                                    # Stop all Adaptive ERP services
yab logi-platform-services-status           # Confirm Logi Platform Services has stopped
mkdir /mydirectory/temp                     # Create a backup location
cp <Adaptive ERP root>/servers/logi-platform-services/default/platform/db/LogiDB.mv.db /mydirectory/temp/   # Back up the LogiDB database file
yab logi-platform-services-default-rebuild  # Rebuild the Logi Platform Services instance
yab update                                  # Run the full YAB update
This procedure assumes that Logi Platform Services will always be accessed using HTTPS, which is secure HTTP. If the
Web UI is set up to be accessed through HTTPS, then Logi Platform Services must be configured to use HTTPS also, to
avoid network security errors raised at run time. By default, Logi Platform Services is installed in QAD Adaptive ERP to be
accessed through HTTPS, not basic HTTP.
Certificate and key files are not included in the Adaptive ERP installation and are not generated by YAB, but must be
created on-site specifically for each company that is installing the software. They can be created in different ways, and
there is no prescribed procedure. However, below is a simple procedure that has been used to create self-signed
certificates for Logi Platform Services with the help of the open source tool openssl. This procedure converts an existing
JKS keystore file into certificate and key files that can be used by Logi Platform Services. In the example, the environment
variable JAVA_HOME must be set to the location of a JDK 8 installation on the local server.
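As one illustrative possibility rather than the prescribed QAD procedure, a self-signed certificate and matching private key can also be generated directly with openssl; the file names and subject CN below are examples only.

```shell
# Illustrative only: generate a 2048-bit RSA key and a self-signed
# certificate valid for one year. File names and CN are examples.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout myKeystore.key -out myKeystore.crt \
  -days 365 -subj "/CN=myserver.example.com"

# Confirm that the certificate and private key belong together by
# comparing the digests of their public-key moduli; they must match.
openssl x509 -noout -modulus -in myKeystore.crt | openssl md5
openssl rsa  -noout -modulus -in myKeystore.key | openssl md5
```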
The previous steps are sufficient to create a self-signed certificate. However, to generate a certificate signed by a
Certificate Authority (CA), any required intermediate certificates must also be downloaded and appended to the end of the
*.crt file generated above. Below is a sample procedure to do this, once the intermediate certificate ('myInt.crt' in the
example) has been obtained.
openssl x509 -inform der -in myInt.crt -out myInt.pem # Convert the intermediate certificate
from binary (DER) to textual X.509 (PEM) format
cat myInt.pem >> myKeystore.crt # Append the intermediate certificate to the certificate file
The root certificate for the Certificate Authority is not required, as it should already be a well-known, trusted CA.
The following YAB properties identify the certificate and key files to be used by Logi Platform Services.
logi-platform-services.default.sourcekeyfile: Full path of the file containing the private key to be used by LogiPS. Default: N/A
logi-platform-services.default.sourcecertfile: Full path of the file containing the public certificate to be used by LogiPS. Default: N/A
However, in environments where an Apache reverse proxy is being used, particular configuration changes are required to
support both Logi Composer and Logi Platform Services correctly.
Logi Composer
Composer uses the WebSockets protocol to support client-server communications between the web browser and its
services and data connectors. Several Apache configuration changes are required to support WebSockets.
First, the Apache proxy_wstunnel_module should be enabled. To do this, uncomment the following line in <Apache
home>/conf/httpd.conf.
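The exact module path can vary by Apache build, but the line is typically the following; remove the leading '#' character to enable the module.

```apache
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
```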
In Adaptive ERP installations, the Apache configuration file <Apache home>/conf.d/vhosts.conf contains a <VirtualHost *:
443> element where the HTTPS configuration settings for the virtual host are defined.
<VirtualHost *:443>
...
</VirtualHost>
To support WebSockets, add RewriteEngine, RewriteCond, and RewriteRule directives inside this element, as shown in the following template.
<VirtualHost *:443>
...
RewriteEngine on
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteCond %{HTTP:Connection} upgrade [NC]
RewriteRule ^/<reverse proxy path>/?(.*) "wss://<tomcat-webui hostname>:<tomcat-webui port>
/<webapp context>/$1" [P,L]
...
</VirtualHost>
Below is an example of this configuration with sample host, port, and path values.
<VirtualHost *:443>
...
RewriteEngine on
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteCond %{HTTP:Connection} upgrade [NC]
RewriteRule ^/clouderp/?(.*) "wss://vmlfwy0000.qad.com:22011/qad-central/$1" [P,L]
...
</VirtualHost>
Once this change is made, restart the Apache server. This is done outside of YAB, with no need to restart Tomcat or any
other component of Adaptive ERP.
This scenario can be supported by expanding on the changes to the file <Apache home>/conf.d/vhosts.conf described in
the previous section. Add separate RewriteCond and RewriteRule directives for each target AUX environment as follows.
<VirtualHost *:443>
...
RewriteEngine on
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteCond %{HTTP:Connection} upgrade [NC]
RewriteRule ^/<reverse proxy path 1>/?(.*) "wss://<tomcat-webui hostname 1>:<tomcat-webui
port 1>/<webapp context 1>/$1" [P,L]
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteCond %{HTTP:Connection} upgrade [NC]
RewriteRule ^/<reverse proxy path 2>/?(.*) "wss://<tomcat-webui hostname 2>:<tomcat-webui
port 2>/<webapp context 2>/$1" [P,L]
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteCond %{HTTP:Connection} upgrade [NC]
RewriteRule ^/<reverse proxy path N>/?(.*) "wss://<tomcat-webui hostname N>:<tomcat-webui port N>/<webapp context N>/$1" [P,L]
...
</VirtualHost>
Below is a sample from a reverse proxy serving two AUX environments, one with the path "clouderp/devl" and the other
with "clouderp/test".
<VirtualHost *:443>
...
RewriteEngine on
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteCond %{HTTP:Connection} upgrade [NC]
RewriteRule ^/clouderp/devl/?(.*) "wss://vmlfwydevl.qad.com:22011/qad-central/$1" [P,L]
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteCond %{HTTP:Connection} upgrade [NC]
RewriteRule ^/clouderp/test/?(.*) "wss://vmlfwytest.qad.com:22011/qad-central/$1" [P,L]
...
</VirtualHost>
Logi Platform Services
The following example shows an existing reverse proxy configuration for an AUX environment.
<Location /clouderp>
ProxyPass https://vmlqad0000.qad.com:22011/qad-central
ProxyPassReverse https://vmlqad0000.qad.com:22011/qad-central
Header edit Set-Cookie "^(.*)/qad-central(.*)$" $1/clouderp$2
Header edit Location "/qad-central/" "https://vmlqad0000.qad.com/clouderp/"
AddOutputFilterByType SUBSTITUTE text/html image/svg+xml
Substitute "s|/qad-central|/clouderp|iq"
Substitute "s|https://vmlqad0000.qad.com:22011|https://vmlqad0000.qad.com|iq"
</Location>
In order to support Logi Platform Services, change the AddOutputFilterByType element in the previous example to the
following, adding the MIME type 'application/com.qad.webshell.proxy+json' to the end.
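Applied to the example above, the modified directive would read as follows.

```apache
AddOutputFilterByType SUBSTITUTE text/html image/svg+xml application/com.qad.webshell.proxy+json
```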
Once this change is made, restart the Apache server. This is done outside of YAB, with no need to restart Tomcat or any
other component of Adaptive ERP.
Assetmgmt
Custrelmgmt
Financials
Fixedassets
Inventory
Planning
Purchasing
Pushproduction
Quality
Sales
Service
TAM (Trade Activity Management)
In earlier releases, the QAD-provided Action Centers were built using Logi Platform Services or Logi Info.
The Action Center metadata for the QAD database is loaded for each app as part of the YAB metadata-update command
and its sub-commands. To install the metadata for a single package, including its Action Centers, run the YAB command
with the syntax metadata-<package>-update. For example, the Sales Action Centers are installed by the YAB command
metadata-sales-update.
The Action Center objects for Logi Composer are loaded by the same metadata-update and metadata-<package>-update
commands as the QAD data. However, they are loaded as Configuration Data rather than app-specific artifacts, as in
earlier AUX versions. Once installed, they are no longer directly associated with their source apps, but are owned entirely
by the users of the environment.
The Action Center objects for Logi Platform Services are loaded for each app by a set of YAB commands, each of which
loads a different kind of Logi object.
lps-<package>-tag-update
lps-<package>-referencedataview-update
lps-<package>-enrichmentdataview-update
lps-<package>-visualization-update
lps-<package>-crosstabtable-update
lps-<package>-dashboard-update
For each app, these commands must be run in the order shown. The command lps-artifact-update loads all of them in the
prescribed order for all apps in the environment.
The Action Center files for Logi Info are copied for all apps at the same time for each file type (dashboard, KPI, gallery) by
the following YAB commands.
action-center-dashboard-update
action-center-kpi-update
action-center-gallery-update
For environments that are used for app development and testing, as opposed to production, there is a need to change
or delete the Action Centers and KPIs that came from previous versions of an app when moving to current versions. To
allow new versions of these artifacts to be promoted through the steps of the app development life cycle, the property
qad-analytics-core.composer.artifacts.overwrite has been introduced. If this property is set to true, the upgrade of an
app in the AUX environment will process updates and/or deletes of Action Centers and KPIs that came from an earlier
version of the same app.
Action Centers Developed Using Logi Info are Installed But Not Updated or Deleted
In releases prior to September 2019, there is an important difference in the handling of Action Centers versus other kinds
of metadata. These Action Centers, built using the older Logi Info framework, are installed by YAB, but not updated or
deleted once they are installed. Standard Action Centers may be used off the shelf and modified after installation to meet
local requirements. If the standard Action Centers were routinely updated as part of a system or package upgrade, local
modifications would be overwritten and lost. While any application packages containing Action Centers can be upgraded,
data related to existing Action Centers and KPIs is skipped during the YAB update.
To update Action Centers and KPIs inside one environment with Action Centers and KPIs of the same name that have
been created in a different environment (for example, migrate updated versions of an Action Center and its associated
KPIs from a development environment to production), use the KPI Migrate function in the Web UI to export the KPIs from
the source environment and import them into the target environment. There is no similar function to export and import an
Action Center, so a modified Action Center must be updated manually in the target environment. However, updating an
Action Center using Logi Info consists mainly of removing/adding/rearranging dashboard panels, and this is usually a
quick process.
When updating apps that were developed outside the organization (for example, a recent release of a previously installed
QAD package), newer versions of Action Centers and KPIs that already exist in the target environment are not updated for
the reasons previously mentioned. In this case, the source environment in which the Action Centers and KPIs were
defined is not available and the KPI Migrate function cannot be used to export and import them. If the newer versions of
these predefined Action Centers and KPIs are needed, please contact QAD Support for assistance.
When updating apps that include Action Centers and KPIs new to the target environment, no special steps are
needed. The new Action Centers and KPIs are installed automatically as part of the YAB update.
When you migrate a KPI from a source environment to a target environment, the migration does not include the browse that
is used as the data source. You must ensure the browse used as the data source already exists in the target environment
and that it has the same fields in its definitions. You can migrate browse definitions with the import/export tool in Browse
Maintenance in the QAD .NET UI.
YAB Commands
Logi Composer
To obtain a list and description of all YAB commands that can be used to monitor and control Logi Composer, run the
following command.
Most of the Logi Composer commands documented in the YAB help are run only by YAB, and would not normally be run
directly from the command line. However, some of them can be useful to system administrators. The following YAB
commands would normally be run from the command line for routine monitoring and control purposes. Any of them can be
run without the need to stop/start Tomcat or other Adaptive ERP components.
postgresql-default-start: Starts the 'default' instance of the PostgreSQL database used by Composer.
postgresql-default-status: Checks the status of the 'default' instance of the PostgreSQL database used by Composer.
postgresql-default-stop: Stops the 'default' instance of the PostgreSQL database used by Composer.
To obtain a list and description of all YAB commands that are used to install and extract Action Centers and related
objects to/from Logi Platform Services, run the following command. The YAB commands described are run by YAB during
Action Center deployment.
Most of the Logi Platform Services commands documented in the YAB help are run only by YAB, and would not normally
be run directly from the command line. However, some of them can be useful to system administrators. The following YAB
commands would normally be run from the command line for routine monitoring and control purposes. Any of them can be
run without the need to stop/start Tomcat or other Adaptive ERP components.
logi-platform-services-default-application-start: Starts the application service for the Logi Platform Services 'default' instance. The data service must be started before the application service is started.
logi-platform-services-default-application-stop: Stops the application service for the Logi Platform Services 'default' instance. The application service must be stopped before the data service is stopped.
logi-platform-services-default-data-start: Starts the data service for the Logi Platform Services 'default' instance. The data service must be started before the application service is started.
logi-platform-services-default-data-stop: Stops the data service for the Logi Platform Services 'default' instance. The application service must be stopped before the data service is stopped.
logi-platform-services-default-start: Starts the 'default' Logi Platform Services instance. Combines the logi-platform-services-default-data-start and logi-platform-services-default-application-start commands.
logi-platform-services-default-stop: Stops the 'default' Logi Platform Services instance. Combines the logi-platform-services-default-application-stop and logi-platform-services-default-data-stop commands.
logi-platform-services-default-restart: Stops and immediately restarts the 'default' Logi Platform Services instance. Tomcat-webui does not have to be restarted when LogiPS is restarted.
logi-platform-services-license-info: Lists the LogiPS license(s) installed in the Logi Platform Services 'default' instance. Useful for checking the presence or type of a LogiPS license in case of license-related errors.
logi-platform-services-license-assign: Assigns or re-assigns and installs a LogiPS license to the Logi Platform Services 'default' instance. The type of license assigned depends on whether the environment type is development, test, or production.
logi-platform-services-license-unassign: Un-assigns and deletes the currently assigned LogiPS licenses from the Logi Platform Services 'default' instance. May be needed if the wrong kind of Logi license was installed for any reason, or if the production environment is being deleted or its server decommissioned. In the latter case, failure to un-assign the license would trigger the unnecessary purchase of an extra LogiPS production license.
Logi Info
To obtain a list and description of all YAB commands that can be used to deploy and extract Action Center files used by
Logi Info, run the following command, which gives details for the three types of Action Center files: dashboard files,
gallery, and KPI files.
Cassandra
To obtain a list and description of all YAB commands that can be used to administer Cassandra, run the following
command.
For a list of all Cassandra-related settings, including the YAB commands, run the following command.
Following are the commands that would most commonly be run from the command line.
cassandra-default-restart: Stops and re-starts the 'default' Cassandra instance. Spark and tomcat-webui do not have to be restarted when Cassandra is restarted.
cassandra-default-nodetool: Runs the Cassandra nodetool command-line utility. Run 'yab cassandra-default-nodetool -command:help' for a list of subcommands in the nodetool utility, and 'yab cassandra-default-nodetool -command:<subcommand>' to run any of them. For more detailed background about each one, see the Cassandra web pages.
Spark
To obtain a list and description of all YAB commands used to administer Spark, run the following command.
For a list of all Spark-related settings, including the YAB commands, run the following command.
Most of the Spark commands documented in the YAB help are rarely used, especially given that all Adaptive UX
releases deploy Spark in a non-clustered manner on a single server. Following are the ones that would most commonly be
run from the command line.
spark-restart: Stops and re-starts the Spark master and slave (worker) processes. If Spark is restarted, tomcat-webui must also be restarted in order to restore Query Service connections to Spark.
Action Centers
Permissions are granted by role to create Action Centers, and to view and delete particular Action Centers. "Sharing"
permissions can also be granted by role that allow authorized users to add, replace, and delete visuals in the common
gallery. In addition, the user who created a particular Action Center always has full access permissions to it, regardless of
his/her role. In this respect, Action Centers are different from other secured resources in Adaptive ERP. For more
information on Action Center permissions, see the online help for the QAD Web UI (https://documentlibrary.qad.com/help
/webui/2023_0/en-US/index.html).
Domain-Entity-Site Membership
Another aspect of Action Center security is the membership of users within particular domains, financials entities, and
sites. This type of security is part of the common Web UI security infrastructure, and is not covered in this
document. However, domain, entity, and site membership are used by the Action Centers to automatically filter the data
that end users can view. For example, two users have permissions to view a particular Action Center, but User 1 is a
member of domain 10USA and User 2 is not. In this case, both users would see the same panels and visuals in the Action
Center, but the visuals shown to User 1 would include data from domain 10USA, whereas the visuals shown to User 2
would not.
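The domain-based filtering described above can be pictured with a small conceptual sketch. All names here (the "domain" field, the function, the sample data) are hypothetical illustrations, not the actual Web UI implementation:

```python
# Conceptual sketch of domain-based data filtering for Action Center visuals.
# Field and function names are illustrative only.

def filter_kpi_rows(rows, user_domains):
    """Return only the KPI rows belonging to domains the user is a member of."""
    return [row for row in rows if row["domain"] in user_domains]

rows = [
    {"domain": "10USA", "sales": 1200},
    {"domain": "20FRA", "sales": 800},
]

# User 1 is a member of domain 10USA; User 2 is not.
user1_view = filter_kpi_rows(rows, {"10USA", "20FRA"})
user2_view = filter_kpi_rows(rows, {"20FRA"})
```

Both users see the same panels, but the rows behind User 2's visuals exclude domain 10USA.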
System Users
Composer has two built-in users required to maintain the system: "admin" and "supervisor". These user identities are used
internally for system configuration and to communicate with Composer, not for online Web UI Action Center access. Their
passwords are set at installation time by YAB and can be changed after installation. Changing the default passwords is
strongly encouraged in order to keep the system more secure. The YAB properties used to set their values are as follows.
logi-composer.default.admin.password=Password1!
logi-composer.default.supervisor.password=Password1!
The passwords must be over eight characters long with a mix of lowercase characters, uppercase characters, numbers,
and special characters. In AUX environments configured as secured, they are encrypted by the Key Management Service
(KMS).
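The stated rules (more than eight characters, with a mix of lowercase, uppercase, digits, and special characters) can be expressed as a small validation sketch. This is illustrative only and is not part of YAB:

```python
# Illustrative check of the Composer password rules described above.
import string

def meets_password_rules(pw: str) -> bool:
    """True if pw is over eight characters and mixes cases, digits, and specials."""
    return (
        len(pw) > 8
        and any(c.islower() for c in pw)
        and any(c.isupper() for c in pw)
        and any(c.isdigit() for c in pw)
        and any(c in string.punctuation for c in pw)
    )
```

For example, the default value Password1! satisfies every rule, while a short or single-case value does not.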
The client ID and client secret used for trusted access between the Web UI and Composer are stored in the following YAB properties:
logi-composer.default.trusted-access-client-api.request.client_id
logi-composer.default.trusted-access-client-api.request.client_secret
The client secret is generated at installation time based on an encryption algorithm, and is specific to each environment. It
can also be re-generated if needed by running the following YAB command, although this should not be required:
logi-composer-default-trusted-access-client-create
In AUX environments configured as secured, the client ID and client secret are encrypted by the Key Management
Service (KMS). Whenever the client secret is re-generated, both tomcat-webui and Composer must be restarted.
Starting with the September 2022 release, Logi Composer is the only option available to install in new AUX environments
and the default option for existing AUX environments. While Logi Platform Services can still be used in older AUX
environments that have been upgraded to September 2022, it is deprecated and will be retired in a future AUX release.
Logi Platform Services has its own security model that has been integrated with QAD Adaptive UX, with the goal of
minimizing the amount of overlapping security data maintained across the two systems. It supports two authentication
mechanisms used by the Action Centers functionality of QAD Adaptive UX: native user authentication and trusted access
authentication.
The credentials for the LogiPS "admin" user are set at installation time by YAB and can be changed after installation.
Changing the default password is strongly encouraged in order to keep the system more secure. The username "admin"
should not be changed. The YAB properties with default values are as follows:
logi-platform-services.default.username=admin
logi-platform-services.default.password=password
To change the password, run the following YAB command:
yab logi-platform-services-default-password-update
Whenever the password is changed, both tomcat-webui and Logi Platform Services must be restarted.
This trusted access authentication mechanism is automatically applied and requires no system administration effort. It
requires only a client ID and client secret code pair maintained in both LogiPS and the Web UI, which is generated by
YAB automatically at installation time. For reference, the YAB properties in which these values are stored are as follows:
logi-platform-services.default.clientid=analytics
logi-platform-services.default.clientsecret=eyJhbGciOiJBMTI4S1ciLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0.
Fr4TBVdbNNNBnwdB7IAPUuXoVO5K-pT7KDL8SlAFjtMolXShXjg6cg.Kr5vg5dre2iKkMjlZByZEg.
KO3UzPUve149bmPTglKSCn-yahkEEIq7x-xgL-
3JtmeFs2B3do5mnXg6SrZgB59f_wG_Gk010YD4elkgzHbwLbekW0glns09MRwyDoYo2Ck.L04Buq0IrsPOSenuPA-UCA
The client secret is generated at installation time based on an encryption algorithm, and is specific to each environment. It
can also be re-generated if needed by running the following YAB command, although this should not be required.
logi-platform-services-default-clientsecret-update
Whenever the client secret is re-generated, both tomcat-webui and Logi Platform Services must be restarted.
As of the September 2021 release, Logi Info is not supported and KPIs created using Logi Info can no longer be displayed.
Logi Info is deployed as a separate Tomcat web application, named 'qad-dashboards' by default, but is intended to be
accessed only from within the Web UI. Users are not allowed to access Logi Info or display Action Centers without first
logging into the Web UI. To ensure that all access to the Action Centers is restricted to Web UI sessions, Logi SecureKey
authentication is enabled.
With SecureKey authentication enabled, every Web UI request to display Action Centers or other Logi Info views is
preceded by a server-to-server HTTP 'handshake' call from the Web UI webapp to the Logi Info webapp requesting a valid
SecureKey token. A token is then returned to the Web UI, allowing the Web UI to include Action Center displays through a
subsequent request. This SecureKey authentication handshake is processed quickly and is invisible to end users. While it
is enabled by default and should require no manual installation or configuration steps, the properties controlling the
processing are summarized here.
qad-analytics-core.logiSecureKeyEnabled=true
It is also enabled in the Logi Info configuration file _Definitions/_Settings.lgx in the XML Security element.
The Spark ThriftServer port is configured by the following YAB property:
spark-thriftserver.default.port
Because the ThriftServer port allows browse results from Adaptive Applications to be retrieved using remote SQL clients
with a valid JDBC connection, it must be secured to prevent unauthorized access. Because only Logi Platform Services
needs to connect to this port, QAD recommends that outside access to it be blocked by a network firewall. In addition,
access to the port is secured by username-password credentials that can be set in the following YAB properties:
spark-thriftserver.default.username
spark-thriftserver.default.password
YAB assigns default values to these properties. Changing the default password is strongly encouraged in order to keep
the system more secure. The password can be changed in a YAB update, followed by a restart of tomcat-webui and Logi
Platform Services.
Cassandra Authentication
By default, authentication to access the Cassandra data lake is disabled. Whether authentication is enabled or disabled
has no direct effect on Action Center users. However, the lack of authentication exposes security holes regarding access
to the browse data that is stored in Cassandra. In particular, a user with the ability to run the Cassandra shell cqlsh,
described in the Action Center Maintenance and Troubleshooting section, could connect to the data lake and view/update
/delete the browse data without restriction using SQL commands. For this reason, it is strongly recommended to enable
Cassandra authentication using YAB, so that valid username-password credentials are required to access any of its data.
cassandra.default.node.main.authenticator=PasswordAuthenticator
Once Cassandra authentication is enabled, the Cassandra user and password are set to "qad" and "qad" respectively by
default. Change these to more secure values for the installation by setting the following YAB properties.
cassandra.default.user
cassandra.default.password
Driver UI
The 'driver' is the Query Service itself running inside Tomcat, which creates a Spark application by communicating to the
stand-alone Spark cluster manager, called the 'master.' This UI allows users to view all Spark environment settings, both
those set by YAB and those internal to Spark, which include internal Spark passwords. It also displays 'kill' hyperlinks that
allow users to stop in-process Spark jobs, which disrupts Action Center processing. If a Spark job is killed in this manner,
the Tomcat instance hosting the Web UI has to be restarted to ensure that the Action Centers work properly.
The port used to access the driver UI is configured by the following YAB property:
qad-qraview.spark.ui.port
The driver UI is enabled by default. To disable it entirely, set the following YAB property:
qad-qraview.spark.ui.enabled=false
The ability to kill Spark jobs from the driver UI is controlled by the following YAB property, set to false by default:
qad-qraview.spark.ui.killEnabled=false
Master UI
The "master" runs in a separate Spark JVM process with responsibility for dispatching and tracking the work across a
cluster of workers, which in the current release is only a single worker node. Like the driver UI, it displays 'kill' hyperlinks
that allow users to stop in-process Spark jobs, which disrupts Action Center processing.
The port used to access the master UI is configured by the following YAB property:
spark.masterdefault.env.webui.port
The master UI is automatically enabled and cannot be disabled. However, the ability to kill Spark jobs from the master UI
is controlled by the following YAB property, set to false by default:
spark.masterdefault.properties.spark.ui.killEnabled=false
To prevent all use of the master UI, the network firewall must be configured to block access to its port.
Worker UI
The "worker" runs in a separate Spark JVM process and performs query processing based on requests from the master
node. Unlike the other Spark UIs, the worker UI does not expose any sensitive data or affect Spark processing.
The port used to access the worker UI is configured by the following YAB property. It is enabled by default and cannot be
disabled.
spark.slavedefault.env.webui.port
To prevent all use of the worker UI, the network firewall must be configured to block access to its port.
The PostgreSQL databases are not accessed outside of Logi Composer. The port and connection password are
maintained using YAB in the following properties.
postgresql.default.port
postgresql.default.roles.admin.password
The PDB is not accessed outside of LogiPS. Neither YAB nor QAD Adaptive UX ever reads or writes its contents directly,
and there would be no reason for its connection port to be exposed outside of the enterprise firewall. While this port can
be configured by YAB, the database connection credentials are native to LogiPS and not maintained using YAB.
The port used by the PDB for client connections is stored in the following YAB property, dynamically assigned by YAB at
installation time.
logi-platform-services.default.service.data.h2.port
The PDB connection credentials are automatically set by LogiPS at installation time and would normally never need to be
changed. However, LogiPS provides a command line utility dbPassword.sh that can be used to change them, if needed.
The help text for using this utility is shown below:
Usage
The utility's -d option is used to change the PDB password. Logi Platform Services should be stopped before using the
utility. After the utility is run, Logi Platform Services must be restarted.
HTTP or HTTPS client access to the Composer UI (URL): configured by logi-composer.default.url. Secured by configurable username-password credentials for the 'admin' and 'supervisor' users, managed by YAB. This is the native Composer UI; it is not used in AUX and should be accessed only when required for troubleshooting purposes, by sysadmin/cloud personnel only.
As shown in this table, all the ports are internal to Logi Composer except for the HTTP/HTTPS port for the Composer UI.
HTTPS client access to LogiPS: configured by logi-platform-services.default.service.application.webserver.sslport. Supports native Logi Platform ('native user') authentication and Web UI ('trusted user') authentication. Used for Logi Platform API access by the tomcat-webui server of QAD Adaptive UX only.
HTTP client access to LogiPS: configured by logi-platform-services.default.service.application.webserver.port. Supports native Logi Platform ('native user') authentication and Web UI ('trusted user') authentication. Used for Logi Platform API access, but disabled by default in favor of HTTPS; tomcat-webui server of QAD Adaptive UX only.
As shown in this table, all the ports are internal to Logi Platform Services except for the HTTPS and HTTP service ports,
which are accessed from the tomcat-webui server. None of them require direct access by client browsers.
This guide includes details about only a portion of the properties used to configure Action Centers. For descriptions of
most of them, run the command:
For descriptions of the properties related specifically to Financial Report Writer KPIs, run the command:
Activating KPIs
Starting with the September 2020 release, all KPIs have an Active flag. This flag allows KPIs that are not currently being
used to be created or imported into the system, so that system resources such as memory and CPU are not consumed
unnecessarily to retrieve and cache data for unused KPIs. The feature is especially important for managing KPIs provided
by QAD, of which only a subset may be of interest to any one customer. By default, all new KPIs created or installed in an
Adaptive UX environment are set to inactive. To enable an unused KPI, select the Active flag on the KPIs screen.
Before the September 2021 release, the Active flag had to be selected manually on the KPIs screen. As of the September
2021 release, a bulk action Assign Domains & Entities has been added to the KPIs browse.
You can use this action to activate any desired set of KPIs, as well as to assign/unassign domain or entity codes to them,
without having to update each KPI individually. This action is particularly useful when configuring a new Adaptive UX
environment, because until KPIs are activated and assigned valid domains/entities, Action Centers in the environment do
not display any data.
KPI Caches
KPIs defined in the Web UI describe the data sets that populate the Action Centers. In order to achieve acceptable
performance, they are cached in memory for on-demand display in the Action Centers.
There are several different data sources for KPIs, all of which retrieve data from the OpenEdge databases.
Browses
Financial Report Writer (FRW) (obsolete)
As of the March 2022 release, FRW KPIs are no longer supported in Adaptive UX. Only the infrastructure supporting
browse-based KPIs will be covered in the current section.
Browse-Based KPIs
KPIs that have browses as their data sources make up the majority of KPIs, because browses use a powerful data
retrieval mechanism and can be defined by knowledgeable end users. They have the capability of retrieving data from
almost any database table in the system, including financial and operational data.
Pre-defined browse-based KPIs are packaged and installed inside various apps, generally the same apps as the Action
Centers that use them.
The browse result sets used by the browse-based KPIs are cached by the Query Service, using its Spark and Cassandra
infrastructure.
In Cassandra, browse result sets are persisted to disk in tables that are specific to a combination of browse and entity for
financial browses, and browse and domain for all other browses. These tables reside within the 'browses' keyspace.
Following are some examples of table names displayed from this Cassandra keyspace.
cqlsh:browses>
The KPI tables whose names end in a QAD domain or financial entity code (example: "_10usa") contain data from that
domain or entity only for a particular KPI. KPI tables with no domain suffix contain data from all domains or entities
enabled for that KPI. Unfortunately, the KPI name and its source browse are not identifiable from the table name. Table
names may be changed in a future Adaptive UX release to correct this. However, starting with the September 2021
release of AUX, several REST APIs are available that will return the name of the Cassandra tables and Spark views
containing the data for a given KPI. The primary one can be called at the following URL from an HTTP client, such as a
web browser or the curl command.
Example:
https://vmlfwy0000.qad.com:22011/qad-central/api/analytics/kpiMetadata/table-info?
kpiName=Commitments%20and%20Historical%20Spending
Note that the KPI name parameter must be URL-encoded to escape the space characters.
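The URL-encoding can be produced with any standard library; for example, in Python (reusing the host name and port from the example URL above):

```python
# URL-encode the KPI name for the kpiMetadata/table-info API call.
from urllib.parse import quote

kpi_name = "Commitments and Historical Spending"
encoded = quote(kpi_name)  # spaces become %20

url = (
    "https://vmlfwy0000.qad.com:22011/qad-central/api/analytics"
    "/kpiMetadata/table-info?kpiName=" + encoded
)
```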
{
"errors": [],
"showResult": true,
"resultMessage": "",
"data": {
"kpiMetadataInfo": {
"kpiName": "Commitments and Historical Spending",
"kpiCode": "8a4a88f8-2a59-e6a0-5514-6a2ff8cbf089",
"kpiType": "Current Data"
},
"sparkTableName": "kpi_1721476749___94397eb4dc9aba5faeb22c5347019ecc",
"cassandraTableNames": [
"browses.kpi_1721476749_31aus",
"browses.kpi_1721476749_40brz",
"browses.kpi_1721476749_11can",
"browses.kpi_1721476749_30chn",
"browses.kpi_1721476749_20fra",
"browses.kpi_1721476749_23ger",
"browses.kpi_1721476749_12mex",
"browses.kpi_1721476749_21nl",
"browses.kpi_1721476749_22uk",
"browses.kpi_1721476749_10usa"
]
},
"success": true,
"errorSeverity": 0
}
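A client can extract the table names from this response with ordinary JSON handling; a minimal Python sketch, using an abbreviated copy of the response shown above:

```python
# Parse the kpiMetadata/table-info response and list the Cassandra tables.
import json

# Abbreviated copy of the API response shown above.
response_text = """
{
  "data": {
    "kpiMetadataInfo": {"kpiName": "Commitments and Historical Spending"},
    "sparkTableName": "kpi_1721476749___94397eb4dc9aba5faeb22c5347019ecc",
    "cassandraTableNames": [
      "browses.kpi_1721476749_31aus",
      "browses.kpi_1721476749_10usa"
    ]
  },
  "success": true
}
"""

payload = json.loads(response_text)
tables = payload["data"]["cassandraTableNames"]
```

Each entry in cassandraTableNames is a table in the 'browses' keyspace, suffixed with the domain or entity code it holds.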
The Cassandra tables are cached in memory by Spark, which creates views on the fly with appropriate filtering and
grouping to support the needs of specific Action Centers.
The KPI caches are populated and refreshed through three mechanisms:
Manual Refresh
Cache Warming
Scheduled Refresh
Manual Refresh
Depending on the configuration of each KPI, it is possible for Action Center users to manually force a refresh of the data
from the OpenEdge database by selecting the refresh icon in the lower right corner of a dashboard panel. The icon is
displayed next to a date-time stamp showing when the data was last retrieved from the operational database, as
highlighted in the following graphic.
The refresh icon is only available if the Allow Manual Refresh flag in the KPI screen is selected for the KPI associated with
the panel.
When the refresh icon is selected, the source browse is re-processed, re-cached in memory, and displayed in the Action
Center. All the Action Center panels are re-displayed in the browser, but only those panels using the refreshed browse
display refreshed source data.
In the case of KPIs whose data source is a large browse, a manual refresh can take several minutes, because the source
browse must be processed, the results stored in Cassandra and re-cached in Spark, and the visuals in the Action Center
re-rendered by the Logi software.
Because of the potential load on the system when KPIs use large browses, manual refreshes should only be used when
current data is required. Routine refreshes should be accomplished by enabling cache warming and scheduled
refresh.
The following system-level properties affect KPI caching and refresh behavior, enforcing limits to ensure acceptable online
performance.
qad-analytics-core.maxKpiActiveFields: Maximum number of active fields that may be included in a KPI. Default: 20. Should be limited in order to conserve system resources.
qad-analytics-core.browseBatchRefreshTimeLimit: Causes the copying of a KPI result set into the data lake to be skipped if the same KPI was copied more recently than the value of this property, expressed in minutes. Default: 1. Normally should not be changed.
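The behavior of browseBatchRefreshTimeLimit can be pictured as a simple time-based guard. This sketch uses hypothetical names and is not the actual Query Service implementation:

```python
# Illustrative time-limit guard: skip copying a KPI result set into the
# data lake if the same KPI was copied within the configured window.
import time

REFRESH_TIME_LIMIT_MINUTES = 1  # browseBatchRefreshTimeLimit default

_last_copied = {}  # KPI code -> time of the last copy into the data lake

def should_copy(kpi_code, now=None):
    """Return False while the previous copy is still within the time limit."""
    now = time.time() if now is None else now
    last = _last_copied.get(kpi_code)
    if last is not None and (now - last) < REFRESH_TIME_LIMIT_MINUTES * 60:
        return False
    _last_copied[kpi_code] = now
    return True
```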
Cache Warming
Cache warming refers to the process of caching all the KPIs in memory when an environment is started. This allows the
KPIs to be available without a significant wait for an Action Center to display the first time one of the underlying KPIs is
needed. By default, cache warming is enabled when the system is installed.
In the case of browse-based KPIs cached in the Query Service, it is important to note that cache warming does not
necessarily refresh the cache contents to reflect current OpenEdge database contents. If the required browses are already
present in the Cassandra data lake, cache warming loads the memory caches from Cassandra without reprocessing the
browse requests. Only required browses that are not yet in Cassandra are retrieved from the OpenEdge databases
through browse requests. This approach allows cache warming to proceed much faster and with lower system resource
usage at application startup, which is often an important practical consideration. Refreshing the caches from the
operational database sources is accomplished mainly by scheduled refresh.
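The decision just described (reuse the data lake when possible, fall back to a browse request otherwise) can be sketched as follows. All names are hypothetical stand-ins, not actual Query Service APIs:

```python
# Illustrative sketch of the cache-warming decision described above.

def warm_kpi_cache(kpi, data_lake, run_browse, memory_cache):
    """Load a KPI into memory, reusing persisted Cassandra results when present."""
    if kpi in data_lake:
        rows = data_lake[kpi]         # fast path: no browse reprocessing
    else:
        rows = run_browse(kpi)        # slow path: query the OpenEdge database
        data_lake[kpi] = rows         # persist the results for later startups
    memory_cache[kpi] = rows

lake = {"kpi_a": [1, 2, 3]}   # kpi_a already persisted in the data lake
cache = {}
browse_calls = []

def fake_browse(kpi):
    browse_calls.append(kpi)
    return [42]

warm_kpi_cache("kpi_a", lake, fake_browse, cache)  # served from the data lake
warm_kpi_cache("kpi_b", lake, fake_browse, cache)  # requires a browse request
```

Only kpi_b triggers a browse request, which is why warming is fast when Cassandra already holds the required browses.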
Cache warming is controlled by various properties that can be modified using YAB if necessary. This document does not
provide a comprehensive list, but only covers those that are most likely to affect overall performance and/or require tuning.
qad-analytics-core.cache.kpis-browse.loadAtStartup: Indicates if the cache is warmed at application startup. Default: true. Set to false in order to disable cache warming.
qad-analytics-core.metricKpiCacheWarmerExecutor.poolSize: Number of KPI refreshes that can be requested concurrently. Must have a value of 1 or greater. Default: 2. Increase to submit refresh requests faster to the Query Service, potentially warming the cache in less time but at the cost of greater resource usage. The effect of this setting is constrained by the Query Service property qad-qracore.browseCassandraDataService.concurrency (described below).
Scheduled Refresh
Because Action Center displays retrieve their KPI data from in-memory caches, the data may not reflect current database
contents. If the data sets are not periodically refreshed, over time they will become stale and less relevant to the needs of
the organization. While end users can trigger the data in particular Action Center panels to be refreshed from the
OpenEdge sources, manual refreshes are resource intensive and often slow. To keep the Action Center contents current
enough to be useful, the system automatically refreshes the browse and FRW KPI caches periodically, based on a
configurable schedule. This feature is called 'scheduled refresh.'
Scheduled refresh is configurable by KPI. KPIs can be explicitly enabled for scheduled refresh on a daily, weekly, or
monthly basis in the KPI screen using the Auto Refresh setting.
However, there is a limit on the number of KPIs for which scheduled refresh can be enabled, as described below:
qad-analytics-core.maxKpisAutoRefreshed: Maximum number of KPIs for which scheduled refresh may be enabled. Default: 30. Can be increased to allow more KPIs to be automatically refreshed, at the cost of more resource-intensive browse requests. The impact of a longer scheduled refresh depends on the number of KPIs, the size of browse result sets, and the overall system load at the time of day when the scheduled refresh is run.
Scheduled refresh is controlled by various properties that can be modified using YAB if necessary. This document does
not provide a comprehensive list, but only covers those that are most likely to affect overall performance and/or require
tuning.
The scheduled refresh process is started and runs inside the tomcat-webui instance, not in separate scripts. If tomcat-webui
is not running at the time when the scheduled refresh is scheduled (for example, during an offline backup), or was stopped
before an in-process scheduled refresh could complete, no special recovery process is initiated. Instead, the scheduled
refresh will run at the next scheduled date-time once tomcat-webui is running again.
qad- Enables scheduled refresh processing across the true Set to false to disable all scheduled refreshes.
analytics system, based on the other properties and
-core. individual KPI configuration.
cache.
kpis-
browse.
scheduled
Refresh.
enabled
qad- Maximum number of requests that are batched 100 Lower values may allow the scheduled refresh to be
analytics for processing by a single KPI refresh thread. completed faster, at the cost of using more memory
-core. and/or processor resources. Not recommended to
cache. change.
kpis-
browse.
scheduled
Refresh.
batchSize
qad- Number of background threads that can 2 More threads would allow more scheduled refreshes
analytics concurrently request KPI refreshes. Must be to run concurrently, at the cost of using more
-core. greater than or equal to 2. memory and/or processor resources.
cache.
kpis-
browse.
scheduled
Refresh.
quartzJob
Count
Property: qad-analytics-core.cache.kpis-browse.scheduledRefresh.cronExpression
Default: 0 0 0 * * ? (every day at midnight)
Description: String expression in a cron format that specifies when the scheduled refresh is run. See the Quartz documentation for a more detailed description of cron syntax. This setting does not cause cron scripts to run; the cron syntax is only used to set the schedule for background refresh activity run within the Tomcat environment.
Notes: Set to a time schedule that suits the organization, based on system load and user activity over a 24-hour period. If possible, schedule for a time of day with low online user activity.
Property: qad-analytics-core.cache.kpis-browse.scheduledRefresh.kpi-weekly-day
Default: sun
Description: For KPIs that are configured to be refreshed weekly, specifies the day of the week on which the refresh is performed. Valid values are sun, mon, tue, wed, thu, fri, sat.
Notes: Set to a day of the week that suits the organization, based on system load and user activity.
Property: qad-analytics-core.cache.kpis-browse.scheduledRefresh.kpi-monthly-day
Default: first,0
Description: For KPIs that are configured to be refreshed monthly, specifies the day of the month on which the refresh is performed. The syntax is a comma-separated string with two values:
- first or last: Indicates whether the refresh is defined relative to the beginning or end of each month.
- offset days: Sets the number of days before or after the first or last day of the month on which to perform the refresh. A positive value states the number of days after the first or last day of the month; a negative value states the number of days before it; and a value of 0 indicates the first or last day of the month with no offset.
Notes: Set to a day of the month that suits the organization, based on system load and user activity.
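The first/last-plus-offset semantics can be illustrated with a short sketch (an illustration of the documented rules only; the function name is hypothetical and the real scheduler is internal to tomcat-webui):

```python
import calendar
import datetime

# Sketch of the documented kpi-monthly-day semantics: resolve a value
# such as 'first,0' or 'last,-2' to a concrete date in a given month.
def monthly_refresh_day(setting, year, month):
    anchor, offset = setting.split(",")
    offset = int(offset)
    last_day = calendar.monthrange(year, month)[1]   # days in the month
    base = 1 if anchor.strip() == "first" else last_day
    return datetime.date(year, month, base) + datetime.timedelta(days=offset)

print(monthly_refresh_day("first,0", 2024, 3))   # 2024-03-01
print(monthly_refresh_day("last,-2", 2024, 3))   # 2024-03-29
```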
Property: qad-analytics-core.browseBatchRefreshTimeLimit
Default: 1
Description: Minimum number of minutes allowed between refreshes of the same KPI's browse by the Query Service. If a requested browse was refreshed within this time interval, the new request is skipped. This property prevents repeated refreshes of the same browse from being processed within a short time interval (for example, by repeatedly clicking the refresh control inside an Action Center panel).
Notes: Increase the value to raise the minimum amount of time allowed between refreshes of the same browse. This may be important to prevent excessive load on the system, in terms of the AppServer agents and SQL connections consumed by browse requests. Smaller values allow the same browse to be refreshed more frequently. This property allows you to adjust the trade-off between system resource usage and the data currency of Action Center displays.
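The skip-if-recently-refreshed behavior can be sketched as follows (hypothetical class and method names; the actual Query Service implementation is internal and may differ):

```python
import time

class BrowseRefreshThrottle:
    """Sketch of browseBatchRefreshTimeLimit: a refresh request for a browse
    that was already refreshed within the time limit is skipped, not queued."""

    def __init__(self, limit_minutes=1):
        self.limit_seconds = limit_minutes * 60
        self.last_refresh = {}   # browse name -> timestamp of last refresh

    def should_refresh(self, browse, now=None):
        now = time.time() if now is None else now
        last = self.last_refresh.get(browse)
        if last is not None and now - last < self.limit_seconds:
            return False         # refreshed too recently: skip this request
        self.last_refresh[browse] = now
        return True

t = BrowseRefreshThrottle(limit_minutes=1)
print(t.should_refresh("SalesOrderBrowse", now=0))    # True
print(t.should_refresh("SalesOrderBrowse", now=30))   # False (inside window)
print(t.should_refresh("SalesOrderBrowse", now=61))   # True  (window elapsed)
```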
Property: qad-qracore.browseCassandraDataService.concurrency
Default: 3
Description: Number of browse retrievals and data copies into Cassandra that can be processed concurrently by the Query Service. Browse requests are one of the following types:
- SQL queries using an OpenEdge JDBC connection
- Browse engine queries run on a QRA AppServer agent
Notes: This property limits the number of browse requests of either type that can be processed at the same time. For browse requests processed on the QRA AppServer, this is a critical setting, as Progress AppServers are often a scarce and CPU-intensive system resource. If the number of AppServer agents being used to process Query Service browse requests causes the environment to run out of available agents, a fatal 'No Servers Available' error may be raised by Progress. This error causes the request that needs the AppServer agent to fail, whether it comes from the Query Service or from another system component. For this reason, this property should be set to a value lower than the total number of QRA AppServer agents available in the environment, leaving enough agents available for other activities to proceed while cache warming or a scheduled refresh is in process.
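The effect of the concurrency cap can be illustrated with a semaphore sketch (illustrative only; all names are hypothetical and this is not the Query Service implementation):

```python
import threading
import time

# Sketch: a semaphore caps how many browse requests run at once, mirroring
# browseCassandraDataService.concurrency=3. Requests beyond the cap wait
# rather than consuming additional AppServer agents or SQL connections.
CONCURRENCY = 3
slots = threading.Semaphore(CONCURRENCY)
lock = threading.Lock()
active = 0
peak = 0

def process_browse(name):
    global active, peak
    with slots:                      # blocks while CONCURRENCY requests run
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)             # stand-in for the SQL/browse-engine query
        with lock:
            active -= 1

threads = [threading.Thread(target=process_browse, args=(f"browse-{i}",))
           for i in range(10)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print("peak concurrency:", peak)     # never exceeds CONCURRENCY
```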
Property: qad-qracore.browseCassandraDataService.timeoutLimit
Default: 120
Description: Maximum number of seconds that the Query Service waits for a page of output to be returned from a browse request.
Notes: In a heavily loaded environment with high CPU and/or data retrieval activity, especially using AppServer agents, this setting might have to be increased to give the system time to retrieve complete browse results.
Property: qad-qracore.browseCassandraDataService.pageSize
Default: 5000
Description: Number of records retrieved in a single page or chunk from a browse request.
Notes: Can be adjusted to retrieve browse data from the source in smaller or larger chunks, potentially affecting data retrieval traffic and latency.
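Paged retrieval can be sketched in a few lines (an illustration of the trade-off only, not Query Service code):

```python
# Sketch: retrieve browse output in fixed-size pages, as the Query Service
# does with pageSize=5000. Smaller pages mean more round trips; larger
# pages mean fewer, heavier transfers.
def paged(records, page_size=5000):
    for start in range(0, len(records), page_size):
        yield records[start:start + page_size]

rows = list(range(12_500))                    # stand-in browse result set
page_sizes = [len(p) for p in paged(rows)]
print(page_sizes)                             # [5000, 5000, 2500]
```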
For these settings, it is assumed that the appender stdout references the standard tomcat-webui console log file. The
resulting messages have a log level of DEBUG. They trace the caching of each KPI for each domain or financial entity,
without showing details about the related browses or queries.
Like the Scheduled and Manual Refresh functions, historical snapshots are configured by KPI on the KPI screen.
A chronological list of existing snapshots is displayed in the Snapshot History section of the KPI screen, for all historical
KPIs.
However, there are limits on the number and size of snapshots that are configured at the system level, as described below.
Because historical KPI snapshots accumulate over time based on their frequencies and schedules, they can grow to
consume large amounts of disk space depending on their size, grouping level, and so on. The limits on historical KPI
snapshots should therefore be considered carefully for each Adaptive UX installation.
In releases prior to September 2019 when all Action Centers were supported using Logi Info, there were several situations
that required manual configuration changes to the Logi Info settings. The following information is not necessary for
environments created with the September 2019 release or later.
To address this potential issue, a system-level property is provided to limit the number of data rows that can be returned
for a single KPI and workspace.
Property: qad-analytics-core.maxRowCount
Default: 5000
Description: Maximum number of browse rows that can be retrieved for any KPI in the system for a single domain or entity.
Notes: Raise only after considering the additional system load that will be incurred by processing large browses to retrieve KPI data.
The name of this file is _Settings.lgx. It is saved in the _Definitions/ directory under the root directory of the
qad-dashboards webapp. To find this root directory, run the following command.
Open the file with a text editor. Find the Globalization element near the end of the file:
Add the attribute FirstDayOfFiscalYear to the Globalization element as shown below. Set its value to MM/DD, where
MM is the two-digit month of the calendar year and DD is the two-digit day of the month.
Save the change. It will take effect immediately, with no need to restart the environment.
If this property is not present in the file, Logi Info assumes that the fiscal and calendar years are the same, with no need
for the Fiscal Year or Fiscal Quarter options when configuring visuals.
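The attribute edit described above can be sketched with Python's ElementTree (the minimal XML below is illustrative only; a real _Settings.lgx contains many more elements and attributes, and the guide's text-editor procedure remains the supported approach):

```python
import xml.etree.ElementTree as ET

# Sketch: add FirstDayOfFiscalYear="MM/DD" to the Globalization element.
# The simplified document here stands in for a real _Settings.lgx file.
doc = ET.fromstring('<Setting><Globalization Language="en-US"/></Setting>')
globalization = doc.find("Globalization")
globalization.set("FirstDayOfFiscalYear", "04/01")  # fiscal year starts April 1
print(ET.tostring(doc, encoding="unicode"))
```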
The name of this file is _Settings.lgx. It is saved in the _Definitions/ directory under the root directory of the
qad-dashboards webapp. To find this root directory, run the following command:
Open the file with a text editor. Find the Globalization element near the end of the file:
Add the attribute FirstDayOfWeek to the Globalization element as shown below. Set its value to one of the following:
Save the change. It will take effect immediately, with no need to restart the environment.
The databases, managed within the same PostgreSQL instance, are named zoomdata and zoomdata-qe. Their contents
are changed online as a result of end-user activity that creates, modifies, or deletes KPIs, Action Centers, and visuals.
They also contain much configuration data internal to Composer. As the PostgreSQL activity is not recorded in
OpenEdge database rollback or roll-forward logs, their contents cannot be precisely synchronized with the OpenEdge
databases when the databases must be restored to a particular point in time through automatic roll-forward operations,
such as in some disaster recovery scenarios. However, in practice the risk of data corruption is low, given the following
points.
KPI, Action Center, and visuals maintenance are typically low-volume, low-frequency activities that are performed
by only a subset of users during working hours.
Most of their contents are not tightly coupled with OpenEdge database contents. In addition, the restore
process run by YAB also calls a REST API in AUX that ensures Composer contains the same objects with the
same identifiers as the QAD databases. As a result, changes to the PostgreSQL tables generally do not affect
the QAD databases and vice versa.
To minimize the risk of data loss, the PostgreSQL databases should always be backed up at the same time as the
OpenEdge databases. The following YAB command creates a backup of the PostgreSQL contents:
yab postgresql-default-backup
The backups are stored in the directory referenced by the following YAB property:
postgresql._backup.dir
The following YAB commands include the PostgreSQL databases in backups of the system environment:
yab environment-online-backup
yab environment-offline-backup
The following YAB command provides a list of all the existing PostgreSQL backups for Logi Composer:
yab postgresql-default-backup-list
The output of the backup list command is similar to the following example:
Tag: default
---------------
Location: /dr01/dbs/backup/default/postgresql
Tag: 20220711120330
---------------
Location: /dr01/dbs/backup/20220711120330/postgresql
The following YAB commands delete all Logi Composer PostgreSQL backups:
yab postgresql-backup-remove
yab database-all-backup-remove
PostgreSQL backups can be restored only when Logi Composer is offline. They are restored by the following YAB
commands:
yab postgresql-default-restore
yab database-all-restore
Starting with the September 2022 release, Logi Composer is the only option available to install in new AUX environments
and the default option for existing AUX environments. While Logi Platform Services can still be used in older AUX
environments that have been upgraded to September 2022, it is deprecated and will be retired in a future AUX release.
Starting with the September 2019 release, the Action Center dashboard and visual definitions developed using Logi
Platform Services are stored in Logi's Product Database (PDB), which is implemented using the H2 relational database
manager. Its contents are changed online as a result of end-user activity that creates, modifies, or deletes KPIs, Action
Centers, and visuals. The PDB also contains much configuration data internal to LogiPS. PDB activity is not recorded in
OpenEdge database rollback or roll-forward logs. As a result, the PDB cannot be precisely synchronized with the
OpenEdge databases when the databases must be restored to a particular point in time through automatic roll-forward
operations, such as in some disaster recovery scenarios. In practice, the risk of data corruption is relatively low, given the
following points.
KPI, Action Center, and visuals maintenance are typically low-volume, low-frequency activities that are performed
by only a subset of users during working hours.
The contents of the PDB are not tightly coupled with OpenEdge database contents. As a result, changes to the
Logi tables generally do not affect the QAD databases and vice versa.
To minimize the risk of data loss, the PDB should always be backed up at the same time as the OpenEdge databases.
The following YAB command creates a backup of the Logi Platform Services PDB.
yab logi-platform-services-default-backup
The backups are stored in the directory referenced by the following YAB property:
logi-platform-services-backup.dir
The following YAB commands include the PDB in backups of the system environment:
yab environment-online-backup
yab environment-offline-backup
The following YAB command provides a list of all the existing PDB backups for Logi Platform Services:
yab logi-platform-services-default-backup-list
The output of the backup list command is similar to the following example:
Tag: default
--------------
Location: /dr01/dbs/backup/default/logi-platform-services
Tag: 20190527080605
--------------
Location: /dr01/dbs/backup/20190527080605/logi-platform-services
Tag: foo
--------------
Location: /dr01/dbs/backup/foo/logi-platform-services
The following YAB commands delete all Logi Platform Services PDB backups:
yab logi-platform-services-backup-remove
yab database-all-backup-remove
The PDB can be restored only when Logi Platform Services is offline. It is restored by the following YAB commands:
yab logi-platform-services-default-restore
yab environment-restore
Because the PDB contains license information, if a PDB backup from an environment containing a different Logi Platform
Services license is restored into the target environment, the correct license should be re-imported after the restore using
the following YAB command:
yab logi-platform-services-default-license-import
Logi Files
As of the September 2021 release, Logi Info is not supported and KPIs created using Logi Info can no longer be displayed.
Prior to the September 2019 release, the Action Center dashboard and visual definitions were stored in XML files inside
the Logi Info web application, not in a database. For environments still running a pre-September 2019 release, these files
are changed online as a result of end-user activity that creates, modifies, or deletes KPIs, Action Centers, and visuals. Changes to these files are not recorded in OpenEdge database rollback or roll-forward logs. As a result, these files cannot be
precisely synchronized with the OpenEdge databases when the databases must be restored to a particular point in time
through automatic roll-forward operations, such as in some disaster recovery scenarios. In practice, the risk of data
corruption is relatively low, given the following points:
KPI, Action Center, and visuals maintenance are typically low-volume, low-frequency activities that are performed
by only a subset of users during working hours.
The contents of the Logi files are not tightly coupled with OpenEdge database contents. As a result, changes to
the Logi files generally do not affect the database and vice versa.
All the Action Center files are stored in a single directory inside the Logi Info web app. To find the directory location, query
the YAB configuration:
To minimize the risk of data loss, the Logi files should always be backed up at the same time as the OpenEdge
databases. The following YAB commands include the Logi files in backups of the system environment:
yab environment-online-backup
yab environment-offline-backup
yab directorybackup-backup
yab directory-action-center-backup
Data restore activities performed by system administration personnel should be implemented to include both the
OpenEdge databases and the Logi files. The files can be backed up and restored through simple file copies, without the
use of any special utilities. The following YAB commands restore the Logi files backups created by a previous YAB
backup:
yab directorybackup-restore
yab directory-action-center-restore
Because changes to the Logi files are not automatically logged, QAD recommends that the Logi files be backed up more
frequently than the OpenEdge databases in order to support recovery procedures when it is necessary to restore the
entire system to a stable state as of a particular point in time. If the OpenEdge databases must be restored and rolled
forward to a particular point in time, more frequent Logi file backups allow the files to be restored to a state that is closer to
the restored database state. There is still the possibility of some Action Center data loss if the current Logi files were lost
during a severe service outage, but the risk can usually be managed to a low level through this approach.
Cassandra Keyspaces
The 'browses' keyspace in the Cassandra data lake that contains Query Service data is not a system of record, but is
created entirely from the contents of operational OpenEdge database tables. There is no need to back up its contents or
restore a backup in case of data loss. It should therefore be included in the list of keyspaces exempted from backups in
the YAB property cassandra.default.node.backup.blacklist. Whenever there is a need to restore/refresh the data in this
keyspace to the current state, the keyspace should be rebuilt from its sources as described in the section 'Rebuilding the
Cassandra Keyspace.'
The 'historical_kpi' keyspace in the Cassandra data lake is the system of record for historical KPI snapshots, and must be
included in all database backups. It should therefore be omitted from the list of keyspaces exempted from backups in the
YAB property cassandra.default.node.backup.blacklist.
Spark Cache
Within the Query Service, Spark is used as an on-demand, memory-based cache of browse data that is loaded from the
Cassandra data lake. Hence, there is no need to back up its contents or restore a backup in case of data loss. Instead,
the cache is rebuilt or refreshed at the same time as the Cassandra keyspace (see above).
Log Files
Because much of the Action Center and Query Service processing is performed in the background and is not visible in the
user interface, log files are the most important resource for diagnosing problems.
Tomcat Logs
The console log file written by the Tomcat instance that supports the Web UI (tomcat-webui) is the first place to look for
error details. By default, the current log file is named catalina.out, and the files created on previous days are named
catalina.<date>.log, where <date> is the date when the log was written. These files are stored in the logs/ directory under
the Tomcat instance. To find the root directory of the Tomcat instance, run the following command.
Various Composer microservice and data connector components also write separate log files. While they can be
configured to write to different locations in the file system, by default they write to the same location as the one mentioned
above. In case different locations are used, the YAB configuration can be queried using several commands:
PostgreSQL database logs are written to a different location, queried as shown below:
To find the Logi Platform Services log files generated by the Logi Application Service, query the following property and go
to the logs/ sub-directory under it.
Cassandra Logs
To find the Cassandra log files, query the YAB configuration.
The log files are located in the sub-directory default/ within this directory. The cassandra-default.log file shows
Cassandra activity, and the gc.log.* files show its Java garbage collection activity.
Spark Logs
To find the Spark log files, query the YAB configuration for the master and slave processes running in Spark:
The Spark master process manages the resource used by Spark workers to process particular requests:
Spark worker processes carry out particular tasks based on the incoming requests.
YAB Logs
If errors related to Action Centers or the Query Service are raised while running a YAB command, consult the YAB log file
for details recorded by YAB (for example, a cache warming failure when the Tomcat instance is started during a YAB
update).
The YAB log file is named yab.log, and is stored in the build/logs/ directory under the root of the installation.
yab webapp-analytics-composer-api-configure
The API can also be called from a web browser or other HTTP client tool at the following endpoint:
In this case, running the following REST API from the web browser or another HTTP client tool may fix the problem by
adding/updating the Logi dataviews used by the Action Center.
1. Press the F12 key in the web browser to open the DevTools console.
2. Go to the Network tab.
3. Refresh the browser page while the Action Center is displayed.
4. Type the string "effectivePermissions" into the Filter box of DevTools.
5. Find the URL containing the string "/system.configs/dashboard-". The dashboard ID is a string prefixed with
'dashboard-', as in the following example.
Beginning with the March 2020 release, all Action Centers in the system along with all their associated Logi artifacts can
be repaired by running the following YAB command, which can take many minutes to run:
yab action-center-dashboard-repair
Dashboards (Action Centers) in the Logi database do not have associated records in the QAD database.
Dataviews in the Logi database do not have associated KPIs in the QAD database.
App or KPI tags exist in the Logi database without an associated app or KPI in the QAD database.
Often these cases will not affect Adaptive UX operation, as 'orphaned' objects in the Logi database will usually not be
accessible or visible to QAD users. However, occasionally special technical problems may result. As of the September
2021 release, the following REST API can be run in the web browser or other HTTP client tool to delete these objects:
It is intended to be used only by Support personnel and should not be run routinely.
KPI Refresh
When it is necessary to refresh the KPI data stored in Cassandra and cached in Spark, manual refreshes can be
performed for a single KPI by selecting Actions > Refresh Data on the KPI screen. However, occasionally there is the
need to force a refresh of the data displayed by all KPIs without waiting for the next scheduled refresh run. You can do
this by running the following REST API in a web browser or other HTTP client tool:
Depending on the environment, this API can take a lot of time and consume significant system resources. You should avoid
running this API in production environments.
To address this problem, the following REST API, which was introduced in September 2021, returns information about
where KPIs are stored in Cassandra and Spark.
When run from the web browser or another HTTP client tool, it returns JSON data for the requested KPI, as in the
following example for the KPI "Top Items."
{
"errors": [],
"showResult": true,
"resultMessage": "",
"data": {
"kpiMetadataInfo": {
"kpiName": "Top Items",
"kpiCode": "994dcb3b-d29c-bf89-5614-675f4010b8b2",
"kpiType": "Current Data"
},
"sparkTableName": "kpi_1817003066___afd7af965b156ba801a896a0995dd3a6",
"cassandraTableNames": [
"browses.kpi_1817003066_10usa",
"browses.kpi_1817003066_12mex",
"browses.kpi_1817003066_11can"
]
},
"success": true,
"errorSeverity": 0
}
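A response like the one above can be parsed to list the storage locations directly (a minimal sketch using the example payload; field names are taken from the response shown):

```python
import json

# Sketch: extract the Spark view name and Cassandra table names from the
# KPI-location API response shown above.
raw = """{
  "data": {
    "kpiMetadataInfo": {"kpiName": "Top Items", "kpiType": "Current Data"},
    "sparkTableName": "kpi_1817003066___afd7af965b156ba801a896a0995dd3a6",
    "cassandraTableNames": ["browses.kpi_1817003066_10usa",
                            "browses.kpi_1817003066_12mex",
                            "browses.kpi_1817003066_11can"]
  },
  "success": true
}"""
data = json.loads(raw)["data"]
print("Spark view:", data["sparkTableName"])
for table in sorted(data["cassandraTableNames"]):
    print("Cassandra table:", table)
```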
The property 'sparkTableName' is the name of the Spark view, visible when using the Spark Beeline tool (see below).
Because the KPI data is stored by domain or by financial entity in Cassandra, a list of table names may be returned as the
property 'cassandraTableNames'. The Cassandra table names can be queried in the Cassandra shell (see below).
If the unique KPI code of the KPI is known, the following REST API can be used instead. It returns the same
information as the previous API.
Starting with the March 2022 release of Adaptive UX, the following REST API returns the same information for all KPIs in
a single request.
Cassandra Shell
Cassandra provides a command-line shell cqlsh that can be used to execute CQL commands, the SQL-like language
supported by Cassandra. The shell is especially useful for examining the contents of the browses or historical_kpi
keyspaces, which contain all the Query Service browse data retrieved from the OpenEdge databases:
To run the Cassandra shell, Python 2.7 must be present on the command PATH.
Starting with the September 2021 release, a 'wrapper' script for cqlsh is provided by YAB that can be used to start it from
the command line with no need to provide parameters such as port numbers and credentials. In this case, change to the
Adaptive UX home directory and run the following command:
scripts/cqlsh
For releases prior to September 2021, the native Cassandra script must be run with various command line parameters. To
find the location of the shell utility and the Cassandra port number to which to connect, query the YAB configuration.
Change to the directory containing the Cassandra scripts. Start the shell, passing the required hostname and port number.
The actual hostname (not 'localhost') must be used.
Once the Cassandra shell has been started, determine the keyspace whose data you want to review and make it the
default for all subsequent CQL commands.
The shell can then be used to query the contents of the data lake with various commands, such as the following.
cqlsh:browses>
As of the September 2020 release of Adaptive UX, the KPI and browse cannot be identified based on the table name. The
table names may be changed in future releases to facilitate easier inspection using the Cassandra shell.
cqlsh:browses>
Starting with the September 2021 release, the browsecopystatus table was enhanced to include the Cassandra table
name of each KPI browse. This information can be useful when the contents of a particular table must be inspected to
determine the cause of some data refresh problem.
cqlsh:browses>
Here is an example.
Alternatively, the following command will drop all browse tables from the data lake. A timeout error may be displayed if
processing requires more than the default 10-second limit. However, in this case the command will still be processed in the
background despite the timeout.
To re-populate the Cassandra keyspace with browse data, restart the Tomcat Web UI instance with cache warming
enabled. All browses needed to support the KPIs used by the Action Centers are re-processed, and the results are copied
into Cassandra.
cqlsh:browses> quit;
For more information about the shell and other Cassandra tools, see the documentation at the Apache Cassandra web site.
For more information, see the documentation at the Apache Spark web site.
To display the UI, query the YAB configuration to find the correct HTTP port to connect.
The page contains hyperlinks providing more details on particular tasks and links to native Spark log entries for them.
While the Spark Web UI described above provides many operational details regarding Spark activities, queries, and
environment characteristics, it does not provide a way to query the browse data that has been cached in the Query
Service. However, Spark also includes beeline, a command-line utility that can connect to the Spark ThriftServer within the
Query Service, and access the cached data using SQL.
To find the location of the Spark files, the ThriftServer port number to which to connect, and the required ThriftServer
credentials, query the YAB configuration.
Change to the Spark package directory. Start the beeline tool, passing the required hostname and port number. The
actual hostname (not 'localhost') must be used.
Alternatively, the above parameters can be passed in directly from the command line.
All Spark SQL commands must specify the name "qad_global_temp" of the Spark global temporary view, or no data will be
shown. The global temporary view is essentially a virtual database that contains the datasets cached in the Query Service
in the form of SQL views. Unfortunately, the global temporary view cannot be set as the default database for SQL
commands, but must be specified explicitly in each one.
(Example beeline output: wide, pipe-delimited rows of cached browse data, including decimal values such as 0E-18, item and domain values such as 'Medical Ultrasound' and 10USA, and timestamps. The rows are elided here for brevity.)
745 rows selected (2.579 seconds)
0: jdbc:hive2://vmlwebs20t:22178>
To disconnect from the ThriftServer and exit beeline, use the '!quit' command.
0: jdbc:hive2://vmlwebs20t:22178> !quit
While inside beeline, use the '!help' command to get information about many other beeline commands.
0: jdbc:hive2://vmlwebs20t:22178> !help
Any specialized logging configuration changes not supported by YAB properties and commands must be made manually in
a text editor. If this is necessary, it should be done only in consultation with QAD Support personnel.
Tomcat
In order to investigate the details of problems related to Action Centers and Query Service, it is often necessary to enable
debugging in the Tomcat instance supporting the Web UI. To accomplish this, the logging configuration file must be
modified. To find the location of the Tomcat web app, query the YAB configuration.
The logging configuration file is named logback.xml, and is stored in the WEB-INF/config/ directory under the web app
location. To obtain help about the relevant YAB properties and commands, run the following YAB command.
Debugging should be enabled selectively for relevant parts of the Tomcat environment, not globally for all Java classes.
Depending on the problem to be debugged, the following Java packages and classes are usually the most important.
These packages and classes are used as the 'NAMESPACE' entries referenced in the YAB help.
Query Service - Spark processing: com.qad.qracore.service.impl.spark
For example, to enable debug-level logging for the com.qad.analytics.core.service package, set the following YAB
property.
logback.webshell.loglevel.com.qad.analytics.core.service=debug
Enabling debug-level logging can cause the Tomcat log file size to expand quickly. It should be used only selectively and
for short periods of time in production environments.
After the desired YAB properties have been set, run the following YAB command to update the logging settings.
yab logback-webshell-update
Once the changes are saved, they take effect within a minute or so with no need to restart the Tomcat instance.
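For example, to trace both the Action Center services and Query Service Spark processing at the same time, the corresponding properties would be the following. Both entries follow the pattern logback.webshell.loglevel.NAMESPACE=debug, with NAMESPACE taken from the namespace list above; the combination shown is illustrative.

```
logback.webshell.loglevel.com.qad.analytics.core.service=debug
logback.webshell.loglevel.com.qad.qracore.service.impl.spark=debug
```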
Logi Composer
When investigating problems in the display of visuals inside the Action Centers supported by Logi Composer, expanded
panels, or the KPI screen, it is sometimes helpful to enable debugging for one or more of the Composer components or
microservices. The 'zoomdata' component is generally the most useful one for debugging purposes. To enable its
debug logging, set the following YAB properties.
logi-composer.default.microservice.zoomdata.configuration.logging.level.com.zoomdata=debug
logi-composer.default.microservice.zoomdata-query-engine.configuration.logging.level.com.zoomdata=debug
logi-composer.default.microservice.zoomdata-edc-postgresql.configuration.logging.level.com.zoomdata=debug
logi-platform-services.default.service.application.loglevel=debug
logi-platform-services.default.service.data.loglevel=debug
After these properties have been set, restart Logi Platform Services. There is no need to restart Tomcat.
Logi Info
When investigating problems in the display of visuals or grids inside the Action Centers supported by Logi Info, expanded
panels, or the KPI screen, it is sometimes helpful to enable debugging for the Logi plugin classes specific to Action
Centers. Most of the common logging configuration changes to enable debugging can be done using YAB commands. To
find the location of the Tomcat webapp, query the YAB configuration.
The logging configuration file is named log4j.properties, and is stored in the WEB-INF/classes/ directory under the web
app location. To obtain help about the relevant YAB properties and commands, run the following YAB command.
To enable debugging for the QAD Logi plugin classes, set the following YAB property to change the root log level from
FATAL to INFO.
log4j.analytics-logi.loglevel=info
Do not set the Logi log level to DEBUG, as this causes many internal Logi log messages to be written that are unlikely to be
helpful in Action Center problem diagnosis.
After the desired YAB properties have been set, run the following YAB command to update the logging settings.
yab log4j-analytics-logi-update
The Tomcat instance must be restarted for the new log level to take effect.
OpenEdge databases: Contains the KPIs and browse definitions that define the data required for the visuals
and Action Centers, as well as the dashboard resource identifiers and access permissions for each Action Center
displayed on the Web UI menu.
Logi Composer PostgreSQL databases or Logi Platform Services H2 product database: Contains the
definition of all visuals, and the contents and layout of all Action Centers.
Cassandra data lake: Contains current and historical snapshots of the KPI datasets extracted from the
operational databases, mainly browse result sets.
This section summarizes the disaster recovery procedures required or recommended for Action Center data. However, it
does not cover the specific YAB commands that implement database backups and restores. For this information, see the
YAB Configuration and Administration Guide or use the yab help command.
The details of the Logi Composer PostgreSQL databases and older Logi Platform Services H2 database are very
different, but the disaster recovery considerations and recovery approaches are very similar. In both cases, the Logi
databases are backed up at the same time as the OpenEdge databases, and the backups will therefore be in sync. When
the most recent backups for OpenEdge and Logi are restored following a disaster, their contents will agree. No in-flight
transactions will be lost, as all commits are written to disk immediately, not buffered in memory and flushed to disk later.
However, unlike the OpenEdge databases, write-ahead logging and roll-forward capability are not enabled for the Logi
databases. While database transactions since the last backup can be applied automatically to the restored OpenEdge
databases to make them current as of the time of the service outage, the same cannot be done for the Logi databases.
The Logi databases contain only metadata, such as visual and dashboard definitions, which are relatively static and not
updated as frequently as the business data. Roll-forward capability is therefore not nearly as important as with other
Adaptive UX data. However, this difference implies that following a disaster and roll-forward recovery, changes made to
the Logi databases since the most recent backup may be recoverable only through manual re-entry. This section
describes in more specific terms the information that could be lost.
To re-synchronize the Web UI and Composer following a restore, run the following YAB command:
yab webapp-analytics-composer-api-sync
This command ensures that there is a correctly identified Action Center and KPI in Composer for every Action Center,
KPI, and visual in the qaddb database. For objects that do not exist in qaddb but exist in Composer, it deletes them from
Composer. For objects that exist in qaddb but not in Composer, it adds them to Composer. Thus, it partially cleans up
the Composer database to help bring it into agreement with AUX, although it cannot restore updates made to Action
Centers and visuals in Composer since the most recent backup was taken. Such updates must be re-created manually
with the help of the instructions in this section of the Implementation Guide.
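Conceptually, the reconciliation this command performs is a two-way set comparison between the objects registered in qaddb and the objects present in Composer. The following sketch is illustrative only; the object identifiers are invented, and this is not the actual implementation.

```python
# Illustrative sketch of the reconciliation performed by
# 'yab webapp-analytics-composer-api-sync' -- not the actual implementation.
# Object identifiers below are invented for the example.
qaddb_objects = {"kpi-sales", "kpi-inventory", "dashboard-finance"}
composer_objects = {"kpi-inventory", "dashboard-finance", "dashboard-orphan"}

# Objects that exist in Composer but not in qaddb are deleted from Composer.
delete_from_composer = composer_objects - qaddb_objects
# Objects that exist in qaddb but not in Composer are added to Composer.
add_to_composer = qaddb_objects - composer_objects

print(sorted(delete_from_composer))  # ['dashboard-orphan']
print(sorted(add_to_composer))       # ['kpi-sales']
```

Note that, as the surrounding text explains, this reconciliation cannot restore updates made inside Composer since the last backup; it only removes orphans and re-creates missing links.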
Often, visuals are created or modified in combination with changes to the KPI definition with which those visuals are
associated. The last modified date-time of each KPI is stored in the OpenEdge database. Following roll-forward actions to
fully restore the OpenEdge database, it may be helpful to identify those KPIs that were modified since the restored backup
using a database query. Users who created or use those KPIs in particular could then be requested to check the
associated visuals for currency, as an additional reminder.
The following ABL query can be run in the Progress Editor of the NetUI to identify those KPIs modified since the backup:
Alternatively, the following OpenEdge SQL query will also work in the Progress Editor:
The date-time literal in the above expressions does not have to be quoted, but must be in the correct OpenEdge date
format for the database (for example, '05/31/2020' for US databases in 'MDY' format).
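The queries themselves are not reproduced in this extract. As an illustration only, an OpenEdge SQL query of this form might look like the following, where the table and column names (pub.kpi, kpi_name, kpi_modified_date) are assumptions and must be replaced with the names from the actual schema:

```sql
-- Illustrative only: table and column names are assumptions, not the real schema.
SELECT kpi_name, kpi_modified_date
FROM pub.kpi
WHERE kpi_modified_date >= '05/31/2020'
```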
The user can then re-create the Action Center using the same procedure as when it was created originally.
Visuals that were deleted from Logi Platform Services since the last database backup prior to the service outage will have
to be deleted again, once OpenEdge database restore and roll-forward actions are complete. To do this, a user with
dashboard sharing permissions can display any Action Center that he/she is allowed to edit, click the Edit button, and
press the Visual Gallery button. The gallery will be displayed in a sidebar on the right side of the window.
Unwanted visuals can be selected with the help of the search bar at the top. Selected visuals are deleted from the gallery
by pressing the cog icon and selecting the Delete command. However, any visual to be deleted must first be removed
from Action Centers where it is being used, or the delete action will not be allowed.
Action Centers that were deleted from Logi Composer or Logi Platform Services since the last database backup prior to
the service outage will be absent from the Web UI menu, once OpenEdge database restore and roll-forward actions are
complete. Because they are no longer known and cannot be seen by the Web UI, they can be deleted only through Logi
Platform Services directly. While this can be accomplished using internal Logi APIs, it is outside the scope of the present
document to describe these details. It is recommended to contact QAD Service Delivery for assistance in this case
(AB-26845).
Because deleted Action Centers no longer exist in the Web UI at all, they cannot be accessed by any QAD user and,
therefore, have no impact on other Adaptive UX functions.
In addition, the console log file for the tomcat-webui web server, normally catalina.out, would contain an error
referencing the deleted/renamed field. In the following example, the 'Discount Amount' field that is used on the above
visual 'Sum of Discount Amount' has been deleted.
In this case, the visual must be modified or re-created without referencing the KPI field that is no longer valid. The Action
Center containing the visual must then be modified to include the replacement visual.
A similar problem occurs when the definition of the source browse associated with a KPI was modified since the last
restored backup to remove fields that were being used by the KPI. In this case, following disaster recovery the current
OpenEdge database and browse definitions might be running with an older Logi Composer or Logi Platform Services
database with contents based on the old browse. In this kind of case, the KPI referencing the non-existent browse field
would be broken. An Action Center panel containing visuals associated with that KPI would display with errors.
In addition, the console log file for the tomcat-webui web server would contain one or more errors referencing the KPI.
ERROR[http-bio-22011-exec-2] c.q.a.c.l.LogiPlatformWidgetSetupService:154
getLogiDashboardPanelSetupInfo(): error creating data visualization panel info for KPI code:
9be65599-692e-f782-6214-94e3c0caec26
java.lang.NullPointerException: null
at com.qad.analytics.core.lps.DataLakeFieldService.lambda$2(DataLakeFieldService.java:60)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
at java.util.Iterator.forEachRemaining(Iterator.java:116)
at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
at java.util.stream.StreamSpliterators$WrappingSpliterator.forEachRemaining
(StreamSpliterators.java:312)
at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at com.qad.analytics.core.lps.LogiPlatformWidgetSetupInfoService.fillDataLakeInfo
(LogiPlatformWidgetSetupInfoService.java:195)
at com.qad.analytics.core.lps.LogiPlatformWidgetSetupInfoService.
setUpLogiKpiBasedWidgetSetupInfo(LogiPlatformWidgetSetupInfoService.java:178)
at com.qad.analytics.core.lps.LogiPlatformWidgetSetupInfoService.
getLogiDashboardPanelSetupInfo(LogiPlatformWidgetSetupInfoService.java:148)
at com.qad.analytics.core.lps.LogiPlatformWidgetSetupService.getLogiDashboardPanelSetupInfo
(LogiPlatformWidgetSetupService.java:152)
...
at com.qad.analytics.core.lps.LogiPlatformWidgetSetupService.getLogiDashboardPanelSetupInfos
(LogiPlatformWidgetSetupService.java:142)
at com.qad.analytics.core.lps.mvc.controller.data.WidgetSetupController.
setupDashboardPanels_aroundBody14(WidgetSetupController.java:98)
at com.qad.analytics.core.lps.mvc.controller.data.WidgetSetupController$AjcClosure15.run
(WidgetSetupController.java:1)
at org.aspectj.runtime.reflect.JoinPointImpl.proceed(JoinPointImpl.java:149)
at com.qad.qracore.mvc.interceptor.BEExtensionInterceptorImpl.executionAround
(BEExtensionInterceptorImpl.java:143)
at com.qad.webshell.aspects.BEExtensionAspect.executionAround(BEExtensionAspect.java:28)
at com.qad.analytics.core.lps.mvc.controller.data.WidgetSetupController.
setupDashboardPanels_aroundBody16(WidgetSetupController.java:96)
at com.qad.analytics.core.lps.mvc.controller.data.WidgetSetupController$AjcClosure17.run
(WidgetSetupController.java:1)
...
1. Any visuals associated with the KPI that reference the deleted field must be either deleted or modified to remove
the obsolete field reference.
2. The KPI definition must be re-configured in the KPI screen for the updated browse by pressing the Configure
button to display the browse, then pressing OK. Once the KPI definition is correct, the KPI can be saved.
3. If necessary, existing visuals must be modified or new ones created for the KPI that reference the correct data.
4. Any Action Center containing visuals that were removed in an earlier step should be modified to include
replacement visuals.
browses Keyspace
In the Cassandra data lake, the browses keyspace is used to hold current KPI data as of the latest refresh, whether
scheduled or manual. The datasets may be accessed online from Action Centers displayed in the Web UI.
The detailed KPI and browse data stored in the browses keyspace are normally not backed up, as they are created from
the current contents of the OpenEdge databases. In case of disaster, the contents of this Cassandra keyspace can be
re-created using the standard scheduled refresh and cache warming processes, once those databases have been recovered
following the service outage. Hence, there is virtually no risk that the KPI datasets directly consumed by and displayed in
Action Center visuals would be permanently lost.
In case of disaster, the OpenEdge databases should first be restored using normal procedures. Once this is done and
Cassandra is running again, the browses keyspace will be re-populated automatically at tomcat-webui startup, assuming
cache warming is enabled, and Action Centers will display normally with current data from the OpenEdge databases.
If cache warming is not enabled, the KPI data must be re-created using one of the following options:
Scheduled refresh: For those KPIs that are configured to refresh daily, weekly, or monthly, their Cassandra data
will be populated automatically at the next scheduled refresh. In this case, however, Action Center users would
have to wait to see current data for these KPIs.
Manual refresh: Web UI users of Action Centers who are authorized to manually refresh KPIs that have been
configured with manual refresh enabled can do so from the Action Center screens. Under the More menu on the
Action Center toolbar, there is a Refresh Data command that will re-populate all Cassandra tables for the KPIs
used in that Action Center. Data refreshes for particular dashboard panels can also be triggered by pressing the
refresh icon in the lower right-hand corner of each panel.
historical_kpi Keyspace
The historical_kpi keyspace is used to hold KPI data from historical snapshots taken at scheduled date-times, which may
extend years into the past. Unlike the browses keyspace, its contents cannot be re-created from the current OpenEdge
databases. In this case, Cassandra is the system of record for the data and should be backed up at the same time the
OpenEdge databases are backed up each day. This can be accomplished by ensuring that the historical_kpi keyspace is
absent from the blacklist of Cassandra keyspaces excluded from the YAB environment-backup operation, listed in the
property cassandra.default.node.backup.blacklist. YAB backs up the designated keyspaces using the Cassandra
'nodetool snapshot' command.
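As an illustration, a configuration.properties entry of this kind might look like the following. The keyspace names shown are examples only (the browses keyspace is normally not backed up, as described above); the important point is that historical_kpi does not appear in the list:

```
cassandra.default.node.backup.blacklist=browses,system_traces
```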
If there is a service outage while data is being written into Cassandra, no in-flight transactions are lost. Cassandra first
writes all changes to a commit log before treating them as successful. If the system crashes before those writes have
been saved in the persistent database tables, the commit log is automatically replayed when the node is restarted to read
them into in-memory 'memtables', from which they are flushed to disk.
Once the Cassandra environment has been restored following a disaster, the historical_kpi keyspace will contain all data
as of the time of the service outage. However, historical snapshots that were in process and interrupted at the time of the
service outage may contain only a portion of the required data, and therefore may not be usable. In this case, the
snapshot failure will be detected at tomcat-webui startup, and a new historical snapshot will be taken automatically to
replace each one that failed. If new activity has been processed by Adaptive UX since the restoration of service but before
replacement snapshots are created, it is possible that the contents of the new snapshots will be different than the contents
that would have been included in the respective failed snapshots. However, in most disaster recovery scenarios this
window of time will be short, reducing the risk that the new KPI snapshot will be inaccurate by a significant amount.
Migration Overview
The purpose of this guide is to walk through the steps required to migrate Action Centers, visuals, and dataviews (KPIs)
from the Logi Platform Services tool used in previous Adaptive UX releases to Logi Composer. Starting with the
September 2022 release of AUX, Logi Composer is enabled by default and Logi Platform Services is disabled. It is
possible to avoid or postpone the use of Logi Composer and to retain Logi Platform Services in a September 2022
environment, but QAD strongly recommends that Composer be used instead of Logi Platform Services. Composer has
richer functional capabilities, renders Action Centers in a more engaging UI, and is generally more usable. It also has
better run-time performance.
Logi Platform Services is deprecated as of the September 2022 release, and will be removed entirely in future AUX
releases.
Starting with the September 2022 AUX release, all pre-defined QAD Action Centers and visuals are provided in Composer-
compatible form only. The previous versions of these Action Centers for Logi Platform Services will no longer be
maintained or included in any AUX release.
In any one AUX environment, either Logi Composer or Logi Platform Services may be used, but not both. During the
update process, when the objects are being migrated, both Composer and Logi Platform Services are running. However,
once the system administrator signals that the migration has been completed, Logi Platform Services is disabled and only
Composer is used. The migration is one way only; there is no reverse migration from Composer back to Logi Platform
Services. The migrated AUX environment is connected only to Composer, and no longer to Logi Platform Services. Thus,
for each AUX environment, the decision to move to Composer is an irreversible one.
Once the Action Centers and related objects have been converted to Composer, they must be reviewed for correctness.
To facilitate this review, QAD strongly recommends that one AUX environment be upgraded to September 2022 and
migrated to Composer first, while another environment, that has not yet been upgraded, remains available for use. Ideally,
the two environments should contain almost the same business data. For most customers who maintain a set of controlled
development, test, and production environments, the development environment would generally be upgraded to
September 2022 and migrated to Composer first, while the test environment remains available for validation purposes.
This approach makes before versus after comparisons relatively easy, as the configuration, along with the content of each
visual in the upgraded Composer environment, can be checked by users against the same visuals in the Logi Platform
Services environment.
To upgrade an environment to September 2022, without enabling Logi Composer, add the following YAB property to
configuration.properties before running the YAB update:
qad-analytics-core.composer.enabled=false
The updated environment will continue to use Logi Platform Services, without migrating any Action Centers or visuals.
When the decision is made later to migrate the environment from Logi Platform Services to Logi Composer, set the above
property to true and run another YAB update. The automated migration will run at the time Composer is enabled, unless
the qad-analytics-core.lps.composer-migration-complete property has been set to true (see below).
Using the automated migration process completes much of the conversion automatically, but still requires users
to diagnose and fix errors for those portions of the Action Centers and visuals that could not be converted
automatically. Re-implementing the Action Centers from scratch in Composer requires more manual work, but
this work is less technical in nature and can be done entirely within the Web UI.
If the number of Action Centers is small, it may be less work for Action Center owners to rebuild the Action
Center contents from scratch using the powerful Composer UI features, with no need to review log files and to
run a migration tool, in addition to performing some manual rework.
Logi Composer is much more powerful than Logi Platform Services, and the products of an automated migration
will not use all the new Composer features that end users may want in their new Action Centers. Often, the
usability and value of an Action Center can be significantly improved by rebuilding it from scratch using
Composer features and visual types that were not available in Logi Platform Services.
For environments containing many non-QAD Action Centers and visuals to be migrated, using the automated migration
approach is recommended, as it usually requires significantly less manual work.
To upgrade an environment to AUX September 2022 with automated migration disabled, complete the following steps:
1. Add the following YAB property to configuration.properties before running the YAB update:
qad-analytics-core.lps.composer-migration-complete=true
2. Run the YAB update:
yab update
3. After the YAB update has completed successfully, run the following YAB command. It moves all Action Centers from
their origin apps to Configuration Data, and modifies various identifiers in Web UI and Composer to build valid links
between them.
yab webapp-analytics-composer-api-sync
Following these steps, the Composer-based Action Centers provided by QAD will be present, but all other Action Centers,
developed in previous AUX releases using Logi Platform Services, will be gone. They will need to be re-implemented from
scratch using the Composer functionality embedded in the Web UI of the September 2022 release.
For AUX environments being updated to the September 2022 release with Composer enabled, but for which no
conversion of objects is necessary, complete the same steps described in the above section, Disabling Automated
Migration.
After the environments have been upgraded to the September 2022 release, the converted Action Centers, previously
exported from another upgraded environment, can be imported using the Configuration Data screen.
If the qad-analytics-core.composer.enabled property is set to true, the migration process runs automatically one time to convert all non-QAD Action Centers
from Logi Platform Services to Logi Composer, and the Web UI will access only Composer to display Action Centers and
visuals. If the property is not set to true, nothing will be migrated and the Web UI will continue using Logi Platform
Services. Also, the latest QAD-provided Action Centers will not be available, as they are supported using Composer only.
One or more KPIs can be activated in a single action using the Assign Domains & Entities bulk action on the KPIs screen.
From the pop-up window, you can select and activate any KPIs in the system.
The KPI activation takes place in the background, and if you select many KPIs, it can take time. Display the Web UI
Background Processing screen to check the status of the background job, and make sure that it has completed successfully
before starting the next step of this procedure.
$ yab action-center-dashboard-repair
Make sure that the Action Center repair has completed successfully before starting the next step of this procedure.
qad-analytics-core.composer.enabled=true
yab logi-composer-default-lps-migrate-mode-append
Reruns may be required during the migration process in order to fix migration errors. These scenarios are covered in more
detail in later sections of this document.
To check this, compare the Action Centers listed in the Web UI menu against the expected list from before the upgrade.
Note any missing entries.
Next, check the KPIs listed in the KPI screen against the expected list from before the upgrade. Note any missing entries.
The omissions will be investigated in the Review and Repair Migration Gaps step of this procedure.
Also, you will see that all panels are aligned under a single column, rather than in multiple rows and columns across the
screen. The layout will be corrected in the Clean Up Working Action Centers and Visuals step of this procedure.
...
2022-09-12 13:47:20,812 DEBUG [Thread-3:] STDOUT - [2022-09-12 20:47:20.810][info] - Result file
'appendResult1663015640792.json' was created successfully.;
2022-09-12 13:47:20,869 DEBUG [Thread-3:] STDOUT - [2022-09-12 20:47:20.869][info] - Appended
finished!;
2022-09-12 13:47:20,869 DEBUG [Thread-3:] STDOUT - [2022-09-12 20:47:20.869][info] - Process was
done!;
2022-09-12 13:47:20,940 DEBUG [main:92bb] APPLY - logi-composer-default-lps-migrate-mode-append
UPDATED
...
The location of this file is in the migration tool directory, which is stored in the
logi-composer.default.migration-tool.dir YAB property.
After you open this file in a text editor, you see a record of all Logi Platform Services objects migrated to Composer
successfully, with errors and omissions listed at the end, as in the following example:
{
"objects": [
{
"sourceId": "12db6b5d-cab1-4b59-ba82-3501c4b5ac15",
"targetId": "12db6b5d-cab1-4b59-ba82-3501c4b5ac15",
"sourceName": "Planning Action Messages by Production Line and Site - quality-app app -
planning-app app",
"targetName": "Planning Action Messages by Production Line and Site - quality-app app -
planning-app app",
"sourceObjectType": "enrichment",
"targetObjectType": "source",
"sourceConnectionId": "eaa62846-4357-420b-858e-a52c38e77de8"
},{
"sourceId": "vc3c3a44d-dddb-4d56-80f0-5103706b784e",
"targetId": "c3c3a44d-dddb-4d56-80f0-5103706b784e",
"sourceName": "Action Message Summary for Production Lines and Sites - quality-app app -
planning-app app",
"targetName": "Action Message Summary for Production Lines and Sites - quality-app app -
planning-app app",
"sourceObjectType": "table",
"targetObjectType": "Raw Data Table",
"sourceConnectionId": "eaa62846-4357-420b-858e-a52c38e77de8",
"sourceEnrichmentId": "12db6b5d-cab1-4b59-ba82-3501c4b5ac15"
},{
...
}],
"errors": {
"objectsMigration": [
{
"objectId": "99841643-db5c-d4a4-5814-d68ae0dee15a",
"objectType": "source based on enrichment and reference",
"objectName": "",
"message": "Can't migrate source object based on the enrichment with id - '99841643-db5c-d4a4-
5814-d68ae0dee15a'. Can't execute post request for source object based on the enrichment with id
- '99841643-db5c-d4a4-5814-d68ae0dee15a', name - 'Cash Flow Analysis'"
},{
"objectId": "dashboard-0f344b2ca09dc98933e401b7370008f6",
"objectType": "dashboard",
"objectName": "Financial Analysis",
"message": "Can't migrate dashboard with id - dashboard-0f344b2ca09dc98933e401b7370008f6 name
- Financial Analysis. Request failed with status code 400"
},{
...
}],
"fatal": ""
}
}
While the appendResults file lists the dashboards (Action Centers), visuals, and dataviews (KPIs) that were not migrated
for some reason, error details will often be shown only in the yab.log file in the section for the
logi-composer-default-lps-migrate-mode-append YAB command. The remainder of this section assumes that you can open and
review both files, as needed.
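When an appendResult file is large, the error entries can be pulled out with a short script. The following is a convenience sketch only; the JSON shape mirrors the appendResult example above, and the sample content is abbreviated.

```python
import json

def summarize_migration_errors(appendresult_text: str) -> list[str]:
    """Return one summary line per object that failed to migrate."""
    doc = json.loads(appendresult_text)
    errors = doc.get("errors", {}).get("objectsMigration", [])
    # Some entries have an empty objectName; fall back to the objectId.
    return [f'{e["objectType"]}: {e["objectName"] or e["objectId"]}' for e in errors]

# Abbreviated sample in the same shape as the appendResult example above.
sample = """
{
  "objects": [],
  "errors": {
    "objectsMigration": [
      {"objectId": "dashboard-0f344b2ca09dc98933e401b7370008f6",
       "objectType": "dashboard",
       "objectName": "Financial Analysis",
       "message": "Request failed with status code 400"}
    ],
    "fatal": ""
  }
}
"""
print(summarize_migration_errors(sample))  # ['dashboard: Financial Analysis']
```

For the full error text of each entry, consult the corresponding section of yab.log as described above.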
{
"objectId": "dashboard-0f344b2ca09dc98933e401b7370008f6",
"objectType": "dashboard",
"objectName": "Financial Analysis",
"message": "Can't migrate dashboard with id - dashboard-0f344b2ca09dc98933e401b7370008f6 name
- Financial Analysis. Request failed with status code 400"
}
Cause
Often, the dashboard could not be migrated because of an error in one or more of the dataviews (KPIs) providing data to
visuals in that dashboard. In this case, the appendResults file will also reference the dataviews that were not migrated,
but will not identify the visuals where the error was raised.
Solution
1. Find the error in the appendResults file referencing the dataview used by the skipped dashboard. Usually, the name
of the dataview (KPI) will allow you to determine whether that dataview is used on the missing Action Center. The
following is a sample error of this kind for the Cash Flow Analysis KPI.
{
"objectId": "99841643-db5c-d4a4-5814-d68ae0dee15a",
"objectType": "source based on enrichment and reference",
"objectName": "",
"message": "Can't migrate source object based on the enrichment with id - '99841643-db5c-d4a4-
5814-d68ae0dee15a'. Can't execute post request for source object based on the enrichment with id
- '99841643-db5c-d4a4-5814-d68ae0dee15a', name - 'Cash Flow Analysis'"
}
2. Diagnose and fix the dataview problem, as described below in the Fix Enrichment Dataviews Not Migrated section.
3. After a fix has been applied to the dataview (KPI), re-run the migration tool for the unmigrated dataview object only, as
described in the Rerun the Migration for Selected Dataviews section of this document. When the dataview migration
succeeds, the visuals and Action Centers that use it will be migrated automatically. Alternatively, the migration tool may be
rerun unconditionally with the following YAB command, although throughput time will be greater and log output more
verbose:
yab logi-composer-default-lps-migrate-mode-append
{
"objectId": "d728f0de-ccca-5faa-9214-8782108784b6",
"objectType": "source based on enrichment and reference",
"objectName": "",
"message": "Can't migrate source object based on the enrichment with id - 'd728f0de-ccca-5faa-
9214-8782108784b6'. Request failed with status code 400"
}
Cause
There are two possible causes of an unmigrated dataview, both of which are preventable:
The KPI associated with the dataview has inactive status in Web UI.
The fields in the KPI definition do not agree with the dataview definition in Logi Platform Services.
If the steps described in the Prepare for Migration section of this document were completed, these errors should not occur.
All required KPIs would have active status, and the Action Center repair step would ensure that KPI and dataview
definitions are in sync.
Solution
1. If the name of the KPI associated with the dataview is not included in the error message, find the KPI in the Web UI
through its ID value.
b. Open the Browse Configuration control and check the KPI ID field in the list, then click Apply.
101
Adaptive UX Implementation Guide
c. Search for the KPI whose KPI ID value is equal to the objectId field from the error message in the appendResult file ("d728f0de-ccca-5faa-9214-8782108784b6" in the above example), and select it.
2. In the KPIs screen of the Web UI, check whether the KPI associated with the unmigrated dataview is active. If it should be migrated to Composer but does not have active status, do the following.
a. Make the KPI active in the KPIs screen of the Web UI by checking the Active checkbox, and saving the KPI.
b. Rerun the migration process for the missing dataview only, as described in the Rerun the Migration Tool for
Selected Dataviews section of this document.
3. If the KPI is already active, review the yab.log file for a more detailed error related to the dataview by searching for its
objectId value. In particular, look for a long error message related to an invalid data entity or query, as in the following
example:
...
[2022-08-24 21:06:03.495][info] - Start preparing source object that related to enrichment
'd728f0de-ccca-5faa-9214-8782108784b6' for migration;
[2022-08-24 21:06:03.495][debug] - Connecting to https://vmlfwy0005.qad.com:22192. The URL - /api
/platform/system.dataviews.enrichment/d728f0de-ccca-5faa-9214-8782108784b6;
[2022-08-24 21:06:03.541][debug] - Successfully connected to https://vmlfwy0005.qad.com:22192.
The URL - /api/platform/system.dataviews.enrichment/d728f0de-ccca-5faa-9214-8782108784b6;
[2022-08-24 21:06:03.541][info] - Start enrichment migration;
[2022-08-24 21:06:03.541][info] - Checking dependent objects;
[2022-08-24 21:06:03.541][debug] - Connecting to https://vmlfwy0005.qad.com:22192. The URL - /api
/platform/system.dataviews.reference/Reference-d728f0de-ccca-5faa-9214-8782108784b6;
[2022-08-24 21:06:03.588][debug] - Successfully connected to https://vmlfwy0005.qad.com:22192.
The URL - /api/platform/system.dataviews.reference/Reference-d728f0de-ccca-5faa-9214-8782108784b6;
[2022-08-24 21:06:03.591][debug] - Connecting to https://vmlfwy0005.qad.com:22131. The URL -
/composer/api/sources/data-entities/describe;
[2022-08-24 21:06:06.922][error] - Error response from - https://vmlfwy0005.qad.com:22131. The
URL - /composer/api/sources/data-entities/describe;
[2022-08-24 21:06:06.922][error] - "Data entity is not valid. org.apache.hive.service.cli.
HiveSQLException: Error running query: org.apache.spark.sql.AnalysisException: Table or view not
found: ...
...
This kind of error indicates that the KPI definition in AUX and the dataview definition in Logi Platform Services are out of
sync for some reason, possibly because of KPI fields missing from the dataview. In this case, Composer must be
temporarily disabled in the environment so that an Action Center repair can be run. Then, Composer must be re-enabled
and the migration retried for the repaired dataview. Complete the following steps in this case:
yab tomcat-webui-stop
qad-analytics-core.composer.enabled=false
yab reconfigure
yab tomcat-webui-start
yab action-center-dashboard-repair
yab tomcat-webui-stop
qad-analytics-core.composer.enabled=true
yab reconfigure
yab tomcat-webui-start
Then rerun the migration process for the missing dataview only, as described in Special Migration Procedures. Alternatively, you can rerun the migration tool unconditionally with the following YAB command, although throughput time will be greater and the log output more verbose.
yab logi-composer-default-lps-migrate-mode-append
In Composer, date select widgets like this do not exist. Instead, Composer provides a built-in timebar control that can be
used to filter the data on any visual or the entire Action Center, based on any date field within a KPI. At the Action Center
level, this timebar can be controlled near the bottom of the display.
Therefore, date widgets, like the one above, are intentionally skipped by the migration process.
To apply default date filtering to an Action Center that formerly used a date select widget, open the Action Center in Web
UI and set the from-to dates on the timebar to the desired dates, and save the changes.
Search this portion of the file for all occurrences of the string [error] so that you can skip all log entries except those requiring attention. The rest of this section describes the most important and common errors and how to address them. Note that some of the errors are false positives that can be ignored or require only a rerun of the migration process with no changes.
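This filtering can also be done with a small script instead of a text editor. The log line format below is inferred from the excerpts shown in this document; treat the pattern as an assumption to adjust for your environment:

```python
import re

# Pattern inferred from log excerpts such as:
# [2022-08-24 21:06:06.922][error] - Error response from - ...
ERROR_LINE = re.compile(r"\[(?P<timestamp>[^\]]+)\]\[error\] - (?P<message>.*)")

def error_entries(log_lines):
    """Yield (timestamp, message) for every [error] line, skipping the rest."""
    for line in log_lines:
        match = ERROR_LINE.match(line)
        if match:
            yield match.group("timestamp"), match.group("message")
```

Feeding the yab.log lines through this generator leaves only the entries that may require attention.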
The following are examples of calculation errors from the log. Each error shows the source calculation from Logi Platform
Services and the invalid converted value ("prepared calculation"):
Cause
The calculation expression for a calculated field from a visual in Logi Platform Services cannot be converted automatically
to the syntax required for a "derived field" in Composer. This may be caused by a built-in Logi Platform Services function
that does not exist in Composer. It may also be caused by a conditional formula ('IIF' function) from Logi Platform Services
that was not correctly converted to a CASE statement, as required for Composer.
Solution
Find the definition of the derived field in Composer that corresponds to the calculated field from Logi Platform Services, and write a new calculation expression for it. In Composer, derived fields are stored in the "source" (KPI) object, not in the visual, as was usually the case with Logi Platform Services. Therefore, the first step is to find the KPI where the invalid calculation is defined.
1. Find the KPI in the Web UI through its KPI ID value, which is the same as the source ID in Composer.
b. Open the Browse Configuration control and check the KPI ID field in the list, then click Apply.
2. Search for the KPI whose KPI ID value is equal to the UUID portion of the URL referenced in the error
message from the log ("ee36242d-68e4-7f9c-5614-94fa78a4a48b" in the third example above), and select it.
a. Go to the Visuals panel of the screen, and open any one of the visuals for the selected KPI.
b. Select any of the fields used for the axes of the chart to display a list of all the fields. Find the field with the
invalid calculation, whose name ends in the string "- Stub calculation, need to be reviewed and fixed manually."
c. Click the three-dot control ("...") next to the field, and select Edit in the displayed pop-up dialog. The calculation
string will be blank.
3. Re-implement the calculation using valid Composer syntax. It may be helpful to review the original calculation
in a non-Composer AUX environment for comparison purposes.
Cause
Functions used in the filter are not supported in Composer.
Solution
Find the definition of the filter in Composer, corresponding to the one from Logi Platform Services. Determine if the filter
definition is needed and, if so, re-implement it. The log file does not identify the visual(s) where the filter is used, only the
source (KPI). Therefore, the first step is to find the KPI and its visual(s) where the filter is needed.
1. Find the KPI in the Web UI through its KPI ID value, which is the same as the source ID in Composer.
b. Open the Browse Configuration control and check the KPI ID field in the list, then click Apply.
2. Search for the KPI whose KPI ID value is equal to the UUID portion of the URL referenced in the error message from
the log ("a588c665-88e9-4a98-5414-4d96f8302610" in the above example), and select it.
3. Go to the Visuals panel of the screen, and open each of the visuals for the selected KPI. For comparison purposes, it
may be helpful to open the same visuals in a non-Composer AUX environment.
4. Determine whether the unmigrated filter is needed in Composer. If the filter is being applied to date fields, as in the
above example, it does not have to be created in Composer because Composer supports date and date range filtering
using its built-in timebar feature. The timebar within the visual can be enabled and configured at the bottom of the Visual
Builder window.
5. If the filter is needed in Composer, define it in the filter area within the right-hand sidebar in the Visual Builder window
and click Apply.
Cause
An inability to connect to Logi Platform Services, often intermittent, can occur for any of the following reasons:
A heavy server load, causing slow response times from Logi Platform Services.
Logi Platform Services has not been fully started yet.
Network problems on the machine such as a blocked port.
Solution
1. If the log shows that the retries succeeded, the problem is solved and nothing else is needed.
2. Otherwise, check that Logi Platform Services is running, and that only a single instance of the Logi Platform Services processes is running:
yab logi-platform-services-default-status
3. If this is not the case, kill any duplicate processes from the command line and restart Logi Platform Services. After
starting Logi Platform Services, wait several minutes to ensure that all processes are running.
yab logi-platform-services-default-start
4. Rerun the migration process using the following YAB command. As this problem can be intermittent, simply rerunning
the process may be sufficient.
yab logi-composer-default-lps-migrate-mode-append
Cause
A field with the same name already exists in the same KPI.
Solution
No action is needed because an additional copy of the field will cause no problems and will not be visible on the KPI
screen.
Cause
A visual with the same name already exists in the same KPI. The migration process creates a separate visual for each
copy of the original visual that was present in Logi Platform Services.
Solution
No action is needed because an additional copy of the visual will cause no problems. However, consider removing unnecessary copies as part of a final cleanup, as described in the Clean Up Action Centers and Visuals section of this document.
Authentication Error
Authentication errors with subsequent retries, similar to the following example, are sometimes listed in the log file:
Cause
The absence of a valid Logi Platform Services session for unknown reasons, often intermittent.
Solution
No action is needed because the automatic retries almost always resolve the problem before the migration process is
disrupted.
These errors do not appear in log files, but are visible when displaying visuals whose KPI contains no business data to return.
The errors appear only in cases where dynamic date range filters are used in the visual, as opposed to static ranges. "Dynamic" means that the start and/or end of the date range is determined by the dataset being displayed, rather than by a calendar-based boundary such as "1 Jan 2022," "Previous Year-End," "This Quarter," or "2 Weeks Ago." The errors are harmless and disappear automatically when business data is included in the visuals, but you can remove them explicitly by changing the date range filter.
When expanded, the filter for one of these visuals shows the start and/or end date as "START OF DATA" or "END OF
DATA."
To remove the errors, modify the filter to change "START OF DATA" and "END OF DATA" to some static value, as shown
in the following example:
When you modify the filter, the expected "No Data Available" message displays on the changed visual:
However, if the visual requires a dynamic, rather than static, date range, you cannot make this kind of change and the error message remains.
To change the layout and sizing of the panels, open the Action Center in the Web UI, drag and resize individual panels as desired, and save the results.
These duplicate visuals work and you do not have to remove them. However, if common visuals are used across many
Action Centers, having multiple copies of the same visual means that all copies must be updated individually whenever a
common change is needed. If this is the case, consider modifying the Action Centers that use the common visuals so that
they use only one of the copies. When the unnecessary copies have been removed or replaced in all Action Centers,
you can delete those copies by locating the KPI of the visual on the KPIs screen, selecting the specific visual in the
Visuals panel, and clicking the Delete button.
yab logi-composer-migration-complete
This command disables all migration processes, stops Logi Platform Services, and disables all YAB commands related to
Logi Platform Services. It can be run only once in an AUX environment.
While not common, it is possible that some Action Centers in AUX created using Logi Platform Services may not have
been migrated into the new AUX environment for some reason. It may be that problems in particular visuals or KPIs used
by an Action Center prevented the Action Center itself from being migrated into Composer. In such cases, the decision
may have been made simply to abandon the old Action Center and create a new one using the more advanced Composer
functionality.
In this scenario, orphaned Action Center records may exist in the upgraded AUX environment that do not exist in Logi Composer. As these Action Centers are incomplete, they cannot be used and are not displayed in the Web UI menu. However, all Action Centers in AUX, both migrated and unmigrated, are displayed in the Action Center screen.
After the migration has been marked as complete as described in the previous section, a Delete Unmigrated Action Centers command is displayed under Actions on the Action Center screen. Run this command to delete any unmigrated Action Centers from the Adaptive UX environment.
Run this command after completing the migration to remove the unmigrated Action Center records from the database.
To remove Logi Platform Services from the AUX environment, run the following command:
yab logi-platform-services-default-remove
1. Obtain the IDs of the dataviews in Logi Platform Services to be included in the migration run. The ID values are UUIDs (example: "ee36242d-68e4-7f9c-5614-94fa78a4a48b"), and are generally labeled objectId in the migration appendResult file, as in the following example:
{
"objectId": "99841643-db5c-d4a4-5814-d68ae0dee15a",
"objectType": "source based on enrichment and reference",
"objectName": "",
"message": "Can't migrate source object based on the enrichment with id - '99841643-db5c-d4a4-
5814-d68ae0dee15a'. Request failed with status code 400"
}
2. Make a local copy of the standard migration template file used to configure the scope of the migration process. The
pathname of this file is stored in the logi-composer.default.migration-tool.configuration-template YAB
property.
The contents of the standard migration file are similar to the following:
{
"connection-timeout": "$yab.eval('${instancekey}.connection-timeout')",
"mode": "${mode}",
"property-replacing": [{
"namespace": "system.connections",
"id": "$yab.eval('sourceId')",
"properties": [{
"path": "payload.password",
"value": "$yab.eval('${instancekey}.connection-pwd')"
}]
}],
"data-points-limit": $yab.eval('${instancekey}.data-point-limit'),
#if (${mode} == "append")
"appendFile": "appendFile.json",
#end
"source-tags": {
"namespace": "com.qad.tags",
"linkTypes": [
"comQadApp",
"comQadKpi"
],
"ids": [
"!urn:app:com.qad.*",
"!urn:kpi:com.qad.*"
]
},
"source-url": "$yab.eval('${${instancekey}.lps}.url')",
"source-user": "$yab.eval('${${instancekey}.lps}.username')",
"source-pwd": "$yab.eval('${${instancekey}.lps}.password')",
"target-url": "$yab.eval('${composerinstancekey}.url')",
"target-user": "$yab.eval('${composerinstancekey}.users.admin.username')",
"target-pwd": "$yab.eval('${composerinstancekey}.users.admin.password')"
}
3. In the local copy, remove the source-tags property and replace it with a source-objects property that references the dataviews to be migrated in the ids property. Do not make any other changes to the file. The following example shows a reconfigured template file with the correct syntax for the source-objects property. In this example, the objectId of the dataview to be migrated is 99841643-db5c-d4a4-5814-d68ae0dee15a. You can also specify multiple dataviews by entering a comma-delimited list of objectId values inside the ids array of this file, instead of a single entry.
{
"connection-timeout": "$yab.eval('${instancekey}.connection-timeout')",
"mode": "${mode}",
"property-replacing": [{
"namespace": "system.connections",
"id": "$yab.eval('sourceId')",
"properties": [{
"path": "payload.password",
"value": "$yab.eval('${instancekey}.connection-pwd')"
}]
}],
"data-points-limit": $yab.eval('${instancekey}.data-point-limit'),
#if (${mode} == "append")
"appendFile": "appendFile.json",
#end
"source-objects": [
{
"namespace": "system.dataviews.enrichment",
"ids": [
"99841643-db5c-d4a4-5814-d68ae0dee15a"
]
}
],
"source-url": "$yab.eval('${${instancekey}.lps}.url')",
"source-user": "$yab.eval('${${instancekey}.lps}.username')",
"source-pwd": "$yab.eval('${${instancekey}.lps}.password')",
"target-url": "$yab.eval('${composerinstancekey}.url')",
"target-user": "$yab.eval('${composerinstancekey}.users.admin.username')",
"target-pwd": "$yab.eval('${composerinstancekey}.users.admin.password')"
}
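When several dataviews must be rerun, the source-objects value can be generated from a list of IDs rather than edited by hand. The following sketch is not part of the migration tool; it simply emits the property with the JSON structure shown above:

```python
import json

def source_objects_property(dataview_ids):
    """Build the source-objects value for one or more dataview objectIds."""
    return [{
        "namespace": "system.dataviews.enrichment",
        "ids": list(dataview_ids),
    }]

# Emits the same structure as the template example above.
print(json.dumps(
    {"source-objects": source_objects_property(
        ["99841643-db5c-d4a4-5814-d68ae0dee15a"])},
    indent=4))
```

The printed fragment can then be pasted over the removed source-tags property in the local template copy.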
4. Rerun the migration process using the new template file through the following YAB command. The new template file will
be used only in the context of this command, without permanently changing the value of the logi-composer.default.
migration-tool.configuration-template YAB property.
Example:
yab -logi-composer.default.migration-tool.configuration-template:/home/mfg/migration-template-
enrichment-dataview.json logi-composer-default-lps-migrate-mode-append
5. Review the results of the migration in the appendResult file and YAB log, as described in the earlier sections of this
document.
For example, an existing AUX customer upgrading to the September 2022 release may have installed a mix of QAD-
provided, third-party-provided, and internally developed apps. By default, the third-party and internally developed Action
Centers will be migrated automatically to Composer during the upgrade. However, if there is a reason to migrate the third-party Action Centers but not the internally developed ones, or vice versa, it is possible to configure the migration process to filter by app.
1. Identify the URIs of the apps to be included in the migration. The URIs can be found by displaying the Apps screen in
Web UI.
2. Make a local copy of the standard migration template file used to configure the scope of the migration process. The
pathname of this file is stored in the logi-composer.default.migration-tool.configuration-template YAB
property.
{
"connection-timeout": "$yab.eval('${instancekey}.connection-timeout')",
"mode": "${mode}",
"property-replacing": [{
"namespace": "system.connections",
"id": "$yab.eval('sourceId')",
"properties": [{
"path": "payload.password",
"value": "$yab.eval('${instancekey}.connection-pwd')"
}]
}],
"data-points-limit": $yab.eval('${instancekey}.data-point-limit'),
#if (${mode} == "append")
"appendFile": "appendFile.json",
#end
"source-tags": {
"namespace": "com.qad.tags",
"linkTypes": [
"comQadApp",
"comQadKpi"
],
"ids": [
"!urn:app:com.qad.*",
"!urn:kpi:com.qad.*"
]
},
"source-url": "$yab.eval('${${instancekey}.lps}.url')",
"source-user": "$yab.eval('${${instancekey}.lps}.username')",
"source-pwd": "$yab.eval('${${instancekey}.lps}.password')",
"target-url": "$yab.eval('${composerinstancekey}.url')",
"target-user": "$yab.eval('${composerinstancekey}.users.admin.username')",
"target-pwd": "$yab.eval('${composerinstancekey}.users.admin.password')"
}
3. In the local copy, replace the contents of the ids property inside the source-tags property with the list of app URIs to be migrated. Do not make any other changes to the file. The following example shows a reconfigured template file with the correct syntax for the modified source-tags property. In this example, the URI of the app to be migrated is urn:app:com.qad.qadextensions. Multiple apps can also be specified by entering a comma-delimited list of URIs inside the ids array of this file, instead of only a single entry. In addition, an asterisk ('*') character can be placed in any URI value as a wildcard to include multiple apps with a single entry (example: 'urn:app:com.thirdparty.*').
{
"connection-timeout": "$yab.eval('${instancekey}.connection-timeout')",
"mode": "${mode}",
"property-replacing": [{
"namespace": "system.connections",
"id": "$yab.eval('sourceId')",
"properties": [{
"path": "payload.password",
"value": "$yab.eval('${instancekey}.connection-pwd')"
}]
}],
"data-points-limit": $yab.eval('${instancekey}.data-point-limit'),
#if (${mode} == "append")
"appendFile": "appendFile.json",
#end
"source-tags": {
"namespace": "com.qad.tags",
"linkTypes": [
"comQadApp",
"comQadKpi"
],
"ids": [
"urn:app:com.qad.qadextensions"
]
},
"source-url": "$yab.eval('${${instancekey}.lps}.url')",
"source-user": "$yab.eval('${${instancekey}.lps}.username')",
"source-pwd": "$yab.eval('${${instancekey}.lps}.password')",
"target-url": "$yab.eval('${composerinstancekey}.url')",
"target-user": "$yab.eval('${composerinstancekey}.users.admin.username')",
"target-pwd": "$yab.eval('${composerinstancekey}.users.admin.password')"
}
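The wildcard selection described in step 3 can be illustrated with shell-style globbing. Note that this is an assumption about how the migration tool interprets '*'; the sketch only demonstrates the intended selection behavior:

```python
from fnmatch import fnmatch

def matching_apps(app_uris, patterns):
    """Return the app URIs selected by any of the wildcard patterns.

    Assumption: '*' behaves as a glob-style wildcard, as the
    urn:app:com.thirdparty.* example above suggests.
    """
    return [uri for uri in app_uris
            if any(fnmatch(uri, pattern) for pattern in patterns)]
```

For example, the pattern urn:app:com.thirdparty.* would select every third-party app URI while excluding QAD-provided and internally developed ones.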
4. Rerun the migration process using the new template file through the following YAB command. The new template file will
be used only in the context of this command, without permanently changing the value of the logi-composer.default.
migration-tool.configuration-template YAB property.
Example:
yab -logi-composer.default.migration-tool.configuration-template:/home/mfg/migration-template-app.
json logi-composer-default-lps-migrate-mode-append
5. Review the results of the migration in the appendResult file and YAB log, as described in the earlier sections of this
document.
Please contact QAD Services to implement this capability in your system.
Creates a qad_wkfl record that blocks pricing from executing while the conversion is processing. The record is defined as follows:
qad_domain = global_domain
qad_key1 = "ANALYSIS"
qad_key2 = SessionUniqueID
qad_key3 = "AP_CONV"
Upgrade
Conversion Routine
When TAM is upgraded to 3.0 for existing customers, a conversion routine is run. The routine determines whether a previous analysis code conversion routine was executed in this environment. If the analysis code conversion was executed, it does the following:
mfc_domain=global_domain
mfc_ctrl.mfc_module="SO"
mfc_ctrl.mfc_field="pic_adaptive_pricing"
mfc_ctrl.mfc_seq=460
mfc_domain=global_domain
mfc_ctrl.mfc_module="SO"
mfc_ctrl.mfc_field="pic_use_pricing_cache"
mfc_ctrl.mfc_seq=450
pic__qadc01="YY"
pic_cust_regen=true
pic_item_regen=true
Post-Installation Steps
Existing TAM customers who are upgrading to TAM 3.0 must complete the following steps after the installation is complete.
The next three steps are done from the QAD .NET UI.
2. Run Analysis Code Detail Build for each and every domain. This needs to be done for both Item and Customer.
3. Run convert_indaddress_tam3.p to convert IndirectAddress data for the new IndAddressCode field.
4. Run convert_claim.p to convert claim data for the new Claim screen.
5. Run convert_earneddiscount_3031.p to convert contract earned discount data if you are currently running TAM Contracts on TAM 3.0 or earlier. The conversion program was introduced in TAM 3.1. If you have questions about running this program, contact QAD Support.
A bidirectional integration solution is provided for both Microsoft 365 Calendar and Google Calendar. For other calendar apps, a one-way integration from CRM to the calendar is available.
a. Open the navigation icon in the Azure Portal and select All services.
c. Fill in the required details as shown in the graphic and then select Register.
d. After registration, you must add the client ID and tenant ID to the configuration.
e. Go to the Application, select Certificates & secrets, then select +New client secret. Add a description and then select
Add.
Note: Save or make a note of the secret as you will need to add this to the configuration.
f. Finally, navigate to the API permissions page. Select Add a permission, then select Microsoft Graph, then Application permissions, then Calendars.ReadWrite, then Users.Read.All. Select Add permissions. Grant admin consent for your tenant.
Note: API permissions must be set within the Azure portal for integration to work properly. Although you may still obtain a
token, Calendar APIs within the QAD Adaptive UX may not be accessible without completing this step.
Your new public/private key pair is generated and downloaded to your machine. It serves as the only copy of the private
key. You are responsible for storing it securely. If you lose this key pair, you will need to generate a new one.
If you need to grant G Suite domain-wide authority to the service account, click the email address of the service account
that you created, then copy the value from the Unique ID box.
To delegate authority to the service account, use the value you copied as the client ID.
Reference - https://developers.google.com/identity/protocols/oauth2/service-account
Then, a super administrator of the G Suite domain must complete the following steps:
1. From your G Suite domain’s Admin console, go to Main menu > Security > API Controls.
2. In the Domain wide delegation pane, select Manage Domain Wide Delegation.
3. Click Add new.
4. In the Client ID field, enter the service account's client ID. You can find your service account's client ID on the Service accounts page.
5. In the OAuth scopes (comma-delimited) field, enter the list of scopes that your application should be granted
access to. For Google Calendar Integration in Adaptive UX, it needs domain-wide full access to the Google
Calendar API. Enter: https://www.googleapis.com/auth/calendar.
6. Click Authorize.
Your application now has the authority to make API calls as users in your domain (to "impersonate" users). When you
prepare to make authorized API calls, you specify the user to impersonate.
It usually takes a few minutes for impersonation access to be granted after the client ID is added, but in some cases, it
might take up to 24 hours to propagate to all users of your Google Account.
Reference - https://developers.google.com/identity/protocols/oauth2/service-account#delegatingauthority
The downloaded json file of the service account's credentials should be in a format similar to the following:
json file
{
"type": "service_account",
"project_id": "qad-api",
"private_key_id": "abcde",
"private_key": "-----BEGIN PRIVATE KEY-----\nAAAAA\nBBBBB\nCCCCC\n-----END PRIVATE KEY-----\n",
"client_email": "[email protected]",
"client_id": "1234567",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/crm-google-calendar-
testing%40qad-api.iam.gserviceaccount.com"
}
To make the Google Calendar Integration work in an Adaptive UX environment, the credentials need to be configured in
the configuration.properties file:
configuration.properties
qad-erp-collaborationadapters.calendar.google.clientid=1234567
qad-erp-collaborationadapters.calendar.google.clientemail=crm-google-calendar-testing@qad-api.iam.
gserviceaccount.com
qad-erp-collaborationadapters.calendar.google.privatekeyid=abcde
qad-erp-collaborationadapters.calendar.google.privatekey=AAAAABBBBBCCCCC
Remove the carriage returns in "AAAAA\nBBBBB\nCCCCC" when setting the private key in the YAB property. You do not
need to copy the private key's "-----BEGIN PRIVATE KEY-----\n" and "\n-----END PRIVATE KEY-----\n" to the YAB property.
As illustrated in the previous example, the final private key in the YAB property is "AAAAABBBBBCCCCC".
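The flattening described above can be automated. This is an illustrative helper, not a QAD utility; it reproduces the transformation from the example, turning the multi-line PEM value into the single-line YAB property value:

```python
def yab_private_key(pem_value):
    """Flatten a PEM private_key string into the single-line YAB value:
    drop the BEGIN/END markers and every newline, as described above."""
    for marker in ("-----BEGIN PRIVATE KEY-----", "-----END PRIVATE KEY-----"):
        pem_value = pem_value.replace(marker, "")
    return pem_value.replace("\n", "")
```

Applied to the sample credentials file above, this yields "AAAAABBBBBCCCCC", the value shown for the qad-erp-collaborationadapters.calendar.google.privatekey property.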
Verify that you own the domain - Before you can register your domain, you need to verify that you own it. Complete the
site verification process using Search Console. For more details, see the site verification help documentation.
Details
Paste the homepage URL of the Web UI. Ensure the URL is public and accessible by Google and click "CONTINUE".
In this case, https://qqcoqadlin008.qad.com/vmlr8gcrm2/ is used.
Choose the "HTML Tag" method to validate the site. Copy the tag (for example, <meta name="google-site-verification" content="wob9_Yh5RSNPE30EmABnZhd8lq8vmNdN0F2hNRp4KI0" />).
qad-webshell.googleSiteVerificationContent=wob9_Yh5RSNPE30EmABnZhd8lq8vmNdN0F2hNRp4KI0
Register your domain - Go to the Domain verification page in the API Console and click Add domain. Fill in the form,
then again click Add domain.
Create watch channels for users - To ask Google to send notifications to Adaptive UX, configure the following in
configuration.properties to create a subscribing channel of the webhook by schedule.
configuration.properties
# Mandatory setting for the homepage URL that is defined in Google site verification
qad-erp-collaborationadapters.calendar.pushnotification.homepageurl=https://qqcoqadlin008.qad.com/vmlr8gcrm2/
# Indicates whether to periodically renew the channel. Defaults to false; should be set to true.
qad-erp-collaborationadapters.calendar.pushnotification.channelrenew.enabled=true
# How often the channels are checked for expiration. This line is not needed if using the default of 900 seconds.
qad-erp-collaborationadapters.calendar.pushnotification.channelrenew.seconds=900
# Renew the channel for the system user if it will expire within this many minutes. This line is not needed if using the default of 60 minutes.
qad-erp-collaborationadapters.calendar.pushnotification.channelrenew.minutesBeforeExpire=60
# How long a channel lives when it is created. This line is not needed if using the default of 20 days. The maximum allowed is 24 days.
qad-erp-collaborationadapters.calendar.google.pushnotification.daysToExpire=20
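Taken together, the renewal settings above mean: every channelrenew.seconds the channels are checked, and any channel expiring within minutesBeforeExpire is renewed with a fresh daysToExpire lifetime. A minimal sketch of that timing logic (illustrative only, not QAD code):

```python
from datetime import datetime, timedelta

# Values mirroring the defaults in configuration.properties
CHECK_INTERVAL_SECONDS = 900            # channelrenew.seconds
RENEW_BEFORE = timedelta(minutes=60)    # channelrenew.minutesBeforeExpire
DAYS_TO_EXPIRE = 20                     # daysToExpire; Google allows at most 24

def needs_renewal(expiration: datetime, now: datetime) -> bool:
    """True when the channel expires within the renewal window."""
    return expiration - now <= RENEW_BEFORE

def new_expiration(now: datetime) -> datetime:
    """Expiration stamped on a freshly created channel."""
    return now + timedelta(days=DAYS_TO_EXPIRE)
```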
The following settings need to be configured in the KMS along with the credentials properties above:
configuration.properties
kms.enabled=true
kms.server.ssl.key-store=/dr01/certificates/keystore.jks
kms.server.ssl.keyStoreType=JKS
kms.server.ssl.keyAlias=qad-wildcard
KMS currently does not support file encryption with YAB configuration. In a future release, it will be possible to place the JSON file directly into the YAB environment.
Please be aware that this option only supports one-directional integration from CRM to the calendar. Changes made directly to the calendar cannot be integrated back into QAD CRM.
The system uses the email account set below to send the notifications. Replace <email id> and <password> with the real email ID and password of the account from which invites will be sent.
- Using an email poller program to attach email messages to the proper CRM records, such as CRM contacts, CRM accounts, leads, opportunities, marketing campaigns, and events.
- Sending batch emails that are handled via background processing. This function offers an email template feature where the field values of the email template are replaced for each contact. For example, if a user wants to send a marketing email to all customers, the contact name in the template is replaced with the contact's real name, and each contact receives their own customized email.
- A bounced email account can be configured to identify which emails are bounced. These bounced contacts can be reviewed later to correct the email ID.
- A solution to let customers unsubscribe from emails.
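The per-contact template substitution described above can be sketched as follows; the ${name} placeholder syntax and field names are assumptions for illustration, not the actual QAD template format:

```python
import string

def render_template(template: str, contact: dict) -> str:
    """Replace each ${field} placeholder with the contact's value."""
    return string.Template(template).safe_substitute(contact)

# Each contact receives a personalized copy of the same template,
# which is then queued for background sending.
template = "Dear ${name}, our spring promotion is now live."
contacts = [{"name": "Alice"}, {"name": "Bob"}]
emails = [render_template(template, c) for c in contacts]
```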
An Email Contacts action to send emails to multiple contacts is included for the CRM Contacts, CRM Accounts, and Lead
Contacts screens as well as select other browses and grids.
Important: The email accounts used for all three of these purposes should not be the same account.
qad-erp-custrelmgmt.crm.email.poller.autostart=true
qad-erp-custrelmgmt.crm.email.poller.username=<email1>
qad-erp-custrelmgmt.crm.email.poller.password=<password1>
qad-erp-custrelmgmt.crm.email.poller.protocol=<protocol>
qad-erp-custrelmgmt.crm.email.poller.hostname=<hostname>
qad-erp-custrelmgmt.crm.email.poller.port=<port>
qad-erp-custrelmgmt.crm.email.poller.mailbox=inbox
qad-erp-custrelmgmt.crm.email.bounced.poller.autostart=true
qad-erp-custrelmgmt.crm.email.bounced.poller.username=<email2>
qad-erp-custrelmgmt.crm.email.bounced.poller.password=<password2>
qad-erp-custrelmgmt.crm.email.bounced.poller.protocol=<protocol>
qad-erp-custrelmgmt.crm.email.bounced.poller.hostname=<hostname>
qad-erp-custrelmgmt.crm.email.bounced.poller.port=<port>
qad-erp-custrelmgmt.crm.email.bounced.poller.mailbox=inbox
qad-erp-custrelmgmt.crm.email.unsubscribe.poller.autostart=true
qad-erp-custrelmgmt.crm.email.unsubscribe.poller.username=<email3>
qad-erp-custrelmgmt.crm.email.unsubscribe.poller.password=<password3>
qad-erp-custrelmgmt.crm.email.unsubscribe.poller.protocol=<protocol>
qad-erp-custrelmgmt.crm.email.unsubscribe.poller.hostname=<hostname>
qad-erp-custrelmgmt.crm.email.unsubscribe.poller.port=<port>
qad-erp-custrelmgmt.crm.email.unsubscribe.poller.mailbox=inbox
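The bounced-email poller reads the configured mailbox and flags delivery failures. One plausible classification heuristic (an illustration; the product's actual rules are not documented here):

```python
from email import message_from_string
from email.message import Message

def is_bounce(msg: Message) -> bool:
    """Heuristic: delivery-status reports use the multipart/report
    content type, and bounces typically come from MAILER-DAEMON."""
    sender = (msg.get("From") or "").lower()
    return msg.get_content_type() == "multipart/report" or "mailer-daemon" in sender

# A fetched message that looks like a delivery failure notification
raw = "From: MAILER-DAEMON@mail.example.com\nSubject: Undelivered Mail\n\nDelivery failed."
bounce = is_bounce(message_from_string(raw))
```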
Ensure IMAP is enabled for the email accounts. If the email account is a Google account, refer to https://support.google.com/mail/answer/7126229?hl=en to set up IMAP, and also turn on "Less secure app access" at https://myaccount.google.com/lesssecureapps.
Once the settings are defined in configuration.properties, run the following command:
Then, go to chrome://settings/handlers and check if mail.google.com is present; if it is, set it as default. If it is not, reload
Gmail and click on the icon to the left of the search icon, as seen in the image below. Select Allow and click Done.
To find Handlers, go to the Privacy and security section. Then go to Site Settings and click Additional Permissions.
If Outlook is not present in chrome://settings/handlers, log into the Outlook 365 web client in Chrome and press F12 to open DevTools.
Replace <outlook 365 website> with the one you are currently using and press Enter to save.
The user's email history is synced to QAD CRM using the email account previously set up as the Cc email address in the Set Up CC Email Address section. This address must be entered in the To, Cc, or Bcc field.
Please follow the steps below to install the QAD Add-on for Outlook. The add-on is installed from its manifest.xml URL, for example:
https://vmlasf0001.qad.com/clouderp/api/open/custrelmgmt/manifest.xml
Implementation FAQ
A successful implementation requires you to gather and understand knowledge and data from a wide range of QAD resources. This section addresses common questions about installation and implementation and, when applicable, points you to the source documents for the answers.
QAD Sites
Use the following links to access training, documentation, partner information, and more:
FAQ Topics
Operating System Sizing
Operating System Configuration
Installing Adaptive UX
Memory Sizing
All values are in gigabytes (GB).
User Count: < 50, < 100, < 200, < 400, < 800, < 1000, < 2000
NOTE: Figures do not include sizing for any other applications, such as monitors, logstash, anti-virus, and icinga.
Note 1: If sizing is less than the minimum number of CPUs, use the minimum number.
Note 2: Figures do not include sizing for any other applications, such as monitors, logstash, anti-virus, and icinga.
Note 3:
- Very heavy concurrent Action Center use may require additional CPU resources.
- Activity Feed entity tracking for calculated fields results in additional API fetches, which may require additional CPU resources.
- Bulk changes to records whose non-database fields have Activity Tracking enabled may require more CPU resources to avoid the server becoming overloaded or Activity Feed events being delayed.
Assumptions:
vm.swappiness = 1
vm.max_map_count = 262144
kernel.shmmax = <RAM SIZE>
kernel.shmall = <SHMMAX>
kernel.shmmni = 4096
kernel.sem = 10000 640000 2560 20480
fs.file-max = 100000
net.ipv4.ip_local_port_range = 1024-14999,30001-65000
net.ipv4.ip_local_reserved_ports = 15000-30000
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
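Assuming a typical Linux host, kernel settings such as the above can be persisted in a sysctl drop-in file so they survive reboots (the file name below is an example, not mandated; two settings are shown for brevity):

```shell
# Write a drop-in file with the required kernel settings.
cat > 99-qad-adaptive-ux.conf <<'EOF'
vm.swappiness = 1
vm.max_map_count = 262144
EOF
# On the target host (as root), install the file and reload without rebooting:
#   install -m 0644 99-qad-adaptive-ux.conf /etc/sysctl.d/99-qad-adaptive-ux.conf
#   sysctl --system
```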
The knowledge base entry includes the following details. The total number of required semaphores depends upon:
The semaphore usage is presented per Progress guidelines. Manual tuning of these parameters may be necessary; Progress recommends testing and monitoring in order to identify the optimal values for the system.
Installing Adaptive UX
What is the Web UI Proxy URL?
During Adaptive UX installation, the installer prompts you to enter an optional Web UI Proxy URL. This is typically an
Apache server instance used to control external access to the Web UI.
The Apache Reverse Proxy server sits in the DMZ, accessible to the public internet, and serves as a gateway to Tomcat
servers running the Web UI. Because it is publicly accessible, the Apache server must be protected with basic security
hardening measures, including the use of SSL/TLS for all communication. The server's main responsibilities are to pass
HTTPS requests to the correct Tomcat server and to enable compression for performance reasons. Compression is
critical if the Web UI will be accessed over a WAN.
1. Configuration for Action Centers / Logi. Refer to the Action Centers section of this guide and the QAD Security
Administration Guide (September 2022 version) on the QAD Document Library.
2. Compression. The information is outlined below as a guide, but refer to the official Apache documentation.
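As a starting point only (the directives below are standard Apache mod_deflate configuration; verify module availability and MIME types against the official Apache documentation for your version), compression can be enabled along these lines:

```apache
<IfModule mod_deflate.c>
    # Compress the text-based responses served by the Web UI
    AddOutputFilterByType DEFLATE text/html text/css text/javascript application/javascript application/json
</IfModule>
```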
- Certain platform development features are only enabled in a Development environment, not in a Test or Production environment.
- The licensing of some products can differ based on whether the environment is a Production environment.
- UI theming can be based on the environment type, so different color schemes can be used for Dev, Test, and Prod environments.