aiR for Review Guide
September 17, 2025
For the most recent version of this document, visit our documentation website.
Table of Contents
1 aiR for Review 6
1.1 Analysis review types 6
1.2 aiR for Review workflow 6
1.3 How it works 7
1.4 Understanding documents and billing 8
1.5 Regional availability of aiR for Review 8
1.6 Language support 9
1.7 Analyzing emojis 9
2 Installing aiR for Review 10
2.1 Object types available 10
2.2 Tabs available 11
3 Permissions 12
3.1 Viewing the aiR for Review dashboard 12
3.2 Creating and running an aiR for Review project 12
3.3 Editing and running an existing aiR for Review project 12
3.4 Viewing highlights in the Viewer 13
3.5 Viewing the aiR for Review Jobs tab 13
3.5.1 Instance-level permissions 13
3.5.2 Workspace-level permissions 13
3.6 Clearing and restoring job results 14
3.7 Running the aiR for Review mass action 14
4 Best practices 15
4.1 Tips for writing prompt criteria 15
4.2 Prompt criteria iteration sample documents 16
4.3 Prompt criteria iteration workflow 16
5 Navigating the aiR for Review dashboard 17
5.1 Project details strip 17
5.2 Prompt Criteria panel 17
5.3 Aspect selector bar 18
5.4 Project metrics section 19
5.4.1 Version Metrics tab 19
5.4.2 History tab 21
5.4.3 Analysis Results section 22
6 Creating an aiR for Review project 25
6.1 Project workflow 25
6.2 Job capacity, size limitations, and speed 25
6.2.1 Size limits 25
6.2.2 Volume limits 25
6.2.3 Speed 26
6.3 Setting up the project 26
6.3.1 Choosing the data source (saved search) 26
6.3.2 Choosing an analysis type 26
6.3.3 Setting up an aiR for Review project 27
6.4 Deleting an aiR for Review project 28
6.5 Using project sets 29
6.5.1 Creating a new project set 29
6.5.2 Validating the prompt criteria 30
6.5.3 Applying the prompt criteria 30
7 Developing Prompt Criteria 32
7.1 Entering Prompt Criteria 32
7.2 Prompt Criteria tabs and fields 33
7.2.1 Case Summary tab 33
7.2.2 Relevance tab 33
7.2.3 Key Documents tab 34
7.2.4 Issues tab 34
7.3 Using prompt kickstarter 35
7.4 Editing and collaboration 38
7.5 How Prompt Criteria versioning works 38
7.6 How version controls affect the Viewer 39
7.7 Revising the Prompt Criteria 39
7.8 Exporting Prompt Criteria 40
8 Running the analysis 42
9 Analyzing aiR for Review results 44
9.1 How aiR for Review analysis results work 44
9.1.1 Predictions versus document coding 44
9.1.2 Variability of results 45
9.2 Understanding document scores 45
9.3 Viewing results from the aiR for Review dashboard 45
9.4 Viewing results for individual documents from the Viewer 45
9.4.1 Citations and highlighting 46
9.4.2 Adding aiR for Review fields to layouts 48
9.5 Filtering and sorting aiR for Review results 48
9.6 How document errors are handled 48
9.7 Ungrounded citations 49
10 aiR for Review Prompt Criteria validation 50
10.1 Prerequisite 50
10.2 How validation fits into aiR for Review 50
10.3 High-level Prompt Criteria validation workflow 50
10.4 Setting up aiR for Review Prompt Criteria validation 52
10.4.1 Applying the validated results or developing different Prompt Criteria 54
10.4.2 Validation Settings fields 54
10.5 Prompt Criteria validation in Review Center 55
10.5.1 How Review Center fits into the validation process 55
10.5.2 Managing the coding process 56
10.5.3 Reviewing validation statistics 56
10.5.4 Accepting or rejecting results 57
10.5.5 How changes affect the validation results 58
10.6 Prompt Criteria validation statistics 58
10.6.1 Defining the validation statistics 59
10.6.2 How documents are categorized for calculations 59
10.6.3 Prompt Criteria validation metric calculations 60
10.6.4 How the confidence interval works 62
10.6.5 How Prompt Criteria validation differs from other validation types 62
11 Managing aiR for Review jobs 63
11.1 aiR for Review Jobs tab 63
11.2 How aiR for Review document linking works 63
11.3 Managing jobs and document linking 63
11.4 Viewing job details 64
11.5 Jobs tab fields 64
12 Creating document views and saved searches 66
12.1 Creating an aiR for Review results view 66
12.2 aiR for Review results fields 66
12.2.1 aiR Relevance Analysis fields 66
12.2.2 aiR Issues Analysis fields 67
12.2.3 aiR Key Analysis fields 68
13 Running aiR for Review as a mass operation 70
14 Using aiR for Review with Review Center 71
14.1 Using prompt criteria validation 71
14.2 Using aiR to prioritize documents in a review queue 71
15 Archiving and restoring workspaces 72
1 aiR for Review
aiR for Review harnesses the power of large language models (LLM) to review documents. It uses generative artificial
intelligence (AI) to simulate the actions of a human reviewer by finding and describing relevant documents according
to the review instructions (prompt criteria) that you provide. It identifies the documents, describes why they are
relevant using natural language, and demonstrates relevance using citations from the document.
A few benefits of the application include:
l Highly efficient, low-cost document analysis
l Quick discovery of important issues and criteria
l Consistent, cohesive analysis across all documents
Below are some common use cases for it:
l Beginning the review process—prioritize the most important documents to give to reviewers.
l First-pass review—determine what you need to produce and discover essential insights.
l Gaining early case insights—learn more about your matter right from the start.
l Internal investigations—find documents and insights that help you understand the story hidden in your data.
l Analyzing productions from other parties—reduce the effort to find important material and get it into the
hands of decision makers.
l Quality control for traditional review—compare aiR for Review's coding predictions to decisions made by
reviewers to accelerate QC and improve results.
1.1 Analysis review types
aiR for Review offers three analysis types, each suited to a specific review or investigation phase.
l Relevance—use to find documents that are relevant to a case or situation that you describe, such as documents responsive to a production request.
l Key Documents—use to find documents that are "hot" or important to a case or investigation, such as those that might be critical or embarrassing to one party or another.
l Issues—use to find documents that include content that falls under specific categories. For example, you might use this to check whether documents involve coercion, retaliation, or a combination of both.
1.2 aiR for Review workflow
aiR for Review's process is similar to training a human reviewer: explain the case and its relevance criteria, hand over
the documents, and check the results. If the application misunderstood any part of the prompt criteria, simply explain
that part in more detail, then try again.
The workflow has three phases:
1. Develop—you write and iterate on the Prompt Criteria (review instructions) and test on a small document set
until aiR’s recommendations align sufficiently with expected relevance and issue classifications.
2. Validate—run the Prompt Criteria on a slightly larger set of documents and compare to results from reviewers.
3. Apply—use the verified Prompt Criteria on much larger sets of documents.
Within RelativityOne, the main steps are:
1. Select the documents to review.
2. Create an aiR for Review project. See Creating an aiR for Review project on page 25 for more information.
3. Write and submit the review instructions, collectively called Prompt Criteria. See Developing Prompt Criteria on
page 32 for more information.
4. Review the results (citations, rationale, considerations, recommendation). See Analyzing aiR for Review results
on page 44 for more information.
When setting up the first analysis, we recommend running it on a sample set of documents that was already coded by
human reviewers. If the resulting predictions are different from the human coding, revise the prompt criteria and try
again. This could include rewriting unclear instructions, defining an acronym or a code word, or adding more detail to
an issue definition.
For additional workflow help and examples, see Workflows for Applying aiR for Review on the Community site.
1.3 How it works
aiR for Review's analysis is powered by Azure OpenAI's GPT-4 Omni large language model. The LLM is designed to
understand and generate human language. It is trained on billions of documents from open datasets and the web.
When you submit prompt criteria and a set of documents to aiR for Review, Relativity sends the first document to
Azure OpenAI and asks it to review the document according to the prompt criteria. After Azure OpenAI returns its
results, Relativity sends the next document. The LLM reviews each document independently, and it does not learn
from previous documents. Unlike Review Center, which makes its predictions based on learning from the document
set, the LLM makes its predictions based on the prompt criteria and its built-in training.
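To make this flow concrete, the sketch below mirrors the document-by-document loop described above. It is a conceptual illustration only; the function names and response fields are hypothetical and do not represent Relativity's or Azure OpenAI's actual APIs.

```python
# Conceptual sketch only: illustrates the document-by-document flow described above.
# Function names and result fields are hypothetical, not Relativity's actual API.

def review_document(prompt_criteria: str, document_text: str) -> dict:
    """Stand-in for a single LLM call; each call sees only one document."""
    # In the real service, Relativity sends the prompt criteria plus this one
    # document to Azure OpenAI and waits for the result before sending the next.
    return {"prediction": "Relevant", "rationale": "...", "citations": []}

def run_analysis(prompt_criteria: str, documents: list[str]) -> list[dict]:
    results = []
    for text in documents:
        # Each document is reviewed independently; nothing learned from earlier
        # documents carries over to later ones.
        results.append(review_document(prompt_criteria, text))
    return results

predictions = run_analysis("Relevance criteria go here...", ["document one text", "document two text"])
```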
Azure OpenAI does not retain any data from the documents being analyzed. Data you submit for processing by Azure
OpenAI is not retained beyond your organization’s instance, nor is it used to train any other generative AI models from
Relativity, Microsoft, or any other third party. For more information, see the white paper A Focus on Security and
Privacy in Relativity’s Approach to Generative AI on the Relativity website.
For more information on using generative AI for document review, we recommend:
l Relativity Webinar - AI Advantage: How to Accelerate Review with Generative AI
l MIT's Generative AI for Law resources
l The State Bar of California's drafted recommendations for the use of generative AI
1.4 Understanding documents and billing
For billing purposes, a document unit is a single document. The initial pre-run estimate may be higher than the actual
units billed because of canceled jobs or document errors. To find the actual document units that are billed, see Cost
Explorer on the Relativity documentation site.
A document will be billed each time it runs through aiR for Review, regardless of whether that document ran before.
Caution: Customer may not consolidate documents or otherwise take steps to circumvent the aiR for Review
Document Unit limits, including for the purpose of reducing the Customer's costs. If Customer takes such action,
Customer may be subject to additional charges and other corrective measures as deemed appropriate by Relativity.
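As a quick illustration of the billing rule above, the following sketch uses hypothetical run sizes to show how document units accumulate across repeated runs.

```python
# Illustrative arithmetic only (hypothetical numbers): a document unit is a single
# document, and a document is billed each time it runs through aiR for Review.
runs = [1000, 1000, 250]   # e.g., two full runs plus a 250-document re-run
billed_units = sum(runs)   # 2,250 units, even though only 1,000 unique documents exist
print(billed_units)
```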
1.5 Regional availability of aiR for Review
The availability of aiR for Review, as well as the LLM model it uses, may vary by region. After OpenAI releases an LLM model to a region, Relativity tests it and notifies clients before upgrading aiR for Review.
The following table lists the current LLM model available in each region, the date it was deployed to aiR for Review, and the current version of aiR for Review, which may vary by region.
Region           Current LLM Model       aiR for Review Model Deployment Date   Current aiR for Review Version
United States    GPT-4 Omni - November   2025-06-16                             2025.06.1
United Kingdom   GPT-4 Omni - November   2025-06-16                             2025.06.1
Australia        GPT-4 Omni - November   2025-06-16                             2025.06.1
Canada           GPT-4 Omni - November   2025-06-16                             2025.06.1
France           GPT-4 Omni - November   2025-06-16                             2025.06.1
Germany          GPT-4 Omni - November   2025-06-16                             2025.06.1
Hong Kong        GPT-4 Omni - November   2025-07-08                             2025.06.1
India            GPT-4 Omni - November   2025-07-08                             2025.06.1
Ireland          GPT-4 Omni - November   2025-06-16                             2025.06.1
Japan            GPT-4 Omni - November   2025-07-08                             2025.06.1
Netherlands      GPT-4 Omni - November   2025-06-16                             2025.06.1
Singapore        GPT-4 Omni - November   2025-07-08                             2025.06.1
South Korea      GPT-4 Omni - November   2025-07-08                             2025.06.1
Switzerland      GPT-4 Omni - November   2025-06-16                             2025.06.1
When you use Relativity's AI technology, the selected customer data may be processed outside of your specific geographic location, as shown below. If your geography is not listed below, contact your Relativity Success Manager for further information.
RelativityOne Deployment Geography                 aiR Processing Geography
APAC (Hong Kong, Japan, Singapore, South Korea)    Japan
Australia                                          Australia
Canada                                             Canada
EEA (France, Germany, Ireland, Netherlands)        EEA (currently Germany)
India                                              India
Switzerland                                        Switzerland
United Kingdom                                     United Kingdom
United States                                      United States
For more details about availability in your region, contact your account representative.
For technical specifications of your region's current LLM model, see documentation on the Azure website.
1.6 Language support
The underlying LLM used by aiR for Review has been evaluated for use with 83 languages. While aiR for Review itself
has been primarily tested on English-language documents, unofficial testing with non-English datasets shows
encouraging results.
If you use the application with non-English data sets, we recommend the following:
l Rigorously follow best practices for writing. For more information, see Best practices on page 15.
l Iterate on the prompt criteria. For more information, see Revising the Prompt Criteria on page 39.
l Analyze the extracted text as-is. You do not need to translate it into English.
l When possible, write the prompt criteria in the same language as the documents being analyzed. This should
also be the subject matter expert's native language. If that is not possible, write the prompt criteria in English.
When you view the results of the analysis, all citations stay in the same language as the document they cite. By
default, the rationales and considerations are in English. If you want the rationales and considerations to be in a
different language, type “Write rationales and considerations in [desired language]” in the Additional Context field of
the prompt criteria.
For the study used to evaluate Azure OpenAI's GPT-4 model across languages, see MEGAVERSE: Benchmarking
Large Language Models Across Languages, Modalities, Models and Tasks on the arXiv website.
1.7 Analyzing emojis
aiR for Review has not been specifically tested for analyzing emojis. However, the underlying LLM does understand
Unicode emojis. It also understands other formats that could normally be understood by a human reviewer. For
example, an emoji that is extracted to text as :smile: would be understood as smiling.
2 Installing aiR for Review
aiR for Review is available as a secured application from the Application Library. You must have an active aiR for
Review contract to use it.
To install it in the workspace:
Note: This application is not available for repository workspaces.
1. Navigate to a workspace where you want to install the application.
2. Click the Workspace Admin tab and the Relativity Applications tab.
3. Click New Relativity Application to display an application form.
4. Click the Select from Application Library radio button in the Application Type section.
Note: Global applications are not listed in the Select from Application Library option when attempting to
add an application to a workspace.
5. Click in the Choose from Application Library field.
6. Select the aiR for Review application on the Select Library Application dialog. This dialog displays only applications added to the Application Library.
7. Click Ok. The application form displays the following fields:
l Schema Version—displays the version of the application that you are installing.
l User-friendly URL—displays a user-friendly version of the application's URL. This field may be blank.
l Application Artifacts—displays object types and other application components.
8. (Optional) Click Clear to remove the application from the form.
9. Click Import to save your mappings and import the application.
10. Review the import status of the application. Verify that the install was successful or resolve errors.
For more information on installing applications, see Relativity Applications in the Admin guide.
See Permissions on page 12 for the list of permissions required for using aiR for Review.
2.1 Object types available
The following object types will appear in your workspace:
l aiR Relevance Analysis—records the Relevance results of aiR for Review analysis runs.
l aiR Issue Analysis—records the Issue results of aiR for Review analysis runs.
l aiR Key Analysis—records the Key results of aiR for Review analysis runs.
l aiR for Review Prompt Criteria—records the Prompt Criteria settings and contents for each analysis run. This
also records Prompt Criteria drafts for each user.
l aiR for Review Project—records the details of each aiR for Review project.
2.2 Tabs available
The following tabs will appear:
l aiR for Review Projects (workspace level)—create and manage aiR for Review projects and view the project
dashboard.
l aiR for Review Jobs (workspace level)—view and manage jobs created by the aiR for Review application
within the workspace.
l aiR for Review Jobs (instance level)—view and manage jobs created by the aiR for Review application across
all workspaces in the instance.
3 Permissions
The information below outlines the permissions required for using aiR for Review.
3.1 Viewing the aiR for Review dashboard
To view the aiR for Review dashboard, you need the following permissions:
Object Security:
l aiR for Review Project - View
l aiR for Review Prompt Criteria - View
l aiR Relevance Analysis - View
l aiR Issue Analysis - View
l aiR Key Analysis - View
Tab Visibility:
l aiR for Review Projects
You can view results only for the analysis types you have permission to view.
3.2 Creating and running an aiR for Review project
To create an aiR for Review project and run the analysis, you need the following permissions:
Object Security:
l aiR for Review Project - View, Edit, Add
l aiR for Review Prompt Criteria - View, Edit, Add
l aiR Relevance Analysis - View
l aiR Issue Analysis - View
l aiR Key Analysis - View
Tab Visibility:
l aiR for Review Projects
You can run the job without permissions for the analysis types, but you won't be able to see the results.
3.3 Editing and running an existing aiR for Review project
To edit an existing aiR for Review project and run the analysis, you need the following permissions:
Object Security:
l aiR for Review Project - View, Edit
l aiR for Review Prompt Criteria - View, Edit
l aiR Relevance Analysis - View
l aiR Issue Analysis - View
l aiR Key Analysis - View
Tab Visibility:
l aiR for Review Projects
You can run the job without permissions for the analysis types, but you won't be able to see the results.
3.4 Viewing highlights in the Viewer
To see aiR for Review results highlighted in the Viewer, you need the following permissions:
Object Security:
l aiR Relevance Analysis - View
l aiR Issue Analysis - View
l aiR Key Analysis - View
You will see highlighting only for the analysis types you have permission to view. If you are not granted any of these
permissions, you will not see the aiR for Review Analysis icon.
3.5 Viewing the aiR for Review Jobs tab
There are two versions of the aiR for Review Jobs tab: one at the instance level, and one at the workspace level. The
instance-level tab shows all jobs across all workspaces. It includes several extra columns to identify the workspace,
matter, and client connected to each job.
The following permissions allow users to see the job list and click on each job to view Prompt Criteria details. Users
with access to this tab can also cancel in-progress jobs.
3.5.1 Instance-level permissions
To view the instance-level aiR for Review Jobs tab, you need the following permissions:
Tab Visibility:
l aiR for Review Jobs
Admin Operations:
l Admin Repository - View
Assign these permissions under the Instance Details tab.
3.5.1.1 Viewing Prompt Criteria at the instance level
To view Prompt Criteria details for a job, you also need some permissions within that job's workspace:
l You must belong to more than just the Workspace Admin Group within the workspace.
l You must have aiR for Review Prompt Criteria - View rights within that job's workspace.
Without these, you can see jobs from that workspace but cannot click on them to view their Prompt Criteria.
You can also use item-level permissions to restrict access to a specific job's aiR for Review Prompt Criteria. For more
information, see Security and permissions in the Admin guide.
3.5.2 Workspace-level permissions
To view the workspace-level aiR for Review Jobs tab, you need the following permissions:
Object Security:
l aiR for Review Prompt Criteria - View
Tab Visibility:
l aiR for Review Jobs
Assign these permissions under the Workspace Details tab within the chosen workspace.
You can also use item-level permissions to restrict access to a specific job's aiR for Review Prompt Criteria. For more
information, see Security and permissions in the Admin guide.
3.6 Clearing and restoring job results
To clear or restore job results using the aiR for Review Jobs tab, you need the following permissions:
Object Security:
l Document - View, Edit
l aiR Relevance Analysis - View, Edit
l aiR Issue Analysis - View, Edit
l aiR Key Analysis - View, Edit
Tab Visibility:
l aiR for Review Jobs
You can only clear or restore results for analysis types if you have Edit permissions for them.
For more information on clearing and restoring results, see Managing jobs and document linking on page 63.
3.7 Running the aiR for Review mass action
To run the aiR for Review mass action, you need the following permissions:
Object Security:
l aiR for Review Prompt Criteria - View, Edit, Add
Other Settings:
l Mass Operations - aiR for Review
You must also belong to at least one user group other than the Workspace Admin Group.
4 Best practices
aiR for Review works best when you refine the prompt criteria before committing to a full run. Analyzing just a few documents at first, comparing the results to human coding, and then adjusting the prompt criteria as needed yields more accurate results than diving straight into a full document set.
4.1 Tips for writing prompt criteria
The prompt criteria you enter often align with a traditional review protocol or case brief in that they describe the matter, the entities involved, and what is relevant to the legal issues at hand.
When writing prompt criteria, use natural language to describe why particular types of documents should be
considered relevant. Write them as though you were describing them to a human reviewer.
l Write clearly—use active voice, use natural speaking phrases and terms, be explicit.
l Be concise—write as if "less is more," summarize lengthy text or only include key passages from a long review
protocol. The prompt criteria have an overall length limit of 15,000 characters.
l Simply describe the case—do not give commands, such as “you will review XX."
l Use positive phrasing—phrase instructions in a positive way when possible. Avoid negatives ("not" statements)
and double negatives.
l Use natural writing format styles—use whatever writing format makes the most sense to a human reader. For
example, bullet points might be useful for the People and Aliases section, but paragraphs might make sense in
another section.
l Is it important?—ask yourself whether each criterion will affect the results; include it only if it is essential.
l Avoid legal jargon or explanations—for example, don't use "including but not limited to" and "any and all" and
don't include explanations of the law.
l Use ALL CAPS—capital letters help identify essential information for the model to focus on; for example, use "MUST" instead of "should."
l Identify internal jargon and phrases—the large language model (LLM) has essentially "read the whole Internet." It understands widely used slang and abbreviations, but it does not necessarily know jargon or phrases that are internal to an organization.
l Identify aliases, nicknames, and uncommon acronyms—for example, a nickname for William may be Bill, or BT
may be an abbreviation for the company name Big Thorium.
l Identify unfamiliar emails—normal company email addresses do not need to be identified, but unfamiliar ones should be; for example, Dave Smith may use [email protected] and [email protected].
l Iterate, iterate, iterate—test the prompt criteria, review the results, and adjust the criteria to obtain more accurate predictions and results.
Refer to the helper examples in the prompt criteria text boxes of the dialogs for additional guidance on entering criteria in each field.
For additional guidance on prompt writing, see the following resources on the Community site:
l aiR for Review Prompt Writing Best Practices—downloadable PDF of writing guidelines
l aiR for Review example project—detailed example of adapting a review protocol into prompt criteria
l AI Advantage: Aiming for Prompt Perfection?—on-demand webinar discussing prompt criteria creation
4.2 Prompt criteria iteration sample documents
Before setting up the aiR for Review project, create a saved search that contains a small sample of the documents you
want reviewed.
For best results:
l Include roughly 50-100 test documents that are a mix of relevant, not relevant, and challenging documents.
l Make sure they highlight all the key features of your relevance criteria.
l Have human reviewers code the documents in advance.
For more information about choosing documents for the sample, see Selecting a Prompt Criteria Iteration Sample for
aiR for Review on the Community site.
See Creating or editing a saved search in the Searching guide for details about saved searches.
4.3 Prompt criteria iteration workflow
We recommend the following workflow for crafting prompt criteria:
1. For your first analysis, run the prompt criteria on a saved search of 50-100 test documents that are a mix of relevant, not relevant, and challenging documents.
2. Compare the results to human coding. In particular, look for documents that the application coded differently than the humans did and investigate possible reasons, such as unclear instructions, an acronym or code word that needs defining, or other blind spots in the prompt criteria (a minimal comparison sketch follows this list).
3. Tweak the prompt criteria to adjust for blind spots.
4. Repeat steps 1 through 3 until the application predicts coding decisions accurately for the test documents.
5. Test the prompt criteria on a sample of 50 more documents and compare results. Continue tweaking and
adding documents until you are satisfied with the results for a diverse range of documents.
6. Finally, run the prompt criteria on a larger set of documents.
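The comparison in step 2 can be done however you normally QC coding decisions. As a minimal sketch, assuming hypothetical dictionaries keyed by control number (this is not a Relativity API, just the comparison logic), it looks like this:

```python
# Minimal sketch: find documents where aiR's prediction disagrees with human coding.
# The control numbers and values below are hypothetical placeholders.
human_coding = {"DOC-001": "Relevant", "DOC-002": "Not Relevant", "DOC-003": "Relevant"}
air_predictions = {"DOC-001": "Relevant", "DOC-002": "Relevant", "DOC-003": "Not Relevant"}

conflicts = {
    doc: (human, air_predictions[doc])
    for doc, human in human_coding.items()
    if doc in air_predictions and human != air_predictions[doc]
}

# Review each conflicting document to spot unclear instructions, undefined
# acronyms or code words, or other blind spots in the prompt criteria.
for doc, (human, predicted) in conflicts.items():
    print(f"{doc}: reviewer coded {human}, aiR predicted {predicted}")
```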
aiR for Review only sees the extracted text of a document. It does not see any non-text elements like advanced
formatting, embedded images, or videos. We do not recommend using aiR for Review on documents such as images,
videos, or spreadsheets with heavy formulas. Instead, use it on documents whose extracted text accurately
represents their content and meaning.
5 Navigating the aiR for Review dashboard
When you select a project from the aiR for Review Projects tab, a dashboard displays showing the project's prompt
criteria, the list of documents, and controls for editing the project. If the project has been run, it also displays the
results.
5.1 Project details strip
At the top of the dashboard, the project details strip displays:
1. Project name—name given to the project during set up. The Analysis Type appears underneath the name.
2. Project set type—indicates whether the project set and criteria are Develop, Validate, or Apply.
3. Data Source Name—name of the data source (saved search) chosen at project set up and the document count. Click the link to view the documents in the data source.
4. Version number—version number of the prompt criteria for the set and the last run or saved date. For more information, see How Prompt Criteria versioning works on page 38.
5. Down arrow—click to move between project sets to view their statistics.
6. + plus icon (Project set)—click to create a new project set using a different data source for the Prompt Criteria, to validate the Prompt Criteria on a target document population, or to apply it to the larger document set. See Using project sets on page 29 for more information.
7. Run button—click to analyze the selected documents using the current version of the prompt criteria. If no documents are selected or filtered, it will analyze all documents in the data source.
l If you are viewing the newest version of the prompt criteria and no job is currently running, this says Analyze [X] documents.
l If an analysis job is currently running or queued, this button is unavailable and a Cancel option appears.
l If you are viewing older versions of the prompt criteria, this button is unavailable.
8. Feedback icon—click to send optional feedback to the aiR for Review development team.
5.2 Prompt Criteria panel
On the left side of the dashboard, the Prompt Criteria panel displays tabs that match the project type you chose when
creating the project. These tabs contain fields for writing the criteria you want aiR for Review to use when analyzing
the documents.
Possible tabs include:
l Case Summary—appears for all analysis types.
l Relevance—appears for Relevance and Relevance and Key Documents analysis types.
l Key Documents—appears for the Relevance and Key Documents analysis type.
l Issues—appears for the Issues analysis type.
For information on filling out the prompt criteria tabs, see Developing Prompt Criteria on page 32.
For information on building prompt criteria from existing case documents, like requests for production or review
protocols, see Using prompt kickstarter on page 35.
To export the prompt criteria displayed, see Exporting Prompt Criteria on page 40.
If you want to temporarily clear space on the dashboard, click the Collapse symbol in the upper right of the prompt criteria panel. To expand the panel, click the symbol again.
5.3 Aspect selector bar
The aspect selector bar appears in the upper middle section of the dashboard for projects that use Issues analysis or
Relevance and Key Documents analysis. This lets you choose which metrics, citations, and other results to view in the
Analysis Results grid.
l For a Relevance and Key Documents analysis:
Two aspect tabs appear: one for the field you selected as the Relevant Choice, and one for the field you selected as the Key Document Choice.
l For Issues analysis:
l The All Issues tab displays first. It offers a comprehensive overview of predictions and scores for all
issues and documents within the project. The total number of issue predictions displayed on the All
Issues tab is calculated by multiplying the number of issues by the number of documents. For example, if
you had 10 issues and 100 documents, issue predictions would equal 1000.
l A tab appears for every Issues field choice that has choice criteria. Click each to view the corresponding
data. The tabs appear in order according to each choice's Order value. For information on changing the
choice order, see Choices in the Admin guide.
When you select one of the aspect tabs in the bar, both the project metrics section and analysis results grid update to
show results related to that aspect. For example:
l If you choose the Key Document tab:
The project metrics section shows how many documents have been coded as key. The Analysis Results grid
updates to show predictions, rationales, citations, and all other fields related to whether the document is key.
l If you choose an issue from the aspect selector:
The project metrics section and analysis results grid both update to show results related to that specific issue.
The total number of issue predictions in this section is calculated by multiplying the number of issues by the number of documents. For example, if there are five issues and 100 documents, there will be 500 issue predictions.
5.4 Project metrics section
In the middle section of the dashboard, the project metrics section shows the results of previous analysis jobs. There
are two tabs: one for the current version's most recent results (Version Metrics tab below), and one for a list of
historical results for all previous versions (History tab on page 21).
5.4.1 Version Metrics tab
The Version [X] Metrics tab shows metrics divided into sections:
Documents
l Reviewer Uncoded (for Relevance or Relevance and Key Documents analysis only)—total number of documents that have not been coded by reviewers.
l Reviewer Coded Issues (for Issues analysis only)—total number of documents that reviewers coded as having the selected issue. Be aware that when the All Issues tab is selected, the counts for each issue are combined, which may result in a number that is more than the total document count.
l Analyzed—documents in the data source that have a prediction attached from this prompt criteria version.
l Not Analyzed—documents in the data source that do not have a prediction attached from this prompt criteria version.
l Errored—documents that received an error code during analysis. For more information, see How document errors are handled on page 48.
Issue Predictions
Note: The Issue Predictions section displays for Issues analysis when the All Issues tab is selected.
l Not Relevant—issues predicted as junk or not relevant to the current aspect.
l Borderline—issues predicted as bordering between relevant and not relevant to the current aspect.
l Relevant—issues predicted as relevant or very relevant to the current aspect.
l Errored—issues that received an error code during analysis.
aiR Analysis
l Not Relevant—documents predicted as junk or not relevant to the current aspect.
l Borderline—documents predicted as bordering between relevant and not relevant to the current aspect.
l Relevant—documents predicted as relevant or very relevant to the current aspect.
Conflicts
l Total—total number of documents that have a coding decision different from the predicted result. This is the sum of the Relevant Conflicts and Not Relevant Conflicts fields.
l Relevant—documents predicted as relevant or very relevant to the current aspect, but the coding decision in the related field says something else.
l Not Relevant—documents predicted as not relevant to the current aspect, but the coding decision in the related field says relevant.
The metrics adjust their counts based on the type of results displayed:
l For Relevance results, relevance-related metrics use the Relevance Field for their counts.
l For Key Document results, relevance-related metrics use the Key Document Field for their counts.
l For Issues analysis, relevance-related metrics count documents marked for the selected issue.
For instance, when viewing results for an issue such as Fraud, the aiR Predicted Relevant field displays documents
identified as associated with Fraud. When viewing Key Document results, the aiR Predicted Relevant field displays
documents identified as key documents.
5.4.1.1 Filtering the Analysis Results using version metrics
To filter the Analysis Results table based on any of the version metrics, click the desired metric in the Version Metrics
banner. This narrows the results shown in the table to only documents that are part of the metric. It also auto-selects
those documents for the next analysis job. The number of selected documents is reflected in the Run button's text.
This makes it easier to analyze a subset of the document set instead of selecting all documents every time. To remove
filtering, click Clear selection underneath the Run button.
You can also filter documents in the Analysis Results grid by selecting them in the table. See Filtering and selecting
documents for analysis on the next page.
5.4.2 History tab
The History tab shows results for all previous versions of the prompt criteria. This table includes all fields from the
Version Metrics tab, sorted into rows by version. For a list of all Version Metrics fields and their definitions, see Version
Metrics tab on page 19.
It also displays two additional columns:
l Version—the prompt criteria version that was used for this row's results.
l Timestamp—the time the analysis job ran.
5.4.3 Analysis Results section
In the middle section of the dashboard, the Analysis Results section shows a list of all documents in the project. If the
documents have aiR for Review analysis results, those results appear beside them in the grid.
The fields that appear in the grid vary depending on what type of analysis was chosen. For a list of all results fields and
their definitions, see Analyzing aiR for Review results on page 44.
Note: aiR for Review's predictions do not overwrite the Relevance, Key, or Issues fields chosen during prompt
criteria setup. Instead, the predictions are held in other fields. This makes it easier to distinguish between human
coding choices and aiR's predictions.
To view inline highlighting and citations for an individual document, click on the Control Number link. The Viewer
opens and displays results for the selected prompt criteria version. For more information on viewing aiR for Review
results in the Viewer, see aiR for Review Analysis information in the Viewer documentation.
5.4.3.1 Filtering and selecting documents for analysis
To manually select documents to include in the next analysis run, check the box beside each individual document in
the Analysis Results grid. The number of selected documents is reflected in the Run button's text. To remove filtering,
click Clear selection underneath the Run button.
You can also filter the Analysis Results grid by clicking the metrics in the Version Metrics section. See Filtering the
Analysis Results using version metrics on page 21.
5.4.3.2 Clearing selected documents
Click the Clear selection link below the Run button to deselect all documents in the Analysis Results grid. This resets
your selections so the next analysis includes all documents in the data source.
5.4.3.3 Saving selected documents as a list
To save a group of documents in the Analysis Results grid as a list, follow the steps below.
1. Select the box beside each individual document in the Analysis Results grid that you want to add to the list.
2. Click the Save as List option from the mass operations list at the bottom of the grid.
3. Enter a unique Name for the document list.
4. Enter any Notes in the text box to help describe the list.
5. Click Save.
For more information on lists, see Lists in the Admin guide.
6 Creating an aiR for Review project
The instructions you give aiR for Review are called Prompt Criteria. They often mimic a traditional review protocol or
case brief in that they describe the matter, entities involved, and what is relevant to the legal issues at hand. For best
results, we recommend analyzing a small set of documents, tweaking the Prompt Criteria as needed, then finally
analyzing a larger set of documents. Starting with a small set lets you see immediately how aiR's coding compares to
a human reviewer's coding and adjust the prompts accordingly.
Refer to Best practices on page 15 for more information on Prompt Criteria tips, sample workflow guidance, and
iteration workflow.
See these additional resources in Community:
l Workflows for Applying aiR for Review
l aiR for Review example project
6.1 Project workflow
The aiR for Review project workflow has three basic parts:
1. Setting up the project
2. Developing the Prompt Criteria
3. Running the first analysis
At any point in this process, you can save your progress and come back later.
6.2 Job capacity, size limitations, and speed
Based on the limits of the underlying large language model (LLM), aiR has size limits for the documents and prompts
you submit, as well as volume limits for the overall jobs.
6.2.1 Size limits
The documents and Prompt Criteria have the following size limits:
l Prompt Kickstarter lets you upload up to 5 documents with a combined length of 150,000 characters. See Using
prompt kickstarter on page 35 for more information on using the prompt kickstarter feature.
l The Prompt Criteria have an overall length limit of 15,000 characters.
l We recommend only including documents whose extracted text is between 0.05KB and 150KB. Although the
LLM can handle some larger or smaller documents, most will receive an error.
Note: The size of a job encompasses both the document size and the Prompt Criteria size. If the combined total
exceeds the per document limit, the job will be too large to process.
If a document receives an error, your organization will not be charged for it. For more information, see How document
errors are handled on page 48.
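If you want to screen a document set against these limits before submitting a job, a rough pre-flight check might look like the sketch below. It assumes you can export each document's extracted text size in KB; the helper names are hypothetical and the thresholds mirror the recommendations above, not a Relativity API.

```python
# Rough pre-flight check using the recommended limits above (hypothetical helpers).
PROMPT_LIMIT_CHARS = 15_000
MIN_DOC_KB, MAX_DOC_KB = 0.05, 150

def likely_to_error(extracted_text_kb: float) -> bool:
    """Documents outside the recommended size range will most likely receive an error."""
    return not (MIN_DOC_KB <= extracted_text_kb <= MAX_DOC_KB)

def prompt_within_limit(prompt_criteria: str) -> bool:
    return len(prompt_criteria) <= PROMPT_LIMIT_CHARS

doc_sizes_kb = {"DOC-001": 0.02, "DOC-002": 42.0, "DOC-003": 310.0}  # placeholder data
flagged = [doc for doc, kb in doc_sizes_kb.items() if likely_to_error(kb)]
print(flagged)  # ['DOC-001', 'DOC-003']
```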
6.2.2 Volume limits
The per-instance volume limits for aiR for Review jobs are as follows:
l Max job size—250,000 documents. A single job can include up to 250,000 documents.
l Total documents running per instance—600,000 documents. Across all jobs queued or running within an instance, there is a maximum of 600,000 documents.
l Concurrent large jobs per instance—10 jobs. For jobs with over 200 documents, only 10 jobs can be queued or running at the same time within an instance.
l Concurrent small jobs per instance—no limit. Jobs with 200 or fewer documents have no limit to how many can queue or run at the same time.
6.2.3 Speed
After a job is submitted, aiR analyzes on average 10,700 documents per hour globally. Job speeds vary widely
depending on the region, the number and size of documents, the overall load on the LLM, and other similar factors.
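For rough planning, you can turn the global average into a back-of-the-envelope duration estimate, as in the sketch below; actual job times vary widely for the reasons noted above, so treat the result as an order-of-magnitude figure only.

```python
# Back-of-the-envelope estimate only: real speeds vary by region, document size,
# and the overall load on the LLM.
GLOBAL_AVG_DOCS_PER_HOUR = 10_700

def estimated_hours(document_count: int) -> float:
    return document_count / GLOBAL_AVG_DOCS_PER_HOUR

print(round(estimated_hours(250_000), 1))  # a maximum-size job: roughly 23.4 hours
```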
6.3 Setting up the project
First, you need to set up the project by selecting the type of analysis desired and the data source (saved search) to
use. Then, you can move on to developing prompt criteria and running an analysis.
6.3.1 Choosing the data source (saved search)
Before setting up the project, create a saved search that contains a small sample of the documents you want
reviewed.
For best results:
l Include roughly 50 test documents that are a mix of relevant and not relevant.
l Have human reviewers code the documents in advance.
For more information about choosing documents for the sample, see Selecting a Prompt Criteria Iteration Sample for
aiR for Review on the Community site.
For more information about creating a saved search, see Creating or editing a saved search in the Searching guide.
6.3.2 Choosing an analysis type
aiR for Review offers three analysis types, each suited to a specific review or investigation phase. Choose the
appropriate type before beginning your project. Then, based on the analysis type chosen, you will need the fields
indicated in the table. aiR for Review does not actually write to these fields. Instead, it uses them for reference when
reporting on its predictions.
l Relevance—analyzes whether documents are relevant to a case or situation that you describe, such as documents responsive to a production request.
Fields needed: one single-choice results field. The field must have at least one choice.
l Relevance and Key Documents—analyzes documents for both relevance and whether they are "hot" or key to a case.
Fields needed: two single-choice results fields. These should have distinct names, such as "Relevant" and "Key," and each field should have at least one choice.
l Issues—analyzes documents for whether they include content that falls under specific categories. For example, you might use this to check whether documents involve coercion, retaliation, or a combination of both.
Fields needed: one multi-choice results field. Each of the issues you want to analyze should be represented by a choice on the field.
Note: Currently, aiR for Review analyzes a maximum of 10 issues per run. You can have as many choices for the field as you want, but you can only analyze 10 at a time. To analyze more, run multiple jobs (a brief batching sketch follows this table).
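If you have more than 10 issue choices, you can plan the extra runs by splitting the choices into batches of at most 10, one batch per job, as in the sketch below. The choice names are placeholders, not values from your workspace.

```python
# Simple sketch of the "run multiple jobs" note above: split issue choices into
# groups of at most 10, one group per aiR for Review job.
MAX_ISSUES_PER_RUN = 10

def batch_issues(choices: list[str], batch_size: int = MAX_ISSUES_PER_RUN) -> list[list[str]]:
    return [choices[i:i + batch_size] for i in range(0, len(choices), batch_size)]

issue_choices = [f"Issue {n}" for n in range(1, 24)]   # 23 hypothetical issue choices
for job_number, batch in enumerate(batch_issues(issue_choices), start=1):
    print(f"Job {job_number}: {len(batch)} issues")    # 10, 10, and 3 issues
```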
6.3.3 Setting up an aiR for Review project
To set up an aiR for Review project:
1. On the aiR for Review Project tab, select New aiR for Review Project.
2. Fill out the following fields on the Setup Project modal:
l Project Name—enter a name for the project.
l Description—enter a project description.
l Data source—select the saved search that holds your document sample. Refer to Choosing the data
source (saved search) on the previous page for more information.
l Project Prompt Criteria—select one of the following:
l Start blank—select this if you plan to write new prompt criteria from scratch.
l Copy existing—select this to choose a previously created set of prompt criteria and copy it for this
project.
l Prompt Criteria Name—either leave as the default or click the Edit icon to rename the prompt criteria. This name must be unique.
l Analysis Type—select one of the following. For more information, see Choosing an analysis type on
page 26.
l Relevance—analyzes whether documents are relevant to a case or situation that you describe,
such as documents responsive to a production request.
l Relevance and Key Documents—analyzes documents for both relevance and whether they are
“hot” or important (key) to a case.
l Issues—analyzes documents for whether they include content that falls under specific categories.
l Project Use Case—choose the option that best describes the purpose of the project.
l If none of the options describe the project, choose Other to type your own description in the field. It
will only be used for this project. Keep this description generic and do not include any confidential
or personal information.
l This field is used for reporting and management purposes. It does not affect how the project runs.
3. Click Create Project.
After the project is created, the aiR for Review project dashboard appears.
Next, refer to Developing Prompt Criteria on page 32 for the steps for writing the set of Prompt Criteria for the project
analysis.
6.4 Deleting an aiR for Review project
Use Mass Operations to delete one or more projects from the project list. For more information on Mass delete, see
Mass delete documentation.
Note: Mass delete is a permanent action, so be sure to double-check your selections before proceeding. This operation cannot be undone.
1. Navigate to the aiR for Review Projects tab.
2. Select the check box next to one or more projects to be deleted from the projects list.
3. Click Delete from the Mass Operations option at the bottom of the grid.
4. Click View Dependencies to review dependencies as needed on the confirmation modal.
5. Click Delete to confirm.
6.5 Using project sets
Project sets enable you to develop, validate, and apply prompt criteria using a new data source (saved search) without
having to create a new aiR project. This lets you run multiple jobs within a single aiR project.
They provide three different options for working in an aiR for Review Project.
l Develop: This set type is used to develop your prompt criteria on a small document set. With develop sets, the
data source (saved search) being used in the project can be changed to allow greater flexibility to run multiple
jobs within a single aiR project.
l Validate: This set type is used to validate your prompt criteria leveraging Review Center.
l Apply: This set type is commonly used after validation to apply your prompt criteria to a larger document population.
For each set, you will see:
1 - the set type (Develop, Validate, Apply).
2 - the saved search used for the set, number of documents analyzed, and number of documents in the saved search.
3 - the version number of the prompt criteria for the set and date it ran or was saved.
4 - click the down arrow to track the progress of your aiR for Review project, view previous project sets, and review the
predictions and metrics.
5 - click the + sign to create a new project set using a different data source, to validate the currently selected prompt
criteria version, or to apply it to a larger document population.
6.5.1 Creating a new project set
To create a new project set:
1. Click the project sets + sign.
2. Click I want to develop my prompt criteria using a different data source.
3. Click Select and choose the data source (saved search) to use for this project set.
4. Click Create Develop Set.
5. Proceed with modifying the Prompt Criteria as needed.
6.5.2 Validating the prompt criteria
To validate the project set's prompt criteria:
1. Within the desired project set, click the project sets + sign.
2. Click I want to validate the prompt criteria on a target document population.
3. Click Select and choose the data source (saved search) to use for this project set.
4. Fill out the Validation Settings fields to determine the validation sample. For more information on each field,
see Validation Settings fields on page 54.
5. Review the confirmation summary modal and provide email addresses, if needed, then click Start Analysis.
6. Click Go to Review Center to begin the human review and coding of the validation sample in Review Center.
Refer to Prompt Criteria validation in Review Center on page 55 for the steps to follow in Review Center.
For more information on validating Prompt Criteria, see Setting up aiR for Review Prompt Criteria validation on
page 52.
6.5.3 Applying the prompt criteria
To apply the project set's prompt criteria:
1. Within the desired project set, click the project sets + sign.
2. Click I want to apply my prompt criteria to a document population.
3. Click Create Apply Set.
4. Review the confirmation summary modal and provide email addresses, if needed, then click Start Analysis.
For more information, see Running the analysis on page 42.
7 Developing Prompt Criteria
The Prompt Criteria are a set of inputs that give aiR for Review the context it needs to understand the matter and
evaluate each document. Developing the Prompt Criteria is a way of training aiR for Review, which is your "reviewer,"
similar to training a human reviewer. See Best practices on page 15 for tips and workflow suggestions.
Depending which type of analysis you chose during set up, you will see a different set of tabs on the left-hand side of
the aiR for Review dashboard. The Case Summary tab displays for all analysis types.
When you start to write your first Prompt Criteria, the fields contain grayed-out helper text that shows examples of
what to enter. Use it as a guideline for crafting your own entries.
You can also build Prompt Criteria from existing case documents, like requests for production or review protocols, by
using the prompt kickstarter feature. See Using prompt kickstarter on page 35 for more information.
If needed, you can create project sets within a single aiR for Review project to develop, validate, and apply Prompt
Criteria without having to create new aiR projects for each iteration. See Using project sets on page 29 for more
information.
To learn more about how prompt versioning works and how versions affect the Viewer, see How version controls affect the Viewer on page 39.
Additional resources on prompt writing are available on the Community site:
l aiR for Review Prompt Writing Best Practices—downloadable PDF of writing guidelines
l aiR for Review example project—detailed example of adapting a review protocol into prompt criteria
l AI Advantage: Aiming for Prompt Perfection?—on-demand webinar discussing prompt criteria creation
7.1 Entering Prompt Criteria
The tabs that appear on the Prompt Criteria panel depend on the analysis type you selected during set up. Refer to
Setting up the project for more information.
1. On the aiR for Review dashboard, you can enter data in the Prompt Criteria tabs using any of the methods
below:
l Select each tab and manually enter the required information as outlined in the Prompt Criteria tabs and
fields on the next page section.
l Use prompt kickstarter to upload and use existing documentation, like requests for production or review
protocols, to fill in the tabs. See Using prompt kickstarter on page 35 for more information.
l Use project sets to develop, validate, and apply prompt criteria using a new data source (saved search)
without having to create a new aiR project. See Using project sets on page 29 for more information.
2. Click Save after entering data on each tab or after all tabs are completed.
3. After you have the desired prompt criteria set, click Start Analysis to analyze documents. For more information
on analyzing documents, see Running the analysis on page 42.
4. To validate the Prompt Criteria, see aiR for Review Prompt Criteria validation on page 50 and Setting up aiR for
Review Prompt Criteria validation on page 52 for more information.
5. To run multiple project set iterations on different saved searches within the current aiR for Review project, see Using project sets on page 29.
7.2 Prompt Criteria tabs and fields
Use the sections below to enter information in the necessary fields.
Note: The set of Prompt Criteria has an overall length limit of 15,000 characters.
7.2.1 Case Summary tab
The Case Summary gives the Large Language Model (LLM) the broad context surrounding a matter. It includes an
overview of the matter, people and entities involved, and any jargon or terms that are needed to understand the
document set.
This tab appears regardless of the Analysis Type selected during set up.
Note: Limit the Case Summary content to 20 or fewer sentences overall, and 20 or fewer each of People and
Aliases, Noteworthy Organizations, and Noteworthy Terms.
Fill out the following:
l Matter Overview—provide a concise overview of the case. Include the names of the plaintiff and defendant,
the nature of the dispute, and other important case characteristics.
l People and Aliases—list the names and aliases of key custodians who authored or received the documents.
Include their role and any other affiliations.
l Noteworthy Organizations—list the organizations and other relevant entities involved in the case. Highlight
any key relationships or other notable characteristics.
l Noteworthy Terms—list and define any relevant words, phrases, acronyms, jargon, or slang that might be
important to the analysis.
l Additional Context—list any additional information that does not fit the other fields. This section is typically left
blank.
Depending on which Analysis Type you chose when setting up the project, the remaining tabs will be Relevance, Key
Documents, or Issues. Refer to the appropriate tab section below for more information on filling out each one.
7.2.2 Relevance tab
This tab defines the fields and criteria used for determining if a document is relevant to the case. It appears if you
selected Relevance or Relevance and Key Documents as the Analysis Type during setup.
Fill out the following:
l Relevance Field—select a single-choice field that represents whether a document is relevant or non-relevant.
This selection cannot be changed after the first job run.
l Relevant Choice—select the field choice you use to mark a document as relevant. This selection cannot be
changed after the first job run.
l Relevance Criteria—summarize the criteria that determine whether a document is relevant. Include:
l Keywords, phrases, legal concepts, parties, entities, and legal claims.
l Any criteria that would make a document non-relevant, such as relating to a project that is not under dispute.
l Issues Field (Optional)—select a single-choice or multi-choice field that represents the issues in the case.
l Choice Criteria—select each of the field choices one by one. For each choice, write a summary in the
text box listing the criteria that determine whether that issue applies to a document. For more information,
see Developing Prompt Criteria on page 32.
Note: aiR does not make Issue predictions during Relevance review, but you can use this field for reference when
writing the Relevance Criteria. For example, you could tell aiR that any documents related to these issues are rel-
evant.
For best results when writing the Relevance Criteria:
l Limit the Relevance Criteria to 5-10 sentences.
l Do not paste in the original request for production (RFP), since those are often too long and complex to give
good results. Instead, summarize it and include relevant excerpts.
l Group similar criteria together when you can. For example, if an RFP asks for “emails pertaining to X” and “documents pertaining to X,” write “emails or documents pertaining to X.”
7.2.3 Key Documents tab
This tab defines the fields and criteria used for determining if a document is "hot" or key to the case. It appears if you
selected Relevance and Key Documents as the Analysis Type during setup.
Fill out the following:
l Key Document Field—select a single-choice field that represents whether a document is key to the case. This
selection cannot be changed after the first job run.
l Key Document Choice—select the field choice you use to mark a document as key. This selection cannot be
changed after the first job run.
l Key Document Criteria—summarize the criteria that determine whether a document is key. For best results,
limit the Key Document Criteria to 5-10 sentences. Include:
l Keywords, phrases, legal concepts, parties, entities, and legal claims.
l Any criteria that would exclude a document from being key, such as falling outside a certain date range.
7.2.4 Issues tab
This tab defines the fields and criteria used for determining whether a document relates to a set of specific topics or
issues. It appears if you selected Issues as the Analysis Type during setup.
Fill out the following:
l Field—select a multi-choice field that represents the issues in the case. This selection cannot be changed after
the first job run.
l Choice Criteria—select each of the field choices one by one. A maximum of 10 choices can be analyzed at a
time. To remove a selected choice, click the X in its row. For each choice, write a summary in the text box listing
the criteria that determine whether that issue applies to a document. Include:
l Keywords, phrases, legal concepts, parties, entities, and legal claims.
l Any criteria that would exclude a document from relating to that issue, such as falling outside a certain
date range.
Note: The field choices cannot be changed after the first job run. However, you can still edit the summary in the text
box.
For best results when writing the Choice Criteria:
l Limit the criteria description for each choice to 5-10 sentences.
l Each of the choices must have its own criteria. If a choice has no criteria, either fill it in or remove the choice.
7.3 Using prompt kickstarter
aiR for Review's prompt kickstarter enables you to efficiently create a project's Prompt Criteria from existing case documents, such as requests for production, review protocols, complaints, or case memos. Upload up to five documents (with a total character count of up to 150,000), and aiR for Review analyzes them to automatically draft the initial Prompt Criteria. This enables you to start a new project with minimal effort. See Job capacity and size limitations for more information on document and prompt limits.
You can repeat this process as needed to refine the Prompt Criteria before starting the first job analysis. After the
analysis begins, the Draft with AI option is disabled.
Notes:
l There are no additional charges to use prompt kickstarter.
l Prompt kickstarter uses the large language model (LLM) available in your aiR for Review region. For more information, refer to Regional availability of aiR for Review on page 8.
l The feature currently does not build prompt criteria for issues on any of the analysis type tabs (Relevance,
Relevance & Keys, or Issues). Full support for issues is planned for a future release.
To use prompt kickstarter:
1. Click the Draft with AI button or click the option from the More ( ) list next to the Collapse (<<) icon.
2. Upload content using the methods below:
Note: The maximum number of documents is five. The combined total character limit is 150,000.
l Drop file here or browse files—use this to drag or upload document files. Supported formats are TXT,
DOCX, and PDF.
l Add text manually—click to add text manually or copy/paste content from a document.
3. Select the Document Type for each uploaded file. Options include Review Protocol, Request for Production, General Case Memo, Complaint, Key Document, and Other. If you select Other, enter a document type description in the text box. The number of characters in each file appears below the filename to help keep track of the 150,000 limit.
4. Repeat steps 2-3 to upload more files.
5. To delete a file from the list, click the circle X icon.
6. Click Draft to begin drafting the Prompt Criteria. Results typically appear within 1-2 minutes.
Note: You cannot run document analysis during the drafting process.
7. Review and edit the draft Prompt Criteria in the available tabs. Click Save to keep the changes or Discard to
delete them.
8. Repeat these steps with other documents and information as needed until you have the desired set of Prompt Criteria.
9. Once you have the desired Prompt Criteria, click Start Analysis to analyze documents. For more information on analyzing documents, see Running the analysis on page 42.
Note: The Draft with AI option is unavailable after an analysis job begins.
7.4 Editing and collaboration
If two users edit the same Prompt Criteria version simultaneously, the most recent save will overwrite previous
changes. For this reason, it can be beneficial to have only one user edit a project's Prompt Criteria at any given time.
Defining distinct roles for users when updating Prompt Criteria may help streamline the process.
To facilitate team collaboration outside of RelativityOne, use the Export option. It exports the contents of the currently
displayed Prompt Criteria to an MS Word file. For more information, see Exporting Prompt Criteria on page 40.
7.5 How Prompt Criteria versioning works
Each aiR for Review project comes with automatic versioning controls so that you can compare results from running
different versions of the Prompt Criteria. Each analysis job that uses a unique set of Prompt Criteria counts as a new
version.
When you run aiR for Review analysis, the initial Prompt Criteria are saved as Version 1. Edits to the criteria create Version 2, which you can continue to modify until you finalize it by running the analysis again to see the results. Subsequent edits follow the same pattern, creating new versions that are finalized with each analysis run.
To see dashboard results from an earlier version, click the arrow next to the version name in the project details strip.
From there, select the version you want to see.
7.6 How version controls affect the Viewer
When you select a Prompt Criteria version from the dashboard, this also changes the version results you see when
you click on individual documents from the dashboard. For example, if you are viewing results from Version 2, clicking
on the Control Number for a document brings you to the Viewer with the results and citations from Version 2. If you
select Version 1 on the dashboard, clicking the Control Number for that document brings you to the Viewer with results
and citations from Version 1.
When you access the Viewer from other parts of Relativity, it defaults to showing the aiR for Review results from the
most recent version of the Prompt Criteria. However, you can change which results appear by using the linking
controls on the aiR for Review Jobs tab. For more information, see Managing aiR for Review jobs on page 63.
7.7 Revising the Prompt Criteria
After running an aiR for Review job for the first time on the sample set, the initial results on the dashboard can be used
as feedback for improving the prompt criteria. The cycle of examining the results, revising the prompt criteria, then
running a new job on the sample documents is known as iterating on the prompt criteria. Refer to Best practices on
page 15 for more information. Also see Job capacity, size limitations, and speed for details on document and prompt
limits.
In particular, ask the following questions about each document:
l Did aiR for Review and the human reviewer agree on the relevance of the document?
l Read the aiR for Review rationale and considerations. Do they make sense?
l Do the citations make sense?
For all of these, if you see something incorrect, make notes on where aiR seems to be confused and rephrase the
prompts. Here are the most common sources of confusion:
l Insufficient context—For example, an internal acronym, key person, or code word may not have been
defined. To fix this, add it to the proper section of the Case Summary tab.
l Ambiguous instructions or unclear language—To fix this, edit the instructions on the Relevance, Key Documents, or Issues tabs.
In general, consider how you would help a human reviewer making the same mistakes. For example, if aiR for Review
is having trouble identifying a specific issue, try explaining the criteria for that issue with simpler language.
After you have revised the prompt criteria to address any weak points, run the analysis again. Continue refining the prompt criteria until the results accurately predict the human coding decisions for all test documents in the sample. To run validation on the prompt criteria in Review Center, refer to Setting up aiR for Review Prompt Criteria validation on page 52.
Note: aiR for Review only looks at the extracted text of each document. If a human reviewer marked a document as
relevant because of an attachment or other criteria beyond the extracted text, aiR for Review will not be able to
match that relevance decision.
For additional resources, refer to these articles on the Community site:
l Workflows for Applying aiR for Review
l aiR for Review example project
l Selecting a Prompt Criteria Iteration Sample for aiR for Review
l Evaluating aiR for Review Prompt Criteria Performance
7.8 Exporting Prompt Criteria
You can export the contents of all available criteria tabs for the currently displayed Prompt Criteria to an MS Word file
using the Export option. This can be helpful for reviewing criteria, saving it to use later, and collaborating with others
on it.
1. In the Prompt Criteria panel, click the More ( ) icon next to the Collapse (<<) icon.
2. Select Export.
3. Click Save and Export to proceed with exporting the content. The exported file is saved to your default browser
download folder.
Audit logs track all export actions from your project.
8 Running the analysis
After setting up the project and developing the prompt criteria, you're ready to run the analysis on your documents.
To run the analysis:
1. On the upper right of the dashboard, click Analyze [X] documents.
2. Review the confirmation summary:
l Total Docs—number of documents to be analyzed.
l Est. Run Time—estimated time to analyze the selected documents and return results. This does not include time waiting in the job queue.
l Est. Time to Start—estimated wait time from job submission to the start of the analysis process. Longer
wait times occur when other jobs are queued across tenants.
l Email Notification—enable the toggle if you want to send email notifications when a job completes, fails, or is canceled. Your email address is automatically entered in the text box. Enter any additional recipient email addresses, separated by commas or semicolons. Email notifications are only sent when the toggle is enabled.
3. Click Start Analysis.
Once the analysis job starts, the results begin to appear beside each document in the Analysis Results panel of the
dashboard. For more information on analyzing the results, see Analyzing aiR for Review results on page 44.
The Analyze button is disabled during the analysis process, and a Cancel option appears. When the job completes,
the Analyze button is active again.
Note: If you try to run a job that is too large, or while too many jobs are already running, an error appears. You can still save and edit the prompt criteria, but you will not be able to start the job. For more information, see Job capacity, size limitations, and speed on page 25.
After the first analysis completes, use the results to fine-tune the prompt criteria. For more information on:
l the dashboard, see Navigating the aiR for Review dashboard on page 17.
l the fine-tuning process, see Revising the Prompt Criteria on page 39.
l the results fields, see Analyzing aiR for Review results on the next page.
9 Analyzing aiR for Review results
When aiR for Review analyzes documents, it makes predictions about the relevance of documents to different topics
or issues. If it predicts that a document is relevant or relates to an issue, it includes a written justification of that
prediction, as well as a counterargument and in-text citations. You can view these predictions, citations, and
justifications either from the Viewer, or as fields on document lists.
9.1 How aiR for Review analysis results work
When aiR for Review finishes its analysis of a document, it returns a prediction about how the document should be
categorized, as well as its reasons for that prediction. This analysis has several parts:
l aiR Prediction—the relevance, key, or issue label that aiR predicts should apply to the document. See Predictions versus document coding below.
l aiR Score—a numerical score that indicates how strongly relevant the document is or how well it matches the
predicted issue. See Understanding document scores on the next page.
l aiR Rationale—an explanation of why aiR chose this score and prediction.
l aiR Considerations—a counterargument explaining why the prediction might possibly be wrong.
l aiR Citation [1-5]—excerpts from the document that support the prediction and rationale.
In general, citations are left empty for non-relevant documents and documents that don't match an issue. However,
aiR occasionally provides a citation for low-scoring documents if it helps to clarify why it was marked non-relevant. For
example, if aiR is searching for changes of venue, it might cite an email that ends with "Hang on, gotta run, more later"
as worth noting, even though it does not consider this a true change of venue request.
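If you export or post-process these results outside Relativity, it can help to picture each analysis result as a small record containing the parts listed above. The sketch below is only an illustration built from the descriptions in this section; it is not Relativity's internal schema, and the sample values are invented.

```python
# Illustrative only: a lightweight record mirroring the parts of an aiR for Review
# result described above. This is not Relativity's internal schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AirResult:
    prediction: str                 # relevance, key, or issue label aiR predicts
    score: int                      # -1 (error) or 0-4 (see Understanding document scores)
    rationale: str                  # why aiR chose this score and prediction
    considerations: str             # counterargument: why the prediction might be wrong
    citations: List[str] = field(default_factory=list)  # up to five supporting excerpts

example = AirResult(
    prediction="Relevant",
    score=3,
    rationale="Discusses the disputed contract terms.",
    considerations="The discussion may concern an unrelated agreement.",
    citations=["...the April amendment to the supply contract..."],
)
print(example.prediction, example.score)
```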
9.1.1 Predictions versus document coding
Even though aiR refers to the relevance, key, and issue fields during its analysis, it does not actually write to these
fields. All of aiR's results are stored in aiR-specific fields, such as the Prediction field. This makes it easier to compare
aiR's predictions to human coding while refining the prompt criteria.
If you have refined a set of Prompt Criteria to the point that you are comfortable adopting those predictions, you can
copy those predictions to the coding fields using mass-tagging or other methods.
For ideas on how to integrate aiR for Review results into a larger review workflow, see Using aiR for Review with
Review Center on page 71.
9.1.2 Variability of results
Due to the nature of large language models, output results may vary slightly from one run to another, even using the
same inputs. aiR's scores may shift slightly, typically between adjacent levels, such as from 1-not relevant to 2-borderline. Significant changes, like moving from 4-very relevant to 1-not relevant, are rare.
9.2 Understanding document scores
aiR scores documents from 0 to 4 according to how relevant they are or how well they match an issue. The higher the
number, the more relevant the document is predicted to be. A score of -1 is assigned to any errored documents.
Because these documents were not properly analyzed, they cannot receive a normal score.
The aiR for Review scores are:
Score Description
-1 The document either encountered an error or could not be analyzed. For more information, see How document errors are handled on page 48.
0 The document contains no useful information or is “junk” data, such as an empty document or random characters.
1 The document is predicted not relevant. aiR did not find any evidence that it relates to the case or issue.
2 The document is predicted borderline relevant. aiR found some content that might relate to the case or issue. It usually has citations.
3 The document is predicted relevant to the issue. Citations show the relevant text.
4 The document is predicted very relevant to the issue. aiR found direct, strong evidence that the content relates to the case or issue. Citations show the relevant text.
9.3 Viewing results from the aiR for Review dashboard
Within a project, you can view results using the aiR for Review dashboard. This dashboard includes not just results
fields, but calculated metrics such as the number of documents with predictions that conflict with human coding.
To view the dashboard, select a project from the aiR for Review Projects tab. For detailed information on the
dashboard layout, see Navigating the aiR for Review dashboard on page 17.
9.4 Viewing results for individual documents from the Viewer
From the Viewer, you can see the aiR for Review results for each individual document. Predictions show up in the left-hand pane, and all citations are automatically highlighted.
Note: You will only see analysis highlights if you have the necessary permissions. Without these, the aiR for Review
Analysis icon does not display. For more information, refer to Permissions.
To view a document's aiR for Review results in the Viewer, click on the aiR for Review Analysis icon ( ) to expand
the pane. The aiR for Review Analysis pane displays the following:
1. Prompt Criteria version
2. Analysis Name
3. Prediction
4. Rationale and Considerations
5. Citation
For more information, see Viewer documentation.
Notes:
l If you run a new job on documents that were part of a previous job, you may temporarily see both sets of results linked to those documents. The old results will be unlinked after the new job is complete.
l To avoid seeing doubled results, hide the previous result set using the aiR for Review Jobs tab.
9.4.1 Citations and highlighting
A maximum of five citations will be displayed with the document.
To jump to a specific citation, click the citation card. You can also toggle highlighting on or off by clicking the toggle at
the top of the aiR for Review Analysis pane.
9.4.1.1 Citation colors
The highlight colors depend on the type of citation:
l Relevance citation—orange.
l Key Document citation—purple.
l Issue citation—color is chosen in the Color Map application. For more information, see Color Map on the
Relativity documentation site.
If the same passage is cited by two types of results, the highlight blends their colors.
9.4.1.2 Citation order
The results in the aiR for Review Analysis pane are first ordered by:
l Relevance citation
l Key Document citation
l Issue citation
The Issue results are ordered according to each issue choice's Order value. For information on changing the choice
order, see Choices in the Admin guide.
Finally, duplicate results are ordered from most recent to oldest.
9.4.2 Adding aiR for Review fields to layouts
Because of how aiR for Review results fields are structured, you cannot add them directly to layouts. If the highlighting
is not enough, you can add an object list to the layout that shows all linked results. For more information, see Adding
and editing an object list in the Admin guide.
9.5 Filtering and sorting aiR for Review results
Documents have a one-to-many relationship with aiR for Review's results fields. For example, a single document might be linked to several Issue results. This creates some limitations when sorting and filtering results:
l Filter one column at a time in the Document list. Combining filters may include more results than you expect.
l If you need to filter by more than one field at a time, we recommend using search conditions instead.
l You can add these fields to views and widgets, but you cannot sort the view or the widget by these fields.
9.6 How document errors are handled
If aiR encounters a problem when analyzing a document, it will not return results for that document. Instead, it scores
the document as -1 and returns an error message in the Error Details column. Your organization is not charged for any
errored documents, and they do not count towards your organization's aiR for Review total document count.
The possible error messages are:
Error message Description Retry?
Completion is not valid JSON The LLM encountered an error. Yes
Failed to parse completion The large language model (LLM) encountered an error. Yes
Document text is empty The extracted text of the document was empty. No
Document text is too long The document's extracted text was too long to analyze. No
Document text is too short There was not enough extracted text to analyze in the document. No
Model API error occurred A communication error occurred between the large language model (LLM) and Relativity. This is usually a temporary problem. Yes
Uncategorized error occurred An unknown error occurred. Yes
Ungrounded citations detected in completion The results for this document have a chance of including an ungrounded citation. For more information, see Ungrounded citations on the next page. Yes
If the Retry? column says Yes, you may get better results trying to run that same document a second time. For errors
that say No in that column, you will always receive an error running that specific document.
If you retry a document and keep receiving the same error, the document may have permanent problems that aiR for
Review cannot process.
9.7 Ungrounded citations
An ungrounded citation may occur for two reasons:
l When the aiR results citation cannot be found anywhere in the document text. This is usually caused by formatting issues. However, just in case the LLM is citing sentences without a source, we mark it as a possible ungrounded citation.
l When the aiR results citation comes from something other than the document itself, but which is still part of the full prompt. For example, it might cite text that was part of the Prompt Criteria instead of the document's extracted text.
When aiR receives the analysis results from the LLM, it checks all citations against the prompt text. Any possible
ungrounded citations are marked as errors, and they receive a score of -1 instead of whatever score they were
originally assigned. If retrying documents with these errors does not succeed, we recommend manually reviewing
them instead.
Actual ungrounded citations are extremely rare. However, highly structured documents, such as Excel spreadsheets
and PDF forms, are more likely to confuse the detector and trigger these errors.
10 aiR for Review Prompt Criteria validation
Prompt Criteria validation gathers metrics to check whether the Prompt Criteria are effective and defensible before
using them on a larger data set. Using aiR for Review and Review Center in tandem, you can set up a smaller
document sample, oversee reviewers, and compare aiR's relevance predictions to actual coding results.
Note: This functionality is currently only enabled for the Relevance and Relevance & Key analysis types.
10.1 Prerequisite
The Review Center application must be installed to run the validation workflow with aiR for Review.
10.2 How validation fits into aiR for Review
aiR for Review leverages the Prompt Criteria across a three-phase workflow:
1. Develop—the user writes and iterates on the Prompt Criteria (review instructions) and tests them on a small document set until aiR’s recommendations align sufficiently with expected relevance and issue classifications.
2. Validate—the user leverages the integration between aiR for Review and Review Center to compare results and validate the Prompt Criteria.
3. Apply—the user applies the verified Prompt Criteria to much larger sets of documents.
The Prompt Criteria validation process covers phase 2, Validate.
10.3 High-level Prompt Criteria validation workflow
The diagram details the steps (in orange) for Prompt Criteria validation, which can occur after the Prompt Criteria
develop phase.
The Validate and Apply phases involve several steps that cross between aiR for Review and Review Center:
1. Identify the target review set and choose validation settings (in aiR for Review). Set up the validation sample by choosing the sample size, desired margin of error for the validation statistics, and other settings.
2. Run aiR for Review on the sample to receive predictions (in aiR for Review). When a validation sample is created, a Review Center queue is automatically created. Run the sample documents through aiR for Review to obtain the relevance predictions that will be used for comparison with the manual human reviews in Review Center.
3. Manually review and code the documents in the validation sample (in Review Center). Human reviewers code the documents in the sample for comparison to the aiR predictions. For the sake of validation, the human coding decisions are considered "correct." The Review Center dashboard tracks reviewers’ progress and compares their choices to aiR.
4. Evaluate the statistical results of the validation (in Review Center). After human reviewers finish reviewing the validation sample, final validation statistics display comparing their results with aiR for Review’s predictions. These results are then evaluated.
5. Accept or reject the validation results (in Review Center). After reviewing the results, decide whether to accept or reject them. If the results are accepted, the validated Prompt Criteria can be used for all documents in the target data source, and the process moves to aiR for Review to apply the criteria to the larger data set. If they are rejected, the team returns to developing the Prompt Criteria.
6. Optionally apply the accepted, validated Prompt Criteria against the entire target population (in aiR for Review). The validated Prompt Criteria can be applied to the entire target population by creating an Apply project set. See Using project sets on page 29 for more information.
10.4 Setting up aiR for Review Prompt Criteria validation
After developing the Prompt Criteria, you can run them through the validation process to gather metrics and ensure they are effective and defensible before using them on a larger data set. Begin by configuring the validation sample settings in aiR for Review. Based on these parameters, aiR pulls a random sample and creates a Review Center queue for its evaluation. After humans review and code the documents in Review Center, you can return to aiR for Review to run the accepted Prompt Criteria on the larger document set.
This general workflow involves:
1) Setting up the validation sample by choosing the sample size, desired margin of error for the validation
statistics, and other settings.
2) Running aiR for Review on the sample to receive relevance predictions.
3) Going to Review Center to begin the human review and coding of the validation sample.
4) Returning to aiR for Review to run the accepted prompt criteria on the larger document set if desired.
Follow the steps below to use aiR for Review for Prompt Criteria validation:
1. Choose the desired project set version number to validate.
2. Click the project set + sign.
3. Click I want to validate the prompt criteria to a document population.
4. Fill out the Validation Settings fields to determine the validation sample. For more information on the fields,
see Validation Settings fields on page 54.
5. Click Create Validation Set & Queue. A validation set is generated in aiR for Review, and a corresponding validation queue is established in Review Center. The project set state will transition from Develop to Validate.
6. Review the confirmation summary modal and provide email addresses, if needed, then click Start Analysis.
The aiR for Review predictions appear in the Validate Metrics tab. The banner lets you know that a validation
queue has been created in Review Center to facilitate review of the validation set there.
7. Click Go to Review Center to begin the human review and coding of the validation sample in Review Center.
Refer to Prompt Criteria validation in Review Center on page 55 for the steps to follow in Review Center.
Note: The Review Center application must be installed to run the validation workflow with aiR for Review. Otherwise, an error message displays.
In Review Center, human reviewers manually code the sample set of documents. The dashboard generates statistics
comparing the human coding to aiR’s predictions. After reviewers have finished coding all the documents in the
sample, you can evaluate the results and decide whether to accept or reject them. Refer to Applying the validated
results or developing different Prompt Criteria below for more information.
10.4.1 Applying the validated results or developing different Prompt Criteria
The following outlines the next steps in aiR for Review, depending on whether the validation results were accepted or
rejected in Review Center.
If accepted:
l A green check mark displays next to the project set version number in aiR for Review, and the version is labeled "Validation Accepted."
l To apply the validated prompt criteria on the larger document set, either click the Run Target Population button or click the project set + sign and select I want to apply the prompt criteria to the target document population. Then go through the process of reviewing the confirmation summary and clicking Start Analysis. For details on running an analysis and project sets, refer to Running the analysis and Using project sets on page 29.
Note: If the data source (saved search) exceeds the maximum of 250,000 documents, it is necessary to segment the data into multiple saved searches and apply each one separately to avoid errors.
l After running the accepted Prompt Criteria version on the target population in aiR for Review, a green check
mark and "Review Complete" notification appear next to the project set. No additional validations can be run on
the project after validation is accepted.
If rejected:
l A red X displays next to the version number.
l Click the + sign to create a new project set and repeat the prompt criteria development and validation process
again.
10.4.2 Validation Settings fields
Enter information into the following fields:
l Target Data Source for Review—select the target population of documents from which the validation sample
is drawn. Be sure this data source contains all documents that will be validated and run through aiR for Review,
regardless of maximum job size. Documents are randomly chosen from this group and may include those used
during the prompt criteria development process.
l Validation Reviewer Groups—select the user groups you want to review the validation sample. They will
access the sample set and code the documents using the Review Center queue tab. We recommend using
experienced reviewers for the validation. It can also be helpful if the reviewers were not involved in the Prompt
Criteria creation and iteration process.
l Sample Settings—these settings determine the characteristics of the sample needed to achieve desired validation results.
l Size—enter the number of documents for the validation sample size. The default is set to 1000.
l Richness Estimate—enter the estimated percentage of relevant documents in the data source used to
determine sample size and confidence levels. A default value of 25% is commonly used after data culling
for production reviews. A lower value will increase the recommended sample size for a given margin of
error.
l Margin of Error Estimate (Recall)—click the plus and minus symbols to adjust the desired margin of error for the Recall statistic. The default of 5% MoE is a widely accepted threshold for defensible validation and recall calculations.
Advanced Settings
l Treat Borderlines as Relevant—the toggle is enabled by default. When enabled, documents with a score of 2 (Borderline) are classified as Relevant in the validation results. When the toggle is disabled, Borderline documents are classified as Not Relevant.
l Display aiR Results in Viewer—the toggle is enabled by default. When enabled, reviewers will see aiR for Review’s predictions during sample validation. While seeing these decisions can help spot true positives, it may also bias reviewers towards agreeing with the AI-generated predictions. You can disable the toggle to hide predictions.
l Email Notification Recipients—enter email addresses, separated by semicolons, for individuals to be notified
when manual queue preparation finishes, a queue becomes empty, or an error occurs during queue population.
10.5 Prompt Criteria validation in Review Center
When you validate aiR for Review Prompt Criteria, the validation process compares aiR for Review's AI-based
relevance predictions to human coding decisions. Review Center calculates the validation statistics and helps you
organize, track, and manage the human side of the coding process.
10.5.1 How Review Center fits into the validation process
When you create a validation set, there are several steps that cross between aiR for Review and Review Center:
1. Set up the validation sample—choose the sample size, settings, and desired margin of error for the validation
statistics.
2. Run aiR for Review on the sample—this creates the relevance predictions that will be used for comparison.
3. Code the sample using skilled human reviewers—this records human coding decisions for comparison to
the AI predictions. For the sake of validation, these are considered the "correct" decisions.
4. Review the validation statistics—these statistics measure any differences between the AI predictions and the human decisions. They also measure what percentage of the overall document set is likely relevant.
5. Accept or reject the results—this either confirms the Prompt Criteria as effective for use with a larger document set, or re-opens them for editing and improvement.
6. Apply or improve the Prompt Criteria—return to aiR for Review to either run the Prompt Criteria on larger
sets of documents, or to improve the Prompt Criteria and try again.
During validation, steps 3 through 5 take place in Review Center. For information on the other steps, see Setting up
aiR for Review Prompt Criteria validation on page 52.
For a general overview, see aiR for Review Prompt Criteria validation on page 50.
10.5.2 Managing the coding process
After you create the Prompt Criteria validation queue, the coding process is similar to any other validation queue in
Review Center. Reviewers code documents using the Review Queues tab, and administrators track and manage the
queue through the main Review Center tab.
10.5.2.1 Administering the queue
As coding progresses, the Review Center dashboard displays metrics and controls related to queue progress. The
main validation statistics will not appear until all documents have been coded and the validation process is complete.
From the dashboard, the queue administrator can pause or cancel the queue, view coding progress, and edit some
settings.
l For information on the Review Center dashboard, see Monitoring a Review Center queue in the Review Center
guide.
l For information on validation queue settings, see Monitoring a validation queue in the Review Center guide.
Note: The statistics produced during Prompt Criteria validation are similar to the ones produced for a regular
Review Center queue, but not identical. For more information, see Prompt Criteria validation statistics on page 58.
10.5.2.2 Coding in the queue
Reviewers access the validation queue from the Review Queues tab like all other queues.
During review:
l Have reviewers code documents from the sample until all documents have been served up.
l We strongly recommend coding every document in the validation queue. Skipping documents lowers the accuracy of the validation statistics.
For full reviewer instructions, see Reviewing documents using Review Center in the Review Center guide.
Note: Validation does not check for human error. We recommend that you conduct your own quality checks to
make sure reviewers are coding consistently.
10.5.3 Reviewing validation statistics
When reviewers have finished coding all the documents in the queue, review the validation statistics. You can use
these to determine whether to accept the validation results, or reject them and try again with a different set of Prompt
Criteria.
The statistics for Prompt Criteria validation include:
l Elusion rate—the percentage of documents that aiR predicted as non-relevant, but that were actually relevant.
l Precision—the percentage of documents that aiR predicted as relevant that were truly relevant.
l Recall—the percentage of truly relevant documents that were found using the current Prompt Criteria.
l Richness—the percentage of relevant documents across the entire document set.
l Error rate—the percentage of documents that received errors in aiR for Review.
The ranges listed below each statistic reflect the margin of error.
Note: The exact criteria for whether to accept or reject may vary depending on your situation, but the goal is to have
the AI predictions match the decisions of the human reviewers as closely as possible. In general, look for a low
elusion rate and high recall.
For more information on how the statistics are calculated, see Prompt Criteria validation statistics on the next page.
10.5.4 Accepting or rejecting results
When the human coding decisions are complete, you can review how effectively the AI matched human decisions,
then decide whether to accept the results and use the Prompt Criteria as-is, or whether to reject the results and
improve the Prompt Criteria.
After all documents in the validation queue have been reviewed, a ribbon appears underneath the Queue Summary
section. This ribbon has two buttons: one to accept the validation results, and one to reject them.
If you click Accept:
l In aiR for Review, you can no longer create new Develop sets.
l The queue status changes to Validation Complete.
If you click Reject:
l In aiR for Review, you can create a new Develop set.
l The queue status changes to Rejected.
After you make the choice, the Validation Progress strip on the dashboard displays the final validation statistics and a
link back to the aiR for Review project. From there, you can either use the finalized Prompt Criteria on a larger
document set, or edit the Prompt Criteria and continue improving it.
For information on continuing work in the aiR for Review tab, see Setting up aiR for Review Prompt Criteria validation
on page 52.
Note: If you reject this validation, you can run validation again later. Even if you reject the results, Review Center
keeps a record of them. For more information, see Viewing results for previous validation queues below.
10.5.4.1 Manually rejecting validation results
If you change your mind after accepting the validation results, you can still reject them manually.
To reject the results after accepting them:
1. On the right side of the Queue Summary section, click on the three-dot menu and select Reject Validation.
2. Click Reject.
After you have rejected the validation results, you can resume normal reviews in the main queue.
10.5.4.2 Viewing results for previous validation queues
After you have run validation, you can switch back and forth between viewing the statistics for the current validation
attempt and any previous validation queues that were completed or rejected. These queues are considered linked.
Viewing the statistics for linked queues does not affect which queue is active or interrupt reviewers.
To view linked queues:
1. Click the triangle symbol near the right side of the Queue Summary section.
A drop-down menu listing all linked queues appears.
2. Select the queue whose stats you want to view.
When you're done viewing the linked queue's stats, you can use the same drop-down menu to select the main queue
or other linked queues.
10.5.5 How changes affect the validation results
The validation process assumes that the Prompt Criteria, document set, and coding decisions will all remain the same.
If any of these things change, the validation results will also change. Sometimes this can be solved by recalculating
the validation statistics, but often it means creating a new validation queue.
10.5.5.1 Scenarios that require recalculation
The following scenarios can be fixed by recalculating statistics:
l Changing coding decisions on documents within the validation sample
l Re-running aiR for Review to fix errored documents
In these cases, the sample itself is still valid, but the numbers have changed. For these situations, recalculate the
validation results to see accurate statistics. For instructions on how to recalculate results, see Recalculating validation
results below.
10.5.5.2 Scenarios that require a new validation queue
The following scenarios require a new validation queue:
l Changing the Prompt Criteria
l Adding or removing documents from the document set after validation starts
In these cases, the sample or the criteria themselves have changed, so recalculating does not help. For these
situations, create a new validation queue.
10.5.5.3 Recalculating validation results
If you have re-coded any documents from the validation sample, you can recalculate the results without having to re-
run validation. For example, if reviewers had initially skipped documents in the sample or coded them as non-relevant,
you can re-code those documents outside the queue, then recalculate the validation results to include the new coding
decisions.
To recalculate validation results:
1. On the right side of the Queue Summary section, click on the three-dot menu and select Recalculate Validation.
2. Click Recalculate.
10.6 Prompt Criteria validation statistics
aiR for Review Prompt Criteria validation provides several metrics for evaluating your Prompt Criteria. Together, these
metrics can help you determine whether the Prompt Criteria will work as expected across the full data set.
Because Prompt Criteria validation is checking the effectiveness of the Prompt Criteria, rather than checking
completeness of a late-stage review, these statistics are calculated slightly differently than standard Review Center
validation statistics. For the standard Review Center metrics, see Review validation statistics in the Review Center
guide.
For more details on the differences between Prompt Criteria validation statistics and standard Review Center
statistics, see How Prompt Criteria validation differs from other validation types on page 62.
10.6.1 Defining the validation statistics
Prompt Criteria validation centers on the following statistics. For all of these, it uses a 95% confidence interval:
l Elusion rate—the percentage of documents that aiR predicted as non-relevant, but that were actually relevant.
l Precision—the percentage of documents that aiR predicted as relevant that were truly relevant.
l Recall—the percentage of truly relevant documents that were found using the current Prompt Criteria.
l Richness—the percentage of relevant documents across the entire document set.
l Error rate—the percentage of documents that received errors in aiR for Review.
In everyday terms, you can think of these as:
l Elusion rate: "How much of what we’re leaving behind is relevant?"
l Precision: "How much junk is mixed in with what we think is relevant?"
l Recall: "How much of the relevant stuff can we find?”
l Richness: "How much of the overall document set is relevant?"
l Error rate: "How many documents aren't being read at all?"
For each of these metrics, the validation queue assumes that you trust the human coding decisions over aiR's
predictions. It does not second-guess human decisions.
Note: Validation does not check for human error. We recommend that you conduct your own quality checks to
make sure reviewers are coding consistently.
10.6.2 How documents are categorized for calculations
aiR for Review tracks one field that represents whether a document is relevant or non-relevant, with only one choice
for Relevant. Any other choices available on that field are considered non-relevant.
When you validate Prompt Criteria, Review Center uses that field to categorize coding decisions:
l Relevant—the reviewer selected the relevant choice.
l Non-relevant—the reviewer selected a different choice.
l Skipped—the reviewer skipped the document.
aiR for Review's relevance predictions have a similar set of possible values:
l Positive—The document's score is greater than or equal to the cutoff.
l Negative—The document's score is at least zero, but less than the cutoff.
l Error—The document's score is -1, meaning that aiR for Review could not process the document.
If Treat Borderlines as Relevant is enabled, the cutoff score is 2 (Borderline). That means that all documents scored 2,
3, or 4 are predicted positive. If Treat Borderlines as Relevant is disabled, the cutoff score is 3 (Relevant). In that case,
only documents scored 3 or 4 are predicted positive.
10.6.2.1 Variables used in the metric calculations
Based on the possible combinations of coding decisions and relevance predictions, Review Center uses the following
categories when calculating statistics. Each of these categories, except for Skipped, is represented by a variable in
the statistical equations.
Document Category Variable for Equations Coding Decision aiR Prediction
Error E (any) error
Skipped (N/A) skipped (any)
True Positive TP relevant positive
False Positive FP non-relevant positive
False Negative FN relevant negative
True Negative TN non-relevant negative
Note: Different regions and industries may use "relevant," "responsive," or "positive" to refer to documents that
directly relate to a case or project. For the purposes of these calculations, the terms are used interchangeably.
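As a rough illustration of the table above, the sketch below assigns a single sampled document to one of these categories based on its human coding decision and its aiR score, using the cutoff behavior described earlier (a score of 2 counts as positive only when Treat Borderlines as Relevant is enabled). This is a simplified sketch of the documented rules, not Relativity's implementation.

```python
# Simplified sketch of the categorization rules described above; not Relativity's code.
def categorize(coding: str, score: int, treat_borderline_as_relevant: bool = True) -> str:
    """coding is 'relevant', 'non-relevant', or 'skipped'; score ranges from -1 to 4."""
    if coding == "skipped":
        return "Skipped"        # excluded from the statistics entirely
    if score == -1:
        return "E"              # errored document
    cutoff = 2 if treat_borderline_as_relevant else 3
    predicted_positive = score >= cutoff
    if predicted_positive:
        return "TP" if coding == "relevant" else "FP"
    return "FN" if coding == "relevant" else "TN"

print(categorize("relevant", 2))          # TP with the default cutoff of 2
print(categorize("relevant", 2, False))   # FN when borderlines are not treated as relevant
```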
10.6.2.2 How validation handles skipped documents
We strongly recommend coding every document in the validation queue. Skipping documents reduces the accuracy of
the validation statistics. The Prompt Criteria validation statistics ignore skipped documents and do not count them in
the final metrics.
10.6.3 Prompt Criteria validation metric calculations
When you validate a set of Prompt Criteria, each metric is calculated as follows.
10.6.3.1 Elusion rate
This is the percentage of documents that aiR predicted as non-relevant, but that are actually relevant.
Elusion = (False negatives) / (False negatives + true negatives)
The elusion rate gives an estimate of how many relevant documents would be missed if the current Prompt Criteria
were used across the whole document set.
10.6.3.2 Precision
This is the percentage of documents that aiR predicted as relevant that were truly relevant.
Precision = (True positives) / (True positives + false positives)
Precision shares a numerator with the recall metric, but the denominators are different. In precision, the denominator
is "what the Prompt Criteria predicts is relevant;" in recall, the denominator is "what is truly relevant."
10.6.3.3 Recall
This is the percentage of truly relevant documents that were found using the current Prompt Criteria.
Recall = (True positives) / (True positives + false negatives)
Recall shares a numerator with the precision metric, but the denominators are different. In recall, the denominator is
"what is truly relevant;" in precision, the denominator is "what the Prompt Criteria predicts is relevant."
10.6.3.4 Richness
This is the percentage of relevant documents across the entire review.
Richness = (True positives + false negatives) / (True positives + false positives + false negatives + true negatives)
The richness calculation counts the documents that reviewers coded relevant (true positives and false negatives), then divides that count by all documents except errored or skipped ones.
10.6.3.5 Error rate
This is the percentage of documents that could not be processed by aiR for Review and received a score of -1.
Error rate = (Errored documents) / (Errors + true positives + false positives + false negatives + true negatives)
This could also be written as:
Error rate = (Errored documents) / (Sample size)
The error rate counts all errors, then divides them by the total number of documents in the sample.
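To make the formulas in this section concrete, the sketch below computes all five metrics from one hypothetical validation sample. The counts are invented purely for illustration and do not come from any real project.

```python
# Hypothetical counts from a 1,000-document validation sample; invented to illustrate
# the formulas above, not taken from any real project.
TP, FP, FN, TN, E = 220, 60, 30, 680, 10   # true/false positives and negatives, plus errors

elusion    = FN / (FN + TN)                    # 30 / 710  ≈ 0.042
precision  = TP / (TP + FP)                    # 220 / 280 ≈ 0.786
recall     = TP / (TP + FN)                    # 220 / 250 = 0.880
richness   = (TP + FN) / (TP + FP + FN + TN)   # 250 / 990 ≈ 0.253
error_rate = E / (E + TP + FP + FN + TN)       # 10 / 1000 = 0.010

for name, value in [("Elusion", elusion), ("Precision", precision),
                    ("Recall", recall), ("Richness", richness), ("Error rate", error_rate)]:
    print(f"{name}: {value:.1%}")
```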
10.6.4 How the confidence interval works
When Review Center reports validation statistics, it includes a range underneath the main estimate. The main
estimate is the "point estimate," meaning the value we estimated from the sample, and the range beneath is the
confidence interval.
Prompt Criteria validation uses a 95% confidence interval when reporting statistics. This means that 95% of the time,
the range contains the true statistic for the entire document set.
This interval uses the Clopper-Pearson calculation method, which is statistically conservative. It is also asymmetrical,
so the upper and lower limits of the range may be different distances from the point estimate. This is especially true
when the point estimate is close to 100% or 0%. For example, if a validation sample shows 99% recall, there's lots of
room for that to be an overestimate, but it cannot be an underestimate by more than one percentage point.
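If you want to reproduce a Clopper-Pearson interval outside Relativity, the sketch below uses the standard beta-distribution formulation via SciPy. The inputs are hypothetical, and the ranges Relativity reports remain the authoritative values.

```python
# Standard Clopper-Pearson ("exact") binomial interval for k successes out of n trials.
# Hypothetical inputs; Relativity's reported ranges are the authoritative values.
from scipy.stats import beta

def clopper_pearson(k: int, n: int, confidence: float = 0.95):
    alpha = 1 - confidence
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# Example: 220 of 250 truly relevant sampled documents were found (88% recall point estimate).
low, high = clopper_pearson(220, 250)
print(f"95% CI: {low:.1%} - {high:.1%}")

# Near 100% or 0%, the interval becomes noticeably asymmetric, as described above.
print(clopper_pearson(198, 200))
```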
10.6.5 How Prompt Criteria validation differs from other validation types
aiR for Review Prompt Criteria validation queues have a few key differences from prioritized review validation queues
in Review Center. The main metrics are the same, but some of the details vary.
In Prompt Criteria validation:
l The validation typically takes place early in the review process, instead of towards the end.
l Skipped documents are excluded from calculations entirely, instead of counting as an unwanted result.
l The validation sample is taken from all documents in the data set, regardless of whether they were previously
coded. This leads to some slight differences in the metric calculations.
11 Managing aiR for Review jobs
Use the aiR for Review Jobs tab to monitor job progress, view prompt criteria details, or cancel the job. You can also
view completed jobs and choose which analysis results are connected to the documents.
11.1 aiR for Review Jobs tab
There are two versions of the aiR for Review Jobs tab: one at the instance level, and one at the workspace level.
l Instance-level tab—shows the most recent 100 jobs across all workspaces. It includes several extra columns
to identify the workspace, matter, and client connected to each job.
l Workspace-level tab—shows all jobs for an individual workspace. Most users only need access to the workspace-level tab. However, because some of aiR's volume limits are instance-wide, the instance-level tab makes it easy to see exactly how much capacity is being used.
Both versions of the tab show aiR for Review jobs that have been submitted for analysis. You can use the tab to view
prompt details, cancel queued or in-progress jobs, and manage the job results.
For information on managing tab permissions, see Permissions on page 12.
Note: If the aiR for Review Jobs tab says that aiR for Review is not currently available, check with your
administrator. Your organization might not have an active contract for aiR for Review.
11.2 How aiR for Review document linking works
When aiR for Review analysis has been run multiple times on the same document, each set of results is saved as part
of a separate job. By default, when you look at a document's results, the results from the most recent analysis job are
displayed. However, if you want to see the results from a previous job instead, you can use the aiR for Review Jobs
tab to link an older job's results to the document. Each set of results can be linked or unlinked at any time without
losing any data.
For example, if you realize your current Prompt Criteria gives you less helpful results than a previous Prompt Criteria
did, you can make the previous job's results visible. This immediately gives reviewers access to the old predictions
without needing to re-run the old Prompt Criteria.
If you are viewing results from within the aiR for Review dashboard, the project version you select from the dashboard
controls which job's results you see. If you are viewing results from other parts of Relativity, such as Review Center or
the Document list, the job selected from the aiR for Review Jobs tab takes precedence. For more information on version
selection in the dashboard, see How version controls affect the Viewer on page 39.
11.3 Managing jobs and document linking
You can use the aiR for Review Jobs tab to cancel jobs, hide job results from the document fields, and make hidden
job results visible.
To manage jobs, use the following icons:
l Cancel symbol ( )—the job is currently queued or in progress. Clicking the symbol cancels the job. Any results that were already received from the large language model (LLM) will stay in the fields, and those results will still be billed.
l Visible or Partially Hidden symbol ( )—some or all of the job results are linked to documents.
l If the Results column says Visible, this means that all documents from this job show this job's results in the Viewer. Clicking the symbol unlinks the job results and hides them from the Viewer.
l If the Results column says Partially Hidden, this means only some of the documents from this job show this job's results in the Viewer. For example, if a few documents from this job were later included in a different job, they might have that more recent job's results showing instead. Clicking the symbol gives you a choice to either unlink and hide the results on all documents, or re-link and make the results visible on all documents.
l Hidden symbol ( )—none of the job results are linked to documents. Clicking the symbol re-links the job results to the documents in the run and makes them visible in the Viewer. If the documents are currently linked to another job with the same result type, those results will be hidden.
Notes:
l If you run a new job on documents that were part of a previous job, you may temporarily see both sets of results linked to those documents. The old results will be unlinked after the new job is complete.
l To avoid seeing doubled results, hide the previous result set using the aiR for Review Jobs tab.
11.4 Viewing job details
To see the Prompt Criteria for an aiR for Review job, go to the aiR for Review Jobs tab and click within its row. A detail
panel opens showing the setup details, case summary, fields, and criteria for analysis.
You can control a user's access to the detail panel using both item-level and workspace-level permissions. For more
information, see Permissions on page 12 for Viewing the aiR for Review Jobs tab.
11.5 Jobs tab fields
The following fields appear on the aiR for Review Jobs tab:
l Results—whether the job results are linked and visible on their corresponding documents. For more information, see Managing jobs and document linking on the previous page.
l Job ID—the unique ID assigned to a job.
l Project Name—the name of the aiR for Review project associated with the job. To view the project, click on the
project name.
l Prompt Criteria Name—the name of the Prompt Criteria used by the job. If several jobs ran using the same
Prompt Criteria, this name will be the same for those jobs.
l Version—the Prompt Criteria version associated with the job. For more information, see How Prompt Criteria
versioning works on page 38.
l aiR for Review Version—the version number of aiR for Review's internal model at the time the job ran. Please
note that this is different from the large language model version.
l Job Status—the current state of the job. The possible statuses are:
l Not Started
l Queued
l In Progress
l Completed
l Cancelling
l Errored
l Client Name (instance-level only)—the client associated with the job's workspace.
l Matter Name (instance-level only)—the matter name associated with the job's workspace.
l Matter Number (instance-level only)—the matter number associated with the job's workspace.
l Workspace ID (instance-level only)—the ID of the job's workspace.
l Workspace Name (instance-level only)—the name of the job's workspace.
l Doc Count—the number of documents submitted for analysis.
l Docs Successful—the number of documents that were successfully analyzed.
l Docs Pending—the number of documents that are waiting to be analyzed.
l Docs Errored—the number of documents that encountered an error during analysis.
l Docs Skipped—the number of documents that aiR did not return results for. This can happen for reasons such
as canceling a job, network errors, and partial or complete job failures.
l User Name—the user who submitted the job.
l Submitted Time—the time the user submitted the job.
l Completed Time—the time the job successfully completed. If the job failed or was canceled early, this field is
blank.
l Terminated Time—the time the job stopped running, regardless of whether it was canceled, failed, or completed successfully.
l Job Failure Reason—if the job failed, the reason is listed here. If the job completed successfully, this field is
blank.
l Estimated Wait Time—the initial estimate for how long the job will wait between when the user submits the job
and when the job can start running.
l Estimated Run Time—the initial estimate for how long the job will take to run after the wait time.
12 Creating document views and saved searches
In addition to using the dashboard, you can view and compare the results for large groups of documents by adding
their fields to document views and saved searches.
Each field name is formatted as aiR <review type> Analysis::<fieldname>. For example, the Prediction
field for a Relevance analysis is called aiR Relevance Analysis::Prediction.
For a full field list, see aiR for Review results fields below.
Notes:
l If you run a new job on documents that were part of a previous job, you may temporarily see both sets of results linked to those documents. The old results will be unlinked after the new job is complete.
l To avoid seeing doubled results, hide the previous result set using the aiR for Review Jobs tab.
12.1 Creating an aiR for Review results view
When creating a view for aiR for Review results, we recommend including these fields:
l Edit
l Control Number
l <Review Field>
l aiR <Review Type> Analysis::Score
l aiR <Review Type> Analysis::Prediction
Because the Rationale, Citation, and Considerations fields contain larger blocks of text, they tend to be less helpful for comparing many documents. However, you can add them if desired.
For a full field list, see aiR for Review results fields below.
12.2 aiR for Review results fields
The results of every aiR for Review analysis are stored as part of an analysis object. Each of the three result types has
its own object type to match:
l aiR Relevance Analysis
l aiR Issue Analysis
l aiR Key Analysis
Additionally, the results are linked to each of the analyzed documents. These linked fields, called reflected fields,
update to link to the newest results every time the document is analyzed. However, the application keeps a record of
all previous job results, and you can link the documents to a different job at any time. For more information, see
Managing jobs and document linking on page 63.
The reflected fields are the most useful for reviewing analysis results. These are formatted as aiR <review type>
Analysis::<fieldname>. For example, the Prediction field for a Relevance analysis is called aiR Relevance
Analysis::Prediction.
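As an illustration of this naming pattern only, the following minimal Python sketch builds a reflected field name from a review type and a field name. The review-type labels in the sketch mirror the three analysis object types listed above; treat them as assumptions if your workspace uses different names.

# Minimal sketch of the "aiR <review type> Analysis::<fieldname>" pattern
# described above. The review types below mirror the three analysis object
# types listed in this section; treat them as assumptions if your
# workspace differs.

REVIEW_TYPES = ("Relevance", "Issue", "Key")

def reflected_field(review_type: str, field_name: str) -> str:
    """Build a reflected field name for the given review type and field."""
    if review_type not in REVIEW_TYPES:
        raise ValueError(f"Unknown review type: {review_type}")
    return f"aiR {review_type} Analysis::{field_name}"

# Example: the two reflected fields recommended earlier for a results view.
print(reflected_field("Relevance", "Score"))       # aiR Relevance Analysis::Score
print(reflected_field("Relevance", "Prediction"))  # aiR Relevance Analysis::Prediction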
12.2.1 aiR Relevance Analysis fields
The fields for aiR Relevance Analysis are:
l Name (Fixed-length Text)—the name of this specific result, formatted as <Document Artifact ID>_<Job ID>.
l Job ID (Fixed-length Text)—the unique ID of the job this result came from.
l Score (Whole Number)—numerical score indicating how strongly relevant the document is. For more information, see Understanding document scores on page 45.
l Document (Multiple Object)—the Control Number of the document this result is linked to. If the result is not currently linked to any documents, this field is blank.
l Prediction (Fixed-length Text)—aiR's prediction of whether this qualifies as a relevant document.
l Rationale (Fixed-length Text)—an explanation of why aiR chose this score and prediction.
l Considerations (Fixed-length Text)—a counterargument explaining why the prediction might be wrong.
l Citation 1 (Fixed-length Text)—an excerpt from the document that supports the prediction and rationale. This may be blank for some documents.
l Citation 2 (Fixed-length Text)—a second excerpt from the document that supports the prediction and rationale. This may be blank for some documents.
l Citation 3 (Fixed-length Text)—a third excerpt from the document that supports the prediction and rationale. This may be blank for some documents.
l Citation 4 (Fixed-length Text)—a fourth excerpt from the document that supports the prediction and rationale. This may be blank for some documents.
l Citation 5 (Fixed-length Text)—a fifth excerpt from the document that supports the prediction and rationale. This may be blank for some documents.
l Error Details (Fixed-length Text)—if the document encountered an error, the error message displays here. For an error list, see How document errors are handled on page 48.
12.2.2 aiR Issues Analysis fields
The fields for aiR Issues Analysis are:
l Name (Fixed-length Text)—the name of this specific result, formatted as <Document ID>_<Job ID>.
l Job ID (Fixed-length Text)—the unique ID of the job this result came from.
l Choice Analyzed (Fixed-length Text)—the name of the issue choice being analyzed for this result.
l Choice Analyzed ID (Whole Number)—the Artifact ID of the issue choice being analyzed for this result.
l Document (Multiple Object)—the Control Number of the document this result is linked to. If the result is not currently linked to any documents, this field is blank.
l Score (Whole Number)—numerical score indicating how well the document matches an issue. For more information, see Understanding document scores on page 45.
l Prediction (Fixed-length Text)—aiR's predicted issue choice for this document.
l Rationale (Fixed-length Text)—an explanation of why aiR chose this score and prediction.
l Considerations (Fixed-length Text)—a counterargument explaining why the prediction might be wrong.
l Citation (Fixed-length Text)—an excerpt from the document that supports the prediction and rationale. This may be blank for some documents.
l Error Details (Fixed-length Text)—if the document encountered an error, the error message displays here. For an error list, see How document errors are handled on page 48.
12.2.3 aiR Key Analysis fields
The fields for aiR Key Analysis are:
l Name (Fixed-length Text)—the name of this specific result, formatted as <Document ID>_<Job ID>.
l Job ID (Fixed-length Text)—the unique ID of the job this result came from.
l Document (Multiple Object)—the Control Number of the document this result is linked to. If the result is not currently linked to any documents, this field is blank.
l Score (Whole Number)—numerical score indicating how strongly relevant the document is. For more information, see Understanding document scores on page 45.
l Prediction (Fixed-length Text)—aiR's prediction of whether this qualifies as a key document.
l Rationale (Fixed-length Text)—an explanation of why aiR chose this score and prediction.
l Considerations (Fixed-length Text)—a counterargument explaining why the prediction might be wrong.
l Citation 1 (Fixed-length Text)—an excerpt from the document that supports the prediction and rationale. This may be blank for some documents.
l Citation 2 (Fixed-length Text)—a second excerpt from the document that supports the prediction and rationale. This may be blank for some documents.
l Citation 3 (Fixed-length Text)—a third excerpt from the document that supports the prediction and rationale. This may be blank for some documents.
l Citation 4 (Fixed-length Text)—a fourth excerpt from the document that supports the prediction and rationale. This may be blank for some documents.
l Citation 5 (Fixed-length Text)—a fifth excerpt from the document that supports the prediction and rationale. This may be blank for some documents.
l Error Details (Fixed-length Text)—if the document encountered an error, the error message displays here. For an error list, see How document errors are handled on page 48.
13 Running aiR for Review as a mass operation
If you want to run previously refined prompt criteria on a set of documents, you can run the job as a mass operation from the Documents list page.
To run aiR for Review as a mass operation:
1. From the Documents list, select the documents you want to analyze.
2. From the Mass Operation menu at the bottom of the grid, select aiR for Review.
3. On the modal, click one of the following for the Prompt Criteria:
l Create New—this closes the modal and redirects you to the aiR for Review Projects tab. Keep in mind that this option does not preserve the documents you selected in step 1.
l Select Existing—select and load a set of previously created prompt criteria from your workspace. This
only shows prompt criteria that have been run at least once, and it selects the most recent version of
them.
4. After you have loaded a set of prompt criteria, click Start Analysis.
A banner at the top of the page confirms when the analysis job is queued and updates when it finishes.
5. To view and manage jobs that are not part of an existing project, use the aiR for Review Jobs tab. For more
information, see Managing aiR for Review jobs on page 63.
14 Using aiR for Review with Review Center
There are two ways to integrate aiR for Review and Review Center for a larger review workflow.
14.1 Using prompt criteria validation
Validating aiR for Review prompt criteria involves comparing the relevance predictions generated in aiR with human coding results in Review Center on a sample of the document set. After the prompt criteria are validated, you can choose to run them on the larger document population. For more information, see aiR for Review Prompt Criteria validation on page 50.
14.2 Using aiR to prioritize documents in a review queue
After analyzing the documents with aiR for Review, you can use aiR's predictions to prioritize which documents to
include in a Review Center queue.
For example, you may want to review all documents that aiR for Review scored as borderline or above for relevance.
To do that:
1. Set up a saved search for documents where aiR Relevance Analysis::Score is greater than 1. This returns all
documents scored 2 or higher.
2. Create a Review Center queue using that saved search as the data source.
Because of how the aiR for Review fields are structured, you cannot sort by them. However, you can either sort by
another field, or use a prioritized review queue to dynamically serve up documents that may be most relevant.
For more information, see the Review Center guide.
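If you would rather retrieve the same document set programmatically instead of through a saved search, the condition from step 1 can also be expressed as an object query. The Python sketch below is a rough illustration under stated assumptions: the Object Manager endpoint path, request shape, Document artifact type ID (10), instance URL, and credentials are all placeholders to verify against the Relativity REST API documentation for your environment.

# Hedged sketch: retrieve documents whose aiR Relevance Analysis::Score is
# greater than 1 (scored 2 or higher), mirroring the saved search condition
# in step 1 above. The endpoint path, JSON property names, header, and the
# Document artifact type ID (10) are assumptions; verify them against the
# Relativity REST API documentation for your version before relying on this.
import requests

INSTANCE_URL = "https://example.relativity.one"  # hypothetical instance URL
WORKSPACE_ID = 1234567                           # hypothetical workspace ID

payload = {
    "request": {
        "objectType": {"artifactTypeID": 10},    # 10 = Document (assumed)
        "fields": [
            {"Name": "Control Number"},
            {"Name": "aiR Relevance Analysis::Score"},
            {"Name": "aiR Relevance Analysis::Prediction"},
        ],
        # Same condition as the saved search in step 1: Score greater than 1.
        "condition": "'aiR Relevance Analysis::Score' > 1",
    },
    "start": 1,
    "length": 100,
}

response = requests.post(
    f"{INSTANCE_URL}/Relativity.REST/api/Relativity.Objects/"
    f"workspace/{WORKSPACE_ID}/object/query",
    json=payload,
    auth=("service.account@example.com", "password"),  # replace with real credentials
    headers={"X-CSRF-Header": "-"},                    # header commonly required (assumed)
)
response.raise_for_status()
for result in response.json().get("Objects", []):
    print(result.get("FieldValues"))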
15 Archiving and restoring workspaces
Workspaces with aiR for Review installed can be archived and restored using the ARM application.
When archiving in ARM, check Include Extended Workspace Data under Extended Workspace Data Options. If this
option is not checked during the archive process, the aiR for Review features in the restored workspace will not be
fully functional. If this happens, you will need to manually reinstall aiR for Review in the restored workspace.
Note: If you restore a workspace that includes previous aiR for Review jobs, the pre-restoration jobs will not appear
on the instance-level aiR for Review Jobs tab. The jobs and their results will still be visible at the workspace level.
For more information on using ARM, see ARM Overview on the Relativity documentation site.
Proprietary Rights
This documentation (“Documentation”) and the software to which it relates (“Software”) belongs to Relativity ODA
LLC and/or Relativity’s third party software vendors. Relativity grants written license agreements which contain
restrictions. All parties accessing the Documentation or Software must: respect proprietary rights of Relativity and
third parties; comply with your organization’s license agreement, including but not limited to license restrictions on
use, copying, modifications, reverse engineering, and derivative products; and refrain from any misuse or
misappropriation of this Documentation or Software in whole or in part. The Software and Documentation is
protected by the Copyright Act of 1976, as amended, and the Software code is protected by the Illinois Trade
Secrets Act. Violations can involve substantial civil liabilities, exemplary damages, and criminal penalties,
including fines and possible imprisonment.
©2025. Relativity ODA LLC. All rights reserved. Relativity® is a registered trademark of Relativity ODA
LLC.