OpenShift Container Platform 4.17

Authentication and authorization
Configuring user authentication and access controls for users and services
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This document provides instructions for defining identity providers in OpenShift Container
Platform. It also discusses how to configure role-based access control to secure the cluster.
Table of Contents

CHAPTER 1. OVERVIEW OF AUTHENTICATION AND AUTHORIZATION
    1.1. GLOSSARY OF COMMON TERMS FOR OPENSHIFT CONTAINER PLATFORM AUTHENTICATION AND AUTHORIZATION
    1.2. ABOUT AUTHENTICATION IN OPENSHIFT CONTAINER PLATFORM
    1.3. ABOUT AUTHORIZATION IN OPENSHIFT CONTAINER PLATFORM
CHAPTER 2. UNDERSTANDING AUTHENTICATION
    2.1. USERS
    2.2. GROUPS
    2.3. API AUTHENTICATION
        2.3.1. OpenShift Container Platform OAuth server
            2.3.1.1. OAuth token requests
            2.3.1.2. API impersonation
            2.3.1.3. Authentication metrics for Prometheus
CHAPTER 3. CONFIGURING THE INTERNAL OAUTH SERVER
    3.1. OPENSHIFT CONTAINER PLATFORM OAUTH SERVER
    3.2. OAUTH TOKEN REQUEST FLOWS AND RESPONSES
    3.3. OPTIONS FOR THE INTERNAL OAUTH SERVER
        3.3.1. OAuth token duration options
        3.3.2. OAuth grant options
    3.4. CONFIGURING THE INTERNAL OAUTH SERVER'S TOKEN DURATION
    3.5. CONFIGURING TOKEN INACTIVITY TIMEOUT FOR THE INTERNAL OAUTH SERVER
    3.6. CUSTOMIZING THE INTERNAL OAUTH SERVER URL
    3.7. OAUTH SERVER METADATA
    3.8. TROUBLESHOOTING OAUTH API EVENTS
CHAPTER 4. CONFIGURING OAUTH CLIENTS
    4.1. DEFAULT OAUTH CLIENTS
    4.2. REGISTERING AN ADDITIONAL OAUTH CLIENT
    4.3. CONFIGURING TOKEN INACTIVITY TIMEOUT FOR AN OAUTH CLIENT
    4.4. ADDITIONAL RESOURCES
CHAPTER 5. MANAGING USER-OWNED OAUTH ACCESS TOKENS
    5.1. LISTING USER-OWNED OAUTH ACCESS TOKENS
    5.2. VIEWING THE DETAILS OF A USER-OWNED OAUTH ACCESS TOKEN
    5.3. DELETING USER-OWNED OAUTH ACCESS TOKENS
    5.4. ADDING UNAUTHENTICATED GROUPS TO CLUSTER ROLES
CHAPTER 6. UNDERSTANDING IDENTITY PROVIDER CONFIGURATION
    6.1. ABOUT IDENTITY PROVIDERS IN OPENSHIFT CONTAINER PLATFORM
    6.2. SUPPORTED IDENTITY PROVIDERS
    6.3. REMOVING THE KUBEADMIN USER
    6.4. IDENTITY PROVIDER PARAMETERS
    6.5. SAMPLE IDENTITY PROVIDER CR
    6.6. MANUALLY PROVISIONING A USER WHEN USING THE LOOKUP MAPPING METHOD
CHAPTER 7. CONFIGURING IDENTITY PROVIDERS
    7.1. CONFIGURING AN HTPASSWD IDENTITY PROVIDER
        7.1.1. About identity providers in OpenShift Container Platform
        7.1.2. About htpasswd authentication
        7.1.3. Creating the htpasswd file
            7.1.3.1. Creating an htpasswd file using Linux
CHAPTER 8. USING RBAC TO DEFINE AND APPLY PERMISSIONS
    8.1. RBAC OVERVIEW
        8.1.1. Default cluster roles
        8.1.2. Evaluating authorization
            8.1.2.1. Cluster role aggregation
    8.2. PROJECTS AND NAMESPACES
    8.3. DEFAULT PROJECTS
    8.4. VIEWING CLUSTER ROLES AND BINDINGS
    8.5. VIEWING LOCAL ROLES AND BINDINGS
    8.6. ADDING ROLES TO USERS
    8.7. CREATING A LOCAL ROLE
    8.8. CREATING A CLUSTER ROLE
    8.9. LOCAL ROLE BINDING COMMANDS
    8.10. CLUSTER ROLE BINDING COMMANDS
    8.11. CREATING A CLUSTER ADMIN
    8.12. CLUSTER ROLE BINDINGS FOR UNAUTHENTICATED GROUPS
CHAPTER 9. REMOVING THE KUBEADMIN USER
    9.1. THE KUBEADMIN USER
    9.2. REMOVING THE KUBEADMIN USER
CHAPTER 10. UNDERSTANDING AND CREATING SERVICE ACCOUNTS
    10.1. SERVICE ACCOUNTS OVERVIEW
    10.2. CREATING SERVICE ACCOUNTS
    10.3. EXAMPLES OF GRANTING ROLES TO SERVICE ACCOUNTS
CHAPTER 11. USING SERVICE ACCOUNTS IN APPLICATIONS
    11.1. SERVICE ACCOUNTS OVERVIEW
    11.2. DEFAULT SERVICE ACCOUNTS
        11.2.1. Default cluster service accounts
        11.2.2. Default project service accounts and roles
        11.2.3. Automatically generated image pull secrets
    11.3. CREATING SERVICE ACCOUNTS
CHAPTER 12. USING A SERVICE ACCOUNT AS AN OAUTH CLIENT
    12.1. SERVICE ACCOUNTS AS OAUTH CLIENTS
CHAPTER 13. SCOPING TOKENS
    13.1. ABOUT SCOPING TOKENS
        13.1.1. User scopes
        13.1.2. Role scope
    13.2. ADDING UNAUTHENTICATED GROUPS TO CLUSTER ROLES
CHAPTER 14. USING BOUND SERVICE ACCOUNT TOKENS
    14.1. ABOUT BOUND SERVICE ACCOUNT TOKENS
    14.2. CONFIGURING BOUND SERVICE ACCOUNT TOKENS USING VOLUME PROJECTION
    14.3. CREATING BOUND SERVICE ACCOUNT TOKENS OUTSIDE THE POD
CHAPTER 15. MANAGING SECURITY CONTEXT CONSTRAINTS
    15.1. ABOUT SECURITY CONTEXT CONSTRAINTS
        15.1.1. Default security context constraints
        15.1.2. Security context constraints settings
        15.1.3. Security context constraints strategies
        15.1.4. Controlling volumes
        15.1.5. Admission control
        15.1.6. Security context constraints prioritization
    15.2. ABOUT PRE-ALLOCATED SECURITY CONTEXT CONSTRAINTS VALUES
    15.3. EXAMPLE SECURITY CONTEXT CONSTRAINTS
    15.4. CREATING SECURITY CONTEXT CONSTRAINTS
    15.5. CONFIGURING A WORKLOAD TO REQUIRE A SPECIFIC SCC
    15.6. ROLE-BASED ACCESS TO SECURITY CONTEXT CONSTRAINTS
    15.7. REFERENCE OF SECURITY CONTEXT CONSTRAINTS COMMANDS
        15.7.1. Listing security context constraints
        15.7.2. Examining security context constraints
        15.7.3. Updating security context constraints
        15.7.4. Deleting security context constraints
    15.8. ADDITIONAL RESOURCES
CHAPTER 16. UNDERSTANDING AND MANAGING POD SECURITY ADMISSION
    16.1. ABOUT POD SECURITY ADMISSION
        16.1.1. Pod security admission modes
        16.1.2. Pod security admission profiles
        16.1.3. Privileged namespaces
        16.1.4. Pod security admission and security context constraints
    16.2. ABOUT POD SECURITY ADMISSION SYNCHRONIZATION
        16.2.1. Pod security admission synchronization namespace exclusions
            Permanently disabled namespaces
            Initially disabled namespaces
    16.3. CONTROLLING POD SECURITY ADMISSION SYNCHRONIZATION
    16.4. CONFIGURING POD SECURITY ADMISSION FOR A NAMESPACE
    16.5. ABOUT POD SECURITY ADMISSION ALERTS
        16.5.1. Identifying pod security violations
    16.6. ADDITIONAL RESOURCES
CHAPTER 17. IMPERSONATING THE SYSTEM:ADMIN USER
    17.1. API IMPERSONATION
    17.2. IMPERSONATING THE SYSTEM:ADMIN USER
    17.3. IMPERSONATING THE SYSTEM:ADMIN GROUP
    17.4. ADDING UNAUTHENTICATED GROUPS TO CLUSTER ROLES
CHAPTER 18. SYNCING LDAP GROUPS
    18.1. ABOUT CONFIGURING LDAP SYNC
        18.1.1. About the RFC 2307 configuration file
        18.1.2. About the Active Directory configuration file
        18.1.3. About the augmented Active Directory configuration file
    18.2. RUNNING LDAP SYNC
        18.2.1. Syncing the LDAP server with OpenShift Container Platform
        18.2.2. Syncing OpenShift Container Platform groups with the LDAP server
        18.2.3. Syncing subgroups from the LDAP server with OpenShift Container Platform
    18.3. RUNNING A GROUP PRUNING JOB
    18.4. AUTOMATICALLY SYNCING LDAP GROUPS
    18.5. LDAP GROUP SYNC EXAMPLES
        18.5.1. Syncing groups using the RFC 2307 schema
        18.5.2. Syncing groups using the RFC 2307 schema with user-defined name mappings
        18.5.3. Syncing groups using RFC 2307 with user-defined error tolerances
        18.5.4. Syncing groups using the Active Directory schema
        18.5.5. Syncing groups using the augmented Active Directory schema
            18.5.5.1. LDAP nested membership sync example
    18.6. LDAP SYNC CONFIGURATION SPECIFICATION
        18.6.1. v1.LDAPSyncConfig
        18.6.2. v1.StringSource
        18.6.3. v1.LDAPQuery
        18.6.4. v1.RFC2307Config
        18.6.5. v1.ActiveDirectoryConfig
        18.6.6. v1.AugmentedActiveDirectoryConfig
CHAPTER 19. MANAGING CLOUD PROVIDER CREDENTIALS
    19.1. ABOUT THE CLOUD CREDENTIAL OPERATOR
        19.1.1. Modes
        19.1.2. Determining the Cloud Credential Operator mode
            19.1.2.1. Determining the Cloud Credential Operator mode by using the web console
            19.1.2.2. Determining the Cloud Credential Operator mode by using the CLI
        19.1.3. Default behavior
        19.1.4. Additional resources
    19.2. THE CLOUD CREDENTIAL OPERATOR IN MINT MODE
        19.2.1. Mint mode credentials management
            19.2.1.1. Mint mode permissions requirements
            19.2.1.2. Admin credentials root secret format
        19.2.2. Maintaining cloud provider credentials
        19.2.3. Additional resources
    19.3. THE CLOUD CREDENTIAL OPERATOR IN PASSTHROUGH MODE
        19.3.1. Passthrough mode permissions requirements
            19.3.1.1. Amazon Web Services (AWS) permissions
            19.3.1.2. Microsoft Azure permissions
            19.3.1.3. Google Cloud Platform (GCP) permissions
            19.3.1.4. Red Hat OpenStack Platform (RHOSP) permissions
            19.3.1.5. VMware vSphere permissions
        19.3.2. Admin credentials root secret format
        19.3.3. Passthrough mode credential maintenance
            19.3.3.1. Maintaining cloud provider credentials
        19.3.4. Reducing permissions after installation
        19.3.5. Additional resources
    19.4. MANUAL MODE WITH LONG-TERM CREDENTIALS FOR COMPONENTS
CHAPTER 1. OVERVIEW OF AUTHENTICATION AND AUTHORIZATION

1.1. GLOSSARY OF COMMON TERMS FOR OPENSHIFT CONTAINER PLATFORM AUTHENTICATION AND AUTHORIZATION

authentication
    Authentication determines access to an OpenShift Container Platform cluster and ensures that only authenticated users can access the cluster.
authorization
    Authorization determines whether the identified user has permission to perform the requested action.
bearer token
    A bearer token is used to authenticate to the API with the header Authorization: Bearer <token>.

1.2. ABOUT AUTHENTICATION IN OPENSHIFT CONTAINER PLATFORM
To interact with an OpenShift Container Platform cluster, users must first authenticate to the OpenShift
Container Platform API in some way. You can authenticate by providing an OAuth access token or an
X.509 client certificate in your requests to the OpenShift Container Platform API.
NOTE
If you do not present a valid access token or certificate, your request is unauthenticated
and you receive an HTTP 401 error.
Configuring an identity provider: You can define any supported identity provider in OpenShift
Container Platform and add it to your cluster.
Configuring the internal OAuth server: The OpenShift Container Platform control plane includes
a built-in OAuth server that determines the user’s identity from the configured identity provider
and creates an access token. You can configure the token duration and inactivity timeout, and
customize the internal OAuth server URL.
Registering an OAuth client: OpenShift Container Platform includes several default OAuth clients. You can register and configure additional OAuth clients.
NOTE
When users send a request for an OAuth token, they must specify either a default
or custom OAuth client that receives and uses the token.
Managing cloud provider credentials using the Cloud Credential Operator: Cluster components use cloud provider credentials to get permissions required to perform cluster-related tasks.
Impersonating a system admin user: You can grant cluster administrator permissions to a user by impersonating a system admin user.
1.3. ABOUT AUTHORIZATION IN OPENSHIFT CONTAINER PLATFORM

Administrators can define permissions and assign them to users by using RBAC objects, such as rules, roles, and bindings. To understand how authorization works in OpenShift Container Platform, see Evaluating authorization.
You can also control access to an OpenShift Container Platform cluster through projects and
namespaces.
Along with controlling user access to a cluster, you can also control the actions a pod can perform and the resources it can access by using security context constraints (SCCs).
You can manage authorization for OpenShift Container Platform through the following tasks:
Creating a cluster role and assigning it to a user or group: OpenShift Container Platform
includes a set of default cluster roles. You can create additional cluster roles and add them to a
user or group.
Creating a cluster-admin user: By default, your cluster has only one cluster administrator called kubeadmin. You can create another cluster administrator. Before creating a cluster administrator, ensure that you have configured an identity provider.
NOTE
After creating the cluster admin user, delete the existing kubeadmin user to
improve cluster security.
Creating service accounts: Service accounts provide a flexible way to control API access without
sharing a regular user’s credentials. A user can create and use a service account in applications
and also as an OAuth client.
Scoping tokens: A scoped token identifies as a specific user and can perform only specific operations. You can create scoped tokens to delegate some of your permissions to another user or a service account.
Syncing LDAP groups: You can manage user groups in one place by syncing the groups stored
in an LDAP server with the OpenShift Container Platform user groups.
CHAPTER 2. UNDERSTANDING AUTHENTICATION
2.1. USERS
A user in OpenShift Container Platform is an entity that can make requests to the OpenShift Container
Platform API. An OpenShift Container Platform User object represents an actor which can be granted
permissions in the system by adding roles to them or to their groups. Typically, this represents the
account of a developer or administrator that is interacting with OpenShift Container Platform.
Regular users
    This is the way most interactive OpenShift Container Platform users are represented. Regular users are created automatically in the system upon first login or can be created via the API. Regular users are represented with the User object. Examples: joe, alice.

System users
    Many of these are created automatically when the infrastructure is defined, mainly for the purpose of enabling the infrastructure to interact with the API securely. They include a cluster administrator (with access to everything), a per-node user, users for use by routers and registries, and various others. Finally, there is an anonymous system user that is used by default for unauthenticated requests. Examples: system:admin, system:openshift-registry, system:node:node1.example.com.

Service accounts
    These are special system users associated with projects; some are created automatically when the project is first created, while project administrators can create more for the purpose of defining access to the contents of each project. Service accounts are represented with the ServiceAccount object. Examples: system:serviceaccount:default:deployer, system:serviceaccount:foo:builder.
Each user must authenticate in some way to access OpenShift Container Platform. API requests with no
authentication or invalid authentication are authenticated as requests by the anonymous system user.
After authentication, policy determines what the user is authorized to do.
2.2. GROUPS
A user can be assigned to one or more groups, each of which represents a certain set of users. Groups are useful when managing authorization policies to grant permissions to multiple users at once, for example allowing access to objects within a project, versus granting them to users individually.
In addition to explicitly defined groups, there are also system groups, or virtual groups, that are
automatically provisioned by the cluster.
system:authenticated:oauth
    Automatically associated with all users authenticated with an OAuth access token.
2.3. API AUTHENTICATION

OAuth access tokens are obtained from the OpenShift Container Platform OAuth server using the <namespace_route>/oauth/authorize and <namespace_route>/oauth/token endpoints. X.509 client certificates are also accepted; the API server creates and distributes certificates to controllers to authenticate themselves.
Any request with an invalid access token or an invalid certificate is rejected by the authentication layer
with a 401 error.
If no access token or certificate is presented, the authentication layer assigns the system:anonymous
virtual user and the system:unauthenticated virtual group to the request. This allows the authorization
layer to determine which requests, if any, an anonymous user is allowed to make.
2.3.1. OpenShift Container Platform OAuth server

When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request.
It then determines what user that identity maps to, creates an access token for that user, and returns
the token for use.
2.3.1.1. OAuth token requests

Every request for an OAuth token must specify the OAuth client that will receive and use the token. The
following OAuth clients are automatically created when starting the OpenShift Container Platform API:
1. <namespace_route> refers to the namespace route. This is found by running the following
command:
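The command itself does not survive in this copy; a sketch of an equivalent lookup, assuming the standard oauth-openshift route in the openshift-authentication namespace:

# Print the host of the OAuth server route
$ oc get route oauth-openshift -n openshift-authentication -o jsonpath='{.spec.host}'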
NOTE

To prevent cross-site request forgery (CSRF) attacks against browser clients, Basic authentication challenges are only sent if an X-CSRF-Token header is present on the request. Clients that expect to receive Basic WWW-Authenticate challenges must set this header to a non-empty value.
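As a sketch of what this note implies, a challenging-client token request would set the header explicitly (the host placeholder and credentials are assumptions):

# The X-CSRF-Token header must be non-empty for the server to
# respond with a Basic WWW-Authenticate challenge.
$ curl -u <username>:<password> -H "X-CSRF-Token: 1" \
  "https://<namespace_route>/oauth/authorize?client_id=openshift-challenging-client&response_type=token"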
2.3.1.2. API impersonation

You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation.
2.3.1.3. Authentication metrics for Prometheus

OpenShift Container Platform captures the following Prometheus system metrics during authentication attempts:
openshift_auth_password_total counts the total number of oc login and web console login
attempts.
CHAPTER 3. CONFIGURING THE INTERNAL OAUTH SERVER

3.1. OPENSHIFT CONTAINER PLATFORM OAUTH SERVER
When a person requests a new OAuth token, the OAuth server uses the configured identity provider to
determine the identity of the person making the request.
It then determines what user that identity maps to, creates an access token for that user, and returns
the token for use.
3.2. OAUTH TOKEN REQUEST FLOWS AND RESPONSES

When requesting an OAuth token using the implicit grant flow (response_type=token) with a client_id configured to request WWW-Authenticate challenges (like openshift-challenging-client), these are the possible server responses from /oauth/authorize, and how they should be handled:

302 with a Location header containing an access_token parameter in the URL fragment (RFC 6749 section 4.2.2): Use the access_token value as the OAuth token.

302 with a Location header containing an error query parameter (RFC 6749 section 4.1.2.1): Fail, optionally surfacing the error (and optional error_description) query values to the user.

302 with any other Location header: Follow the redirect, and process the result using these rules.
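A minimal sketch of handling the first case, extracting access_token from the URL fragment of the Location header (host and credential placeholders are assumptions; curl does not follow redirects by default, so the 302 headers can be inspected directly):

$ curl -sS -o /dev/null -D - -u <username>:<password> -H "X-CSRF-Token: 1" \
  "https://<namespace_route>/oauth/authorize?client_id=openshift-challenging-client&response_type=token" \
  | grep -i '^location:' \
  | sed -n 's/.*access_token=\([^&]*\).*/\1/p'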
3.3. OPTIONS FOR THE INTERNAL OAUTH SERVER

3.3.1. OAuth token duration options

Authorize codes
    Short-lived tokens whose only use is to be exchanged for an access token.
You can configure the default duration for both types of token. If necessary, you can override the
duration of the access token by using an OAuthClient object definition.
3.3.2. OAuth grant options

The OAuth client requesting the token must provide its own grant strategy.
3.4. CONFIGURING THE INTERNAL OAUTH SERVER'S TOKEN DURATION

IMPORTANT

By default, tokens are only valid for 24 hours. Existing sessions expire after this time elapses.

If the default time is insufficient, you can modify it by using the following procedure.
Procedure
1. Create a configuration file that contains the token duration options. The following file sets this
to 48 hours, twice the default.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  tokenConfig:
    accessTokenMaxAgeSeconds: 172800 1
NOTE
Because you update the existing OAuth server, you must use the oc apply
command to apply the change.
$ oc apply -f </path/to/file.yaml>
$ oc describe oauth.config.openshift.io/cluster
Example output
...
Spec:
  Token Config:
    Access Token Max Age Seconds: 172800
...
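As a convenience sketch (not a step in the procedure itself), the configured value can also be read back directly:

$ oc get oauth cluster -o jsonpath='{.spec.tokenConfig.accessTokenMaxAgeSeconds}'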
3.5. CONFIGURING TOKEN INACTIVITY TIMEOUT FOR THE INTERNAL OAUTH SERVER

NOTE

If the token inactivity timeout is also configured in your OAuth client, that value overrides the timeout that is set in the internal OAuth server configuration.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
...
spec:
  tokenConfig:
    accessTokenInactivityTimeout: 400s 1
1 Set a value with the appropriate units, for example 400s for 400 seconds, or 30m for 30 minutes. The minimum allowed timeout value is 300s.
Do not continue to the next step until PROGRESSING is listed as False, as shown in the
following output:
Example output
3. Check that a new revision of the Kubernetes API server pods has rolled out. This will take several
minutes.
Do not continue to the next step until PROGRESSING is listed as False, as shown in the
following output:
Example output
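The commands and their outputs do not survive in this copy; the checks are presumably against the authentication and Kubernetes API server cluster Operators, along these lines:

# Watch the authentication Operator until PROGRESSING is False
$ oc get clusteroperator authentication

# Then confirm that the kube-apiserver Operator has finished rolling out
$ oc get clusteroperator kube-apiserver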
Verification
3. Wait longer than the configured timeout without using the identity. In this procedure’s example,
wait longer than 400 seconds.
Example output
3.6. CUSTOMIZING THE INTERNAL OAUTH SERVER URL

WARNING

If you update the internal OAuth server URL, you might break trust from components in the cluster that need to communicate with the OpenShift OAuth server to retrieve OAuth access tokens. Components that need to trust the OAuth server will need to include the proper CA bundle when calling OAuth endpoints. For example:
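The example command itself is missing here; given the callout that follows, it is presumably an oc login invocation that supplies the CA bundle, sketched with a placeholder path:

$ oc login -u <username> --certificate-authority=<path_to_ca.crt> 1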
1 For self-signed certificates, the ca.crt file must contain the custom CA
certificate, otherwise the login will not succeed.
Prerequisites
You have created a secret in the openshift-config namespace containing the TLS certificate
and key. This is required if the domain for the custom hostname suffix does not match the
cluster domain suffix. The secret is optional if the suffix matches.
TIP
You can create a TLS secret by using the oc create secret tls command.
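A sketch of that command, with placeholder name and file paths:

$ oc create secret tls <secret_name> \
  --cert=</path/to/tls.crt> --key=</path/to/tls.key> \
  -n openshift-config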
Procedure
2. Set the custom hostname and optionally the serving certificate and key:
apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  componentRoutes:
    - name: oauth-openshift
      namespace: openshift-authentication
      hostname: <custom_hostname> 1
      servingCertKeyPairSecret:
        name: <secret_name> 2
3.7. OAUTH SERVER METADATA

Any application running inside the cluster can issue a GET request to https://openshift.default.svc/.well-known/oauth-authorization-server to fetch the following information:
{
  "issuer": "https://<namespace_route>", 1
  "authorization_endpoint": "https://<namespace_route>/oauth/authorize", 2
  "token_endpoint": "https://<namespace_route>/oauth/token", 3
  "scopes_supported": [ 4
    "user:full",
    "user:info",
    "user:check-access",
    "user:list-scoped-projects",
    "user:list-projects"
  ],
  "response_types_supported": [ 5
    "code",
    "token"
  ],
  "grant_types_supported": [ 6
    "authorization_code",
    "implicit"
  ],
  "code_challenge_methods_supported": [ 7
    "plain",
    "S256"
  ]
}
1 The authorization server’s issuer identifier, which is a URL that uses the https scheme and has no
query or fragment components. This is the location where .well-known RFC 5785 resources
containing information about the authorization server are published.
4 JSON array containing a list of the OAuth 2.0 RFC 6749 scope values that this authorization server
supports. Note that not all supported scope values are advertised.
5 JSON array containing a list of the OAuth 2.0 response_type values that this authorization server
supports. The array values used are the same as those used with the response_types parameter
defined by "OAuth 2.0 Dynamic Client Registration Protocol" in RFC 7591.
6 JSON array containing a list of the OAuth 2.0 grant type values that this authorization server
supports. The array values used are the same as those used with the grant_types parameter
defined by OAuth 2.0 Dynamic Client Registration Protocol in RFC 7591.
7 JSON array containing a list of PKCE RFC 7636 code challenge methods supported by this
authorization server. Code challenge method values are used in the code_challenge_method
parameter defined in Section 4.3 of RFC 7636 . The valid code challenge method values are those
registered in the IANA PKCE Code Challenge Methods registry. See IANA OAuth Parameters.
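A minimal sketch of fetching this document from inside a pod; the CA path is the conventional service account mount point, which is assumed to validate the in-cluster API endpoint:

$ curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  https://openshift.default.svc/.well-known/oauth-authorization-server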
3.8. TROUBLESHOOTING OAUTH API EVENTS

A subset of API server errors is related to service account OAuth configuration issues. These issues are captured in events that can be viewed by non-administrator users. When encountering an unexpected condition server error during OAuth, run oc get events to view these events under ServiceAccount.
The following example warns of a service account that is missing a proper OAuth redirect URI:
Example output
Running oc describe sa/<service_account_name> reports any OAuth events associated with the
given service account name.
Example output

Events:
  FirstSeen  LastSeen  Count  From                                 SubObjectPath  Type     Reason                 Message
  ---------  --------  -----  ----                                 -------------  ----     ------                 -------
  3m         3m        1      service-account-oauth-client-getter                 Warning  NoSAOAuthRedirectURIs  system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>

Reason                 Message
NoSAOAuthRedirectURIs  system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>

Reason                 Message
NoSAOAuthRedirectURIs  [routes.route.openshift.io "<name>" not found, system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]

Reason                 Message
NoSAOAuthRedirectURIs  [no kind "<name>" is registered for version "v1", system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]

Missing SA tokens

Reason           Message
NoSAOAuthTokens  system:serviceaccount:myproject:proxy has no tokens
CHAPTER 4. CONFIGURING OAUTH CLIENTS

4.1. DEFAULT OAUTH CLIENTS
1. <namespace_route> refers to the namespace route. This is found by running the following
command:
4.2. REGISTERING AN ADDITIONAL OAUTH CLIENT

Procedure
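The registration command and example object do not survive in this copy. A minimal sketch of an OAuthClient object matching the callouts below, with a hypothetical name, secret, and redirect URI:

apiVersion: oauth.openshift.io/v1
kind: OAuthClient
metadata:
  name: demo 1
secret: "<secret_value>"
redirectURIs:
- "https://www.example.com/"
grantMethod: prompt 4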
1 The name of the OAuth client is used as the client_id parameter when making requests to
<namespace_route>/oauth/authorize and <namespace_route>/oauth/token.
4 The grantMethod is used to determine what action to take when this client requests
tokens and has not yet been granted access by the user. Specify auto to automatically
approve the grant and retry the request, or prompt to prompt the user to approve or deny
the grant.
4.3. CONFIGURING TOKEN INACTIVITY TIMEOUT FOR AN OAUTH CLIENT

NOTE
If the token inactivity timeout is also configured in the internal OAuth server
configuration, the timeout that is set in the OAuth client overrides that value.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1 Replace <oauth_client> with the OAuth client to configure, for example, console.
apiVersion: oauth.openshift.io/v1
grantMethod: auto
kind: OAuthClient
metadata:
...
accessTokenInactivityTimeoutSeconds: 600 1
Verification
1. Log in to the cluster with an identity from your IDP. Be sure to use the OAuth client that you just
configured.
3. Wait longer than the configured timeout without using the identity. In this procedure’s example,
wait longer than 600 seconds.
CHAPTER 5. MANAGING USER-OWNED OAUTH ACCESS TOKENS

5.1. LISTING USER-OWNED OAUTH ACCESS TOKENS
Procedure
$ oc get useroauthaccesstokens
Example output
Example output
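The outputs do not survive in this copy; the second listing presumably filters the tokens by client, along these lines:

# List only tokens issued to a particular OAuth client
$ oc get useroauthaccesstokens --field-selector=clientName="console"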
5.2. VIEWING THE DETAILS OF A USER-OWNED OAUTH ACCESS TOKEN

Procedure
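The command itself is missing here; given the output that follows, it is presumably:

$ oc describe useroauthaccesstokens <token_name>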
Example output
Name: <token_name> 1
Namespace:
Labels: <none>
Annotations: <none>
API Version: oauth.openshift.io/v1
Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8
Client Name: openshift-browser-client 2
Expires In: 86400 3
Inactivity Timeout Seconds: 317 4
Kind: UserOAuthAccessToken
Metadata:
Creation Timestamp: 2021-01-11T19:27:06Z
Managed Fields:
API Version: oauth.openshift.io/v1
Fields Type: FieldsV1
fieldsV1:
f:authorizeToken:
f:clientName:
f:expiresIn:
f:redirectURI:
f:scopes:
f:userName:
f:userUID:
Manager: oauth-server
Operation: Update
Time: 2021-01-11T19:27:06Z
Resource Version: 30535
Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name>
UID: f9d00b67-ab65-489b-8080-e427fa3c6181
Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display
Scopes:
user:full 5
User Name: <user_name> 6
User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345
Events: <none>
1 The token name, which is the sha256 hash of the token. Token names are not sensitive and
cannot be used to log in.
2 The client name, which describes where the token originated from.
3 The value in seconds from the creation time before this token expires.
4 If there is a token inactivity timeout set for the OAuth server, this is the value in seconds
from the creation time before this token can no longer be used.
5.3. DELETING USER-OWNED OAUTH ACCESS TOKENS
Deleting an OAuth access token logs out the user from all sessions that use the token.
Procedure
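The deletion command is missing here; it is presumably:

$ oc delete useroauthaccesstokens <token_name>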
Example output
5.4. ADDING UNAUTHENTICATED GROUPS TO CLUSTER ROLES

system:scope-impersonation
system:webhook
system:oauth-token-deleter
self-access-reviewer
IMPORTANT
Always verify compliance with your organization’s security standards when modifying
unauthenticated access.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Create a YAML file named add-<cluster_role>-unauth.yaml and add the following content:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: <cluster_role>access-unauthenticated
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <cluster_role>
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:unauthenticated
$ oc apply -f add-<cluster_role>-unauth.yaml
CHAPTER 6. UNDERSTANDING IDENTITY PROVIDER CONFIGURATION
6.1. ABOUT IDENTITY PROVIDERS IN OPENSHIFT CONTAINER PLATFORM

As an administrator, you can configure OAuth to specify an identity provider after you install your cluster.
NOTE
OpenShift Container Platform user names containing /, :, and % are not supported.
6.2. SUPPORTED IDENTITY PROVIDERS

htpasswd
    Configure the htpasswd identity provider to validate user names and passwords against a flat file generated using htpasswd.
Keystone
    Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database.
LDAP
    Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication.
Request header
    Configure a request-header identity provider to identify users from request header values, such as X-Remote-User. It is typically used in combination with an authenticating proxy, which sets the request header value.
GitHub or GitHub Enterprise
    Configure a github identity provider to validate user names and passwords against GitHub or GitHub Enterprise's OAuth authentication server.
GitLab
    Configure a gitlab identity provider to use GitLab.com or any other GitLab instance as an identity provider.
Google
    Configure a google identity provider using Google's OpenID Connect integration.
OpenID Connect
    Configure an oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow.
After an identity provider has been defined, you can use RBAC to define and apply permissions.
6.3. REMOVING THE KUBEADMIN USER

WARNING

If you follow this procedure before another user is a cluster-admin, then OpenShift Container Platform must be reinstalled. It is not possible to undo this command.
Prerequisites
Procedure
6.4. IDENTITY PROVIDER PARAMETERS

name
    The provider name is prefixed to provider user names to form an identity name.
mappingMethod
    Defines how new identities are mapped to users when they log in. Enter one of the following values:

    claim
        The default value. Provisions a user with the identity's preferred user name. Fails if a user with that user name is already mapped to another identity.
    lookup
        Looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities. This allows cluster administrators to set up identities and users manually, or using an external process. Using this method requires you to manually provision users.
    add
        Provisions a user with the identity's preferred user name. If a user with that user name already exists, the identity is mapped to the existing user, adding to any existing identity mappings for the user. Required when multiple identity providers are configured that identify the same set of users and map to the same user names.
NOTE
When adding or changing identity providers, you can map identities from the new
provider to existing users by setting the mappingMethod parameter to add.
6.5. SAMPLE IDENTITY PROVIDER CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_identity_provider 1
    mappingMethod: claim 2
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret 3
1 This provider name is prefixed to provider user names to form an identity name.
2 Controls how mappings are established between this provider’s identities and User objects.
6.6. MANUALLY PROVISIONING A USER WHEN USING THE LOOKUP MAPPING METHOD

Prerequisites
Procedure
Where <identity_provider_user_id> is a name that uniquely represents the user in the identity
provider.
3. Create a user identity mapping for the created user and identity:
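The commands for these steps do not survive in this copy; they are presumably the oc create calls for each object, sketched here with placeholders:

# 1. Create the user
$ oc create user <username>

# 2. Create the identity for the provider
$ oc create identity <identity_provider>:<identity_provider_user_id>

# 3. Map the identity to the user
$ oc create useridentitymapping <identity_provider>:<identity_provider_user_id> <username>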
Additional resources
How to create user, identity and map user and identity in LDAP authentication for
mappingMethod as lookup inside the OAuth manifest
How to create user, identity and map user and identity in OIDC authentication for
mappingMethod as lookup
CHAPTER 7. CONFIGURING IDENTITY PROVIDERS
4. Apply the resource to the default OAuth configuration to add the identity provider.
NOTE
OpenShift Container Platform user names containing /, :, and % are not supported.
7.1. CONFIGURING AN HTPASSWD IDENTITY PROVIDER

7.1.3. Creating the htpasswd file

To use the htpasswd identity provider, you must generate a flat file that contains the user names and passwords for your cluster by using htpasswd.
7.1.3.1. Creating an htpasswd file using Linux

Prerequisites
Have access to the htpasswd utility. On Red Hat Enterprise Linux this is available by installing
the httpd-tools package.
Procedure
1. Create or update your flat file with a user name and hashed password:
For example:
Example output
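The command and output do not survive in this copy; a sketch using bcrypt hashing, with placeholder credentials:

# Create the file with the first user (-c); add further users without -c
$ htpasswd -c -B -b users.htpasswd <username> <password>
$ htpasswd -B -b users.htpasswd <another_username> <password>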
7.1.3.2. Creating an htpasswd file using Windows

To use the htpasswd identity provider, you must generate a flat file that contains the user names and passwords for your cluster by using htpasswd.
passwords for your cluster by using htpasswd.
Prerequisites
Have access to htpasswd.exe. This file is included in the \bin directory of many Apache httpd
distributions.
Procedure
1. Create or update your flat file with a user name and hashed password:
For example:
Example output
Prerequisites
Procedure
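The command is missing here; given the callout that follows, it is presumably along these lines:

$ oc create secret generic htpass-secret \
  --from-file=htpasswd=users.htpasswd \
  -n openshift-config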
1 The secret key containing the users file for the --from-file argument must be named
htpasswd, as shown in the above command.
TIP
You can alternatively apply the following YAML to create the secret:
apiVersion: v1
kind: Secret
metadata:
  name: htpass-secret
  namespace: openshift-config
type: Opaque
data:
  htpasswd: <base64_encoded_htpasswd_file_contents>
htpasswd CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider 1
    mappingMethod: claim 2
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret 3
1 This provider name is prefixed to provider user names to form an identity name.
2 Controls how mappings are established between this provider’s identities and User objects.
Additional resources
See Identity provider parameters for information on parameters, such as mappingMethod, that
are common to all identity providers.
Prerequisites
Procedure
$ oc apply -f </path/to/CR>
NOTE
If a CR does not exist, oc apply creates a new CR and might trigger the following
warning: Warning: oc apply should be used on resources created by either
oc create --save-config or oc apply. In this case you can safely ignore this
warning.
2. Log in to the cluster as a user from your identity provider, entering the password when
prompted.
$ oc login -u <username>
3. Confirm that the user logged in successfully, and display the user name.
$ oc whoami
Prerequisites
You have created a Secret object that contains the htpasswd user file. This procedure assumes
that it is named htpass-secret.
You have configured an htpasswd identity provider. This procedure assumes that it is named
my_htpasswd_provider.
You have access to the htpasswd utility. On Red Hat Enterprise Linux this is available by
installing the httpd-tools package.
Procedure
1. Retrieve the htpasswd file from the htpass-secret Secret object and save the file to your file
system:
Example output
Example output
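The commands and outputs around steps 1 and 2 do not survive in this copy; a sketch of the usual flow, with placeholder names:

# Retrieve the current file from the secret
$ oc get secret htpass-secret -ojsonpath={.data.htpasswd} -n openshift-config \
  | base64 --decode > users.htpasswd

# Add or update a user
$ htpasswd -bB users.htpasswd <username> <password>

# Or remove a user
$ htpasswd -D users.htpasswd <username>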
3. Replace the htpass-secret Secret object with the updated users in the users.htpasswd file:
TIP
You can alternatively apply the following YAML to replace the secret:
apiVersion: v1
kind: Secret
metadata:
  name: htpass-secret
  namespace: openshift-config
type: Opaque
data:
  htpasswd: <base64_encoded_htpasswd_file_contents>
4. If you removed one or more users, you must additionally remove existing resources for each
user.
Example output
Be sure to remove the user, otherwise the user can continue using their token as long as it
has not expired.
Example output
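The removal commands are missing here; they are presumably:

$ oc delete user <username>
$ oc delete identity my_htpasswd_provider:<username>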
Prerequisites
Procedure
3. Under the Identity Providers section, select your identity provider from the Add drop-down
menu.
NOTE
You can specify multiple IDPs through the web console without overwriting existing IDPs.
NOTE
OpenShift Container Platform user names containing /, :, and % are not supported.
You can configure the integration with Keystone so that the new OpenShift Container Platform users are based on either the Keystone user names or unique Keystone IDs. With both methods, users log in by entering their Keystone user name and password. Basing the OpenShift Container Platform users on the Keystone ID is more secure: if you delete a Keystone user and create a new Keystone user with the same user name, the new user might otherwise have access to the old user's resources.
Procedure
Create a Secret object that contains the key and certificate by using the following command:
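The command itself is missing here; a sketch using oc create secret tls with placeholder paths:

$ oc create secret tls <secret_name> \
  --cert=</path/to/client.crt> --key=</path/to/client.key> \
  -n openshift-config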
TIP
You can alternatively apply the following YAML to create the secret:
apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
  namespace: openshift-config
type: kubernetes.io/tls
data:
  tls.crt: <base64_encoded_cert>
  tls.key: <base64_encoded_key>
Procedure
Define an OpenShift Container Platform ConfigMap object containing the certificate authority
by using the following command. The certificate authority must be stored in the ca.crt key of
the ConfigMap object.
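The command is missing here; it is presumably an oc create configmap call that stores the bundle under the ca.crt key:

$ oc create configmap ca-config-map \
  --from-file=ca.crt=</path/to/ca> \
  -n openshift-config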
TIP
You can alternatively apply the following YAML to create the config map:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ca-config-map
  namespace: openshift-config
data:
  ca.crt: |
    <CA_certificate_PEM>
Keystone CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: keystoneidp 1
    mappingMethod: claim 2
    type: Keystone
    keystone:
      domainName: default 3
      url: https://keystone.example.com:5000 4
      ca: 5
        name: ca-config-map
      tlsClientCert: 6
        name: client-cert-secret
      tlsClientKey: 7
        name: client-key-secret
1 This provider name is prefixed to provider user names to form an identity name.
2 Controls how mappings are established between this provider’s identities and User objects.
3 Keystone domain name. In Keystone, usernames are domain-specific. Only a single domain is
supported.
4 The URL to use to connect to the Keystone server (required). This must use https.
5 Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-
encoded certificate authority bundle to use in validating server certificates for the configured URL.
6 Optional: Reference to an OpenShift Container Platform Secret object containing the client
certificate to present when making requests to the configured URL.
7 Reference to an OpenShift Container Platform Secret object containing the key for the client
certificate. Required if tlsClientCert is specified.
Additional resources
See Identity provider parameters for information on parameters, such as mappingMethod, that
are common to all identity providers.
Prerequisites
Procedure
$ oc apply -f </path/to/CR>
NOTE
If a CR does not exist, oc apply creates a new CR and might trigger the following
warning: Warning: oc apply should be used on resources created by either
oc create --save-config or oc apply. In this case you can safely ignore this
warning.
2. Log in to the cluster as a user from your identity provider, entering the password when
prompted.
$ oc login -u <username>
3. Confirm that the user logged in successfully, and display the user name.
$ oc whoami
NOTE
OpenShift Container Platform user names containing /, :, and % are not supported.
1. Generate a search filter by combining the attribute and filter in the configured url with the
user-provided user name.
2. Search the directory using the generated filter. If the search does not return exactly one entry,
deny access.
3. Attempt to bind to the LDAP server using the DN of the entry retrieved from the search, and
the user-provided password.
5. If the bind is successful, build an identity using the configured attributes as the identity, email
address, display name, and preferred user name.
The configured url is an RFC 2255 URL, which specifies the LDAP host and search parameters to use.
The syntax of the URL is:
ldap://host:port/basedn?attribute?scope?filter
ldap
    For regular LDAP, use the string ldap. For secure LDAP (LDAPS), use ldaps instead.
host:port
    The name and port of the LDAP server. Defaults to localhost:389 for ldap and localhost:636 for LDAPS.
basedn
    The DN of the branch of the directory where all searches should start from. At the very least, this must be the top of your directory tree, but it could also specify a subtree in the directory.
attribute
    The attribute to search for. Although RFC 2255 allows a comma-separated list of attributes, only the first attribute will be used, no matter how many are provided. If no attributes are provided, the default is to use uid. It is recommended to choose an attribute that will be unique across all entries in the subtree you will be using.
scope
    The scope of the search. Can be either one or sub. If the scope is not provided, the default is to use a scope of sub.
When doing searches, the attribute, filter, and provided user name are combined to create a search filter
that looks like:
(&(<filter>)(<attribute>=<username>))
ldap://ldap.example.com/o=Acme?cn?sub?(enabled=true)
When a client attempts to connect using a user name of bob, the resulting search filter will be (&
(enabled=true)(cn=bob)).
If the LDAP directory requires authentication to search, specify a bindDN and bindPassword to use to
perform the entry search.
Procedure
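The command is missing here; given the callout that follows, it is presumably:

$ oc create secret generic ldap-secret \
  --from-literal=bindPassword=<secret> \
  -n openshift-config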
1 The secret key containing the bindPassword for the --from-literal argument must be called
bindPassword.
TIP
You can alternatively apply the following YAML to create the secret:
apiVersion: v1
kind: Secret
metadata:
  name: ldap-secret
  namespace: openshift-config
type: Opaque
data:
  bindPassword: <base64_encoded_bind_password>
Procedure
Define an OpenShift Container Platform ConfigMap object containing the certificate authority
by using the following command. The certificate authority must be stored in the ca.crt key of
the ConfigMap object.
TIP
You can alternatively apply the following YAML to create the config map:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ca-config-map
  namespace: openshift-config
data:
  ca.crt: |
    <CA_certificate_PEM>
LDAP CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: ldapidp 1
    mappingMethod: claim 2
    type: LDAP
    ldap:
      attributes:
        id: 3
        - dn
        email: 4
        - mail
        name: 5
        - cn
        preferredUsername: 6
        - uid
      bindDN: "" 7
      bindPassword: 8
        name: ldap-secret
      ca: 9
        name: ca-config-map
      insecure: false 10
      url: "ldaps://ldaps.example.com/ou=users,dc=acme,dc=com?uid" 11
1 This provider name is prefixed to the returned user ID to form an identity name.
2 Controls how mappings are established between this provider’s identities and User objects.
3 List of attributes to use as the identity. The first non-empty attribute is used. At least one attribute is required. If none of the listed attributes have a value, authentication fails. Defined attributes are retrieved as raw, allowing for binary values to be used.
4 List of attributes to use as the email address. First non-empty attribute is used.
5 List of attributes to use as the display name. First non-empty attribute is used.
6 List of attributes to use as the preferred user name when provisioning a user for this identity. First
non-empty attribute is used.
7 Optional DN to use to bind during the search phase. Must be set if bindPassword is defined.
8 Optional reference to an OpenShift Container Platform Secret object containing the bind
password. Must be set if bindDN is defined.
9 Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-
encoded certificate authority bundle to use in validating server certificates for the configured URL.
Only used when insecure is false.
10 When true, no TLS connection is made to the server. When false, ldaps:// URLs connect using TLS,
and ldap:// URLs are upgraded to TLS. This must be set to false when ldaps:// URLs are in use, as
these URLs always attempt to connect using TLS.
11 An RFC 2255 URL which specifies the LDAP host and search parameters to use.
NOTE
To whitelist users for an LDAP integration, use the lookup mapping method. Before a
login from LDAP would be allowed, a cluster administrator must create an Identity object
and a User object for each LDAP user.
Additional resources
See Identity provider parameters for information on parameters, such as mappingMethod, that
are common to all identity providers.
Prerequisites
Procedure
$ oc apply -f </path/to/CR>
NOTE
If a CR does not exist, oc apply creates a new CR and might trigger the following
warning: Warning: oc apply should be used on resources created by either
oc create --save-config or oc apply. In this case you can safely ignore this
warning.
2. Log in to the cluster as a user from your identity provider, entering the password when
prompted.
$ oc login -u <username>
3. Confirm that the user logged in successfully, and display the user name.
$ oc whoami
Configure the basic-authentication identity provider for users to log in to OpenShift Container
Platform with credentials validated against a remote identity provider. Basic authentication is a generic
back-end integration mechanism.
NOTE
OpenShift Container Platform user names containing /, :, and % are not supported.
Because basic authentication is generic, you can use this identity provider for advanced authentication
configurations.
IMPORTANT
Basic authentication must use an HTTPS connection to the remote server to prevent
potential snooping of the user ID and password and man-in-the-middle attacks.
With basic authentication configured, users send their user name and password to OpenShift Container
Platform, which then validates those credentials against a remote server by making a server-to-server
request, passing the credentials as a basic authentication header. This requires users to send their
credentials to OpenShift Container Platform during login.
NOTE
This only works for user name/password login mechanisms, and OpenShift Container
Platform must be able to make network requests to the remote authentication server.
User names and passwords are validated against a remote URL that is protected by basic authentication
and returns JSON.
{"error":"Error message"}
{"sub":"userid"} 1
1 The subject must be unique to the authenticated user and must not be able to be modified.
A preferred user name using the preferred_username key. This is useful when the unique,
unchangeable subject is a database key or UID, and a more human-readable name exists. This is
used as a hint when provisioning the OpenShift Container Platform user for the authenticated
identity. For example:
Procedure
Create a Secret object that contains the key and certificate by using the following command:
TIP
You can alternatively apply the following YAML to create the secret:
apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
  namespace: openshift-config
type: kubernetes.io/tls
data:
  tls.crt: <base64_encoded_cert>
  tls.key: <base64_encoded_key>
Procedure
Define an OpenShift Container Platform ConfigMap object containing the certificate authority
by using the following command. The certificate authority must be stored in the ca.crt key of
the ConfigMap object.
TIP
You can alternatively apply the following YAML to create the config map:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ca-config-map
  namespace: openshift-config
data:
  ca.crt: |
    <CA_certificate_PEM>
Basic authentication CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: basicidp 1
    mappingMethod: claim 2
    type: BasicAuth
    basicAuth:
      url: https://www.example.com/remote-idp 3
      ca: 4
        name: ca-config-map
      tlsClientCert: 5
        name: client-cert-secret
      tlsClientKey: 6
        name: client-key-secret
1 This provider name is prefixed to the returned user ID to form an identity name.
2 Controls how mappings are established between this provider’s identities and User objects.
3 URL accepting credentials in Basic authentication headers.
4 Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-
encoded certificate authority bundle to use in validating server certificates for the configured URL.
5 Optional: Reference to an OpenShift Container Platform Secret object containing the client
certificate to present when making requests to the configured URL.
6 Reference to an OpenShift Container Platform Secret object containing the key for the client
certificate. Required if tlsClientCert is specified.
Additional resources
See Identity provider parameters for information on parameters, such as mappingMethod, that
are common to all identity providers.
Prerequisites
Create a custom resource (CR) for your identity providers.
Procedure
1. Apply the defined CR:
$ oc apply -f </path/to/CR>
NOTE
If a CR does not exist, oc apply creates a new CR and might trigger the following
warning: Warning: oc apply should be used on resources created by either
oc create --save-config or oc apply. In this case you can safely ignore this
warning.
2. Log in to the cluster as a user from your identity provider, entering the password when
prompted.
$ oc login -u <username>
3. Confirm that the user logged in successfully, and display the user name.
$ oc whoami
Example /etc/httpd/conf.d/login.conf
<VirtualHost *:443>
# CGI Scripts in here
DocumentRoot /var/www/cgi-bin
# SSL Directives
SSLEngine on
SSLCipherSuite PROFILE=SYSTEM
SSLProxyCipherSuite PROFILE=SYSTEM
SSLCertificateFile /etc/pki/tls/certs/localhost.crt
SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
# Handles authentication
<Location /basic/login.cgi>
AuthType Basic
AuthName "Please Log In"
AuthBasicProvider file
AuthUserFile /etc/httpd/conf/passwords
Require valid-user
</Location>
</VirtualHost>
Example /var/www/cgi-bin/login.cgi
#!/bin/bash
echo "Content-Type: application/json"
echo ""
echo '{"sub":"userid", "name":"'$REMOTE_USER'"}'
exit 0
Example /var/www/cgi-bin/fail.cgi
#!/bin/bash
echo "Content-Type: application/json"
echo ""
echo '{"error": "Login failure"}'
exit 0
These are the requirements for the files you create on an Apache HTTPD web server:
login.cgi and fail.cgi must have proper SELinux contexts if SELinux is enabled: restorecon -
RFv /var/www/cgi-bin, or ensure that the context is httpd_sys_script_exec_t using ls -laZ.
login.cgi is only executed if your user successfully logs in per Require and Auth directives.
fail.cgi is executed if the user fails to log in, resulting in an HTTP 401 response.
The most common issue relates to network connectivity to the backend server. For simple debugging,
run curl commands on the master. To test for a successful login, replace the <user> and <password> in
the following example command with valid credentials. To test an invalid login, replace them with false
credentials.
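A sketch of such a check; the certificate paths and the URL are placeholders for your environment:
$ curl --cacert /path/to/ca.crt --cert /path/to/client.crt --key /path/to/client.key -u <user>:<password> -v https://www.example.com/remote-idp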
Successful responses
{"sub":"userid"}
The subject must be unique to the authenticated user and must not change.
The preferred_username key is useful when the unique, unchangeable subject is a database
key or UID, and a more human-readable name exists. This is used as a hint when provisioning the
OpenShift Container Platform user for the authenticated identity.
Failed responses
A non-200 status or the presence of a non-empty "error" key indicates an error: {"error":"Error
message"}
NOTE
OpenShift Container Platform user names containing /, :, and % are not supported.
NOTE
You can also use the request header identity provider for advanced configurations such
as the community-supported SAML authentication. Note that this solution is not
supported by Red Hat.
For users to authenticate using this identity provider, they must access
https://<namespace_route>/oauth/authorize (and subpaths) via an authenticating proxy. To
accomplish this, configure the OAuth server to redirect unauthenticated requests for OAuth tokens to
the proxy endpoint that proxies to https://<namespace_route>/oauth/authorize.
Set the provider.loginURL parameter to the authenticating proxy URL that will authenticate
interactive clients and then proxy the request to https://<namespace_route>/oauth/authorize.
Set the provider.challengeURL parameter to the authenticating proxy URL that will
authenticate clients expecting WWW-Authenticate challenges and then proxy the request to
https://<namespace_route>/oauth/authorize.
The provider.challengeURL and provider.loginURL parameters can include the following tokens in
the query portion of the URL:
${url} is replaced with the current URL, escaped to be safe in a query parameter.
For example: https://www.example.com/sso-login?then=${url}
${query} is replaced with the current query string, unescaped.
For example: https://www.example.com/auth-proxy/oauth/authorize?${query}
IMPORTANT
As of OpenShift Container Platform 4.1, your proxy must support mutual TLS.
IMPORTANT
Using SSPI connection support on Microsoft Windows is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
For more information about the support scope of Red Hat Technology Preview features,
see Technology Preview Features Support Scope .
The OpenShift CLI (oc) supports the Security Support Provider Interface (SSPI) to allow for SSO flows
on Microsoft Windows. If you use the request header identity provider with a GSSAPI-enabled proxy to
connect an Active Directory server to OpenShift Container Platform, users can automatically
authenticate to OpenShift Container Platform by using the oc command line interface from a domain-
joined Microsoft Windows computer.
Procedure
Define an OpenShift Container Platform ConfigMap object containing the certificate authority
by using the following command. The certificate authority must be stored in the ca.crt key of
the ConfigMap object.
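For example, assuming the certificate authority file is at /path/to/ca:
$ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config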
TIP
You can alternatively apply the following YAML to create the config map:
apiVersion: v1
kind: ConfigMap
metadata:
name: ca-config-map
namespace: openshift-config
data:
ca.crt: |
<CA_certificate_PEM>
Request header CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
name: cluster
spec:
identityProviders:
- name: requestheaderidp 1
mappingMethod: claim 2
type: RequestHeader
requestHeader:
challengeURL: "https://www.example.com/challenging-proxy/oauth/authorize?${query}" 3
loginURL: "https://www.example.com/login-proxy/oauth/authorize?${query}" 4
ca: 5
name: ca-config-map
clientCommonNames: 6
- my-auth-proxy
headers: 7
- X-Remote-User
- SSO-User
emailHeaders: 8
- X-Remote-User-Email
nameHeaders: 9
- X-Remote-User-Display-Name
preferredUsernameHeaders: 10
- X-Remote-User-Login
1 This provider name is prefixed to the user name in the request header to form an identity name.
2 Controls how mappings are established between this provider’s identities and User objects.
3 Optional: URL to redirect unauthenticated /oauth/authorize requests to, that will authenticate
browser-based clients and then proxy their request to
https://<namespace_route>/oauth/authorize. The URL that proxies to
https://<namespace_route>/oauth/authorize must end with /authorize (with no trailing slash),
and also proxy subpaths, in order for OAuth approval flows to work properly. ${url} is replaced with
the current URL, escaped to be safe in a query parameter. ${query} is replaced with the current
query string. If this attribute is not defined, then loginURL must be used.
4 Optional: URL to redirect unauthenticated /oauth/authorize requests to, that will authenticate
clients which expect WWW-Authenticate challenges, and then proxy them to
https://<namespace_route>/oauth/authorize. ${url} is replaced with the current URL, escaped to
be safe in a query parameter. ${query} is replaced with the current query string. If this attribute is
not defined, then challengeURL must be used.
5 Reference to an OpenShift Container Platform ConfigMap object containing a PEM-encoded certificate authority bundle, used to verify the client certificates presented on incoming requests.
IMPORTANT
As of OpenShift Container Platform 4.1, the ca field is required for this identity
provider. This means that your proxy must support mutual TLS.
6 Optional: list of common names (cn). If set, a valid client certificate with a Common Name ( cn) in
the specified list must be presented before the request headers are checked for user names. If
empty, any Common Name is allowed. Can only be used in combination with ca.
7 Header names to check, in order, for the user identity. The first header containing a value is used as
the identity. Required, case-insensitive.
8 Header names to check, in order, for an email address. The first header containing a value is used as
the email address. Optional, case-insensitive.
9 Header names to check, in order, for a display name. The first header containing a value is used as
the display name. Optional, case-insensitive.
10 Header names to check, in order, for a preferred user name, if different than the immutable
identity determined from the headers specified in headers. The first header containing a value is
used as the preferred user name when provisioning. Optional, case-insensitive.
Additional resources
See Identity provider parameters for information on parameters, such as mappingMethod, that
are common to all identity providers.
Prerequisites
Create a custom resource (CR) for your identity providers.
Procedure
1. Apply the defined CR:
$ oc apply -f </path/to/CR>
NOTE
If a CR does not exist, oc apply creates a new CR and might trigger the following
warning: Warning: oc apply should be used on resources created by either
oc create --save-config or oc apply. In this case you can safely ignore this
warning.
2. Log in to the cluster as a user from your identity provider, entering the password when
prompted.
$ oc login -u <username>
3. Confirm that the user logged in successfully, and display the user name.
$ oc whoami
Require the X-Csrf-Token header be set for all authentication requests using the challenge
flow.
Make sure only the /oauth/authorize endpoint and its subpaths are proxied; redirects must be
rewritten to allow the backend server to send the client to the correct location.
NOTE
The https://<namespace_route> address is the route to the OAuth server and can be
obtained by running oc get route -n openshift-authentication.
Prerequisites
Obtain the mod_auth_gssapi module from the Optional channel. You must have the following
packages installed on your local machine:
httpd
mod_ssl
mod_session
apr-util-openssl
mod_auth_gssapi
Generate a CA for validating requests that submit the trusted header. Define an OpenShift
Container Platform ConfigMap object containing the CA. This is done by running:
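For example, assuming the CA certificate file is at /path/to/ca:
$ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config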
TIP
You can alternatively apply the following YAML to create the config map:
apiVersion: v1
kind: ConfigMap
metadata:
name: ca-config-map
namespace: openshift-config
data:
ca.crt: |
<CA_certificate_PEM>
Generate a client certificate for the proxy. You can generate this certificate by using any x509
certificate tooling. The client certificate must be signed by the CA you generated for validating
requests that submit the trusted header.
Procedure
This proxy uses a client certificate to connect to the OAuth server, which is configured to trust the X-
Remote-User header.
1. Create the certificate for the Apache configuration. The certificate that you specify as the
SSLProxyMachineCertificateFile parameter value is the proxy’s client certificate that is used
to authenticate the proxy to the server. It must use TLS Web Client Authentication as the
extended key type.
2. Create the Apache configuration. Use the following template to provide your required settings
and values:
IMPORTANT
Carefully review the template and customize its contents to fit your environment.
# Nothing needs to be served over HTTP. This virtual host simply redirects to
# HTTPS.
<VirtualHost *:80>
DocumentRoot /var/www/html
RewriteEngine On
RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R,L]
</VirtualHost>
<VirtualHost *:443>
# This needs to match the certificates you generated. See the CN and X509v3
# Subject Alternative Name in the output of:
# openssl x509 -text -in /etc/pki/tls/certs/localhost.crt
ServerName www.example.com
DocumentRoot /var/www/html
SSLEngine on
SSLCertificateFile /etc/pki/tls/certs/localhost.crt
SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
SSLCACertificateFile /etc/pki/CA/certs/ca.crt
SSLProxyEngine on
SSLProxyCACertificateFile /etc/pki/CA/certs/ca.crt
# It is critical to enforce client certificates. Otherwise, requests can
# spoof the X-Remote-User header by accessing the /oauth/authorize endpoint
# directly.
SSLProxyMachineCertificateFile /etc/pki/tls/certs/authproxy.pem
<Location /challenging-proxy/oauth/authorize>
# Insert your backend server name/ip here.
ProxyPass https://<namespace_route>/oauth/authorize
AuthName "SSO Login"
# For Kerberos
AuthType GSSAPI
Require valid-user
RequestHeader set X-Remote-User %{REMOTE_USER}s
GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab
# Enable the following if you want to allow users to fallback
# to password based authentication when they do not have a client
# configured to perform kerberos authentication.
GssapiBasicAuth On
# For ldap:
# AuthBasicProvider ldap
# AuthLDAPURL "ldap://ldap.example.com:389/ou=People,dc=my-domain,dc=com?uid?sub?(objectClass=*)"
</Location>
<Location /login-proxy/oauth/authorize>
# Insert your backend server name/ip here.
ProxyPass https://<namespace_route>/oauth/authorize
AuthName "SSO Login"
AuthType GSSAPI
Require valid-user
RequestHeader set X-Remote-User %{REMOTE_USER}s env=REMOTE_USER
GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab
# Enable the following if you want to allow users to fallback
# to password based authentication when they do not have a client
# configured to perform kerberos authentication.
GssapiBasicAuth On
</Location>
</VirtualHost>
3. Update the identityProviders stanza of the OAuth custom resource (CR) to reference the proxy endpoints:
identityProviders:
- name: requestheaderidp
type: RequestHeader
requestHeader:
challengeURL: "https://<namespace_route>/challenging-proxy/oauth/authorize?${query}"
loginURL: "https://<namespace_route>/login-proxy/oauth/authorize?${query}"
ca:
name: ca-config-map
clientCommonNames:
- my-auth-proxy
headers:
- X-Remote-User
a. Confirm that you can bypass the proxy by requesting a token by supplying the correct client
certificate and header:
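For example (the user name in the header and the certificate path are illustrative):
# curl -L -k -H "X-Remote-User: joe" --cert /etc/pki/tls/certs/authproxy.pem https://<namespace_route>/oauth/token/request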
b. Confirm that requests that do not supply the client certificate fail by requesting a token
without the certificate:
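For example, the same request without the client certificate should be rejected:
# curl -L -k -H "X-Remote-User: joe" https://<namespace_route>/oauth/token/request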
d. Run this command to show a 401 response with a WWW-Authenticate basic challenge, a
negotiate challenge, or both challenges:
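A sketch of such a request, where <challenge_url_redirect> is a placeholder for the challenging URL (including its query string) returned by the proxy redirect:
# curl -k -v -H 'X-Csrf-Token: 1' '<challenge_url_redirect>'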
e. Test logging in to the OpenShift CLI (oc) with and without using a Kerberos ticket:
# kdestroy -c cache_name 1
# oc login -u <username>
# oc logout
# kinit
# oc login
If your configuration is correct, you are logged in without entering separate credentials.
You can use the GitHub integration to connect to either GitHub or GitHub Enterprise. For GitHub
Enterprise integrations, you must provide the hostname of your instance and can optionally provide a
CA certificate bundle to use in requests to the server.
NOTE
The following steps apply to both GitHub and GitHub Enterprise unless noted.
NOTE
OpenShift Container Platform user names containing /, :, and % are not supported.
Procedure
For GitHub, click Settings → Developer settings → OAuth Apps → Register a new OAuth
application.
For GitHub Enterprise, go to your GitHub Enterprise home page and then click Settings →
Developer settings → Register a new application.
5. Enter the authorization callback URL, where the end of the URL contains the identity provider
name:
https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-
name>
For example:
https://oauth-openshift.apps.openshift-cluster.example.com/oauth2callback/github
6. Click Register application. GitHub provides a client ID and a client secret. You need these
values to complete the identity provider configuration.
Identity providers use OpenShift Container Platform Secret objects in the openshift-config
namespace to contain the client secret, client certificates, and keys.
Procedure
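For example, where the secret name and client secret value are placeholders:
$ oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config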
TIP
You can alternatively apply the following YAML to create the secret:
apiVersion: v1
kind: Secret
metadata:
name: <secret_name>
namespace: openshift-config
type: Opaque
data:
clientSecret: <base64_encoded_client_secret>
You can define a Secret object containing the contents of a file by using the following
command:
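$ oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config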
Procedure
Define an OpenShift Container Platform ConfigMap object containing the certificate authority
by using the following command. The certificate authority must be stored in the ca.crt key of
the ConfigMap object.
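For example, assuming the certificate authority file is at /path/to/ca:
$ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config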
TIP
You can alternatively apply the following YAML to create the config map:
apiVersion: v1
kind: ConfigMap
metadata:
name: ca-config-map
namespace: openshift-config
data:
ca.crt: |
<CA_certificate_PEM>
GitHub CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
name: cluster
spec:
identityProviders:
- name: githubidp 1
mappingMethod: claim 2
type: GitHub
github:
ca: 3
name: ca-config-map
clientID: {...} 4
clientSecret: 5
name: github-secret
hostname: ... 6
organizations: 7
- myorganization1
- myorganization2
teams: 8
- myorganization1/team-a
- myorganization2/team-b
1 This provider name is prefixed to the GitHub numeric user ID to form an identity name. It is also
used to build the callback URL.
2 Controls how mappings are established between this provider’s identities and User objects.
3 Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-
encoded certificate authority bundle to use in validating server certificates for the configured URL.
Only for use in GitHub Enterprise with a non-publicly trusted root certificate.
4 The client ID of a registered GitHub OAuth application. The application must be configured with a
callback URL of https://oauth-openshift.apps.<cluster-name>.<cluster-
domain>/oauth2callback/<idp-provider-name>.
5 Reference to an OpenShift Container Platform Secret object containing the client secret issued by
GitHub.
6 For GitHub Enterprise, you must provide the hostname of your instance, such as example.com.
This value must match the GitHub Enterprise hostname value in the /setup/settings file and
cannot include a port number. If this value is not set, then either teams or organizations must be
defined. For GitHub, omit this parameter.
7 The list of organizations. Either the organizations or teams field must be set unless the hostname
field is set, or if mappingMethod is set to lookup. Cannot be used in combination with the teams
field.
8 The list of teams. Either the teams or organizations field must be set unless the hostname field is
set, or if mappingMethod is set to lookup. Cannot be used in combination with the organizations
field.
NOTE
If organizations or teams is specified, only GitHub users that are members of at least
one of the listed organizations will be allowed to log in. If the GitHub OAuth application
configured in clientID is not owned by the organization, an organization owner must grant
third-party access to use this option. This can be done during the first GitHub login by
the organization’s administrator, or from the GitHub organization settings.
Additional resources
See Identity provider parameters for information on parameters, such as mappingMethod, that
are common to all identity providers.
Prerequisites
Create a custom resource (CR) for your identity providers.
Procedure
1. Apply the defined CR:
$ oc apply -f </path/to/CR>
NOTE
If a CR does not exist, oc apply creates a new CR and might trigger the following
warning: Warning: oc apply should be used on resources created by either
oc create --save-config or oc apply. In this case you can safely ignore this
warning.
As long as the kubeadmin user has been removed, the oc login command provides instructions on how to access a web page where you can retrieve the token.
You can also access this page from the web console by navigating to (?) Help → Command
Line Tools → Copy Login Command.
$ oc login --token=<token>
NOTE
This identity provider does not support logging in with a user name and password.
4. Confirm that the user logged in successfully, and display the user name.
$ oc whoami
NOTE
OpenShift Container Platform user names containing /, :, and % are not supported.
If you use GitLab version 7.7.0 to 11.0, you connect using the OAuth integration. If you use GitLab
version 11.1 or later, you can use OpenID Connect (OIDC) to connect instead of OAuth.
Procedure
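For example, where the secret name and client secret value are placeholders:
$ oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config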
TIP
You can alternatively apply the following YAML to create the secret:
apiVersion: v1
kind: Secret
metadata:
name: <secret_name>
namespace: openshift-config
type: Opaque
data:
clientSecret: <base64_encoded_client_secret>
You can define a Secret object containing the contents of a file by using the following
command:
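$ oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config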
Procedure
Define an OpenShift Container Platform ConfigMap object containing the certificate authority
by using the following command. The certificate authority must be stored in the ca.crt key of
the ConfigMap object.
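For example, assuming the certificate authority file is at /path/to/ca:
$ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config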
TIP
You can alternatively apply the following YAML to create the config map:
apiVersion: v1
kind: ConfigMap
metadata:
name: ca-config-map
namespace: openshift-config
data:
ca.crt: |
<CA_certificate_PEM>
GitLab CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
name: cluster
spec:
identityProviders:
- name: gitlabidp 1
mappingMethod: claim 2
type: GitLab
gitlab:
clientID: {...} 3
clientSecret: 4
name: gitlab-secret
url: https://gitlab.com 5
ca: 6
name: ca-config-map
1 This provider name is prefixed to the GitLab numeric user ID to form an identity name. It is also
used to build the callback URL.
2 Controls how mappings are established between this provider’s identities and User objects.
3 The client ID of a registered GitLab OAuth application. The application must be configured with a
callback URL of https://oauth-openshift.apps.<cluster-name>.<cluster-
domain>/oauth2callback/<idp-provider-name>.
4 Reference to an OpenShift Container Platform Secret object containing the client secret issued by
GitLab.
5 The host URL of a GitLab provider. This could either be https://gitlab.com/ or any other self-hosted
instance of GitLab.
6 Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-
encoded certificate authority bundle to use in validating server certificates for the configured URL.
Additional resources
See Identity provider parameters for information on parameters, such as mappingMethod, that
are common to all identity providers.
Prerequisites
Create a custom resource (CR) for your identity providers.
Procedure
1. Apply the defined CR:
$ oc apply -f </path/to/CR>
NOTE
If a CR does not exist, oc apply creates a new CR and might trigger the following
warning: Warning: oc apply should be used on resources created by either
oc create --save-config or oc apply. In this case you can safely ignore this
warning.
2. Log in to the cluster as a user from your identity provider, entering the password when
prompted.
$ oc login -u <username>
3. Confirm that the user logged in successfully, and display the user name.
$ oc whoami
NOTE
OpenShift Container Platform user names containing /, :, and % are not supported.
Procedure
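For example, where the secret name and client secret value are placeholders:
$ oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config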
TIP
You can alternatively apply the following YAML to create the secret:
apiVersion: v1
kind: Secret
metadata:
name: <secret_name>
namespace: openshift-config
type: Opaque
data:
clientSecret: <base64_encoded_client_secret>
You can define a Secret object containing the contents of a file by using the following
command:
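$ oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config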
Google CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
name: cluster
spec:
identityProviders:
- name: googleidp 1
mappingMethod: claim 2
type: Google
google:
clientID: {...} 3
clientSecret: 4
name: google-secret
hostedDomain: "example.com" 5
1 This provider name is prefixed to the Google numeric user ID to form an identity name. It is also
used to build the redirect URL.
2 Controls how mappings are established between this provider’s identities and User objects.
3 The client ID of a registered Google project. The project must be configured with a redirect URI of
https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-
provider-name>.
4 Reference to an OpenShift Container Platform Secret object containing the client secret issued by
Google.
5 A hosted domain used to restrict sign-in accounts. Optional if the lookup mappingMethod is
used. If empty, any Google account is allowed to authenticate.
Additional resources
See Identity provider parameters for information on parameters, such as mappingMethod, that
are common to all identity providers.
Prerequisites
Create a custom resource (CR) for your identity providers.
Procedure
1. Apply the defined CR:
$ oc apply -f </path/to/CR>
NOTE
If a CR does not exist, oc apply creates a new CR and might trigger the following
warning: Warning: oc apply should be used on resources created by either
oc create --save-config or oc apply. In this case you can safely ignore this
warning.
As long as the kubeadmin user has been removed, the oc login command provides instructions on how to access a web page where you can retrieve the token.
You can also access this page from the web console by navigating to (?) Help → Command
Line Tools → Copy Login Command.
$ oc login --token=<token>
NOTE
This identity provider does not support logging in with a user name and password.
4. Confirm that the user logged in successfully, and display the user name.
$ oc whoami
NOTE
OpenShift Container Platform user names containing /, :, and % are not supported.
NOTE
By default, the openid scope is requested. If required, extra scopes can be specified in the extraScopes
field.
Claims are read from the JWT id_token returned from the OpenID identity provider and, if specified,
from the JSON returned by the UserInfo URL.
At least one claim must be configured to use as the user’s identity. The standard identity claim is sub.
You can also indicate which claims to use as the user’s preferred user name, display name, and email
address. If multiple claims are specified, the first one with a non-empty value is used. The following table
lists the standard claims:
Claim Description
sub Short for "subject identifier." The remote identity for the user at the
issuer.
preferred_username The preferred user name when provisioning a user. A shorthand name
that the user wants to be referred to as, such as janedoe . Typically a
value corresponding to the user’s login or username in the
authentication system, such as username or email.
NOTE
Unless your OpenID Connect identity provider supports the resource owner password
credentials (ROPC) grant flow, users must get a token from
<namespace_route>/oauth/token/request to use with command-line tools.
NOTE
The following identity providers have been tested with this configuration:
GitLab
Keycloak
Okta
Ping Identity
Procedure
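For example, where the secret name and client secret value are placeholders:
$ oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config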
TIP
You can alternatively apply the following YAML to create the secret:
apiVersion: v1
kind: Secret
metadata:
name: <secret_name>
namespace: openshift-config
type: Opaque
data:
clientSecret: <base64_encoded_client_secret>
You can define a Secret object containing the contents of a file by using the following
command:
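$ oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config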
Procedure
Define an OpenShift Container Platform ConfigMap object containing the certificate authority
by using the following command. The certificate authority must be stored in the ca.crt key of
the ConfigMap object.
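For example, assuming the certificate authority file is at /path/to/ca:
$ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config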
TIP
You can alternatively apply the following YAML to create the config map:
apiVersion: v1
kind: ConfigMap
metadata:
name: ca-config-map
namespace: openshift-config
data:
ca.crt: |
<CA_certificate_PEM>
If you must specify a custom certificate bundle, extra scopes, extra authorization request parameters, or
a userInfo URL, use the full OpenID Connect CR.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
name: cluster
spec:
identityProviders:
- name: oidcidp 1
mappingMethod: claim 2
type: OpenID
openID:
clientID: ... 3
clientSecret: 4
name: idp-secret
claims: 5
preferredUsername:
- preferred_username
name:
- name
email:
- email
groups:
- groups
issuer: https://www.idp-issuer.com 6
1 This provider name is prefixed to the value of the identity claim to form an identity name. It is also
used to build the redirect URL.
2 Controls how mappings are established between this provider’s identities and User objects.
3 The client ID of a client registered with the OpenID provider. The client must be allowed to redirect
to https://oauth-openshift.apps.<cluster_name>.
<cluster_domain>/oauth2callback/<idp_provider_name>.
4 A reference to an OpenShift Container Platform Secret object containing the client secret.
5 The list of claims to use as the identity. The first non-empty claim is used.
6 The Issuer Identifier described in the OpenID spec. Must use https without query or fragment
component.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
name: cluster
spec:
identityProviders:
- name: oidcidp
mappingMethod: claim
type: OpenID
openID:
clientID: ...
clientSecret:
name: idp-secret
ca: 1
name: ca-config-map
extraScopes: 2
- email
- profile
extraAuthorizeParameters: 3
include_granted_scopes: "true"
claims:
preferredUsername: 4
- preferred_username
- email
name: 5
- nickname
- given_name
- name
email: 6
- custom_email_claim
- email
groups: 7
- groups
issuer: https://www.idp-issuer.com
1 Optional: Reference to an OpenShift Container Platform config map containing the PEM-encoded
certificate authority bundle to use in validating server certificates for the configured URL.
2 Optional: The list of scopes to request, in addition to the openid scope, during the authorization
token request.
3 Optional: The map of extra parameters to add to the authorization token request.
4 The list of claims to use as the preferred user name when provisioning a user for this identity. The
first non-empty claim is used.
5 The list of claims to use as the display name. The first non-empty claim is used.
6 The list of claims to use as the email address. The first non-empty claim is used.
7 The list of claims to use to synchronize groups from the OpenID Connect provider to OpenShift
Container Platform upon user login. The first non-empty claim is used.
Additional resources
See Identity provider parameters for information on parameters, such as mappingMethod, that
are common to all identity providers.
Prerequisites
Create a custom resource (CR) for your identity providers.
Procedure
1. Apply the defined CR:
$ oc apply -f </path/to/CR>
NOTE
If a CR does not exist, oc apply creates a new CR and might trigger the following
warning: Warning: oc apply should be used on resources created by either
oc create --save-config or oc apply. In this case you can safely ignore this
warning.
As long as the kubeadmin user has been removed, the oc login command provides instructions
on how to access a web page where you can retrieve the token.
You can also access this page from the web console by navigating to (?) Help → Command
Line Tools → Copy Login Command.
$ oc login --token=<token>
NOTE
If your OpenID Connect identity provider supports the resource owner password
credentials (ROPC) grant flow, you can log in with a user name and password.
You might need to take steps to enable the ROPC grant flow for your identity
provider.
4. Confirm that the user logged in successfully, and display the user name.
$ oc whoami
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Navigate to Administration → Cluster Settings.
2. Select the Configuration tab, and then click the OAuth resource.
3. Under the Identity Providers section, select your identity provider from the Add drop-down
menu.
NOTE
You can specify multiple IDPs through the web console without overwriting existing IDPs.
CHAPTER 8. USING RBAC TO DEFINE AND APPLY PERMISSIONS
Cluster administrators can use the cluster roles and bindings to control who has various access levels to
OpenShift Container Platform itself and all projects.
Developers can use local roles and bindings to control who has access to their projects. Note that
authorization is a separate step from authentication, which is more about determining the identity of
who is taking the action.
Authorization object Description
Rules Sets of permitted verbs on a set of objects. For example, whether a user or service account can create pods.
Roles Collections of rules. You can associate, or bind, users and groups to multiple roles.
Bindings Associations between users and/or groups with a role.
There are two levels of RBAC roles and bindings that control authorization:
Cluster RBAC Roles and bindings that are applicable across all projects. Cluster roles exist cluster-
wide, and cluster role bindings can reference only cluster roles.
Local RBAC Roles and bindings that are scoped to a given project. While local roles exist only in a
single project, local role bindings can reference both cluster and local roles.
A cluster role binding is a binding that exists at the cluster level. A role binding exists at the project level.
The cluster role view must be bound to a user using a local role binding for that user to view the project.
Create local roles only if a cluster role does not provide the set of permissions needed for a particular
situation.
This two-level hierarchy allows reuse across multiple projects through the cluster roles while allowing
customization inside of individual projects through local roles.
During evaluation, both the cluster role bindings and the local role bindings are used. For example:
1. Cluster-wide "allow" rules are checked.
2. Locally bound "allow" rules are checked.
3. Deny by default.
OpenShift Container Platform includes the following default cluster roles. You can bind them to users and groups cluster-wide or locally.
admin A project manager. If used in a local binding, an admin has rights to view any resource
in the project and modify any resource in the project except for quota.
basic-user A user that can get basic information about projects and users.
cluster-admin A super-user that can perform any action in any project. When bound to a user with a
local binding, they have full control over quota and every action on every resource in the
project.
cluster-reader A user that can get or view most of the objects but cannot modify them.
edit A user that can modify most objects in a project but does not have the power to view or
modify roles or bindings.
view A user who cannot make any modifications, but can see most objects in a project. They
cannot view or modify roles or bindings.
Be mindful of the difference between local and cluster bindings. For example, if you bind the cluster-
admin role to a user by using a local role binding, it might appear that this user has the privileges of a
cluster administrator. This is not the case. Binding the cluster-admin to a user in a project grants super
administrator privileges for only that project to the user. That user has the permissions of the cluster
role admin, plus a few additional permissions like the ability to edit rate limits, for that project. This
binding can be confusing via the web console UI, which does not list cluster role bindings that are bound
to true cluster administrators. However, it does list local role bindings that you can use to locally bind
cluster-admin.
The relationships between cluster roles, local roles, cluster role bindings, local role bindings, users,
groups and service accounts are illustrated below.
WARNING
The get pods/exec, get pods/*, and get * rules grant execution privileges when they
are applied to a role. Apply the principle of least privilege and assign only the
minimal RBAC rights required for users and agents. For more information, see
RBAC rules allow execution privileges .
Identity
The user name and list of groups that the user belongs to.
Action
The action you perform. In most cases, this consists of:
Project: The project you access. A project is a Kubernetes namespace with additional
annotations that allows a community of users to organize and manage their content in
isolation from other communities.
Verb: The action itself: get, list, create, update, delete, deletecollection, or watch.
Resource name: The API endpoint that you access.
Bindings
The full list of bindings, the associations between users or groups with a role.
OpenShift Container Platform evaluates authorization by using the following steps:
1. The identity and the project-scoped action are used to find all bindings that apply to the user or
their groups.
2. Bindings are used to locate all the roles that apply.
3. Roles are used to find all the rules that apply.
4. The action is checked against the rules to find a match.
5. If no matching rule is found, the action is then denied by default.
TIP
Remember that users and groups can be associated with, or bound to, multiple roles at the same time.
Project administrators can use the CLI to view local roles and bindings, including a matrix of the verbs
and resources each are associated with.
IMPORTANT
The cluster role bound to the project administrator is limited in a project through a local
binding. It is not bound cluster-wide like the cluster roles granted to the cluster-admin or
system:admin.
Cluster roles are roles defined at the cluster level but can be bound either at the cluster
level or at the project level.
The default admin, edit, view, and cluster-reader cluster roles support cluster role aggregation , where
the cluster rules for each role are dynamically updated as new rules are created. This feature is relevant
only if you extend the Kubernetes API by creating custom resources.
Most objects in the system are scoped by namespace, but some are excepted and have no namespace,
including nodes and users.
A project is a Kubernetes namespace with additional annotations and is the central vehicle by which
access to resources for regular users is managed. A project allows a community of users to organize and
manage their content in isolation from other communities. Users must be given access to projects by
administrators, or if allowed to create projects, automatically have access to their own projects.
The mandatory name is a unique identifier for the project and is most visible when using the CLI
tools or API. The maximum name length is 63 characters.
The optional displayName is how the project is displayed in the web console (defaults to
name).
The optional description can be a more detailed description of the project and is also visible in
the web console.
Object Description
Objects Pods, services, replication controllers, etc.
Policies Rules for which users can or cannot perform actions on objects.
Constraints Quotas for each kind of object that can be limited.
Service accounts Service accounts act automatically with designated access to objects in the project.
Cluster administrators can create projects and delegate administrative rights for the project to any
member of the user community. Cluster administrators can also allow developers to create their own
projects.
Developers and administrators can interact with projects by using the CLI or the web console.
IMPORTANT
Do not run workloads in or share access to default projects. Default projects are reserved
for running core cluster components.
The following default projects are considered highly privileged: default, kube-public,
kube-system, openshift, openshift-infra, openshift-node, and other system-created
projects that have the openshift.io/run-level label set to 0 or 1. Functionality that relies
on admission plugins, such as pod security admission, security context constraints, cluster
resource quotas, and image reference resolution, does not work in highly privileged
projects.
Prerequisites
Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any
resource, including viewing cluster roles and bindings.
Procedure
1. To view the cluster roles and their associated rule sets:
$ oc describe clusterrole.rbac
Example output
Name: admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
.packages.apps.redhat.com [] [] [* create update
patch delete get list watch]
imagestreams [] [] [create delete
deletecollection get list patch update watch create get list watch]
imagestreams.image.openshift.io [] [] [create delete
deletecollection get list patch update watch create get list watch]
secrets [] [] [create delete deletecollection
get list patch update watch get list watch create delete deletecollection patch update]
buildconfigs/webhooks [] [] [create delete
deletecollection get list patch update watch get list watch]
buildconfigs [] [] [create delete
deletecollection get list patch update watch get list watch]
buildlogs [] [] [create delete deletecollection
get list patch update watch get list watch]
deploymentconfigs/scale [] [] [create delete
deletecollection get list patch update watch get list watch]
deploymentconfigs [] [] [create delete
deletecollection get list patch update watch get list watch]
imagestreamimages [] [] [create delete
deletecollection get list patch update watch get list watch]
imagestreammappings [] [] [create delete
deletecollection get list patch update watch get list watch]
imagestreamtags [] [] [create delete
deletecollection get list patch update watch get list watch]
processedtemplates [] [] [create delete
deletecollection get list patch update watch get list watch]
routes [] [] [create delete deletecollection
get list patch update watch get list watch]
list watch]
configmaps [] [] [create delete
deletecollection patch update get list watch]
endpoints [] [] [create delete
deletecollection patch update get list watch]
persistentvolumeclaims [] [] [create delete
deletecollection patch update get list watch]
pods [] [] [create delete deletecollection
patch update get list watch]
replicationcontrollers/scale [] [] [create delete
deletecollection patch update get list watch]
replicationcontrollers [] [] [create delete
deletecollection patch update get list watch]
services [] [] [create delete deletecollection
patch update get list watch]
daemonsets.apps [] [] [create delete
deletecollection patch update get list watch]
deployments.apps/scale [] [] [create delete
deletecollection patch update get list watch]
deployments.apps [] [] [create delete
deletecollection patch update get list watch]
replicasets.apps/scale [] [] [create delete
deletecollection patch update get list watch]
replicasets.apps [] [] [create delete
deletecollection patch update get list watch]
statefulsets.apps/scale [] [] [create delete
deletecollection patch update get list watch]
statefulsets.apps [] [] [create delete
deletecollection patch update get list watch]
horizontalpodautoscalers.autoscaling [] [] [create delete
deletecollection patch update get list watch]
cronjobs.batch [] [] [create delete
deletecollection patch update get list watch]
jobs.batch [] [] [create delete
deletecollection patch update get list watch]
daemonsets.extensions [] [] [create delete
deletecollection patch update get list watch]
deployments.extensions/scale [] [] [create delete
deletecollection patch update get list watch]
deployments.extensions [] [] [create delete
deletecollection patch update get list watch]
ingresses.extensions [] [] [create delete
deletecollection patch update get list watch]
replicasets.extensions/scale [] [] [create delete
deletecollection patch update get list watch]
replicasets.extensions [] [] [create delete
deletecollection patch update get list watch]
replicationcontrollers.extensions/scale [] [] [create delete
deletecollection patch update get list watch]
poddisruptionbudgets.policy [] [] [create delete
deletecollection patch update get list watch]
deployments.apps/rollback [] [] [create delete
deletecollection patch update]
deployments.extensions/rollback [] [] [create delete
deletecollection patch update]
catalogsources.operators.coreos.com [] [] [create update
Name: basic-user
Labels: <none>
Annotations: openshift.io/description: A user that can get basic information about projects.
rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
selfsubjectrulesreviews [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.openshift.io [] [] [create]
clusterroles.rbac.authorization.k8s.io [] [] [get list watch]
clusterroles [] [] [get list]
clusterroles.authorization.openshift.io [] [] [get list]
storageclasses.storage.k8s.io [] [] [get list]
users [] [~] [get]
users.user.openshift.io [] [~] [get]
projects [] [] [list watch]
projects.project.openshift.io [] [] [list watch]
projectrequests [] [] [list]
projectrequests.project.openshift.io [] [] [list]
Name: cluster-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
*.* [] [] [*]
[*] [] [*]
...
2. To view the current set of cluster role bindings, which shows the users and groups that are
bound to various roles:
$ oc describe clusterrolebinding.rbac
Example output
Name: alertmanager-main
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: alertmanager-main
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount alertmanager-main openshift-monitoring
Name: basic-users
Labels: <none>
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
Role:
Kind: ClusterRole
Name: basic-user
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:authenticated
Name: cloud-credential-operator-rolebinding
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: cloud-credential-operator-role
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount default openshift-cloud-credential-operator
Name: cluster-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
Role:
Kind: ClusterRole
Name: cluster-admin
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:masters
Name: cluster-admins
Labels: <none>
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
Role:
Kind: ClusterRole
Name: cluster-admin
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:cluster-admins
User system:admin
Name: cluster-api-manager-rolebinding
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: cluster-api-manager-role
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount default openshift-machine-api
...
Prerequisites
Users with the cluster-admin default cluster role bound cluster-wide can perform any
action on any resource, including viewing local roles and bindings.
Users with the admin default cluster role bound locally can view and manage roles and
bindings in that project.
Procedure
1. To view the current set of local role bindings, which show the users and groups that are bound to
various roles for the current project:
$ oc describe rolebinding.rbac
2. To view the local role bindings for a different project, add the -n flag to the command:
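$ oc describe rolebinding.rbac -n joe-project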
Example output
Name: admin
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: admin
Subjects:
Kind Name Namespace
---- ---- ---------
User kube:admin
Name: system:deployers
Labels: <none>
Annotations: openshift.io/description:
Allows deploymentconfigs in this namespace to rollout pods in
this namespace. It is auto-managed by a controller; remove
subjects to disa...
Role:
Kind: ClusterRole
Name: system:deployer
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount deployer joe-project
Name: system:image-builders
Labels: <none>
Annotations: openshift.io/description:
Allows builds in this namespace to push images to this
namespace. It is auto-managed by a controller; remove subjects
to disable.
Role:
Kind: ClusterRole
Name: system:image-builder
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount builder joe-project
Name: system:image-pullers
Labels: <none>
Annotations: openshift.io/description:
Allows all pods in this namespace to pull images from this
namespace. It is auto-managed by a controller; remove subjects
to disable.
Role:
Kind: ClusterRole
Name: system:image-puller
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:serviceaccounts:joe-project
Binding, or adding, a role to users or groups gives the user or group the access that is granted by the
role. You can add and remove roles to and from users and groups using oc adm policy commands.
You can bind any of the default cluster roles to local users or groups in your project.
Procedure
For example, you can add the admin role to the alice user in the joe project by running:
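$ oc adm policy add-role-to-user admin alice -n joe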
TIP
You can alternatively apply the following YAML to add the role to the user:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: admin-0
namespace: joe
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: alice
2. View the local role bindings and verify the addition in the output:
For example, to view the local role bindings for the joe project:
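$ oc describe rolebinding.rbac -n joe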
Example output
Name: admin
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: admin
Subjects:
Kind Name Namespace
---- ---- ---------
User kube:admin
Name: admin-0
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: admin
Subjects:
Kind Name Namespace
---- ---- ---------
User alice 1
Name: system:deployers
Labels: <none>
Annotations: openshift.io/description:
Allows deploymentconfigs in this namespace to rollout pods in
this namespace. It is auto-managed by a controller; remove
subjects to disa...
Role:
Kind: ClusterRole
Name: system:deployer
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount deployer joe
Name: system:image-builders
Labels: <none>
Annotations: openshift.io/description:
Allows builds in this namespace to push images to this
namespace. It is auto-managed by a controller; remove subjects
to disable.
Role:
Kind: ClusterRole
Name: system:image-builder
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount builder joe
Name: system:image-pullers
Labels: <none>
Annotations: openshift.io/description:
Allows all pods in this namespace to pull images from this
namespace. It is auto-managed by a controller; remove subjects
to disable.
Role:
Kind: ClusterRole
Name: system:image-puller
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:serviceaccounts:joe
Procedure
For example, to create a local role that allows a user to view pods in the blue project, run the
following command:
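For example (the role name podview is illustrative):
$ oc create role podview --verb=get --resource=pod -n blue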
Procedure
For example, to create a cluster role that allows a user to view pods, run the following command:
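For example (the cluster role name podviewonly is illustrative):
$ oc create clusterrole podviewonly --verb=get --resource=pod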
You can use the following commands for local RBAC management.
Command Description
$ oc adm policy who-can <verb> <resource>
Indicates which users can perform an action on a resource.
$ oc adm policy add-role-to-user <role> <username>
Binds a specified role to specified users in the current project.
$ oc adm policy remove-role-from-user <role> <username>
Removes a given role from specified users in the current project.
$ oc adm policy remove-user <username>
Removes specified users and all of their roles in the current project.
$ oc adm policy add-role-to-group <role> <groupname>
Binds a given role to specified groups in the current project.
$ oc adm policy remove-role-from-group <role> <groupname>
Removes a given role from specified groups in the current project.
$ oc adm policy remove-group <groupname>
Removes specified groups and all of their roles in the current project.
You can use the following commands for cluster RBAC management.
Command Description
$ oc adm policy add-cluster-role-to-user <role> <username>
Binds a given role to specified users for all projects in the cluster.
$ oc adm policy remove-cluster-role-from-user <role> <username>
Removes a given role from specified users for all projects in the cluster.
$ oc adm policy add-cluster-role-to-group <role> <groupname>
Binds a given role to specified groups for all projects in the cluster.
$ oc adm policy remove-cluster-role-from-group <role> <groupname>
Removes a given role from specified groups for all projects in the cluster.
Prerequisites
Procedure
NOTE
Before OpenShift Container Platform 4.17, unauthenticated groups were allowed access
to some cluster roles. Clusters updated from versions before OpenShift Container
Platform 4.17 retain this access for unauthenticated groups.
For security reasons, OpenShift Container Platform 4.17 does not allow unauthenticated groups to have
default access to cluster roles.
There are use cases where it might be necessary to add system:unauthenticated to a cluster role.
Cluster administrators can add unauthenticated users to the following cluster roles:
system:scope-impersonation
system:webhook
system:oauth-token-deleter
self-access-reviewer
IMPORTANT
Always verify compliance with your organization’s security standards when modifying
unauthenticated access.
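A minimal sketch of such a binding, assuming the system:webhook cluster role; the binding name is illustrative:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: webhook-unauthenticated
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:webhook
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:unauthenticated
Apply the binding with oc apply -f <file_name>.yaml and audit it the same way as any other cluster role binding.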
CHAPTER 9. REMOVING THE KUBEADMIN USER
This user has the cluster-admin role automatically applied and is treated as the root user for the cluster.
The password is dynamically generated and unique to your OpenShift Container Platform environment.
After installation completes, the password is provided in the installation program’s output. For example:
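INFO Install complete!
INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the CLI for the API server
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com
INFO Login to the console with user: kubeadmin, password: <provided>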
WARNING
If you follow this procedure before another user is a cluster-admin, then OpenShift
Container Platform must be reinstalled. It is not possible to undo this command.
Prerequisites
Procedure
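The removal amounts to deleting the kubeadmin password secret; a minimal form of the command:
$ oc delete secrets kubeadmin -n kube-system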
CHAPTER 10. UNDERSTANDING AND CREATING SERVICE ACCOUNTS
When you use the OpenShift Container Platform CLI or web console, your API token authenticates you
to the API. You can associate a component with a service account so that the component can access the
API without using a regular user’s credentials. For example, service accounts can allow:
Replication controllers to make API calls to create or delete pods.
Applications inside containers to make API calls for discovery purposes.
External applications to make API calls for monitoring or integration purposes.
Each service account’s user name is derived from its project and name:
system:serviceaccount:<project>:<name>
Every service account is also a member of two groups:
Group Description
system:serviceaccounts Includes all service accounts in the system.
system:serviceaccounts:<project> Includes all service accounts in the specified project.
Each service account automatically contains two secrets:
An API token
Credentials for the OpenShift Container Registry
The generated API token and registry credentials do not expire, but you can revoke them by deleting
the secret. When you delete the secret, a new one is automatically generated to take its place.
Procedure
1. To list the service accounts in your current project:
$ oc get sa
Example output
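NAME       SECRETS   AGE
builder    1         2d
default    1         2d
deployer   1         2d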
2. To create a new service account in the current project:
$ oc create sa <service_account_name> 1
1 To create a service account in a different project, specify -n <project_name>.
Example output
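serviceaccount "robot" created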
TIP
You can alternatively apply the following YAML to create the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
name: <service_account_name>
namespace: <current_project>
$ oc describe sa robot
Example output
Name: robot
Namespace: project1
Labels: <none>
Annotations: <none>
Image pull secrets: robot-dockercfg-qzbhb
Mountable secrets: robot-dockercfg-qzbhb
Tokens: robot-token-f4khf
Events: <none>
You can modify the service accounts for the current project. For example, to add the view role
to the robot service account in the top-secret project:
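$ oc policy add-role-to-user view system:serviceaccount:top-secret:robot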
TIP
You can alternatively apply the following YAML to add the role:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: view
namespace: top-secret
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
subjects:
- kind: ServiceAccount
name: robot
namespace: top-secret
You can also grant access to a specific service account in a project. For example, from the
project to which the service account belongs, use the -z flag and specify the
<service_account_name>
IMPORTANT
If you want to grant access to a specific service account in a project, use the -z
flag. Using this flag helps prevent typos and ensures that access is granted to
only the specified service account.
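A minimal sketch of the command form, run from the project that contains the service account:
$ oc policy add-role-to-user <role_name> -z <service_account_name>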
TIP
You can alternatively apply the following YAML to add the role:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: <rolebinding_name>
namespace: <current_project_name>
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: <role_name>
subjects:
- kind: ServiceAccount
name: <service_account_name>
namespace: <current_project_name>
To modify a different namespace, you can use the -n option to indicate the project namespace
it applies to, as shown in the following examples.
For example, to allow all service accounts in all projects to view resources in the my-project
project:
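A sketch of the command, binding the view cluster role to the system:serviceaccounts group in the my-project namespace:
$ oc policy add-role-to-group view system:serviceaccounts -n my-project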
TIP
You can alternatively apply the following YAML to add the role:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: view
namespace: my-project
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:serviceaccounts
To allow all service accounts in the managers project to edit resources in the my-project
project:
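A sketch of the command, binding the edit cluster role to the system:serviceaccounts:managers group in the my-project namespace:
$ oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project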
TIP
You can alternatively apply the following YAML to add the role:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: edit
namespace: my-project
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: edit
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:serviceaccounts:managers
CHAPTER 11. USING SERVICE ACCOUNTS IN APPLICATIONS
When you use the OpenShift Container Platform CLI or web console, your API token authenticates you
to the API. You can associate a component with a service account so that the component can access the
API without using a regular user’s credentials. For example, service accounts can allow:
Each service account’s user name is derived from its project and name:
system:serviceaccount:<project>:<name>
Group    Description
system:serviceaccounts    Includes all service accounts in the system.
system:serviceaccounts:<project>    Includes all service accounts in the specified project.
Each service account automatically contains the following credentials:
An API token
Credentials for the OpenShift Container Registry
The generated API token and registry credentials do not expire, but you can revoke them by deleting
the secret. When you delete the secret, a new one is automatically generated to take its place.
builder    Used by build pods. It is given the system:image-builder role, which allows
pushing images to any image stream in the project using the internal container
image registry.
deployer Used by deployment pods and given the system:deployer role, which allows
viewing and modifying replication controllers and pods in the project.
default Used to run all other pods unless they specify a different service account.
All service accounts in a project are given the system:image-puller role, which allows pulling images
from any image stream in the project using the internal container image registry.
NOTE
Prior to OpenShift Container Platform 4.16, a long-lived service account API token secret
was also generated for each service account that was created. Starting with OpenShift
Container Platform 4.16, this service account API token secret is no longer created.
After upgrading to 4.17, any existing long-lived service account API token secrets are not
deleted and will continue to function. For information about detecting long-lived API
tokens that are in use in your cluster or deleting them if they are not needed, see the Red
Hat Knowledgebase article Long-lived service account API tokens in OpenShift
Container Platform.
This image pull secret is necessary to integrate the OpenShift image registry into the cluster’s user
authentication and authorization system.
However, if you do not enable the ImageRegistry capability or if you disable the integrated OpenShift
image registry in the Cluster Image Registry Operator’s configuration, an image pull secret is not
generated for each service account.
When the integrated OpenShift image registry is disabled on a cluster that previously had it enabled, the
previously generated image pull secrets are deleted automatically.
Procedure
$ oc get sa
Example output
$ oc create sa <service_account_name> 1
Example output
TIP
You can alternatively apply the following YAML to create the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
name: <service_account_name>
namespace: <current_project>
$ oc describe sa robot
Example output
Name: robot
Namespace: project1
Labels: <none>
Annotations: <none>
Image pull secrets: robot-dockercfg-qzbhb
Mountable secrets: robot-dockercfg-qzbhb
Tokens: robot-token-f4khf
Events: <none>
CHAPTER 12. USING A SERVICE ACCOUNT AS AN OAUTH CLIENT
user:info
user:check-access
role:<any_role>:<service_account_namespace>
role:<any_role>:<service_account_namespace>:!
client_id is system:serviceaccount:<service_account_namespace>:<service_account_name>.
client_secret can be any of the API tokens for that service account. For example:
$ oc sa get-token <service_account_name>
serviceaccounts.openshift.io/oauth-redirecturi.<name>
In its simplest form, the annotation can be used to directly specify valid redirect URIs. For example:
"serviceaccounts.openshift.io/oauth-redirecturi.first": "https://example.com"
"serviceaccounts.openshift.io/oauth-redirecturi.second": "https://other.com"
The first and second postfixes in the above example are used to separate the two valid redirect URIs.
In more complex configurations, static redirect URIs may not be enough. For example, perhaps you want
all Ingresses for a route to be considered valid. This is where dynamic redirect URIs via the
serviceaccounts.openshift.io/oauth-redirectreference. prefix come into play.
For example:
"serviceaccounts.openshift.io/oauth-redirectreference.first": "
{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":
{\"kind\":\"Route\",\"name\":\"jenkins\"}}"
Since the value for this annotation contains serialized JSON data, it is easier to see in an expanded
format:
{
"kind": "OAuthRedirectReference",
"apiVersion": "v1",
"reference": {
"kind": "Route",
"name": "jenkins"
}
}
Now you can see that an OAuthRedirectReference allows you to reference the route named jenkins.
Thus, all Ingresses for that route will now be considered valid. The full specification for an
OAuthRedirectReference is:
{
"kind": "OAuthRedirectReference",
"apiVersion": "v1",
"reference": {
"kind": ..., 1
"name": ..., 2
"group": ... 3
}
}
1 kind refers to the type of the object being referenced. Currently, only route is supported.
2 name refers to the name of the object. The object must be in the same namespace as the service
account.
3 group refers to the group of the object. Leave this blank, as the group for a route is the empty
string.
Both annotation prefixes can be combined to override the data provided by the reference object. For
example:
"serviceaccounts.openshift.io/oauth-redirecturi.first": "custompath"
"serviceaccounts.openshift.io/oauth-redirectreference.first": "
{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":
{\"kind\":\"Route\",\"name\":\"jenkins\"}}"
The first postfix is used to tie the annotations together. Assuming that the jenkins route had an Ingress
of https://example.com, now https://example.com/custompath is considered valid, but
https://example.com is not. The format for partially supplying override data is as follows:
Type Syntax
Scheme "https://"
Hostname "//website.com"
Port "//:8000"
Path "examplepath"
NOTE
Specifying a hostname override will replace the hostname data from the referenced
object, which is not likely to be desired behavior.
You can combine any of the above syntax elements by using the following format:
<scheme:>//<hostname><:port>/<path>
The same object can be referenced more than once for more flexibility:
"serviceaccounts.openshift.io/oauth-redirecturi.first": "custompath"
"serviceaccounts.openshift.io/oauth-redirectreference.first": "
{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":
{\"kind\":\"Route\",\"name\":\"jenkins\"}}"
"serviceaccounts.openshift.io/oauth-redirecturi.second": "//:8000"
"serviceaccounts.openshift.io/oauth-redirectreference.second": "
{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":
{\"kind\":\"Route\",\"name\":\"jenkins\"}}"
Assuming that the route named jenkins has an Ingress of https://example.com, then both
https://example.com:8000 and https://example.com/custompath are considered valid.
Static and dynamic annotations can be used at the same time to achieve the desired behavior:
"serviceaccounts.openshift.io/oauth-redirectreference.first": "
{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":
{\"kind\":\"Route\",\"name\":\"jenkins\"}}"
"serviceaccounts.openshift.io/oauth-redirecturi.second": "https://other.com"
CHAPTER 13. SCOPING TOKENS
A scoped token is a token that identifies as a given user but is limited to certain actions by its scope.
Only a user with the cluster-admin role can create scoped tokens.
Scopes are evaluated by converting the set of scopes for a token into a set of PolicyRules. Then, the
request is matched against those rules. The request attributes must match at least one of the scope
rules to be passed to the "normal" authorizer for further authorization checks.
user:full - Allows full read/write access to the API with all of the user’s permissions.
user:info - Allows read-only access to information about the user, such as name and groups.
user:list-projects - Allows read-only access to list the projects the user has access to.
role:<cluster-role name>:<namespace or * for all> - Limits the scope to the rules specified
by the cluster-role, but only in the specified namespace.
NOTE
Caveat: This prevents escalating access. Even if the role allows access to
resources like secrets, rolebindings, and roles, this scope will deny access to
those resources. This helps prevent unexpected escalations. Many people do not
think of a role like edit as being an escalating role, but with access to a secret it is.
system:scope-impersonation
system:webhook
system:oauth-token-deleter
self-access-reviewer
IMPORTANT
Always verify compliance with your organization’s security standards when modifying
unauthenticated access.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Create a YAML file named add-<cluster_role>-unauth.yaml and add the following content:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
name: <cluster_role>access-unauthenticated
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: <cluster_role>
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:unauthenticated
2. Apply the configuration by running the following command:
$ oc apply -f add-<cluster_role>-unauth.yaml
CHAPTER 14. USING BOUND SERVICE ACCOUNT TOKENS
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
You have created a service account. This procedure assumes that the service account is named
build-robot.
Procedure
IMPORTANT
If you change the service account issuer to a custom one, the previous service
account issuer is still trusted for the next 24 hours.
You can force all holders to request a new bound token either by manually
restarting all pods in the cluster or by performing a rolling node restart. Before
performing either action, wait for a new revision of the Kubernetes API server
pods to roll out with your service account issuer changes.
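a. Open the Authentication custom resource for editing; a minimal sketch of the step:
$ oc edit authentications cluster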
b. Set the spec.serviceAccountIssuer field to the desired service account issuer value:
spec:
serviceAccountIssuer: https://test.default.svc 1
1 This value should be a URL from which the recipient of a bound token can source the
public keys necessary to verify the signature of the token. The default is
https://kubernetes.default.svc.
c. Save the file to apply the changes.
d. Wait for a new revision of the Kubernetes API server pods to roll out. It can take several
minutes for all nodes to update to the new revision. Run the following command:
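A sketch of one way to review the condition (the jsonpath expression is illustrative):
$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}{end}'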
Review the NodeInstallerProgressing status condition for the Kubernetes API server to
verify that all nodes are at the latest revision. The output shows
AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 12 1
If the output shows a message similar to one of the following messages, the update is still in
progress. Wait a few minutes and try again.
e. Optional: Force the holder to request a new bound token either by performing a rolling node
restart or by manually restarting all pods in the cluster.
WARNING
Restart nodes sequentially. Wait for the node to become fully available before
restarting the next node. See Rebooting a node gracefully for instructions on how to
drain, restart, and mark a node as schedulable again.
2. Configure a pod to use a bound service account token by using volume projection.
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
securityContext:
runAsNonRoot: true 1
seccompProfile:
type: RuntimeDefault 2
containers:
- image: nginx
name: nginx
volumeMounts:
- mountPath: /var/run/secrets/tokens
name: vault-token
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: [ALL]
serviceAccountName: build-robot 3
volumes:
- name: vault-token
projected:
sources:
- serviceAccountToken:
path: vault-token 4
expirationSeconds: 7200 5
audience: vault 6
1 Prevents the containers from running as root.
2 Sets the default seccomp profile, limiting to essential system calls, to reduce risks.
3 A reference to an existing service account.
4 The path relative to the mount point of the file to project the token into.
5 Optionally set the expiration of the service account token, in seconds. The default
value is 3600 seconds (1 hour), and this value must be at least 600 seconds (10
minutes). The kubelet starts trying to rotate the token if the token is older than 80
percent of its time to live or if the token is older than 24 hours.
6 Optionally set the intended audience of the token. The recipient of a token should
verify that the recipient identity matches the audience claim of the token, and should
otherwise reject the token. The audience defaults to the identifier of the API server.
$ oc create -f pod-projected-svc-token.yaml
The kubelet requests and stores the token on behalf of the pod, makes the token available
to the pod at a configurable file path, and refreshes the token as it approaches expiration.
3. The application that uses the bound token must handle reloading the token when it rotates.
The kubelet rotates the token if it is older than 80 percent of its time to live, or if the token is
older than 24 hours.
Prerequisites
You have created a service account. This procedure assumes that the service account is named
build-robot.
Procedure
Create the bound service account token outside the pod by running the following command:
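A sketch of the command, using oc create token with the build-robot service account from the prerequisites:
$ oc create token build-robot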
Example output
eyJhbGciOiJSUzI1NiIsImtpZCI6IkY2M1N4MHRvc2xFNnFSQlA4eG9GYzVPdnN3NkhIV0tRW
mFrUDRNcWx4S0kifQ.eyJhdWQiOlsiaHR0cHM6Ly9pc3N1ZXIyLnRlc3QuY29tIiwiaHR0cHM6L
y9pc3N1ZXIxLnRlc3QuY29tIiwiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4c
CI6MTY3OTU0MzgzMCwiaWF0IjoxNjc5NTQwMjMwLCJpc3MiOiJodHRwczovL2lzc3VlcjIudGV
zdC5jb20iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImRlZmF1bHQiLCJzZXJ2aWNlYW
Njb3VudCI6eyJuYW1lIjoidGVzdC1zYSIsInVpZCI6ImM3ZjA4MjkwLWIzOTUtNGM4NC04NjI4L
TMzMTM1NTVhNWY1OSJ9fSwibmJmIjoxNjc5NTQwMjMwLCJzdWIiOiJzeXN0ZW06c2Vydmlj
ZWFjY291bnQ6ZGVmYXVsdDp0ZXN0LXNhIn0.WyAOPvh1BFMUl3LNhBCrQeaB5wSynbnCf
ojWuNNPSilT4YvFnKibxwREwmzHpV4LO1xOFZHSi6bXBOmG_o-
m0XNDYL3FrGHd65mymiFyluztxa2lgHVxjw5reIV5ZLgNSol3Y8bJqQqmNg3rtQQWRML2kpJB
XdDHNww0E5XOypmffYkfkadli8lN5QQD-
MhsCbiAF8waCYs8bj6V6Y7uUKTcxee8sCjiRMVtXKjQtooERKm-
CH_p57wxCljIBeM89VdaR51NJGued4hVV5lxvVrYZFu89lBEAq4oyQN_d6N1vBWGXQMyoihn
t_fQjn-NfnlJWk-3NSZDIluDJAv7e-MTEk3geDrHVQKNEzDei2-Un64hSzb-
n1g1M0Vn0885wQBQAePC9UlZm8YZlMNk1tq6wIUKQTMv3HPfi5HtBRqVc2eVs0EfMX4-x-
PHhPCasJ6qLJWyj6DvyQ08dP4DW_TWZVGvKlmId0hzwpg59TTcLR0iCklSEJgAVEEd13Aa_
M0-
faD11L3MhUGxw0qxgOsPczdXUsolSISbefs7OKymzFSIkTAn9sDQ8PHMOsuyxsK8vzfrR-
E0z7MAeguZ2kaIY7cZqbN6WFy0caWgx46hrKem9vCKALefElRYbCg3hcBmowBcRTOqaFHL
NnHghhU1LaRpoFzH7OUarqX9SGQ
Additional resources
CHAPTER 15. MANAGING SECURITY CONTEXT CONSTRAINTS
Default SCCs are created during installation and when you install some Operators or other components.
As a cluster administrator, you can also create your own SCCs by using the OpenShift CLI (oc).
IMPORTANT
Do not modify the default SCCs. Customizing the default SCCs can lead to issues when
some of the platform pods deploy or OpenShift Container Platform is upgraded.
Additionally, the default SCC values are reset to the defaults during some cluster
upgrades, which discards all customizations to those SCCs.
Instead of modifying the default SCCs, create and modify your own SCCs as needed. For
detailed steps, see Creating security context constraints .
Security context constraints allow an administrator to control the following:
Whether a pod can run privileged containers with the allowPrivilegedContainer flag
IMPORTANT
Do not modify the default SCCs. Customizing the default SCCs can lead to issues when
some of the platform pods deploy or OpenShift Container Platform is upgraded.
Additionally, the default SCC values are reset to the defaults during some cluster
upgrades, which discards all customizations to those SCCs.
Instead of modifying the default SCCs, create and modify your own SCCs as needed. For
detailed steps, see Creating security context constraints .
anyuid Provides all features of the restricted SCC, but allows users to run with any UID
and any GID.
hostaccess Allows access to all host namespaces but still requires pods to be run with a UID
and SELinux context that are allocated to the namespace.
hostmount-anyuid Provides all the features of the restricted SCC, but allows host mounts and
running as any UID and any GID on the system.
hostnetwork Allows using host networking and host ports but still requires pods to be run with a
UID and SELinux context that are allocated to the namespace.
hostnetwork-v2 Like the hostnetwork SCC, but with the following differences:
nonroot Provides all features of the restricted SCC, but allows users to run with any non-
root UID. The user must specify the UID or it must be specified in the manifest of
the container runtime.
nonroot-v2 Like the nonroot SCC, but with the following differences:
privileged Allows access to all privileged and host features and the ability to run as any user,
any group, any FSGroup, and with any SELinux context.
WARNING
This is the most relaxed SCC and should be used only for
cluster administration. Grant with caution.
restricted Denies access to all host features and requires pods to be run with a UID, and
SELinux context that are allocated to the namespace.
In clusters that were upgraded from OpenShift Container Platform 4.10 or earlier,
this SCC is available for use by any authenticated user. The restricted SCC is no
longer available to users of new OpenShift Container Platform 4.11 or later
installations, unless the access is explicitly granted.
restricted-v2 Like the restricted SCC, but with the following differences:
This is the most restrictive SCC provided by a new installation and will be used by
default for authenticated users.
Category Description
Controlled by a boolean Fields of this type default to the most restrictive value. For example,
AllowPrivilegedContainer is always set to false if unspecified.
Controlled by an allowable set    Fields of this type are checked against the set to ensure their value is allowed.
CRI-O has the following default list of capabilities that are allowed for each container of a pod:
CHOWN
DAC_OVERRIDE
FSETID
FOWNER
SETGID
SETUID
SETPCAP
NET_BIND_SERVICE
KILL
The containers use the capabilities from this default list, but pod manifest authors can alter the list by
requesting additional capabilities or removing some of the default behaviors. Use the
allowedCapabilities, defaultAddCapabilities, and requiredDropCapabilities parameters to control
such requests from the pods. With these parameters you can specify which capabilities can be
requested, which ones must be added to each container, and which ones must be forbidden, or dropped,
from each container.
NOTE
You can drop all capabilities from containers by setting the requiredDropCapabilities
parameter to ALL. This is what the restricted-v2 SCC does.
RunAsUser
...
runAsUser:
type: MustRunAs
uid: <id>
...
MustRunAsRange - Requires minimum and maximum values to be defined if not using pre-
allocated values. Uses the minimum as the default. Validates against the entire allowable range.
...
runAsUser:
type: MustRunAsRange
uidRangeMax: <maxvalue>
uidRangeMin: <minvalue>
...
MustRunAsNonRoot - Requires that the pod be submitted with a non-zero runAsUser or have
the USER directive defined in the image. No default provided.
...
runAsUser:
type: MustRunAsNonRoot
...
...
runAsUser:
type: RunAsAny
...
SELinuxContext
SupplementalGroups
MustRunAs - Requires at least one range to be specified if not using pre-allocated values. Uses
the minimum value of the first range as the default. Validates against all ranges.
FSGroup
MustRunAs - Requires at least one range to be specified if not using pre-allocated values. Uses
the minimum value of the first range as the default. Validates against the first ID in the first
range.
The allowable values of this field correspond to the volume sources that are defined when creating a
volume:
awsElasticBlockStore
azureDisk
azureFile
cephFS
cinder
configMap
csi
downwardAPI
emptyDir
fc
flexVolume
flocker
gcePersistentDisk
ephemeral
gitRepo
glusterfs
hostPath
iscsi
nfs
persistentVolumeClaim
photonPersistentDisk
portworxVolume
projected
quobyte
rbd
scaleIO
secret
storageos
vsphereVolume
none (A special value to disallow the use of all volume types. Exists only for backwards
compatibility.)
The recommended minimum set of allowed volumes for new SCCs is configMap, downwardAPI,
emptyDir, persistentVolumeClaim, secret, and projected.
NOTE
This list of allowable volume types is not exhaustive because new types are added with
each release of OpenShift Container Platform.
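As an illustration, the volumes field of a custom SCC limited to the recommended minimum set might look like the following sketch:
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret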
NOTE
In terms of the SCCs, this means that an admission controller can inspect the user information made
available in the context to retrieve an appropriate set of SCCs. Doing so ensures the pod is authorized
to make requests about its operating environment or to generate a set of constraints to apply to the
pod.
The set of SCCs that admission uses to authorize a pod are determined by the user identity and groups
that the user belongs to. Additionally, if the pod specifies a service account, the set of allowable SCCs
includes any constraints accessible to the service account.
NOTE
When you create a workload resource, such as deployment, only the service account is
used to find the SCCs and admit the pods when they are created.
Admission uses the following approach to create the final security context for the pod:
1. Retrieve all SCCs available for use.
2. Generate field values for security context settings that were not specified on the request.
3. Validate the final values against the available constraints.
If a matching set of constraints is found, then the pod is accepted. If the request cannot be matched to
an SCC, the pod is rejected.
A pod must validate every field against the SCC. The following are examples for just two of the fields
that must be validated:
NOTE
These examples are in the context of a strategy using the pre-allocated values.
If the pod defines a fsGroup ID, then that ID must equal the default fsGroup ID. Otherwise, the pod is
not validated by that SCC and the next SCC is evaluated.
If the SecurityContextConstraints.fsGroup field has value RunAsAny and the pod specification omits
the Pod.spec.securityContext.fsGroup, then this field is considered valid. Note that it is possible that
during validation, other SCC settings will reject other pod fields and thus cause the pod to fail.
If the pod specification defines one or more supplementalGroups IDs, then the pod’s IDs must equal
one of the IDs in the namespace’s openshift.io/sa.scc.supplemental-groups annotation. Otherwise,
the pod is not validated by that SCC and the next SCC is evaluated.
A priority value of 0 is the lowest possible priority. A nil priority is considered a 0, or lowest, priority.
Higher priority SCCs are moved to the front of the set when sorting.
When the complete set of available SCCs is determined, the SCCs are ordered in the following manner:
1. The SCCs are sorted by priority, with the highest priority first.
2. If the priorities are equal, the SCCs are sorted from most restrictive to least restrictive.
3. If both the priorities and restrictions are equal, the SCCs are sorted by name.
By default, the anyuid SCC granted to cluster administrators is given priority in their SCC set. This
allows cluster administrators to run pods as any user by specifying RunAsUser in the pod’s
SecurityContext.
The admission controller is aware of certain conditions in the security context constraints (SCCs) that
trigger it to look up pre-allocated values from a namespace and populate the SCC before processing
the pod. Each SCC strategy is evaluated independently of other strategies, with the pre-allocated
values, where allowed, for each policy aggregated with pod specification values to make the final values
for the various IDs defined in the running pod.
The following SCCs cause the admission controller to look for pre-allocated values when no ranges are
defined in the pod specification:
1. A RunAsUser strategy of MustRunAsRange with no minimum or maximum set. Admission looks
for the openshift.io/sa.scc.uid-range annotation to populate range fields.
2. An SELinuxContext strategy of MustRunAs with no level set. Admission looks for the
openshift.io/sa.scc.mcs annotation to populate the level.
During the generation phase, the security context provider uses default values for any parameter values
that are not specifically set in the pod. Default values are based on the selected strategy:
1. RunAsAny and MustRunAsNonRoot strategies do not provide default values. If the pod needs
a parameter value, such as a group ID, you must define the value in the pod specification.
2. MustRunAs (single value) strategies provide a default value that is always used. For example,
for group IDs, even if the pod specification defines its own ID value, the namespace’s default
parameter value also appears in the pod’s groups.
NOTE
By default, the annotation-based FSGroup strategy configures itself with a single range
based on the minimum value for the annotation. For example, if your annotation reads
1/3, the FSGroup strategy configures itself with a minimum and maximum value of 1. If
you want to allow more groups to be accepted for the FSGroup field, you can configure a
custom SCC that does not use the annotation.
allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegedContainer: true
allowedCapabilities: 1
- '*'
apiVersion: security.openshift.io/v1
defaultAddCapabilities: [] 2
fsGroup: 3
type: RunAsAny
groups: 4
- system:cluster-admins
- system:nodes
kind: SecurityContextConstraints
metadata:
annotations:
kubernetes.io/description: 'privileged allows access to all privileged and host
features and the ability to run as any user, any group, any fsGroup, and with
any SELinux context. WARNING: this is the most relaxed SCC and should be used
only for cluster administration. Grant with caution.'
creationTimestamp: null
name: privileged
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities: 5
- KILL
- MKNOD
- SETUID
- SETGID
runAsUser: 6
type: RunAsAny
seLinuxContext: 7
type: RunAsAny
seccompProfiles:
- '*'
supplementalGroups: 8
type: RunAsAny
users: 9
- system:serviceaccount:default:registry
- system:serviceaccount:default:router
- system:serviceaccount:openshift-infra:build-controller
volumes: 10
- '*'
1 A list of capabilities that a pod can request. An empty list means that none of the capabilities can be
requested, while the special symbol * allows any capabilities.
3 The FSGroup strategy, which dictates the allowable values for the security context.
5 A list of capabilities to drop from a pod. Or, specify ALL to drop all capabilities.
6 The runAsUser strategy type, which dictates the allowable values for the security context.
7 The seLinuxContext strategy type, which dictates the allowable values for the security context.
8 The supplementalGroups strategy, which dictates the allowable supplemental groups for the
security context.
10 The allowable volume types for the security context. In the example, * allows the use of all volume
types.
The users and groups fields on the SCC control which users can access the SCC. By default, cluster
administrators, nodes, and the build controller are granted access to the privileged SCC. All
authenticated users are granted access to the restricted-v2 SCC.
apiVersion: v1
kind: Pod
metadata:
name: security-context-demo
spec:
securityContext: 1
containers:
- name: sec-ctx-demo
image: gcr.io/google-samples/node-hello:1.0
1 When a container or pod does not request a user ID under which it should be run, the effective UID
depends on the SCC that admits the pod. Because the restricted-v2 SCC is granted to all
authenticated users by default, it is available to all users and service accounts and is used in most
cases. The restricted-v2 SCC uses the MustRunAsRange strategy for constraining and defaulting the
possible values of the securityContext.runAsUser field. Because the pod does not provide this range,
the admission plugin looks for the openshift.io/sa.scc.uid-range annotation on the current project to
populate the range fields. In the end, the container has a runAsUser equal to the first value of the
range, which is hard to predict because every project has a different range.
apiVersion: v1
kind: Pod
metadata:
name: security-context-demo
spec:
securityContext:
runAsUser: 1000 1
containers:
- name: sec-ctx-demo
image: gcr.io/google-samples/node-hello:1.0
1 A container or pod that requests a specific user ID will be accepted by OpenShift Container
Platform only when a service account or a user is granted access to a SCC that allows such a user
ID. The SCC can allow arbitrary IDs, an ID that falls into a range, or the exact user ID specific to the
request.
IMPORTANT
Creating and modifying your own SCCs are advanced operations that might cause
instability to your cluster. If you have questions about using your own SCCs, contact Red
Hat Support. For information about contacting Red Hat support, see Getting support.
Prerequisites
Procedure
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
name: scc-admin
allowPrivilegedContainer: true
runAsUser:
type: RunAsAny
seLinuxContext:
type: RunAsAny
fsGroup:
type: RunAsAny
supplementalGroups:
type: RunAsAny
users:
- my-admin-user
groups:
- my-admin-group
Optionally, you can drop specific capabilities for an SCC by setting the
requiredDropCapabilities field with the desired values. Any specified capabilities are dropped
from the container. To drop all capabilities, specify ALL. For example, to create an SCC that
drops the KILL, MKNOD, and SYS_CHROOT capabilities, add the following to the SCC object:
requiredDropCapabilities:
- KILL
- MKNOD
- SYS_CHROOT
NOTE
CRI-O supports the same list of capability values that are found in the Docker documentation.
$ oc create -f scc-admin.yaml
Example output
Verification
Example output
To require a specific SCC, set the openshift.io/required-scc annotation on your workload. You can set
this annotation on any resource that can set a pod manifest template, such as a deployment or daemon
set.
The SCC must exist in the cluster and must be applicable to the workload, otherwise pod admission fails.
An SCC is considered applicable to the workload if the user creating the pod or the pod’s service
account has use permissions for the SCC in the pod’s namespace.
Prerequisites
Procedure
1. Create a YAML file for the deployment and specify a required SCC by setting the
openshift.io/required-scc annotation:
Example deployment.yaml
apiVersion: apps/v1
kind: Deployment
spec:
# ...
template:
metadata:
annotations:
openshift.io/required-scc: "my-scc" 1
# ...
$ oc create -f deployment.yaml
Verification
a. View the value of the pod’s openshift.io/scc annotation by running the following command:
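A sketch of the command (the jsonpath expression is illustrative; <pod_name> is a placeholder):
$ oc get pod <pod_name> -o jsonpath='{.metadata.annotations.openshift\.io/scc}{"\n"}'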
b. Examine the output and confirm that the displayed SCC matches the SCC that you defined
in the deployment:
Example output
my-scc
IMPORTANT
Do not run workloads in or share access to default projects. Default projects are reserved
for running core cluster components.
The following default projects are considered highly privileged: default, kube-public,
kube-system, openshift, openshift-infra, openshift-node, and other system-created
projects that have the openshift.io/run-level label set to 0 or 1. Functionality that relies
on admission plugins, such as pod security admission, security context constraints, cluster
resource quotas, and image reference resolution, does not work in highly privileged
projects.
To include access to SCCs for your role, specify the scc resource when creating a role.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
...
name: role-name 1
namespace: namespace 2
...
rules:
- apiGroups:
- security.openshift.io 3
resourceNames:
- scc-name 4
resources:
- securitycontextconstraints 5
verbs: 6
- use
3 The API group that includes the SecurityContextConstraints resource. Automatically defined
when scc is specified as a resource.
5 Name of the resource group that allows users to specify SCC names in the resourceNames field.
A local or cluster role with such a rule allows the subjects that are bound to it with a role binding or a
cluster role binding to use the user-defined SCC called scc-name.
NOTE
Because RBAC is designed to prevent escalation, even project administrators are unable
to grant access to an SCC. By default, they are not allowed to use the verb use on SCC
resources, including the restricted-v2 SCC.
$ oc get scc
Example output
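The per-SCC details that follow can be displayed with a command similar to this sketch:
$ oc describe scc restricted
Example output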
Name: restricted
Priority: <none>
Access:
Users: <none> 1
Groups: <none> 2
Settings:
Allow Privileged: false
Allow Privilege Escalation: true
Default Add Capabilities: <none>
Required Drop Capabilities: KILL,MKNOD,SETUID,SETGID
Allowed Capabilities: <none>
Allowed Seccomp Profiles: <none>
Allowed Volume Types:
configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret
Allowed Flexvolumes: <all>
Allowed Unsafe Sysctls: <none>
Forbidden Sysctls: <none>
Allow Host Network: false
Allow Host Ports: false
Allow Host PID: false
Allow Host IPC: false
Read Only Root Filesystem: false
Run As User Strategy: MustRunAsRange
UID: <none>
UID Range Min: <none>
UID Range Max: <none>
SELinux Context Strategy: MustRunAs
User: <none>
Role: <none>
Type: <none>
Level: <none>
FSGroup Strategy: MustRunAs
Ranges: <none>
Supplemental Groups Strategy: RunAsAny
Ranges: <none>
1 Lists which users and service accounts the SCC is applied to.
IMPORTANT
To preserve customized SCCs during upgrades, do not edit settings on the default SCCs.
To delete an SCC:
IMPORTANT
Do not delete default SCCs. If you delete a default SCC, it is regenerated by the Cluster
Version Operator.
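A sketch of the command, where <scc_name> is the name of the SCC to delete:
$ oc delete scc <scc_name>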
CHAPTER 16. UNDERSTANDING AND MANAGING POD SECURITY ADMISSION
Globally, the privileged profile is enforced, and the restricted profile is used for warnings and audits.
You can also configure the pod security admission settings at the namespace level.
IMPORTANT
Do not run workloads in or share access to default projects. Default projects are reserved
for running core cluster components.
The following default projects are considered highly privileged: default, kube-public,
kube-system, openshift, openshift-infra, openshift-node, and other system-created
projects that have the openshift.io/run-level label set to 0 or 1. Functionality that relies
on admission plugins, such as pod security admission, security context constraints, cluster
resource quotas, and image reference resolution, does not work in highly privileged
projects.
audit    pod-security.kubernetes.io/audit    Logs audit events if a pod does not comply with the set profile
warn    pod-security.kubernetes.io/warn    Displays warnings if a pod does not comply with the set profile
Profile Description
restricted Most restrictive policy; follows current pod hardening best practices
The following system namespaces are always set to the privileged pod security profile:
default
kube-public
kube-system
You cannot change the pod security profile for these privileged namespaces.
1. The security context constraint controller may mutate some security context fields according to the
pod’s assigned SCC. For example, if the seccomp profile is empty or not set and if the pod’s assigned
SCC enforces the seccompProfiles field to be runtime/default, the controller sets the default
type to RuntimeDefault.
2. The security context constraint controller validates the pod’s security context against the
matching SCC.
3. The pod security admission controller validates the pod’s security context against the pod
security standard assigned to the namespace.
The controller examines ServiceAccount object permissions to use security context constraints in each
namespace. Security context constraints (SCCs) are mapped to pod security profiles based on their
field values; the controller uses these translated profiles. Pod security admission warn and audit labels
are set to the most privileged pod security profile in the namespace to prevent displaying warnings and
logging audit events when pods are created.
Applying pods directly might use the SCC privileges of the user who runs the pod. However, user
privileges are not considered during automatic labeling.
IMPORTANT
If necessary, you can enable synchronization again by using one of the following methods:
By removing the modified pod security admission label from the namespace
By setting the security.openshift.io/scc.podSecurityLabelSync label to true
Pod security admission label synchronization is permanently disabled in the following system namespaces:
default
kube-node-lease
kube-system
kube-public
openshift
All system-created namespaces that are prefixed with openshift-, except for openshift-operators
Procedure
For each namespace that you want to configure, set a value for the
security.openshift.io/scc.podSecurityLabelSync label:
To disable pod security admission label synchronization in a namespace, set the value of the
security.openshift.io/scc.podSecurityLabelSync label to false.
Run the following command:
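A sketch of the command:
$ oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=false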
To enable pod security admission label synchronization in a namespace, set the value of the
security.openshift.io/scc.podSecurityLabelSync label to true.
Run the following command:
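A sketch of the command:
$ oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true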
NOTE
Use the --overwrite flag to overwrite the value if this label is already set on the
namespace.
Additional resources
Procedure
For each pod security admission mode that you want to set on a namespace, run the following
command:
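A sketch of the command form, where <mode> is enforce, warn, or audit, and <profile> is a pod security profile such as privileged or restricted:
$ oc label namespace <namespace> pod-security.kubernetes.io/<mode>=<profile> --overwrite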
View the Kubernetes API server audit logs to investigate alerts that were triggered. As an example, a
workload is likely to fail admission if global enforcement is set to the restricted pod security level.
For assistance in identifying pod security admission violation audit events, see Audit annotations in the
Kubernetes documentation.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
Example output
1 test-namespace my-pod
CHAPTER 17. IMPERSONATING THE SYSTEM:ADMIN USER
Procedure
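The command form, sketched from the cluster role binding that the TIP below expresses as YAML:
$ oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --user=<username>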
TIP
You can alternatively apply the following YAML to grant permission to impersonate
system:admin:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: <any_valid_name>
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: sudoer
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: <username>
Procedure
As a cluster administrator, you can add unauthenticated users to the following cluster roles in OpenShift
Container Platform by creating a cluster role binding. Unauthenticated users do not have access to non-
public cluster roles. This should only be done in specific use cases when necessary.
system:scope-impersonation
system:webhook
system:oauth-token-deleter
self-access-reviewer
IMPORTANT
Always verify compliance with your organization’s security standards when modifying
unauthenticated access.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Create a YAML file named add-<cluster_role>-unauth.yaml and add the following content:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
name: <cluster_role>access-unauthenticated
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: <cluster_role>
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:unauthenticated
2. Apply the configuration by running the following command:
$ oc apply -f add-<cluster_role>-unauth.yaml
CHAPTER 18. SYNCING LDAP GROUPS
For more information on configuring LDAP, see Configuring an LDAP identity provider .
The sync configuration file contains:
Sync configuration options that are dependent on the schema used in your LDAP server.
An administrator-defined list of name mappings that maps OpenShift Container Platform group
names to groups in your LDAP server.
The format of the configuration file depends upon the schema you are using: RFC 2307, Active
Directory, or augmented Active Directory.
The LDAP client configuration section of the configuration defines the connections to your LDAP
server.
url: ldap://10.0.0.0:389 1
bindDN: cn=admin,dc=example,dc=com 2
bindPassword: <password> 3
insecure: false 4
ca: my-ldap-ca-bundle.crt 5
1 The connection protocol, IP address of the LDAP server hosting your database, and the port to
connect to, formatted as scheme://host:port.
2 Optional distinguished name (DN) to use as the Bind DN. OpenShift Container Platform uses this if
elevated privilege is required to retrieve entries for the sync operation.
3 Optional password to use to bind. OpenShift Container Platform uses this if elevated privilege is
necessary to retrieve entries for the sync operation. This value may also be provided in an
environment variable, external file, or encrypted file.
4 When false, secure LDAP (ldaps://) URLs connect using TLS, and insecure LDAP (ldap://) URLs
are upgraded to TLS. When true, no TLS connection is made to the server and you cannot use
ldaps:// URL schemes.
5 The certificate bundle to use for validating server certificates for the configured URL. If empty,
OpenShift Container Platform uses system-trusted roots. This only applies if insecure is set to
false.
baseDN: ou=users,dc=example,dc=com 1
scope: sub 2
derefAliases: never 3
timeout: 0 4
filter: (objectClass=person) 5
pageSize: 0 6
1 The distinguished name (DN) of the branch of the directory where all searches will start from. It is
required that you specify the top of your directory tree, but you can also specify a subtree in the
directory.
2 The scope of the search. Valid values are base, one, or sub. If this is left undefined, then a scope of
sub is assumed. Descriptions of the scope options can be found in the table below.
3 The behavior of the search with respect to aliases in the LDAP tree. Valid values are never, search,
base, or always. If this is left undefined, then the default is to always dereference aliases.
Descriptions of the dereferencing behaviors can be found in the table below.
4 The time limit allowed for the search by the client, in seconds. A value of 0 imposes no client-side
limit.
5 A valid LDAP search filter. If this is left undefined, then the default is (objectClass=*).
6 The optional maximum size of response pages from the server, measured in LDAP entries. If set to
0, no size restrictions will be made on pages of responses. Setting paging sizes is necessary when
queries return more entries than the client or server allow by default.
base Only consider the object specified by the base DN given for the query.
one Consider all of the objects on the same level in the tree as the base DN for the query.
sub Consider the entire subtree rooted at the base DN given for the query.
Dereferencing behavior    Description
never    Never dereferences any aliases that are found in the LDAP tree.
search    Only dereferences aliases that are found while searching.
base    Only dereferences aliases while finding the base object.
always    Always dereferences all aliases that are found in the LDAP tree.
groupUIDNameMapping:
"cn=group1,ou=groups,dc=example,dc=com": firstgroup
"cn=group2,ou=groups,dc=example,dc=com": secondgroup
"cn=group3,ou=groups,dc=example,dc=com": thirdgroup
For clarity, the group you create in OpenShift Container Platform should use attributes other than the
distinguished name whenever possible for user- or administrator-facing fields. For example, identify the
users of an OpenShift Container Platform group by their e-mail, and use the name of the group as the
common name. The following configuration file creates these relationships:
NOTE
kind: LDAPSyncConfig
apiVersion: v1
url: ldap://LDAP_SERVICE_IP:389 1
insecure: false 2
bindDN: cn=admin,dc=example,dc=com
bindPassword:
file: "/etc/secrets/bindPassword"
rfc2307:
groupsQuery:
baseDN: "ou=groups,dc=example,dc=com"
scope: sub
derefAliases: never
pageSize: 0
groupUIDAttribute: dn 3
groupNameAttributes: [ cn ] 4
groupMembershipAttributes: [ member ] 5
usersQuery:
baseDN: "ou=users,dc=example,dc=com"
scope: sub
derefAliases: never
pageSize: 0
userUIDAttribute: dn 6
userNameAttributes: [ mail ] 7
tolerateMemberNotFoundErrors: false
tolerateMemberOutOfScopeErrors: false
1 The IP address and host of the LDAP server where this group’s record is stored.
2 When false, secure LDAP (ldaps://) URLs connect using TLS, and insecure LDAP ( ldap://) URLs
are upgraded to TLS. When true, no TLS connection is made to the server and you cannot use
ldaps:// URL schemes.
3 The attribute that uniquely identifies a group on the LDAP server. You cannot specify
groupsQuery filters when using DN for groupUIDAttribute. For fine-grained filtering, use the
whitelist / blacklist method.
6 The attribute that uniquely identifies a user on the LDAP server. You cannot specify usersQuery
filters when using DN for userUIDAttribute. For fine-grained filtering, use the whitelist / blacklist
method.
7 The attribute to use as the name of the user in the OpenShift Container Platform group record.
For clarity, the group you create in OpenShift Container Platform should use attributes other than the
distinguished name whenever possible for user- or administrator-facing fields. For example, identify the
users of an OpenShift Container Platform group by their e-mail, but define the name of the group by
the name of the group on the LDAP server. The following configuration file creates these relationships:
kind: LDAPSyncConfig
apiVersion: v1
url: ldap://LDAP_SERVICE_IP:389
activeDirectory:
usersQuery:
baseDN: "ou=users,dc=example,dc=com"
scope: sub
derefAliases: never
filter: (objectclass=person)
pageSize: 0
userNameAttributes: [ mail ] 1
groupMembershipAttributes: [ memberOf ] 2
1 The attribute to use as the name of the user in the OpenShift Container Platform group record.
For clarity, the group you create in OpenShift Container Platform should use attributes other than the
distinguished name whenever possible for user- or administrator-facing fields. For example, identify the
users of an OpenShift Container Platform group by their e-mail, and use the name of the group as the
common name. The following configuration file creates these relationships.
kind: LDAPSyncConfig
apiVersion: v1
url: ldap://LDAP_SERVICE_IP:389
augmentedActiveDirectory:
groupsQuery:
baseDN: "ou=groups,dc=example,dc=com"
scope: sub
derefAliases: never
pageSize: 0
groupUIDAttribute: dn 1
groupNameAttributes: [ cn ] 2
usersQuery:
baseDN: "ou=users,dc=example,dc=com"
scope: sub
derefAliases: never
filter: (objectclass=person)
pageSize: 0
userNameAttributes: [ mail ] 3
groupMembershipAttributes: [ memberOf ] 4
1 The attribute that uniquely identifies a group on the LDAP server. You cannot specify
groupsQuery filters when using DN for groupUIDAttribute. For fine-grained filtering, use the
whitelist / blacklist method.
3 The attribute to use as the name of the user in the OpenShift Container Platform group record.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
To sync all groups from the LDAP server with OpenShift Container Platform:
NOTE
By default, all group synchronization operations are dry-run, so you must set the -
-confirm flag on the oc adm groups sync command to make changes to
OpenShift Container Platform group records.
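A sketch of the command, assuming a sync configuration file named config.yaml:
$ oc adm groups sync --sync-config=config.yaml --confirm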
18.2.2. Syncing OpenShift Container Platform groups with the LDAP server
You can sync all groups already in OpenShift Container Platform that correspond to groups in the LDAP
server specified in the configuration file.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
NOTE
By default, all group synchronization operations are dry-run, so you must set the -
-confirm flag on the oc adm groups sync command to make changes to
OpenShift Container Platform group records.
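A sketch of the command, assuming a sync configuration file named config.yaml; the --type=openshift flag restricts the sync to groups that already exist in OpenShift Container Platform:
$ oc adm groups sync --type=openshift --sync-config=config.yaml --confirm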
18.2.3. Syncing subgroups from the LDAP server with OpenShift Container Platform
You can sync a subset of LDAP groups with OpenShift Container Platform using whitelist files, blacklist
files, or both.
NOTE
You can use any combination of blacklist files, whitelist files, or whitelist literals. Whitelist
and blacklist files must contain one unique group identifier per line, and you can include
whitelist literals directly in the command itself. These guidelines apply to groups found on
LDAP servers as well as groups already present in OpenShift Container Platform.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
To sync a subset of LDAP groups with OpenShift Container Platform, use any of the following
commands:
NOTE
By default, all group synchronization operations are dry-run, so you must set the -
-confirm flag on the oc adm groups sync command to make changes to
OpenShift Container Platform group records.
For example:
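The following command forms are sketches; file and group names are placeholders:
$ oc adm groups sync --whitelist=<whitelist_file> --sync-config=config.yaml --confirm
$ oc adm groups sync --whitelist=<whitelist_file> --blacklist=<blacklist_file> --sync-config=config.yaml --confirm
$ oc adm groups sync <group_unique_identifier> --sync-config=config.yaml --confirm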
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
$ oc new-project ldap-sync 1
2. Locate the secret and config map that you created when configuring the LDAP identity
provider and copy them to this new project.
The secret and config map exist in the openshift-config project and must be copied to the new
ldap-sync project.
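One way to copy them, sketched here assuming the secret is named ldap-secret and the config map is named ca-config-map, matching the cron job example later in this procedure:
$ oc get secret ldap-secret -n openshift-config -o yaml | sed 's/namespace: openshift-config/namespace: ldap-sync/' | oc create -f -
$ oc get configmap ca-config-map -n openshift-config -o yaml | sed 's/namespace: openshift-config/namespace: ldap-sync/' | oc create -f -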
Example ldap-sync-service-account.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
name: ldap-group-syncer
namespace: ldap-sync
$ oc create -f ldap-sync-service-account.yaml
Example ldap-sync-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ldap-group-syncer
rules:
- apiGroups:
- ''
- user.openshift.io
resources:
- groups
verbs:
- get
- list
- create
- update
$ oc create -f ldap-sync-cluster-role.yaml
7. Define a cluster role binding to bind the cluster role to the service account:
Example ldap-sync-cluster-role-binding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ldap-group-syncer
subjects:
- kind: ServiceAccount
name: ldap-group-syncer 1
namespace: ldap-sync
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ldap-group-syncer 2
$ oc create -f ldap-sync-cluster-role-binding.yaml
Example ldap-sync-config-map.yaml
kind: ConfigMap
apiVersion: v1
metadata:
name: ldap-group-syncer
namespace: ldap-sync
data:
sync.yaml: | 1
kind: LDAPSyncConfig
apiVersion: v1
url: ldaps://10.0.0.0:389 2
insecure: false
bindDN: cn=admin,dc=example,dc=com 3
bindPassword:
file: "/etc/secrets/bindPassword"
ca: /etc/ldap-ca/ca.crt
rfc2307: 4
groupsQuery:
baseDN: "ou=groups,dc=example,dc=com" 5
scope: sub
filter: "(objectClass=groupOfMembers)"
derefAliases: never
pageSize: 0
groupUIDAttribute: dn
groupNameAttributes: [ cn ]
groupMembershipAttributes: [ member ]
usersQuery:
baseDN: "ou=users,dc=example,dc=com" 6
scope: sub
derefAliases: never
pageSize: 0
userUIDAttribute: dn
userNameAttributes: [ uid ]
tolerateMemberNotFoundErrors: false
tolerateMemberOutOfScopeErrors: false
4 This example uses the RFC2307 schema; adjust values as necessary. You can also use a
different schema.
$ oc create -f ldap-sync-config-map.yaml
Example ldap-sync-cron-job.yaml
kind: CronJob
apiVersion: batch/v1
metadata:
name: ldap-group-syncer
namespace: ldap-sync
spec: 1
schedule: "*/30 * * * *" 2
concurrencyPolicy: Forbid
jobTemplate:
spec:
backoffLimit: 0
ttlSecondsAfterFinished: 1800 3
template:
spec:
containers:
- name: ldap-group-sync
image: "registry.redhat.io/openshift4/ose-cli:latest"
command:
- "/bin/bash"
- "-c"
- "oc adm groups sync --sync-config=/etc/config/sync.yaml --confirm" 4
volumeMounts:
- mountPath: "/etc/config"
name: "ldap-sync-volume"
- mountPath: "/etc/secrets"
name: "ldap-bind-password"
- mountPath: "/etc/ldap-ca"
name: "ldap-ca"
volumes:
- name: "ldap-sync-volume"
configMap:
name: "ldap-group-syncer"
- name: "ldap-bind-password"
secret:
secretName: "ldap-secret" 5
- name: "ldap-ca"
configMap:
name: "ca-config-map" 6
restartPolicy: "Never"
terminationGracePeriodSeconds: 30
activeDeadlineSeconds: 500
dnsPolicy: "ClusterFirst"
serviceAccountName: "ldap-group-syncer"
1 Configure the settings for the cron job. See "Creating cron jobs" for more information on
cron job settings.
2 The schedule for the job specified in cron format. This example cron job runs every 30
minutes. Adjust the frequency as necessary, making sure to take into account how long the
sync takes to run.
3 How long, in seconds, to keep finished jobs. This should match the period of the job
schedule in order to clean old failed jobs and prevent unnecessary alerts. For more
information, see TTL-after-finished Controller in the Kubernetes documentation.
4 The LDAP sync command for the cron job to run. Passes in the sync configuration file that
was defined in the config map.
5 This secret was created when the LDAP IDP was configured.
6 This config map was created when the LDAP IDP was configured.
$ oc create -f ldap-sync-cron-job.yaml
18.5. LDAP group sync examples
This section contains examples for the RFC 2307, Active Directory, and augmented Active Directory schemas.
NOTE
These examples assume that all users are direct members of their respective groups.
Specifically, no groups have other groups as members. See the Nested Membership Sync
Example for information on how to sync nested groups.
18.5.1. Syncing groups using the RFC 2307 schema
This example synchronizes a group named admins that has two members, Jane and Jim. It explains:
How the group and users are added to the LDAP server.
What the resulting group record in OpenShift Container Platform will be after synchronization.
NOTE
These examples assume that all users are direct members of their respective groups.
Specifically, no groups have other groups as members. See the Nested Membership Sync
Example for information on how to sync nested groups.
In the RFC 2307 schema, both users (Jane and Jim) and groups exist on the LDAP server as first-class
entries, and group membership is stored in attributes on the group. The following snippet of ldif defines
the users and group for this schema:
dn: ou=users,dc=example,dc=com
objectClass: organizationalUnit
ou: users
dn: cn=Jane,ou=users,dc=example,dc=com
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
cn: Jane
sn: Smith
displayName: Jane Smith
mail: [email protected]
dn: cn=Jim,ou=users,dc=example,dc=com
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
cn: Jim
sn: Adams
displayName: Jim Adams
mail: [email protected]
dn: ou=groups,dc=example,dc=com
objectClass: organizationalUnit
ou: groups
dn: cn=admins,ou=groups,dc=example,dc=com 1
objectClass: groupOfNames
cn: admins
owner: cn=admin,dc=example,dc=com
description: System Administrators
member: cn=Jane,ou=users,dc=example,dc=com 2
member: cn=Jim,ou=users,dc=example,dc=com
1 The group is a first-class entry on the LDAP server.
2 Members of a group are listed with an identifying reference as attributes on the group.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
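Create the sync configuration file for this schema, then run the sync. The invocation follows the cron job command earlier in this chapter; the file name here is a placeholder:
$ oc adm groups sync --sync-config=rfc2307_config.yaml --confirm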
OpenShift Container Platform creates the following group record as a result of the above sync
operation:
apiVersion: user.openshift.io/v1
kind: Group
metadata:
annotations:
openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1
openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2
openshift.io/ldap.url: LDAP_SERVER_IP:389 3
creationTimestamp:
name: admins 4
users: 5
- [email protected]
- [email protected]
1 The last time this OpenShift Container Platform group was synchronized with the LDAP
server, in ISO 8601 format.
2 The unique identifier for the group on the LDAP server.
3 The IP address and host of the LDAP server where this group’s record is stored.
4 The name of the group as specified by the sync file.
5 The users that are members of the group, named as specified by the sync file.
18.5.2. Syncing groups using the RFC2307 schema with user-defined name
mappings
When syncing groups with user-defined name mappings, the configuration file changes to contain these
mappings as shown below.
LDAP sync configuration that uses RFC 2307 schema with user-defined name mappings:
rfc2307_config_user_defined.yaml
kind: LDAPSyncConfig
apiVersion: v1
groupUIDNameMapping:
"cn=admins,ou=groups,dc=example,dc=com": Administrators 1
rfc2307:
groupsQuery:
baseDN: "ou=groups,dc=example,dc=com"
scope: sub
derefAliases: never
pageSize: 0
groupUIDAttribute: dn 2
160
CHAPTER 18. SYNCING LDAP GROUPS
groupNameAttributes: [ cn ] 3
groupMembershipAttributes: [ member ]
usersQuery:
baseDN: "ou=users,dc=example,dc=com"
scope: sub
derefAliases: never
pageSize: 0
userUIDAttribute: dn 4
userNameAttributes: [ mail ]
tolerateMemberNotFoundErrors: false
tolerateMemberOutOfScopeErrors: false
1 The user-defined name mapping. The OpenShift Container Platform group created for this LDAP
group UID is named Administrators.
2 The unique identifier attribute that is used for the keys in the user-defined name mapping. You
cannot specify groupsQuery filters when using DN for groupUIDAttribute. For fine-grained
filtering, use the whitelist / blacklist method.
3 The attribute to name OpenShift Container Platform groups with if their unique identifier is not in
the user-defined name mapping.
4 The attribute that uniquely identifies a user on the LDAP server. You cannot specify usersQuery
filters when using DN for userUIDAttribute. For fine-grained filtering, use the whitelist / blacklist
method.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
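Run the sync with the rfc2307_config_user_defined.yaml file shown above, following the same command form used elsewhere in this chapter:
$ oc adm groups sync --sync-config=rfc2307_config_user_defined.yaml --confirm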
OpenShift Container Platform creates the following group record as a result of the above sync
operation:
apiVersion: user.openshift.io/v1
kind: Group
metadata:
annotations:
openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400
openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com
openshift.io/ldap.url: LDAP_SERVER_IP:389
creationTimestamp:
name: Administrators 1
users:
- [email protected]
- [email protected]
1 The name of the group as specified by the user-defined name mapping.
18.5.3. Syncing groups using RFC 2307 with user-defined error tolerances
By default, if the groups being synced contain members whose entries are outside of the scope defined
in the member query, the group sync fails with an error:
Error determining LDAP group membership for "<group>": membership lookup for user "<user>" in
group "<group>" failed because of "search for entry with dn="<user-dn>" would search outside of the
base dn specified (dn="<base-dn>")".
This often indicates a misconfigured baseDN in the usersQuery field. However, in cases where the
baseDN intentionally does not contain some of the members of the group, setting
tolerateMemberOutOfScopeErrors: true allows the group sync to continue. Out of scope members
will be ignored.
Similarly, when the group sync process fails to locate a member for a group, it fails outright with errors:
Error determining LDAP group membership for "<group>": membership lookup for user "<user>" in
group "<group>" failed because of "search for entry with base dn="<user-dn>" refers to a non-
existent entry".
Error determining LDAP group membership for "<group>": membership lookup for user "<user>" in
group "<group>" failed because of "search for entry with base dn="<user-dn>" and filter "<filter>" did
not return any results".
This often indicates a misconfigured usersQuery field. However, in cases where the group contains
member entries that are known to be missing, setting tolerateMemberNotFoundErrors: true allows the
group sync to continue. Problematic members will be ignored.
WARNING
Enabling error tolerances for the LDAP group sync causes the sync process to
ignore problematic member entries. If the LDAP group sync is not configured
correctly, this could result in synced OpenShift Container Platform groups missing
members.
LDAP entries that use RFC 2307 schema with problematic group membership:
rfc2307_problematic_users.ldif
dn: ou=users,dc=example,dc=com
objectClass: organizationalUnit
ou: users
dn: cn=Jane,ou=users,dc=example,dc=com
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
cn: Jane
sn: Smith
displayName: Jane Smith
mail: [email protected]
dn: cn=Jim,ou=users,dc=example,dc=com
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
cn: Jim
sn: Adams
displayName: Jim Adams
mail: [email protected]
dn: ou=groups,dc=example,dc=com
objectClass: organizationalUnit
ou: groups
dn: cn=admins,ou=groups,dc=example,dc=com
objectClass: groupOfNames
cn: admins
owner: cn=admin,dc=example,dc=com
description: System Administrators
member: cn=Jane,ou=users,dc=example,dc=com
member: cn=Jim,ou=users,dc=example,dc=com
member: cn=INVALID,ou=users,dc=example,dc=com 1
member: cn=Jim,ou=OUTOFSCOPE,dc=example,dc=com 2
1 A member that does not exist on the LDAP server.
2 A member that may exist, but is not under the baseDN in the user query for the sync job.
To tolerate the errors in the above example, add the following to your sync configuration file:
LDAP sync configuration that uses RFC 2307 schema tolerating errors:
rfc2307_config_tolerating.yaml
kind: LDAPSyncConfig
apiVersion: v1
url: ldap://LDAP_SERVICE_IP:389
rfc2307:
groupsQuery:
baseDN: "ou=groups,dc=example,dc=com"
scope: sub
derefAliases: never
groupUIDAttribute: dn
groupNameAttributes: [ cn ]
groupMembershipAttributes: [ member ]
usersQuery:
baseDN: "ou=users,dc=example,dc=com"
scope: sub
derefAliases: never
userUIDAttribute: dn 1
userNameAttributes: [ mail ]
tolerateMemberNotFoundErrors: true 2
tolerateMemberOutOfScopeErrors: true 3
1 The attribute that uniquely identifies a user on the LDAP server. You cannot specify usersQuery
filters when using DN for userUIDAttribute. For fine-grained filtering, use the whitelist / blacklist
method.
2 When true, the sync job tolerates groups for which some members were not found, and members
whose LDAP entries are not found are ignored. The default behavior for the sync job is to fail if a
member of a group is not found.
3 When true, the sync job tolerates groups for which some members are outside the user scope
given in the usersQuery base DN, and members outside the member query scope are ignored.
The default behavior for the sync job is to fail if a member of a group is out of scope.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
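Run the sync with the rfc2307_config_tolerating.yaml file shown above, following the same command form used elsewhere in this chapter:
$ oc adm groups sync --sync-config=rfc2307_config_tolerating.yaml --confirm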
OpenShift Container Platform creates the following group record as a result of the above sync
operation:
apiVersion: user.openshift.io/v1
kind: Group
metadata:
annotations:
openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400
openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com
openshift.io/ldap.url: LDAP_SERVER_IP:389
creationTimestamp:
name: admins
users: 1
- [email protected]
- [email protected]
1 The users that are members of the group, as specified by the sync file. Members for which
lookup encountered tolerated errors are absent.
18.5.4. Syncing groups using the Active Directory schema
In the Active Directory schema, both users (Jane and Jim) exist in the LDAP server as first-class
entries, and group membership is stored in attributes on the user. The following snippet of ldif defines
the users and group for this schema:
dn: ou=users,dc=example,dc=com
objectClass: organizationalUnit
ou: users
dn: cn=Jane,ou=users,dc=example,dc=com
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: testPerson
cn: Jane
sn: Smith
displayName: Jane Smith
mail: [email protected]
memberOf: admins 1
dn: cn=Jim,ou=users,dc=example,dc=com
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: testPerson
cn: Jim
sn: Adams
displayName: Jim Adams
mail: [email protected]
memberOf: admins
1 The user’s group memberships are listed as attributes on the user, and the group does not exist as
an entry on the server. The memberOf attribute does not have to be a literal attribute on the user;
in some LDAP servers, it is created during search and returned to the client, but not committed to
the database.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
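Create the sync configuration file for this schema, then run the sync. The invocation follows the same command form used elsewhere in this chapter; the file name here is a placeholder:
$ oc adm groups sync --sync-config=active_directory_config.yaml --confirm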
OpenShift Container Platform creates the following group record as a result of the above sync
operation:
apiVersion: user.openshift.io/v1
kind: Group
metadata:
annotations:
openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1
openshift.io/ldap.uid: admins 2
openshift.io/ldap.url: LDAP_SERVER_IP:389 3
creationTimestamp:
name: admins 4
users: 5
- [email protected]
- [email protected]
1 The last time this OpenShift Container Platform group was synchronized with the LDAP
server, in ISO 8601 format.
2 The unique identifier for the group on the LDAP server.
3 The IP address and host of the LDAP server where this group’s record is stored.
4 The name of the group as specified by the sync file.
5 The users that are members of the group, named as specified by the sync file.
18.5.5. Syncing groups using the augmented Active Directory schema
In the augmented Active Directory schema, both users (Jane and Jim) and groups exist in the LDAP
server as first-class entries, and group membership is stored in attributes on the user. The following
snippet of ldif defines the users and group for this schema:
dn: ou=users,dc=example,dc=com
objectClass: organizationalUnit
ou: users
dn: cn=Jane,ou=users,dc=example,dc=com
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: testPerson
cn: Jane
sn: Smith
displayName: Jane Smith
mail: [email protected]
memberOf: cn=admins,ou=groups,dc=example,dc=com 1
dn: cn=Jim,ou=users,dc=example,dc=com
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: testPerson
cn: Jim
sn: Adams
displayName: Jim Adams
mail: [email protected]
memberOf: cn=admins,ou=groups,dc=example,dc=com
dn: ou=groups,dc=example,dc=com
objectClass: organizationalUnit
ou: groups
dn: cn=admins,ou=groups,dc=example,dc=com 2
objectClass: groupOfNames
cn: admins
owner: cn=admin,dc=example,dc=com
description: System Administrators
member: cn=Jane,ou=users,dc=example,dc=com
member: cn=Jim,ou=users,dc=example,dc=com
1 The user’s group memberships are listed as attributes on the user.
2 The group is a first-class entry on the LDAP server.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
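Create the sync configuration file for this schema, then run the sync. The invocation follows the same command form used elsewhere in this chapter; the file name here is a placeholder:
$ oc adm groups sync --sync-config=augmented_active_directory_config.yaml --confirm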
OpenShift Container Platform creates the following group record as a result of the above sync
operation:
apiVersion: user.openshift.io/v1
kind: Group
metadata:
annotations:
openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1
openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2
openshift.io/ldap.url: LDAP_SERVER_IP:389 3
creationTimestamp:
name: admins 4
users: 5
- [email protected]
- [email protected]
1 The last time this OpenShift Container Platform group was synchronized with the LDAP
server, in ISO 8601 format.
2 The unique identifier for the group on the LDAP server.
3 The IP address and host of the LDAP server where this group’s record is stored.
4 The name of the group as specified by the sync file.
5 The users that are members of the group, named as specified by the sync file.
18.5.6. LDAP nested membership sync example
Groups in OpenShift Container Platform do not nest. The LDAP server must flatten group membership
before the data can be consumed. Microsoft’s Active Directory Server supports this feature via the
LDAP_MATCHING_RULE_IN_CHAIN rule, which has the OID 1.2.840.113556.1.4.1941. Furthermore,
only explicitly whitelisted groups can be synced when using this matching rule.
This section has an example for the augmented Active Directory schema, which synchronizes a group
named admins that has one user Jane and one group otheradmins as members. The otheradmins
group has one user member: Jim. This example explains:
How the group and users are added to the LDAP server.
What the resulting group record in OpenShift Container Platform will be after synchronization.
In the augmented Active Directory schema, both users (Jane and Jim) and groups exist in the LDAP
server as first-class entries, and group membership is stored in attributes on the user or the group. The
following snippet of ldif defines the users and groups for this schema:
LDAP entries that use augmented Active Directory schema with nested members:
augmented_active_directory_nested.ldif
dn: ou=users,dc=example,dc=com
objectClass: organizationalUnit
ou: users
dn: cn=Jane,ou=users,dc=example,dc=com
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: testPerson
cn: Jane
sn: Smith
displayName: Jane Smith
mail: [email protected]
memberOf: cn=admins,ou=groups,dc=example,dc=com 1
dn: cn=Jim,ou=users,dc=example,dc=com
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: testPerson
cn: Jim
sn: Adams
displayName: Jim Adams
mail: [email protected]
memberOf: cn=otheradmins,ou=groups,dc=example,dc=com 2
dn: ou=groups,dc=example,dc=com
objectClass: organizationalUnit
ou: groups
dn: cn=admins,ou=groups,dc=example,dc=com 3
objectClass: group
cn: admins
owner: cn=admin,dc=example,dc=com
description: System Administrators
member: cn=Jane,ou=users,dc=example,dc=com
member: cn=otheradmins,ou=groups,dc=example,dc=com
dn: cn=otheradmins,ou=groups,dc=example,dc=com 4
objectClass: group
cn: otheradmins
owner: cn=admin,dc=example,dc=com
description: Other System Administrators
memberOf: cn=admins,ou=groups,dc=example,dc=com 5 6
member: cn=Jim,ou=users,dc=example,dc=com
1 2 5 The user’s and group’s memberships are listed as attributes on the object.
3 4 The groups are first-class entries on the LDAP server.
6 The otheradmins group is a member of the admins group.
When syncing nested groups with Active Directory, you must provide an LDAP query definition for both
user entries and group entries, as well as the attributes with which to represent them in the internal
OpenShift Container Platform group records. Furthermore, certain changes are required in this
configuration:
The oc adm groups sync command must explicitly whitelist groups, as shown in the Procedure below.
The user’s groupMembershipAttributes must include "memberOf:1.2.840.113556.1.4.1941:" to comply with the LDAP_MATCHING_RULE_IN_CHAIN rule.
The groupUIDAttribute must be set to dn.
The groupsQuery:
Must not set filter.
Must set a valid derefAliases.
Should not set baseDN, as that value is ignored for all groups.
Should not set scope, as that value is ignored for all groups.
For clarity, the group you create in OpenShift Container Platform should use attributes other than the
distinguished name whenever possible for user- or administrator-facing fields. For example, identify the
users of an OpenShift Container Platform group by their e-mail, and use the name of the group as the
common name. The following configuration file creates these relationships:
LDAP sync configuration that uses augmented Active Directory schema with nested
members: augmented_active_directory_config_nested.yaml
kind: LDAPSyncConfig
apiVersion: v1
url: ldap://LDAP_SERVICE_IP:389
augmentedActiveDirectory:
groupsQuery: 1
derefAliases: never
pageSize: 0
groupUIDAttribute: dn 2
groupNameAttributes: [ cn ] 3
usersQuery:
baseDN: "ou=users,dc=example,dc=com"
scope: sub
derefAliases: never
filter: (objectclass=person)
pageSize: 0
userNameAttributes: [ mail ] 4
groupMembershipAttributes: [ "memberOf:1.2.840.113556.1.4.1941:" ] 5
1 groupsQuery filters cannot be specified. The groupsQuery base DN and scope values are
ignored. groupsQuery must set a valid derefAliases.
2 The attribute that uniquely identifies a group on the LDAP server. It must be set to dn.
3 The attribute to use as the name of the group.
4 The attribute to use as the name of the user in the OpenShift Container Platform group record.
mail or sAMAccountName are preferred choices in most installations.
5 The attribute on the user that stores the membership information. Note the use of
LDAP_MATCHING_RULE_IN_CHAIN.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
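Because only whitelisted groups can be synced when using the LDAP_MATCHING_RULE_IN_CHAIN rule, pass the group DN to the sync command along with the augmented_active_directory_config_nested.yaml file shown above:
$ oc adm groups sync 'cn=admins,ou=groups,dc=example,dc=com' --sync-config=augmented_active_directory_config_nested.yaml --confirm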
NOTE
You must explicitly whitelist the cn=admins,ou=groups,dc=example,dc=com group in the sync command.
OpenShift Container Platform creates the following group record as a result of the above sync
operation:
apiVersion: user.openshift.io/v1
kind: Group
metadata:
annotations:
openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1
openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2
openshift.io/ldap.url: LDAP_SERVER_IP:389 3
creationTimestamp:
name: admins 4
users: 5
- [email protected]
- [email protected]
1 The last time this OpenShift Container Platform group was synchronized with the LDAP
server, in ISO 8601 format.
2 The unique identifier for the group on the LDAP server.
3 The IP address and host of the LDAP server where this group’s record is stored.
4 The name of the group as specified by the sync file.
5 The users that are members of the group, named as specified by the sync file. Note that
members of nested groups are included since the group membership was flattened by the
Microsoft Active Directory Server.
IMPORTANT
There is no support for binary attributes. All attribute data coming from the LDAP server
must be in the format of a UTF-8 encoded string. For example, never use a binary
attribute, such as objectGUID, as an ID attribute. You must use string attributes, such as
sAMAccountName or userPrincipalName, instead.
18.6. LDAP sync configuration specification
18.6.1. v1.LDAPSyncConfig
LDAPSyncConfig holds the necessary configuration options to define an LDAP group sync.
18.6.2. v1.StringSource
StringSource allows specifying a string inline, or externally via environment variable or file. When it
contains only a string value, it marshals to a simple JSON string.
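For example, the bindPassword field in the sync configurations earlier in this chapter is a StringSource. A minimal sketch of the inline and file forms, plus an environment variable form that is assumed from the type description:
# Inline string value
bindPassword: "<password>"
# Value read from a file
bindPassword:
  file: "/etc/secrets/bindPassword"
# Value read from an environment variable
bindPassword:
  env: "BIND_PASSWORD"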
18.6.3. v1.LDAPQuery
LDAPQuery holds the options necessary to build an LDAP query.
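The usersQuery and groupsQuery stanzas in the examples in this chapter are LDAPQuery objects. A representative sketch, using only fields that appear in those examples:
baseDN: "ou=users,dc=example,dc=com"  # the DN of the directory branch where the search starts
scope: sub                            # the scope of the search
derefAliases: never                   # whether aliases are dereferenced during the search
filter: "(objectClass=person)"        # a valid LDAP search filter for the entries
pageSize: 0                           # the maximum preferred page size; 0 means no paging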
18.6.4. v1.RFC2307Config
RFC2307Config holds the necessary configuration options to define how an LDAP group sync interacts
with an LDAP server using the RFC2307 schema.
18.6.5. v1.ActiveDirectoryConfig
ActiveDirectoryConfig holds the necessary configuration options to define how an LDAP group sync
interacts with an LDAP server using the Active Directory schema.
18.6.6. v1.AugmentedActiveDirectoryConfig
AugmentedActiveDirectoryConfig holds the necessary configuration options to define how an LDAP
group sync interacts with an LDAP server using the augmented Active Directory schema.
CHAPTER 19. MANAGING CLOUD PROVIDER CREDENTIALS
19.1. About the Cloud Credential Operator
The Cloud Credential Operator (CCO) manages cloud provider credentials. By setting different values
for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to
operate in several different modes. If no mode is specified, or the credentialsMode parameter is set
to an empty string (""), the CCO operates in its default mode.
19.1.1. Modes
By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO
can be configured to operate in mint, passthrough, or manual mode. These options provide transparency
and flexibility in how the CCO uses cloud credentials to process CredentialsRequest CRs in the cluster,
and allow the CCO to be configured to suit the security requirements of your organization. Not all CCO
modes are supported for all cloud providers.
Mint: In mint mode, the CCO uses the provided admin-level cloud credential to create new
credentials for components in the cluster with only the specific permissions that are required.
Passthrough: In passthrough mode, the CCO passes the provided cloud credential to the
components that request cloud credentials.
Manual mode with long-term credentials for components: In manual mode, you can manage
long-term cloud credentials instead of the CCO.
Manual mode with short-term credentials for components: For some providers, you can use
the CCO utility (ccoctl) during installation to implement short-term credentials for individual
components. These credentials are created and managed outside the OpenShift Container
Platform cluster.
The CCO mode support matrix includes the following platforms:
Nutanix: Manual mode [1]
VMware vSphere: Passthrough mode
1. This platform uses the ccoctl utility during installation to configure long-term credentials.
19.1.2. Determining the Cloud Credential Operator mode
For platforms that support using multiple CCO modes, you can determine the mode that your cluster uses with the web console or the CLI.
19.1.2.1. Determining the Cloud Credential Operator mode by using the web console
You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the
web console.
NOTE
Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform
(GCP) clusters support multiple CCO modes.
Prerequisites
You have access to an OpenShift Container Platform account with cluster administrator
permissions.
Procedure
1. Log in to the OpenShift Container Platform web console as a user with the cluster-admin role.
6. In the YAML block, check the value of spec.credentialsMode. The following values are possible,
though not all are supported on all platforms:
'': The CCO is operating in the default mode. In this configuration, the CCO operates in mint
or passthrough mode, depending on the credentials provided during installation.
Mint: The CCO is operating in mint mode.
Passthrough: The CCO is operating in passthrough mode.
Manual: The CCO is operating in manual mode.
IMPORTANT
AWS and GCP clusters support using mint mode with the root secret deleted.
An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be
configured to create and manage cloud credentials from outside of the cluster
with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. You can
determine whether your cluster uses this strategy by examining the cluster
Authentication object.
7. AWS or GCP clusters that use the default ('') only: To determine whether the cluster is
operating in mint or passthrough mode, inspect the annotations on the cluster root secret:
a. Navigate to Workloads → Secrets and look for the root secret for your cloud provider.
NOTE
The root secret name depends on your cloud provider:
AWS: aws-creds
GCP: gcp-credentials
b. To view the CCO mode that the cluster is using, click 1 annotation under Annotations, and
check the value field. The following values are possible:
If your cluster uses mint mode, you can also determine whether the cluster is operating
without the root secret.
8. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating
without the root secret, navigate to Workloads → Secrets and look for the root secret for your
cloud provider.
NOTE
The root secret name depends on your cloud provider:
AWS: aws-creds
GCP: gcp-credentials
If you see one of these values, your cluster is using mint or passthrough mode with the root
secret present.
If you do not see these values, your cluster is using the CCO in mint mode with the root
secret removed.
9. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine
whether the cluster is configured to create and manage cloud credentials from outside of the
cluster, you must check the cluster Authentication object YAML values.
A value that contains a URL that is associated with your cloud provider indicates that
the CCO is using manual mode with short-term credentials for components. These
clusters are configured using the ccoctl utility to create and manage cloud credentials
from outside of the cluster.
An empty value ('') indicates that the cluster is using the CCO in manual mode but was
not configured using the ccoctl utility.
19.1.2.2. Determining the Cloud Credential Operator mode by using the CLI
You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the
CLI.
NOTE
Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform
(GCP) clusters support multiple CCO modes.
Prerequisites
You have access to an OpenShift Container Platform account with cluster administrator
permissions.
Procedure
1. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role.
2. To determine the mode that the CCO is configured to use, enter the following command:
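The cloudcredential resource name and the JSONPath expression here are assumptions consistent with the spec.credentialsMode parameter described below:
$ oc get cloudcredential cluster -o=jsonpath={.spec.credentialsMode}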
The following output values are possible, though not all are supported on all platforms:
'': The CCO is operating in the default mode. In this configuration, the CCO operates in mint
or passthrough mode, depending on the credentials provided during installation.
Mint: The CCO is operating in mint mode.
Passthrough: The CCO is operating in passthrough mode.
Manual: The CCO is operating in manual mode.
IMPORTANT
AWS and GCP clusters support using mint mode with the root secret deleted.
An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be
configured to create and manage cloud credentials from outside of the cluster
with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. You can
determine whether your cluster uses this strategy by examining the cluster
Authentication object.
3. AWS or GCP clusters that use the default ('') only: To determine whether the cluster is
operating in mint or passthrough mode, run the following command:
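A command of the following form displays the annotations, where <secret_name> is the root secret name for your cloud provider:
$ oc get secret <secret_name> -n kube-system -o jsonpath='{.metadata.annotations}'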
This command displays the value of the .metadata.annotations parameter in the cluster root
secret object. The following output values are possible:
If your cluster uses mint mode, you can also determine whether the cluster is operating without
the root secret.
4. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating
without the root secret, run the following command:
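A command of the following form checks for the secret, where <secret_name> is the root secret name for your cloud provider:
$ oc get secret <secret_name> -n kube-system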
If the root secret is present, the output of this command returns information about the secret.
An error indicates that the root secret is not present on the cluster.
5. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine
whether the cluster is configured to create and manage cloud credentials from outside of the
cluster, run the following command:
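A command of the following form displays the value, matching the .spec.serviceAccountIssuer parameter described below:
$ oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}'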
This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster
Authentication object.
An output of a URL that is associated with your cloud provider indicates that the CCO is
using manual mode with short-term credentials for components. These clusters are
configured using the ccoctl utility to create and manage cloud credentials from outside of
the cluster.
An empty output indicates that the cluster is using the CCO in manual mode but was not
configured using the ccoctl utility.
By default, the CCO determines whether the credentials are sufficient for mint mode, which is the
preferred mode of operation, and uses those credentials to create appropriate credentials for
components in the cluster. If the credentials are not sufficient for mint mode, it determines whether they
are sufficient for passthrough mode. If the credentials are not sufficient for passthrough mode, the
CCO cannot adequately process CredentialsRequest CRs.
If the provided credentials are determined to be insufficient during installation, the installation fails. For
AWS, the installation program fails early in the process and indicates which required permissions are
missing. Other providers might not provide specific information about the cause of the error until errors
are encountered.
If the credentials are changed after a successful installation and the CCO determines that the new
credentials are insufficient, the CCO puts conditions on any new CredentialsRequest CRs to indicate
that it cannot process them because of the insufficient credentials.
To resolve insufficient credentials issues, provide a credential with sufficient permissions. If an error
occurred during installation, try installing again. For issues with new CredentialsRequest CRs, wait for
the CCO to try to process the CR again. As an alternative, you can configure your cluster to use a
different CCO mode that is supported for your cloud provider.
19.2. Using mint mode
With mint mode, each cluster component has only the specific permissions it requires. The automatic,
continuous reconciliation of cloud credentials in mint mode allows actions that require additional
credentials or permissions, such as upgrading, to proceed.
NOTE
By default, mint mode requires storing the admin credential in the cluster kube-system
namespace. If this approach does not meet the security requirements of your
organization, you can remove the credential after installing the cluster.
When using the CCO in mint mode, ensure that the credential you provide meets the requirements of
the cloud on which you are running or installing OpenShift Container Platform. If the provided
credentials are not sufficient for mint mode, the CCO cannot create an IAM user.
The credential you provide for mint mode in Amazon Web Services (AWS) must have the following
permissions:
iam:CreateAccessKey
iam:CreateUser
iam:DeleteAccessKey
iam:DeleteUser
iam:DeleteUserPolicy
iam:GetUser
iam:GetUserPolicy
iam:ListAccessKeys
iam:PutUserPolicy
iam:TagUser
iam:SimulatePrincipalPolicy
The credential you provide for mint mode in Google Cloud Platform (GCP) must have the following
permissions:
resourcemanager.projects.get
serviceusage.services.list
iam.serviceAccountKeys.create
iam.serviceAccountKeys.delete
iam.serviceAccountKeys.list
iam.serviceAccounts.create
iam.serviceAccounts.delete
iam.serviceAccounts.get
iam.roles.create
iam.roles.get
iam.roles.list
iam.roles.undelete
iam.roles.update
resourcemanager.projects.getIamPolicy
resourcemanager.projects.setIamPolicy
Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which
is then used to satisfy all credentials requests and create their respective secrets. This is done either by
minting new credentials with mint mode , or by copying the credentials root secret with passthrough
mode.
The format for the secret varies by cloud, and is also used for each CredentialsRequest secret.
AWS secret format
apiVersion: v1
kind: Secret
metadata:
namespace: kube-system
name: aws-creds
stringData:
aws_access_key_id: <base64-encoded_access_key_id>
aws_secret_access_key: <base64-encoded_secret_access_key>
GCP secret format
apiVersion: v1
kind: Secret
metadata:
namespace: kube-system
name: gcp-credentials
stringData:
service_account.json: <base64-encoded_service_account>
The process for rotating cloud credentials depends on the mode that the CCO is configured to use.
After you rotate credentials for a cluster that is using mint mode, you must manually remove the
component credentials that were created by the removed credential.
Prerequisites
Your cluster is installed on a platform that supports rotating cloud credentials manually with the
CCO mode that you are using:
For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are
supported.
You have changed the credentials that are used to interface with your cloud provider.
The new credentials have sufficient permissions for the mode CCO is configured to use in your
cluster.
Procedure
1. In the Administrator perspective of the web console, navigate to Workloads → Secrets.
2. In the table on the Secrets page, find the root secret for your cloud provider:
AWS: aws-creds
GCP: gcp-credentials
3. Click the Options menu in the same row as the secret and select Edit Secret.
4. Record the contents of the Value field or fields. You can use this information to verify that the
value is different after updating the credentials.
5. Update the text in the Value field or fields with the new authentication information for your
cloud provider, and then click Save.
6. Delete each component secret that is referenced by the individual CredentialsRequest objects.
a. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role.
b. Get the names and namespaces of all referenced component secrets. The provider spec kind to filter on depends on your cloud provider:
AWS: AWSProviderSpec
GCP: GCPProviderSpec
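A sketch of such a query, using jq to extract the secret references; the exact filter is an assumption based on the provider spec kinds listed above:
$ oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select(.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef'
Partial example output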
{
"name": "ebs-cloud-credentials",
"namespace": "openshift-cluster-csi-drivers"
}
{
"name": "cloud-credential-operator-iam-ro-creds",
"namespace": "openshift-cloud-credential-operator"
}
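c. Delete each of the referenced component secrets. A command of the following form, with placeholders taken from the output above, does this:
$ oc delete secret <secret_name> -n <secret_namespace>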
You do not need to manually delete the credentials from your provider console. Deleting
the referenced component secrets will cause the CCO to delete the existing credentials
from the platform and create new ones.
Verification
To verify that the credentials have changed:
1. In the Administrator perspective of the web console, navigate to Workloads → Secrets and find the root secret for your cloud provider.
2. Verify that the contents of the Value field or fields have changed.
19.3. Using passthrough mode
In passthrough mode, the Cloud Credential Operator (CCO) passes the provided cloud credential to the
components that request cloud credentials. The credential must have permissions to perform the
installation and complete the operations that are required by components in the cluster, but does not
need to be able to create new credentials. The CCO does not attempt to create additional limited-
scoped credentials in passthrough mode.
NOTE
Manual mode is the only supported CCO configuration for Microsoft Azure Stack Hub.
The credential you provide for passthrough mode in AWS must have all the requested permissions for all
CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are
running or installing.
To locate the CredentialsRequest CRs that are required, see Manually creating long-term credentials
for AWS.
The credential you provide for passthrough mode in Azure must have all the requested permissions for
all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are
running or installing.
To locate the CredentialsRequest CRs that are required, see Manually creating long-term credentials
for Azure.
The credential you provide for passthrough mode in GCP must have all the requested permissions for all
CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are
running or installing.
To locate the CredentialsRequest CRs that are required, see Manually creating long-term credentials
for GCP.
To install an OpenShift Container Platform cluster on RHOSP, the CCO requires a credential with the
permissions of a member user role.
To install an OpenShift Container Platform cluster on VMware vSphere, the CCO requires a credential
with the necessary vSphere privileges.
The format for the secret varies by cloud, and is also used for each CredentialsRequest secret.
AWS secret format
apiVersion: v1
kind: Secret
metadata:
namespace: kube-system
name: aws-creds
stringData:
aws_access_key_id: <base64-encoded_access_key_id>
aws_secret_access_key: <base64-encoded_secret_access_key>
Azure secret format
apiVersion: v1
kind: Secret
metadata:
namespace: kube-system
name: azure-credentials
stringData:
azure_subscription_id: <base64-encoded_subscription_id>
azure_client_id: <base64-encoded_client_id>
azure_client_secret: <base64-encoded_client_secret>
azure_tenant_id: <base64-encoded_tenant_id>
azure_resource_prefix: <base64-encoded_resource_prefix>
azure_resourcegroup: <base64-encoded_resource_group>
azure_region: <base64-encoded_region>
On Microsoft Azure, the credentials secret format includes two properties that must contain the
cluster’s infrastructure ID, generated randomly for each cluster installation. This value can be found
after running create manifests:
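For example, a command along these lines extracts the infrastructure ID from the installation state file; the file name and jq filter are assumptions:
$ cat .openshift_install_state.json | jq '."*installconfig.ClusterID".InfraID' -r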
Example output
mycluster-2mpcn
azure_resource_prefix: mycluster-2mpcn
azure_resourcegroup: mycluster-2mpcn-rg
GCP secret format
apiVersion: v1
kind: Secret
metadata:
namespace: kube-system
name: gcp-credentials
stringData:
service_account.json: <base64-encoded_service_account>
RHOSP secret format
apiVersion: v1
kind: Secret
metadata:
namespace: kube-system
name: openstack-credentials
data:
clouds.yaml: <base64-encoded_cloud_creds>
clouds.conf: <base64-encoded_cloud_creds_init>
VMware vSphere secret format
apiVersion: v1
kind: Secret
metadata:
namespace: kube-system
name: vsphere-creds
data:
vsphere.openshift.example.com.username: <base64-encoded_username>
vsphere.openshift.example.com.password: <base64-encoded_password>
Review the CredentialsRequest CRs in the release image for the new version of OpenShift Container
Platform before upgrading. To locate the CredentialsRequest CRs that are required for your cloud
provider, see Manually creating long-term credentials for AWS, Azure, or GCP.
If your cloud provider credentials are changed for any reason, you must manually update the secret that
the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.
The process for rotating cloud credentials depends on the mode that the CCO is configured to use.
After you rotate credentials for a cluster that is using mint mode, you must manually remove the
component credentials that were created by the removed credential.
Prerequisites
Your cluster is installed on a platform that supports rotating cloud credentials manually with the
CCO mode that you are using:
For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud
Platform (GCP), Red Hat OpenStack Platform (RHOSP), and VMware vSphere are
supported.
You have changed the credentials that are used to interface with your cloud provider.
The new credentials have sufficient permissions for the mode CCO is configured to use in your
cluster.
Procedure
1. In the Administrator perspective of the web console, navigate to Workloads → Secrets.
2. In the table on the Secrets page, find the root secret for your cloud provider:
AWS: aws-creds
Azure: azure-credentials
GCP: gcp-credentials
RHOSP: openstack-credentials
VMware vSphere: vsphere-creds
3. Click the Options menu in the same row as the secret and select Edit Secret.
4. Record the contents of the Value field or fields. You can use this information to verify that the
value is different after updating the credentials.
5. Update the text in the Value field or fields with the new authentication information for your
cloud provider, and then click Save.
6. If you are updating the credentials for a vSphere cluster that does not have the vSphere CSI
Driver Operator enabled, you must force a rollout of the Kubernetes controller manager to apply
the updated credentials.
NOTE
If the vSphere CSI Driver Operator is enabled, this step is not required.
To apply the updated vSphere credentials, log in to the OpenShift Container Platform CLI as a
user with the cluster-admin role and run the following command:
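A sketch of such a command, which patches a new forceRedeploymentReason value into the kubecontrollermanager resource to trigger the rollout:
$ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date )"'"}}' --type=merge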
While the credentials are rolling out, the status of the Kubernetes Controller Manager Operator
reports Progressing=true. To view the status, run the following command:
$ oc get co kube-controller-manager
Verification
To verify that the credentials have changed:
1. In the Administrator perspective of the web console, navigate to Workloads → Secrets and find the root secret for your cloud provider.
2. Verify that the contents of the Value field or fields have changed.
After installation, you can reduce the permissions on your credential to only those that are required to
run the cluster, as defined by the CredentialsRequest CRs in the release image for the version of
OpenShift Container Platform that you are using.
To locate the CredentialsRequest CRs that are required for AWS, Azure, or GCP and learn how to
change the permissions the CCO uses, see Manually creating long-term credentials for AWS, Azure, or
GCP.
19.4. Using manual mode with long-term credentials for components
Using manual mode with long-term credentials allows each cluster component to have only the
permissions it requires, without storing an administrator-level credential in the cluster. This mode also
does not require connectivity to services such as the AWS public IAM endpoint. However, you must
manually reconcile permissions with new release images for every upgrade.
For information about configuring your cloud provider to use manual mode, see the manual credentials
management options for your cloud provider.
NOTE
An AWS, global Azure, or GCP cluster that uses manual mode might be configured to use
short-term credentials for different components. For more information, see Manual
mode with short-term credentials for components.
19.5. Using manual mode with short-term credentials for components
NOTE
This credentials strategy is supported for Amazon Web Services (AWS), Google Cloud
Platform (GCP), and global Microsoft Azure only. The strategy must be configured during
installation of a new OpenShift Container Platform cluster. You cannot configure an
existing cluster that uses a different credentials strategy to use this feature.
Cloud providers use different terms for their implementation of this authentication method:
AWS: AWS Security Token Service (STS)
GCP: GCP Workload Identity
Azure: Microsoft Entra Workload ID
19.5.1. AWS Security Token Service
The AWS Security Token Service (STS) and the AssumeRole API action allow pods to retrieve access
keys that are defined by an IAM role policy.
The OpenShift Container Platform cluster includes a Kubernetes service account signing service. This
service uses a private key to sign service account JSON web tokens (JWT). A pod that requires a
service account token requests one through the pod specification. When the pod is created and
assigned to a node, the node retrieves a signed service account token from the service account signing
service and mounts it onto the pod.
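A minimal sketch of such a pod specification; the pod name, service account, image, and audience value are assumptions for illustration:
apiVersion: v1
kind: Pod
metadata:
  name: sts-example
spec:
  serviceAccountName: example-sa
  containers:
  - name: app
    image: quay.io/example/app:latest
    volumeMounts:
    # Mount the projected token where the workload expects to read it
    - mountPath: /var/run/secrets/openshift/serviceaccount
      name: bound-sa-token
      readOnly: true
  volumes:
  - name: bound-sa-token
    projected:
      sources:
      # The kubelet requests and refreshes this token on the pod's behalf
      - serviceAccountToken:
          audience: openshift
          expirationSeconds: 3600
          path: token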
Clusters that use STS contain an IAM role ID in their Kubernetes configuration secrets. Workloads
assume the identity of this IAM role ID. The signed service account token issued to the workload aligns
with the configuration in AWS, which allows AWS STS to grant access keys for the specified IAM role to
the workload.
AWS STS grants access keys only for requests that include service account tokens that meet the
following conditions:
The token name and namespace match the service account name and namespace.
The public key pair for the service account signing key used by the cluster is stored in an AWS S3
bucket. AWS STS federation validates that the service account token signature aligns with the public
key stored in the S3 bucket.
The following diagram illustrates the authentication flow between AWS and the OpenShift Container
Platform cluster when using AWS STS. In the diagram:
Token signing is the Kubernetes service account signing service on the OpenShift Container
Platform cluster.
The Kubernetes service account in the pod is the signed service account token.
Requests for new and refreshed credentials are automated by using an appropriately configured AWS
IAM OpenID Connect (OIDC) identity provider combined with AWS IAM roles. Service account tokens
that are trusted by AWS IAM are signed by OpenShift Container Platform and can be projected into a
pod and used for authentication.
The signed service account token that a pod uses expires after a period of time. For clusters that use
AWS STS, this time period is 3600 seconds, or one hour.
The kubelet on the node that the pod is assigned to ensures that the token is refreshed. The kubelet
attempts to rotate a token when it is older than 80 percent of its time to live.
You can store the public portion of the encryption keys for your OIDC configuration in a public or private
S3 bucket.
The OIDC spec requires the use of HTTPS. AWS services require a public endpoint to expose the OIDC
documents in the form of JSON web key set (JWKS) public keys. This allows AWS services to validate
the bound tokens signed by Kubernetes and determine whether to trust certificates. As a result, both S3
bucket options require a public HTTPS endpoint and private endpoints are not supported.
To use AWS STS, the public AWS backbone for the AWS STS service must be able to communicate with
a public S3 bucket or a private S3 bucket with a public CloudFront endpoint. You can choose which type
of bucket to use when you process CredentialsRequest objects during installation:
By default, the CCO utility (ccoctl) stores the OIDC configuration files in a public S3 bucket and
uses the S3 URL as the public OIDC endpoint.
As an alternative, you can have the ccoctl utility store the OIDC configuration in a private S3
bucket that is accessed by the IAM identity provider through a public CloudFront distribution
URL.
Using manual mode with the AWS Security Token Service (STS) changes the content of the AWS
credentials that are provided to individual OpenShift Container Platform components. Compare the
following secret formats:
AWS secret format using long-term CCO credentials
apiVersion: v1
kind: Secret
metadata:
namespace: <target_namespace> 1
name: <target_secret_name> 2
data:
aws_access_key_id: <base64_encoded_access_key_id>
aws_secret_access_key: <base64_encoded_secret_access_key>
AWS secret format using AWS STS
apiVersion: v1
kind: Secret
metadata:
namespace: <target_namespace> 1
name: <target_secret_name> 2
stringData:
credentials: |-
[default]
sts_regional_endpoints = regional
role_name: <operator_role_name> 3
web_identity_token_file: <path_to_token> 4
1 The namespace for the component.
2 The name of the component secret.
3 The IAM role for the component.
4 The path to the service account token inside the pod. By convention, this is
/var/run/secrets/openshift/serviceaccount/token for OpenShift Container Platform components.
OpenShift Container Platform components require the following permissions. These values are in the
CredentialsRequest custom resource (CR) for each component.
NOTE
These permissions apply to all resources. Unless specified, there are no request
conditions on these permissions.
Machine API Operator:
ec2:CreateTags
ec2:DescribeAvailabilityZones
ec2:DescribeDhcpOptions
ec2:DescribeImages
ec2:DescribeInstances
ec2:DescribeInternetGateways
ec2:DescribeSecurityGroups
ec2:DescribeSubnets
ec2:DescribeVpcs
ec2:DescribeNetworkInterfaces
ec2:DescribeNetworkInterfaceAttribute
ec2:ModifyNetworkInterfaceAttribute
ec2:RunInstances
ec2:TerminateInstances
Elastic load balancing:
elasticloadbalancing:DescribeLoadBalancers
elasticloadbalancing:DescribeTargetGroups
elasticloadbalancing:DescribeTargetHealth
elasticloadbalancing:RegisterInstancesWithLoadBalancer
elasticloadbalancing:RegisterTargets
elasticloadbalancing:DeregisterTargets
iam:PassRole
iam:CreateServiceLinkedRole
kms:Decrypt
kms:Encrypt
kms:GenerateDataKey
kms:GenerateDataKeyWithoutPlainText
kms:DescribeKey
kms:RevokeGrant [1]
kms:CreateGrant [1]
kms:ListGrants [1]
Cluster CAPI Operator:
ec2:CreateTags
ec2:DescribeAvailabilityZones
ec2:DescribeDhcpOptions
ec2:DescribeImages
ec2:DescribeInstances
ec2:DescribeInstanceTypes
ec2:DescribeInternetGateways
ec2:DescribeSecurityGroups
ec2:DescribeRegions
ec2:DescribeSubnets
ec2:DescribeVpcs
ec2:RunInstances
ec2:TerminateInstances
elasticloadbalancing:DescribeLoadBalancers
elasticloadbalancing:DescribeTargetGroups
elasticloadbalancing:DescribeTargetHealth
elasticloadbalancing:RegisterInstancesWithLoadBalancer
elasticloadbalancing:RegisterTargets
elasticloadbalancing:DeregisterTargets
iam:PassRole
iam:CreateServiceLinkedRole
kms:Decrypt
kms:Encrypt
kms:GenerateDataKey
kms:GenerateDataKeyWithoutPlainText
kms:DescribeKey
kms:RevokeGrant [1]
kms:CreateGrant [1]
kms:ListGrants [1]
Cloud Credential Operator:
iam:GetUser
iam:GetUserPolicy
iam:ListAccessKeys
Cluster Image Registry Operator:
s3:CreateBucket
s3:DeleteBucket
s3:PutBucketTagging
s3:GetBucketTagging
s3:PutBucketPublicAccessBlock
s3:GetBucketPublicAccessBlock
s3:PutEncryptionConfiguration
s3:GetEncryptionConfiguration
s3:PutLifecycleConfiguration
s3:GetLifecycleConfiguration
s3:GetBucketLocation
s3:ListBucket
s3:GetObject
s3:PutObject
s3:DeleteObject
s3:ListBucketMultipartUploads
s3:AbortMultipartUpload
s3:ListMultipartUploadParts
Ingress Operator:
elasticloadbalancing:DescribeLoadBalancers
Route 53:
route53:ListHostedZones
route53:ListTagsForResources
route53:ChangeResourceRecordSets
Tag:
tag:GetResources
sts:AssumeRole
Cloud Network Config Controller:
ec2:DescribeInstanceStatus
ec2:DescribeInstanceTypes
ec2:UnassignPrivateIpAddresses
ec2:AssignPrivateIpAddresses
ec2:UnassignIpv6Addresses
ec2:AssignIpv6Addresses
ec2:DescribeSubnets
ec2:DescribeNetworkInterfaces
AWS EBS CSI Driver Operator:
ec2:CreateSnapshot
ec2:CreateTags
ec2:CreateVolume
ec2:DeleteSnapshot
ec2:DeleteTags
ec2:DeleteVolume
ec2:DescribeInstances
ec2:DescribeSnapshots
ec2:DescribeTags
ec2:DescribeVolumes
ec2:DescribeVolumesModifications
ec2:DetachVolume
ec2:ModifyVolume
ec2:DescribeAvailabilityZones
ec2:EnableFastSnapshotRestores
kms:ReEncrypt*
kms:Decrypt
kms:Encrypt
kms:GenerateDataKey
kms:GenerateDataKeyWithoutPlainText
kms:DescribeKey
kms:RevokeGrant [1]
kms:CreateGrant [1]
kms:ListGrants [1]
Certain Operators managed by the Operator Lifecycle Manager (OLM) on AWS clusters can use manual
mode with STS. These Operators authenticate with limited-privilege, short-term credentials that are
managed outside the cluster. To determine if an Operator supports authentication with AWS STS, see
the Operator description in OperatorHub.
19.5.2. GCP Workload Identity
Requests for new and refreshed credentials are automated by using an appropriately configured OpenID
Connect (OIDC) identity provider combined with IAM service accounts. Service account tokens that are
trusted by GCP are signed by OpenShift Container Platform and can be projected into a pod and used
for authentication. Tokens are refreshed after one hour.
The following diagram details the authentication flow between GCP and the OpenShift Container
Platform cluster when using GCP Workload Identity.
Using manual mode with GCP Workload Identity changes the content of the GCP credentials that are
provided to individual OpenShift Container Platform components. Compare the following secret
content:
apiVersion: v1
kind: Secret
metadata:
namespace: <target_namespace> 1
name: <target_secret_name> 2
data:
service_account.json: <service_account> 3
1 The namespace for the component.
2 The name of the component secret.
3 The Base64 encoded service account.
Content of the Base64 encoded service_account.json file using long-term credentials
{
"type": "service_account", 1
"project_id": "<project_id>",
"private_key_id": "<private_key_id>",
"private_key": "<private_key>", 2
"client_email": "<client_email_address>",
"client_id": "<client_id>",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url":
"https://www.googleapis.com/robot/v1/metadata/x509/<client_email_address>"
}
1 The credential type is service_account.
2 The private RSA key that is used to authenticate to GCP. This key must be kept secure and is not
rotated.
Content of the Base64 encoded service_account.json file using GCP Workload Identity
{
"type": "external_account", 1
"audience": "//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-
pool/providers/test-provider", 2
"subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
"token_url": "https://sts.googleapis.com/v1/token",
"service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-
/serviceAccounts/<client_email_address>:generateAccessToken", 3
"credential_source": {
"file": "<path_to_token>", 4
"format": {
"type": "text"
}
}
}
1 The credential type is external_account.
2 The target audience is the GCP Workload Identity provider.
3 The resource URL of the service account that can be impersonated with these credentials.
4 The path to the service account token inside the pod. By convention, this is
/var/run/secrets/openshift/serviceaccount/token for OpenShift Container Platform components.
19.5.2.3. OLM-managed Operator support for authentication with GCP Workload Identity
Certain Operators managed by the Operator Lifecycle Manager (OLM) on GCP clusters can use manual
mode with GCP Workload Identity. These Operators authenticate with limited-privilege, short-term
credentials that are managed outside the cluster. To determine if an Operator supports authentication
with GCP Workload Identity, see the Operator description in OperatorHub.
Additional resources
CCO-based workflow for OLM-managed Operators with Google Cloud Platform Workload
Identity
19.5.2.4. Application support for GCP Workload Identity service account tokens
Applications in customer workloads on OpenShift Container Platform clusters that use Google Cloud
Platform Workload Identity can authenticate by using GCP Workload Identity. To use this authentication
method with your applications, you must complete configuration steps on the cloud provider console
and your OpenShift Container Platform cluster.
19.5.3. Microsoft Entra Workload ID
The following diagram details the authentication flow between Azure and the OpenShift Container
Platform cluster when using Microsoft Entra Workload ID.
Using manual mode with Microsoft Entra Workload ID changes the content of the Azure credentials that
are provided to individual OpenShift Container Platform components. Compare the following secret
formats:
Azure secret format using long-term CCO credentials
apiVersion: v1
kind: Secret
metadata:
namespace: <target_namespace> 1
name: <target_secret_name> 2
data:
azure_client_id: <client_id> 3
azure_client_secret: <client_secret> 4
azure_region: <region>
azure_resource_prefix: <resource_group_prefix> 5
azure_resourcegroup: <resource_group_prefix>-rg 6
azure_subscription_id: <subscription_id>
azure_tenant_id: <tenant_id>
type: Opaque
1 The namespace for the component.
2 The name of the component secret.
3 The client ID of the Microsoft Entra ID identity that the component uses to authenticate.
4 The component secret that is used to authenticate with Microsoft Entra ID for the <client_id>
identity.
5 The resource group prefix.
6 The resource group. This value is formed by the <resource_group_prefix> and the suffix -rg.
Azure secret format using Microsoft Entra Workload ID
apiVersion: v1
kind: Secret
metadata:
namespace: <target_namespace> 1
name: <target_secret_name> 2
data:
azure_client_id: <client_id> 3
azure_federated_token_file: <path_to_token_file> 4
azure_region: <region>
azure_subscription_id: <subscription_id>
azure_tenant_id: <tenant_id>
type: Opaque
1 The namespace for the component.
2 The name of the component secret.
3 The client ID of the user-assigned managed identity that the component uses to authenticate.
4 The path to the mounted service account token file.
OpenShift Container Platform components require the following permissions. These values are in the
CredentialsRequest custom resource (CR) for each component.
The following permissions are grouped by the component that requires them.
Cloud Controller Manager Operator
Microsoft.Network/loadBalancers/read
Microsoft.Network/loadBalancers/write
Microsoft.Network/networkInterfaces/read
Microsoft.Network/networkSecurityGroups/read
Microsoft.Network/networkSecurityGroups/write
Microsoft.Network/publicIPAddresses/join/action
Microsoft.Network/publicIPAddresses/read
Microsoft.Network/publicIPAddresses/write
Machine API Operator
Microsoft.Compute/availabilitySets/read
Microsoft.Compute/availabilitySets/write
Microsoft.Compute/diskEncryptionSets/read
Microsoft.Compute/disks/delete
Microsoft.Compute/galleries/images/versions/read
Microsoft.Compute/skus/read
Microsoft.Compute/virtualMachines/delete
Microsoft.Compute/virtualMachines/extensions/delete
Microsoft.Compute/virtualMachines/extensions/read
Microsoft.Compute/virtualMachines/extensions/write
Microsoft.Compute/virtualMachines/read
Microsoft.Compute/virtualMachines/write
Microsoft.ManagedIdentity/userAssignedIdentities/assign/action
Microsoft.Network/applicationSecurityGroups/read
Microsoft.Network/loadBalancers/backendAddressPools/join/action
Microsoft.Network/loadBalancers/read
Microsoft.Network/loadBalancers/write
Microsoft.Network/networkInterfaces/delete
Microsoft.Network/networkInterfaces/join/action
Microsoft.Network/networkInterfaces/loadBalancers/read
Microsoft.Network/networkInterfaces/read
Microsoft.Network/networkInterfaces/write
Microsoft.Network/networkSecurityGroups/read
Microsoft.Network/networkSecurityGroups/write
Microsoft.Network/publicIPAddresses/delete
Microsoft.Network/publicIPAddresses/join/action
Microsoft.Network/publicIPAddresses/read
Microsoft.Network/publicIPAddresses/write
Microsoft.Network/routeTables/read
Microsoft.Network/virtualNetworks/delete
Microsoft.Network/virtualNetworks/read
Microsoft.Network/virtualNetworks/subnets/join/action
Microsoft.Network/virtualNetworks/subnets/read
Microsoft.Resources/subscriptions/resourceGroups/read
Image Registry Operator
Required permissions for services:
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action
General permissions:
Microsoft.Storage/storageAccounts/blobServices/read
Microsoft.Storage/storageAccounts/blobServices/containers/read
Microsoft.Storage/storageAccounts/blobServices/containers/write
Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action
Microsoft.Storage/storageAccounts/read
Microsoft.Storage/storageAccounts/write
Microsoft.Storage/storageAccounts/delete
Microsoft.Storage/storageAccounts/listKeys/action
Microsoft.Resources/tags/write
Ingress Operator
Microsoft.Network/dnsZones/A/write
Microsoft.Network/privateDnsZones/A/delete
Microsoft.Network/privateDnsZones/A/write
Cloud Network Config Controller
Microsoft.Network/networkInterfaces/write
Microsoft.Compute/virtualMachines/read
Microsoft.Network/virtualNetworks/read
Microsoft.Network/virtualNetworks/subnets/join/action
Microsoft.Network/loadBalancers/backendAddressPools/join/action
Azure File CSI Driver Operator
Microsoft.Network/virtualNetworks/subnets/read
Microsoft.Network/virtualNetworks/subnets/write
Microsoft.Storage/storageAccounts/delete
Microsoft.Storage/storageAccounts/fileServices/read
Microsoft.Storage/storageAccounts/fileServices/shares/delete
Microsoft.Storage/storageAccounts/fileServices/shares/read
Microsoft.Storage/storageAccounts/fileServices/shares/write
Microsoft.Storage/storageAccounts/listKeys/action
Microsoft.Storage/storageAccounts/read
Microsoft.Storage/storageAccounts/write
Azure Disk CSI Driver Operator
Microsoft.Compute/snapshots/*
Microsoft.Compute/virtualMachineScaleSets/*/read
Microsoft.Compute/virtualMachineScaleSets/read
Microsoft.Compute/virtualMachineScaleSets/virtualMachines/write
Microsoft.Compute/virtualMachines/*/read
Microsoft.Compute/virtualMachines/write
Microsoft.Resources/subscriptions/resourceGroups/read
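Each component declares these permissions in a CredentialsRequest CR in the openshift-cloud-credential-operator namespace. The following is a minimal sketch of what such a CR can look like; the metadata names, service account name, and permissions subset are illustrative assumptions rather than the contents of any shipped CR, and the exact provider spec fields can vary by version.
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: example-component-azure            # hypothetical CR name
  namespace: openshift-cloud-credential-operator
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    permissions:                           # illustrative subset of the permissions listed above
    - Microsoft.Network/loadBalancers/read
    - Microsoft.Network/loadBalancers/write
  secretRef:
    name: <target_secret_name>             # where the generated secret is placed
    namespace: <target_namespace>
  serviceAccountNames:
  - example-component-sa                   # hypothetical service account that uses the credentials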
Certain Operators managed by the Operator Lifecycle Manager (OLM) on Azure clusters can use
manual mode with Microsoft Entra Workload ID. These Operators authenticate with short-term
credentials that are managed outside the cluster. To determine if an Operator supports authentication
with Workload ID, see the Operator description in OperatorHub.
Additional resources
CCO-based workflow for OLM-managed Operators with Microsoft Entra Workload ID