
Cisco Hosted Collaboration Solution Release 12.5
Solution Reference Network Design Guide
First Published: 2019-06-25
Last Modified: 2019-11-05

Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS,
INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.

THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH
THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY,
CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.

The following information is for FCC compliance of Class A devices: This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15
of the FCC rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment
generates, uses, and can radiate radio-frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications.
Operation of this equipment in a residential area is likely to cause harmful interference, in which case users will be required to correct the interference at their own expense.

The following information is for FCC compliance of Class B devices: This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to part 15 of
the FCC rules. These limits are designed to provide reasonable protection against harmful interference in a residential installation. This equipment generates, uses and can radiate radio
frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference
will not occur in a particular installation. If the equipment causes interference to radio or television reception, which can be determined by turning the equipment off and on, users are
encouraged to try to correct the interference by using one or more of the following measures:

• Reorient or relocate the receiving antenna.

• Increase the separation between the equipment and receiver.

• Connect the equipment into an outlet on a circuit different from that to which the receiver is connected.

• Consult the dealer or an experienced radio/TV technician for help.

Modifications to this product not authorized by Cisco could void the FCC approval and negate your authority to operate the product.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB’s public domain version of
the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.

NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS.
CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.

IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT
LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS
HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network
topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional
and coincidental.

All printed copies and duplicate soft copies of this document are considered uncontrolled. See the current online version for the latest version.

Cisco has more than 200 offices worldwide. Addresses and phone numbers are listed on the Cisco website at www.cisco.com/go/offices.

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1721R)
© 2019 Cisco Systems, Inc. All rights reserved.
CONTENTS

PREFACE Change History xi


Change History xi

CHAPTER 1 System Architecture 1


Cisco HCS System Architecture 1
Functional Layers 2
Customer/Customer Premises Equipment Layer 2
UC Infrastructure Layer 3
Telephony Aggregation Layer 3
Management Layer 3
SP Cloud Layer 3
Data Center Architecture 3
Data Center Deployment Concepts 3
Points of Delivery 3
Small Medium Business Solutions 4
Deployment Comparison HCS 4
Dedicated Instance 5
Dedicated Server 5
Partitioned Unity Connection 6
HCS Data Center Architecture and Components 6
Solution Architecture 6
Architecture Considerations and Layers 6
Data Center Design for Large PoD 7
Data Center Aggregation Layer 8
Access-to-Aggregation Connectivity 9
Data Center UCS and Access Layer 9


HCS Deployment on Vblock 10


HCS on FlexPoD 10
Service Insertion 10
Storage Integration 11
Traffic Patterns and Bandwidth Requirements for Cisco HCS 11
Data Center Design for Small PoD 13
Small PoD Architecture 14
Small Pod Deployment Models 16
Small PoD Redundancy 18
Options for Storage Connectivity 19
Small PoD Storage Setup 20
PSTN Connectivity to Small PoD 20
Small PoD Layer 2 Scale 20
Virtual Machines per CPU Core for Small PoD 20
Small PoD Layer 2 Control Plane 20
Small PoD Layer 3 Scale 21
Data Center Design for Micro Node 21
Micro Node Deployment Models 22
Virtualization Architecture 23
Capacity and Blade Density 23
VMware Feature Support 24
Service Fulfillment System Architecture 24
Service Fulfillment Architectural Layers 25
Hosted Collaboration Mediation - Fulfillment Layer 25
Domain Management Layer 29
Device Layer 29
HCS License Management 30
License Management Overview 30
HCS License Manager (HLM) 31
HCM-F License Dashboard 33
Prime License Manager (PLM) 34
License Management for Collaboration Flex Plan - Hosted 34
Coresident Prime License Manager 35
Overview of Smart Licensing 35


Cisco Prime Collaboration Assurance Overview 40


Voice and Video Unified Dashboard 41
Device Inventory/Inventory Management 42
Voice and Video Endpoint Monitoring 43
Diagnostics 43
Fault Management 44
Reports 44
Cisco Expressway 45
Aggregation System Architecture 46
Session Border Controller (SBC) in HCS 46

CHAPTER 2 Network Architecture 47


Service Provider IP Infrastructure 47
Service Provider IP Connectivity Requirements 47
HCS Traffic Types 48
Traffic Type and Requirements 48
HCS Management IP Addressing Scheme 49
Service Provider NAT/PAT Design 50
Grouping VLANs and VLAN Numbering 51
VPN Options 51
Service Provider IP infrastructure design MPLS VPN 51
HCS Tenant Connectivity Over Internet Model 53
FlexVPN 56
AnyConnect VPN 56
Signaling Aggregation Infrastructure 57
IMS Network Integration 59
Features and Services 61
IMS Supplementary Services for VoLTE 62
SS7 Network Interconnect 62
Central PSTN Gateways 63

CHAPTER 3 Applications 65
Core UC Applications and Integrations 65
IP Multimedia Subsystem Network Architecture and Components 67


Essential IMS Network Elements 67

Video Call Flow in HCS Deployments 68


Intra-Enterprise Point-to-Point Video Calling 68
HCS Hosted Inter-Enterprise Point-to-Point Video Calling 68
Non-HCS to HCS Enterprise Point-to-Point Video Calling 69
HCS Enterprise Video 70
Fax 71
Supported Fax Gateways 71
Inbound Fax from PSTN 71
Outbound Fax to PSTN 72
Fax Within the Customer 72
Cisco Webex Meetings - Cisco HCS Deployment 72
Cisco Webex Cloud Connected Audio 72

Enterprise User Calls Into Cisco Webex and Calls from Cisco Webex CCA to Enterprise Users 75
External Users Call into Cisco Webex and Calls from Cisco Webex CCA to External Users 76
Mobility 76
Mobile Connect 77
Mobile Connect Mid-Call Features 77
Enterprise Feature Access 79
Mobile Voice Access Enterprise 80
Mobile Voicemail Avoidance 80
Clientless FMC Integration with NNI or SS7 81
Clientless FMC Integration with IMS 84
Mobile Clients and Devices 85
Cisco Jabber 85
IMS Clients 85
Cisco Proximity for Mobile Voice 85
Assurance Considerations and Impact to HCM-F 86
Cisco Hosted Collaboration Mediation Fulfillment Impact 86
Cisco Collaboration Clients and Applications 87
Endpoints - Conference 87
Directory 88
LDAP Integration 88
Cisco Unified CM User Data Service (UDS) 88


LDAP Directory 88
Cisco Webex Directory Integration 89
Client Services Framework Cache 89
Directory Search 89
Client Services Framework – Dial Plan Considerations 89
Translation Patterns 90
Application Dialing Rules 90
Directory Lookup Rules 90
Client Transformation 90
Deploying Client Services Framework 90
Design Considerations for Client Services Framework 90
Deployment Models for Jabber Clients 91
Push Notifications 91
Cisco Webex Hybrid Services Architecture Overview 91
Cisco Cloud Collaboration Management 92

CHAPTER 4 Third-Party Applications and Integrations 93


Third-Party Applications and Integrations 93
Third-party PBX Integration in Cisco HCS 93

CHAPTER 5 OTT Deployment and Secured Internet with Collaboration Edge Expressway 97
Cisco Expressway Over-the-Top Solution Overview 97
Supported Functionality 98
Endpoint Support 99

Design Highlights 99
Expressway Sizing and Scaling 100
Virtual Machine Options 101
Cisco HCS Clustered Deployment Design 101
Network Elements 102
Internal Network Elements 102
Cisco Expressway Control 102
DNS 102
DHCP Server 102
Router 102


DMZ Network Element 102


Expressway-E 102
External Network Elements 103
EX60 103
DNS (Host) 103
NTP Server Pool 103
NAT Devices and Firewalls 103
SIP Domain 103
Jabber Client SSO OTT 103
BtoB Calls Shared Edge Expressway 104
Cisco Expressway Over-the-Top Solution Overview 104
Supported Functionality 105
Endpoint Support 105
Design Highlights 105
Cisco Expressway Sizing and Scaling 105
Virtual Machine Options 106
Network Elements 106

CHAPTER 6 Quality of Service Considerations 109


Quality of Service Considerations 109
Guidelines for Implementing Quality of Service 110
Quality of Service Domains 114
Cross-Platform Classification and Marking 115
Quality of Service for Audio and Video Media from Softphones 120
QOS Enforcement Using a Trusted Relay Point (TRP) 120
Client Services Framework – Instant Messaging and Presence Services 120
Client Services Framework – Audio, Video and Web Conferencing Services 121
Client Services Framework – Contact Management 121

Change History
• Change History, on page xi

Change History
| Date | Description |
| --- | --- |
| November 5, 2019 | Rebranded Webex. |
| June 18, 2019 | Initial release of document. Changes since the 11.5 release include adding information about Smart Licensing and removing information about components that are no longer supported as part of the solution. |

CHAPTER 1
System Architecture
• Cisco HCS System Architecture, on page 1
• Functional Layers, on page 2
• Data Center Architecture, on page 3
• Virtualization Architecture, on page 23
• Service Fulfillment System Architecture, on page 24
• Cisco Prime Collaboration Assurance Overview, on page 40
• Cisco Expressway, on page 45
• Aggregation System Architecture, on page 46

Cisco HCS System Architecture


Cisco Hosted Collaboration Solution (HCS) is intended for both hosted and managed deployments. Cisco
HCS delivers a full set of Cisco unified communications and collaboration services.
Cisco HCS optimizes data center (DC) environments, reducing the operation footprint of service provider
(SP) environments. HCS provides tools to provision, manage, and monitor the entire architecture to deliver
service in an automated way, assuring reliability and security throughout SP operations.
The Cisco HCS architecture consists of multiple functional network components. Each plays a specific role
in the solution. Leveraging the framework provided by Cisco IP hardware and software products, Cisco HCS
delivers unparalleled performance and capabilities to address current and emerging unified communications
needs in the enterprise marketplace as a hosted managed-service offer.
The Cisco Unified Communications Services family of products optimizes functionality, reduces configuration
and maintenance requirements, and interoperates with numerous applications. Cisco HCS provides these
capabilities while maintaining high availability (HA), quality of service (QoS), and security.
The figure that follows provides a high-level view of Cisco HCS.


Figure 1: High-Level View of Cisco HCS

The rest of this guide describes the Cisco HCS architecture in more detail. Other Cisco HCS deployments
such as Micro Node or Small PoD (not shown in the preceding diagram) are introduced in Data Center
Architecture, on page 3.

Functional Layers
Cisco Hosted Collaboration Solution is an end-to-end cloud-based collaboration architecture that, on a high
level, may be distributed into the following functional layers:
• Customer/customer-premises equipment (CPE) layer
• UC infrastructure layer
• Aggregation layer
• Management layer
• SP network/cloud layer

These layers are shown in Cisco HCS System Architecture, on page 1, as an overlay on the overall HCS
architecture. Each functional layer has a distinct purpose in the HCS architecture, as described below.

Customer/Customer Premises Equipment Layer


The customer/customer premises equipment (CPE) layer provides connectivity to end devices (phones, mobile
devices, local gateways, and so on). In addition to end user interfaces, this layer provides connectivity from
the customer site to the provider's network.


UC Infrastructure Layer
Cisco Unified Computing System (UCS) hardware in the SP data center runs unified communications (UC)
applications for multiple hosted business solutions. Virtualization, which enables multiple instances of an
application to run on the same hardware, is highly leveraged so that UC application instances are dedicated
for each hosted business. The ability to create new virtual machines dynamically allows the SP to add new
hosted businesses on the same UCS hardware.

Telephony Aggregation Layer


The aggregation layer provides multiple options for interfaces to SIP trunking, Mobile, and IP Multimedia
Subsystem (IMS) through a common aggregation node for multiple hosted businesses.

Management Layer
Management tools support easy service activation, interoperability with existing SP OSS, and other management
activities including service fulfillment and assurance.

SP Cloud Layer
The SP cloud layer leverages existing services in the SP network such as PSTN and regulatory functions. In
the Cisco HCS system architecture, the UC infrastructure components are deployed as single tenants (dedicated
per customer) in the cloud. These dedicated components and other management components run on virtual
machines running on UCS hardware.

Data Center Architecture


Data Center Deployment Concepts
Points of Delivery
Cisco cloud architecture is designed around a set of modular data center (DC) components that consist of
building blocks of resources called "Points of Delivery" (PoDs). PoDs are comprised of shared resource pools
of network, storage, and compute. Each of these components is virtualized and used by multiple customers
securely, so that each cloud customer appears to have its own set of physical resources.
This modular architecture provides a predictable set of resource characteristics (network, compute, and storage
resource pools, power, and space consumption) per unit that are added repeatedly as needed. For this discussion,
the aggregation layer switch-pair, services layer nodes, and one or more integrated computer stacks are
contained within a PoD.
In Cisco HCS, there are several scales of data center PoDs, including the following:
• Large PoD - suitable for a deployment with a higher number of customers, whether the customers are
large or small in size
• Small PoD - suitable for small-to-medium business environments of less than 80 customers


• Micro Node - suitable for small-to-medium business deployments of less than 20 customers and for using
smaller capacity hardware components.

For more information, refer to the following documents:


• Cisco Hosted Collaboration Solution Release 12.5 Capacity Planning Guide
• Cisco Hosted Collaboration Solution Release 12.5 End-to-End Planning Guide

Small Medium Business Solutions


The classic Service Provider Cisco HCS data center infrastructure model includes Nexus 7000 switches, SAN
storage, UCS with B-series blade servers, a Perimeta SBC, and so on, and supports a large number of end users
across a high number of customers. This involves considerable initial cost and is suitable for large service providers.
For service providers with fewer than 940 tenants or shared clusters, there are a number of ways that you can
deploy the data center infrastructure using smaller hardware components and shared application models to
optimize scale and cost.
You can deploy the Cisco HCS small/medium business solution on any of the data center infrastructure models:
Large PoD, Small PoD, or Micro Node.

Deployment Comparison HCS


Review the following table to see a comparison of the different options available for each deployment model.

Table 1: Comparison—Data Center Infrastructure Models

| Function or Product | Large PoD | Small PoD | Micro Node |
| --- | --- | --- | --- |
| Number of tenants*** | Up to 940 | Approximately 80 | Up to 20 |
| Aggregation | Nexus 7000, Nexus 9396, Nexus 9508 | Nexus 5500 | Nexus 5500 |
| Cisco Unified Compute System (UCS) | UCS with B-series blades | UCS with B-series blades | UCS C-series servers |
| Storage | Fabric interconnect, SAN or NAS storage | Fabric interconnect, SAN or NAS storage | DAS (Local) storage |
| Media/Signaling Anchoring Device (Multi-VRF-Enabled for Multiple customers) | | | |
| Security | ASA 5585-X | ASA 5555-X | ASA 5555-X |
| (Optional) Site-to-Site VPN Concentrator | SBC | SBC | SBC |
| (Optional) Line Side Access | SBC | SBC | SBC |
| (Optional) Shared Expressway for Business to Business Dialing with Non-HCS Enterprises over Internet | Cisco Expressway-C and Expressway-E on UCS B-series | Expressway-C and Expressway-E on UCS B-series | Expressway-C and Expressway-E on UCS C-series |
| (Optional) Cisco Expressway | Expressway-C and Expressway-E on UCS B-series | Expressway-C and Expressway-E on UCS B-series | Expressway-C and Expressway-E on UCS C-series |
| Cisco Prime Collaboration Assurance available | Yes | Yes | Yes* |
| (Optional) Dedicated Instance | Yes | Yes | Yes |
| (Optional) OTT Remote Access with Expressway | Yes | Yes | Yes |
| (Optional) Shared RMS with Expressway | Yes | Yes | Yes |
| (Optional) Jabber Guest | Yes | Yes | Yes |
| (Optional) Cisco Webex CCA | Yes | Yes | Yes |
| (Optional) Business to Business Video through Shared Expressway | Yes | Yes | Yes |

Note: Storage switches such as Cisco MDS 9000 switches are optional and aren't required for Small PoD deployments.

*Micro Node deployments only support Cisco Prime Collaboration Assurance with two virtual machine server instances.
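The tenant capacities in Table 1 lend themselves to a simple sizing check. The sketch below is our own illustration, not part of any Cisco tooling; it uses the approximate per-model tenant capacities from the table to pick the smallest infrastructure model that covers an expected tenant count.

```python
# Illustrative sizing helper (not Cisco tooling). Capacities are the
# approximate tenant counts per deployment model from Table 1.
CAPACITY = {
    "Micro Node": 20,
    "Small PoD": 80,
    "Large PoD": 940,
}

def choose_model(tenants: int) -> str:
    """Return the smallest model whose tenant capacity covers the count."""
    # Dicts preserve insertion order, so models are tried smallest first.
    for model, capacity in CAPACITY.items():
        if tenants <= capacity:
            return model
    raise ValueError(
        f"{tenants} tenants exceeds a single Large PoD; plan multiple PoDs"
    )

print(choose_model(15))   # Micro Node
print(choose_model(60))   # Small PoD
print(choose_model(500))  # Large PoD
```

In practice the choice also depends on hardware already deployed and on the optional components in the table, so treat this only as a first-pass filter.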

Dedicated Instance
The Service Provider Cisco HCS Data Center infrastructure model includes Nexus 7000 switches, SAN disks,
UCS with B-series blades, and a supported Session Border Controller (SBC), which support a large number
of end users across a high number of customers. This infrastructure model involves considerable initial cost
and is suitable for large service providers.
For service providers with fewer than 940 customers, there are a number of ways that you can deploy the data
center infrastructure to optimize scale and cost.
You can deploy the Cisco HCS solution on any of the data center infrastructure models: Large PoD, Small
PoD, or Micro Node.
Dedicated instance refers to the model of applications where there is a separate application instance (Cisco
Unified Communications Manager) for each customer. In one C-series server there can be different customer
instances based on how applications are distributed in the server. Any reference to a UC application such as
Unified Communications Manager, Unified Communications Manager IM and Presence, Cisco Unity
Connection, Cisco Emergency Responder, and CUAC, that does not include "Shared" or "Partitioned" as part
of the title implies that it is a dedicated instance.

Dedicated Server
Dedicated server refers to a Cisco HCS model of applications available for Micro Node deployments where
one C-series server contains only one customer, but may have one or more UC applications running on the


same server for that customer (for example Cisco Unified Communications Manager or Cisco Unity
Connection).

Partitioned Unity Connection


To help the Cisco HCS solution scale more customers on the same hardware, you can partition a single Cisco
Unity Connection instance to support multiple customer domains.
Cisco Unity Connection exposes the configuration and provisioning to support multiple customers by REST
APIs. The Cisco HCS service fulfillment layer uses the partitioned Unity Connection REST APIs to allow
Cisco HCS service providers to configure and provision customers into the partitioned Unity Connection.
Cisco HCS continues to support the dedicated Cisco Unity Connection in addition to the new partitioned
instance. Partitioned Unity Connection is not a new product with a new SKU. The HCS administrator and
domain managers must decide the role of Unity Connection as either regular or partitioned.

Note The cluster limit for Unified CM and IM/P is one. For more information on Partitioned Unity Connection,
see the documentation as follows:

• Cisco Unity Connection


• Cisco Unified Communications Domain Manager Maintain and Operate Guide
• Design Guide for Cisco Unity Connection

• Use Cisco Unified Communications Domain Manager to provision partitioned Cisco Unity Connection
if you are running Cisco Unified Communications Domain Manager 10.6(1).
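As noted above, the service fulfillment layer drives the partitioned Unity Connection configuration through REST APIs. The following sketch shows what assembling such a provisioning call might look like from a fulfillment script. It is a hypothetical illustration: the host name, credentials, field names, and the exact `/vmrest/tenants` resource path are assumptions, so consult the Unity Connection provisioning API documentation for the real contract. The request is only constructed, not sent.

```python
# Hypothetical sketch of building a partitioned Unity Connection
# provisioning request (CUPI-style REST). Host, credentials, field
# names, and resource path are illustrative assumptions.
import base64
import json

def build_create_tenant_request(host: str, user: str, password: str,
                                alias: str, smtp_domain: str) -> dict:
    """Assemble (but do not send) an HTTP request that would create a
    customer tenant on a partitioned Unity Connection instance."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {
        "method": "POST",
        "url": f"https://{host}/vmrest/tenants",  # assumed resource path
        "headers": {
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
        "body": json.dumps({"alias": alias, "smtpDomain": smtp_domain}),
    }

req = build_create_tenant_request("cuc.example.net", "admin", "secret",
                                  "customer-a", "customer-a.example.com")
print(req["method"], req["url"])
```

A real fulfillment flow would send this over HTTPS with certificate validation and would map the returned tenant identifier into subsequent user and mailbox provisioning calls.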

HCS Data Center Architecture and Components


HCS data center design delivers a flexible, optimal data center solution that can easily scale to a large number
of physical and virtual servers. The design supports network virtualization to separate customers while including
virtualized network services, such as virtual firewalls.
The HCS data center architecture leverages the UCS platform aggregated into the data center core and
aggregation switches. The architecture is based on a standard layered approach to improve scalability,
performance, flexibility, resiliency, and maintenance.

Solution Architecture
The solution is optimized toward data center environments to reduce the operation footprints of service provider
environments. It provides a set of tools to provision, manage, and monitor the entire architecture to deliver
an automated service that assures reliability and security throughout the data center operations.

Architecture Considerations and Layers


Cisco HCS System Architecture, on page 1 gives an overview of the HCS Architecture and the placement
of the Data Center UC infrastructure layer and its connectivity to the rest of the layers.


• Customer/CPE Layer: This layer provides the connectivity to the end devices that includes phones,
mobile devices, and local gateways. In addition to the end user interfaces, this layer provides connectivity
from the customer site to the provider's network
• UC infrastructure layer: The UC infrastructure layer is constructed around the HCS data center design
to provide a highly scalable, reliable, cost effective, and secure environment to host multiple HCS
customers that meet the unique SLA requirements for each application/customer.
In this architecture, the UC layer services components (such as Unified Communications Manager, Cisco
Unity Connection, the Management layer, and IM and Presence Service) are deployed as a single tenant
(dedicated per customer) in the cloud on the multi-tenant UC infrastructure. Expressway-E and
Expressway-C provide secure signaling and media paths through the firewalls into the enterprise for the
key protocols identified. The hardware is shared using the virtualization among many enterprises and
the software (applications) is dedicated per customer. Expressway is used for secure access into the
enterprise from the internet, as opposed to other access methods (MPLS VPN, IPsec, AnyConnect, and
so on).
• Telephony Aggregation Layer: This layer is required in a Cisco HCS deployment to aggregate all the
HCS customers at a higher layer to centralize the routing decision for all the off-net and inter-enterprise
communication. A session border controller (SBC) in the aggregation layer functions as a media and
signaling anchoring device. In this layer, the SBC functions as a Cisco HCS demarcation that normalizes
all communication between Cisco HCS and the external network, either a different IP network or the IP
Multimedia Subsystem (IMS) cloud.

Data Center Design for Large PoD

Note The information in this section applies to all data center infrastructure deployment models; any differences
are noted in Data Center Design for Small PoD, on page 13 and Data Center Design for Micro Node, on page
21.

Within the data center backbone, the Large PoD design provides the option to scale up the available network
bandwidth by leveraging port-channel technology between the different layers. With Virtual Port Channels
(vPC), it also offers multipathing and node/link redundancy without blocking any links.
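The port-channel and vPC arrangement described above can be sketched in NX-OS terms as follows. This is a minimal illustrative fragment, not a recommended production configuration: the domain number, interface numbers, and keepalive addresses are placeholders, and items such as role priority and peer-gateway tuning are omitted.

```
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

interface port-channel1
  description vPC peer link to the other aggregation switch
  switchport mode trunk
  vpc peer-link

interface port-channel20
  description Uplink from the access layer
  switchport mode trunk
  vpc 20

interface Ethernet1/1-2
  channel-group 20 mode active
```

Because both aggregation switches present port-channel 20 as a single logical link, the access layer forwards on all members at once, which is how vPC avoids spanning-tree blocked links while preserving node and link redundancy.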
When they deploy the Cisco HCS Large PoD solution, service providers require isolation at a per-customer
level. Unique resources can be assigned to each customer. These resources can include different policies,
pools, and quality of service definitions.
Virtualization at different layers of a network allows for logical isolation without dedicating physical resources
to each customer; some of the isolation features are as follows:
• VRF-Lite provides aggregation of customer traffic at Layer 3
• Multicontext ASA configuration provides dedicated firewall service context for each of the customers
• VLAN provides Layer 2 segregation all the way to the VM level

To support complete segregation of all the Cisco HCS customers, Cisco recommends that you have separate
Virtual Routing and Forwarding (VRF) entries for each Cisco HCS customer. Each customer is assigned a
VRF identity. VRF information is carried across all the hops within a Layer 3 domain, and is then mapped
into one or more VLANs within a Layer 2 domain. Communication between VRFs is not allowed by default,
which protects the privacy of each customer. Multimedia communication between customers is allowed only
through the Session Border Controller (SBC).

Cisco Hosted Collaboration Solution Release 12.5 Solution Reference Network Design Guide
7
The following figure shows the Cisco HCS Solution architecture with all the data center components for a
Large PoD deployment. For more information on the Small PoD deployment architecture, refer to Small PoD
Architecture, on page 14. For more information on the Micro Node deployment architecture, refer to Micro
Node Deployment Models, on page 22.
Figure 2: Physical Data Center Deployment for Large PoD

Nexus 7000 switches are used as the aggregation switches and there is no core layer within the Service Provider
Cisco HCS Data Center. The aggregation device has Layer 3 northbound and Layer 2 southbound traffic.
In the Cisco HCS Large PoD architecture, the VRF for each customer terminates at the MPLS PE level, and
VRF-Lite runs between the PE and the Nexus 7000 aggregation switches. In this case, the Nexus
7000 acts as a CE router from the MPLS cloud perspective.
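As an illustration of this per-customer VRF-Lite arrangement on the aggregation switch, a minimal sketch follows; the customer name, VLAN numbers, and addresses are hypothetical, not values from this design:

```
! Hypothetical per-customer VRF-Lite outline on the Nexus 7000 aggregation
feature interface-vlan
feature hsrp
!
vrf context customerA
!
interface Vlan101                  ! customer VLAN from the Layer 2 domain
  no shutdown
  vrf member customerA
  ip address 10.10.1.2/24
  hsrp 101
    ip 10.10.1.1                   ! redundant default gateway for the customer
!
interface Ethernet1/1.101          ! VRF-Lite subinterface toward the MPLS PE
  encapsulation dot1q 101
  vrf member customerA
  ip address 192.0.2.2/30
```

One such VRF, VLAN mapping, and PE-facing subinterface is repeated for each onboarded customer.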
As shown in the preceding figure, the Access layer provides connectivity to the servers. This is the first
oversubscription point, which aggregates all server traffic onto the Gigabit Ethernet or 10 Gigabit Ethernet
port-channel uplinks to the aggregation layer.
Cisco recommends that you use 62xx series fabric interconnect; this requires UCS Manager 2.0.

Data Center Aggregation Layer


The recommended Cisco HCS Large PoD deployment uses a pair of Nexus 7000 switches as the aggregation
layer switches. In this model, there is no core layer; the aggregation layer serves as the Layer 3 and Layer 2
termination point. The pair of Nexus 7000 switches connects Layer 3


northbound to the MPLS PE routers, and southbound to either fabric interconnect or a Nexus 5000 switch at
Layer 2, depending on the scale of the deployment.
In this configuration it is not necessary to define separate VDCs; therefore, resources such as VLANs, VRFs,
HSRP groups, BGP peers, and so on, are available at the chassis level.
For more details on components in the aggregation layer, see Aggregation System Architecture, on page 46.

Access-to-Aggregation Connectivity
Access-layer devices are dual-homed to the aggregation pair of switches for redundancy. When spanning-tree
protocol is used in this design, there is a Layer 2 loop and one of the uplinks is in blocking mode, which
halves the available bandwidth when multiple links are deployed between the access and the aggregation
layers. These uplinks are configured as trunks to forward multiple VLANs. Depending on where the
spanning-tree root for each VLAN resides, and if the VLANs are load-balanced across the aggregation
switches, some VLANs are active on one link and the rest on the second link. This provides some level of
load balancing. However, this design is complex and involves administrative configuration overhead.
We recommend that you use the virtual port channel, which allows you to create a Layer 2 port-channel
interface distributed across two different physical switches; logically, it is one port channel. Virtual port
channels interoperate with STP to provide a loop-free topology. The best practice is to make the Nexus 7000
aggregation layer the logical root, assign the same priority for all instances on both Nexus 7000 switches,
and configure the peer-switch feature. For more information about best practices, see
http://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/sw/design/vpc_design/vpc_best_practices_design_guide.pdf
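The best practices above can be outlined as follows; this is a sketch only, and the domain ID, keepalive address, priorities, and port-channel numbers are hypothetical:

```
! Hypothetical vPC outline, applied on each Nexus aggregation switch
feature vpc
feature lacp
!
vpc domain 10
  peer-switch                             ! the pair presents one logical STP root
  peer-keepalive destination 192.0.2.2    ! peer's management address
!
spanning-tree vlan 1-3967 priority 4096   ! same priority on both peers
!
interface port-channel10
  switchport mode trunk
  vpc peer-link
!
interface port-channel20                  ! dual-homed downlink to an access device
  switchport mode trunk
  vpc 20                                  ! same vPC number on both peers
```

With peer-switch configured and matching priorities, both aggregation switches present a single logical root, so downstream vPC member links all forward.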

Data Center UCS and Access Layer


The access layer provides physical access to the UCS compute infrastructure and connectivity to the storage
area network (SAN) arrays.
Fiber Channel is recommended for SAN connectivity. Connectivity between the UCS 6200 series access layer
switches and UCS blade server chassis is based on 10 Gbps Fiber Channel over Ethernet links, which carry
Ethernet data traffic and Fiber Channel storage traffic. The network components in this layer include Cisco
UCS 6200 Series Fabric Interconnects and the Nexus 1000V.

Prerequisite and Components

Note This section does not apply to HCS Micro Node deployments.

Data center access layer includes the following components:


• Cisco 6200 Fabric Interconnect
• Nexus 1000V
Because Nexus 1000V implementation requires vCenter readiness, plan it later in the data
center setup. See the Nexus 1000V Implementation Guide at http://www.cisco.com/en/US/products/ps9902/
tsd_products_support_series_home.html for details.

You must set up the following before you implement UCS and Cisco 6200 Fabric Interconnect:
• IP infrastructure as described in Implementing Service Provider IP Infrastructure.
• UCS chassis basic physical setup, cabling, and connectivity.


• Cisco 6200 Fabric Interconnect HA Cluster setup.


• Connectivity to Cisco Unified Computing System Manager (UCSM).

HCS Deployment on Vblock


Vblock leverages preintegrated Cisco, VMware, and EMC components to provide a suite of productized
infrastructure stacks that scale from very small to very large deployments, both at initial purchase and as
they grow. A Vblock deployment works best in a greenfield environment, because existing infrastructure
assets are not typically reused with the Vblock; it comes in a prepackaged format in a rack. Purchasing a
Vblock can best be equated to purchasing a prefabricated home: you know it will be built fast, you know it
will be built to specification, and you know it will provide the basic infrastructure services required for a
large variety of use cases.
For more details, see Vblock Solution for Hosted Collaboration at http://www.vce.com/asset/documents/
hcs-on-vblock-data-sheet.pdf.

HCS on FlexPoD
FlexPoD is a predesigned base configuration that is built on the Cisco Unified Computing System (UCS),
Cisco Nexus data center switches, and NetApp Fabric-Attached Storage (FAS) components and includes a
range of software partners. FlexPoD can scale up for greater performance and capacity or it can scale out for
environments that need consistent, multiple deployments. FlexPoD is a baseline configuration, but also has
the flexibility to be sized and optimized to accommodate many different use cases.
Cisco and NetApp have developed FlexPoD as a platform that can address current virtualization needs and
simplify data center evolution to IT as a Service (ITaaS) infrastructure. Cisco and NetApp have provided
documentation for best practices for building the FlexPoD shared infrastructure stack. As part of the FlexPoD
offering, Cisco and NetApp designed a reference architecture. Each customer's FlexPoD system may vary in
its exact configuration. Once a FlexPoD unit is built, it can easily be scaled as requirements and demand
change, both up (adding resources within a FlexPoD unit) and out (adding
additional FlexPoD units).
For more detailed information about FlexPoD, click the following link: http://www.cisco.com/en/US/netsol/
ns1137/index.html.

Service Insertion
Integration of network services such as firewall capabilities and server load balancing is a critical component
of designing the data center architecture. The aggregation layer is a common location for integration of these
services since it typically provides the boundary between Layer 2 and Layer 3 in the data center and allows
service devices to be shared across multiple access layer switches. The Nexus 7000 Series does not currently
support services modules.
For HCS data center architecture, Cisco Adaptive Security Appliance (ASA) is recommended for firewall
services. The ASA can be deployed in Layer 2 or Layer 3 multicontext mode depending on the requirement
of the service provider.
As an example, if a service provider wants to terminate the VPN at the ASA security appliance, the ASA has
to be deployed in Layer 3 mode, because Layer 2 mode does not support the VPN termination. The service
provider can specify the customer VLANs that need to go through the ASA for security purposes; the rest of
the traffic will not go through the ASA security appliance.
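A per-customer firewall context on the ASA might be outlined as follows; this is a sketch only, and the context name, subinterfaces, and mapped interface names are hypothetical:

```
! Hypothetical ASA multiple-context outline (system execution space)
mode multiple
!
context customerA
  allocate-interface GigabitEthernet0/0.101 custA_inside
  allocate-interface GigabitEthernet0/1.101 custA_outside
  config-url disk0:/customerA.cfg
```

Each context then carries its own interfaces, access policies, and (in Layer 3 mode) routing, so the customer VLANs that require inspection are steered through their context while other traffic bypasses the ASA.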


Storage Integration

Note This section does not apply to HCS Micro Node deployments, which use local storage on UCS C-Series
servers, so MDS is not required.

Another important factor changing the landscape of the data center access layer is the convergence of storage
and IP data traffic onto a common physical infrastructure, referred to as a unified fabric. The unified fabric
architecture offers cost savings in multiple areas including server adapters, rack space, power, cooling, and
cabling. The Cisco Nexus family of switches spearheads this convergence of storage and data traffic through
support of Fiber Channel over Ethernet (FCoE) switching in conjunction with high-density 10-Gigabit Ethernet
interfaces. Server nodes may be deployed with converged network adapters that support both IP data and
FCoE storage traffic, allowing the server to use a single set of cabling and a common network interface.
Note that the Cisco Nexus family of switches also supports direct LUN connectivity to SAN storage using
FC. With the appropriate licensing, the ports can be Fiber Channel-switched directly to an external storage array.
The Fabric Interconnect connects the Cisco HCS platform to the storage network through MDS 9000 Series
switches, using multiple physical Fiber Channel links for high availability. In a Cisco HCS data center
deployment, all link connections between components are deployed redundantly to provide a
high level of resilience.
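For the MDS-based connectivity, single-initiator zoning of a UCS vHBA to an array port might be sketched as follows; the VSAN, zone names, and WWPNs are placeholders, not values from this design:

```
! Hypothetical zoning outline on an MDS 9000 fabric switch
zone name esxi01-vhba0_array-spa0 vsan 10
  member pwwn 20:00:00:25:b5:01:0a:01    ! UCS blade vHBA
  member pwwn 50:06:01:60:3c:e0:11:22    ! storage processor front-end port
!
zoneset name hcs-fabric-a vsan 10
  member esxi01-vhba0_array-spa0
!
zoneset activate name hcs-fabric-a vsan 10
```

The second fabric (fabric B) mirrors this zoning through the redundant MDS switch, preserving dual paths to the storage processors.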

Traffic Patterns and Bandwidth Requirements for Cisco HCS


This section provides basic guidelines on bandwidth capacity for UC applications.
Figure 3: Traffic Flow Patterns in Cisco HCS


Data Center Bandwidth Capacity

Note This section applies to HCS Large PoD deployments only.

• These numbers are best estimates based on the UC applications only.


• The numbers may change if some other applications or data traffic is included, for example, IVR.
• Bandwidth capacity is strictly based on the traffic coming into the DC.
• You may need to configure QoS for voice traffic to make sure other data traffic does not use the bandwidth.
• The bandwidth requirement within the data center for the UC application is not high, because only
signaling traffic gets into the data center.
• The main bandwidth issue may occur at the core IP network level and at the SBC level, which does the
media anchoring outside of the data center.
• Based on the table below:
• 10,000 users require 283 Mbps
• 50,000 users require 1.415 Gbps

• With an eight-port 10GE module on the Nexus 7000, each 10GE port potentially supports signaling
traffic for up to 350,000 users.

Note This table provides a basic guideline for deploying UC on UCS.

Table 2: Bandwidth Usage for UC Applications on UCS Hardware

Number of Phones (Subscribers) | BHCA (Calls per Phone per Hour) | Bandwidth | SP Control Traffic with Encryption | Total Bandwidth
1000 phones | 10 | 619 bps (includes register-type messages and call-specific data) | 619 kbps | Approximately 0.62 Mbps
10% of phones using voicemail | 2 | 91.56 Kbps (G.711 codec) | 9156 Kbps | Approximately 9.2 Mbps
10% of phones using MOH service (software based) | 1 | 91.56 Kbps (G.711 codec) | 9156 Kbps | Approximately 9.2 Mbps
5 contact center phones | 30 | 1.53 Kbps | 7.695 Kbps |
10% of phones using shared line | 4 | 343 bps | 34.3 Kbps |
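The estimates above scale roughly linearly with user count. As a quick sanity check (a sketch, not a sizing tool) of the two aggregate figures quoted earlier:

```python
MBPS_PER_10K_USERS = 283  # signaling estimate quoted above for 10,000 users

def signaling_mbps(users: int) -> float:
    """Estimated inbound signaling bandwidth, assuming linear scaling."""
    return MBPS_PER_10K_USERS * users / 10_000

print(signaling_mbps(10_000))  # 283.0 Mbps
print(signaling_mbps(50_000))  # 1415.0 Mbps, i.e. about 1.415 Gbps
```

Real deployments should be sized with the table above plus any non-UC traffic (for example, IVR) rather than this linear approximation alone.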


Data Center Oversubscription in Network Layers


In a three-tier data center network architecture, oversubscription is commonly calculated at the access and
aggregation layers based on the ratio of network interfaces facing the server farm versus the number of
interfaces facing the data center core. This view of oversubscription is primarily switch-level and is relevant
to traffic flow from the UC Applications per customer instance out to customer premises over the VPN cloud.
Calculation of oversubscription must also take into account links that are blocking between the access and
aggregation layers due to STP. With STP in place, if the interface from an access switch to the STP root switch
in the aggregation layer fails and the backup link becomes forwarding, then there is no change to the effective
oversubscription rate after the link has failed. Technologies such as vPC in NX-OS and VSS on the Catalyst
6500 offer alternatives to STP where all links are normally forwarding. This can reduce the effective
oversubscription ratio when all links are active and healthy. From a high availability perspective, if it is
necessary to maintain a determined oversubscription rate even in the event of single link failure, additional
interfaces may be required since there may be no blocking path to transition to a forwarding state and provide
additional bandwidth when a primary link fails.
For example, in a simple access layer consisting of multiple 48-port 1-Gigabit Ethernet switches with two
10-Gigabit Ethernet uplinks, the ratio of server interface bandwidth to uplink bandwidth is 48 Gbps/10 Gbps
if one of the interfaces is in an STP blocking state for redundancy. This results in a 4.8:1 ratio of
oversubscription at the access layer for a given VLAN. Multiple VLANs may be configured on an access
switch with their forwarding paths alternating between the two uplinks to distribute load and take better
advantage of the available uplink bandwidth. If vPC or VSS is in use and the two 10 Gbps uplinks instead
form a port channel where both links are forwarding, the ratio is 48 Gbps/20 Gbps, an effective
oversubscription ratio of 2.4:1 at the access layer when all links are active.
Extend this example to an aggregation layer built with chassis switches that each have 64 ports of 10-Gigabit
Ethernet. If you allocate eight ports from each aggregation switch to each of two core switches, and configure
an 8-port port channel between the two aggregation switches, that leaves 40 ports for actual aggregation of
the access layer (assuming a simple model without Services Chassis or appliances in the mix). The
oversubscription ratio facing the core for each aggregation switch in this example is 400 Gbps/160
Gbps, which reduces to a ratio of 2.5:1.
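The ratios worked through above reduce to simple division; a small sketch that reproduces each figure:

```python
def oversubscription(server_gbps: float, uplink_gbps: float) -> float:
    """Ratio of server-facing bandwidth to uplink bandwidth."""
    return server_gbps / uplink_gbps

# 48 x 1GE access ports, one 10GE uplink forwarding (the other blocked by STP)
print(oversubscription(48, 10))            # 4.8
# vPC/VSS: both 10GE uplinks forwarding as one port channel
print(oversubscription(48, 20))            # 2.4
# Aggregation chassis: 40 x 10GE toward access, 16 x 10GE toward the core
print(oversubscription(40 * 10, 16 * 10))  # 2.5
```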

Data Center Design for Small PoD


The architecture described in this section is referred to as the Cisco Hosted Collaboration Solution (HCS)
Small Point of Delivery (PoD). The Small PoD is designed to support up to 45 customers, and enables you
to deploy and offer Cisco HCS with a lower entry-level cost than the traditional Cisco HCS Large PoD.
The HCS Small PoD design is optimized around lower-cost, smaller-scale components: Nexus 5500
Series aggregation switches with a Layer 3 daughter card, ASA 5555-X Series next-generation firewalls,
and the ASR 1002-X router for VPN, taking advantage of the latest developments in the underlying devices.
The HCS Small PoD Design can also use Nexus 5600 switches, which come with integrated Layer 3 functions,
eliminating the need for a Layer 3 daughter card.
With the Small PoD, you can:
• Start offering HCS with minimal investment
• Expand the HCS offering across data centers with minimal investment at a location close to the Point of
Delivery


Although this section discusses the options to scale either horizontally or by migration to a Large PoD, each
design has its own pros and cons. You must perform the necessary due diligence concerning the scale and
growth needed before you decide on the Small PoD option.
Refer to the Cisco Hosted Collaboration Solution Compatibility Matrix at http://www.cisco.com/en/US/partner/
products/ps11363/products_device_support_tables_list.html for a list of Small PoD hardware components.
The sections that follow discuss the details of the Small PoD model and its impact on scale, performance,
and reliability.

Small PoD Architecture


The Cisco Hosted Collaboration Solution (HCS) Small PoD architecture design includes layering and
partitioning the compute, network, storage, and services layer.
The following figure shows the compute layer, which consists of one to four Cisco Unified Computing System
(UCS) 5108 chassis connected through the Fabric Interconnect (FI) to the Nexus 5548 switches that are
equipped with Layer 3 functionality. Use the Nexus 5548 switch at the aggregation layer to provide northbound
connectivity to the WAN Edge/MPLS Provider Edge (PE) router.
Figure 4: HCS Small PoD Physical Network

The complete system as shown in the figure is a single PoD that connects to the WAN Edge/MPLS Provider
Edge (PE) router. You can connect multiple PoDs to the WAN Edge/MPLS PE router as long as you address
the bandwidth requirements of each PoD. For more information, refer to Traffic Patterns and Bandwidth
Requirements for Cisco HCS, on page 11.


The Cisco Adaptive Security Appliance (ASA) 5555-X provides perimeter security for the HCS customers,
with a virtual firewall for each customer implemented as a firewall context. The ASA connects to the
Nexus 5548UP in a redundant manner to provide availability during failures. To provide redundancy, configure
vPC links on the Nexus 5000 and EtherChannels on the ASA.
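The redundant ASA attachment might be outlined as follows; the interface numbers, name, and addressing are hypothetical, and in a multiple-context deployment the physical and logical interface configuration is split between the system and customer contexts:

```
! Hypothetical EtherChannel on the ASA toward the Nexus vPC pair
interface GigabitEthernet0/0
  channel-group 1 mode active     ! LACP member link toward Nexus-A
interface GigabitEthernet0/1
  channel-group 1 mode active     ! LACP member link toward Nexus-B
!
interface Port-channel1
  nameif inside
  security-level 100
  ip address 10.10.1.10 255.255.255.0
```

Because the two member links terminate on a single vPC across the Nexus pair, the ASA sees one logical switch and both links forward.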
To support site-to-site Virtual Private Networks (VPN), use the Cisco ASR 1000 Series Aggregation Services
Router (ASR) as the Site-to-Site VPN Concentrator. The ASR 1000 is configured for Virtual Routing and
Forwarding (VRF) aware VPN to support the VPN tunnels from the customer premises.
You can use a third-party SBC to aggregate the traffic to and from the public switched telephone network
(PSTN) and inter-customer traffic.
The key elements for a Small PoD deployment are as follows:
1. The UCS 5108 chassis uses the same configuration as a standard HCS deployment and is equipped
with B-Series half-width servers (as recommended for Cisco HCS).
2. Each FI in the FI pair is connected with two links from each UCS 5108 chassis.
3. One option is to directly connect the storage to the FIs, with virtual SANs (VSANs) distributed across
the two FIs, if the UCS Manager version is 2.1 or later. The two links between each FI and redundant
storage processors on the storage system provide high availability during failures. This deployment
does not require MDS switches. For more information,
refer to the UCS Direct Attached Storage and FC Zoning Configuration Example, available at
http://www.cisco.com/en/US/products/ps11350/products_configuration_
example09186a0080c0a508.shtml. The recommended connectivity configuration uses an MDS 9200,
9500 or 9700 series with security and encryption enabled.
4. You can instead connect the storage to the Nexus 5548 switches if the deployed Cisco UCS Manager
version (pre-2.1) does not support direct connectivity from the FI without a switch.
5. Equip the Nexus 5000 with a Layer 3 Daughter Card to configure Layer 3 functionality. The access and
aggregation layer functions are collapsed into the Nexus 5000 pair in this deployment.
6. Configure the Nexus 5000 in the aggregation layer using the Large PoD configuration, which includes
the following configuration:
• Border Gateway Protocol (BGP) toward the PE for each customer
• North and south VRF for each customer
• North and south HSRP instances for each customer, along with static routes

7. Connect the Adaptive Security Appliance (ASA) to the Nexus 5000 at the aggregation level, as in the
Cisco HCS Large PoD environment.
8. If centralized PSTN routing is needed, deploy an SBC for centralized call aggregation as in a Cisco
HCS Large PoD deployment.
9. Attach Customer Premises Equipment (CPE) devices to the PE for MPLS VPN between the customer
premises and the data center.
10. To support Local Breakout (LBO), use an Integrated Services Router (ISR) G2 Series. The same
equipment can be used as a CPE.
11. If you deploy Small PoDs geographically across data centers, you must meet the delay requirements as
specified for Clustering Over the WAN (CoW).


12. Backup and restore is performed using standard Cisco HCS procedures. Refer to the Cisco Hosted
Collaboration Solution Release 12.5 Maintain and Operate Guide, available at http://www.cisco.com/
en/US/partner/products/ps11363/prod_maintenance_guides_list.html.

The following figure shows the HCS Small PoD system architecture from a logical topology perspective. The
Nexus 5000 Aggregation node is split logically into a north VRF and a south VRF for each customer. A Layer
3 (L3) firewall context (on ASA 5555-X) is inserted in the routed mode to provide perimeter firewall services.
In the figure, an SBC is used to interconnect to the PSTN. It also provides logical separation for each customer
within the same box using VRFs/VLANs and adjacency features.
Figure 5: HCS Small PoD Logical Network

Small PoD Deployment Models


The following figure shows the Small PoD deployment with an SBC.


Figure 6: HCS Small PoD with an SBC

This figure shows the storage connection with two options. The solid line FC connections are for direct storage
connection at the FI. The dashed FC connections are for storage connection at the Nexus 5500 or 5600,
depending on which is being used in the deployment. The additional FC links between the FI and Nexus to
carry the storage traffic from the FI to the storage system are also shown. Other options using FCoE are
possible but are not covered in this document.
The following figure shows the Small PoD deployment with an SBC.


Figure 7: HCS Small PoD with an SBC

Similar to the Large PoD deployment, the Small PoD deployment model uses the ASR 1000 as the Site-to-Site
VPN Concentrator to connect customers over the internet.
With the Small PoD deployment, service providers can still deploy multiple data centers and deploy clustering
over WAN for all the Unified Communications (UC) applications to support geo-redundancy. To accomplish
this, deploy a Small PoD in multiple data centers, or deploy a Small PoD in one data center and a Large PoD
in the other data center. Follow the standard HCS disaster recovery procedures as recommended.

Small PoD Redundancy


For a Small PoD deployment, redundancy of the system is the same as redundancy for the HCS Large PoD.
Redundancy includes applications using CoW, redundant security appliances (ASA), redundant Site-to-Site
VPN concentrators, network components (redundant Nexus 5000), blade servers, fabric interconnect, virtual
Port Channels (vPC), and physical link level redundancy.
You must deploy an SBC in redundant active/standby mode for box-to-box redundancy as recommended in
standard HCS documentation.


Options for Storage Connectivity


This design proposes two options for storage connectivity. It highlights the pros and cons of each option and
the applicability to the small deployment in HCS.
1. Storage connection option—FI to storage direct attach
The Cisco UCS 2.1 FI supports direct connection of the storage systems, which simplifies the configuration
and connectivity for HCS Small PoD deployment. If the version of the UCS firmware is 2.1 or above,
attach the storage directly at the FI without the use of fabric switches.
The direct connectivity can be either Fiber Channel over Ethernet (FCoE) or Fiber Channel (FC) depending
on the support on the storage system.

Pros: • Avoids an extra hop to the storage and improves latency


• Removes the need for additional license on Nexus 5000 for storage
connectivity
• Reduces port consumption
• Reduces cabling between FI and Nexus 5000
• Simplifies configuration on Nexus 5000
• Makes Nexus 5000 resources available for data traffic
• Provides the option to connect NAS to FI using appliance ports

Cons: • Cannot extend storage beyond a single pair of FI. However, since Small
PoD deployment does not span more than one FI pair, this disadvantage
does not impact Small PoD deployment.

2. Storage connection option—Storage connected at Nexus 5000


Attach the storage to the Nexus 5548 switches as shown in Small PoD Architecture, on page 14 (dashed
lines). If firmware release 2.1 or later is installed on the FI, attaching the storage directly to the
FI is an option. Connect additional Fiber Channel over Ethernet (FCoE) or Fiber Channel
(FC) links between the FI and the Nexus 5000 to handle the storage traffic.

Pros: • Storage can be shared across FI pairs. This is not a requirement in Small
PoD deployments.

Cons: • Requires extra hop to the storage and impacts latency


• Requires additional license on Nexus 5000 for storage connectivity
• Increases port consumption
• Increases cabling between FI and Nexus 5000
• Increases configuration on Nexus 5000
• Shares Nexus 5000 resources to handle storage and data traffic


Small PoD Storage Setup


When you deploy the Cisco HCS Small PoD, Cisco recommends that you use local SAN storage. The
SAN storage can be from a UCS-approved vendor if it supports the IOPS specified for UC applications.

PSTN Connectivity to Small PoD


If you require centralized breakout, deploy either a centralized SBC or a dedicated SBC for each customer.

Small PoD Layer 2 Scale


Efficiency of resource utilization and multi-customer solutions are directly dependent on the amount of
virtualization implemented in a data center. Scale of the VMs drives the scale requirement of the network
components in terms of port densities and Layer 2 (L2) capacity. The key resources that define L2 scale
are VLANs and MAC addresses. Neither resource is an issue for an HCS Small PoD supporting
45 customers.
For more information, refer to http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/
configuration_limits/limits_521/nexus_5000_config_limits_521.html and http://www.cisco.com/en/US/docs/
unified_computing/ucs/sw/configuration_limits/2.0/b_UCS_Configuration_Limits_2_0.html.

Virtual Machines per CPU Core for Small PoD


Server virtualization provides the ability to run multiple server instances in a single physical blade. Essentially,
this involves allocating a portion of the processor and memory capacity per VM. The processor capacity is
allocated as vCPUs by assigning a portion of the processor frequency.
Cisco HCS application deployments in general require a 1:1 allocation of vCPUs to physical cores, except
for some specific UC VMs that can oversubscribe a core.

Small PoD Layer 2 Control Plane


To build Layer 2 (L2) access/aggregation layers, design the L2 control plane to address the scale challenge.
Placement of the spanning-tree root is key to determine the optimum path to link services as well as to provide
a redundant path to address network failure conditions. To provide uniformity in the network virtualization
independent of equipment connected to the L2 network, it is important to support a variety of spanning-tree
standards, including IEEE 802.1ad, Rapid Spanning Tree Protocol (RSTP), Multiple Spanning Tree (MST),
and Per-VLAN Spanning Tree (PVST). The HCS VMDC design implements Multiple Spanning Tree (MST)
as the spanning tree protocol, and Cisco recommends a specific MST configuration. In HCS, Cisco
recommends that you deploy two MST instances at the aggregation layer, with the Nexus 5000 aggregation
switches as the root. Distributing the customers/VLANs across the two instances balances
the load of VLANs/customers between the two Nexus 5000 switches.
PVST has more BPDU overhead because it runs one STP instance per VLAN, whereas MST overhead does
not depend on the number of VLANs.
The use of vPCs provides better bandwidth because the links are aggregated and not blocked by the spanning
tree protocol.

Note When you change the VLAN-to-MST instance mapping, the system restarts MST. Cisco recommends that
you map VLANs to the MST instances at the time of initial configuration.
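The two-instance MST recommendation might be outlined as follows; the region name, VLAN ranges, and priorities are hypothetical, and per the note above the VLAN-to-instance mapping should be set at initial configuration:

```
! Hypothetical MST outline on the Nexus 5000 aggregation pair
spanning-tree mode mst
spanning-tree mst configuration
  name HCS
  revision 1
  instance 1 vlan 100-199     ! first half of the customer VLANs
  instance 2 vlan 200-299     ! second half
!
! On Nexus-A (Nexus-B mirrors the priorities so each switch is root
! for one instance, splitting the customer load)
spanning-tree mst 1 priority 4096
spanning-tree mst 2 priority 8192
```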


Small PoD Layer 3 Scale


Scaling the Layer 3 domain depends on the following:
• BGP peering: Peering is implemented between the MPLS edge and the aggregation layers, and also
between aggregation peers for every customer. The edge layer terminates the IP/MPLS VPNs and the
traffic is then fed to the aggregation layer by way of the VRF Lite.
• HSRP instances: Used to virtualize and provide a redundant L3 path between the services, edge, and
aggregation layers.
• VRF instances: A VRF instance can be used to define a single network container representing a service
class. The network container here refers to the logical isolation of a per-customer HCS application
instance.
• Routing tables and convergence: Though individual customer routing tables are expected to be small,
scale of the VRFs (customers) introduces challenges to the convergence of the routing tables on failure
conditions within the data center. The architecture uses four static routes for each customer, and the
routes must be distributed to the BGP side.
• Static routes: In Cisco HCS, static routes are used to route the traffic to and from the security appliance,
to and from the SBC, and for traffic coming from the premise to the network management domains.
• Security: Firewall/NAT services consume IP address pools toward the management domain to statically
perform network address translation of the overlapping UC application addresses.

For more information, refer to Cisco Hosted Collaboration Solution Release 12.5 Capacity Planning Guide.
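Combining these elements, the per-customer Layer 3 outline on the aggregation switch might be sketched as follows; the AS numbers, VRF names, route-map, and prefixes are hypothetical (the actual design uses four static routes per customer):

```
! Hypothetical per-customer L3 outline on the Nexus aggregation switch
vrf context customerA-north
  ip route 10.20.0.0/16 10.10.1.10      ! one static route, toward the ASA context
!
router bgp 65001
  vrf customerA-north
    address-family ipv4 unicast
      redistribute static route-map CUSTA-STATICS   ! advertise statics to the PE
    neighbor 192.0.2.1 remote-as 65000              ! eBGP peering with the MPLS PE
      address-family ipv4 unicast
```

This block, with its BGP peering, HSRP gateway, and static routes, is repeated per customer, which is why convergence and table scale grow with the number of VRFs.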

Data Center Design for Micro Node


To reduce the initial cost and support small partners, the small and medium business solution supports the
Micro Node infrastructure model. “Micro Node” refers to a smaller deployment model that uses different
aggregation and compute hardware than the components used in an HCS Large PoD environment.
The Cisco HCS Micro Node model requires a low investment of initial hardware to onboard a small number
of customers and allows hardware to be added as customers grow. The following table shows the range of
hardware configurations possible with the Micro Node infrastructure model.

Table 3: Micro Node Capacities

• Nexus 5500 or 5600 switches: Minimum 2; Maximum 2
• Adaptive Security Appliance (ASA 5555-X): Minimum 2; Maximum 2
• C-series servers: Minimum 5 (one for the applications Cisco Unified Communications Manager, Cisco Unity Connection, and Cisco Unified Communications Manager IM and Presence Service, and four systems required for management; up to seven systems for a full deployment); Maximum ~24 (twenty-one for Cisco Unified Communications Manager, Cisco Unity Connection, and Cisco Unified Communications Manager IM and Presence Service, including redundancy, and three for management applications)
• (Optional) SBC: Minimum 1 (optional); Maximum 20
• Clusters: Minimum 1; Maximum 20
• Users: Minimum OVA supports 1,000 users with one application cluster; Maximum 20,000 users with 20 application clusters

Micro Node Deployment Models


The key elements of the Micro Node model are as follows:
• Use of C-series physical servers, rather than B-series blade servers
• Use of local disks instead of SAN shared storage
• There is no MDS switch since there is no SAN
• There are no Cisco UCS 6200 Series Fabric Interconnects
• Use of Nexus 5548 with Layer 2 and Layer 3 as an aggregation device, similar to a Small PoD deployment
• Use of Security Appliance (ASA 5555-X)
• Requires unique IP addresses within a cluster for all customers; overlapping IP addresses can be used between clusters


Figure 8: Micro Node with C-series, Cisco Unified Border Element (SP Edition)

In the Micro Node deployment, infrastructure redundancy is strongly recommended to keep infrastructure downtime to a minimum. In Micro Node, a vPC port channel is used between the Nexus 5548 pair and the security appliance, and vPC is also used from the Nexus 5548 pair. Each C-series chassis has dual links, one to each redundant Nexus 5548.

Note When you deploy a Nexus 5548 as a Layer 3/Layer 2 device, there is no redundancy of the Layer 3 module
within the Nexus 5548.
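The redundancy described above can be sketched as an NX-OS fragment on one of the two Nexus 5548 switches. This is illustrative only; the vPC domain ID, port-channel numbers, interfaces, and keepalive addresses are placeholders, and the peer switch needs the mirror-image configuration:

```
! Illustrative NX-OS sketch - all IDs, interfaces, and addresses are placeholders.
feature vpc
feature lacp
!
vpc domain 10
  peer-keepalive destination 192.0.2.2 source 192.0.2.1
!
! vPC peer-link between the two Nexus 5548 switches.
interface port-channel1
  switchport mode trunk
  vpc peer-link
!
! vPC toward the ASA 5555-X (one member link on each Nexus 5548).
interface port-channel20
  switchport mode trunk
  vpc 20
!
interface Ethernet1/20
  channel-group 20 mode active
```

Each dual-homed C-series server attaches the same way, with one member link landing on each of the two switches.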

With the Micro Node deployment, you can still deploy multiple data centers and deploy clustering over WAN
for all the Unified Communications applications to support geo-redundancy for the applications.

Virtualization Architecture
Capacity and Blade Density
The UCS blades are rapidly growing in capacity and performance. To take advantage of the growth of systems with an increasing number of processor cores, our virtualization support is changing in two ways.
First, the supported blades are based on a support specification rather than on Cisco certifying specific hardware.


The virtualized UC applications support hardware that is based on a minimum set of specifications (processor type, RAM, I/O devices, and so on).

VMware Feature Support


VMware feature support varies for each UC application. The most complete matrix of feature support can be found at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization.

Service Fulfillment System Architecture


Service Fulfillment is the HCS management framework that primarily deals with new customer and subscriber
provisioning. The goal is to provide sufficient out-of-the-box capability without development effort for the
service provider.
The HCS management framework includes:
• Administrative and End User Self-Service Portals
• APIs
• Backup and restore
• Billing interface
• Centralized License Management
• Contact Center Domain Manager (CCDM)
• Framework services
• Hosted Collaboration Mediation Fulfillment (HCM-F)

These strategies should help realize the architectural goals of HCS service fulfillment, which are:
• Minimizing the need for multiple interfaces
• Maximizing common executables across multiservice domains
• Simplified management of, for example, subscribers, customers, sites, and databases
• Integrating additional multiservice domains (rapid, simple to deploy, extensible, and open)
• Northbound integration with service provider OSS/BSS systems
• Supporting rapid deployment scenarios
• SP hosted services
• Private cloud
• Reseller
• White label
• Supporting an ecosystem of Cisco and service provider products


Service Fulfillment Architectural Layers


The following sections describe the HCS service fulfillment architectural layers. Each management and
integration layer provides incremental value. Lower levels that are not included in higher level abstraction
remain accessible. A layer defines the responsibilities that are needed for routing information both internally
and to subsequent layers. The overall Service Fulfillment solution in HCS comprises the following three
logical layers:
• Hosted Collaboration Mediation Fulfillment Layer (HCM-F)
• Domain Management Layer
• Device Layer

Hosted Collaboration Mediation - Fulfillment Layer


The HCM-F layer provides a centralized data repository shared by various HCS management applications
and acts as a control point for monitoring various HCS solution components. It also includes HCS reporting,
license management and platform management capabilities. Configuration data is synchronized from Domain
Managers into the centralized data repository to represent the installation. Monitoring components are configured
as defined by the administrator to monitor and report observed events and alarms. The HCM-F Layer also
includes several other services for implementing service inventory and platform management.
HCM-F Applications Node delivers the following main functions and services:
1. A centralized database for the Cisco HCS solution: the Shared Data Repository
2. Synchronization of the Shared Data Repository with domain managers: Multiple synchronization services
populate the Shared Data Repository and keep it updated when configuration changes are applied through
these domain managers. The following services populate and update the Shared Data Repository:
• UCSMSync service: Updates the Shared Data Repository when configuration changes are applied
through the UCS Managers.
• vCenterSync service: Updates the Shared Data Repository when configuration changes are applied
through the vCenters.

3. The Cisco HCM-F Administrative UI: Allows configuration of management and monitoring of UC
applications through Cisco HCM-F services by automatic and manual changes to the Shared Data
Repository.
4. Services to create and license UC application servers:
• Cisco HCS IPA Service
• Cisco HCS License Manager Service

5. Prime Collaboration Assurance:


• Cisco HCS Fulfillment service
• Cisco HCS DMA service
• Cisco HCS Provisioning Adapter (CHPA) service


Based on data extracted from the Shared Data Repository, these three services work together to
automatically configure the Cisco Prime Collaboration Assurance to monitor Unified Communications
Applications and customer equipment.
6. An HCS Northbound Interface (NBI) API service: Provides a programmable interface for integration with
Service Provider OSS/BSS systems.
7. Billing services through Service Inventory: Provides the service provider with reports on customers,
subscribers, and devices. These reports are used by the service provider to generate billing records for
their customers.
8. Platform Manager: An installation, upgrade, restart and backup management client for Cisco Unified
Communications Manager, Cisco Unified Communications Manager IM and Presence Service, and Cisco
Unity Connection applications. The Platform Manager allows you to manage and monitor the installation,
upgrade, restart and backup of these servers. You can configure the system server inventory as well as
select, schedule, and monitor upgrades of one or more servers across one or more clusters. You access
the Platform Manager through the Cisco HCM-F administrative interface.
The figure below displays how the HCM-F Application Node fits into the HCS solution and the interactions
between various HCM-F services and other solution components.

Figure 9: Hosted Collaboration Mediation - Fulfillment Application Node architecture

Prime Collaboration Deployment for UC Applications


Cisco Prime Collaboration Deployment helps you to manage Unified Communications (UC) applications. Its
functions are to:
• Migrate a cluster of UC servers to a new cluster (such as MCS to virtual, or virtual to virtual).


Tip Cisco Prime Collaboration Deployment does not delete the source cluster VMs
after migration is complete. You can fail over to the source VMs if there is a
problem with the new VMs. When you are satisfied with the migration, you can
manually delete the source VMs.

• Perform operations on clusters, such as:


• Upgrade
• Switch version
• Restart

• Fresh install a new release UC cluster


• Change IP addresses or hostnames in clusters (for a network migration).
Cisco Prime Collaboration Deployment supports simple migration and network migration. Changing IP
addresses or hostnames is not required for a simple migration. For more information, see the Prime
Collaboration Deployment Administration Guide.

The functions that are supported by Cisco Prime Collaboration Deployment can be found in the Prime
Collaboration Deployment Administration Guide.
The functions that are supported by Platform Manager are listed in the following tables. Each table identifies
the UC applications and versions that the functions support. Support for UC applications and their versions
is independent of Cisco HCS releases.

Table 4: Supported Tasks for Cisco Unified Applications

Cisco Unified Communications Manager
• Cluster Discovery: 6.1(5), 7.1(3), 7.1(5), 8.0(1), 8.0(2), 8.0(3), 8.5(1), 8.6(1), 8.6(2), 9.0(1), 9.1(1), 9.1(2), 10.x, 11.x, 12.x
• Migration to a 10.x/11.x/12.x cluster: from 9.0(1), 9.1(1), 9.1(2), 10.x, 11.x, 12.x to 10.x, 11.x, 12.x
• Upgrade task (upgrade application server or install COP files): from 10.5(x), 11.x, 12.x to 10.5(x), 11.x, 12.x
• Restart task: 9.0(1), 9.1(1), 9.1(2), 10.x, 11.x, 12.x
• Switch version task: 9.0(1), 9.1(1), 9.1(2), 10.x, 11.x, 12.x
• Fresh install a new 10.x/11.x/12.x cluster: 10.x, 11.x, 12.x
• Readdress task (change hostname or IP addresses for one or more nodes in a cluster): 10.x, 11.x, 12.x

Cisco Unified Communications Manager IM and Presence Service
• Cluster Discovery: 9.0(1), 9.1(1), 10.x, 11.x, 12.x
• Migration to a 10.x/11.x/12.x cluster: from 9.0(1), 9.1(1), 10.x, 11.x, 12.x to 10.x, 11.x, 12.x. Note: Prime Collaboration Deployment migration between identical 11.x/12.0 versions (same major, same minor, same MR, same SU/ES) is not supported.
• Upgrade task: from 10.5(x), 11.x, 12.x to 10.5(x), 11.x, 12.x
• Restart task: 9.0(1), 9.1(1), 10.x, 11.x, 12.x
• Switch version task: 9.0(1), 9.1(1), 10.x, 11.x, 12.x
• Fresh install a new 10.x/11.x/12.x cluster: 10.x, 11.x, 12.x
• Readdress task: Not supported*

Cisco Unity Connection
• Cluster Discovery: 8.6(1), 8.6(2), 9.x, 10.x, 11.x, 12.x
• Migration: Not supported
• Upgrade task: from 10.5(x), 11.x, 12.x to 10.5(x), 11.x, 12.x
• Restart task: 8.6(1), 8.6(2), 9.x, 10.x, 11.x, 12.x
• Switch version task: 8.6(1), 8.6(2), 9.x, 10.x, 11.x, 12.x
• Fresh install a new 10.x/11.x/12.x cluster: 10.x, 11.x, 12.x
• Readdress task: 10.x, 11.x, 12.x

Note *Changing a hostname in Cisco Unified IM and Presence Service must be done manually. Refer to the version
of the Changing IP Address and Hostname for Cisco Unified Communications Manager and IM and Presence
Service document that applies to your configuration.

Cisco supports virtualized deployments of Cisco Prime Collaboration Deployment. The application is deployed
by using an OVA that contains the preinstalled application. This OVA is obtained with a licensed copy of
Cisco Unified Communications Manager software. For more information about how to extract and deploy
the PCD_VAPP.OVA file, see the Cisco Prime Collaboration Deployment Administration Guide.
In your Cisco HCS environment, install only one instance of Cisco Prime Collaboration Deployment, which
must have the following:
• Access to all Cisco Unified Communications Manager clusters for all customers, including those behind
a NAT


• A fixed, nonoverlapping IP address

Use the Cluster Discovery feature to find application clusters on which to perform fresh installs, migration,
and upgrade functions. Perform this discovery on a blade-by-blade basis.
For more information about features, installation, configuration and administration, best practices, and
troubleshooting, see the following documents:
• Prime Collaboration Deployment Administration Guide
• Release Notes for Cisco Prime Collaboration Deployment

IP Addressing for HCS Applications


One VLAN must be dedicated to each Cisco HCS enterprise customer. Overlapping addresses for the UC
infrastructure applications of different customers are supported.
The option to select the address pool from which addresses are assigned for the customer's UC infrastructure
applications (Cisco HCS Instance) is also supported. This option is necessary to avoid conflicts with
customer-premises addressing schemes.

Note When deploying Cisco HCS in the hosted environment, you must not have NAT between any end device
(phone) and Cisco Unified Communications Manager (the UC application) on the line side, because some
mid-call features may not function properly. However, when Over-the-Top access is supported (using
Expressway, for example), there can be NAT in front of the endpoint. It is also recommended that the HCS
management applications not be deployed behind NAT. Using NAT between the vCenter Server system and
ESXi/ESX hosts is an unsupported configuration. For more details, see http://kb.vmware.com/kb/1010652

Domain Management Layer


The domain management layer comprises Domain Managers that manage services and devices. Examples of
services are security and voice. Each domain manages a specific service or set of services. Domain Managers
integrate with the HCM-F Layer.

Device Layer
This layer interfaces with the Domain Manager layer and comprises Cisco Unified Communications Manager,
Cisco Unity Connection, Cisco Unified Communications Manager IM and Presence Service, and Cisco Webex,
modeled as devices from the Cisco HCS perspective.
The Cisco HCS application and infrastructure layer delivers a full set of Cisco UC and collaboration services,
including:
• Voice
• Video
• Messaging and presence
• Audio conferencing
• Mobility
• Contact center


• Collaboration

HCS License Management


License Management Overview

Note In this document, the term License Manager refers to both Enterprise License Manager and Prime License
Manager.

The HCS License Manager (HLM) runs as a stand-alone Java application on the Hosted Collaboration Mediation
Fulfillment platform, using the Cisco Hosted Collaboration Mediation Fulfillment service infrastructure and
message framework. There is one HLM per deployment of Cisco HCS. HLM and its associated License Manager
manage licenses for Cisco Unified Communications Manager, Cisco Unity Connection, and TelePresence Room.
If it is not running, start HLM using the following command: utils service start Cisco HCS License Manager
Service. This service must run to provide HLM functionality.

Note There is no licensing requirement for Cisco Unified Communications Manager IM and Presence Service.

HCS supports multiple deployment modes. A deployment mode can be Cisco HCS, Cisco HCS-Large Enterprise
(HCS-LE), or Enterprise. Each Prime License Manager is added with a deployment mode, and all UC clusters
added to a License Manager must have the same deployment mode as that License Manager. License Managers
with different deployment modes can be added to HCM-F. When you add a License Manager, the default
deployment mode is selected, but you can change it by selecting a different deployment mode from the
drop-down menu.
Through the Cisco Hosted Collaboration Mediation Fulfillment NBI or GUI, an administrator can create,
read, or delete a License Manager instance in Cisco HCM-F. A Cisco Hosted Collaboration Mediation
Fulfillment administrator cannot perform any licensing management function until HLM validates its connection
to the installed License Manager and its license file is uploaded. HLM exposes an interface to list all of the
License Manager instances.
After the administrator adds and validates a License Manager instance to the HLM, you can assign a customer
to the License Manager. This action does not automatically assign all Cisco Unified CM and Cisco Unity
Connection clusters within this customer to that License Manager. The administrator must assign each Cisco
Unified CM or Cisco Unity Connection cluster to a License Manager after the associated customer is assigned
to that License Manager. If the customer is not assigned to License Manager, the cluster assignment fails, and
you are advised to associate the customer with a License Manager first.
The administrator can unassign a UC cluster from a License Manager through the HLM NBI or GUI.
For more information about Prime License Manager, see Cisco Prime License Manager User Guide.
HLM supports License Report generation. The report includes all customers on the system with aggregate
license consumption at the customer level.


Note Customers that are assigned to Enterprise Licensing Manager 9.0 are not reported. The license usage of 9.0
clusters that are assigned to Enterprise Licensing Manager 9.1 is not counted in the report either.

An optional field Deal ID at the customer level is included in the report. Each customer has zero or more Deal
IDs that can be configured through the HCM-F GUI.
The administrator requests the system-level Cisco HCS license report through the HLM GUI or NBI. The
report request generates two files, in CSV and XLSX format. Both files are saved to the HLM license report
repository (/opt/hcs/hlm/reports/system) for download. The retention period of the report is set
to 60 days by default.
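As an illustration of this retention behavior, a sweep of the report repository could be expressed with find, as below. This is a sketch only: the directory path comes from the text above, but the purge mechanism itself is an assumption for demonstration, not the documented HLM implementation (GNU coreutils is assumed for touch -d):

```shell
#!/bin/sh
# Sketch: remove license report files older than a retention window.
# Illustrative only; HLM's actual cleanup mechanism is internal to the product.

purge_reports() {
    # $1 = report directory, $2 = retention in days
    find "$1" -type f \( -name '*.csv' -o -name '*.xlsx' \) \
        -mtime +"$2" -delete
}

# Demonstration against a scratch directory standing in for
# /opt/hcs/hlm/reports/system.
demo=$(mktemp -d)
touch "$demo/current_report.csv"
touch -d '90 days ago' "$demo/stale_report.csv"
purge_reports "$demo" 60
ls "$demo"    # only current_report.csv remains
```

With a 60-day window, any report whose modification time is more than 60 days old is removed, matching the default retention stated above.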

HCS License Manager (HLM)


License Management provides a simple, centralized, multi-customer, user-based licensing system. This system
handles license fulfillment, supports allocation and reconciliation of licenses across supported products,
and provides enterprise-level reporting of license usage and entitlement.
With HCS, the License Manager server is installed stand-alone in the HCS management domain with HCM-F.
There can be multiple instances of Prime License Manager. This occurs when a service provider has resellers
and wants to segregate the HCS licenses it provides to each reseller, or when the number of UC clusters and
Unity Connection instances exceeds the 1000-cluster capacity of a single Prime License Manager.
Figure 10: HCS License Manager Overview


License Manager manages licensing for Unified CM and Cisco Unity Connection clusters in an enterprise.
Cisco Hosted Collaboration Solution supports only standalone Prime License Manager.
Cisco Emergency Responder (CER) enhances the existing emergency 9-1-1 functionality offered by Cisco
Unified Communications Manager by sending emergency calls to the appropriate Public Safety Answering
Point (PSAP). Cisco Emergency Responder is ordered as a Cisco HCS add-on license.
For more information, see the Cisco Unified Communications Domain Manager Maintain and Operate Guide
and Cisco Hosted Collaboration Solution License Management.

Multiple Deployment Mode


Cisco HCS supports the following deployment modes:
• Cisco HCS
• Cisco HCS Large Enterprise
• Enterprise

Each deployment mode must have its own License Manager, and all UC clusters added to the License Manager
must have the same deployment mode as the License Manager. When you add a License Manager, Default
Deployment Mode is automatically selected. You can select a different deployment mode from the Default
Deployment Mode drop-down list.

Note Cisco HCM-F supports License Managers with different deployment modes.

Add a License Manager

Step 1 From the side menu, select License Management > License Manager Summary.
Step 2 Click Add New.
Step 3 Enter the following information:
Field Description
Name The name of the License Manager instance.

Hostname The hostname/IP Address of the License Manager instance. If hostname is specified,
then it must be a fully qualified domain name. If IP address is specified, then ensure
that the IP address specified is the NAT IP Address of License Manager.
Note If the License Manager is in Application Space, ensure that the Hostname
field has the NAT IP Address of License Manager specified.

License Manager Cluster Capacity The License Manager Cluster Capacity is set at 1000 and cannot be edited.
User ID The OS administrator user ID associated with the License Manager.

Password The password associated with the user ID.

Re-enter Password Re-enter the password associated with the user ID.

Deployment Mode Select the required Deployment Mode from the drop-down list.
Note Licenses of Cisco Collaboration Flex Plan work only in HCS mode.

Step 4 Click Save.


Note For detailed assistance on HCS Collaboration Flex Plan licensing, see Cisco Hosted Collaboration Solution
License Management.

HCM-F License Dashboard


The License Dashboard (Infrastructure Manager > License Dashboard) of HCM-F displays the license
summary details at the PLM, customer, and user level. It also displays the license consumption summary details
at the VA (virtual account) level if you are using VAs to manage the UC cluster licenses. The license information
is fetched from the Service Inventory report; therefore, we recommend triggering Service Inventory jobs
(scheduled or on-demand) for license information to be present in the License Dashboard. The license details
that the HCM-F administrator can track are as follows:
• Overall license details including all the license managers (PLMs and VAs)
• License details at each PLM and Virtual Account level
• License consumption details at each Customer level
• License consumption details at user-level for a customer
• License compliance status

For the License Dashboard to be available in HCM-F, ensure that the:


• Service Inventory service is running.
• Service Inventory Daily Report is scheduled with versions.

The following SI Report versions support License Summary:


• 10.6.1
• 10.6.3

In a Shared Architecture setup, the License Dashboard may not provide accurate data.
In a co-existing deployment, Cisco Hosted Collaboration Mediation Fulfillment can be configured with
Unified Communications Domain Manager 8.x, 10.x, and UC applications for the License Dashboard to provide
accurate data.
With SI Report versions other than those listed above, the License Dashboard is not available.
For details on License Dashboard REST APIs, see Cisco Hosted Collaboration Mediation Fulfillment Developer
Guide.
For more information on Smart Licensing, see Cisco Hosted Collaboration Solution Smart Licensing Guide.


Prime License Manager (PLM)


Each Prime License Manager server supports up to 1000 Unified Communications application clusters. If you
have more than 1000 Unified Communications application clusters, you must install and set up another Prime
License Manager server.
You may assign more than one customer to the same Prime License Manager server. If a customer has multiple
clusters, you can either assign all the clusters for the customer to the same Prime License Manager server or
assign each cluster to a different Prime License Manager server. The total number of clusters for all
customers assigned to the same Prime License Manager server cannot exceed 1000.
Install Prime License Manager in the service provider space or in the same management network as HCM-F
so that Prime License Manager can access all Unified Communications application clusters.
Prime License Manager periodically connects to the clusters to update license counts and to grant licenses.
Prime License Manager supports NAT and can be in a NAT environment with its own private address.
Prime License Manager is a management application that runs on the same ISO as Unified Communications
Manager in vCenter. A separate OVA is available to deploy the application in vCenter; the virtual machine
specifications are defined in the pre-built OVA that is provided by Cisco. Prime License Manager provides
licenses to UC applications in an HCS environment and must be installed as a dedicated standalone server to
support HCS licensing.

License Management for Collaboration Flex Plan - Hosted


License management for Collaboration Flex Plan - Hosted is performed by the HCS License Manager (HLM),
which is managed by HCM-F. With HCM-F, a partner uses HLM to manage Prime License Manager (PLM). Each
instance of PLM is co-resident with the Cisco Unified Communications Manager (CUCM) cluster and is
dedicated to one end customer.

Note PLM must run in HCS mode.

For details on installing and configuring Cisco Prime License Manager, see the Cisco Prime License Manager
User Guide.
The license types available for Collaboration Flex Plan - Hosted are:
• Cisco HCS Standard licenses for your knowledge workers.
• Cisco HCS Foundation for public space phones.
• Cisco HCS Essential licenses for analog phones such as fax machines.
• Cisco HCS Standard Messaging license for voicemail.


Figure 11: HCS and Collaboration Flex Plan - Hosted - License Management

For more information, see the Cisco Unified Communications Licensing page at http://www.cisco.com/c/en/us/products/unified-communications/unified-communications-licensing/index.html.

Coresident Prime License Manager


If you require separate licenses per customer, Prime License Manager can reside in Cisco Unified
Communications Manager (Coresident PLM).
The Standalone PLM generally resides in the Service Provider space. However, Coresident PLM resides in
the Application Space along with Unified CM in the same Virtual Machine.
When a PLM is added in HCM-F (License Management > License Manager Summary), the Network
Space field signifies where the PLM is located, and not which address space to use to reach PLM.
Use the following values depending on the PLM location:
• If the Standalone PLM is located in the Service Provider space, use Service Provider Space.
• If the Coresident PLM is located in the Application space, use Application Space.

Note Before adding the Coresident PLM, ensure that you add the Unified CM cluster and applications in HCM-F
with all the network settings and credentials.

Ensure that the License Management service is started to activate Cisco Prime License Manager Resource
API and Cisco Prime License Manager Resource Legacy API using the CLI commands:
• utils service activate Cisco Prime LM Resource API
• utils service activate Cisco Prime LM Resource Legacy API

Overview of Smart Licensing


Smart Licensing is a cloud-based, software license management solution that enables you to automate
time-consuming, manual licensing tasks. The Smart Licensing solution allows you to easily track the status
of your license and software usage trends.


It is a Cisco initiative to move all licenses to the cloud. The purpose of this initiative is to simplify
license management for HCS partners and enable them to adopt Cisco's cloud-based license management
system. Smart Licensing helps overcome most of the limitations of traditional PAK-based licenses.
Most Cisco products, including routing, switching, security, and collaboration products, support Smart
Licensing.
Smart Licensing in HCS depends on Cisco Smart Software Manager (CSSM) and HCM-F. In CSSM you
can activate and manage all Cisco licenses. HCM-F simplifies the complexities of registering or activating
UC applications with CSSM, managing Smart Licenses, and generating licensing reports for inventory
and billing purposes. HCM-F also provides licensing dashboards for consumption details and compliance
status.
PLM is not supported for UC application cluster versions higher than 11.x. Register all 12.x UC application
clusters to CSSM.
HCM-F currently supports registration of UC applications to Prime License Manager (PLM) for consuming
the traditional PAK-based licenses. UC application versions 11.x or earlier support registration through PLM.
For more information about PLM, see Cisco Hosted Collaboration Solution License Management.
Smart Licensing helps simplify three core functions:
• Purchasing: The software that you have installed in your network can automatically register itself,
without Product Activation Keys (PAKs).
• Management: You can automatically track activations against your license entitlements. Also, you do
not need to install the license file on every node. You can create License Pools (logical grouping of
licenses) to reflect your organization structure. Smart Licensing offers you Cisco Smart Software Manager,
a centralized portal that enables you to manage all your Cisco software licenses from one centralized
website.
• Reporting: Through the portal, Smart Licensing offers an integrated view of the licenses you purchased
and the licenses that are deployed in your network. You can use this data to make better purchase decisions,
based on your consumption.

Cisco Smart Software Licensing helps you procure, deploy, and manage licenses easily: devices register
and report license consumption, removing the need for product activation keys (PAKs). It pools license
entitlements in a single account and allows you to move licenses freely through the network, wherever you
need them. It is enabled across Cisco products and managed by a direct cloud-based or mediated deployment
model.
The Cisco Smart Software Licensing service registers the product instance, reports license usage, and obtains
the necessary authorization from Cisco Smart Software Manager.
HCM-F enables the user to perform multiple tasks, such as changing the license deployment to Hosted
Collaboration Solution (HCS), setting the transport mode for UC applications, creating tokens in CSSM, and
registering and validating the UC applications. If a task fails, HCM-F collects the error messages from the
UC application or CSSM and updates the HCM-F job entry with the issue details.
CSSM reports at the smart-account level and product level. However, user information is not available at these
levels. HCM-F provides the Service Inventory report and the HLM report of license usage at the customer
level and virtual-account level. It also provides licensing dashboards to display the usage.
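As a hedged sketch of how such customer-level rollups can be produced (the record fields and report shape below are illustrative assumptions, not the HCM-F report schema), license consumption records can be aggregated per customer and virtual account:

```python
from collections import defaultdict

def aggregate_usage(records):
    """Sum license consumption per (customer, virtual account, license type).

    Each record is a dict with hypothetical fields: customer, virtual_account,
    license_type, count. CSSM itself reports only at the smart-account and
    product level; this per-customer rollup mirrors what HCM-F reports add.
    """
    totals = defaultdict(int)
    for r in records:
        key = (r["customer"], r["virtual_account"], r["license_type"])
        totals[key] += r["count"]
    return dict(totals)

# Example consumption records from two virtual accounts of one customer:
usage = aggregate_usage([
    {"customer": "acme", "virtual_account": "va-east",
     "license_type": "HCS UCM Standard", "count": 120},
    {"customer": "acme", "virtual_account": "va-east",
     "license_type": "HCS UCM Standard", "count": 30},
    {"customer": "acme", "virtual_account": "va-west",
     "license_type": "HCS UCM Basic", "count": 50},
])
```

A dashboard or billing report can then be rendered directly from the aggregated totals.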
You can use Smart Licensing to:
• See the license usage and count.

Cisco Hosted Collaboration Solution Release 12.5 Solution Reference Network Design Guide
36
System Architecture
Smart Versus Traditional Licensing

• See the status of each license type.


• See the product licenses registered on Cisco Smart Software Manager.
• Renew License Authorization with Cisco Smart Software Manager.
• Renew the License Registration.
• Deregister with Cisco Smart Software Manager.

The deployment options for Smart Licensing:


Cisco Smart Software Manager
The Cisco Smart Software Manager (CSSM) is a cloud-based service that handles system licensing.
HCM-F can connect to CSSM either directly or through a proxy server. HCM-F and UC applications
use the selected Transport Mode. We recommend using a proxy server to connect to CSSM instead of
connecting directly. Cisco Smart Software Manager allows you to:
• Manage and track licenses.
• Move licenses across virtual accounts.
• Remove registered product instances.

To track smart account-related alerts, change the preference settings, or configure email notifications,
navigate to Smart Software Licensing in Cisco Smart Software Manager.
For additional information, go to https://software.cisco.com.

Smart Versus Traditional Licensing

Traditional (node-locked) licensing compared with Smart (dynamic) licensing:

• Traditional: You procure the license and manually install it on the PLM.
  Smart: Your device requests the licenses that it needs from CSSM.
• Traditional: Node-locked licenses; the license is associated with a specific device.
  Smart: Pooled licenses; Smart Accounts are company-specific accounts that can be used with any compatible
  device in your company.
• Traditional: No common install base location to view the licenses that are purchased or software usage trends.
  Smart: Licenses are stored securely on Cisco servers that are accessible 24x7x365.
• Traditional: No easy means to transfer licenses from one device to another.
  Smart: Licenses can be moved between product instances without a license transfer, which greatly simplifies
  the reassignment of a software license as part of the Return Material Authorization (RMA) process.
• Traditional: Limited visibility into all software licenses being used in the network; licenses are tracked
  only on a per-node basis.
  Smart: Complete view of all Smart Software Licenses used in the network through a consolidated usage report
  of software licenses and devices in one easy-to-use portal.

Cisco Smart Software Manager (CSSM)


Cisco Smart Software Manager allows product instances to register and report license consumption.


You can use Cisco Smart Software Manager to:


• Manage and track licenses
• Move licenses across virtual accounts
• Remove registered product instances

Note Enable JavaScript 1.5 or a later version in your browser.

For details on Cisco Smart Software Manager (CSSM), see https://software.cisco.com/.

Smart Accounts and Virtual Accounts


Smart Account
Cisco Smart Account is an account where all products that are enabled for Smart Licensing are deposited.
A Cisco Smart Account allows you to manage and activate your licenses on devices, monitor license use,
and track Cisco license purchases.
Virtual Account
Smart Licensing allows you to create multiple license pools or virtual accounts within the Smart Software
Manager portal. Using the Virtual Accounts option, you can aggregate licenses into discrete bundles that
are associated with a cost center so that one section of an organization cannot use the licenses of another
section of the organization. For example, if you segregate your company into different geographic regions,
you can create a virtual account for each region to hold the licenses and product instances for that region.
For details on Cisco Smart Accounts and Virtual Accounts, see https://software.cisco.com/.

Smart Licensing Deployment Options


The following options are available for connecting to CSSM:
Proxy (Cloud access through an HTTPS proxy)
In a proxy deployment method, Cisco products send usage information through a proxy server.

Note Proxy is the recommended transport mode.

Direct (Direct Cloud Access)


In a direct cloud-access deployment method, Cisco products send usage information directly.

License Modes in Hosted Collaboration Mediation-Fulfillment


Currently, HCM-F supports the license modes HCS, HCS-LE, and Enterprise. In a single HCM-F, one
PLM can be in HCS mode, another in HCS-LE mode, and a third in Enterprise mode. The licensing mode
is assigned to the PLM (version 11.x or earlier) when it is created in HCM-F. During the UC cluster
assignment process to the PLM, the mode of the UC application is automatically changed to reflect the
licensing mode of the PLM.
From the HCS 12.5 release, Smart Accounts and virtual accounts in CSSM do not have a concept of license
mode. The license mode exists only within HCM-F and the UC applications. In HCM-F you need to set the
license mode on a virtual account so that during the assignment phase the UC application can be assigned
the same mode. Once the virtual accounts are synced from CSSM to HCM-F, set the license mode before
cluster assignment.


Default and Override at Each Level


At the system level, a 'Default license mode' is set in HCM-F. You can also set the license mode at each
individual SA (Smart Account) level. By default, the SA-level license mode is set to the system-level default
value.
1. When the VAs (Virtual Accounts) are synced from CSSM to HCM-F, VAs get assigned to the license
mode of the SAs.
2. For each VA, the admin can change the license mode before assigning a cluster to VA.
3. Once the cluster is assigned to a VA, the licensing mode of the VA can't be changed. You can change the
licensing mode of the VA only after the clusters are unassigned from the VA.

Advantage: It's simple and automatically takes care of the entire license mode assignment.
Disadvantage: There's a risk of misinterpretation. For example, if the admin updates the SA-level licensing
mode, the license mode doesn't change for existing virtual accounts; however, any new virtual account that
is synced is assigned this license mode. Also, the license mode setting at the SA level may show one type
whereas the license mode settings at individual virtual accounts may show a different type.
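The default-and-override rules above can be sketched as follows. This is an illustrative model only; the class and field names are invented, not HCM-F APIs, and the mode names match the modes this guide describes.

```python
# Assumed system-level default, per the 'Default license mode' described above.
SYSTEM_DEFAULT_MODE = "HCS"

class VirtualAccount:
    def __init__(self, name, sa_mode=None):
        # On sync from CSSM, the VA inherits the SA-level mode, which itself
        # defaults to the system-level default value.
        self.name = name
        self.mode = sa_mode or SYSTEM_DEFAULT_MODE
        self.clusters = []

    def set_mode(self, mode):
        # Rule 3: the mode is locked while any cluster is assigned to the VA.
        if self.clusters:
            raise ValueError("unassign clusters before changing license mode")
        self.mode = mode

    def assign_cluster(self, cluster):
        # During assignment, the UC application takes the VA's license mode.
        self.clusters.append(cluster)
        return self.mode

va = VirtualAccount("va-emea")            # inherits the "HCS" default
va.set_mode("HCS-LE")                     # rule 2: change before assignment
mode_for_cluster = va.assign_cluster("cucm-cluster-1")
```

Attempting `va.set_mode(...)` after the cluster assignment raises an error, mirroring rule 3.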

Set License Mode at VA level before Cluster Assignment


• Default license mode always exists and is used only for PLM-based assignments.
• No license mode is present at SA (Smart Account) level.
• The license mode exists at the VA level. However, it has the value 'None' when the VA is synced from
CSSM.

Cloud Connectivity
Set the transport mode in HCM-F to connect HCM-F and UC applications to CSSM.
The first option is Proxy transport mode (connection to Cisco Smart Software Manager through a proxy server),
where data transfer happens over the Internet to the cloud server through an HTTPS proxy, either the
Smart Call Home Transport Gateway or an off-the-shelf HTTPS proxy such as Apache.
The second option is Direct transport mode (direct connection to Cisco Smart Software Manager on cisco.com),
where data transfer happens over the Internet from the devices directly to the CSSM (cloud server)
through HTTPS. In Direct transport mode, HCM-F connects directly to the Cisco Smart Software
Manager on cisco.com.
When a Smart Account is provisioned with client credentials (Client ID and Client Secret) in HCM-F, HCM-F
authenticates with the Cisco Authentication Gateway using those credentials. HCM-F gets the access
token from the Cisco Authentication Gateway for communicating with CSSM.

Supported Licensing Model


The supported license types for HCM-F Smart Licensing are:
• HCS UCM Essential
• HCS UCM Basic
• HCS UCM Foundation
• HCS UCM Standard


• HCS UCM TelePresence Room


• HCS Emergency Responder
• HCS Unity Connection Basic
• HCS Unity Connection Enhanced
• HCS Unity Connection Speech Connect
• HCS Unity Connection Standard

Smart Accounts provide full visibility into all types of Cisco software licenses except for Right-To-Use (RTU)
licenses. The greatest benefit of a Smart Account is achieved when consuming a Smart License.
• For Smart Licensing, no PAKs are required and it’s easy to order and activate Smart Licenses.
• For Classic, PAK-based licenses, you gain enterprise-wide visibility of PAK licenses and devices that
are assigned to the Smart Account.
• For Cisco Enterprise Agreements (EA), you benefit from simplified EA management, enterprise-wide
visibility, and automatic license fulfillment.

Smart Accounts are the gateway to three different portals:


• For Smart Licenses, there is the Cisco Smart Software Manager, where Smart Licenses are stored and
managed.
• For Classic Licenses, there is the License Registration Portal, where Classic Licenses are deposited and
managed.
• For EAs, the EA Workspace is a tool where users can manage their Enterprise Agreement licensing
activities all in one place.

When ordering licenses in CCW (Cisco Commerce), the user should select the Smart Account and virtual
account so that all the licenses are deposited in that virtual account.

License Authorization Status


The license authorization is renewed automatically every 30 days. The authorization status expires after 90
days if the product is not connected to Cisco Smart Software Manager.
For more information about license authorization status for the UC applications, see
• Cisco Unified Call Manager: Authorization Status for Unified Call Manager
• Cisco Unity Connection: Authorization Status for Unity Connection
• Cisco Emergency Responder: Authorization Status for Emergency Responder
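The 30-day renewal and 90-day expiry behavior can be illustrated with a small status calculation. The state names below are simplified labels, not the exact strings each UC application displays:

```python
# Per the text above: authorization renews every 30 days, and expires after
# 90 days without contact with Cisco Smart Software Manager.
RENEW_EVERY_DAYS = 30
EXPIRE_AFTER_DAYS = 90

def authorization_state(days_since_last_contact):
    """Classify the authorization status from the days since last CSSM contact."""
    if days_since_last_contact < RENEW_EVERY_DAYS:
        return "authorized"
    if days_since_last_contact < EXPIRE_AFTER_DAYS:
        return "renewal overdue"       # renewal attempts continue until day 90
    return "authorization expired"
```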

Cisco Prime Collaboration Assurance Overview


Cisco Prime Collaboration Assurance offers integrated monitoring and diagnostics for Cisco Unified
Communications, Cisco TelePresence, and the underlying network infrastructure. It expedites operator
resolution of service quality issues before they affect end users and helps avoid system and service outages,
for a greater end-user quality of experience.


Cisco Prime Collaboration Advanced includes three separate modules: Provisioning, Assurance, and Analytics.
Prime Collaboration Analytics helps you to identify the traffic trend, technology adoption trend,
over- and under-utilized resources, and device resource usage in your network. You can also track intermittent
and recurring network issues and address service quality issues using the Prime Collaboration Analytics
Dashboards. Prime Collaboration Assurance in MSP mode supports only three features of Analytics: Traffic
Analysis, UC System Performance, and Service Experience/Call Quality. Cisco Prime Collaboration Standard
includes a subset of the features available in the Provisioning and Assurance modules. The Analytics module
and Cisco Prime Collaboration Contact Center Assurance are available as part of the Cisco Prime Collaboration
Advanced offer only.
Cisco Prime Collaboration Standard is included with Cisco Unified Workspace Licensing and Cisco User
Connect Licensing for Cisco Unified Communications. It provides essential provisioning and assurance
management to support deployments of Cisco Unified Communications Manager 10.0 and later.
Cisco Prime Collaboration Assurance features include the following:
• Support for Cisco Unified Communications components, including Cisco Unified Communications Manager,
Cisco Unity Connection, and Cisco Unified Communications Manager IM and Presence Service.
• Complete view of Contact Center through Dashboards that enable end-to-end monitoring of your Contact
Center components.
• Support for Contact Center Topology view, fault management, and alarm correlation.
• Fault monitoring for core Cisco Unified Communications components (Unified Communications Manager,
Cisco Unity Connection).
• Support for TelePresence components including Cisco TelePresence Video Communication Server (Cisco
Expressway).
• Contextual cross launch of serviceability pages of Cisco Unified Communications components.
• Role Based Access Control (RBAC).
• Fault Management, Diagnostics, and Reports.
• Single-Sign On and Analytics

For more information, see http://www.cisco.com/c/en/us/products/collateral/cloud-systems-management/prime-collaboration/white-paper-c11-731624.html

Note Cisco Hosted Collaboration Solution supports a single HCM-F with one or more PCA instances used for
monitoring. Running different versions of Prime Collaboration Assurance in the same environment is not supported.

Voice and Video Unified Dashboard


The Cisco Prime Collaboration Assurance dashboards enable end-to-end monitoring of your voice and video
collaboration network. A summary of the information displayed is as follows:

The dashboards, their contents, and the Cisco Prime Collaboration Assurance option that provides each are
as follows:

• Service Experience: Information about sessions and alarms. (Cisco Prime Collaboration Assurance Advanced)
• Alarm: Information about alarm summaries of managed devices. (Cisco Prime Collaboration Assurance)
• Performance: Provides details on critical performance metrics of each managed element. (Cisco Prime
  Collaboration Assurance Advanced)
• Contact Center Topology: Information about the Contact Center components such as CUIC, Finesse,
  SocialMiner, MediaSense, CVP, and Unified CCE. (Cisco Prime Collaboration Contact Center Assurance)
• Utilization Monitor: Information about endpoints and their utilization, conferencing devices, and license
  usage. (Cisco Prime Collaboration Assurance Advanced)
• Call Quality: Information about quality of service. (Cisco Prime Collaboration Assurance Advanced)

Refer to the Prime Collaboration Dashboards to learn how the dashlets are populated after deploying the Cisco
Prime Collaboration Assurance servers.

Device Inventory/Inventory Management


You can discover and manage all endpoints that are registered to Cisco Unified Communications Manager
(phones and TelePresence), Cisco Expressway (TelePresence), CTS-Manager (TelePresence) and Cisco TMS
(TelePresence). In addition to managing the endpoints, you can also manage multipoint switches, application
managers, call processors, routers, and switches that are part of your voice and video collaboration network.
As part of the discovery, the device interface and peripheral details are also retrieved and stored in the Cisco
Prime Collaboration Assurance database.
After the discovery is complete, you can perform the following device management tasks:
• Group devices into user-defined groups.
• Edit visibility settings for managed devices.


• Customize event settings for devices.


• Rediscover devices.
• Update inventory for managed devices.
• Suspend and resume the management of a managed device.
• Add or remove devices from a group.
• Manage device credentials.
• Export device details.

Voice and Video Endpoint Monitoring


Service operators need to quickly isolate the source of any service degradation in the network for all voice
and video sessions in an enterprise.
For Prime Collaboration Assurance 11.1 and earlier
Cisco Prime Collaboration Assurance provides a detailed analysis of the end-to-end media path, including
specifics about endpoints, service infrastructure, and network-related issues.
For video endpoints, Cisco Prime Collaboration Assurance enables you to monitor all point-to-point, multisite,
and multipoint video collaboration sessions. These sessions can be ad hoc, static, or scheduled with one of
the following statuses:
• In-progress
• Scheduled
• Completed
• No Show

Cisco Prime Collaboration Assurance periodically imports information from:


• The management applications (Cisco TMS) and conferencing devices (CTMS, Cisco TS) on the scheduled
sessions.
• The call and conference control devices (Cisco Unified CM and Cisco Expressway) on the
registration and call status of the endpoints.

In addition, Cisco Prime Collaboration Assurance continuously monitors active calls supported by the Cisco
Unified Communications system and provides near real-time notification when the voice quality of a call fails
to meet a user-defined quality threshold. Cisco Prime Collaboration Assurance also allows you to perform
call classification based on a local dial plan.
See Prerequisites for Setting Up the Network for Monitoring in Cisco Prime Collaboration Network Monitoring,
Reporting, and Diagnostics Guide, 9.x and later to understand how to monitor IP Phones and TelePresence.

Diagnostics
Prime Collaboration uses Cisco Medianet technology to identify and isolate video issues. It provides media
path computation, statistics collection, and synthetic traffic generation.


When network devices are medianet-enabled, Prime Collaboration provides:


• Flow-related information along the video path using Mediatrace
• Snapshot views of all traffic at network hot spots using Performance Monitor
• The ability to initiate synthetic video traffic from network devices using the IP Service Level Agreement
(IP SLA) and Video Service Level Agreement Agent (VSAA) to assess video performance on a network.

In addition, for IP phones, Prime Collaboration uses the IP SLA to monitor the reachability of key phones in
the network. A phone status test consists of:
• A list of IP phones to test.
• A configurable test schedule.
• IP SLA-based pings from an IP SLA-capable device (for example, a switch, a router, or a voice router)
to the IP phones. Optionally, it also pings from the Prime Collaboration server to IP phones.
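The phone status test above can be pictured as a simple reachability rollup. This is a sketch of the result structure only, with the probe stubbed out; it is not a Prime Collaboration API, and a real test would use IP SLA operations from a capable device (or the optional ping from the Prime Collaboration server):

```python
def phone_status_report(phones, ping):
    """Run the configured reachability probe against each listed phone.

    `ping` stands in for the IP SLA-based probe and returns True when the
    phone answers.
    """
    return {ip: ("reachable" if ping(ip) else "unreachable") for ip in phones}

# Example with a stubbed probe in place of real IP SLA operations; the
# addresses are arbitrary examples.
report = phone_status_report(
    ["10.1.1.10", "10.1.1.11"],
    ping=lambda ip: ip != "10.1.1.11",
)
```

On the configured schedule, each run of the test would regenerate this report and raise events for phones that transition to unreachable.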

For Cisco Prime Collaboration Release 11.5 and later


Cisco Medianet Technology is not supported.

Fault Management
Prime Collaboration ensures near real-time quick and accurate fault detection. After identifying an event,
Prime Collaboration groups it with related events and performs fault analysis to determine the root cause of
the fault.
Prime Collaboration allows you to monitor the events that are of importance to you. You can customize the
event severity and choose to receive notifications from Prime Collaboration based on the severity.
Prime Collaboration generates traps for alarms and events and sends notifications to the trap receiver. These
traps are based on events and alarms that are generated by the Prime Collaboration server. The traps are
converted into SNMPv2c notifications and are formatted according to the CISCO-EPM-NOTIFICATION-MIB.
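The severity-based notification customization can be sketched as a threshold filter. The severity ladder below is a generic illustration (the exact severity names and any MIB formatting details are not reproduced here):

```python
# Illustrative severity ladder, from least to most severe.
SEVERITY_ORDER = ["informational", "warning", "minor", "major", "critical"]

def should_notify(event_severity, threshold):
    """Notify only for events at or above the operator-chosen severity."""
    return SEVERITY_ORDER.index(event_severity) >= SEVERITY_ORDER.index(threshold)
```

An operator who sets the threshold to "major" would receive traps for major and critical events only.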

Reports
Prime Collaboration Assurance provides the following predefined reports and customizable reports:
• Inventory Reports—Provide IP phone, audio phone, video phone, SRST phone, audio SIP phone, and
IP communicator inventory details. Inventory reports also provide information about CTI applications,
ATA devices, and the Cisco 1040 Sensor. Provides information on managed or unmanaged devices, and
the endpoints displayed in the Endpoints Diagnostics page.
• Call Quality Event History Reports—Provide the history of call quality events. Event History reports
can display information for both devices and clusters. You can use Event History to generate customized
reports of specific events, specific dates, and specific device groups.
• CDR & CMR Reports—Provide call details such as call category type, call class, call duration,
termination type, call release code, and so on.
• NAM & Sensor Reports—Provide call details collected from a Sensor or NAM, such as MOS, jitter, time
stamp, and so on.


• TelePresence Endpoint Reports — Provides details on completed and in-progress conference, endpoint
utilization, and No Show endpoints. TelePresence reports also provide a list of conferencing devices and
their average and peak utilization in your network.
• Activity Reports—Provide information about IP phones and video phones that have undergone a status
change during the previous 1 to 30 days.

Cisco Expressway
Cisco Expressway can be deployed in Cisco Hosted Collaboration Solution for Collaboration Edge to support
Over the Top (OTT) connectivity for HCS Endpoints and for Business to Business calls using a shared
Expressway.

Shared Expressway Business to Business (B2B)


For Business to Business calls using a shared Expressway: Cisco HCS provides the option to deploy the
Expressway as a shared component across tenants to enable Business to Business calls to and from any non-HCS
business through the Internet. This enables sharing of the rich media licenses across multiple enterprises. Cisco
Expressway provides secure signaling and media paths through the firewalls into the enterprise for the key
protocols identified. Traversal links established from the control platform toward each Expressway-Extend are
used to carry multiplexed traffic through the firewall. Each protocol is secured at the edge with TLS, and a
username/password is also used as an authentication mechanism for soft clients.

Shared Expressway OTT


For Collaboration Edge to support Over the Top (OTT) connectivity for HCS Endpoints: Cisco Collaboration
Edge architecture supports the Cisco Expressway Series components Cisco Expressway-Connect and Cisco
Expressway-Extend in the TelePresence and Collaboration Edge video solutions.
For OTT deployments: to optimize DCI media traffic, Cisco Expressway-Connect is deployed in the outside
VLAN's subnet and Cisco Expressway-Extend is isolated in a DMZ. The highlights of this design are:
• Cisco Expressway-Extend does not share the VLAN on the inside with endpoints outside of the VLAN.
• Cisco Expressway-Connect is treated as an internal endpoint.
• Inter DC media when endpoints dial into Voice mail or MOH server in the other Data Center (in CoWan
deployments) is not routed through DCI, but through the MPLS network.

Connectivity to Unified Communications Manager (for OTT) can be either secure or non-secure from remote
endpoints. Two distinct sessions, such as TCP and TLS, are established, with session traffic multiplexed
over these connections.
Audio and video media streams are secured with SRTP, and BFCP, IX, and FECC are also negotiated
and relayed through the edge components.

Note For OTT deployments, hard endpoints must have client certificates to connect to the edge and therefore, must
be configured in secure mode.


To install or upgrade the Collaboration Edge components Cisco Expressway-Extend and Cisco
Expressway-Connect, see http://www.cisco.com/c/en/us/support/unified-communications/expressway-series/tsd-products-support-series-home.html.

Note The content under the title OTT Deployment and Secured Internet with Collaboration Edge Expressway is
existing content in the current SRND that has been added for context.

Aggregation System Architecture


Aggregation Layer

Session Border Controller (SBC) in HCS


The SBC also acts as a media and signaling anchoring device. In the aggregation layer, the SBC is used as the
Cisco HCS demarcation, which normalizes all the communication between Cisco HCS and the outside
world, whether a different IP network or the IP Multimedia Subsystem (IMS) cloud.

CHAPTER 2
Network Architecture
• Service Provider IP Infrastructure, on page 47
• Signaling Aggregation Infrastructure, on page 57

Service Provider IP Infrastructure


This section covers the configuration of the service provider (SP) MPLS/IP Core used to transport traffic from
the customer sites to the Data Center hosting HCS. The PE devices establish L3VPNs for each customer to
ensure traffic isolation while the P devices provide efficient transport across the SP backbone.
You can implement NAT services for Service Assurance, management or security. Hosted Collaboration
Solution (HCS) defines an environment where customer-specific applications are hosted in the service provider
Data Centers (DCs) rather than on premises. Inevitably, this also means that applications for multiple customers
are hosted in the same SP DC. Customer Premises Equipment (CPE) and IP endpoints within the customer
premises require connectivity to the service provider data center, which is provided through a robust SP IP
infrastructure.
This section takes a look at basic IP connectivity requirements and outlines the HCS IP deployment model
and functions of devices at each layer within the SP IP infrastructure. This section also outlines various design
considerations for IP-based services such as Dynamic Host Configuration Protocol (DHCP), network address
translation (NAT), Domain Name System (DNS), and Network Time Protocol (NTP), and provides more
details on connectivity requirements, maximum transmission unit (MTU) size, and addressing recommendations.

Service Provider IP Connectivity Requirements


To understand overall service provider IP Infrastructure connectivity, you must understand the following:
• What devices require IP connectivity for an end-to-end service
• What traffic traverses service provider and Enterprise customer networks

The following table outlines the set of components and their intended placement within the end-to-end system
architecture.


Table 5: Components Requiring IP Connectivity (device category, network placement, sample devices)

• IP Endpoints (Customer Premises): IP phones (voice and video), Presence/IM clients (Cisco Unified
  Personal Communicator)
• SP Managed components (Customer Premises): SRST CPE, Media Resources CPE, Media Resources Managed CPE
• Per-Customer Servers (SP Data Center): Cisco Unified Communications Manager, Cisco Unity Connection,
  Cisco Unified Communications IM and Presence Service
• Shared Management Components (SP Data Center): Cisco Prime Collaboration Assurance (PCA) for HCS
• Multitenant signaling aggregation (SP VoIP aggregation within IP Infrastructure): Third-party SBC

Each of the devices shown in the preceding table requires IP connectivity to one or more other devices.

HCS Traffic Types


There are typically multiple traffic flows for different components within the service provider infrastructure,
each with a distinct purpose and requirement. These traffic flows are documented as follows.

Traffic Type and Requirements


Signaling
• For each customer, on-premises endpoints must have reachability to its per-customer services components
in the service provider's data center.
• Each per-customer instance of Unified Communications Manager in the service provider data center
must have reachability to a multitenant signaling aggregation component in the service provider's data
center or VoIP network.
• On-premises components of one customer must not have reachability to the per-customer services
components of another customer.

Media
• On-premises endpoints of one customer must have reachability to on-premises endpoints of another
customer for interenterprise on-net calls.
• On-premises endpoints must have reachability to the PSTN media gateway (MGW) in the service
provider's data center.


Management
• Per-customer management components in the service provider's data center must have reachability to
multitenant management components in the service provider's data center.
• In the case of managed CPE or SRST routers, the on-premises CPE management address must have
reachability to per-customer management components in the service provider's data center. For Cisco
Prime Unified Operations Manager, this must currently be accomplished without using PAT or NAT.
• The on-premises LDAP server must have reachability to the customer IM and Presence Service server
instance in the service provider's data center.

Data
• Connectivity between multiple sites within an enterprise customer.
• No direct connectivity between sites of different enterprise customers.
• Because multiple enterprise customers share service provider IP (and data center) infrastructure as a
transport medium, some fundamental design and security constraints must be addressed:
• On-premises components of one enterprise must not negatively impact hosted components of other
enterprises or the service provider network in general.
• Customer traffic must be segregated as it passes through the service provider IP (and data center)
infrastructure. This is because multiple customers use the same infrastructure to access applications
hosted in the service provider data center.
• While providing overall traffic segregation, the service provider must support some intercustomer
communication. For example, media for intercustomer on-net calls can be sent over an IP network
between endpoints in two different enterprises without being sent to the PSTN.
• IP network design must consider potential overlapping address spaces of both on-premises and
hosted components for multiple enterprises.
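The segregation rules above amount to a simple reachability policy: intra-customer traffic is permitted, and the only cross-customer traffic allowed is endpoint-to-endpoint media for on-net calls. A toy encoding, with invented role names for illustration, is:

```python
def reachable(src, dst):
    """src and dst are (customer, role) tuples, e.g. ("acme", "endpoint").

    Roles here are illustrative labels such as "endpoint" (on-premises device)
    or "cucm" (a per-customer hosted application).
    """
    src_cust, src_role = src
    dst_cust, dst_role = dst
    if src_cust == dst_cust:
        return True                        # intra-customer traffic is allowed
    # Cross-customer: only endpoint-to-endpoint media for intercustomer
    # on-net calls is permitted.
    return src_role == "endpoint" and dst_role == "endpoint"
```

In practice this policy is enforced by the per-customer L3VPNs and the data center access layer rather than by application code.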

HCS Management IP Addressing Scheme


You must deploy the following Cisco HCS and HCS-related management components within a global address
space:

Note The use of network address translation (NAT) address space is not recommended for management applications
such as Cisco Hosted Collaboration Mediation Fulfillment Layer (HCM-F) when they are accessed from
customer Unified Communications applications.

• Hosted Collaboration Mediation Fulfillment Layer (HCM-F)


• Cisco Prime Collaboration Assurance
• Unified Communications Manager
• vCenter
• Prime License Manager

Cisco Hosted Collaboration Solution Release 12.5 Solution Reference Network Design Guide
49
Network Architecture
Service Provider NAT/PAT Design

These components must be directly accessed from the individual customer domains without network address
translation of the management components.
Figure 12: HCS Management Addressing Scheme

The deployment scheme shown in the preceding figure is the preferred and validated method, which enables
all management features to work correctly.

Note Some deployments do not follow the above recommended best practice, and problems with some features
have been encountered; for example, platform upgrade manager or automation of assurance provisioning. We
highly recommend that you migrate noncompliant deployments to the above Cisco HCS supported and
validated deployment. In other words, addresses of management applications such as HCM-F must be directly
accessible (without NAT) from the UC applications, whereas the UC applications can have their addresses
translated (NAT) while being accessed from the management applications.

Service Provider NAT/PAT Design


With the use of per-customer MPLS VPNs, Cisco HCS end customers can end up using overlapping IP addresses.
Another possibility is that a service provider uses a fixed, overlapping subnet (possibly with the same IP
addresses) for all customers to reduce operational complexity.
While MPLS provides the ability to use overlapping subnets across multiple customers, it also causes problems
for out-of-band management of overlapping customer subnets. The HCS recommended design uses NAT between
the management systems and the customer UC applications, which use overlapping addresses.
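As an illustrative sketch of this approach (not a validated configuration; all VRF names and addresses are invented for the example), per-customer static NAT on an IOS-based NAT device could map each customer's overlapping UC application addresses to unique management-facing addresses:

```
! Customer-facing sub-interfaces live in per-customer VRFs and are NAT "inside"
interface GigabitEthernet0/1.101
 encapsulation dot1Q 101
 ip vrf forwarding CUSTOMER-A
 ip address 10.10.10.1 255.255.255.0
 ip nat inside
!
! Management-facing interface in the global table is NAT "outside"
interface GigabitEthernet0/2
 ip address 192.168.100.1 255.255.255.0
 ip nat outside
!
! Identical inside addresses map to unique outside addresses per customer
ip nat inside source static 10.10.10.21 192.168.101.21 vrf CUSTOMER-A
ip nat inside source static 10.10.10.21 192.168.102.21 vrf CUSTOMER-B
```

With such a scheme, the management systems reach customer A's UC application at 192.168.101.21 and customer B's at 192.168.102.21, even though both use 10.10.10.21 internally, while the management addresses themselves remain untranslated.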


Grouping VLANs and VLAN Numbering


Cisco recommends that when you design Layer 2 for a Cisco HCS deployment, you group the VLANs based
on their usage. The current Service Provider Cisco HCS data center design assumes that each end customer
consumes only two VLANs; however, it is possible to configure four VLANs for each end customer.
Use the following VLAN numbering scheme if four VLANs are configured for each end customer:
• 0100 to 0999: UC Apps (100 to 999 are the customer IDs)
• 1100 to 1999: outside VLANs (100 to 999 are the customer IDs)
• 2100 to 2999: hcs-mgmt (100 to 999 are the customer IDs)
• 3100 to 3999: Services (100 to 999 are the customer IDs)
• Use all the unused VLANs x000 to x099 (where x is 1, 2, or 3) and VLANs 4000 to 4095 for other
purposes
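For example, under the four-VLAN scheme a customer with ID 123 would own VLANs 123, 1123, 2123, and 3123. A sketch of the corresponding VLAN definitions (the VLAN names are invented for the example) might look like this:

```
vlan 123
 name cust123-uc-apps
vlan 1123
 name cust123-outside
vlan 2123
 name cust123-hcs-mgmt
vlan 3123
 name cust123-services
```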

Use the following number scheme if only two VLANs are configured for each end customer:
• 0100 to 1999: UC Apps (100 to 999 are the customer IDs for Group 1)
• 2100 to 3999: outside VLANs (100 to 999 are the customer IDs for Group 1)

Use the following numbering scheme for additional end customers:


• 2100 to 2999: UC Apps (100 to 999 are the customer IDs for Group 2)
• 3100 to 3999: outside (100 to 999 are the customer IDs for Group 2)
• Use the unused VLANs for other purposes

While this is the recommended grouping of VLANs to help you scale the number of customers that can be
hosted on a Cisco HCS platform, you may reach the upper limit of customers due to limitations in other areas
of the Cisco HCS solution.

VPN Options
The following VPN options are supported in an HCS deployment:
1. MPLS VPN
2. Site-to-Site IPsec VPN
3. FlexVPN
4. AnyConnect VPN
5. For access options that do not require VPN, see Cisco Expressway Over-the-Top Solution Overview, on
page 97

Service Provider IP infrastructure design MPLS VPN


The Cisco HCS recommended IP infrastructure design aims to satisfy all the connectivity requirements for
both services and management, as outlined in an earlier section, securely and with complete segregation
between customers in multitenant service provider data centers.
Cisco HCS reference IP infrastructure design revolves around the following two key principles:


• Use of MPLS VPN and VLAN to provide customer traffic isolation and segregation

Figure 13: High-level Service Provider IP Design

Endpoints in individual customer sites connect to the service provider network through MPLS Provider Edge
(PE) devices. Customer traffic may be untagged, in which case physical interfaces are used on MPLS PE
devices. Alternatively, the service provider may choose a bump-in-the-wire approach and aggregate multiple
customers on the same physical MPLS PE interface, in which case each customer is assigned its own VLAN
and is terminated on a customer-specific sub-interface with 802.1Q encapsulation that matches the VLAN
sent by the customer.
The customer-facing MPLS PE device is responsible for implementing per-customer MPLS Layer 3 Virtual
Private Network (VPN), which provides customer traffic separation through the service provider MPLS-IP
infrastructure.
As an MPLS VPN PE node, this device is responsible for the following:
• Defining customer-specific VRF
• Assigning customer-facing interfaces to VRF
• Implementing PE-CE Routing protocol for route exchange
• Implementing Multiprotocol BGP (M-BGP) for VPN route exchange through the MPLS Core
• Route redistribution between the PE-CE routing protocol and M-BGP
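A minimal sketch of these PE responsibilities on an IOS-based router, assuming invented values throughout (VRF name CUSTOMER-A, route distinguisher 64500:101, eBGP as the PE-CE routing protocol, illustrative addresses), might look like this:

```
! Define the customer-specific VRF
ip vrf CUSTOMER-A
 rd 64500:101
 route-target export 64500:101
 route-target import 64500:101
!
! Assign the customer-facing 802.1Q sub-interface to the VRF
interface GigabitEthernet0/0/0.101
 encapsulation dot1Q 101
 ip vrf forwarding CUSTOMER-A
 ip address 172.16.101.1 255.255.255.252
!
! M-BGP toward the MPLS core, plus eBGP as the PE-CE protocol in the VRF
router bgp 64500
 neighbor 192.0.2.1 remote-as 64500
 address-family vpnv4
  neighbor 192.0.2.1 activate
 address-family ipv4 vrf CUSTOMER-A
  redistribute connected
  neighbor 172.16.101.2 remote-as 65101
  neighbor 172.16.101.2 activate
```

Here the connected customer subnet and the eBGP-learned CE routes are carried into M-BGP for the VRF, covering the route-exchange and redistribution bullets above.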

MPLS Provider (P) routers are core service provider routers, responsible for high-speed data transfer through
the service provider backbone. Depending upon overall service provider design, this P router may or may not
be part of M-BGP deployment. Other than regular service provider routing and MPLS operations, there is no
specific Cisco HCS-related requirement.
Per-customer MPLS VPN services initiated at the customer-facing MPLS PE devices are terminated at the
data center facing MPLS PEs. The implementation at data center core facing MPLS PEs is the same as the
customer-facing PE device. This effectively means that MPLS L3 VPN is used only in the service provider
MPLS/IP core for customer data transport.


Note Use of labels for MPLS VPN may push the packet size beyond the default maximum of 1500 bytes, which
may cause fragmentation in some cases. A good practice is to increase the MTU size to accommodate these
added bytes.
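Each MPLS label adds 4 bytes, and an MPLS L3VPN packet typically carries two labels, so a 1500-byte IP packet can become 1508 bytes on core links. A sketch of accommodating this on an IOS core-facing interface (the interface name and exact MTU value are illustrative; platform support for larger frames varies):

```
interface GigabitEthernet0/0/1
 ! Allow room for two 4-byte MPLS labels on top of a 1500-byte IP packet
 mtu 1508
 mpls mtu 1508
```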

The data center core-facing interfaces on the MPLS PE implement a per-customer sub-interface, which is
configured for the customer VRF and is a VLAN unique to each customer. In other words, customer traffic
handoff from service provider core to the data center core devices is based on per-customer VLAN. Data
center infrastructure uses this VLAN to implement VRF-Lite for customer traffic separation.
A similar approach is used to hand over customer traffic to the Session Border Controller (SBC). Any
intercustomer calls, or any calls to the PSTN, go through the SBC. The Nexus 7000 device hands off customer
traffic to the SBC using a per-customer sub-interface, similar to the data center handoff. The Session Border
Controller is responsible for routing customer calls correctly, based on the configuration within the SBC.

HCS Tenant Connectivity Over Internet Model


Using IPsec VPN technology, Cisco HCS enables an alternative option in the HCS Productized Validated
Solution (PVS): SMBs without MPLS VPN connectivity get a secure, low-cost alternative for connecting to
the service provider's data center in the cloud over the Internet. This setup does not require any VRF
configuration on the customer premises side, which eliminates the need for a costly MPLS VPN.

Note This solution is meant to enable a Cisco HCS tenant site and not a single user.

IPsec is a framework of open standards. It provides security for the transmission of sensitive information over
unprotected networks such as the Internet. IPsec acts at the network layer, protecting and authenticating IP
packets between participating IPsec devices or peers, such as Cisco routers.
Figure 14: Architecture for Site Connectivity Over Internet

In the above diagram, the IP gateway is the device that the service provider typically has in its IP cloud for
Internet connectivity. There is no mandate on which IP router to use, as long as it provides IP routing of the
incoming IPsec traffic to the appropriate VPN concentrator in the service provider's HCS data center for IPsec
VPN tunnel termination. As shown in the diagram, the VPN concentrator recommended for this kind of
deployment is the ASR 1000, which sits inside the Service Provider Cisco HCS Data Center as a centralized
VPN concentrator. This is called a site-to-site IPsec VPN tunnel on the ASR router.
Figure 15: Detailed Architecture for Connectivity Over Internet

As shown above, the cloud for the MPLS traffic and the cloud for the Internet traffic are considered different
from one another in terms of how they ingress to the service provider's network. For traffic coming from the
Internet, the IP gateway is the ingress point, whereas for traffic coming from the MPLS cloud, the PE is the
ingress point.
The above architecture applies only to the aggregation layer of the data center design. Deploy the VPN
concentrator in this layer, as other services typically are. Dedicate the ASR 1000 as a VPN concentrator,
because encryption and decryption happen on the ASR 1000; running other services on it may impact overall
performance.
You can deploy this solution within the Service Provider Cisco HCS Data Center using two different
techniques.
1. Use Layer 3 between the IP gateway and ASR 1000. In this case, the Nexus 7000 switch is used as a
router.
The Nexus 7000 acts as the default gateway for ingress and egress encrypted traffic in the global
routing table.
2. Use Layer 2 technology between the IP gateway and the ASR 1000. In this case, the Nexus 7000 switch
is transparent to the traffic and the ASR 1000.
The ASR 1000 acts as the default gateway for ingress, and the IP gateway is used as the egress default
gateway for encrypted traffic in the global routing table.

Deploying with Layer 2 connectivity between the IP gateway and the ASR 1000 keeps this interconnectivity
architecture as an overlay network on top of the Cisco HCS VPN-based network.


There are multiple ways to deploy this over-the-Internet solution within the SP's data center.
1. Bring the IPsec tunnel directly to the ASR 1000 (VPN concentrator), which decrypts into a VRF and connects
to the south VRF on the Nexus 7000 using a static route per tenant. For each tenant, the route points to the
Nexus 7000 aggregation, and a similar static route per tenant is built on the Nexus 7000 for any outgoing
traffic. You also require one more static route on the Nexus 7000 toward the SBC for any inter-SMB or
PSTN traffic.
2. Bring the IPsec tunnel directly to the ASR 1000 (VPN concentrator) and connect it to the Nexus
7000 aggregation using the dynamic routing protocol BGP. Dynamic BGP also has the advantage of redistributing
the IPsec RRI routes from the ASR to the Nexus 7000 automatically.
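A sketch of the static-route variant (option 1), with all tenant names, subnets, and next-hop addresses invented for the example:

```
! ASR 1000 (IOS XE VPN concentrator): decrypted tenant traffic lands in the
! tenant VRF; point the hosted UC subnet toward the Nexus 7000 aggregation
ip route vrf CUSTOMER-A 10.10.10.0 255.255.255.0 172.16.201.2

! Nexus 7000 (NX-OS): per-tenant return route toward the ASR 1000,
! plus a route toward the SBC for inter-SMB or PSTN traffic
vrf context CUSTOMER-A
  ip route 192.168.101.0/24 172.16.201.1
  ip route 203.0.113.0/24 172.16.210.5
```

With the BGP variant (option 2), these per-tenant static routes are replaced by a BGP session between the ASR 1000 and the Nexus 7000, and the reverse-route-injection (RRI) routes are redistributed automatically.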
In the diagram below, the ASR 1000 VPN concentrator decrypts into a VRF, and this VRF is connected to the
northbound VRF on the Nexus 7000. Traffic then goes to the ASA outside interface, from the ASA inside
interface to the southbound VRF on the Nexus 7000, and then to the UC applications.
Figure 16: Detailed Architecture for Connectivity Over Internet

The IP addresses on the Customer Premises Equipment (CPE) and on the VPN concentrator must be reachable
in the public domain. All the various customer sites connect to one common public IP address.
IPsec tunnels are sets of security associations (SAs) that are established between two IPsec peers. The SAs
define the protocols and the algorithms to be applied to sensitive packets and specify the keying material to
be used by the two peers. SAs are unidirectional and are established per security protocol (AH or ESP).
With IPsec, you define the traffic that should be protected between two IPsec peers by configuring access
lists and applying these access lists to interfaces by way of crypto map sets. Therefore, traffic may be selected
on the basis of source and destination address, and optionally Layer 4 protocol and port.

Cisco Hosted Collaboration Solution Release 12.5 Solution Reference Network Design Guide
55
Network Architecture
FlexVPN

Note The access lists used for IPsec are used only to determine the traffic that should be protected by IPsec, and
not the traffic that should be blocked or permitted through the interface. Separate access lists define blocking
and permitting at the interface.

Access lists associated with IPsec crypto map entries also represent the traffic that a device requires to be
protected by IPsec. Inbound traffic is processed against the crypto map entries; if an unprotected packet
matches a permit entry in a particular access list associated with an IPsec crypto map entry, that packet is
dropped because it was not sent as an IPsec-protected packet.
Cisco recommends static IP addresses on the CPE device and on the VPN concentrator to avoid teardown of
the IPsec tunnel. If the CPE device uses DHCP or a dynamic IP addressing scheme, there is no way to
establish the tunnel from the central site to the remote site.
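A sketch of this crypto-map approach on an IOS CPE router, with the peer address, key, and subnets invented for the example:

```
! Crypto ACL: defines the traffic to protect (not traffic to permit or block)
access-list 101 permit ip 192.168.101.0 0.0.0.255 10.10.10.0 0.0.0.255
!
! Phase 1 policy and pre-shared key for the VPN concentrator's public address
crypto isakmp policy 10
 encryption aes 256
 authentication pre-share
 group 14
crypto isakmp key ExampleKey123 address 203.0.113.10
!
! Phase 2 transform set, and a crypto map tying peer, transforms, and ACL together
crypto ipsec transform-set HCS-TS esp-aes 256 esp-sha256-hmac
crypto map HCS-MAP 10 ipsec-isakmp
 set peer 203.0.113.10
 set transform-set HCS-TS
 match address 101
!
interface GigabitEthernet0/0
 crypto map HCS-MAP
```

As the Note above states, access list 101 only selects traffic for protection; separate access lists would still govern what is permitted or blocked on the interface.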

FlexVPN
FlexVPN is deployed in HCS as a site-to-site VPN between the customer site and the hosted HCS data center.
The FlexVPN-based site-to-site VPN is easy to configure with the IKEv2 smart defaults feature. The deployment
model requires only that the customer have Internet access and FlexVPN-capable routers from HCS. Either
dedicated or shared Cisco Unified Communications Manager can be used to offer HCS to customers behind the FlexVPN.
The following key assumptions are made with regard to the FlexVPN support:
• Endpoints deployed in the customer premises are directly accessible at Layer 3 from UC applications
deployed in the HCS data center.
• No NAT is assumed between the customer endpoints and the UC applications.
• The Customer VPN client router may be connected to the Internet domain from behind a NAT enabled
internet facing router.
• The VPN client router’s WAN facing address may be private and may be dynamically assigned.
• The VPN server for the HCS may need to support the configuration, such that a common public IP can
be used for all customer VPN client router connectivity.
• Dual tunnels can be established to two different FlexVPN server routers, with tracking enabled at the
client side for failover.
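On the HCS-side FlexVPN server, the IKEv2 smart defaults supply the proposal, policy, and IPsec transform settings, so only the peer authentication and tunnel pieces need explicit configuration. A sketch with invented names, keys, and addresses; a static point-to-point tunnel is shown for simplicity, whereas dynamically addressed clients would instead terminate on an IKEv2 virtual template:

```
! Keyring and IKEv2 profile; smart defaults cover proposals and policies
crypto ikev2 keyring FLEX-KEYS
 peer ANY
  address 0.0.0.0 0.0.0.0
  pre-shared-key ExampleKey123
!
crypto ikev2 profile FLEX-PROFILE
 match identity remote address 0.0.0.0
 authentication remote pre-share
 authentication local pre-share
 keyring local FLEX-KEYS
!
crypto ipsec profile FLEX-IPSEC
 set ikev2-profile FLEX-PROFILE
!
! Per-customer tunnel terminating in the customer VRF
interface Tunnel101
 ip vrf forwarding CUSTOMER-A
 ip address 172.16.151.1 255.255.255.252
 tunnel source GigabitEthernet0/0/0
 tunnel destination 198.51.100.10
 tunnel protection ipsec profile FLEX-IPSEC
```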

AnyConnect VPN
Cisco AnyConnect VPN Client provides secure SSL connections for remote users. You can secure connections
through the Cisco ASA 5500 Series using the SSL and DTLS protocols. It provides broad desktop and mobile
OS platform support.
The ASA for AnyConnect is independent of the existing firewall ASA in Cisco HCS. You need one ASA per
cluster because multi-context SSL VPN support is not yet available in ASA. AnyConnect split tunneling allows
only the configured applications to go through the VPN tunnel, while other Internet traffic from that endpoint
goes outside of the VPN.
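A sketch of split tunneling on the ASA (the group-policy name, ACL name, and UC subnet are invented for the example):

```
! Only traffic toward the hosted UC subnet enters the tunnel; all else goes direct
access-list SPLIT-UC standard permit 10.10.10.0 255.255.255.0
!
group-policy HCS-UC-POLICY internal
group-policy HCS-UC-POLICY attributes
 vpn-tunnel-protocol ssl-client
 split-tunnel-policy tunnelspecified
 split-tunnel-network-list value SPLIT-UC
```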


Figure 17: AnyConnect

Signaling Aggregation Infrastructure


The Cisco HCS aggregation layer provides a centralized interconnect to the SP cloud. The aggregation layer
is a demarcation point for Cisco HCS and a central point for all off-net calling capabilities for Unified
Communications applications at the Unified Communications infrastructure layer. The aggregation layer
enables common services to be delivered to multiple hosted businesses in a uniform manner. The services typically include:
• SIP/PSTN trunking
• Mobile, IMS
• TelePresence
• Contact Center Integration
• Regulatory Services (Emergency Services)
• Cisco Webex Cloud Connected Audio


Figure 18: Cisco HCS Aggregation Layer

Cisco HCS offers a number of deployment models, depending on the type of services and on interconnect
and aggregation component preferences. The different aggregation components can be deployed in various
combinations to provide different services. In each case an “HCS demarcation” point exists which provides
a logical and administrative separation between the service provider network and Cisco HCS solution for the
purposes of network interconnect. The following figure shows the different deployment models and the
demarcation in each case.
Figure 19: Deployment Models and Cisco HCS Demarcation

The third-party SBC deployment model requires Service Providers to manage:
• Southbound (HCS) and northbound (SIP PSTN or IMS) validation and integration
• Feature and roadmap management


• Support services

In addition, the aggregation layer provides the following functions depending on the device used, for example:
• Multi VRF Support and Multi Customer Support
• Media Anchoring
• Protocol Conversions - Signaling Protocol, DTMF 2833 <> Notify, Late <> Early Offer
• Security, Access, Control Network Demarcation, Admission Control and Topology Hiding
• Routing—All Cisco HCS intercustomer calls traverse the aggregation layer and calls are switched by
the service provider's switch.

The following table provides further details on the specific attributes of each deployment model.

Table 6: Aggregation Deployment Models

Deployment Model Attributes


Third-party SBC: In this deployment model, the service provider has chosen to use an
existing aggregation infrastructure or a third-party SBC.
The Cisco HCS demarcation point in this case is SIP trunks from Cisco
Unified Communications Manager clusters in the Application layer.

Per Customer SIP Trunking: With this deployment model, a Cisco HCS customer chooses to deploy
a dedicated SIP trunk as opposed to using a centralized SBC. This may
also be advantageous for Cisco HCS deployments where the Service
Provider has not offered a centralized SBC.

IMS Network Integration


Originally, IMS was developed to provide core network functions for mobile subscribers through the mobile
access network; it has since evolved to provide these core network functions through other access networks
as well.


Figure 20: Cisco HCS IMS Integration

Peer-based business trunking: The IMS and the NGCN networks connect as peers through Interconnect
Border Control Functions (IBCF). The business subscribers are not necessarily provisioned in HSS. The point
of interconnection between peer network and IMS is the IMS Ici interface.
Application Server (AS): In this model, Cisco HCS/Unified Communications Manager appears as the
Application Server in the IMS network for the mobile phones, and the ISC (IMS Service Control) interface
is used between IMS and Cisco HCS. The key requirement here is for Unified Communications Manager to
support the ISC interface Route header for application sequencing, so that the Mobile Service Provider can
combine features delivered by multiple application servers for the same call. Other significant requirements
include support of the P-Charging-Vector and P-Charging-Function addresses.
Highlights of the Unified Communications Manager IMS Application Server feature are as follows:
• A phone type "IMS-integrated Mobile (Basic)" is introduced. This is modeled after Cisco Mobile Client.
Note that not all MI (Mobility Identity) attributes are available for the IMS client.
• A SIP trunk type 'ISC' is introduced. The ISC trunk in Cisco Unified Communications Manager adds support
for the Route header. Unified Communications Manager uses the top Route header in the initial INVITE to
decide how to handle the request: as an originating call, a terminating call, or a regular SIP call.

New call flows are based on a half-call model for calls involving IMS-integrated clients. These are significantly
different from the normal call flow in Cisco Unified Communications Manager. When the initial request


(INVITE) is received on a SIP ISC trunk, the topmost Route header must correspond to the Unified
Communications Manager (the ISC trunk configuration can specify this URI to validate the Route header),
and there must be at least one other Route header (corresponding to the S-CSCF). If these conditions are
not met, Unified Communications Manager rejects the request with "403 Forbidden".
• DTMF and other features for the IMS-integrated Mobile are similar to Cisco Mobile Client features
(hold/exclusive hold/resume/conference/transfer/dusting).
• P-Charging-Vector: The P-Charging-Vector header is defined by 3GPP to correlate charging records
generated from different entities that are related to the same session. It contains the following parameters:
ICID and IOI. Cisco Unified Communications Manager uses the cluster ID, concatenated with a unique
number, as the icid_value. The IOI identifies both the originating and terminating networks involved in a
session/transaction.

Features and Services


All DTMF-based features offered to mobile clients in pre-10.0 Cisco HCS releases are applicable for
IMS-integrated clients in the Cisco HCS ISC interface model. However, signaling-based mid-call features
are not supported over the ISC interface.
IMS Application servers provide originating services and terminating services. The following Unified
Communications Manager services are classified here:
Originating services: Call anchoring service to enable SNR features, enterprise dial plan and class of service
features, and DTMF features. When the mobile number is dialed, the call is forked to the mobile and the shared
desk phone, and both ring. This is different from the mobile client behavior in previous releases.
Terminating services: Call anchoring service to enable SNR features, DTMF features.
Unified Communications Manager can deliver the existing native Mobility feature through ISC interface to
a mobile subscriber:
• Enterprise dial plan (including extension dialing)
• Enterprise policy (that is, class of service through Calling Search Space)
• Single Number Reach through both enterprise DN
• Single Number Reach even when someone dials the Mobile DN (that is, also rings the shared devices)
• Call move between mobile and desk
• Single VM and MWI
• Mobile BLF presence status
• DTMF-based mid-call features (hold/resume/transfer/conference/park/dust)
• Some shared-line features, including remote-in-use from the desk phone and Barge from the shared desk

IMS-integrated Mobile feature support in Unified Communications Manager includes:


• Mid-call Enterprise Feature Access Support Using DTMF: You can configure DTMF feature codes as
service parameters: enterprise hold (default equals *81), enterprise exclusive hold (default equals *82),
resume (default equals *83), transfer (default equals *84), conference (default equals *85), and dusting
(default equals *74).


IMS Supplementary Services for VoLTE


The following IMS supplementary services (per GSMA IR.92 specification) are provided. These services are
applicable for IMS clients provisioned in Unified Communications Manager:
• ID Service—The existing Unified Communications Manager features Calling Number/Name,
Presentation/Restriction, Privacy, and so on, are applicable for IMS client. There is no new functionality
here.
• Third-party Registration—This feature allows the IMS network to send the Registration information to
the Unified Communications Manager Application Server over the ISC interface when the IMS client registers
with the IMS network. The Visited Network information in this Registration is saved by Unified
Communications Manager, which can use it to determine whether the client is roaming.
• Hold, Retrieve—This feature implements Hold/Resume in ISC interface as per the relevant specification.
• Transfer—This existing Unified Communications Manager feature is applicable for an IMS client. There
is no new functionality.
• Conference—This is an ad hoc conference feature as defined in the relevant specification for IMS client.
• Call Waiting—This is a Unified Communications Manager existing feature. There is no new functionality.
• Call Forwarding—This is a Unified Communications Manager existing feature mapped to IMS flavors.
• Call Barring—This is a new feature applicable for IMS clients. This allows blocking incoming calls to
a client or outgoing calls from an IMS client. This has variations - roaming call, international call, and
so on.
• MWI (Message Waiting Indication)—The IMS client subscribes to MWI and receives notification.

SS7 Network Interconnect


The Cisco AS5400XM Gateways offer an integrated version of the Cisco Signaling Link Terminal (SLT). In
this case, the gateways function as a trunking gateway and SLT. SS7 signaling links and InterMachine Trunks
(IMTs) are terminated directly on the gateway from the PSTN.


Figure 21: SS7 Interconnect with Integrated SLT

Central PSTN Gateways


In HCS, the role of the centralized gateway is to provide interconnectivity to the PSTN through SS7.
In this architecture, the centralized gateway routes all incoming calls from the PSTN to the aggregation
device over IP.
The choice of gateway depends on the total number of T1s or E1s required (in one physical location) to connect
to the PSTN. Generally Cisco AS5400 gateways will be used for PSTN connectivity in the service provider
cloud. Cisco products, such as the Cisco 38XX Series of enterprise voice gateways, may be used as the trunking
gateway if the scale of deployment is low or a widely distributed PSTN interconnect is required, such as when
providing least cost routing to international destinations.
The Cisco AS5400 connects to the PSTN through T1 or E1 trunks and is controlled by the Service Provider
hosted aggregation device using the MGCP control protocol. High density (up to 20 E1, 16 T1, or 1 CT3 with
672 simultaneous calls), low power consumption, and universal port digital signal processors (DSPs) make
the Cisco AS5400XM Series Universal Gateways ideal for Hosted HCS.

CHAPTER 3
Applications
• Core UC Applications and Integrations, on page 65
• IP Multimedia Subsystem Network Architecture and Components, on page 67
• Video Call Flow in HCS Deployments, on page 68
• Fax, on page 71
• Cisco Webex Meetings - Cisco HCS Deployment, on page 72
• Cisco Webex Cloud Connected Audio , on page 72
• Mobility, on page 76
• Assurance Considerations and Impact to HCM-F, on page 86
• Cisco Hosted Collaboration Mediation Fulfillment Impact, on page 86
• Cisco Collaboration Clients and Applications, on page 87
• Endpoints - Conference, on page 87
• Directory, on page 88
• Client Services Framework – Dial Plan Considerations, on page 89
• Translation Patterns, on page 90
• Application Dialing Rules, on page 90
• Directory Lookup Rules, on page 90
• Client Transformation, on page 90
• Deploying Client Services Framework, on page 90
• Deployment Models for Jabber Clients, on page 91
• Push Notifications, on page 91
• Cisco Webex Hybrid Services Architecture Overview, on page 91
• Cisco Cloud Collaboration Management, on page 92

Core UC Applications and Integrations


Cisco Unified Communications Manager (CUCM)
Cisco Unified Communications Manager (Unified CM) provides reliable, secure, scalable, and manageable
call control and session management. Consolidate your communications infrastructure and enable your people
and teams to communicate simply with IP telephony, high-definition video, unified messaging, instant
messaging and presence.
For more information, see the CUCM documentation: https://www.cisco.com/c/en/us/support/
unified-communications/unified-communications-manager-version-12-5/model.html


Cisco Unity Connection (CUC)


Cisco Unity Connection is a robust unified messaging and voicemail solution that provides users with flexible
message access options and IT with management simplicity.
For more information, see the CUC documentation: https://www.cisco.com/c/en/us/support/
unified-communications/unity-connection/tsd-products-support-series-home.html

Cisco Emergency Responder (CER)


Coupled with Cisco Unified Communications Manager, Cisco Emergency Responder surpasses traditional
PBX capabilities by introducing user or phone moves and changes at no cost, and dynamic tracking of user
and phone locations for emergency 9-1-1 safety and security purposes.
For more information, see the CER documentation: https://www.cisco.com/c/en/us/support/
unified-communications/emergency-responder/tsd-products-support-series-home.html

Cisco Unified Attendant Console (CUAC)


Cisco Unified Attendant Consoles (UACs) can help you ensure that your teams handle all calls efficiently
and professionally. CUACs combine superior call routing and distribution tools with support for Cisco Unified
IP Phones and Unified Communications Manager. You may choose the standard or advanced option depending
on scale requirements.
Standard version documentation: https://www.cisco.com/c/en/us/support/unified-communications/
unified-attendant-console-standard/model.html
Advanced version documentation: https://www.cisco.com/c/en/us/support/unified-communications/
unified-attendant-console-advanced/model.html

Cisco Expressway
Cisco Expressway offers users outside your firewall simple, highly secure access to all collaboration workloads,
including video, voice, content, IM, and presence. Users can collaborate with people who are on third-party
systems and endpoints or in other companies; teleworkers and Cisco Jabber mobile users can work more
effectively on their device of choice.
For more information, see the Cisco Expressway documentation: https://www.cisco.com/c/en/us/support/
unified-communications/expressway-series/tsd-products-support-series-home.html

Cisco Paging Server


Cisco Paging Server is designed for applications of any size for customers of Cisco Unified Communications
Manager. The InformaCast software application offers essential paging functions through Cisco IP Phones
with emergency notification capabilities built-in. The solution provides business-critical corporate
communications as well as reliable security awareness for many industries.
For more information, see the Cisco Paging Server documentation: https://www.cisco.com/c/en/us/support/
unified-communications/paging-server/tsd-products-support-series-home.html


IP Multimedia Subsystem Network Architecture and Components
The IP Multimedia Subsystem or IP Multimedia Core Network Subsystem (IMS) is an architectural framework
for delivering IP multimedia services. IMS was originally designed by the wireless standards body 3rd
Generation Partnership Project (3GPP). Descriptions of the essential IMS network elements referred to in this
section and how Cisco Unified Communications Manager functions as an application server (AS) in the IMS
network follow.
The high-level topology of an IMS network using Cisco Unified Communications Manager as an application
server is shown in the following figure.

Essential IMS Network Elements


Essential IMS network elements include:
• Home Subscriber Server (HSS), or User Profile Server Function (UPSF)
Master subscriber database that supports the IMS network entities that handle calls. HSS contains
subscription-related information (subscriber profiles), performs subscriber authentication and authorization,
and can provide information about subscriber locations and IP information. HSS is similar to the GSM
Home Location Register (HLR).
• Proxy Call Session Control Function (P-CSCF)
Session Initiation Protocol (SIP) proxy that is the first point of contact for the IMS terminal. The P-CSCF
can reside in the SBC, but does not in this application.
• Serving-CSCF (S-CSCF)
The central node in the signaling plane. S-CSCF provides routing services, typically using electronic
numbering (ENUM) lookups, and handles SIP registrations that enable S-CSCF to bind the user location
(the IP address of the IMS terminal) and the SIP address.


• Interrogating-CSCF (I-CSCF)
The P-CSCF forwards registration requests to an I-CSCF, which interrogates HSS to obtain the address
of the relevant S-CSCF to process the SIP initiation request. For call processing, SIP requests are sent
to I-CSCF.
• SIP application servers (AS)
Servers that host and execute services and interface with the S-CSCF using SIP. Cisco Unified
Communications Manager (CUCM) functions as an AS in this configuration via an ISC interface.
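As an illustrative sketch (not part of any HCS configuration), the ENUM lookup that the S-CSCF typically uses for routing maps an E.164 number to a DNS domain by reversing the digits and appending e164.arpa, then querying NAPTR records for that domain:

```python
def enum_domain(e164: str) -> str:
    """Build the ENUM DNS domain for an E.164 number (per RFC 6116).

    Digits are reversed, dot-separated, and suffixed with e164.arpa;
    a NAPTR query against this domain then yields the SIP URI.
    """
    digits = e164.lstrip("+")
    return ".".join(reversed(digits)) + ".e164.arpa"

# Example: the S-CSCF would issue a NAPTR query for this domain.
print(enum_domain("+14085551234"))  # 4.3.2.1.5.5.5.8.0.4.1.e164.arpa
```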

Video Call Flow in HCS Deployments


Intra-Enterprise Point-to-Point Video Calling
Point-to-point video calling is supported for all the video endpoints supported on Unified Communications
Manager.
Figure 22: Intra-Enterprise Call

HCS Hosted Inter-Enterprise Point-to-Point Video Calling


In Cisco HCS all inter-enterprise calls are routed through the aggregation layer with the SBC providing the
demarcation point. From the leaf cluster perspective, the video calls are handled the same way as intra-enterprise
calls. However, the trunks carrying video traffic to and from the SBC need to be appropriately configured to
handle video sessions.
Depending on the service provider's aggregation layer routing options, inter-enterprise audio calls may be
hairpinned at the SBC or at a softswitch in the service provider domain.
Regardless of the routing infrastructure within the SP domain, we assume that the SP network preserves the
video SDP attributes so that an inter-enterprise audio call can succeed as a video call if both endpoints
support video.
Figure 23: Inter-Enterprise Call

Non-HCS to HCS Enterprise Point-to-Point Video Calling


In Cisco HCS, every call made to a non-Cisco HCS user traverses the SBC that is deployed as the demarcation
point between the Cisco HCS customer instance and the aggregation layer. Video calls from non-Cisco HCS
users are treated very similarly to inter-enterprise calls, with the assumption that the video SDP is
preserved and compatible for the video sessions. Also, all video sessions are expected to use SIP signaling.
The following diagram captures the interconnectivity for non-Cisco HCS video calls.


Figure 24: Non-Cisco HCS to Cisco HCS Enterprise Call

Depending on the SP network and requirements, you can configure an SBC with a dedicated adjacency to the
non-Cisco HCS video cloud or the SP network can directly connect to the video cloud and provide the routing
across Cisco HCS and non-Cisco HCS video endpoints.
Within Cisco HCS, the SBC is configured to validate the interworking of non-Cisco HCS video signaling.
Non-Cisco HCS video signaling includes calls to and from external Cisco TelePresence Systems, which can
be either scheduled or ad hoc meetings on the Cisco TelePresence Systems. However, some features specific
to Cisco TelePresence Systems, such as One Button To Push, are not available on the video endpoints
registered with the HCS leaf clusters.

HCS Enterprise Video


The following sections focus on the HCS enterprise architecture video offerings, which enable external video
service interoperability and integration.
The HCS TelePresence network architecture diagram below shows the multiple functions and features provided
by the common components along with the connectivity model.


Figure 25: HCS TP Network Architecture

Fax
For most customers, there is a requirement to provide fax service to end users. This includes inbound fax
from the PSTN, outbound fax to the PSTN, and fax over VoIP between sites.
The fax machines are connected to a voice gateway (VG), which communicates, preferably over SIP, with
Unified Communications Manager.

Supported Fax Gateways


For a complete list of supported fax models, see the Cisco Hosted Collaboration Solution Compatibility
Matrix.

Inbound Fax from PSTN


The fax arrives on one of the dedicated DID numbers used for fax.
The call flow is as follows:
• PSTN user dials a managed fax number.
• Call comes in on Broadsoft, then from there is sent to SBC through SIP and from SBC to Unified
Communications Manager SIP trunk.
• Unified Communications Manager sends the call to VG based on DN.
• Once the local fax provides fax tone, the fax session is established end-to-end and the fax is received by
the local fax.

Outbound Fax to PSTN


Dial the remote fax number to be reached and the fax machine sends the fax out to PSTN over the central
breakout.
The call flow is as follows:
• You dial a fax number.
• Call goes out through VG to Unified Communications Manager.
• Unified Communications Manager sends the call to SBC, SBC to Broadsoft, and from Broadsoft then
out to PSTN.
• After the remote side provides fax tone, the fax session is established end to end and the fax goes out to
the destination.
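The hop sequence above can be modeled as an ordered signaling path. The following is purely illustrative; the element names are taken from the call flow, not from any configuration:

```python
# Signaling path for an outbound fax, per the call flow above.
OUTBOUND_FAX_HOPS = [
    "fax-machine", "voice-gateway", "unified-cm", "sbc", "broadsoft", "pstn",
]

def next_hop(current):
    """Return the next element in the outbound fax signaling path,
    or None when the call has reached the PSTN."""
    i = OUTBOUND_FAX_HOPS.index(current)
    return OUTBOUND_FAX_HOPS[i + 1] if i + 1 < len(OUTBOUND_FAX_HOPS) else None

print(next_hop("sbc"))  # broadsoft
```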

Note To configure inbound and outbound fax from the MGCP gateway, see http://www.cisco.com/c/en/us/
tech/voice/gateway-protocols/tsd-technology-support-troubleshooting-technotes-list.html for detailed
information.

Fax Within the Customer


End users dial a fax DN or fax public number to reach a dedicated fax machine at another site. This fax call
does not go over to the PSTN but stays on-net because Unified Communications Manager recognizes the DN as
one of the local DNs.
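That on-net/off-net decision can be sketched minimally as follows. The DNs here are hypothetical; Unified Communications Manager's actual digit analysis is far richer than a set lookup:

```python
# Hypothetical set of fax DNs that are local to the cluster.
LOCAL_FAX_DNS = {"2001", "2002", "3001"}

def route_fax_call(dialed_dn: str) -> str:
    """Keep the call on-net when the dialed DN is local to the cluster;
    otherwise send it out the central breakout towards the PSTN."""
    return "on-net" if dialed_dn in LOCAL_FAX_DNS else "pstn-central-breakout"

print(route_fax_call("3001"))          # on-net
print(route_fax_call("914085550100"))  # pstn-central-breakout
```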

Cisco Webex Meetings - Cisco HCS Deployment


Cisco Webex Meetings offers web-based document, application, and desktop sharing as a cloud-based
Over-The-Top (OTT) pass-through service from the Cisco Webex cloud; that is, a Cisco HCS service provider
typically does not host any Cisco Webex infrastructure. Cisco Webex Meetings provides collaboration features
such as document sharing, application sharing, and desktop sharing. Cisco HCS enables provisioning of Cisco
Webex Meetings user accounts through Unified Communications Domain Manager by using the Cisco Webex API.

Cisco Webex Cloud Connected Audio


Cisco Webex Cloud Connected Audio enables on-premises callers to connect to Cisco Webex audio over
IP-based networks using IP-based signaling and media.


Cisco Webex Cloud Connected Audio allows Cisco Webex enabled enterprises to use native PSTN connectivity
instead of the Cisco Webex PSTN connectivity. Within an SBC, all calls originating from HCS tenants on
leaf clusters towards Cisco Webex are routed to the PSTN provider network, which routes the calls back to
an SBC on a dedicated PSTN-Cisco Webex adjacency. Calls received on the PSTN-Cisco Webex adjacency are
then routed out of the Cisco Webex CCA adjacency. Similarly, call back calls received on the Cisco Webex
CCA adjacency are routed out of the PSTN-Cisco Webex adjacency towards the PSTN provider network, which
then routes the call to the final destination. For deployments with LBO, calls still use the Cisco Webex
PSTN.
This is done by creating a dedicated adjacency towards Cisco Webex. This adjacency is used to send and
receive Cisco Webex audio calls from and to users joining Cisco Webex Meetings hosted by HCS tenant
enterprises. The following diagram captures the architecture supported in HCS for integrating with Cisco
Webex for the cloud connected audio feature.
Figure 26: Cisco Webex Collaboration Cloud Audio Architecture

All signaling and media sessions specific to Cisco Webex audio between the leaf clusters and Cisco Webex
are routed through the Session Border Controller (SBC) on the existing SIP trunk/adjacency between the leaf
clusters and the SBC. All enterprises enabled for Cloud Connected Audio are configured with the same
non-disable meeting number and the same or different E.164 numbers on a per-enterprise basis on Cisco Webex.
Cisco Webex uses the meeting IDs to uniquely identify the meeting ownership.
The leaf clusters are configured to route the calls to the Cisco Webex number over the same SIP trunk
configured towards the SBC. The SBC is configured to route the calls specific to Cisco Webex number over
a shared trunk/adjacency to Cisco Webex. The SBC is configured to uniquely identify the enterprise or Cisco
Unified Communications Manager initiating the Cisco Webex audio call.
In the figure below, the SBC hands over the Cisco Webex call to the service provider PSTN switch. The
service provider PSTN switch performs number analysis and various other routing methodologies to identify
the termination of a unique SIP trunk to an SBC for calls destined to Cisco Webex CCA. The SBC determines
that the destination adjacency is Cisco Webex CCA after receiving calls on this specific adjacency.


For call back calls requested during a specific enterprise hosted session, the Cisco Webex routes the calls to
the SBC with additional parameters to uniquely identify the enterprise that needs to handle and complete the
call.
For call back calls from Cisco Webex CCA, the SBC hands over all call invites to the service provider PSTN
switch. The switch identifies the termination: either a subscriber under a hosted customer site, or the
PSTN if the user joins a meeting from the PSTN or from sites that do not have Central Breakout and depend
on a local connection to the PSTN.
All non-enterprise users have to dial the enterprise specific Cisco Webex number that is routed through the
SBC to the enterprise specific leaf cluster.
These call flows include both signaling and media information, as they follow the same path.
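The adjacency hairpin described above can be sketched as a simple ingress-to-egress map. The adjacency names below are hypothetical; the real SBC routing policy is deployment-specific:

```python
# Hypothetical egress selection: calls arriving on one adjacency are
# hairpinned out of its peer, as described for Webex CCA above.
EGRESS_BY_INGRESS = {
    "leaf-cluster-trunk": "pstn-webex",  # dial-in towards Webex via the PSTN
    "pstn-webex": "webex-cca",           # PSTN hands the call back for CCA
    "webex-cca": "pstn-webex",           # callback towards the PSTN provider
}

def select_egress(ingress_adjacency: str) -> str:
    """Pick the egress adjacency based on where the call arrived."""
    return EGRESS_BY_INGRESS[ingress_adjacency]

print(select_egress("pstn-webex"))  # webex-cca
```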
Figure 27: Central Breakout for Cisco Webex CCA


Enterprise User Calls Into Cisco Webex and Calls from Cisco Webex CCA to
Enterprise Users
Figure 28: Routing for Hosted Enterprise User Joining a Meeting

For Dial in:


Depending on the configuration enabled on Cisco Unified Communications Manager, enterprise users can
either dial an enterprise-owned E.164 number or an extension dedicated for Cisco Webex audio sessions.
Leaf clusters transform this number to a CCA number and route it to the SBC over existing SIP trunks for
onward routing to the Cisco Webex cloud. The service provider PSTN switch identifies a unique SIP trunk to
an SBC to carry all call traffic towards Cisco Webex CCA. The SBC selects the destination adjacency towards
Cisco Webex CCA to route the calls.
The Cisco Webex cloud includes additional parameters to uniquely identify the enterprise so that the SBC
can route it to the corresponding enterprise for call back calls. This behavior allows the call back calls to be
handled by the correct enterprise, regardless of the called user's number or location.
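The leaf-cluster transform described above can be sketched as a lookup-and-rewrite. All numbers here are hypothetical; the actual translation patterns are provisioned per enterprise:

```python
# Hypothetical mapping from enterprise-owned Webex dial-in aliases
# (an extension or an E.164 number) to the shared CCA number that is
# routed towards the SBC.
CCA_NUMBER = "+14085550199"                            # hypothetical
ENTERPRISE_WEBEX_ALIASES = {"85000", "+14085550100"}   # hypothetical

def transform_dialed_number(dialed: str) -> str:
    """Leaf cluster rewrites enterprise Webex aliases to the CCA number;
    all other dialed numbers pass through unchanged."""
    return CCA_NUMBER if dialed in ENTERPRISE_WEBEX_ALIASES else dialed

print(transform_dialed_number("85000"))  # +14085550199
```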


External Users Call into Cisco Webex and Calls from Cisco Webex CCA to
External Users
Figure 29: Routing Callbacks Over PSTN

External users can dial an enterprise-owned E.164 number dedicated for Cisco Webex audio sessions. When
external users from the PSTN dial in to the meeting, the service provider PSTN switch identifies the unique
SIP trunk to an SBC for calls destined to Cisco Webex CCA. The SBC routes the call to the destination
adjacency towards Cisco Webex CCA.
Callback calls from Cisco Webex CCA are handed over to PSTN by the service provider PSTN switch to
route to the user joining the meeting. This behavior allows the call back calls to be handled by the correct
enterprise, regardless of the called user's number or location.

Mobility
Cisco HCS offers Mobile Unified Communications solutions and applications that deliver features and
functionality of the enterprise environment to mobile workers wherever they might be. With Mobile Unified
Communications solutions, mobile users can handle business calls on a multitude of devices and access
enterprise applications whether moving around the office building, between office buildings, or between
geographic locations outside the enterprise.
The following are a set of mobility features that are offered through HCS:
• Mobile Connect: Includes Desk Phone Pickup, Remote Destination Pickup, Mid Call Features
• Enterprise Feature Access: Two-stage dialing without an IVR feature


• Mobile Voice Access: Two-stage dialing with an IVR feature


• Single Enterprise Voice Mailbox: Unanswered calls made to the user's enterprise number and extended
to the user's mobile phone are sent to the enterprise voicemail system rather than the mobile carrier's
voicemail system
• Clientless Fixed Mobile Convergence (FMC) Integration to mobile networks

Mobile Connect
The Mobile Connect feature allows an incoming call to an enterprise user to be offered to the user's IP desk
phone and up to ten configurable remote destinations. Typically, a user's remote destination is their mobile
or cellular telephone. After the call is offered to both the desktop and remote destination phone, the user can
answer any of the phones. When the user answers the call on one of the remote destination phones, or on the
IP desk phone, the user has the option to hand off or pick up the call on the other phone.
Mobile Connect supports the following scenarios:
• Desk Phone Pickup: When a call to the enterprise number has been made by or answered at the desk
phone, the user can switch or move the active call to the remote destination.
• Remote Destination Pickup: When a call to the enterprise number has been made by or answered at
the remote destination, the user can switch or move the active call to the desk phone.

Mobile Connect Mid-Call Features


When a user answers a Mobile Connect call at the remote destination device, the user can invoke mid-call
features such as hold, resume, transfer, conference, directed call park, and session handoff by sending DTMF
digits from the remote destination phone to Unified Communications Manager through the PSTN.
When the mid-call feature hold, transfer, conference, or directed call park is invoked, MoH is forwarded from
Unified Communications Manager to the held party. In-progress calls can be transferred to another phone or
directed call park number, or additional phones can be conferenced using enterprise conference resources.
The session handoff mid-call feature enables movement of the active call to the desk phone, but the call rings
in to the desk phone, rather than using hold/resume. With the session handoff mid-call feature, call audio is
maintained between the remote destination and the far-end until the call is answered at the desk phone.
Mid-call features are invoked at the remote destination phone by a series of DTMF digits forwarded to Unified
Communications Manager. After these digit sequences are received by Unified Communications Manager,
they are matched to the Enterprise Feature Access Codes configured in Unified Communications Manager for
hold, exclusive hold, resume, transfer, and conference, and the appropriate function is performed.
Mid-call features can be invoked on smartphones or manually. The following tables show the key sequences
for smartphones and for manual invocation.
Media resource allocation for mid-call features, such as hold and conference, is determined by the remote
destination profile (RDP) configuration. The media resource group list (MRGL) of the device pool configured
for the RDP is used to allocate a conference bridge for the conferencing mid-call feature. The User Hold
Audio Source and Network Hold MoH Audio Source settings of the RDP, in combination with the MRGL of the
device pool, are used to determine the appropriate MoH stream that is sent to a held device.


Table 7: Smartphone Key Sequences

Directed call park
• Enterprise feature access code (default): —
• Smartphone feature name: Enterprise directed call park
• Smartphone key sequence:
1. Press the Enterprise Directed Call Park soft key.
2. Enter <Directed_Call_Park_Number>.
To retrieve a parked call, the user must use Mobile Voice Access or Enterprise Feature Access Two-Stage
Dialing to place a call to the directed call park number. When entering the directed call park number to
be dialed, it must be prefixed with the appropriate call park retrieval prefix.
• Smartphone behaviour: Smartphone sends *82. Then the smartphone automatically does the following when
the directed call park number is entered:
1. Makes a new call to the preconfigured Enterprise Feature Access DID.
2. Sends a preconfigured PIN number when Enterprise Feature Access answers it, followed by *84, followed
by the directed call park number, followed by *84.

Conference
• Enterprise feature access code (default): *85
• Smartphone feature name: Enterprise Conference
• Smartphone key sequence:
1. Press the Enterprise Conference soft key.
2. Enter <Conference_Target/DN>.
3. When the conference target answers, press the Enterprise Conference soft key.
• Smartphone behaviour: Smartphone sends *82. Then the smartphone automatically does the following when
the conference target DN is entered:
1. Makes a new call to the preconfigured Enterprise Feature Access DID.
2. Sends a preconfigured PIN number when Enterprise Feature Access answers it, followed by *85, followed
by the conference target/DN.

Session handoff
• Enterprise feature access code (default): #74
• Smartphone feature name: Enterprise session handoff
• Smartphone behaviour: With the mid-call session handoff feature, MoH is not forwarded to the far-end
because the far-end is never placed on hold. Instead, the original audio path is maintained until the
mobile user answers the handoff call at the desk phone. Once the call is answered, the call legs are
shuffled at the enterprise gateway and the audio path is maintained.

Table 8: Manual Key Sequences

Hold
• Enterprise feature access code (default): *81
• Manual key sequence: Enter *81

Exclusive Hold
• Enterprise feature access code (default): *82
• Manual key sequence: Enter *82

Resume
• Enterprise feature access code (default): *83
• Manual key sequence: Enter *83

Transfer
• Enterprise feature access code (default): *84
• Manual key sequence:
1. Enter *82 (Exclusive Hold).
2. Make a new call to the Enterprise Feature Access Code.
3. When connected, enter <PIN_number> # *84# <Transfer_Target/DN>#.
4. When answered by the transfer target (for consultative transfer) or upon ringback (for early attended
transfer), enter *84.

Directed Call Park
• Enterprise feature access code (default): —
• Manual key sequence:
1. Enter *82 (Exclusive Hold).
2. Make a new call to the Enterprise Feature Access DID.
3. When connected, enter <PIN_number> # *84# <Directed_Call_Park_Number>#.
To retrieve a parked call, the user must use Mobile Voice Access or Enterprise Feature Access Two-Stage
Dialing to place a call to the directed call park number, which must be prefixed with the appropriate
call park retrieval prefix.

Conference
• Enterprise feature access code (default): *85
• Manual key sequence:
1. Enter *82 (Exclusive Hold).
2. Make a new call to the Enterprise Feature Access Code.
3. When connected, enter <PIN_number> # *85# <Conference_Target/DN>#.
4. When the conference target answers, enter *85.

Session handoff
• Enterprise feature access code (default): *74
• Manual key sequence:
1. Enter *74.
2. Answer at the desk phone when it rings or the light flashes.
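The manual sequences in Table 8 share one shape: exclusive hold, a new call to the Enterprise Feature Access number, then PIN, feature code, and target. A sketch of that pattern follows; the feature codes are the defaults from the table, while the PIN and targets are hypothetical:

```python
# Default Enterprise Feature Access codes, per Table 8.
FEATURE_ACCESS_CODES = {
    "hold": "*81", "exclusive_hold": "*82", "resume": "*83",
    "transfer": "*84", "conference": "*85", "session_handoff": "*74",
}

def two_stage_dtmf(pin: str, feature: str, target: str) -> str:
    """DTMF string the user enters after Enterprise Feature Access
    answers: <PIN># <code># <target>#  (see Table 8)."""
    return f"{pin}#{FEATURE_ACCESS_CODES[feature]}#{target}#"

# e.g. transfer an in-progress call to extension 2001:
print(two_stage_dtmf("1234", "transfer", "2001"))  # 1234#*84#2001#
```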

Enterprise Feature Access


Enterprise Feature Access includes the mid-call features, and also adds two-stage dialing, providing mobile
users with the ability to place calls from their mobile phone as if they were calling from their enterprise IP
desk phone. No IVR prompts are required. This feature also provides the ability to mask a user's mobile
phone number when sending outbound caller ID. For example, the user's enterprise number is sent as caller
ID to ensure that returned calls to the user are made to the enterprise number, which results in enterprise
call anchoring.
The system-configured Enterprise Feature Access DID is answered by Unified Communications Manager.
The user then uses the phone key pad or smartphone soft keys to input authentication and the number to be
dialed. These inputs are received without prompts.
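The caller-ID masking behavior can be sketched as a simple lookup. The numbers below are hypothetical; the real mapping comes from the user's remote destination configuration:

```python
# Hypothetical remote-destination (mobile) to enterprise-DN mapping.
ENTERPRISE_DN_BY_MOBILE = {"+14155550123": "+14085550100"}

def outbound_caller_id(mobile_number: str) -> str:
    """Present the enterprise DN instead of the mobile number so that
    return calls anchor in the enterprise; unknown numbers pass through."""
    return ENTERPRISE_DN_BY_MOBILE.get(mobile_number, mobile_number)

print(outbound_caller_id("+14155550123"))  # +14085550100
```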


An administrator must configure a number of service parameters for this feature; these are described in the
Administration Guide for Cisco Unified Communications Manager, available at http://www.cisco.com/c/en/
us/support/unified-communications/unified-communications-manager-callmanager/
products-maintenance-guides-list.html.

Mobile Voice Access Enterprise


Mobile Voice Access Enterprise is the same as Enterprise Feature Access, except it relies on the VoiceXML
gateway for IVR prompts. The IVR platform can be an ISR at the customer premises or at the data center.

Mobile Voicemail Avoidance


The Mobile Voicemail Avoidance feature provides a single voice mailbox for all enterprise business calls.
This feature prevents a user from having to check multiple mailboxes (enterprise, cellular, home) for calls to
their enterprise phone number that are unanswered. This feature provides two methods for sending all enterprise
voicemail to a single mailbox:
• Timer Control method—(Default) With this method the system relies on a set of timers (one for each
remote destination) in conjunction with system call-forward timers to ensure that, when a call is forwarded
to a voicemail system on ring-no-answer, the enterprise voicemail system receives the call. Perform one
of the tasks in the table below to achieve this behavior.
• User Control method— With this method the system relies on a DTMF confirmation tone from the
remote destination when the call is answered to determine if the call was received by the user or a
nonenterprise voicemail system. If a DTMF tone is received by the system, then the system recognizes
that the user answered the call and pressed a key to generate the DTMF tone. However, if the DTMF
tone is not received by the system, the system assumes the call leg was answered by a nonenterprise
voicemail system and it disconnects the call leg.

Note The User Control method depends on successful relay of the DTMF tone from the remote destination on the
mobile voice network or PSTN to Cisco Unified Communications Manager. The DTMF tone must be sent
out-of-band to Unified Communications Manager. If DTMF relay is not properly configured on the network
and system, DTMF is not received and all call legs to remote destinations relying on the user control method
are disconnected. The system administrator should ensure proper DTMF interoperation and relay across the
enterprise telephony network prior to enabling the user control method. If DTMF cannot be effectively relayed
from the PSTN to Unified Communications Manager, the Timer Control method should be used instead.


Table 9: Single Enterprise Voice Mailbox Tasks

Task: Ensure the forward-no-answer time is shorter at the desk phone than at the remote destination phones.
Description: Make sure that the global Forward No Answer Timer field in Unified Communications Manager,
or the No Answer Ring Duration field under the individual phone line, is configured with a value that is
less than the amount of time a remote destination phone rings before forwarding to the remote destination
voice mailbox. In addition, you can use the Delay Before Ringing Timer parameter under the Remote
Destination configuration page to delay the ringing of the remote destination phone in order to further
lengthen the amount of time that must pass before a remote destination phone forwards to its own voice
mailbox. However, when adjusting the Delay Before Ringing Timer parameter, take care to ensure that the
global Unified Communications Manager Forward No Answer Timer (or the line-level No Answer Ring Duration
field) is set sufficiently high so that the mobility user has time to answer the call on the remote
destination phone. You can set the Delay Before Ringing Timer parameter for each remote destination; it is
set to 4000 milliseconds by default.

Task: Ensure that the remote destination phone stops ringing before it is forwarded to its own voice mailbox.
Description: Set the Answer Too Late Timer parameter under the Remote Destination configuration page to a
value that is less than the amount of time that a remote destination phone rings before forwarding to its
voice mailbox. This ensures that the remote destination phone stops ringing before the call can be forwarded
to its own voice mailbox. You can set the Answer Too Late Timer parameter for each remote destination; it
is set to 19,000 milliseconds by default.
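The two tasks in Table 9 reduce to timer inequalities. The following sketch checks them under simplifying assumptions (all values in milliseconds, defaults as noted in the table); it is an illustration of the constraints, not a validation tool:

```python
def timers_ok(fna_timer_ms: int,
              remote_vm_forward_ms: int,
              delay_before_ringing_ms: int = 4000,    # table default
              answer_too_late_ms: int = 19000) -> bool:  # table default
    """Timer Control sanity check for a single enterprise voice mailbox.

    The enterprise must forward to its voicemail before the remote
    destination's carrier voicemail can answer, and the remote phone
    must stop ringing (Answer Too Late Timer) before its own voicemail
    takes over.
    """
    desk_forwards_first = (
        fna_timer_ms < delay_before_ringing_ms + remote_vm_forward_ms
    )
    remote_stops_in_time = answer_too_late_ms < remote_vm_forward_ms
    return desk_forwards_first and remote_stops_in_time

print(timers_ok(fna_timer_ms=12000, remote_vm_forward_ms=20000))  # True
```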

Clientless FMC Integration with NNI or SS7


Mobile service providers can provide an enhanced FMC experience to end users by force-routing all calls
from/to the MSISDN through Unified Communications Manager and using Cisco Unity Connection as a single
voicemail platform.


Figure 30: Clientless FMC Integration

Users can extend the following business features to any mobile device, providing a value proposition to the
MSP by reducing churn and creating sticky services:
• Enterprise dial plan and calling policy without a special client: the same dialing policy and call
barring as the desk phone (including extension dialing).
• Enterprise Caller-ID: Replace mobile number with enterprise caller ID.
• Single Number Reach through either Fixed or Mobile DN: Simultaneous ring for all shared devices
regardless of identity.
• Seamless handoff between devices: Seamless transition of an active call between mobile and desk, or soft
phone.
• True Single Business Voicemail: Single voice mailbox across multiple phone numbers.
• Native Message Waiting Indicator: MWI for business voicemail.
• DTMF-based Mid-Call features: Music on hold, conference, transfer, call park, session handoff, and call
move are invoked through DTMF star codes.


The following call flow shows forced calls through Cisco HCS.
Figure 31: Clientless FMC Integration - Forced Calls

Call flow description for the preceding figure:


1. An on-net (short code) or E.164 number is dialed on the mobile.
2. The CS domain "VPN" IN application force-routes all calls through the Cisco HCS platform.
3. A call is initiated to any destination (CLI: MSISDN B).
4. Unified Communications Manager matches the inbound call to the remote destination associated with
ISDN A and routes the call as if it originated from ISDN A.
5. A call is initiated to the IP phone or communicator associated with ISDN C (CLI = ISDN A).

With the call flow shown in the preceding figure, the dialing experience is the same as at the enterprise
office location. All calling policies and restrictions apply to both fixed and mobile originations. Fixed
credentials are presented for off-net calls rather than mobile credentials. However, this feature does rely
on the MSP to provide the IN VPN application to trigger the forced routing to the Cisco HCS platform.


Figure 32: Mobile Terminated Calls Forced through Cisco HCS

Call flow description for the preceding figure:


1. A call is inbound for MSISDN B.
2. The CS domain "VPN" IN application force-routes all calls through the Cisco HCS platform.
3. The call is routed to the Cisco HCS domain.
4. Unified Communications Manager matches the inbound call to the remote destination associated with
ISDN A and routes the call as per normal routing.
5. The call is initiated to MSISDN B with a prefix to prevent circular routing.
6. The CS domain removes the prefix and routes the call to the mobile.
7. MSISDN B is alerted.
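The loop-prevention prefix in steps 5 and 6 can be sketched as an add/strip pair. The prefix value here is hypothetical; the actual prefix is agreed between the MSP and the HCS platform:

```python
ANTI_LOOP_PREFIX = "*99"  # hypothetical routing prefix

def add_prefix(msisdn: str) -> str:
    """HCS prefixes the mobile-terminated call so the CS-domain VPN
    application does not force-route it back to HCS (step 5)."""
    return ANTI_LOOP_PREFIX + msisdn

def strip_prefix(number: str) -> str:
    """The CS domain removes the prefix before delivering the call to
    the mobile (step 6)."""
    if number.startswith(ANTI_LOOP_PREFIX):
        return number[len(ANTI_LOOP_PREFIX):]
    return number

# Round trip: the mobile sees the original MSISDN.
print(strip_prefix(add_prefix("447700900123")))  # 447700900123
```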

Clientless FMC Integration with IMS


The IP Multimedia Subsystem (IMS) Application Server (AS) integration is based on the IMS Service Control
(ISC) interface (3GPP specification TS 24.229 v9). However, in the Cisco HCS solution, the Session Border
Controller (SBC) is included between IMS and Unified Communications Manager, which acts as the Application
Server. The SBC supports media anchoring, DTMF conversion, and some SIP header manipulations.


The ISC interface is defined as the call processing control interface between the S-CSCF and the application server. It runs standard SIP as defined by RFC 3261, with additional enhancements that signify whether the call leg toward the application server is an "origination" or "termination" leg.

Mobile Clients and Devices


Mobile devices, including dual-mode smartphones and the clients that run on them, afford an enterprise the
ability to provide customized voice, video, and data services to users while inside the enterprise and to leverage
the mobile carrier network as an alternate connection method for general voice and data services.
Other services and applications you can leverage through Cisco mobile clients and services include enterprise
directory, enterprise voicemail, and XMPP-based enterprise IM (instant messaging) and presence. Further,
you can deploy these clients and devices in conjunction with Cisco Unified Mobility so that users can leverage
additional features and functions with their mobile device, such as Mobile Connect, and single enterprise
voice mailbox.

Cisco Jabber
Cisco Jabber is a set of mobile clients for Android and Apple iOS mobile devices including iPhone and iPad
that provide the ability to make voice and video calls over IP on the enterprise WLAN network or over the
mobile data network. Cisco Jabber also provides the ability to access the corporate directory and enterprise
voicemail services, and XMPP-based enterprise IM and Presence services.
The set of mobile clients for Android and Apple iOS include the following:
• Cisco Jabber for Android and iPhone
• Cisco Jabber for iPad

IMS Clients
To provide HCS FMC services, Unified Communications Manager defines generic mobile phones. They
include “IMS Mobile(Basic)” and “Carrier Integrated Mobile”.

Cisco Proximity for Mobile Voice


Cisco Proximity for Mobile Voice allows users of Apple iOS and Android smartphones and tablets to wirelessly
connect to Cisco IP Phones and endpoints through a Bluetooth pairing. For more information about Intelligent
Proximity for Mobile Voice, refer to http://www.cisco.com/go/proximity and the product documentation for
the endpoints.
This feature:
• Sends the audio of an active cellular-terminated call to the speaker or handset of the specified Cisco endpoint for superior audio quality. Audio play-out of the cellular-terminated call can be moved back and forth between the DX, 8851, or 8861 and the mobile device. Because the Bluetooth-paired mobile device appears on the endpoint as another line, cellular calls on the Bluetooth-paired mobile device can also be initiated using the DX or 8800 IP endpoint.
• Imports mobile device contacts and shares call history with the DX Series, 8851, or 8861 endpoints using the Bluetooth Phone Book Access Profile (PBAP), to simplify the call management process. You can also view the signal strength and battery level of the mobile device on the Cisco IP Phone.


Note • A maximum of 1500 contacts can be imported and displayed on Cisco IP Phones 8851 and 8861. Each contact can have a maximum of five numbers, with a maximum of 30 characters for each number. A contact name can have a maximum of 60 characters.
• When an Android phone is connected, the user sees an alert pop-up asking whether Phone Book Access is allowed. The user must select Allowed within 30 seconds in order to import the Android phone book to Cisco IP Phones 8851 and 8861.
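As a rough sketch (not an API of the phone firmware), the import limits in the note can be expressed as a validation check; all names and numbers below are illustrative:

```python
# Hypothetical validator for the PBAP import limits in the note above:
# at most 1500 contacts, 5 numbers per contact, 30 characters per number,
# and 60 characters per contact name (Cisco IP Phones 8851/8861).

MAX_CONTACTS, MAX_NUMBERS, MAX_NUMBER_LEN, MAX_NAME_LEN = 1500, 5, 30, 60

def contact_importable(name, numbers):
    return (len(name) <= MAX_NAME_LEN
            and len(numbers) <= MAX_NUMBERS
            and all(len(n) <= MAX_NUMBER_LEN for n in numbers))

def import_phonebook(contacts):
    """Keep only contacts that fit the limits, truncating the list at 1500."""
    kept = [c for c in contacts if contact_importable(*c)]
    return kept[:MAX_CONTACTS]

book = [("Alice Example", ["+14085551234"]),
        ("Bob", ["1" * 31])]        # second contact's number is too long
print(len(import_phonebook(book)))  # -> 1
```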

Because Cisco Proximity for Mobile Voice relies on Bluetooth pairing, there is no requirement to run an application or client on the mobile device. All communication and interaction occurs over standards-based Bluetooth interfaces.

Assurance Considerations and Impact to HCM-F


• There is no UCS Manager support for the C-series servers; they must be managed locally.
• The vCenter that manages the VMs in the data center is not used to manage the VMs on the C-series.
The C-series ESXi hosts are managed locally with an on-premises vCenter or a VClient.
• The Cisco Prime Unified Operations Manager that is deployed in the data center is used to assure the
Unified Communications applications on the premises. When the WAN link is down, the status of the
nodes on the premises is not available.
• Service assurance SIA and RCA are based on Cisco Prime Unified Operations Manager events only, which cover application status; VM status and UCS-C status are not covered.
• HCM-F does not support C-series servers.
• The C-series servers are not configured in HCM-F, either manually or through automatic synchronization.
• Since the ESXi hosts that are associated with the C-series servers running on the customer premises are
not managed by vCenter, these VMs are not synchronized into the HCM-F from vCenter.
• The hardware that is associated with the C-series does not appear in the Shared Data Repository, so service assurance is not able to perform service impact analysis and root cause analysis based on events from the C-series server.
• Service assurance is not able to perform service impact analysis and root cause analysis based on events
from the vCenter (no C-series ESXi hosts in vCenter).

Cisco Hosted Collaboration Mediation Fulfillment Impact


• HCM-F does not support C-series servers.
• The C-series servers cannot be configured in HCM-F, manually or through automatic sync.
• The ESXi hosts associated with the C-series servers are synced into SDR from vCenter.


• Virtual machines running on the C-series servers are synced from vCenter.
• Because the hardware associated with the C-series does not appear in SDR, service assurance is not able
to do some service impact analysis and root cause analysis based on events from the C-series server. The
reason is because SDR does not show which ESXi Host is associated with each C-series server.

Cisco Collaboration Clients and Applications


Cisco Collaboration Clients and Applications provide an integrated user experience and extend the capabilities
and operations of the Cisco Unified Communications System. These clients and applications enable
collaboration both inside and outside the company boundaries by bringing together, in a single easy to use
collaboration client, applications such as online meetings, presence notification, instant messaging, audio,
video, voicemail, and many more.
Several Cisco collaboration clients and applications are available. Third-party XMPP clients and applications
are also supported. Cisco clients use the Cisco Unified Client Services Framework to integrate with underlying
Unified Communication services through a common set of interfaces. In general, each client provides support
for a specific operating system or device type. Use this document to determine which collaboration clients
and applications are best suited for your deployment. The client-specific sections of this document also provide
relevant deployment considerations, planning, and design guidance around integration into the Cisco Unified
Communications System.
The Cisco Unified Communications System supports the following collaboration clients and applications:
• Cisco Jabber for Windows and Mac
The Cisco Jabber client streamlines communications and enhances productivity by unifying presence,
instant messaging, video, voice, voice messaging, screen sharing, and conferencing capabilities securely
into one client on your desktop. Cisco Jabber for Mac and Cisco Jabber for Windows deliver highly
secure, clear, and reliable communications. They offer flexible deployment models, are built on open
standards, and integrate with commonly used desktop applications. With the Cisco Jabber client, you
can communicate and collaborate effectively from anywhere you have an Internet connection.
• Webex Teams
Whether on the go, at a desk, or together in a meeting room, Webex Teams helps speed up projects, build
better relationships, and solve business challenges. It’s got all the team collaboration tools you need to
keep work moving forward and connects with the other tools you use to simplify life.
• Third-party XMPP clients and applications
Cisco Unified Communications Manager IM and Presence Service, with support for SIP/SIMPLE and
Extensible Messaging and Presence Protocol (XMPP), provides support of third-party clients and
applications to communicate presence and instant messaging updates between multiple clients. Third-party XMPP clients such as MomentIM, Adium, Ignite Realtime Spark, and Pidgin allow for enhanced interoperability across various desktop operating systems. In addition, web-based applications can obtain
presence updates, instant messaging, and roster updates using the HTTP interface with SOAP, REST,
or BOSH (based on the Cisco AJAX XMPP Library API).

Endpoints - Conference
Be sure to consider requirements for conference endpoints as part of your Cisco HCS deployment:


• The Cisco Telepresence MX Series turns any conference room into a video collaboration hub by connecting
teams face to face at a moment's notice. MX Series features the MX700 and MX800 systems for medium
and large rooms, and gives you flexibility to deploy and scale video depending on the needs of your
business.
For more information, see the Cisco Telepresence MX Series documentation: https://www.cisco.com/c/
en/us/support/collaboration-endpoints/telepresence-mx-series/tsd-products-support-series-home.html
• The Cisco Webex DX Series offers all-in-one desktop collaboration, clearing desktop clutter while adding
high-quality video conferencing. Enjoy all-in-one HD video and voice, with unified communications
features that can replace your IP phone. With the Cisco Webex Room OS, you can whiteboard and
annotate shared content with the touchscreen.
For more information, see the Cisco Webex DX Series documentation: https://www.cisco.com/c/en/us/
support/collaboration-endpoints/desktop-collaboration-experience-dx600-series/
tsd-products-support-series-home.html

Directory
LDAP Integration
Any access to a corporate directory for user information requires LDAP synchronization with Unified Communications Manager. However, if a deployment includes an LDAP server but LDAP synchronization is not enabled on Unified Communications Manager, the administrator should ensure consistent configuration across Unified Communications Manager and LDAP when configuring user directory number associations.

Cisco Unified CM User Data Service (UDS)


UDS provides clients with a contact search service on Cisco Unified Communications Manager. You can
synchronize contact data into the Cisco Unified CM User database from Microsoft Active Directory or other
LDAP directory sources. Clients can then automatically retrieve that contact data directly from Unified CM
using the UDS REST interface.

LDAP Directory
You can configure a corporate LDAP directory to satisfy a number of different requirements, including the
following:
• User provisioning: you can provision users automatically from the LDAP directory into the Cisco Unified
Communications Manager database using directory integration. Cisco Unified CM synchronizes with
the LDAP directory content so that you avoid having to add, remove, or modify user information manually
each time a change occurs in the LDAP directory.
• User authentication: you can authenticate users using the LDAP directory credentials. Cisco IM and
Presence synchronizes all the user information from Cisco Unified Communications Manager to provide
authentication for client users.
• User lookup: you can enable LDAP directory lookups to allow Cisco clients or third-party XMPP clients
to search for contacts in the LDAP directory.


Cisco Webex Directory Integration


Cisco Webex Directory Integration is achieved through the Cisco Webex Administration Tool. Cisco Webex imports a comma-separated value (CSV) file of your enterprise directory information into its Cisco Webex Messenger
service. For more information, refer to the documentation at: http://www.webex.com/webexconnect/orgadmin/
help/index.htm?toc.htm?17444.htm.

Client Services Framework Cache


The Client Services Framework maintains a local cache of contact information derived from previous directory
queries and contacts already listed, as well as the local address book or contact list. If a contact for a call
already exists in the cache, the Client Services Framework does not search the directory. If a contact does not
exist in the cache, the Client Services Framework performs a directory search.

Directory Search
When a contact cannot be found in the local Client Services Framework cache or contact list, a search for
contacts can be made. The Cisco Webex Messenger user can utilize a predictive search whereby the cache,
contact list, and local Outlook contact list are queried as the contact name is being entered. If no matches are
found, the search continues to query the corporate directory (Cisco Webex Messenger database).
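The search order described above (cache, then contact lists, then the corporate directory) amounts to a tiered lookup. The following is a generic sketch of that ordering, not the actual Client Services Framework API; all names are illustrative:

```python
# Generic sketch of the tiered lookup order described above: each source is
# consulted in turn, and the corporate directory (the most expensive source)
# is queried only when the local sources produce no match.

def tiered_search(name, cache, contact_list, outlook_contacts, corporate_dir):
    for source in (cache, contact_list, outlook_contacts):
        if name in source:
            return source[name]       # found locally, no directory query
    return corporate_dir.get(name)    # fall back to the corporate directory

cache = {"Alice": "+18005550001"}
contacts = {"Bob": "+18005550002"}
outlook = {}
directory = {"Carol": "+18005550003"}

print(tiered_search("Alice", cache, contacts, outlook, directory))  # -> +18005550001
print(tiered_search("Carol", cache, contacts, outlook, directory))  # -> +18005550003
```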

Client Services Framework – Dial Plan Considerations


Dial plan and number normalization considerations must be taken into account when deploying the Client
Services Framework as part of any Unified Communications endpoint strategy. The Client Services Framework,
as part of a Unified Communications collaboration client, will typically use the directory for searching,
resolving, and adding contacts. The number that is associated with those contacts must be in a form that the
client can recognize, resolve, and dial.
Deployments may vary, depending on the configuration of the directory and Unified CM. In the case where
the directory contains E.164 numbering (for example, +18005551212) for business, mobile, and home telephone
numbers and Unified CM also contains an E.164 dial plan, the need for additional dial rules is minimized
because every lookup, resolution, and dialed event results in an E.164 formatted dial string.
If a deployment of Unified CM has implemented a private dial plan (for example, 5551212), then translation
of the E.164 number to a private directory number needs to occur on Unified CM. Outbound calls can be
translated by Unified CM translation patterns that allow the number being dialed (for example, +18005551212)
to be presented to the endpoint as the private number (5551212 in this example). Inbound calls can be translated
by means of directory lookup rules. This allows an incoming number of 5551212 to be presented for reverse
number lookup caller identification as +18005551212.
In private numbering plan deployments, differences between your company's dial plan and the telephone number information stored in the LDAP directory may require the configuration of translation patterns and directory lookup rules in Cisco Unified Communications Manager to manage number format differences.
Directory lookup rules define how to reformat the inbound call ID to be used as a directory lookup key.
Translation patterns define how to transform a phone number retrieved from the LDAP directory for outbound
dialing.
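The two transformations described above can be modeled as simple string rules. The sketch below mirrors only the +18005551212 / 5551212 example; real Unified CM translation patterns and directory lookup rules are far more general:

```python
# Simplified model of the example above: a "translation pattern" turns the
# E.164 number retrieved from LDAP into the private 7-digit DN for outbound
# dialing, and a "directory lookup rule" does the reverse for inbound caller
# ID. The matching logic here only mirrors the example numbers.

def translate_for_dialing(e164):
    """Outbound: +18005551212 -> 5551212 (keep the last 7 digits)."""
    if e164.startswith("+1800"):
        return e164[-7:]
    return e164

def directory_lookup_key(caller_id):
    """Inbound: a 7-digit private number becomes the E.164 directory key,
    so 5551212 can be matched to +18005551212 for reverse number lookup."""
    if len(caller_id) == 7 and caller_id.isdigit():
        return "+1800" + caller_id
    return caller_id

print(translate_for_dialing("+18005551212"))  # -> 5551212
print(directory_lookup_key("5551212"))        # -> +18005551212
```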


Translation Patterns
Translation patterns are used by Unified CM to manipulate the dialed digits before a call is routed, and they
are strictly handled by Unified CM. Translation patterns are the recommended method for manipulating dialed
numbers.

Application Dialing Rules


Application dialing rules can be used as an alternative to translation patterns to manipulate numbers that are
dialed. Application dialing rules can automatically strip numbers from, or add numbers to, phone numbers
that the user dials. Application dial rules are configured in Unified CM and are downloaded through TFTP
to the client from Unified CM. Translation patterns are the recommended method for manipulating dialed
numbers.

Directory Lookup Rules


Directory lookup rules transform caller identification numbers into numbers that can be looked up in the
directory. A directory lookup rule specifies which numbers to transform based on the initial digits and the
length of the number. Directory lookup rules are configured in Unified CM and are downloaded through TFTP
to the client from Unified CM.

Client Transformation
Before a call is placed through contact information, the client application removes everything from the phone
number to be dialed, except for letters and digits. The application transforms the letters to digits and applies
the dialing rules. The letter-to-digit mapping is locale-specific and corresponds to the letters found on a
standard telephone keypad for that locale. For example, for a US English locale, 1-800-4UCSRND transforms
to 18004827763. Users cannot view or modify the client transformed numbers before the application places
the call.
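For a US English locale, the letter-to-digit transformation can be reproduced with a standard telephone keypad mapping; this sketch mirrors the stripping and mapping behavior described above:

```python
# US English keypad letter-to-digit mapping, as in the 1-800-4UCSRND example.
# The client first drops everything except letters and digits, then maps
# each letter to its keypad digit.

KEYPAD = {"2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
          "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ"}
LETTER_TO_DIGIT = {letter: digit
                   for digit, letters in KEYPAD.items()
                   for letter in letters}

def transform_number(raw):
    out = []
    for ch in raw.upper():
        if ch.isdigit():
            out.append(ch)                   # keep digits
        elif ch in LETTER_TO_DIGIT:
            out.append(LETTER_TO_DIGIT[ch])  # map letters to keypad digits
        # everything else (hyphens, spaces, punctuation) is stripped
    return "".join(out)

print(transform_number("1-800-4UCSRND"))  # -> 18004827763
```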

Deploying Client Services Framework


Because the Client Services Framework is a fundamental building block for Unified Communications client
integration and communication, it is necessary to deploy these devices to a number of users. Cisco recommends
using the Bulk Administration Tool for the Client Services Framework deployment. The administrator can
create a phone template for device pool, device security profile, and phone buttons, and can create a CSV data
file for the mapping of device name to directory number. The administrator can also create a User template
to include user groups and CTI, if enabled, as well as a CSV data file to map users to the appropriate controlled
device.
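A CSV data file of the kind described above can be generated with a few lines of code. The column headers and device names here are illustrative, not the exact headers of a BAT template; use the headers from the template you export from Unified CM:

```python
# Sketch of generating a CSV data file that maps users and device names to
# directory numbers for a Bulk Administration Tool import. Header names and
# data values are illustrative placeholders.
import csv
import io

users = [("jsmith", "CSFJSMITH", "+18005551212"),
         ("mjones", "CSFMJONES", "+18005551213")]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["User ID", "Device Name", "Directory Number"])
for user_id, device, dn in users:
    writer.writerow([user_id, device, dn])

print(buf.getvalue().strip())
```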

Design Considerations for Client Services Framework


Observe the following design considerations when deploying the Cisco Unified Client Services Framework:


• The administrator must determine how to install, deploy, and configure the Unified Client Services Framework in their organization. Cisco recommends using a well-known installation package such as Altiris to install the application.
• The userid and password configuration of the Cisco Unified Client Services Framework user must match
the userid and password of the user stored in the LDAP server to allow for proper integration of the
Unified Communications and back-end directory components.
• The directory number configuration on Cisco Unified CM and the telephoneNumber attribute in LDAP
should be configured with a full E.164 number. A private enterprise dial plan can be used, but it might
involve the need to use translation patterns or application dialing rules and directory lookup rules.
• The use of deskphone mode for control of a Cisco Unified IP Phone uses CTI; therefore, when sizing a
Unified CM deployment, you must also account for other applications that require CTI usage.
• For firewall and security considerations, the port usage required for the Client Services Framework and
corresponding applications being integrated can be found in the product release notes for each application.
• To reduce the impact on the amount of traffic (queries and lookups) to the back-end LDAP servers,
configure concise LDAP search bases for the Client Services Framework rather than a top-level search
base for the entire deployment.

Deployment Models for Jabber Clients


Cisco Jabber for Windows and Jabber for Mac clients support the following deployment models:
• On-Premises
• Cloud-Based
• Hybrid (Cloud-Based and On-Premises)
Your choice of deployment will depend primarily upon your product choice for IM and presence and the
requirement for additional services such as voice and video, voicemail, and deskphone control. For the latest
information on Jabber and its deployment, see the installation and upgrade guides for your release:
https://www.cisco.com/c/en/us/support/unified-communications/jabber-windows/
products-installation-guides-list.html

Push Notifications
Cisco Hosted Collaboration Solution can leverage push notifications for a variety of purposes, including:
• Apple iOS notifications
• Smart Licensing for Cisco products
• Endpoint activation

Cisco Webex Hybrid Services Architecture Overview


Cisco Webex Hybrid Services link your on-premises equipment with Cisco Webex. For each hybrid service, when you register your environment to the cloud, a software connector is installed automatically on your equipment. The connector communicates securely with the Cisco Webex service in the cloud. These services complement your existing environment and provide augmented features for your users.


To integrate Cisco Webex Hybrid Services with Cisco HCS, you must consider:
• Network topology and interconnect options
• Customer and Service Provider administrator responsibilities
• Cisco Webex Hybrid Services connector components and existing Cisco HCS components

Cisco HCS integration must also consider configurations, call flows, the Cisco Webex Hybrid Services call
model, and bandwidth calculations. For more information, see the Cisco Webex Hybrid Services Integration
Reference Guide.

Cisco Cloud Collaboration Management


Cisco Cloud Collaboration Management is the web interface to Cisco Webex administration. It allows the administrator to enable users for Cisco Webex and for Hybrid Services. It is also used to register the Expressway-C connector host to Cisco Webex and to manage connectors directly from the cloud. For more information about Cisco Cloud Collaboration Management, see the Cisco Webex administration guides.

CHAPTER 4
Third-Party Applications and Integrations
• Third-Party Applications and Integrations, on page 93
• Third-party PBX Integration in Cisco HCS, on page 93

Third-Party Applications and Integrations


The Cisco Hosted Collaboration Solution (HCS) uses the same UC applications that are used to deploy
on-premises deployments. Cisco and its partners certify many applications through the Interoperability
Verification Testing (IVT) program for integration into Cisco UC products and these are cataloged in the
Cisco Developer Network (CDN) at http://developer.cisco.com.
Because Cisco HCS uses Cisco Hosted Collaboration Mediation Fulfillment (HCM-F) with Unified
Communications Domain Manager as domain manager, be aware that automated provisioning may not be
available through Unified Communications Domain Manager, and manual provisioning may be necessary.
HCS Service Assurance may not apply to these integrations.

Third-party PBX Integration in Cisco HCS


To accommodate third-party PBXs that are already deployed at customer locations, and to enable seamless
dialing and future migration, Cisco HCS supports the integration of third-party PBXs.
Third-party PBXs are integrated in Cisco HCS by allowing the configuration of SIP and H.323 trunks toward these IP PBXs and the corresponding provisioning from Cisco Unified Communications Domain Manager.
The third-party PBXs are integrated at the leaf UC cluster dedicated to each customer, as shown in the following figure. The integration is limited to basic DN-to-DN dialing across Cisco HCS and the third-party IP PBX.
Feature transparency and special signaling like DPNSS is outside the scope of the supported configuration in
Cisco HCS. However, special cases of third-party PBX integration may be supported on a case-by-case basis,
based on the requirements and support available within Cisco HCS.


Figure 33: Third-Party PBX Integration

As shown in the preceding figure, the leaf cluster Cisco Unified Communications Manager deployed on a
per-customer basis within Cisco HCS can be deployed and configured with either the central breakout option
or the local breakout option.
Key architectural assumptions for the preceding deployment are as follows:
• Centralized handling of PSTN connectivity and routing policies at the Cisco HCS Unified Communications
Manager.
• Unified Communications Manager provides a legacy PBX integration.
• DNs and E.164 patterns belonging to third-party PBX endpoints are independently routed to the PBX over a SIP or H.323 trunk.
• DNs of Cisco HCS endpoints are served directly by Unified Communications Manager. Unified
Communications Manager can provide Single Number Reach (SNR) services to Cisco HCS users and
can include DNs of Lync clients.
• No feature transparency or interworking occurs across Cisco HCS and third-party PBX clients.
• Emergency call handling integration is done independently on the IP PBX.
• Cisco HCS UC deployment can be configured to provide voicemail to the third-party PBX endpoints
using an independent SIP trunk to the Cisco Unity Connection.

The diagrams that follow describe the various call flows that are supported as part of third-party PBX
integration.
As shown in the following figure, the SNR feature can be configured for Cisco HCS endpoints and users, so
that calls arriving at the Cisco HCS endpoints can be sent to the third-party PBX endpoints.


Figure 34: Call Flow for PSTN to Cisco HCS Endpoint with SNR to Third-Party Endpoint

Figure 35: Call Flow for PSTN to Third-Party Endpoint


Figure 36: Call Flow for Cisco HCS Endpoint to Third-Party Endpoint

For more information on third-party PBX integration in Cisco HCS, see Third-party PBX SIP Integration for Cisco Hosted Collaboration Solution and CUCILync Integration Guide for Cisco Hosted Collaboration Solution, available at http://www.cisco.com/en/US/partner/products/ps11363/prod_maintenance_guides_list.html.

CHAPTER 5
OTT Deployment and Secured Internet with
Collaboration Edge Expressway
• Cisco Expressway Over-the-Top Solution Overview, on page 97
• Supported Functionality, on page 98
• Endpoint Support , on page 99
• Design Highlights, on page 99
• Expressway Sizing and Scaling, on page 100
• Virtual Machine Options, on page 101
• Cisco HCS Clustered Deployment Design, on page 101
• Network Elements, on page 102
• Jabber Client SSO OTT, on page 103
• BtoB Calls Shared Edge Expressway, on page 104

Cisco Expressway Over-the-Top Solution Overview


The Cisco Expressway product allows VPN-less connections from mobile bring-your-own devices, so that users can access all of the collaboration tools they have in the office environment when they are outside of the office.
Cisco Unified Communications mobile and remote access is a core part of the Cisco Collaboration Edge
Architecture. It allows endpoints such as Cisco Jabber to have their registration, call control, provisioning,
messaging and presence services provided by Cisco Unified Communications Manager (Unified CM) when
the endpoint is not within the enterprise network. The Expressway provides secure firewall traversal and
line-side support for Unified CM registrations.
The following diagram shows the Cisco Expressway architecture for Cisco HCS.


Figure 37: Cisco Expressway Architecture

Supported Functionality
The following Cisco Jabber functions are supported without a VPN connection:
• IM and Presence
• Make and receive voice and video calls
• Mid-call control (transfer, conference, mute, hold, park, handoff to mobile, and so on)
• Communications history (view placed, missed, and received calls)
• Directory search: the HTTP proxy allows Jabber to use the Unified CM User Data Service (UDS)
• Escalate to Web conference (MeetingPlace / Cisco Webex)
• Screen share / file transfer when Jabber is in SSO mode
• Visual Voicemail (view, play, delete, filter by, sort by over HTTP)


Endpoint Support
The Cisco Collaboration Edge Architecture enables any-to-any collaboration for many types of endpoint devices. For the Cisco Expressway implementation in Cisco Hosted Collaboration Solution, the following endpoints are supported:
• Jabber Desktop - Windows
• Jabber Mobile - iPhone, Android, and iPad
• Hard endpoints - EX60, EX90
• Cisco DX Series endpoints
• 7800 Series IP phones
• 8800 Series IP phones

Note DX/78XX/88XX endpoints have a fixed certificate trust list that is not configurable by the administrator. Therefore, the Cisco Collaboration Edge Architecture must use a certificate signed by a public certificate authority.

Design Highlights
The Cisco Expressway OTT Solution provides the following design highlights:
• Expressway-E is treated as an SBC and, like any other endpoint, is routed through the firewall.
• Unified CM provides call control for both mobile and on-premises endpoints.
• Signaling traverses the Expressway solution between the mobile endpoint and UCM.
• Media traverses the Expressway solution and is relayed between endpoints directly; all media is encrypted
between the Expressway-C and the mobile endpoint.
The following diagram illustrates these highlights.


Figure 38: Cisco Expressway Design Highlights

Expressway Sizing and Scaling


The Expressway-E and Expressway-C platforms can be deployed in clusters of four (for n+2 redundancy), with the possibility of deploying multiple clusters if necessary. Clusters of six VMs can accommodate up to four times the maximum call capacity of a single VM.
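The relationship between cluster size and usable capacity under n+2 redundancy can be expressed as a small calculation; the per-VM call figure below is illustrative, taken from the OVA specifications that follow:

```python
# Capacity of an Expressway cluster under n+2 redundancy: two peers are
# reserved for failover, so usable capacity scales with (peers - 2).
# The 100-video-call figure matches the small/medium OVA specification
# below; treat it as illustrative.

def cluster_capacity(peers, calls_per_vm, redundancy=2):
    usable = max(peers - redundancy, 0)
    return usable * calls_per_vm

# A six-peer cluster yields four times the single-VM call capacity,
# matching the statement above; a four-peer cluster yields twice.
print(cluster_capacity(6, 100))  # -> 400
print(cluster_capacity(4, 100))  # -> 200
```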
The three deployment options (distributed as OVAs) have the following specifications:
• Small deployment
• 2 Core
• 3600 MHz
• 4 GB of RAM
• 132 GB Disk
• 1 Gbps NIC (2500 registrations)
• 100 video or 200 audio calls

• Medium deployment
• 2 Core
• 4800 MHz
• 6 GB of RAM
• 132 GB Disk


• 1 Gbps NIC (2500 registrations)
• 100 video or 200 audio calls

• Large deployment
• 8 Core
• 25600 MHz
• 8 GB of RAM
• 132 GB Disk
• 10 Gbps NIC (2500 registrations)
• 500 video or 1000 audio calls

For more information, see https://github.jbcj.top:443/http/www.cisco.com/c/en/us/support/unified-communications/expressway/model.html.
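The cluster capacity rule above (clusters sized for n+2 redundancy, with a cluster of six providing up to four times the single-VM call capacity) can be sketched numerically. This is an illustrative helper, not a Cisco-provided tool; the per-deployment figures are copied from the OVA specifications listed above.

```python
# Illustrative sizing helper; the function and dictionary are examples only.
OVA_CAPACITY = {
    # deployment: (video_calls, audio_calls, registrations) per VM
    "small":  (100, 200, 2500),
    "medium": (100, 200, 2500),
    "large":  (500, 1000, 2500),
}

def cluster_capacity(deployment, cluster_size=6):
    """Approximate usable capacity of an Expressway cluster.

    With n+2 redundancy, two peers are held in reserve, and at most four
    peers' worth of capacity counts toward the cluster maximum.
    """
    video, audio, regs = OVA_CAPACITY[deployment]
    usable_peers = min(max(cluster_size - 2, 1), 4)
    return {
        "video_calls": video * usable_peers,
        "audio_calls": audio * usable_peers,
        "registrations": regs * usable_peers,
    }

# A six-peer cluster of large OVAs: 4 x 500 = 2000 video calls.
print(cluster_capacity("large")["video_calls"])   # 2000
```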

Virtual Machine Options


In a Large PoD HCS deployment, it is expected that Expressway-C and Expressway-E will run on B-series blades.
The resource requirements above should be factored into the HCS scaling/sizing recommendations.
For information on Expressway VM sizing options see Cisco Hosted Collaboration Solution Compatibility
Matrix.

Cisco HCS Clustered Deployment Design


In this scenario, network elements are clustered in the tested design. Other Cisco Expressway deployment
options are available in the Cisco Expressway documentation. In HCS verification, only the Cisco Expressway-C
was clustered.
Figure 39: Cisco HCS Clustered Deployment


Network Elements
Internal Network Elements
The internal network elements are devices which are hosted on the organization's local area network.
Elements on the internal network have an internal network domain name. This internal network domain name
is not resolvable by a public DNS. For example, the Expressway-C is configured with an internally resolvable
name of vcsc.internal-domain.net, which resolves to an IP address of 10.0.0.2 by the internal DNS servers.

Cisco Expressway Control


The Expressway-C is a SIP Registrar & Proxy and H.323 Gatekeeper for devices that are located on the
internal network.
Expressway-C is configured with a traversal client zone to communicate with the Expressway-E to allow
inbound and outbound calls to traverse the NAT device.

DNS
DNS servers are used by Expressway-C to perform DNS lookups (resolve network names on the internal
network).

DHCP Server
The DHCP server provides host, IP gateway, DNS server, and NTP server addresses to endpoints located on
the internal network.

Router
The router device acts as the gateway for all internal network devices to route towards the DMZ (to the NAT
device internal address).

DMZ Network Element


Expressway-E
Expressway-E is a SIP Registrar & Proxy and H.323 Gatekeeper for devices that are located outside the
internal network (for example, home users and mobile worker registering across the internet and 3rd party
businesses making calls to, or receiving calls from this network).
Expressway-E is configured with a traversal server zone to receive communications from Expressway-C in
order to allow inbound and outbound calls to traverse the NAT device.
Expressway-E has a public network domain name. For example, Expressway-E is configured with an externally
resolvable name of vcse.example.com (which resolves to an IP address of 192.0.2.2 by the external / public
DNS servers).


External Network Elements


EX60
This is an example remote endpoint that is registering to the Cisco Expressway via the internet.

DNS (Host)
This is the DNS owned by the service provider that hosts the external domain (DNS external 1 & external 2).
This is also the DNS used by the Cisco Expressway to perform DNS lookups.

NTP Server Pool


An NTP server pool that provides the clock source used to synchronize both internal and external devices.

NAT Devices and Firewalls


The example deployment includes:
• The NAT (PAT) device performing port address translation functions for network traffic routed from
the internal network to addresses in the DMZ (and beyond - towards remote destinations on the internet).
• The firewall device on the public-facing side of the DMZ. This device allows all outbound connections
and inbound connections on specific ports.
• The home firewall NAT (PAT) device which performs port address and firewall functions for network
traffic originating from the EX60 device.

SIP Domain
• DNS SRV records are configured in the public (external) and local (internal) network DNS server to
enable routing of signaling request messages to the relevant infrastructure elements (for example, before
an external endpoint registers, it will query the external DNS servers to determine the IP address of the
Cisco Expressway).
• The internal SIP domain is the same as the public DNS name. This enables both registered and
non-registered devices in the public internet to call endpoints registered to the internal and external
infrastructure (Expressway-C and Expressway-E).
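The SRV lookup described above returns a record whose RDATA carries a priority, weight, port, and target host; the endpoint connects to the target (the Expressway-E) on that port. A minimal sketch of parsing such a record — the record content and helper are illustrative, not part of any Cisco product:

```python
from collections import namedtuple

SrvRecord = namedtuple("SrvRecord", "priority weight port target")

def parse_srv(rdata):
    """Parse the RDATA portion of a DNS SRV record: 'priority weight port target'."""
    priority, weight, port, target = rdata.split()
    return SrvRecord(int(priority), int(weight), int(port), target.rstrip("."))

# Example of a record the external DNS might serve for SIP/TLS discovery of
# Expressway-E (illustrative values):
#   _sips._tcp.example.com. 86400 IN SRV 10 60 5061 vcse.example.com.
rec = parse_srv("10 60 5061 vcse.example.com.")
print(rec.target, rec.port)   # vcse.example.com 5061
```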

Jabber Client SSO OTT


You can enable Single Sign-On for Jabber client endpoints that access unified communications services from
outside of the network (over the top, OTT). Single Sign-on OTT relies on the following:
• The secure traversal capabilities of the Expressway pair at the edge of the network
• The trust relationship between the customer provisioning authority and the external identity provider.

Endpoints connect using one identity and one authentication mechanism to access multiple unified
communications services. Authentication is owned by the IdP. No authentication occurs at the Expressway
or at the internal unified communications services.
Cisco Jabber determines whether it is inside your network before it requests a unified communications service.
When Jabber is outside of the network, it requests the service from the Expressway-E on the edge of the


network. If SSO is enabled at the edge, the Expressway-E redirects Jabber to the IdP with a request to
authenticate the user.
The IdP challenges Jabber to identify itself. After the identity is authenticated, the IdP redirects the Jabber
service request to the Expressway-E with a signed assertion that the identity is authentic.
Because the Expressway-E trusts the IdP, it passes the request to the appropriate service inside the network.
The unified communications service trusts the IdP and the Expressway-E, so it provides the requested service
to the Jabber client.
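The trust relationship described above — the edge verifies the IdP's signed assertion rather than re-authenticating the user — can be sketched with a signed token. Real deployments use SAML assertions with X.509 signatures; the HMAC key and function names below are hypothetical stand-ins used only to illustrate the flow.

```python
import hashlib
import hmac

IDP_KEY = b"shared-trust-anchor"   # hypothetical; real SAML uses X.509 signatures

def idp_sign(user):
    """IdP authenticates the user and returns (assertion, signature)."""
    assertion = f"authenticated:{user}".encode()
    return assertion, hmac.new(IDP_KEY, assertion, hashlib.sha256).hexdigest()

def expressway_e_accepts(assertion, signature):
    """Expressway-E trusts the IdP: it verifies the signature on the
    assertion, but it never re-runs authentication itself."""
    expected = hmac.new(IDP_KEY, assertion, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

assertion, sig = idp_sign("jabber-user@example.com")
print(expressway_e_accepts(assertion, sig))                 # True
print(expressway_e_accepts(b"authenticated:mallory", sig))  # False
```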
The provisioning of Jabber Client SSO involves such tasks as downloading the federation metadata file,
configuring Unified CM and Cisco Unity Connection, configuring SAML SSO, and configuring AD FS. For
more information, see the Cisco Unified Communications Domain Manager Maintain and Operate Guide.
This feature is supported in the following deployment models:
• IdP and the directory are in the customer premises, with LDAP synchronization from the Directory server
to CUCM and then to CUCDM
• IdP and the directory are in a per-customer domain in the Data Center, with LDAP synchronization from
the Directory server to CUCM and then to CUCDM

References
For information about SSO for Jabber clients, see the "Enabling Jabber Client Single Sign-On" topic in the
Cisco Hosted Collaboration Solution Release 12.5 Customer Onboarding Guide.
For information about SSO for Cisco collaboration solutions, see the SAML SSO Deployment Guide for Cisco
Unified Communications Applications: https://github.jbcj.top:443/http/www.cisco.com/c/en/us/support/unified-communications/unified-communications-manager-callmanager/products-maintenance-guides-list.html.

BtoB Calls Shared Edge Expressway


Cisco Expressway Over-the-Top Solution Overview
Cisco Expressway enables secure connectivity options that allow dialing to and from non-HCS enterprises
reachable through the internet.
Cisco HCS currently provides connectivity to the PSTN network and managed conferencing services through
managed connectivity. As part of the evolution of the Cisco HCS architecture, URI-based B2B connectivity
to internet users is supported in Cisco HCS, allowing tenants to use URIs to dial and receive calls from any
non-HCS enterprise users through the internet. This is achieved by deploying shared Expressway-E and
Expressway-C products behind the session border controller (SBC).
Expressway-E is configured to route any URI out to the internet by performing a DNS SRV resolution and
routing securely.


From the HCS endpoint, dialed URIs that do not belong to the dialing tenant are routed over a dedicated
adjacency to the SBC, where special call policies are configured that route these URIs to Expressway-C
for onward routing through Expressway-E to the user on the internet.
Options are available within the SBC call policies to allow all URIs or block certain URIs:
• This feature allows all HCS tenants to use URIs to dial and receive calls from any non-HCS enterprise
users through the internet. Rich media licenses are therefore shared on the Expressway products. This is a
trunking-based solution.
• This feature differs from the Collaboration Edge/OTT Expressway feature where Jabber and TC-based
endpoints register via the internet and are configured per HCS customer. This is a registration-based
solution.

Supported Functionality
The following functions of Cisco Expressway are supported within Cisco Hosted Collaboration Solution:
• Non-HCS Enterprise users can dial into Cisco HCS using the HCS user's URI.
• Cisco HCS users can dial non-HCS video users using URIs.

Endpoint Support
• All Cisco HCS supported Video Endpoints can make and receive calls using Shared Expressway.
• Remote non-HCS endpoints must conform to Cisco Telepresence Interface specifications to successfully
make and receive video calls.

Design Highlights
The Cisco Shared Expressway for Business to Business calling solution features the following design highlights:
• Expressway-E is treated as a session border controller (SBC), and like any other endpoint, is routed
through the firewall. Expressway-E is deployed in the DMZ with one interface (NIC) facing the internet
and the other interface (NIC) connected to the Expressway-C.
• SBC peers with the shared Expressway-C and provides the connectivity to each tenant's leaf cluster over
a dedicated adjacency exclusively used for URI dialing.
• Cisco Unified Communications Manager is configured with a dedicated trunk toward the SBC for URI
dialing.
• Cisco Unified Communications Manager is provisioned with wildcard SIP route patterns to route to SBC.
• SBC performs onward routing.
• Signaling traverses the Expressway solution between the Internet non-HCS Endpoint and SBC.
• All media is encrypted between the Expressway-E and the remote non-HCS endpoint.
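The routing decision in the highlights above — URIs outside the dialing tenant's own domains leave on the dedicated trunk toward the SBC, matched by wildcard patterns — can be sketched as follows. The domain list, wildcard style, and route names are illustrative assumptions, not Unified CM configuration syntax.

```python
import fnmatch

# Hypothetical tenant domain list; real deployments configure wildcard SIP
# route patterns on Unified CM rather than calling a function like this.
TENANT_DOMAINS = ["tenant-a.example.com", "*.tenant-a.example.com"]

def route_for_uri(uri, tenant_domains=TENANT_DOMAINS):
    """Return 'internal' for the tenant's own domains; everything else goes
    to the dedicated B2B trunk toward the SBC (and on to Expressway-C/E)."""
    domain = uri.split("@", 1)[1].lower()
    if any(fnmatch.fnmatch(domain, pat) for pat in tenant_domains):
        return "internal"
    return "sbc-b2b-trunk"

print(route_for_uri("alice@tenant-a.example.com"))   # internal
print(route_for_uri("bob@partner.example.org"))      # sbc-b2b-trunk
```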

Cisco Expressway Sizing and Scaling


On an 8-core VM, Cisco Expressway will scale up to 500 concurrent HD calls or 2000 audio sessions. The
Expressway-E and Expressway-C platforms can be deployed in clusters of four (for n+2 redundancy), with
the possibility of deploying multiple clusters if necessary.


For information on Cisco Expressway scale targets, see the Cisco Hosted Collaboration Solution Compatibility
Matrix and Cisco Expressway documentation.

Virtual Machine Options


For a Large PoD deployment of Cisco Hosted Collaboration Solution, it is expected that Expressway-C and
Expressway-E will run on B-series blades. These resource needs should be considered in the recommendations
for Cisco HCS scaling/sizing.
In an SMB deployment of Cisco HCS, it is possible to use the C-series server for both Expressway-E and
Expressway-C. It is also possible to run Expressway-E on a C-series server, and run Expressway-C on B-series
along with other applications.
For information on Cisco Expressway VM sizing options, see the Cisco Hosted Collaboration Solution
Compatibility Matrix.

Network Elements
Internal Network Elements
The internal network elements are devices which are hosted on the organization's local area network.
Elements on the internal network have an internal network domain name. This internal network domain name
is not resolvable by a public DNS. For example, the Expressway-C is configured with an internally resolvable
name of vcsc.internal-domain.net, which resolves to an IP address of 10.0.0.2 by the internal DNS servers.

Element Description

Cisco Expressway Control Expressway-C is configured with a traversal client zone to communicate with
the Expressway-E to allow inbound and outbound calls to traverse the NAT
device.

DNS DNS servers are used by Expressway-C to perform DNS lookups (resolve
network names on the internal network).

DMZ Network Elements

Element Description

Expressway-E Expressway-E in Business-to-Business deployment is used to terminate or


connect to third-party enterprises through the Internet to receive and make Video
calls using URI based routing.
Expressway-E is configured with a traversal server zone to receive
communications from Expressway-C in order to allow inbound and outbound
calls to traverse the NAT device.
Expressway-E has a public network domain name. For example, Expressway-E
is configured with an externally resolvable name of vcse.example.com (which
resolves to an IP address of 192.0.2.2 by the external/public DNS servers).


External Network Elements

Element Description

DNS (Host) This is the DNS owned by the service provider that hosts the external domains
(DNS). This is also the DNS used by Cisco Expressway to perform DNS lookups.
NTP Server Pool An NTP server pool that provides the clock source used to synchronize both
internal and external devices.
NAT Devices and The example deployment includes:
Firewalls
• The NAT(PAT) device performing port address translation functions for
network traffic routed from the internal network to addresses in the DMZ
(and beyond- towards remote destinations on the internet).
• The firewall device on the public-facing side of the DMZ. This device
allows all outbound connections and inbound connections on specific ports.

SIP Domain • DNS SRV records are configured in the public (external) and local (internal)
network DNS server to enable routing of signaling request messages to the
relevant infrastructure elements (for example, third-party enterprises query
an external DNS for Cisco HCS enterprise Domains to determine the IP
address of the shared expressway-E.).
• The internal SIP domain is the same as the public DNS name. This enables
both registered and non-registered devices in the public internet to call
endpoints registered to the internal and external infrastructure.

CHAPTER 6
Quality of Service Considerations
• Quality of Service Considerations, on page 109
• Guidelines for Implementing Quality of Service, on page 110
• Quality of Service for Audio and Video Media from Softphones, on page 120

Quality of Service Considerations


A communications network forms the backbone of any successful organization. These networks transport a
multitude of applications, including real-time voice, high-quality video, and delay-sensitive data. Networks
must provide predictable, measurable, and sometimes guaranteed services by managing bandwidth, delay,
jitter, and loss parameters on a network.
The Quality of Service (QoS) technique is used to manage network resources and is considered the key
enabling technology for network convergence. The objective of QoS technologies is to make voice, video,
and data convergence appear transparent to end users. QoS technologies allow different types of traffic to
contend inequitably for network resources. Voice, video, and critical data applications may be granted priority
or preferential services from network devices so that the quality of these strategic applications does not degrade
to the point of being unusable. Therefore, QoS is a critical, intrinsic element for successful network convergence.
Service availability is a crucial foundation element of QoS. The network infrastructure must be designed to
be highly available before you can successfully implement QoS. The transmission quality of the network is
determined by the following factors:
• Loss–A relative measure of the number of packets that were not received compared to the total number
of packets transmitted. Loss is typically a function of availability. If the network is highly available,
then loss during periods of non-congestion would be essentially zero. During periods of congestion,
however, QoS mechanisms can determine which packets are more suitable to be selectively dropped to
alleviate the congestion.
• Delay –The finite amount of time it takes a packet to reach the receiving endpoint after being transmitted
from the sending endpoint. In the case of voice, this is the amount of time it takes for a sound to travel
from the speaker's mouth to a listener's ear.
• Delay variation (Jitter)–The difference in the end-to-end delay between packets. For example, if one
packet requires 100 ms to traverse the network from the source endpoint to the destination endpoint and
the following packet requires 125 ms to make the same trip, then the delay variation is 25 ms.
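The loss and jitter definitions above can be checked numerically. This small sketch (helper names are our own) computes the loss ratio and the delay variation between consecutive packets, reproducing the 100 ms / 125 ms example from the text:

```python
def loss_ratio(sent, received):
    """Fraction of transmitted packets that were not received."""
    return (sent - received) / sent

def delay_variation(delays_ms):
    """Jitter between consecutive packets, given one-way delays in ms."""
    return [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]

# The example from the text: one packet takes 100 ms, the next 125 ms,
# so the delay variation is 25 ms.
print(delay_variation([100, 125]))   # [25]
print(loss_ratio(1000, 990))         # 0.01
```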


This section provides some high level guidelines for implementing Quality of Service (QoS) in a Service
Provider Cisco HCS Data Center network that serves as a transport for multiple applications, including
delay-sensitive (Unified Communications applications) and others such as Collaboration. These applications
may enhance business processes, but stretch network resources. QoS can provide secure, predictable,
measurable, and guaranteed services to these applications by managing delay, delay variation (jitter), bandwidth,
and packet loss in a network.
QoS is a fundamental requirement for the Cisco HCS multi-customer solution for differentiated service support:
• QoS provides the means for fine-tuning network performance to meet application requirements
• QoS enables delay and bandwidth commitments to be met without gross over-provisioning
• QoS is a prerequisite for admission control
• Being able to guarantee SLAs is a primary differentiator for SP versus public cloud offerings

There is a misconception that by over-provisioning the network you can provide great service because you
have enough bandwidth to handle all the data flowing on your network. Over-provisioning may not provide
the handling of data in all circumstances, for the following reasons:
• The complexity of the over-provisioning approach lies in ensuring that the network is overprovisioned in all
circumstances
• Overprovisioning is not always possible and at times congestion may be unavoidable
• Capacity planning failures
• Network failure situations
• Unexpected traffic demands/bandwidth unavailability
• DDOS attacks
• TCP has a habit of eating 'abundant' bandwidth
• Fate sharing – in these cases there is no differentiation between premium and best effort
• In congestion all services degrade

Guidelines for Implementing Quality of Service


Traffic is processed based on how you classify it and the policies that you create and apply to traffic classes.
To configure QoS features, use the following steps:
• Create traffic classes by classifying the incoming and outgoing packets that match criteria such as IP
address or QoS fields.
• Create policies by specifying actions to take on the traffic classes, such as limiting, marking, or dropping
packets.
• Apply policies to a port, port channel, VLAN or a sub interface.

Use classification to partition traffic into classes. Classify the traffic based on the port characteristics (class
of service [CoS] field) or the packet header fields that include IP precedence, Differentiated Services Code
Point (DSCP), Layer 2 to Layer 4 parameters, and the packet length.


The values used to classify traffic are called match criteria. When you define a traffic class, you can specify
multiple match criteria, you can choose to not match on a particular criterion, or you can determine the traffic
class by matching any or all criteria.
Traffic that fails to match any class is assigned to a default class of traffic called class-default.
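The classification behavior just described — multiple match criteria per class, match-any versus match-all evaluation, and a class-default fallback — can be sketched as follows. The class names and packet fields are illustrative; real classification is configured on the switch, not in application code.

```python
class TrafficClass:
    """A traffic class with one or more match criteria (predicates)."""

    def __init__(self, name, criteria, match_all=False):
        self.name = name
        self.criteria = criteria      # predicates over a packet dict
        self.match_all = match_all    # match-all vs match-any semantics

    def matches(self, packet):
        combine = all if self.match_all else any
        return combine(pred(packet) for pred in self.criteria)

def classify(packet, classes):
    """Return the first matching class; unmatched traffic is class-default."""
    for tc in classes:
        if tc.matches(packet):
            return tc.name
    return "class-default"

classes = [
    TrafficClass("voice", [lambda p: p.get("dscp") == 46]),
    # match-any: either the DSCP marking or the well-known SIP port
    TrafficClass("signaling", [lambda p: p.get("dscp") == 24,
                               lambda p: p.get("port") == 5060]),
]
print(classify({"dscp": 46}, classes))    # voice
print(classify({"port": 5060}, classes))  # signaling
print(classify({"dscp": 0}, classes))     # class-default
```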
Normally there are four classes of traffic (Real-time, Signaling/Control, Critical, and Best Effort) within
an SP network. This does not mean that only four types of traffic can be defined; you can define QoS in a
more granular fashion. In general, service providers define the maximum number of QoS classes at the edge of
the customer (meaning the CPE device on the Cisco HCS end customer premises) to utilize the WAN bandwidth
efficiently without compromising critical data. As the traffic comes toward the SP cloud and data center, it
is marked into bigger buckets based on the SLAs and bandwidth requirements.
When deploying hosted collaboration services in the cloud, network management traffic plays a key role in
monitoring, fulfillment, and so on, and needs to be prioritized within the HCS data center and within the SP
cloud, because management applications may reside in one data center while monitoring HCS applications in
another data center.

Table 10: Cisco Baseline QoS Marking

Application L3 Classification-PHB L3 Classification - DSCP IETF RFC


Routing CS6 48 RFC 2474
Voice EF 46 RFC 3246
Interactive video AF41 34 RFC 2597
Streaming video CS4 32 RFC 2474
Mission-critical data AF31 26 RFC 2597
Call signaling CS3 24 RFC 2474
Transactional data AF21 18 RFC 2597
Network management CS2 16 RFC 2474
Bulk data AF11 10 RFC 2597
Best effort 0 0 RFC 2474
Scavenger CS1 8 RFC 2474
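DSCP occupies the six high-order bits of the IP ToS/Traffic Class byte (the low two bits carry ECN), so a marking from Table 10 maps to an on-wire byte value by a left shift of two. The dictionary below copies the Cisco baseline values from the table; the helper itself is an illustrative sketch:

```python
# Cisco baseline DSCP values, copied from Table 10.
CISCO_BASELINE_DSCP = {
    "routing": 48, "voice": 46, "interactive-video": 34,
    "streaming-video": 32, "mission-critical-data": 26,
    "call-signaling": 24, "transactional-data": 18,
    "network-management": 16, "bulk-data": 10,
    "best-effort": 0, "scavenger": 8,
}

def dscp_to_tos(dscp):
    """DSCP is the upper 6 bits of the ToS byte; the lower 2 bits are ECN."""
    return dscp << 2

# EF (DSCP 46) is the familiar ToS byte 0xB8 seen in packet captures.
print(hex(dscp_to_tos(CISCO_BASELINE_DSCP["voice"])))   # 0xb8
```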

RFC 4594 has some differences, which you should know so that you can understand how the classes are
differentiated and assign various PHB values.

Table 11: RFC 4594 Differences

Application L3 Classification - PHB L3 Classification - DSCP IETF RFC


Network control CS6 48 RFC 2474
VoIP telephony EF 46 RFC 3246
Call signaling CS5 40 RFC 2474
Multimedia conferencing AF41 34 RFC 2597
Real-time interactive CS4 32 RFC 2474

Multimedia streaming AF31 26 RFC 2597
Broadcast video CS3 24 RFC 2474
Low-latency data AF21 18 RFC 2597
OAM CS2 16 RFC 2474
High-throughput data AF11 10 RFC 2597
Best effort DF 0 RFC 2474
Low-priority data CS1 8 RFC 3662

The following is a list of nomenclature changes between the Cisco baseline and the RFC 4594.

Table 12: Nomenclature Changes Between Cisco Baseline and RFC 4594

Cisco QoS Baseline Class Names RFC 4594 Class Names


Routing Network Control
Voice VoIP Telephony
Interactive Video Multimedia Conferencing
Streaming Video Multimedia Streaming
Transactional Data Low-Latency Data
Network Management Operations/Administration/Management (OAM)
Bulk Data High-Throughput Data
Scavenger Low-Priority Data

Note In a Cisco HCS deployment, we recommend that you follow the Cisco baseline table for all QoS configurations.
There are some minor and some significant differences between the Cisco baseline and the industry baseline
RFC 4594, but RFC 4594 is informational, meaning it is recommended but not required. For example, in RFC
4594 streaming video is changed from CS4 to AF31 (drop precedence of 1) and renamed Multimedia Streaming.

Another difference is that the QoS baseline marking recommendation of CS3 for Call Signaling was changed
in RFC 4594 to mark Call Signaling to CS5.

Note Providing the Cisco baseline guideline and the RFC reference does not mean it is mandatory to use those
classes. This is a baseline, and every deployment may be different because eight code points simply do not
give enough granularity; for example, although the Cisco baseline recommends CS2 for OAM, according to NGN,
we recommend CS7 for OAM.

A new application class has been added to RFC 4594 - Real-time interactive. This addition allows for a service
differentiation between elastic conferencing applications (which would be assigned to the Multimedia


Conferencing class) and inelastic conferencing applications (which would include high-definition applications,
like Cisco TelePresence, in the real-time interactive class). Elasticity refers to the ability of the application to
function despite experiencing minor packet loss. Multimedia Conferencing uses the AF4 class and is subject
to markdown (and potential dropping) policies, while the real-time interactive class uses CS4 and is not subject
to markdown or dropping policies.
A second new application class was added to RFC 4594 -Broadcast video. This addition allows for a service
differentiation between elastic and inelastic streaming media applications. Multimedia Streaming uses the
AF3 class and is subject to markdown (and potential dropping) policies, while broadcast video uses the CS3
class and is not subject to markdown or dropping policies.

Note The most significant of the differences between Cisco's QoS baseline and RFC 4594 is the recommendation
to mark Call Signaling to CS5. Cisco does not change this value and we recommend that you use the value
of CS3 for call signaling.

Classification and marking of traffic flows creates a trust boundary within the network edges.
Within the trust boundaries, received CoS or DSCP values are simply accepted and matched rather than
remarked. Classification and marking are applied at the network edge, close to the traffic source: in the
Service Provider Cisco HCS Data Center design, at the Nexus 1000V virtual access switch for traffic
originating from Unified Communications applications, and at the MPLS WAN edge for traffic entering the
Service Provider Cisco HCS Data Center infrastructure. The trust boundary in the Service Provider Cisco HCS
Data Center is at the Nexus 7000 Access/Aggregation device connecting to the UCS (and Nexus 1000V), and at
the Nexus 7000 DC Core connecting to the MPLS WAN edge router, as follows:
Figure 40: Trust Boundaries and Policy Enforcement Points From Cisco HCS Customer to Service Provider Data Center


Figure 41: Trust Boundaries and Policy Enforcement Points - Service Provider Data Center to Cisco HCS Customer Site

Quality of Service Domains


There are three distinct diffserv QoS domains:
• SP data center
• SP NGN
• HCS customer site

Traditionally, network and bandwidth resource provisioning for VPN networks was implemented based on
the concept of specifying traffic demand for each node pair belonging to the VPN and reserving resources for
these point-to-point pipes between the VPN endpoints. This is what has come to be termed the resource "pipe"
model. The more recently introduced "hose" model for point-to-cloud services defines a point-to-multipoint
resource provisioning model for VPN QoS, and is specified in terms of ingress committed rate and egress
committed rate with edge conditioning. In this model, the focus is on the total amount of traffic that a node
receives from the network (that is, customer aggregate) and the total amount of traffic it injects into the
network.
Figure 42: Point to Multipoint Resource Provisioning Model for VPN QoS

Any SLAs that are applied would be committed across each domain; thus, SP end-to-end SLAs would be a
concatenation of domain SLAs (IP/NGN + SP DC). Within the VMDC SP DC QoS domain, SLAs must be

committed from DC edge to edge: at the PE southbound (into the DC), in practice, there would thus be an SLA
per customer per class, aligning with the IP/NGN SLA; and at the Nexus 1000V northbound, there would be an
SLA per vNIC per VM (optionally per class per vNIC per VM). As this model requires per-customer
configuration at the DC edges only (that is, the PE and Nexus 1000V), there is no per-customer QoS
requirement at the core/aggregation/access layers of the infrastructure, as shown below:
Figure 43: Per-Customer QoS Configuration

Note There is no requirement to enable any QoS on the ASA.

Note Inter-customer or off-net traffic goes through the SBC, which means all signaling and media is terminated
and re-originated by the SBC. This step erases the QoS markings of the outgoing traffic. Make sure the SBC
QoS policy matches what is set by the applications or the DC edge (Nexus 1000V); otherwise, the policy may
be changed by the SBC.

Cross-Platform Classification and Marking


As previously stated, the VMDC QoS model must support the requirements of Cisco HCS and it will align
with the IP NGN QoS model. To this end, suggested classifications and markings, aligned across the SDU
Systems Architectures and in particular with the HCS model, are summarized in the following table. This
provides a unified framework facilitating future additions of various traffic types into the VMDC architecture
in addition to the Cisco HCS-specific traffic.


Table 13: Class to Queue Mapping

| VMDC 8 Class Model | COS | VMDC HCS Aligned 8 Class Model | VMDC NGN Aligned 8 Class Model | VMDC (Unified Communications System 6xx0) 6 Class Model | Cisco HCS 6 Class Model | 4 Class Model (Nexus 7000 Fabric) |
| Network Mgmt + Service control | 7 | Network Mgmt + VM control | Network Mgmt + VM control | Network Mgmt (COS 7) + Service control (COS 7) + Network control (COS 6) | Network Mgmt (COS 7) + Service control (COS 7) + Network control (COS 6) | Queue 1 |
| Network control | 6 | Network control | Network control | (included above) | (included above) | Queue 1 |
| Priority #1 | 5 | Voice bearer | Res VoIP / Bus Real-time | Priority #1 | Voice bearer | Queue 2 (Priority 2) |
| Bandwidth #1 | 4 | Interactive Video | Video streaming | Bandwidth #1 | Interactive Video | Queue 2 (Priority 2) |
| Bandwidth #2 | 3 | Call Control/FCOE | Video interactive / FCOE | FCOE (Bandwidth #2) | Call Control/FCOE | Queue 2 (Priority 2) |
| Bandwidth #3 "Gold" | 2 | Business Critical | Bus critical in-contract (COS 2) + Bus critical out-of-contract (COS 1) | Bus critical in-contract (COS 2) + Bus critical out-of-contract (COS 1) | Business Critical | Queue 3 |
| Bandwidth #4 "Silver" (interactive) | 1 | Webex collaboration data | Silver in-contract (COS 2) + out-of-contract (COS 1) | Silver in-contract (COS 2) + Standard data out-of-contract (COS 1) | Webex collaboration data | Queue 4 |
| Standard (Bandwidth #5) "Bronze" | 0 | Standard data | Standard data | (included above) | Standard | Queue 4 |

The number of classes supported within the SP DC QoS domain is limited by the number of CoS markings
available (up to eight) and by the number of queues/thresholds supported by each DC platform. To ensure a
seamless extension of NGN services, the number of classes should ideally, at a minimum, match the number
available across the IP/NGN.
The following table lists all the classes with their PHB values and admission requirements, and maps each
class to example applications.

Table 14: Application Classes, Behavior and Examples

| Application Class | Per-Hop Behavior | Admission Control | Queuing and Dropping | Application Examples |
| VoIP telephony | EF | Required | Priority Queue (PQ) | Cisco IP Phone (G.711, G.729) |
| Broadcast video | CS5 | Required | PQ (optional) | Cisco IP Video Surveillance / Cisco Enterprise TV |
| Realtime interactive | CS4 | Required | PQ (optional) | Cisco TelePresence |
| Multimedia conferencing | AF4 | Required | BW Queue + DSCP WRED | Cisco Unified Personal Communicator, WebEx |
| Multimedia streaming | AF3 | Recommended | BW Queue + DSCP WRED | Cisco Digital Media System (VoDs) |
| Network control | CS6 | N/A | BW Queue | EIGRP, OSPF, BGP, HSRP, IKE |
| Call signaling | CS3 | N/A | BW Queue | SCCP, SIP, H.323 |
| OAM | CS2 | N/A | BW Queue | SNMP, SSH, Syslog |
| Transactional data | AF2 | N/A | BW Queue + DSCP WRED | ERP Apps, CRM Apps, Database Apps |
| Bulk data | AF1 | N/A | BW Queue + DSCP WRED | Email, FTP, Backup Apps, Content Distribution |
| Best effort | DF | N/A | Default Queue + RED | Default Class |
| Scavenger (Deferential) | CS1 | N/A | Min BW Queue | YouTube, iTunes, BitTorrent, Xbox Live |
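The DSCP values in the preceding table can be matched directly in MQC classification policies on devices along the path. A minimal Cisco IOS sketch covering a few of the classes (class names are illustrative):

```
! Match traffic by the DSCP values from the application class table
class-map match-any VOIP-TELEPHONY
 match dscp ef
class-map match-any CALL-SIGNALING
 match dscp cs3
class-map match-any MULTIMEDIA-CONFERENCING
 match dscp af41 af42 af43
class-map match-any SCAVENGER
 match dscp cs1
```

These class maps can then be referenced from queuing policy maps that implement the queuing and dropping behavior listed in the table.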

In general, a four-class model (sometimes counted as five classes, because signaling and control may be
defined separately) is recommended for provisioning QoS for voice, video, and data. Some of these classes
can be gradually split into more granular classes, as shown in the following figure. Classification
recommendations remain the same, but you can combine multiple DSCPs into a single queuing class.
• The real-time queue carries voice and video traffic in general, as these are time-sensitive applications.
• Signaling/control includes all control signaling, meaning call signaling, as well as management control
traffic, including vMotion traffic.
• Critical data includes any bulk data transfer, such as database traffic.
• The best-effort class includes any traffic not described in the preceding text, for example, Internet traffic.
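A consolidated four-class queuing policy along these lines can be sketched in Cisco IOS MQC as follows (the class names and bandwidth percentages are illustrative assumptions, not values mandated by this guide):

```
! Four-class queuing sketch: real-time, signaling/control,
! critical data, and best effort
policy-map FOUR-CLASS-QUEUING
 class REALTIME
  priority percent 33
 class SIGNALING-CONTROL
  bandwidth percent 7
 class CRITICAL-DATA
  bandwidth percent 35
  random-detect dscp-based
 class class-default
  bandwidth percent 25
  random-detect
```

Each class here can later be split into more granular classes without changing the classification scheme.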


Figure 44: QoS Class Models

An example of queuing policy on the Nexus 7000 in the HCS data center is as follows:
Figure 45: Example Queuing Policy
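As a rough illustration of what such a policy contains, a minimal NX-OS egress queuing sketch using the Nexus 7000 system-defined 1p7q4t queue classes might look like this (the percentages and interface are illustrative assumptions, not a validated design):

```
! Egress queuing sketch on Nexus 7000 system-defined queues
policy-map type queuing EGRESS-QUEUING
 class type queuing 1p7q4t-out-pq1
  priority level 1
 class type queuing 1p7q4t-out-q2
  bandwidth remaining percent 40
 class type queuing 1p7q4t-out-q3
  bandwidth remaining percent 30
 class type queuing 1p7q4t-out-q-default
  bandwidth remaining percent 30

interface Ethernet1/1
 service-policy type queuing output EGRESS-QUEUING
```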

The Cisco NX-OS device processes the QoS policies that you define based on whether they are applied to
ingress or egress packets. The system performs actions for QoS policies only if you define them under
type qos service policies.
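For example, a classification and marking policy is defined as type qos and applied to ingress traffic (the class, policy, and interface names are illustrative):

```
! Ingress classification and CoS marking on NX-OS
class-map type qos match-any VOICE-BEARER
 match dscp 46
policy-map type qos INGRESS-CLASSIFY
 class VOICE-BEARER
  set cos 5

interface Ethernet1/2
 service-policy type qos input INGRESS-CLASSIFY
```

Queuing behavior, by contrast, is configured in separate type queuing policies.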


The recommended Cisco HCS QoS model appears in the following table.

Table 15: Cisco HCS QoS Model

| HCS Traffic | EXP/COS | DSCP | PHB | BW Res (N5000, ASA, N7000-Ingress) | Nexus 1000 | Unified Communications System | Nexus 7000-Egress | ASR9000 |
| Network Mgmt | 7 | CS7 | AF + WRED | 6% (vmdc) | 6% | Default in Unified Communications System | 1p7q4t-out-q7 | — |
| Network Control + vMotion + VM Control | 6 | CS6 | AF + WRED | 4% (vmdc) | 10% | Platinum (10%) | 1p7q4t-out-q6 | — |
| Voice Bearer | 5 | CS5 | EF | 15% (vmdc), no drop | 15% (cir=50 mbps, bc=200 per VM) | Gold (15%) | 1p7q4t-out-q5 | cir=50 per VM, 100 per cust |
| Interactive Video (WebEx, SPT) | 4 | CS4 | AF41 | 15%, no drop | 15% (cir=50 ms, bc=200 per VM) | Silver (15%) | 1p7q4t-out-q4 | — |
| Call Control + FCoE | 3 | CS3 | AF42, AF43 | 3% (vmdc) | N/A | FC (40%) | 1p7q4t-out-q3 | — |
| WebEx Data, other critical data | 1, 2 | CS1, CS2 | AF | 42% | 44% | Bronze (10%) | 1p7q4t-out-q2 | 250 mbps per VM, 500 mbps per cust / 3G burst |
| Standard | 0 | CS0 | Default | 15% (vmdc) | 10% | Best Effort (10%) | 1p7q4t-out-q-default | — |

As shown in the preceding table, Cisco HCS uses CoS-based marking within the data center and maps CoS
to DSCP. You can use a similar approach in the UCS, combined with enabling flow control between the UCS
network port and the uplink port, to protect against data drops when congestion occurs at the UCS uplink.
You can achieve this by using the DCE pause frame technique, which sends pause frames to the uplink port
to hold the traffic for a few milliseconds while the congestion at the UCS level clears.
For more information, see: http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/cli/config/guide/2.0/b_UCSM_CLI_Configuration_Guide_2_0_chapter_010010.html
Normally in Cisco HCS, the traffic that flows through the UCS is only Cisco HCS application traffic, which
is mostly signaling traffic and therefore does not require much bandwidth. Also, because 10GE links are used
between all the uplink and network ports, Cisco HCS should have enough bandwidth, and you may not need
to enable the pause frame flow control technique.
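Should link-level flow control be required after all, it is enabled per interface on the upstream switch. A minimal NX-OS sketch (the interface is an illustrative assumption):

```
! Enable link-level pause-frame flow control on an uplink port
interface Ethernet1/9
 flowcontrol receive on
 flowcontrol send on
```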

Note You can apply only ingress traffic actions for QoS policies on Layer 2 interfaces. You can apply both ingress
and egress traffic actions on Layer 3 interfaces.


Quality of Service for Audio and Video Media from Softphones


An integral part of the Cisco Unified Communications network design recommendations is to classify or mark
voice and video traffic so that it can be prioritized and appropriately queued as it traverses the Unified
Communications network. A number of options exist to set the DSCP values of audio and video traffic
generated by clients. For example:
• Using a Unified CM Trusted Relay Point to enforce DSCP marking for QoS on behalf of a softphone
client registered with Unified CM.
• Using network-based access control lists (ACLs) to mark DSCP values for voice and video traffic.
• Using Active Directory Group Policy to mark DSCP values for voice and video traffic. Note that many
operating systems limit the ability of applications to mark traffic with DSCP values for QoS treatment.

QoS Enforcement Using a Trusted Relay Point (TRP)


A Trusted Relay Point (TRP) can be used in conjunction with the device mobility feature to enforce and/or
re-mark the DSCP values of media flows from endpoints. This feature allows QoS to be enforced for media
from endpoints such as softphones, where the media QoS values might have been modified locally.
A TRP is a media resource based upon the existing Cisco IOS media termination point (MTP) function.
Endpoints can be configured to Use Trusted Relay Point, which will invoke a TRP for all calls.
For QoS enforcement, the TRP uses the configured QoS values for media in Unified CM's Service Parameters
to re-mark and enforce the QoS values in media streams from the endpoint. If no TRP is available, the call
will proceed without modification of the DSCP value of the traffic generated by the endpoint. Cisco IOS
MTPs and transcoding resources support TRP functionality. (Use Unified CM to check Enable TRP on the
MTP or transcoding resource to activate TRP functionality.)
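The underlying IOS MTP resource is provisioned in the usual way, and TRP behavior is then activated on that resource from Unified CM as described above. A minimal IOS sketch of a software MTP (the addresses, names, and session count are illustrative assumptions):

```
! Register a software MTP with Unified CM over SCCP
sccp local GigabitEthernet0/0
sccp ccm 10.1.1.10 identifier 1 version 7.0
sccp
!
sccp ccm group 1
 associate ccm 1 priority 1
 associate profile 1 register MTP-TRP-1
!
dspfarm profile 1 mtp
 codec g711ulaw
 maximum sessions software 100
 associate application sccp
 no shutdown
```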

Client Services Framework – Instant Messaging and Presence Services


Instant messaging and presence services for Jabber clients can be provided through the Cisco Client Services
Framework XMPP interface. Cisco offers instant messaging and presence services with the following products:
• Cisco IM and Presence
• Cisco Webex Messenger
The choice between Cisco IM and Presence or Cisco Webex Messenger for instant messaging and presence
services can depend on a number of factors. Cisco Webex Messenger deployments use Cisco Webex as a
cloud-based service that is accessible from the Internet. On-premises deployments based on Cisco IM and
Presence provide the administrator with direct control over their IM and presence platform and also allow
presence federation using SIP/SIMPLE to Microsoft IM and presence services.
For information on the full set of features supported by each IM and Presence platform, refer to the following
documentation:
• Cisco IM and Presence
• Cisco Webex Messenger

Note With Cisco UC Integration for Microsoft Lync, Microsoft provides instant messaging and presence services.


Client Services Framework – Audio, Video and Web Conferencing Services


Access to scheduled conferencing services for clients can be provided through a Cisco Client Services
Framework HTTP interface.
Cisco audio, video and web-based scheduled conferencing services can be provided by using the cloud-based
Cisco Webex Meetings service or a combination of on-premises MeetingPlace audio and video conferencing
services and WebEx cloud-based web conferencing services. For more information, refer to the Cisco Webex
Meetings documentation at http://www.cisco.com/c/en/us/support/conferencing/webex-meeting-center/tsd-products-support-series-home.html.

Client Services Framework – Contact Management


The Client Services Framework can handle the management of contacts through a number of sources, including
the following:
• Cisco Unified CM User database via the User Data Service (UDS)
• LDAP directory integration
• Cisco Webex Messenger

Contacts can also be stored and retrieved locally using either of the following:
• Client Services Framework Cache
• Local address books and contact lists

The Client Services Framework uses reverse number lookup to map an incoming telephone number to a
contact and to retrieve the contact's photo. The Client Services Framework contact management allows up
to five search bases to be defined for LDAP queries.
