Cisco Hosted Collaboration Solution Release 12.5 Solution Reference Network Design Guide
First Published: 2019-06-25
Last Modified: 2019-11-05
Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS,
INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.
THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH
THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY,
CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.
The following information is for FCC compliance of Class A devices: This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15
of the FCC rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment
generates, uses, and can radiate radio-frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications.
Operation of this equipment in a residential area is likely to cause harmful interference, in which case users will be required to correct the interference at their own expense.
The following information is for FCC compliance of Class B devices: This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to part 15 of
the FCC rules. These limits are designed to provide reasonable protection against harmful interference in a residential installation. This equipment generates, uses and can radiate radio
frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference
will not occur in a particular installation. If the equipment causes interference to radio or television reception, which can be determined by turning the equipment off and on, users are
encouraged to try to correct the interference by using one or more of the following measures:
• Reorient or relocate the receiving antenna.
• Increase the separation between the equipment and receiver.
• Connect the equipment into an outlet on a circuit different from that to which the receiver is connected.
• Consult the dealer or an experienced radio/TV technician for help.
Modifications to this product not authorized by Cisco could void the FCC approval and negate your authority to operate the product.
The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB’s public domain version of
the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.
NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS.
CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT
LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS
HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network
topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional
and coincidental.
All printed copies and duplicate soft copies of this document are considered uncontrolled. See the current online version for the latest version.
Cisco has more than 200 offices worldwide. Addresses and phone numbers are listed on the Cisco website at www.cisco.com/go/offices.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1721R)
© 2019 Cisco Systems, Inc. All rights reserved.
CONTENTS
CHAPTER 3 Applications 65
Core UC Applications and Integrations 65
IP Multimedia Subsystem Network Architecture and Components 67
Enterprise User Calls Into Cisco Webex and Calls from Cisco Webex CCA to Enterprise Users 75
External Users Call into Cisco Webex and Calls from Cisco Webex CCA to External Users 76
Mobility 76
Mobile Connect 77
Mobile Connect Mid-Call Features 77
Enterprise Feature Access 79
Mobile Voice Access Enterprise 80
Mobile Voicemail Avoidance 80
Clientless FMC Integration with NNI or SS7 81
Clientless FMC Integration with IMS 84
Mobile Clients and Devices 85
Cisco Jabber 85
IMS Clients 85
Cisco Proximity for Mobile Voice 85
Assurance Considerations and Impact to HCM-F 86
Cisco Hosted Collaboration Mediation Fulfillment Impact 86
Cisco Collaboration Clients and Applications 87
Endpoints - Conference 87
Directory 88
LDAP Integration 88
Cisco Unified CM User Data Service (UDS) 88
LDAP Directory 88
Cisco Webex Directory Integration 89
Client Services Framework Cache 89
Directory Search 89
Client Services Framework – Dial Plan Considerations 89
Translation Patterns 90
Application Dialing Rules 90
Directory Lookup Rules 90
Client Transformation 90
Deploying Client Services Framework 90
Design Considerations for Client Services Framework 90
Deployment Models for Jabber Clients 91
Push Notifications 91
Cisco Webex Hybrid Services Architecture Overview 91
Cisco Cloud Collaboration Management 92
CHAPTER 5 OTT Deployment and Secured Internet with Collaboration Edge Expressway 97
Cisco Expressway Over-the-Top Solution Overview 97
Supported Functionality 98
Endpoint Support 99
Design Highlights 99
Expressway Sizing and Scaling 100
Virtual Machine Options 101
Cisco HCS Clustered Deployment Design 101
Network Elements 102
Internal Network Elements 102
Cisco Expressway Control 102
DNS 102
DHCP Server 102
Router 102
Change History
• Change History, on page xi
Change History
Date: June 18, 2019
Description: Initial release of document. Changes since the 11.5 release include adding information about Smart Licensing and removing information about components that are no longer supported as part of the solution.
CHAPTER 1
System Architecture
• Cisco HCS System Architecture, on page 1
• Functional Layers, on page 2
• Data Center Architecture, on page 3
• Virtualization Architecture, on page 23
• Service Fulfillment System Architecture, on page 24
• Cisco Prime Collaboration Assurance Overview, on page 40
• Cisco Expressway, on page 45
• Aggregation System Architecture, on page 46
The rest of this guide describes the Cisco HCS architecture in more detail. Other Cisco HCS deployments
such as Micro Node or Small PoD (not shown in the preceding diagram) are introduced in Data Center
Architecture, on page 3.
Functional Layers
Cisco Hosted Collaboration Solution is an end-to-end cloud-based collaboration architecture that, on a high
level, may be distributed into the following functional layers:
• Customer/customer-premises equipment (CPE) layer
• UC infrastructure layer
• Aggregation layer
• Management layer
• SP network/cloud layer
These layers are shown in Cisco HCS System Architecture, on page 1, as an overlay on the overall HCS architecture. Each of these functional layers has a distinct purpose in the HCS architecture, as described below.
UC Infrastructure Layer
Cisco Unified Computing System (UCS) hardware in the SP data center runs unified communications (UC)
applications for multiple hosted business solutions. Virtualization, which enables multiple instances of an
application to run on the same hardware, is highly leveraged so that UC application instances are dedicated
for each hosted business. The ability to create new virtual machines dynamically allows the SP to add new
hosted businesses on the same UCS hardware.
Management Layer
Management tools support easy service activation, interoperability with existing SP OSS, and other management
activities including service fulfillment and assurance.
SP Cloud Layer
The SP cloud layer leverages existing services in the SP network such as PSTN and regulatory functions. In
the Cisco HCS system architecture, the UC infrastructure components are deployed as single tenants (dedicated per customer) in the cloud. These dedicated components and other management components run in virtual machines on UCS hardware.
Small Medium Business Solutions
• Micro Node - suitable for small-to-medium business deployments of fewer than 20 customers, using smaller-capacity hardware components.
Dedicated Instance
The Service Provider Cisco HCS Data Center infrastructure model includes Nexus 7000 switches, SAN disks,
UCS with B-series blades, and a supported Session Border Controller (SBC), which support a large number
of end users across a high number of customers. This infrastructure model involves considerable initial cost
and is suitable for large service providers.
For service providers with fewer than 940 customers, there are a number of ways to deploy the data center infrastructure to optimize scale and cost.
You can deploy the Cisco HCS solution on any of the data center infrastructure models: Large PoD, Small
PoD, or Micro Node.
Dedicated instance refers to the model of applications where there is a separate application instance (Cisco
Unified Communications Manager) for each customer. In one C-series server there can be different customer
instances based on how applications are distributed in the server. Any reference to a UC application such as
Unified Communications Manager, Unified Communications Manager IM and Presence, Cisco Unity
Connection, Cisco Emergency Responder, and CUAC, that does not include "Shared" or "Partitioned" as part
of the title implies that it is a dedicated instance.
Dedicated Server
Dedicated server refers to a Cisco HCS model of applications available for Micro Node deployments where one C-series server contains only one customer, but may have one or more UC applications running on the
same server for that customer (for example Cisco Unified Communications Manager or Cisco Unity
Connection).
Note The cluster limit for Unified CM and IM/P is one. For more information on Partitioned Unity Connection, see the following documentation:
• Use Cisco Unified Communications Domain Manager to provision partitioned Cisco Unity Connection if you are running Cisco Unified Communications Domain Manager version 10.6(1).
Solution Architecture
The solution is optimized for data center environments to reduce the operational footprint of service provider environments. It provides a set of tools to provision, manage, and monitor the entire architecture to deliver an automated service that assures reliability and security throughout the data center operations.
• Customer/CPE Layer: This layer provides the connectivity to the end devices, including phones, mobile devices, and local gateways. In addition to the end-user interfaces, this layer provides connectivity from the customer site to the provider's network.
• UC infrastructure layer: The UC infrastructure layer is constructed around the HCS data center design to provide a highly scalable, reliable, cost-effective, and secure environment to host multiple HCS customers, meeting the unique SLA requirements for each application and customer.
In this architecture, the UC layer services components (such as Unified Communications Manager, Cisco
Unity Connection, the Management layer, and IM and Presence Service) are deployed as a single tenant
(dedicated per customer) in the cloud on the multi-tenant UC infrastructure. Expressway-E and
Expressway-C provide secure signaling and media paths through the firewalls into the enterprise for the
key protocols identified. The hardware is shared using virtualization among many enterprises, and the software (applications) is dedicated per customer. Expressway is used for secure access into the enterprise from the internet, as opposed to other access methods (MPLS VPN, IPsec, AnyConnect, and so on).
• Telephony Aggregation Layer: This layer is required in a Cisco HCS deployment to aggregate all the
HCS customers at a higher layer to centralize the routing decision for all the off-net and inter-enterprise
communication. A session border controller (SBC) in the aggregation layer functions as a media and
signaling anchoring device. In this layer, the SBC functions as a Cisco HCS demarcation that normalizes
all communication between Cisco HCS and the external network, either a different IP network or the IP
Multimedia Subsystem (IMS) cloud.
Note The information in this section applies to all data center infrastructure deployment models; any differences
are noted in Data Center Design for Small PoD, on page 13 and Data Center Design for Micro Node, on page
21.
Within the data center backbone, the Large PoD design provides the option to scale up the available network bandwidth by leveraging port-channel technology between the different layers. With Virtual Port Channels (vPC), it also offers multipathing and node/link redundancy without blocking any links.
When they deploy the Cisco HCS Large PoD solution, service providers require isolation at a per-customer
level. Unique resources can be assigned to each customer. These resources can include different policies,
pools, and quality of service definitions.
Virtualization at different layers of a network allows for logical isolation without dedicating physical resources
to each customer; some of the isolation features are as follows:
• VRF-Lite provides aggregation of customer traffic at Layer 3
• Multicontext ASA configuration provides dedicated firewall service context for each of the customers
• VLAN provides Layer 2 segregation all the way to the VM level
To support complete segregation of all the Cisco HCS customers, Cisco recommends that you have separate
Virtual Routing and Forwarding (VRF) entries for each Cisco HCS customer. Each customer is assigned a
VRF identity. VRF information is carried across all the hops within a Layer 3 domain, and then is mapped
into one or more VLANs within a Layer 2 domain. Communication between VRFs is not allowed by default
for privacy protection of each customer. The multimedia communication between customers is allowed only
through the Session Border Controller (SBC).
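The fragment below is a minimal, illustrative sketch of this per-customer isolation on a Nexus 7000 aggregation switch; the customer name, VLAN ID, and addressing are invented for the example and are not values from a validated HCS build.

! Per-customer VRF and VLAN isolation (illustrative values only)
feature interface-vlan
vrf context CUSTOMER-A
! Layer 2 segment for this customer, carried down to the VM level
vlan 101
  name CUSTOMER-A-UC
! Map the customer VLAN into the customer VRF at Layer 3
interface Vlan101
  vrf member CUSTOMER-A
  ip address 10.1.101.2/24
  no shutdown

Because no routes are exchanged between VRFs, inter-customer traffic stays blocked; as noted above, multimedia traffic between customers passes only through the SBC.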
The following figure shows the Cisco HCS Solution architecture with all the data center components for a
Large PoD deployment. For more information on the Small PoD deployment architecture, refer to Small PoD
Architecture, on page 14. For more information on the Micro Node deployment architecture, refer to Micro
Node Deployment Models, on page 22.
Figure 2: Physical Data Center Deployment for Large PoD
Nexus 7000 switches are used as the aggregation switches and there is no core layer within the Service Provider
Cisco HCS Data Center. The aggregation device has Layer 3 northbound and Layer 2 southbound traffic.
In the Cisco HCS Large PoD architecture, it is assumed that the VRF for each customer terminates at the
MPLS PE level and runs the VRF-lite between the PE and the Nexus 7000 aggregation. In this case, the Nexus
7000 acts as a CE router from the MPLS cloud perspective.
As shown in the preceding figure, the Access layer provides connectivity to the servers. This is the first oversubscription point, which aggregates all server traffic onto the Gigabit Ethernet or 10 Gigabit Ethernet port-channel uplink to the aggregation layer.
Cisco recommends that you use 62xx series fabric interconnect; this requires UCS Manager 2.0.
The aggregation layer connects northbound to the MPLS PE routers, and southbound to either a fabric interconnect or a Nexus 5000 switch at Layer 2, depending on the scale of the deployment.
In this configuration it is not necessary to define separate VDCs; therefore, resources such as VLANs, VRFs, HSRP groups, BGP peers, and so on, are available at the chassis level.
For more details on components in the aggregation layer, see Aggregation System Architecture, on page 46.
Access-to-Aggregation Connectivity
Access-layer devices are dual-homed to the aggregation pair of switches for redundancy. When spanning-tree protocol is used in this design, there is a Layer 2 loop and one of the uplinks is in blocking mode. This limits the bandwidth to half if multiple links are deployed between the access and the aggregation layers. These uplinks are configured as trunks to forward multiple VLANs. Depending on where the spanning-tree root for each VLAN resides, and if these VLANs are load-balanced across multiple aggregation switches, some VLANs are active on one link and the rest of the VLANs are active on the second link. This provides a way to achieve some level of load balancing. However, this design is complex and involves the administrative overhead of configuration.
We recommend that you use the virtual port channel. This allows you to create a Layer 2 port-channel interface distributed across two different physical switches; logically, it is one port channel. Virtual port channels interoperate with STP to provide a loop-free topology. The best practice is to make the Nexus 7000 aggregation layer the logical root, assign the same priority for all instances on both Nexus 7000 switches, and configure the peer-switch feature. For more information about best practices, see http://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/sw/design/vpc_design/vpc_best_practices_design_guide.pdf.
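As a hedged illustration of these best practices, the fragment below shows one switch of the aggregation vPC pair with the peer-switch feature and a common spanning-tree priority; the domain number, addresses, port channels, and VLAN range are examples only.

! vPC on one Nexus 7000 aggregation switch (illustrative values)
feature vpc
feature lacp
vpc domain 10
  peer-switch                                    ! both peers present a single STP bridge ID
  peer-keepalive destination 192.0.2.2 source 192.0.2.1
interface port-channel10
  switchport
  switchport mode trunk
  vpc peer-link                                  ! peer link between the aggregation pair
interface port-channel20
  switchport
  switchport mode trunk
  vpc 20                                         ! downlink toward the access layer
! Same priority on both aggregation switches so the pair is the logical root
spanning-tree vlan 1-3967 priority 4096

The second aggregation switch carries the mirror-image configuration, so the access layer sees one logical port channel and no uplink is blocked by spanning tree.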
Note This section does not apply to HCS Micro Node deployments.
You must set up the following before you implement UCS and Cisco 6200 Fabric Interconnect:
• IP infrastructure as described in Implementing Service Provider IP Infrastructure.
• UCS chassis basic physical setup, cabling, and connectivity.
HCS on FlexPoD
FlexPoD is a predesigned base configuration that is built on the Cisco Unified Computing System (UCS),
Cisco Nexus data center switches, and NetApp Fabric-Attached Storage (FAS) components and includes a
range of software partners. FlexPoD can scale up for greater performance and capacity or it can scale out for
environments that need consistent, multiple deployments. FlexPoD is a baseline configuration, but also has
the flexibility to be sized and optimized to accommodate many different use cases.
Cisco and NetApp have developed FlexPoD as a platform that can address current virtualization needs and
simplify data center evolution to IT as a Service (ITaaS) infrastructure. Cisco and NetApp have provided
documentation for best practices for building the FlexPoD shared infrastructure stack. As part of the FlexPoD
offering, Cisco and NetApp designed a reference architecture. Each customer's FlexPoD system may vary in
its exact configuration. Once a FlexPoD unit is built it can easily be scaled as requirements and demand
change. This includes scaling both up (adding additional resources within a FlexPoD unit) and out (adding
additional FlexPoD units).
For more detailed information about FlexPoD, click the following link: http://www.cisco.com/en/US/netsol/
ns1137/index.html.
Service Insertion
Integration of network services such as firewall capabilities and server load balancing is a critical component
of designing the data center architecture. The aggregation layer is a common location for integration of these
services since it typically provides the boundary between Layer 2 and Layer 3 in the data center and allows
service devices to be shared across multiple access layer switches. The Nexus 7000 Series does not currently
support services modules.
For HCS data center architecture, Cisco Adaptive Security Appliance (ASA) is recommended for firewall
services. The ASA can be deployed in Layer 2 or Layer 3 multicontext mode depending on the requirement
of the service provider.
As an example, if a service provider wants to terminate the VPN at the ASA security appliance, the ASA has
to be deployed in Layer 3 mode, because Layer 2 mode does not support the VPN termination. The service
provider can specify the customer VLANs that need to go through the ASA for security purposes; the rest of
the traffic will not go through the ASA security appliance.
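A minimal ASA sketch of this multiple-context approach follows; the context name, subinterfaces, and config URL are illustrative, and switching the appliance to multiple-context mode requires a reload.

! ASA multiple-context mode with one routed context per HCS customer (illustrative)
mode multiple
context CUSTOMER-A
  allocate-interface GigabitEthernet0/0.101 custA-outside
  allocate-interface GigabitEthernet0/1.101 custA-inside
  config-url disk0:/customer-a.cfg

Each customer context then carries its own interfaces, access policies, and NAT rules, so one physical ASA provides a dedicated firewall per customer.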
Storage Integration
Note This section does not apply to HCS Micro Node as these deployments have local storage on UCS C-Series
so MDS is not required.
Another important factor changing the landscape of the data center access layer is the convergence of storage
and IP data traffic onto a common physical infrastructure, referred to as a unified fabric. The unified fabric
architecture offers cost savings in multiple areas including server adapters, rack space, power, cooling, and
cabling. The Cisco Nexus family of switches spearheads this convergence of storage and data traffic through
support of Fiber Channel over Ethernet (FCoE) switching in conjunction with high-density 10-Gigabit Ethernet
interfaces. Server nodes may be deployed with converged network adapters that support both IP data and
FCoE storage traffic, allowing the server to use a single set of cabling and a common network interface.
Note that the Cisco Nexus family of switches also supports direct LUN connectivity to SAN storage using
FC connectivity. With licensing, the ports can be fiber channel switched directly to an external storage array.
The Fabric Interconnect connects the Cisco HCS platform to the storage network using the MDS 9000 series
switches with multiple physical links (fiber channel) for high availability. In Cisco HCS deployment of the
data center, all the link connections between any components are deployed in a redundant mode to provide
a high level of resilience.
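The fragment below sketches the corresponding VSAN and zoning on one MDS fabric; the VSAN number and WWPNs are placeholders rather than values from a validated deployment.

! MDS 9000 VSAN and zoning for the FI-to-storage FC links, fabric A (illustrative)
vsan database
  vsan 10 name HCS-FABRIC-A
zone name UCS-FI-A-TO-ARRAY vsan 10
  member pwwn 20:00:00:25:b5:aa:00:01            ! UCS vHBA initiator (example)
  member pwwn 50:06:01:60:3e:a0:12:34            ! storage array target port (example)
zoneset name HCS-FABRIC-A vsan 10
  member UCS-FI-A-TO-ARRAY
zoneset activate name HCS-FABRIC-A vsan 10

Fabric B mirrors this configuration on the second MDS switch so that each server keeps a redundant path to the storage array.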
Small PoD Architecture
Although this section discusses the options to scale either horizontally or by migration to a Large PoD, each
design has its own pros and cons. You must perform the necessary due diligence concerning the scale and
growth needed before you decide on the Small PoD option.
Refer to the Cisco Hosted Collaboration Solution Compatibility Matrix at http://www.cisco.com/en/US/partner/
products/ps11363/products_device_support_tables_list.html for a list of Small PoD hardware components.
The sections that follow discuss the details of the Small PoD model and its impacts such as scale, performance
and reliability.
The complete system as shown in the figure is a single PoD that connects to the WAN Edge/MPLS Provider
Edge (PE) router. You can connect multiple PoDs to the WAN Edge/MPLS PE router as long as you address
the bandwidth requirements of each PoD. For more information, refer to Traffic Patterns and Bandwidth
Requirements for Cisco HCS, on page 11.
The Cisco Adaptive Security Appliance (ASA) 5555-X, which provides virtual firewalls for every customer using firewall contexts, provides the perimeter security for the HCS customers. The ASA connects to the Nexus 5548UP in a redundant manner to provide availability during failures. To provide redundancy, configure vPC links on the Nexus 5000 and EtherChannels on the ASA.
To support site-to-site Virtual Private Networks (VPN), use the Cisco ASR 1000 Series Aggregation Services
Router (ASR) as the Site-to-Site VPN Concentrator. The ASR 1000 is configured for Virtual Routing and
Forwarding (VRF) aware VPN to support the VPN tunnels from the customer premises.
You can use a third-party SBC to aggregate the traffic to and from the public switched telephone network (PSTN) and inter-customer traffic.
The key elements for a Small PoD deployment are as follows:
1. The UCS 5108 chassis uses the same configuration as a standard HCS deployment that is equipped with B-Series half-width servers (as recommended for Cisco HCS).
2. Each FI in the FI pair is connected with two links from each UCS 5108 chassis.
3. One option is to directly connect the storage to the FIs with virtual SANs (VSANs) distributed across
the two FIs if the version of the UCS is 2.1 or above. The two links between each FI and redundant
storage processors on the storage system provide high availability during failures. This deployment
does not require MDS switches and assumes the 2.1 and later versions for FI. For more information,
refer to the UCS Direct Attached Storage and FC Zoning Configuration Example, available at
http://www.cisco.com/en/US/products/ps11350/products_configuration_
example09186a0080c0a508.shtml. The recommended connectivity configuration uses an MDS 9200,
9500 or 9700 series with security and encryption enabled.
4. You can also connect the storage at the Nexus 5548 switches, if the deployed version of Cisco UCS Manager (pre-2.1) does not support direct connectivity from the FI without a switch.
5. Equip the Nexus 5000 with a Layer 3 Daughter Card to configure Layer 3 functionality. The access and
aggregation layer functions are collapsed into the Nexus 5000 pair in this deployment.
6. Configure the Nexus 5000 in the aggregation layer using the Large PoD configuration (a configuration sketch for one customer follows this list), which includes the following configuration:
• Border Gateway Protocol (BGP) toward the PE for each customer
• North and south VRF for each customer
• North and south HSRP instances for each customer, along with static routes
7. Connect the Adaptive Security Appliance (ASA) to the Nexus 5000 at the aggregation level, as in the
Cisco HCS Large PoD environment.
8. If centralized PSTN routing is needed, deploy an SBC for centralized call aggregation as in a Cisco
HCS Large PoD deployment.
9. Attach Customer Premises Equipment (CPE) devices to the PE for MPLS VPN between the customer
premises and the data center.
10. To support Local Breakout (LBO), use an Integrated Services Router (ISR) G2 Series. The same
equipment can be used as a CPE.
11. If you deploy Small PoDs geographically across data centers, you must meet the delay requirements as
specified for Clustering Over the WAN (CoW).
12. Backup and restore is performed using standard Cisco HCS procedures. Refer to the Cisco Hosted
Collaboration Solution Release 12.5 Maintain and Operate Guide, available at http://www.cisco.com/
en/US/partner/products/ps11363/prod_maintenance_guides_list.html.
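The fragment below sketches item 6 in the preceding list for a single customer on the collapsed Nexus 5000 aggregation; the VRF names, AS numbers, VLAN, and addresses are illustrative, and the matching static routes toward the ASA context are omitted for brevity.

! Per-customer routing on the collapsed Nexus 5000 aggregation (illustrative values)
feature bgp
feature hsrp
feature interface-vlan
vrf context CUSTOMER-A-NORTH
vrf context CUSTOMER-A-SOUTH
router bgp 65001
  vrf CUSTOMER-A-NORTH
    neighbor 192.0.2.1 remote-as 65000           ! eBGP toward the MPLS PE
      address-family ipv4 unicast
! Southbound HSRP gateway for the customer UC VLAN
interface Vlan201
  vrf member CUSTOMER-A-SOUTH
  ip address 10.1.201.2/24
  no shutdown
  hsrp 201
    ip 10.1.201.1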
The following figure shows the HCS Small PoD system architecture from a logical topology perspective. The
Nexus 5000 Aggregation node is split logically into a north VRF and a south VRF for each customer. A Layer
3 (L3) firewall context (on ASA 5555-X) is inserted in the routed mode to provide perimeter firewall services.
In the figure, an SBC is used to interconnect to the PSTN. It also provides logical separation for each customer
within the same box using VRFs/VLANs and adjacency features.
Figure 5: HCS Small PoD Logical Network
This figure shows the storage connection with two options. The solid line FC connections are for direct storage
connection at the FI. The dashed FC connections are for storage connection at the Nexus 5500 or 5600,
depending on which is being used in the deployment. The additional FC links between the FI and Nexus to
carry the storage traffic from the FI to the storage system are also shown. Other options using FCoE are possible but not covered in this document.
The following figure shows the Small PoD deployment with an SBC.
Similar to the Large PoD deployment, the Small PoD deployment model uses the ASR 1000 as the Site-to-Site
VPN Concentrator to connect customers over the internet.
With the Small PoD deployment, service providers can still deploy multiple data centers and deploy clustering
over WAN for all the Unified Communications (UC) applications to support geo-redundancy. To accomplish
this, deploy a Small PoD in multiple data centers, or deploy a Small PoD in one data center and a Large PoD
in the other data center. Follow the standard HCS disaster recovery procedures as recommended.
Options for Storage Connectivity
• Cons: Cannot extend storage beyond a single pair of FIs. However, since a Small PoD deployment does not span more than one FI pair, this disadvantage does not impact Small PoD deployments.
• Pros: Storage can be shared across FI pairs. This is not a requirement in Small PoD deployments.
Note When you change VLAN to MST instance mapping, the system restarts MST. Cisco recommends that you
map VLANs to the MST instance at the time of initial configuration.
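A brief sketch of this initial mapping is shown below; the region name, revision, and VLAN ranges are examples only.

! Map VLANs to MST instances once, at initial configuration,
! because changing the mapping later restarts MST
spanning-tree mode mst
spanning-tree mst configuration
  name HCS-DC
  revision 1
  instance 1 vlan 100-199
  instance 2 vlan 200-299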
Small PoD Layer 3 Scale
For more information, refer to Cisco Hosted Collaboration Solution Release 12.5 Capacity Planning Guide.
Micro Node Deployment Models
Minimum and maximum hardware and capacity:
• Nexus 5500 or 5600 switches: minimum 2; maximum 2
• Adaptive Security Appliance (ASA 5555-X): minimum 2; maximum 2
• C-series servers: minimum 5 (one for the applications Cisco Unified Communications Manager, Cisco Unity Connection, and Cisco Unified Communications Manager IM and Presence Service, and four systems required for management), and up to seven systems for a full deployment; maximum approximately 24 (twenty-one for the applications Cisco Unified Communications Manager, Cisco Unity Connection, and Cisco Unified Communications Manager IM and Presence Service, including redundancy, and three for management applications)
• (Optional) SBC: minimum 1 (optional); maximum 20 (or 1 optional SBC)
• Clusters: minimum 1; maximum 20
• Users: minimum OVA supports 1,000 users with one application cluster; maximum 20,000 users with 20 application clusters
Figure 8: Micro Node with C-series, Cisco Unified Border Element (SP Edition)
In the Micro Node deployment, infrastructure redundancy is highly recommended and required to keep infrastructure downtime to a minimum. In Micro Node, you use a vPC port channel between the Nexus 5548 and the security appliance, and vPC from the Nexus 5548. Each C-series chassis has dual links, one to each redundant Nexus 5548.
Note When you deploy a Nexus 5548 as a Layer 3/Layer 2 device, there is no redundancy of the Layer 3 module
within the Nexus 5548.
With the Micro Node deployment, you can still deploy multiple data centers and deploy clustering over WAN
for all the Unified Communications applications to support geo-redundancy for the applications.
Virtualization Architecture
Capacity and Blade Density
The UCS blades are rapidly growing in terms of capacity and performance. To take advantage of the growth of systems with an increasing number of processor cores, our virtualization support is changing in two ways. First, the blades supported are based on a support specification rather than Cisco certifying specific hardware.
The virtualized UC applications will support hardware based on a minimum set of specs (processor type,
RAM, I/O devices, and so on).
These strategies should help realize the architectural goals of HCS service fulfillment, which are:
• Minimizing the need for multiple interfaces
• Maximizing common executable across multiservice domains
• Simplified management of subscribers, customers, sites, and databases, for example
• Integrating additional multiservice domains (rapid, simple for deployment, extensible and open)
• Northbound integration with service provider OSS/BSS systems
• Supporting rapid deployment scenarios
• SP hosted services
• Private cloud
• Reseller
• White label
• Supporting an ecosystem of Cisco and service provider products
3. The Cisco HCM-F Administrative UI: Allows configuration of management and monitoring of UC
applications through Cisco HCM-F services by automatic and manual changes to the Shared Data
Repository.
4. Services to create and license UC application servers:
• Cisco HCS IPA Service
• Cisco HCS License Manager Service
Based on data extracted from the Shared Data Repository, these three services work together to
automatically configure the Cisco Prime Collaboration Assurance to monitor Unified Communications
Applications and customer equipment.
6. An HCS Northbound Interface (NBI) API service: Provides a programmable interface for integration with
Service Provider OSS/BSS systems.
7. Billing services through Service Inventory: Provides the service provider with reports on customers,
subscribers, and devices. These reports are used by the service provider to generate billing records for
their customers.
8. Platform Manager: An installation, upgrade, restart and backup management client for Cisco Unified
Communications Manager, Cisco Unified Communications Manager IM and Presence Service, and Cisco
Unity Connection applications. The Platform Manager allows you to manage and monitor the installation,
upgrade, restart and backup of these servers. You can configure the system server inventory as well as
select, schedule, and monitor upgrades of one or more servers across one or more clusters. You access
the Platform Manager through the Cisco HCM-F administrative interface.
The figure below displays how the HCM-F Application Node fits into the HCS solution and the interactions between various HCM-F services and other solution components.
Tip Cisco Prime Collaboration Deployment does not delete the source cluster VMs
after migration is complete. You can fail over to the source VMs if there is a
problem with the new VMs. When you are satisfied with the migration, you can
manually delete the source VMs.
The functions that are supported by the Cisco Prime Collaboration Deployment can be found in the Prime Collaboration Deployment Administration Guide.
The functions that are supported by Platform Manager are listed in the following tables. Each table identifies
the UC applications and versions that the functions support. The support for UC applications and their versions
is irrespective of Cisco HCS releases.
Cisco Unified Communications Manager
• Cluster Discovery: 9.0(1), 9.1(1), 9.1(2), 10.x, 11.x, 12.x
• Migration to a 10.x/11.x/12.x cluster: from 6.1(5), 7.1(3), 7.1(5), 8.0(1), 8.0(2), 8.0(3), 8.5(1), 8.6(1), 8.6(2), 9.0(1), 9.1(1), 9.1(2), 10.x, 11.x, 12.x to 10.x, 11.x, 12.x
• Upgrade task (upgrade application server or install COP files): from 10.5(x), 11.x, 12.x to 10.5(x), 11.x, 12.x
• Restart task (10.x/11.x/12.x): 9.0(1), 9.1(1), 9.1(2), 10.x, 11.x, 12.x
• Switch version task (10.x/11.x/12.x): 9.0(1), 9.1(1), 9.1(2), 10.x, 11.x, 12.x
• Fresh install a new cluster: 10.x, 11.x, 12.x
• Readdress task (change hostname or IP addresses for one or more nodes in a cluster): 10.x, 11.x, 12.x

Cisco Unified Communications Manager IM and Presence Service
• Cluster Discovery: 9.0(1), 9.1(1), 10.x, 11.x, 12.x
• Migration to a 10.x/11.x/12.x cluster: from 9.0(1), 9.1(1), 10.x, 11.x, 12.x to 10.x, 11.x, 12.x. NOTE: Prime Collaboration Deployment migration from 11.x/12.0+ to 11.x+/12.0 is not supported if "11.x+/12.0" is an identical version, that is, same major, same minor, same MR, same SU/ES.
• Upgrade task (upgrade application server or install COP files): from 9.0(1), 10.5(x), 11.x, 12.x to 10.5(x), 11.x, 12.x
• Restart task (10.x/11.x/12.x): 9.0(1), 9.1(1), 10.x, 11.x, 12.x
• Switch version task (10.x/11.x/12.x): 9.0(1), 9.1(1), 10.x, 11.x, 12.x
• Fresh install a new cluster: 10.x, 11.x, 12.x
• Readdress task (change hostname or IP addresses for one or more nodes in a cluster): Not supported*

Cisco Unity Connection
• Cluster Discovery: 8.6.1, 8.6.2, 9.x, 10.x, 11.x, 12.x
• Migration to a 10.x/11.x/12.x cluster: Not supported
• Upgrade task (upgrade application server or install COP files): from 10.5(x), 11.x, 12.x to 10.5(x), 11.x, and 12.x
• Restart task (10.x/11.x/12.x): 8.6(1), 8.6(2), 9.x, 10.x, 11.x, 12.x
• Switch version task (10.x/11.x/12.x): 8.6(1), 8.6(2), 9.x, 10.x, 11.x, 12.x
• Fresh install a new cluster: 10.x, 11.x, 12.x
• Readdress task (change hostname or IP addresses for one or more nodes in a cluster): 10.x, 11.x, 12.x
Note *Changing a hostname in Cisco Unified IM and Presence Service must be done manually. Refer to the version
of the Changing IP Address and Hostname for Cisco Unified Communications Manager and IM and Presence
Service document that applies to your configuration.
Cisco supports virtualized deployments of Cisco Prime Collaboration Deployment. The application is deployed
by using an OVA that contains the preinstalled application. This OVA is obtained with a licensed copy of
Cisco Unified Communications Manager software. For more information about how to extract and deploy
the PCD_VAPP.OVA file, see the Cisco Prime Collaboration Deployment Administration Guide.
In your Cisco HCS environment, install only one instance of Cisco Prime Collaboration Deployment, which
must have the following:
• Access to all Cisco Unified Communications Manager clusters for all customers, including those behind
a NAT
Use the Cluster Discovery feature to find application clusters on which to perform fresh installs, migration,
and upgrade functions. Perform this discovery on a blade-by-blade basis.
For more information about features, installation, configuration and administration, best practices, and
troubleshooting, see the following documents:
• Prime Collaboration Deployment Administration Guide
• Release Notes for Cisco Prime Collaboration Deployment
Note When deploying Cisco HCS in the hosted environment, you must not have NAT between any end device
(phone) and the Cisco Unified Communications Manager (UC application) on the line side, because some of
the mid-call features may not function properly. However, when Over The Top access is supported (using
Expressway, etc.), there can be NAT in front of the endpoint. It is also recommended that the HCS Management
applications not be deployed within a NAT. Using NAT between the vCenter Server system and ESXi/ESX
hosts is an unsupported configuration. For more details, see http://kb.vmware.com/kb/1010652
Device Layer
This layer interfaces with the Domain Manager layer and comprises Cisco Unified Communications Manager,
Cisco Unity Connection, Cisco Unified Communications Manager IM and Presence, and Cisco Webex, modeled as devices from the Cisco HCS perspective.
Cisco HCS Application and Infrastructure layer delivers a full set of Cisco UC and collaboration services,
including:
• Voice
• Video
• Messaging and presence
• Audio conferencing
• Mobility
• Contact center
• Collaboration
Note In this document, the term License Manager refers to both Enterprise License Manager and Prime License
Manager.
HLM runs as a stand-alone Java application on the Hosted Collaboration Mediation Fulfillment platform,
utilizing Cisco Hosted Collaboration Mediation Fulfillment service infrastructure and message framework.
There is one HLM per deployment of Cisco HCS. HLM and its associated License Manager manage licenses
for Cisco Unified Communications Manager, Cisco Unity Connection, and TelePresence Room.
If it is not running, start HLM using the following command: utils service start Cisco HCS License Manager
Service. This service must run to provide HLM functionality.
Note There is no licensing requirement for Cisco Unified Communications Manager IM and Presence Service, and
Cisco Unified Communications Manager IM.
HCS supports multiple deployment modes. A deployment mode can be Cisco HCS, Cisco HCS-Large Enterprise (HCS-LE), or Enterprise. Each Prime License Manager is added with a deployment mode, and all UC clusters added to the License Manager must have the same deployment mode as the License Manager. License Managers with different deployment modes can be added to HCM-F. When adding a License Manager, the default deployment mode is selected, but it can be manually changed by selecting a different deployment mode from the drop-down menu.
Through the Cisco Hosted Collaboration Mediation Fulfillment NBI or GUI, an administrator can create,
read, or delete a License Manager instance in Cisco HCM-F. A Cisco Hosted Collaboration Mediation
Fulfillment administrator cannot perform any licensing management function until HLM validates its connection
to the installed License Manager and its license file is uploaded. HLM exposes an interface to list all of the
License Manager instances.
After the administrator adds and validates a License Manager instance to the HLM, you can assign a customer
to the License Manager. This action does not automatically assign all Cisco Unified CM and Cisco Unity
Connection clusters within this customer to that License Manager. The administrator must assign each Cisco
Unified CM or Cisco Unity Connection cluster to a License Manager after the associated customer is assigned
to that License Manager. If the customer is not assigned to License Manager, the cluster assignment fails, and
you are advised to associate the customer with a License Manager first.
The administrator can unassign a UC cluster from a License Manager through the HLM NBI or GUI.
For more information about Prime License Manager, see Cisco Prime License Manager User Guide.
HLM supports License Report generation. The report includes all customers on the system with aggregate
license consumption at the customer level.
Note Customers that are assigned to Enterprise Licensing Manager 9.0 are not reported. The license usage of 9.0
clusters that are assigned to Enterprise Licensing Manager 9.1 is not counted in the report either.
An optional field Deal ID at the customer level is included in the report. Each customer has zero or more Deal
IDs that can be configured through the HCM-F GUI.
The administrator requests the system-level Cisco HCS license report through the HLM GUI or NBI. The report request generates two files, in CSV and XLSX format. Both files are saved into the HLM license report repository (/opt/hcs/hlm/reports/system) for download. The retention period of the report is set to 60 days by default.
License Manager manages licensing for Unified CM and Cisco Unity Connection clusters in an enterprise.
Cisco Hosted Collaboration Solution supports only standalone Prime License Manager.
Cisco Emergency Responder (CER) enhances the existing emergency 9-1-1 functionality offered by Cisco
Unified Communications Manager by sending emergency calls to the appropriate Public Safety Answering
Point (PSAP). Cisco Emergency Responder is ordered as a Cisco HCS add-on license.
For more information, see the Cisco Unified Communications Domain Manager Maintain and Operate Guide
and Cisco Hosted Collaboration Solution License Management.
Each deployment mode must have its own License Manager, and all UC clusters added to the License Manager
must have the same deployment mode as the License Manager. When you add a License Manager, Default
Deployment Mode is automatically selected. You can select a different deployment mode from the Default
Deployment Mode drop-down list.
Note Cisco HCM-F supports License Managers with different deployment modes.
Step 1 From the side menu, select License Management > License Manager Summary.
Step 2 Click Add New.
Step 3 Enter the following information:
• Name: The name of the License Manager instance.
• Hostname: The hostname or IP address of the License Manager instance. If a hostname is specified, it must be a fully qualified domain name. If an IP address is specified, ensure that the IP address specified is the NAT IP address of the License Manager. Note: If the License Manager is in Application Space, ensure that the Hostname field has the NAT IP address of the License Manager specified.
• License Manager Cluster Capacity: The License Manager Cluster Capacity is set at 1000 and cannot be edited.
• User ID: The OS administrator user ID associated with the License Manager.
• Re-enter Password: Re-enter the password associated with the user ID.
• Deployment Mode: Select the required Deployment Mode from the drop-down list. Note: Licenses of Cisco Collaboration Flex Plan work only in HCS mode.
In a Shared Architecture setup, the License Dashboard may not provide accurate data.
In a co-existing deployment, Cisco Hosted Collaboration Mediation Fulfillment can be configured to have Unified Communications Domain Manager 8.x, 10.x and UC applications for the License Dashboard to provide accurate data.
The License Dashboard is available only with the supported SI Report versions.
For details on License Dashboard REST APIs, see Cisco Hosted Collaboration Mediation Fulfillment Developer
Guide.
For more information on Smart Licensing, see Cisco Hosted Collaboration Solution Smart Licensing Guide.
Prime License Manager (PLM)
For details on installing and configuring Cisco Prime License Manager, see the Cisco Prime License Manager
User Guide
The license types available for Collaboration Flex Plan - Hosted are:
• Cisco HCS Standard licenses for your knowledge workers.
• Cisco HCS Foundation for public space phones.
• Cisco HCS Essential licenses for analog phones such as fax machines.
• Cisco HCS Standard Messaging license for voicemail.
Figure 11: HCS and Collaboration Flex Plan - Hosted - License Management
For more information, see the Cisco Unified Communications Licensing page at http://www.cisco.com/c/en/
us/products/unified-communications/unified-communications-licensing/index.html.
Note Before adding the Coresident PLM, ensure that you add the Unified CM cluster and applications in HCM-F with all the network settings and credentials.
Ensure that the License Management service is started to activate Cisco Prime License Manager Resource
API and Cisco Prime License Manager Resource Legacy API using the CLI commands:
• utils service activate Cisco Prime LM Resource API
• utils service activate Cisco Prime LM Resource Legacy API
Overview of Smart Licensing
Smart Licensing is a Cisco initiative to move all licenses to the cloud. The purpose of this initiative is to simplify license management for HCS partners and enable them to adopt Cisco's cloud-based license management system. Smart Licensing helps in overcoming most of the limitations of the traditional PAK-based licenses. Most Cisco products, including routing, switching, security, and collaboration products, support Smart Licensing.
Smart Licensing in HCS depends on Cisco Smart Software Manager (CSSM) and HCM-F. In CSSM you can activate and manage all Cisco licenses. HCM-F simplifies the registration and activation of UC applications with CSSM, manages Smart Licenses, and generates licensing reports for inventory and billing purposes. HCM-F also provides licensing dashboards for consumption details and compliance status.
PLM is not supported for UC application cluster versions higher than 11.x. Register all 12.x UC application clusters to CSSM.
HCM-F currently supports registration of UC applications to Prime License Manager (PLM) for consuming the traditional PAK-based licenses. UC application versions 11.x or earlier support registration through PLM.
For more information about PLM, see Cisco Hosted Collaboration Solution License Management.
Smart Licensing helps simplify three core functions:
• Purchasing: The software that you have installed in your network can automatically self-register, without Product Activation Keys (PAKs).
• Management: You can automatically track activations against your license entitlements. Also, you do
not need to install the license file on every node. You can create License Pools (logical grouping of
licenses) to reflect your organization structure. Smart Licensing offers you Cisco Smart Software Manager,
a centralized portal that enables you to manage all your Cisco software licenses from one centralized
website.
• Reporting: Through the portal, Smart Licensing offers an integrated view of the licenses you purchased
and the licenses that are deployed in your network. You can use this data to make better purchase decisions,
based on your consumption.
Cisco Smart Software Licensing helps you to procure, deploy, and manage licenses easily; devices register and report license consumption, removing the need for product activation keys (PAKs). It pools license entitlements in a single account and allows you to move licenses freely through the network, wherever you need them. It is enabled across Cisco products and managed by a direct cloud-based or mediated deployment model.
The Cisco Smart Software Licensing service registers the product instance, reports license usage, and obtains
the necessary authorization from Cisco Smart Software Manager.
HCM-F enables the user to perform multiple tasks, such as changing the license deployment to Hosted Collaboration Solution (HCS), setting the transport mode for UC applications, creating a token in CSSM, and registering the UC applications and validating the registration. If a task fails, HCM-F collects the error messages from the UC application or CSSM and updates the HCM-F job entry with the issue details.
CSSM reports at the smart account level and product level; however, user information is not available at these levels. HCM-F provides the Service Inventory report and the HLM report of license usage at the customer level and virtual account level. It also provides licensing dashboards to display the usage.
You can use Smart Licensing to:
• See the license usage and count.
• Track smart account-related alerts, change the preference settings, and configure email notifications. To do this, navigate to Smart Software Licensing in Cisco Smart Software Manager.
For additional information, go to https://software.cisco.com.
Smart Versus Traditional Licensing
• Traditional: You procure the license and manually install it on the PLM. Smart: Your device requests the licenses that it needs from CSSM.
• Traditional: Node-locked licenses; the license is associated with a specific device. Smart: Pooled licenses; Smart Accounts are company-specific accounts whose licenses can be used with any compatible device in your company.
• Traditional: No common install base location to view the licenses that are purchased or software usage trends. Smart: Licenses are stored securely on Cisco servers that are accessible 24x7x365.
• Traditional: No easy means to transfer licenses from one device to another. Smart: Licenses can be moved between product instances without a license transfer, which greatly simplifies the reassignment of a software license as part of the Return Material Authorization (RMA) process.
• Traditional: Limited visibility into all software licenses being used in the network; licenses are tracked only on a per-node basis. Smart: Complete view of all Smart Software Licenses used in the network, using a consolidated usage report of software licenses and devices in one easy-to-use portal.
Smart Accounts and Virtual Accounts
Default and Override at Each Level
Advantage: It is simple and automatically takes care of the entire license mode assignment.
Disadvantage: There is a risk in how it is interpreted. For example, if the administrator updates the SA-level licensing mode, the license mode does not change for existing virtual accounts; however, any new virtual account that is synced is assigned this license mode. Also, the license mode setting at the SA level may show one type, whereas the license mode setting at an individual virtual account may show a different type.
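The default-and-override behavior described above can be summarized in a small conceptual sketch (Python, purely illustrative; it is not part of any Cisco API). The smart-account-level mode acts as a default for newly synced virtual accounts, while a mode already set on a virtual account takes precedence.

    def effective_license_mode(va_mode, sa_mode):
        # A virtual account keeps its own mode if one was assigned;
        # otherwise it inherits the smart-account-level default.
        return va_mode if va_mode is not None else sa_mode

    # An existing virtual account keeps its mode even after the SA default changes:
    print(effective_license_mode("HCS", "Enterprise"))   # -> HCS
    # A newly synced virtual account with no explicit mode inherits the SA default:
    print(effective_license_mode(None, "Enterprise"))    # -> Enterprise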
Cloud Connectivity
Set the transport mode in HCM-F to connect HCM-F and UC applications to CSSM.
The first option is Proxy transport mode (connection to Cisco Smart Software Manager through a proxy server), where data transfer happens directly over the Internet to the cloud server through an HTTPS proxy, either the Smart Call Home Transport Gateway or an off-the-shelf HTTPS proxy such as Apache.
The third option is Direct transport mode (direct connection to Cisco Smart Software Manager on cisco.com), where data transfer happens over the Internet from the devices directly to the CSSM (cloud server) through HTTPS. In Direct transport mode, HCM-F connects directly to the Cisco Smart Software Manager on cisco.com.
When a Smart Account is provisioned with client credentials (Client ID and Client Secret) in HCM-F, HCM-F authenticates with the Cisco Authentication Gateway using those credentials and obtains an access token from the Cisco Authentication Gateway for communicating with CSSM.
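As an illustration of this client-credentials exchange, the following Python sketch obtains a bearer token and presents it on a later request. The endpoint URLs are placeholders, not documented Cisco interfaces; the exact URLs, scopes, and payloads that HCM-F uses are internal to the product.

    import requests

    # Placeholder endpoints; the real authentication gateway and CSSM URLs are internal to HCM-F.
    TOKEN_URL = "https://auth.example.com/oauth2/token"
    CSSM_URL = "https://cssm.example.com/api/resource"

    def get_access_token(client_id, client_secret):
        # Standard OAuth 2.0 client-credentials grant.
        resp = requests.post(
            TOKEN_URL,
            data={"grant_type": "client_credentials",
                  "client_id": client_id,
                  "client_secret": client_secret},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["access_token"]

    def call_cssm(token):
        # The bearer token authorizes subsequent calls toward CSSM.
        return requests.get(CSSM_URL,
                            headers={"Authorization": f"Bearer {token}"},
                            timeout=30)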
License Authorization Status
Smart Accounts provide full visibility into all types of Cisco software licenses except for Right-To-Use (RTU)
licenses. The greatest benefit of a Smart Account is achieved when consuming a Smart License.
• For Smart Licensing, no PAKs are required and it’s easy to order and activate Smart Licenses.
• For Classic, PAK-based licenses, you gain enterprise-wide visibility of PAK licenses and devices that
are assigned to the Smart Account.
• For Cisco Enterprise Agreements (EA), you benefit from simplified EA management, enterprise-wide
visibility, and automatic license fulfillment.
When the user orders the licenses in CCW (Cisco Commerce Workspace), the user should select the smart account and virtual account so that all the licenses are sent to that virtual account.
Voice and Video Unified Dashboard
Cisco Prime Collaboration Advanced includes three separate modules: Provisioning, Assurance, and Analytics.
Prime Collaboration Analytics helps you to identify the traffic trend, technology adoption trend,
over-and-under-utilized resources, and device resource usages in your network. You can also track intermittent
and recurring network issues and address service quality issues using the Prime Collaboration Analytics
Dashboards. Prime Collaboration Assurance in MSP mode supports only three features of Analytics: Traffic
Analysis, UC System Performance, and Service Experience/Call Quality. Cisco Prime Collaboration Standard
includes a subset of the features available in the Provisioning and Assurance modules. The Analytics module
and Cisco Prime Collaboration Contact Center Assurance are available as part of the Cisco Prime Collaboration
Advanced offer only.
Cisco Prime Collaboration Standard is included with Cisco Unified Workspace Licensing and Cisco User
Connect Licensing for Cisco Unified Communications. It provides essential provisioning and assurance
management to support deployments of Cisco Unified Communications Manager 10.0 and later.
Cisco Prime Collaboration Assurance features include the following:
• Support for Cisco Unified Communications components, including Cisco Unified Communications Manager, Cisco Unity Connection, and Cisco Unified Communications Manager IM and Presence Service.
• Complete view of Contact Center through Dashboards that enable end-to-end monitoring of your Contact
Center components.
• Support for Contact Center Topology view, fault management, and alarm correlation.
• Fault monitoring for core Cisco Unified Communications components (Unified Communications Manager,
Cisco Unity Connection).
• Support for TelePresence components including Cisco TelePresence Video Communication Server (Cisco
Expressway).
• Contextual cross launch of serviceability pages of Cisco Unified Communications components.
• Role Based Access Control (RBAC).
• Fault Management, Diagnostics, and Reports.
• Single Sign-On and Analytics.
Note Cisco Hosted Collaboration Solution supports a single HCM-F with one or more Prime Collaboration Assurance (PCA) instances used for monitoring. Running different versions of Prime Collaboration Assurance in the same environment is not supported.
Device Inventory/Inventory Management
Refer to the Prime Collaboration Dashboards to learn how the dashlets are populated after deploying the Cisco
Prime Collaboration Assurance servers.
Voice and Video Endpoint Monitoring
In addition, Cisco Prime Collaboration Assurance continuously monitors active calls supported by the Cisco
Unified Communications system and provides near real-time notification when the voice quality of a call fails
to meet a user-defined quality threshold. Cisco Prime Collaboration Assurance also allows you to perform
call classification based on a local dial plan.
See Prerequisites for Setting Up the Network for Monitoring in Cisco Prime Collaboration Network Monitoring,
Reporting, and Diagnostics Guide, 9.x and later to understand how to monitor IP Phones and TelePresence.
Diagnostics
Prime Collaboration uses Cisco Medianet technology to identify and isolate video issues. It provides media
path computation, statistics collection, and synthetic traffic generation.
In addition, for IP phones, Prime Collaboration uses the IP SLA to monitor the reachability of key phones in
the network. A phone status test consists of:
• A list of IP phones to test.
• A configurable test schedule.
• IP SLA-based pings from an IP SLA-capable device (for example, a switch, a router, or a voice router)
to the IP phones. Optionally, it also pings from the Prime Collaboration server to IP phones.
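The following Python sketch is only a conceptual illustration of what such a phone status test does: it pings a configured list of phone IP addresses on a schedule and records which ones respond. It does not reflect how Prime Collaboration or IP SLA is implemented; the addresses, interval, and ping flags (Linux-style ping -c 1 -W 2) are assumptions for the example.

    import subprocess
    import time

    PHONES = ["10.10.20.11", "10.10.20.12"]   # list of IP phones to test (example addresses)
    INTERVAL_SECONDS = 300                    # configurable test schedule

    def phone_reachable(ip):
        # Send a single ICMP echo request and report success or failure.
        result = subprocess.run(["ping", "-c", "1", "-W", "2", ip],
                                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0

    def run_once():
        return {ip: phone_reachable(ip) for ip in PHONES}

    if __name__ == "__main__":
        while True:
            print(run_once())
            time.sleep(INTERVAL_SECONDS)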
Fault Management
Prime Collaboration provides quick, accurate, near real-time fault detection. After identifying an event, Prime Collaboration groups it with related events and performs fault analysis to determine the root cause of the fault.
Prime Collaboration allows you to monitor the events that are important to you. You can customize the event severity and choose to receive notifications from Prime Collaboration based on the severity.
Prime Collaboration generates traps for alarms and events and sends notifications to the trap receiver. These traps are based on events and alarms that are generated by the Prime Collaboration server. The traps are converted into SNMPv2c notifications and are formatted according to the CISCO-EPM-NOTIFICATION-MIB.
Reports
Prime Collaboration Assurance provides the following predefined reports and customizable reports:
• Inventory Reports—Provide IP phone, audio phone, video phone, SRST phone, audio SIP phone, and
IP communicator inventory details. Inventory reports also provide information about CTI applications,
ATA devices, and the Cisco 1040 Sensor. Provides information on managed or unmanaged devices, and
the endpoints displayed in the Endpoints Diagnostics page.
• Call Quality Event History Reports—Provide the history of call quality events. Event History reports
can display information for both devices and clusters. You can use Event History to generate customized
reports of specific events, specific dates, and specific device groups.
• CDR & CMR Reports—Provide call details such as call category type, call class, call duration, termination type, call release code, and so on.
• NAM & Sensor Reports—Provide call details collected from a Sensor or NAM, such as MOS, jitter, time stamp, and so on.
• TelePresence Endpoint Reports—Provide details on completed and in-progress conferences, endpoint utilization, and No Show endpoints. TelePresence reports also provide a list of conferencing devices and their average and peak utilization in your network.
• Activity Reports—Provide information about IP phones and video phones that have undergone a status
change during the previous 1 to 30 days.
Cisco Expressway
Cisco Expressway can be deployed in Cisco Hosted Collaboration Solution for Collaboration Edge to support
Over the Top (OTT) connectivity for HCS Endpoints and for Business to Business calls using a shared
Expressway.
Connectivity to Unified Communications Manager (for OTT) can be either secure or non-secure from remote endpoints. Two distinct sessions (TCP and TLS) are established, with session traffic multiplexed over these connections.
Audio and video media streams are secured with SRTP; BFCP, iX, and FECC are also negotiated and relayed through the edge components.
Note For OTT deployments, hard endpoints must have client certificates to connect to the edge and therefore, must
be configured in secure mode.
Aggregation System Architecture
Note The content under the title OTT Deployment and Secured Internet with Collaboration Edge Expressway is
existing content in the current SRND that has been added for context.
CHAPTER 2
Network Architecture
• Service Provider IP Infrastructure, on page 47
• Signaling Aggregation Infrastructure, on page 57
The following table outlines the set of components and their intended placement within the end-to-end system
architecture.
HCS Traffic Types
Table 5: Components Requiring IP Connectivity (columns: Device Category, Network Placement, Sample Device)
Each of the devices shown in the preceding table requires IP connectivity to one or more other devices.
Media
• On-premises endpoints of one customer must have reachability to on-premises endpoints of another
customer for interenterprise on-net calls.
• On-premises endpoints must have reachability to the PSTN media gateway (MGW) in the service provider's data center.
Management
• Per-customer management components in the service provider's data center must have reachability to
multitenant management components in the service provider's data center.
• In the case of managed CPE or SRST routers, the on-premises CPE management address must have
reachability to per-customer management components in the service provider's data center. For Cisco
Prime Unified Operations Manager, this must currently be accomplished without using PAT or NAT.
• The on-premises LDAP server must have reachability to the customer IM and Presence Service server instance in the service provider's data center.
Data
• Connectivity between multiple sites within an enterprise customer.
• No direct connectivity between sites of different enterprise customers.
• Because multiple enterprise customers share service provider IP (and data center) infrastructure as a
transport medium, some fundamental design and security constraints must be addressed:
• On-premises components of one enterprise must not negatively impact hosted components of other
enterprises or the service provider network in general.
• Customer traffic must be segregated as it passes through the service provider IP (and data center)
infrastructure. This is because multiple customers use the same infrastructure to access applications
hosted in the service provider data center.
• While providing overall traffic segregation, the service provider must support some intercustomer communication. For example, media for intercustomer on-net calls can be sent over an IP network between endpoints in two different enterprises without being sent to the PSTN.
• IP network design must consider potential overlapping address spaces of both on-premises and
hosted components for multiple enterprises.
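Because on-premises and hosted address spaces for different customers may overlap, provider tooling typically has to detect such overlaps explicitly. The short Python sketch below is illustrative only (it uses the standard ipaddress module and made-up prefixes) and is not part of any Cisco tool.

    from ipaddress import ip_network
    from itertools import combinations

    # Example per-customer prefixes; two customers here reuse the same RFC 1918 space.
    customer_prefixes = {
        "customerA": ["10.1.0.0/16"],
        "customerB": ["10.1.0.0/16", "192.168.10.0/24"],
    }

    def find_overlaps(prefix_map):
        # Compare every pair of customers and record any overlapping prefixes.
        overlaps = []
        for (cust1, nets1), (cust2, nets2) in combinations(prefix_map.items(), 2):
            for n1 in nets1:
                for n2 in nets2:
                    if ip_network(n1).overlaps(ip_network(n2)):
                        overlaps.append((cust1, n1, cust2, n2))
        return overlaps

    print(find_overlaps(customer_prefixes))
    # -> [('customerA', '10.1.0.0/16', 'customerB', '10.1.0.0/16')]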
Note The use of network address translation (NAT) address space is not recommended for management applications
such as Cisco Hosted Collaboration Mediation Fulfillment Layer (HCM-F) when they are accessed from
customer Unified Communications applications.
Service Provider NAT/PAT Design
These components must be directly accessed from the individual customer domains without network address
translation of the management components.
Figure 12: HCS Management Addressing Scheme
The deployment scheme shown in the preceding figure is the preferred and validated method, which enables
all management features to work correctly.
Note Some deployments do not follow the above recommended best practice, and problems with some features have been encountered; for example, with platform upgrade manager or automation of assurance provisioning. We highly recommend that you migrate noncomplying deployments to the above Cisco HCS supported and validated deployment. In other words, the addresses of management applications such as HCM-F must be directly accessible (without NAT) from the UC applications, whereas the UC applications can have their addresses translated (NAT) when they are accessed from the management applications.
Grouping VLANs and VLAN Numbering
Use the following number scheme if only two VLANs are configured for each end customer:
• 0100 to 1999: UC Apps (100 to 999 are the customer IDs for Group 1)
• 2100 to 3999: outside VLANs (100 to 999 are the customer IDs for Group 1)
While this is the recommended grouping of VLANs to help you scale the number of customers that can be hosted on a Cisco HCS platform, you may reach the upper limit of customers due to limitations in other areas of the Cisco HCS solution.
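As an illustration only, the following sketch shows one way a provisioning script might derive per-customer VLAN IDs from a numbering scheme of this kind. The base values (1000 for UC application VLANs, 2000 for outside VLANs) and the Group 1 customer ID range are assumptions for the example, not values mandated by this guide.

    def vlan_ids(customer_id, uc_base=1000, outside_base=2000):
        # Assumed mapping: VLAN = base + customer ID, with Group 1 IDs 100 to 999.
        if not 100 <= customer_id <= 999:
            raise ValueError("Group 1 customer IDs are assumed to be 100 to 999")
        return {"uc_apps_vlan": uc_base + customer_id,
                "outside_vlan": outside_base + customer_id}

    print(vlan_ids(250))   # -> {'uc_apps_vlan': 1250, 'outside_vlan': 2250} with the assumed bases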
VPN Options
The following VPN options are supported in an HCS deployment:
1. MPLS VPN
2. Site-to-Site IPsec VPN
3. FlexVPN
4. AnyConnect VPN
5. For access options that do not require VPN, see Cisco Expressway Over-the-Top Solution Overview, on
page 97
Service Provider IP Infrastructure Design: MPLS VPN
• Use of MPLS VPN and VLAN to provide customer traffic isolation and segregation
Endpoints in individual customer sites connect to the service provider network through MPLS Provider Edge (PE) devices. Customer traffic may be untagged, in which case physical interfaces are used on MPLS PE devices. Or the service provider may choose to use a bump-in-the-wire approach and aggregate multiple customers on the same physical MPLS PE interface, in which case each customer is assigned its own VLAN and is terminated on a customer-specific subinterface with 802.1Q encapsulation that matches the VLAN sent by the customer.
The customer-facing MPLS PE device is responsible for implementing per-customer MPLS Layer 3 Virtual
Private Network (VPN), which provides customer traffic separation through the service provider MPLS-IP
infrastructure.
As an MPLS VPN PE node this device is responsible for the following:
• Defining customer-specific VRF
• Assigning customer-facing interfaces to VRF
• Implementing PE-CE Routing protocol for route exchange
• Implementing Multiprotocol BGP (M-BGP) for VPN route exchange through the MPLS Core
• Routing redistribution between PE-CE and M-BGP routing protocol
MPLS Provider (P) routers are core service provider routers, responsible for high-speed data transfer through
the service provider backbone. Depending upon overall service provider design, this P router may or may not
be part of M-BGP deployment. Other than regular service provider routing and MPLS operations, there is no
specific Cisco HCS-related requirement.
Per-customer MPLS VPN services initiated at the customer-facing MPLS PE devices are terminated at the
data center facing MPLS PEs. The implementation at data center core facing MPLS PEs is the same as the
customer-facing PE device. This effectively means that MPLS L3 VPN is used only in the service provider
MPLS/IP core for customer data transport.
Note Use of labels for MPLS VPN may push the packet size beyond the default maximum of 1500 bytes, which may cause fragmentation in some cases. A good practice is to increase the MTU size to accommodate these added bytes.
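As a rough worked example of that note, the sketch below shows the arithmetic behind the recommendation, assuming the common case of two 4-byte MPLS labels on an L3VPN path; the actual label depth depends on the provider's design.

    # Each MPLS label adds 4 bytes; an L3VPN path commonly carries two labels
    # (transport plus VPN), although the real label depth depends on the design.
    IP_MTU = 1500
    LABEL_SIZE = 4
    label_depth = 2

    required_core_mtu = IP_MTU + label_depth * LABEL_SIZE
    print(required_core_mtu)   # -> 1508: core links need an MTU of at least this value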
The data center core-facing interfaces on the MPLS PE implement a per-customer sub-interface, which is
configured for the customer VRF and is a VLAN unique to each customer. In other words, customer traffic
handoff from service provider core to the data center core devices is based on per-customer VLAN. Data
center infrastructure uses this VLAN to implement VRF-Lite for customer traffic separation.
A similar approach is used to hand over customer traffic to the Session Border Controller (SBC). Any intercustomer calls, or any calls to the PSTN, go through the SBC. The Nexus 7000 device hands off customer traffic to the SBC using a per-customer subinterface, similar to the data center handoff. The Session Border Controller is responsible for correctly routing customer calls, based on the configuration within the SBC.
HCS Tenant Connectivity Over Internet Model
Note This solution is meant to enable a Cisco HCS tenant site and not a single user.
IPsec is a framework of open standards. It provides security for the transmission of sensitive information over
unprotected networks such as the Internet. IPsec acts at the network layer, protecting and authenticating IP
packets between participating IPsec devices or peers, such as Cisco routers.
Figure 14: Architecture for Site Connectivity Over Internet
In the above diagram, the IP gateway is the device that the service provider typically has in its IP cloud for Internet connectivity. There is no mandate on which IP router to use, as long as it provides the IP routing
capabilities for the incoming traffic over IPsec to the appropriate VPN concentrator in the service provider's HCS data center for IPsec VPN tunnel termination. As shown in the diagram, the VPN concentrator recommended for this kind of deployment is the ASR 1000, which sits inside the Service Provider Cisco HCS Data Center as a centralized VPN concentrator. This is called a site-to-site IPsec VPN tunnel on the ASR router.
Figure 15: Detailed Architecture for Connectivity Over Internet
As shown above, the cloud for the MPLS traffic and the cloud for the Internet traffic are considered different from one another in terms of how they ingress the service provider's network. For traffic coming from the Internet, the IP gateway is the ingress point, whereas for traffic coming from the MPLS cloud, the PE is the ingress point.
The above architecture applies to the aggregation-only layer of the design within the data center. Deploy the VPN concentrator as other services are typically deployed in this layer. Use a dedicated ASR 1000 as the VPN concentrator, because encryption and decryption happen on the ASR 1000 and running other services on it may impact overall performance.
There are two techniques for deploying this solution within the Service Provider Cisco HCS Data Center.
1. Use Layer 3 between the IP gateway and ASR 1000. In this case, the Nexus 7000 switch is used as a
router.
The Nexus 7000 acts as a default gateway for ingress and egress traffic for encrypted traffic in the global
routing table.
2. Use Layer 2 technology between the IP gateway and the ASR 1000. In this case, the Nexus 7000 switch is transparent to the traffic and to the ASR 1000.
The ASR 1000 acts as the default gateway for ingress traffic, and the IP gateway is used as the egress default gateway for encrypted traffic in the global routing table.
You can deploy using the Layer 2 connectivity between the IP gateway and ASR 1000. This keeps this
inter-connectivity architecture as an overlay network on top of the Cisco HCS VPN based network.
There are multiple ways to deploy this over the Internet solution within the SP's data center.
1. Bring the IPsec tunnel directly to the ASR 1000 (VPN concentrator), which decrypts into VRF and connects
to the south VRF on Nexus 7000 using a static route per tenant. On this tenant, it points to the Nexus 7000
aggregation and similarly builds a static route per tenant on Nexus 7000 for any outgoing traffic. You
also require one more static route on the Nexus 7000 toward the SBC for any inter SMB traffic or PSTN
traffic.
2. Bring the IPsec tunnel directly to the ASR 1000 (VPN concentrator) and connect it to the Nexus 7000 aggregation using the dynamic routing protocol BGP. Dynamic BGP also has the advantage of redistributing the IPsec RRI routes from the ASR to the Nexus 7000 automatically.
In the diagram below, the ASR 1000-VPN decrypts into VRF and this VRF is connected to the Northbound
VRF on N7000. Then it goes to ASA Outside, and from ASA Inside to Southbound VRF on the Nexus 7000,
then to UC Applications.
Figure 16: Detailed Architecture for Connectivity Over Internet
The IP addresses on the Customer Premises Equipment (CPE) and the VPN concentrator need to be publicly reachable. For all customer sites, there is only one common public IP address, which they use to connect.
IPsec tunnels are sets of security associations (SAs) that are established between two IPsec peers. The SAs define the protocols and the algorithms to be applied to sensitive packets and specify the keying material to be used by the two peers. SAs are unidirectional and are established per security protocol (AH or ESP).
With IPsec, you define the traffic that should be protected between two IPsec peers by configuring access
lists and applying these access lists to interfaces by way of crypto map sets. Therefore, traffic may be selected
on the basis of source and destination address, and optionally Layer 4 protocol, and port.
Note The access lists used for IPsec are used only to determine the traffic that should be protected by IPsec, and
not the traffic that should be blocked or permitted through the interface. Separate access lists define blocking
and permitting at the interface.
Access lists associated with IPsec crypto map entries also represent the traffic that a device requires to be protected by IPsec. Inbound traffic is processed against the crypto map entries: if an unprotected packet matches a permit entry in a particular access list associated with an IPsec crypto map entry, that packet is dropped because it was not sent as an IPsec-protected packet.
Cisco recommends static IP addresses on the CPE device and on the VPN concentrator to avoid teardown of
the IPSec tunnel. If the CPE device is using the DHCP or dynamic IP address scheme, there is no way to
establish the tunnel from the central site to the remote site.
FlexVPN
FlexVPN is deployed in HCS as a site-to-site VPN between the customer site and the hosted HCS data center. The FlexVPN-based site-to-site VPN is easy to configure with the IKEv2 smart defaults feature. The deployment model only requires the customer to have Internet access and FlexVPN-capable routers from HCS. Either dedicated or shared Cisco Unified Communications Manager can be used to offer HCS to customers behind the FlexVPN.
The following key assumptions are made with regard to the FlexVPN support:
• Endpoints deployed in the customer premise are directly accessible at layer 3 level from UC Applications
deployed in the HCS data center.
• No NAT is assumed between the customer endpoints and the UC applications.
• The customer VPN client router may be connected to the Internet domain from behind a NAT-enabled, Internet-facing router.
• The VPN client router's WAN-facing address may be private and may be dynamically assigned.
• The VPN server for HCS may need to support a configuration such that a common public IP can be used for all customer VPN client router connectivity.
• Dual tunnels can be established to two different FlexVPN server routers, with tracking enabled at the client side for failover.
AnyConnect VPN
Cisco AnyConnect VPN Client provides secure SSL connections for remote users. You can secure connections through the Cisco ASA 5500 Series using the SSL and DTLS protocols. It provides broad desktop and mobile OS platform support.
ASA for AnyConnect is independent of the existing Firewall ASA in Cisco HCS. You need one ASA per
cluster as multi-context SSL VPN support in ASA is not available yet. AnyConnect split tunneling allows
only the configured applications to go through the VPN tunnel while other Internet traffic from that endpoint
goes outside of the VPN.
Signaling Aggregation Infrastructure
Cisco HCS offers a number of deployment models, depending on the type of services and on interconnect and aggregation component preferences. The different aggregation components can be deployed in various combinations to provide different services. In each case, an "HCS demarcation" point exists, which provides a logical and administrative separation between the service provider network and the Cisco HCS solution for the purposes of network interconnect. The following figure shows the different deployment models and the demarcation in each case.
Figure 19: Deployment Models and Cisco HCS Demarcation
The third-party SBC deployment models require Service Providers to manage:
• Validation and integration southbound (HCS) and northbound (SIP PSTN or IMS)
• Feature and roadmap management
• Support services
In addition, the aggregation layer provides the following functions depending on the device used, for example:
• Multi VRF Support and Multi Customer Support
• Media Anchoring
• Protocol Conversions - Signaling Protocol, DTMF 2833 <> Notify, Late <> Early Offer
• Security, Access, Control Network Demarcation, Admission Control and Topology Hiding
• Routing—All Cisco HCS intercustomer calls traverse the aggregation layer and calls are switched by
the service provider's switch.
The following table provides further details on the specific attributes of each deployment model.
Per Customer SIP Trunking: With this deployment model, a Cisco HCS customer chooses to deploy a dedicated SIP trunk as opposed to using a centralized SBC. This may also be advantageous for Cisco HCS deployments where the Service Provider has not offered a centralized SBC.
IMS Network Integration
Peer-based business trunking: The IMS and the NGCN networks connect as peers through Interconnect
Border Control Functions (IBCF). The business subscribers are not necessarily provisioned in HSS. The point
of interconnection between peer network and IMS is the IMS Ici interface.
Application Server (AS): In this model, Cisco HCS/Unified Communications Manager appears as the Application Server in the IMS network for the mobile phones, and the ISC (IMS Service Control) interface is used between IMS and Cisco HCS. The key requirement here is for Unified Communications Manager to support the ISC interface Route header for application sequencing, so that the Mobile Service Provider can combine features delivered by multiple application servers for the same call. Other significant requirements include support for the P-Charging-Vector and P-Charging-Function-Addresses headers.
Highlights of the Unified Communications Manager IMS Application Server feature are as follows:
• A phone type "IMS-integrated Mobile (Basic)" is introduced. This is modeled after the Cisco Mobile Client. Note that not all MI (Mobility Identity) attributes are available for the IMS client.
• SIP trunk type 'ISC'. The ISC trunk in Cisco Unified Communications Manager adds support for the Route header. Unified Communications Manager uses the top Route header in the initial INVITE to decide how to handle the request: as an originating call, a terminating call, or a regular SIP call.
New call flows are based on a half-call model for calls involving IMS-integrated clients. These are significantly different from the normal call flow in Cisco Unified Communications Manager. When the initial request
(INVITE) is received on a SIP ISC trunk, the topmost Route header must correspond to the Unified Communications Manager (the ISC trunk configuration provides the ability to specify this URI to validate the Route header), and there must be at least one other Route header (corresponding to the S-CSCF). If these conditions are not met, Unified Communications Manager fails the request with "403 Forbidden".
• DTMF and other features for the IMS-integrated Mobile are similar to Cisco Mobile Client features
(hold/exclusive hold/resume/conference/transfer/dusting).
• P-Charging-Vector: The P-Charging-Vector header is defined by 3GPP to correlate charging records
generated from different entities that are related to the same session. It contains the following parameters:
ICID and IOI. Cisco Unified Communications Manager will use cluster ID, concatenated with a unique
number as the icid_value. The IOI identifies both originating and terminating networks involved in a
session/transaction.
IMS Supplementary Services for VoLTE
Central PSTN Gateways
CHAPTER 3
Applications
• Core UC Applications and Integrations, on page 65
• IP Multimedia Subsystem Network Architecture and Components, on page 67
• Video Call Flow in HCS Deployments, on page 68
• Fax, on page 71
• Cisco Webex Meetings - Cisco HCS Deployment, on page 72
• Cisco Webex Cloud Connected Audio , on page 72
• Mobility, on page 76
• Assurance Considerations and Impact to HCM-F, on page 86
• Cisco Hosted Collaboration Mediation Fulfillment Impact, on page 86
• Cisco Collaboration Clients and Applications, on page 87
• Endpoints - Conference, on page 87
• Directory, on page 88
• Client Services Framework – Dial Plan Considerations, on page 89
• Translation Patterns, on page 90
• Application Dialing Rules, on page 90
• Directory Lookup Rules, on page 90
• Client Transformation, on page 90
• Deploying Client Services Framework, on page 90
• Deployment Models for Jabber Clients, on page 91
• Push Notifications, on page 91
• Cisco Webex Hybrid Services Architecture Overview, on page 91
• Cisco Cloud Collaboration Management, on page 92
Core UC Applications and Integrations
Cisco Expressway
Cisco Expressway offers users outside your firewall simple, highly secure access to all collaboration workloads,
including video, voice, content, IM, and presence. Users can collaborate with people who are on third-party
systems and endpoints or in other companies; teleworkers and Cisco Jabber mobile users can work more
effectively on their device of choice.
For more information, see the Cisco Expressway documentation: https://www.cisco.com/c/en/us/support/
unified-communications/expressway-series/tsd-products-support-series-home.html
IP Multimedia Subsystem Network Architecture and Components
• Interrogating-CSCF (I-CSCF)
The P-CSCF forwards registration requests to an I-CSCF, which interrogates HSS to obtain the address
of the relevant S-CSCF to process the SIP initiation request. For call processing, SIP requests are sent
to I-CSCF.
• SIP application servers (AS)
Servers that host and execute services and interface with the S-CSCF using SIP. Cisco Unified Communications Manager (CUCM) functions as an AS in this configuration via an ISC interface.
Non-HCS to HCS Enterprise Point-to-Point Video Calling
calls. However, the trunks carrying video traffic to and from the SBC need to be appropriately configured to
handle video sessions.
Depending on the service provider, aggregation layer routing options, inter-enterprise audio calls may be
hairpinned at the SBC or at a Softswitch in the service provider domain.
Regardless of routing infrastructure within the SP domain, we assume that the SP network preserves the Video
SDP (attributes) so that the inter-enterprise audio call can succeed as a video call if both endpoints support
Video.
Figure 23: Inter-Enterprise Call
HCS Enterprise Video
Depending on the SP network and requirements, you can configure an SBC with a dedicated adjacency to the
non-Cisco HCS video cloud or the SP network can directly connect to the video cloud and provide the routing
across Cisco HCS and non-Cisco HCS video endpoints.
Within Cisco HCS, the SBC is configured to validate the interworking of Non-Cisco HCS Video Signaling.
Non-Cisco HCS video signaling includes calls to and from external Cisco TelePresence Systems, which can
either be a scheduled or ad hoc meeting on the Cisco TelePresence Systems. However some of the features
specific to Cisco TelePresence Systems like One Button To Push are not available on the Video Endpoints
registered with the HCS leaf clusters.
Fax
For most customers, there is a requirement to provide fax service to the end users. This includes inbound fax from the PSTN, outbound fax to the PSTN, and fax over VoIP between sites.
The fax machines are connected to a VG, which communicates preferably in SIP with the Unified
Communications Manager.
• The call comes in on BroadSoft and from there is sent to the SBC through SIP, and from the SBC to the Unified Communications Manager SIP trunk.
• Unified Communications Manager sends the call to the VG based on the DN.
• Once the local fax machine provides fax tone, the fax session is established end-to-end and the fax is received by the local fax machine.
Note To configure Inbound and Outbound fax from the MGCP gateway, see the http://www.cisco.com/c/en/us/
tech/voice/gateway-protocols/tsd-technology-support-troubleshooting-technotes-list.html for detailed
information.
Cisco Webex Cloud Connected Audio
Calls received by an SBC or on the PSTN-Webex adjacency are routed out of the WebEx-CCA adjacency.
Similarly, call back calls received on Cisco Webex CCA adjacency are routed out of PSTN-Cisco Webex
adjacency towards the PSTN provider network. The PSTN network then routes the call to the final destination.
Cisco Webex Cloud Connected Audio allows Cisco Webex enabled enterprises to use native PSTN connectivity instead of the Cisco Webex PSTN connectivity. Within an SBC, all calls originating from HCS tenants on leaf clusters towards Cisco Webex are routed to the PSTN provider network, which routes the call back to an SBC on a dedicated PSTN-Cisco Webex adjacency. For deployments with LBO, calls still use Cisco Webex PSTN.
This is done by creating a dedicated adjacency towards Cisco Webex. This adjacency is used to send and receive Cisco Webex audio calls to and from users joining Cisco Webex Meetings hosted by HCS tenant enterprises. The following diagram captures the architecture supported in HCS for integrating with Cisco Webex for the Cloud Connected Audio feature.
Figure 26: Cisco Webex Collaboration Cloud Audio Architecture
All signaling and media sessions specific to Cisco Webex audio between the leaf clusters and Cisco Webex are routed through the Session Border Controller (SBC) on the existing SIP trunk/adjacency between the leaf clusters and the SBC. All enterprises enabled for Cloud Connected Audio are configured with the same non-disable meeting number, and with the same or different E.164 numbers on a per-enterprise basis, on Cisco Webex. Cisco Webex uses the meeting IDs to uniquely identify the meeting ownership.
The leaf clusters are configured to route the calls to the Cisco Webex number over the same SIP trunk
configured towards the SBC. The SBC is configured to route the calls specific to Cisco Webex number over
a shared trunk/adjacency to Cisco Webex. The SBC is configured to uniquely identify the enterprise or Cisco
Unified Communications Manager initiating the Cisco Webex audio call.
In the figure below, the SBC hands over the Cisco Webex call to the service provider PSTN switch. The service provider PSTN switch performs number analysis and other routing methodologies to identify the unique SIP trunk that terminates on an SBC for calls destined to Cisco Webex CCA. The SBC determines that the destination adjacency is Cisco Webex CCA after receiving calls on this specific adjacency.
For call back calls requested during a specific enterprise hosted session, the Cisco Webex routes the calls to
the SBC with additional parameters to uniquely identify the enterprise that needs to handle and complete the
call.
For call back calls from Cisco Webex CCA, the SBC hands over all call invites to the service provider PSTN switch. The switch identifies the termination as a subscriber under a hosted customer site, or as the PSTN if the user joins a meeting from the PSTN or from sites that do not have Central Breakout and depend on a local connection to the PSTN. All non-enterprise users have to dial the enterprise-specific Cisco Webex number, which is routed through the SBC to the enterprise-specific leaf cluster.
These call flows include both signaling and media information, as they follow the same path.
Figure 27: Central Breakout for Cisco Webex CCA
Enterprise User Calls Into Cisco Webex and Calls from Cisco Webex CCA to Enterprise Users
Figure 28: Routing for Hosted Enterprise User Joining a Meeting
External Users Call into Cisco Webex and Calls from Cisco Webex CCA to External Users
Figure 29: Routing Callbacks Over PSTN
External users can dial an enterprise-owned E.164 number dedicated to Cisco Webex audio sessions. When external users from the PSTN dial in to the meeting, the service provider PSTN switch identifies the unique SIP trunk to an SBC for calls destined to Cisco Webex CCA. The SBC routes the call to the destination adjacency towards Cisco Webex CCA.
Callback calls from Cisco Webex CCA are handed over to PSTN by the service provider PSTN switch to
route to the user joining the meeting. This behavior allows the call back calls to be handled by the correct
enterprise, regardless of the called user's number or location.
Mobility
Cisco HCS offers Mobile Unified Communications solutions and applications that deliver features and
functionality of the enterprise environment to mobile workers wherever they might be. With Mobile Unified
Communications solutions, mobile users can handle business calls on a multitude of devices and access
enterprise applications whether moving around the office building, between office buildings, or between
geographic locations outside the enterprise.
The following are a set of mobility features that are offered through HCS:
• Mobile Connect: Includes Desk Phone Pickup, Remote Destination Pickup, Mid Call Features
• Enterprise Feature Access: Two-stage dialing without an IVR feature
Mobile Connect
The Mobile Connect feature allows an incoming call to an enterprise user to be offered to the user's IP desk
phone and up to ten configurable remote destinations. Typically, a user's remote destination is their mobile
or cellular telephone. After the call is offered to both the desktop and remote destination phone, the user can
answer any of the phones. When the user answers the call on one of the remote destination phones, or on the
IP desk phone, the user has the option to hand off or pick up the call on the other phone.
Mobile Connect supports the following scenarios:
• Desk Phone Pickup: When a call to the enterprise number has been made by or answered at the desk
phone, the user can switch or move the active call to the remote destination.
• Remote Destination Pickup: When a call to the enterprise number has been made by or answered at
the remote destination, the user can switch or move the active call to the desk phone.
Mobile Connect Mid-Call Features
Conference (*85), Enterprise Conference
User steps on the smartphone:
1. Press the Enterprise Conference soft key.
2. Enter <Conference_Target/DN>.
3. When the conference target answers, press the Enterprise Conference soft key.
Smartphone behavior: The smartphone sends *82. Then, when the conference target DN is entered, the smartphone automatically does the following:
1. Makes a new call to the preconfigured Enterprise Feature Access DID.
2. Sends a preconfigured PIN number when Enterprise Feature Access answers it, followed by *85, followed by the conference target/DN.
Enterprise Feature Access
To retrieve a parked call, the user must use Mobile Voice Access or Enterprise Feature Access two-stage dialing to place a call to the directed call park number, prefixed with the appropriate call park retrieval prefix.
An administrator must configure a number of service parameters for this feature that are available in the
Administration Guide for Cisco Unified Communications Manager , available at http://www.cisco.com/c/en/
us/support/unified-communications/unified-communications-manager-callmanager/
products-maintenance-guides-list.html.
Note The User Control method depends on successful relay of the DTMF tone from the remote destination on the
mobile voice network or PSTN to Cisco Unified Communications Manager. The DTMF tone must be sent
out-of-band to Unified Communications Manager. If DTMF relay is not properly configured on the network
and system, DTMF is not received and all call legs to remote destinations relying on the user control method
are disconnected. The system administrator should ensure proper DTMF interoperation and relay across the
enterprise telephony network prior to enabling the user control method. If DTMF cannot be effectively relayed
from the PSTN to Unified Communications Manager, the Timer Control method should be used instead.
Clientless FMC Integration with NNI or SS7
Task: Ensure that the forward-no-answer time is shorter at the desk phone than at the remote destination phones.
Description: Make sure that the global Forward No Answer Timer field in Unified Communications Manager, or the No Answer Ring Duration field under the individual phone line, is configured with a value that is less than the amount of time a remote destination phone rings before forwarding to the remote destination voice mailbox. In addition, you can use the Delay Before Ringing Timer parameter on the Remote Destination configuration page to delay the ringing of the remote destination phone in order to further lengthen the amount of time that must pass before a remote destination phone forwards to its own voice mailbox. However, when adjusting the Delay Before Ringing Timer parameter, take care to ensure that the global Unified Communications Manager Forward No Answer Timer (or the line-level No Answer Ring Duration field) is set sufficiently high that the mobility user has time to answer the call on the remote destination phone. You can set the Delay Before Ringing Timer parameter for each remote destination; it is set to 4000 milliseconds by default.
Task: Ensure that the remote destination phone stops ringing before the call is forwarded to its own voice mailbox.
Description: Set the Answer Too Late Timer parameter on the Remote Destination configuration page to a value that is less than the amount of time that a remote destination phone rings before forwarding to its voice mailbox. This ensures that the remote destination phone stops ringing before the call can be forwarded to its own voice mailbox. You can set the Answer Too Late Timer parameter for each remote destination; it is set to 19,000 milliseconds by default.
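The timer relationships described above can be expressed as simple inequalities. The following Python sketch checks them; values are in milliseconds, the defaults shown are the ones quoted above, and the remote destination's voicemail forwarding time (25,000 ms) is an assumed example value.

    def check_mobility_timers(forward_no_answer_ms,
                              delay_before_ringing_ms,
                              answer_too_late_ms,
                              remote_vm_forward_ms):
        # Return warnings for timer combinations the guidance above advises against.
        warnings = []
        if forward_no_answer_ms >= remote_vm_forward_ms:
            warnings.append("Forward No Answer Timer should be less than the remote "
                            "destination's voicemail forwarding time.")
        if forward_no_answer_ms <= delay_before_ringing_ms:
            warnings.append("Forward No Answer Timer should leave ring time after the "
                            "Delay Before Ringing Timer.")
        if answer_too_late_ms >= remote_vm_forward_ms:
            warnings.append("Answer Too Late Timer should be less than the remote "
                            "destination's voicemail forwarding time.")
        return warnings

    # Defaults from the descriptions above: Delay Before Ringing 4000 ms, Answer Too Late 19,000 ms.
    print(check_mobility_timers(forward_no_answer_ms=12000,
                                delay_before_ringing_ms=4000,
                                answer_too_late_ms=19000,
                                remote_vm_forward_ms=25000))   # -> [] (no warnings)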
Users can extend the following business features to any mobile device, providing a value proposition to the MSP by reducing churn and creating sticky services:
• Enterprise dial plan and calling policy without a special client: the same dialing policy and call barring as your desk phone (including extension dialing).
• Enterprise Caller-ID: Replace mobile number with enterprise caller ID.
• Single Number Reach through both Fixed or Mobile DN: Simultaneous ring for all shared-devices
regardless of identity.
• Seamless handoff between devices: Seamless transition of active call between mobile and desk, or soft
phone.
• True Single Business Voicemail: Single voice mailbox across multiple phone numbers.
• Native Message Waiting Indicator: MWI for business voicemail.
• DTMF-based Mid-Call features: Music on hold, conference, transfer, call park, session handoff, and call
move are invoked through DTMF star codes.
The following call flow shows forced calls through Cisco HCS.
Figure 31: Clientless FMC Integration - Forced Calls
With the call flow shown in the preceding figure, the dialing experience is the same as at the enterprise office location. All calling policies and restrictions apply to both fixed and mobile originations, and the fixed (enterprise) identity is presented for off-net calls rather than the mobile identity. However, this feature does rely on the MSP to provide the IN VPN application that triggers the forced routing to the Cisco HCS platform.
Clientless FMC Integration with IMS
The ISC interface is defined as the call processing control interface between the S-CSCF and the application server. This interface runs the normal SIP protocol as defined by RFC 3261, with additional enhancements to signify an "origination" or "termination" call leg toward the application server.
Mobile Clients and Devices
Cisco Jabber
Cisco Jabber is a set of mobile clients for Android and Apple iOS mobile devices including iPhone and iPad
that provide the ability to make voice and video calls over IP on the enterprise WLAN network or over the
mobile data network. Cisco Jabber also provides the ability to access the corporate directory and enterprise
voicemail services, and XMPP-based enterprise IM and Presence services.
The set of mobile clients for Android and Apple iOS include the following:
• Cisco Jabber for Android and iPhone
• Cisco Jabber for iPad
IMS Clients
To provide HCS FMC services, Unified Communications Manager defines generic mobile phones. They
include “IMS Mobile(Basic)” and “Carrier Integrated Mobile”.
Because Cisco Proximity for Mobile Voice relies on Bluetooth pairing, there is no requirement to run an application or client on the mobile device. All communication and interaction occurs over the standards-based Bluetooth interfaces.
• Virtual machines running on the C-series servers are synced from vCenter.
• Because the hardware associated with the C-series servers does not appear in SDR, service assurance is not able to do some service impact analysis and root cause analysis based on events from the C-series servers. This is because SDR does not show which ESXi host is associated with each C-series server.
Endpoints - Conference
Be sure to consider requirements for conference endpoints as part of your Cisco HCS deployment:
• The Cisco Telepresence MX Series turns any conference room into a video collaboration hub by connecting
teams face to face at a moment's notice. MX Series features the MX700 and MX800 systems for medium
and large rooms, and gives you flexibility to deploy and scale video depending on the needs of your
business.
For more information, see the Cisco Telepresence MX Series documentation: https://www.cisco.com/c/
en/us/support/collaboration-endpoints/telepresence-mx-series/tsd-products-support-series-home.html
• The Cisco Webex DX Series offers all-in-one desktop collaboration, clearing desktop clutter while adding
high-quality video conferencing. Enjoy all-in-one HD video and voice, with unified communications
features that can replace your IP phone. With the Cisco Webex Room OS, you can whiteboard and
annotate shared content with the touchscreen.
For more information, see the Cisco Webex DX Series documentation: https://www.cisco.com/c/en/us/
support/collaboration-endpoints/desktop-collaboration-experience-dx600-series/
tsd-products-support-series-home.html
Directory
LDAP Integration
Any access to a corporate directory for user information requires LDAP synchronization with Unified
Communications Manager. However, if a deployment includes both an LDAP server and Unified
Communications Manager that does not have LDAP synchronization enabled, then the administrator should
ensure consistent configuration across Unified Communications Manager and LDAP when configuring user
directory number associations.
LDAP Directory
You can configure a corporate LDAP directory to satisfy a number of different requirements, including the
following:
• User provisioning: you can provision users automatically from the LDAP directory into the Cisco Unified
Communications Manager database using directory integration. Cisco Unified CM synchronizes with
the LDAP directory content so that you avoid having to add, remove, or modify user information manually
each time a change occurs in the LDAP directory.
• User authentication: you can authenticate users using the LDAP directory credentials. Cisco IM and
Presence synchronizes all the user information from Cisco Unified Communications Manager to provide
authentication for client users.
• User lookup: you can enable LDAP directory lookups to allow Cisco clients or third-party XMPP clients
to search for contacts in the LDAP directory.
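As a rough illustration of the kind of directory lookup this integration relies on, the following sketch uses the open-source Python ldap3 library; the host name, service account, search base, and filter are placeholders rather than values from this guide.

```
# Minimal LDAP lookup sketch (placeholder host, account, and search base).
from ldap3 import Server, Connection, SUBTREE

server = Server("ldap.example.com", port=389)
conn = Connection(server,
                  user="cn=svc-ucm,dc=example,dc=com",   # hypothetical service account
                  password="********",
                  auto_bind=True)

# Scoping the query to a concise search base keeps the load on the
# directory servers low compared with searching from the directory root.
conn.search(search_base="ou=employees,dc=example,dc=com",
            search_filter="(&(objectClass=person)(sn=Smith))",
            search_scope=SUBTREE,
            attributes=["cn", "mail", "telephoneNumber"])

for entry in conn.entries:
    # telephoneNumber should hold a full E.164 number for clean dial-plan mapping.
    print(entry.cn, entry.telephoneNumber)

conn.unbind()
```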
Cisco Webex Directory Integration
Directory Search
When a contact cannot be found in the local Client Services Framework cache or contact list, a search for contacts can be made. The Cisco Webex Messenger user can use a predictive search in which the cache, contact list, and local Outlook contact list are queried as the contact name is typed. If no matches are found, the search continues to the corporate directory (the Cisco Webex Messenger database).
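A rough sketch of that lookup order follows; the data structures and function are illustrative only and are not part of the Client Services Framework API.

```
# Illustrative predictive search: local sources first, corporate directory as fallback.
def predictive_search(prefix, csf_cache, contact_list, outlook_contacts, corporate_directory):
    prefix = prefix.lower()
    # The cache, contact list, and local Outlook contacts are queried as the name is typed.
    local_sources = csf_cache + contact_list + outlook_contacts
    matches = [name for name in local_sources if name.lower().startswith(prefix)]
    if matches:
        return matches
    # Only if nothing matches locally does the search continue to the corporate directory.
    return [name for name in corporate_directory if name.lower().startswith(prefix)]

print(predictive_search("ali",
                        csf_cache=["Alice Adams"],
                        contact_list=[],
                        outlook_contacts=["Alina Baker"],
                        corporate_directory=["Alistair Cook"]))
# -> ['Alice Adams', 'Alina Baker']
```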
Translation Patterns
Unified CM uses translation patterns to manipulate dialed digits before a call is routed; they are handled entirely by Unified CM. Translation patterns are the recommended method for manipulating dialed numbers.
Client Transformation
Before a call is placed from contact information, the client application strips everything except letters and digits from the phone number to be dialed. The application then transforms the letters to digits and applies the dialing rules. The letter-to-digit mapping is locale specific and corresponds to the letters found on a standard telephone keypad for that locale. For example, for a US English locale, 1-800-4UCSRND transforms to 18004827763. Users cannot view or modify the transformed numbers before the application places the call.
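A minimal sketch of this transformation for a US English keypad follows; the mapping table and helper name are illustrative and are not taken from the client code.

```
# Illustrative letter-to-digit transformation for a US English keypad.
KEYPAD = {
    "abc": "2", "def": "3", "ghi": "4", "jkl": "5",
    "mno": "6", "pqrs": "7", "tuv": "8", "wxyz": "9",
}
LETTER_TO_DIGIT = {ch: digit for letters, digit in KEYPAD.items() for ch in letters}

def transform(dial_string: str) -> str:
    """Strip everything except letters and digits, then map letters to keypad digits."""
    out = []
    for ch in dial_string.lower():
        if ch.isdigit():
            out.append(ch)
        elif ch in LETTER_TO_DIGIT:
            out.append(LETTER_TO_DIGIT[ch])
        # punctuation such as '-' or spaces is simply dropped
    return "".join(out)

print(transform("1-800-4UCSRND"))   # -> 18004827763
```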
Deployment Models for Jabber Clients
• The administrator must determine how to install, deploy, and configure the Unified Client Services Framework in their organization. Cisco recommends using a well-known software distribution package, such as Altiris, to install the application.
• The user ID and password configured for the Cisco Unified Client Services Framework user must match the user ID and password of the user stored in the LDAP server, to allow proper integration of the Unified Communications and back-end directory components.
• The directory number configuration on Cisco Unified CM and the telephoneNumber attribute in LDAP should be configured with a full E.164 number. A private enterprise dial plan can be used, but it might require translation patterns or application dialing rules and directory lookup rules.
• The use of deskphone mode for control of a Cisco Unified IP Phone uses CTI; therefore, when sizing a
Unified CM deployment, you must also account for other applications that require CTI usage.
• For firewall and security considerations, the port usage required for the Client Services Framework and
corresponding applications being integrated can be found in the product release notes for each application.
• To reduce the impact on the amount of traffic (queries and lookups) to the back-end LDAP servers,
configure concise LDAP search bases for the Client Services Framework rather than a top-level search
base for the entire deployment.
Push Notifications
Cisco Hosted Collaboration Solution can leverage push notifications for a variety of purposes, including:
• Apple iOS notifications
• Smart Licensing for Cisco products
• Endpoint activation
To integrate Cisco Webex Hybrid Services with Cisco HCS, you must consider:
• Network topology and interconnect options
• Customer and Service Provider administrator responsibilities
• Cisco Webex Hybrid Services connector components and existing Cisco HCS components
Cisco HCS integration must also consider configurations, call flows, the Cisco Webex Hybrid Services call
model, and bandwidth calculations. For more information, see the Cisco Webex Hybrid Services Integration
Reference Guide.
CHAPTER 4
Third-Party Applications and Integrations
• Third-Party Applications and Integrations, on page 93
• Third-party PBX Integration in Cisco HCS, on page 93
Third-party PBX Integration in Cisco HCS
As shown in the preceding figure, the leaf cluster Cisco Unified Communications Manager, deployed on a per-customer basis within Cisco HCS, can be configured with either the central breakout option or the local breakout option.
Key architectural assumptions for the preceding deployment are as follows:
• Centralized handling of PSTN connectivity and routing policies at the Cisco HCS Unified Communications
Manager.
• Unified Communications Manager provides a legacy PBX integration.
• DNs and E.164 patterns belonging to third-party PBX endpoints are independently routed to the PBX over a SIP or H.323 trunk.
• DNs of Cisco HCS endpoints are served directly by Unified Communications Manager. Unified
Communications Manager can provide Single Number Reach (SNR) services to Cisco HCS users and
can include DNs of Lync clients.
• No feature transparency or interworking occurs across Cisco HCS and third-party PBX clients.
• Emergency call handling integration is done independently on the IP PBX.
• Cisco HCS UC deployment can be configured to provide voicemail to the third-party PBX endpoints
using an independent SIP trunk to the Cisco Unity Connection.
The diagrams that follow describe the various call flows that are supported as part of third-party PBX
integration.
As shown in the following figure, the SNR feature can be configured for Cisco HCS endpoints and users, so
that calls arriving at the Cisco HCS endpoints can be sent to the third-party PBX endpoints.
Figure 34: Call Flow for PSTN to Cisco HCS Endpoint with SNR to Third-Party Endpoint
Figure 36: Call Flow for Cisco HCS Endpoint to Third-Party Endpoint
For more information on third-party PBX integration in Cisco HCS, see Third-party PBX SIP Integration for
Cisco Hosted Collaboration Solution and CUCILync Integration Guide for Cisco Hosted Collaboration
Solution, available at http://www.cisco.com/en/US/partner/products/ps11363/prod_maintenance_guides_
list.html.
CHAPTER 5
OTT Deployment and Secured Internet with
Collaboration Edge Expressway
• Cisco Expressway Over-the-Top Solution Overview, on page 97
• Supported Functionality, on page 98
• Endpoint Support, on page 99
• Design Highlights, on page 99
• Expressway Sizing and Scaling, on page 100
• Virtual Machine Options, on page 101
• Cisco HCS Clustered Deployment Design, on page 101
• Network Elements, on page 102
• Jabber Client SSO OTT, on page 103
• BtoB Calls Shared Edge Expressway, on page 104
Supported Functionality
The following Cisco Jabber functions are supported without a VPN connection:
• IM and Presence
• Make and receive voice and video calls
• Mid-call control (transfer, conference, mute, hold, park, handoff to mobile, and so on)
• Communications history (view placed, missed, and received calls)
• Directory search: the HTTP proxy allows Jabber to use the CUCM User Data Service (UDS)
• Escalate to Web conference (MeetingPlace / Cisco Webex)
• Screen share / file transfer when Jabber is in SSO mode
• Visual Voicemail (view, play, delete, filter by, sort by over HTTP)
Endpoint Support
The Cisco Collaboration Edge Architecture enables any-to-any collaboration for many types of endpoint devices. For the Cisco Expressway implementation in Cisco Hosted Collaboration Solution, the following endpoints are supported:
• Jabber Desktop - Windows
• Jabber Mobile - iPhone, Android, and iPad
• Hard endpoints - EX60, EX90
• Cisco DX Series endpoints
• 7800 Series IP phones
• 8800 Series IP phones
Note DX/78xx/88xx endpoints have a fixed certificate trust list that is not configurable by the administrator. The Cisco Collaboration Edge Architecture must therefore present a certificate signed by a public certificate authority.
Design Highlights
The Cisco Expressway OTT Solution provides the following design highlights:
• Expressway-E is treated as an SBC and, like any other endpoint, is routed through the firewall.
• Unified CM provides call control for both mobile and on-premises endpoints.
• Signaling traverses the Expressway solution between the mobile endpoint and UCM.
• Media traverses the Expressway solution and is relayed between endpoints directly; all media is encrypted
between the Expressway-C and the mobile endpoint.
The following diagram illustrates these highlights.
Expressway Sizing and Scaling
• Medium deployment
• 2 cores
• 4800 MHz
• 6 GB of RAM
• 132 GB disk
• Large deployment
• 8 cores
• 25600 MHz
• 8 GB of RAM
• 132 GB disk
• 10 Gb NIC (2500 registrations)
• 500 video or 1000 audio calls
Network Elements
Internal Network Elements
The internal network elements are devices that are hosted on the organization's local area network. Elements on the internal network have an internal network domain name. This internal network domain name is not resolvable by a public DNS. For example, the Expressway-C is configured with an internally resolvable name of vcsc.internal-domain.net, which the internal DNS servers resolve to the IP address 10.0.0.2.
DNS
DNS servers are used by Expressway-C to perform DNS lookups (resolve network names on the internal
network).
DHCP Server
The DHCP server provides host, IP gateway, DNS server, and NTP server addresses to endpoints located on
the internal network.
Router
The router device acts as the gateway for all internal network devices to route towards the DMZ (to the NAT
device internal address).
External Network Elements
DNS (Host)
This is the DNS owned by the service provider that hosts the external domains (DNS external 1 and external 2). This is also the DNS used by the Cisco Expressway to perform DNS lookups.
SIP Domain
• DNS SRV records are configured in the public (external) and local (internal) network DNS server to
enable routing of signaling request messages to the relevant infrastructure elements (for example, before
an external endpoint registers, it will query the external DNS servers to determine the IP address of the
Cisco Expressway).
• The internal SIP domain is the same as the public DNS name. This enables both registered and
non-registered devices in the public internet to call endpoints registered to the internal and external
infrastructure (Expressway-C and Expressway-E).
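As an illustration of the discovery described in the first bullet above, the following sketch resolves the collaboration-edge SRV record that remote clients commonly use to locate Expressway-E; it relies on the open-source dnspython package, and example.com is a placeholder domain.

```
# Illustrative SRV lookup performed before an external endpoint registers
# over the edge (placeholder domain; requires the dnspython package).
import dns.resolver

answers = dns.resolver.resolve("_collab-edge._tls.example.com", "SRV")
for record in sorted(answers, key=lambda r: (r.priority, -r.weight)):
    # Each answer names an Expressway-E host and port; priority and weight
    # control which peer the client tries first.
    print(record.priority, record.weight, record.port, record.target)
```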
Endpoints connect using one identity and one authentication mechanism to access multiple unified
communications services. Authentication is owned by the IdP. No authentication occurs at the Expressway
or at the internal unified communications services.
Cisco Jabber determines whether it is inside your network before it requests a unified communications service.
When Jabber is outside of the network, it requests the service from the Expressway-E on the edge of the
network. If SSO is enabled at the edge, the Expressway-E redirects Jabber to the IdP with a request to
authenticate the user.
The IdP challenges Jabber to identify itself. After the identity is authenticated, the IdP redirects the Jabber service request to the Expressway-E with a signed assertion that the identity is authentic.
Because the Expressway-E trusts the IdP, it passes the request to the appropriate service inside the network. The unified communications service trusts the IdP and the Expressway-E, so it provides the requested service to the Jabber client.
The provisioning of Jabber Client SSO involves such tasks as downloading the federation metadata file,
configuring Unified CM and Cisco Unity Connection, configuring SAML SSO, and configuring AD FS. For
more information, see the Cisco Unified Communications Domain Manager Maintain and Operate Guide.
This feature is supported in the following deployment models:
• IdP and the directory are in the customer premises, with LDAP synchronization from the Directory server to CUCM and then to CUCDM
• IdP and the directory are in a per-customer domain in the Data Center, with LDAP synchronization from the Directory server to CUCM and then to CUCDM
References
For information about SSO for Jabber clients, see the "Enabling Jabber Client Single Sign-On" topic in the
Cisco Hosted Collaboration Solution Release 12.5 Customer Onboarding Guide.
For information about SSO for Cisco collaboration solutions, see the SAML SSO Deployment Guide for Cisco
Unified Communications Applications: http://www.cisco.com/c/en/us/support/unified-communications/
unified-communications-manager-callmanager/products-maintenance-guides-list.html.
BtoB Calls Shared Edge Expressway
From the HCS endpoint, dialed URIs that do not belong to the dialing tenant are routed on a dedicated adjacency to the SBC, where special call policies route these URIs to Expressway-C for onward routing through Expressway-E to the user on the Internet.
Options are available within the call policies to allow all URIs or to block certain URIs within the SBC:
• This feature allows all HCS tenants to use URIs to dial and receive calls from any non-HCS enterprise users through the Internet. Rich media licenses are therefore shared on the Expressway products. This is a trunking-based solution.
• This feature differs from the Collaboration Edge/OTT Expressway feature where Jabber and TC-based
endpoints register via the internet and are configured per HCS customer. This is a registration-based
solution.
Supported Functionality
The following functions of Cisco Expressway are supported within Cisco Hosted Collaboration Solution:
• Non-HCS Enterprise users can dial into Cisco HCS using the HCS user's URI.
• Cisco HCS users can dial non-HCS video users using URIs.
Endpoint Support
• All Cisco HCS-supported video endpoints can make and receive calls using the shared Expressway.
• Remote non-HCS endpoints must conform to Cisco TelePresence interface specifications to successfully make and receive video calls.
Design Highlights
The Cisco Shared Expressway for Business to Business calling solution features the following design highlights:
• Expressway-E is treated as a session border controller (SBC), and like any other endpoint, is routed
through the firewall. Expressway-E is deployed in the DMZ with one interface (NIC) facing the internet
and the other interface (NIC) connected to the Expressway-C.
• The SBC peers with the shared Expressway-C and provides connectivity to each tenant's leaf cluster over a dedicated adjacency used exclusively for URI dialing.
• Cisco Unified Communications Manager is configured with a dedicated trunk toward the SBC for URI dialing.
• Cisco Unified Communications Manager is provisioned with wildcard SIP route patterns to route to the SBC.
• The SBC performs onward routing.
• Signaling traverses the Expressway solution between the Internet non-HCS Endpoint and SBC.
• All media is encrypted between the Expressway-E and the remote non-HCS endpoint.
For information on Cisco Expressway scale targets, see the Cisco Hosted Collaboration Solution Compatibility
Matrix and Cisco Expressway documentation.
Network Elements
Internal Network Elements
The internal network elements are devices that are hosted on the organization's local area network.
Elements on the internal network have an internal network domain name. This internal network domain name
is not resolvable by a public DNS. For example, the Expressway-C is configured with an internally resolvable
name of vcsc.internal-domain.net, which resolves to an IP address of 10.0.0.2 by the internal DNS servers.
Cisco Expressway Control: Expressway-C is configured with a traversal client zone to communicate with the Expressway-E to allow inbound and outbound calls to traverse the NAT device.
DNS: DNS servers are used by Expressway-C to perform DNS lookups (resolve network names on the internal network).
DNS (Host): This is the DNS owned by the service provider that hosts the external domains. This is also the DNS used by Cisco Expressway to perform DNS lookups.
NTP Server Pool: An NTP server pool that provides the clock source used to synchronize both internal and external devices.
NAT Devices and Firewalls: The example deployment includes:
• The NAT (PAT) device performing port address translation functions for network traffic routed from the internal network to addresses in the DMZ (and beyond, toward remote destinations on the Internet).
• The firewall device on the public-facing side of the DMZ. This device allows all outbound connections and inbound connections on specific ports.
SIP Domain:
• DNS SRV records are configured in the public (external) and local (internal) network DNS servers to enable routing of signaling request messages to the relevant infrastructure elements (for example, third-party enterprises query an external DNS for Cisco HCS enterprise domains to determine the IP address of the shared Expressway-E).
• The internal SIP domain is the same as the public DNS name. This enables both registered and non-registered devices in the public Internet to call endpoints registered to the internal and external infrastructure.
CHAPTER 6
Quality of Service Considerations
• Quality of Service Considerations, on page 109
• Guidelines for Implementing Quality of Service, on page 110
• Quality of Service for Audio and Video Media from Softphones, on page 120
Guidelines for Implementing Quality of Service
This section provides high-level guidelines for implementing Quality of Service (QoS) in a Service Provider Cisco HCS data center network that serves as a transport for multiple applications, including delay-sensitive Unified Communications applications and others such as collaboration applications. These applications can enhance business processes, but they stretch network resources. QoS can provide secure, predictable, measurable, and guaranteed services to these applications by managing delay, delay variation (jitter), bandwidth, and packet loss in a network.
QoS is a fundamental requirement for the Cisco HCS multi-customer solution for differentiated service support:
• QoS provides the means for fine-tuning network performance to meet application requirements
• QoS enables delay and bandwidth commitments to be met without gross over-provisioning
• QoS is a prerequisite for admission control
• Being able to guarantee SLAs is a primary differentiator for SP versus public cloud offerings
There is a misconception that by over-provisioning the network you can provide great service because you have enough bandwidth to handle all the data flowing on your network. Over-provisioning may not provide adequate handling of data in all circumstances, for the following reasons:
• The complexity with the over-provisioning approach lies in ensuring that the network is over-provisioned in all circumstances.
• Over-provisioning is not always possible, and at times congestion may be unavoidable, for example because of:
  • Capacity planning failures
  • Network failure situations
  • Unexpected traffic demands or bandwidth unavailability
  • DDoS attacks
• TCP has a habit of consuming any 'abundant' bandwidth.
• Fate sharing: in these cases there is no differentiation between premium and best effort, and in congestion all services degrade.
Use classification to partition traffic into classes. Classify the traffic based on the port characteristics (class
of service [CoS] field) or the packet header fields that include IP precedence, Differentiated Services Code
Point (DSCP), Layer 2 to Layer 4 parameters, and the packet length.
The values used to classify traffic are called match criteria. When you define a traffic class, you can specify
multiple match criteria, you can choose to not match on a particular criterion, or you can determine the traffic
class by matching any or all criteria.
Traffic that fails to match any class is assigned to a default class of traffic called class-default.
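A minimal, product-neutral sketch of this match-any/match-all classification logic is shown below; the class names, code points, and packet representation are illustrative only and do not correspond to an NX-OS or IOS configuration.

```
# Illustrative traffic classification with match-any / match-all criteria and a default class.
EF, CS3, AF21 = 46, 24, 18   # example DSCP code points

CLASSES = [
    # (class name, match mode, match criteria evaluated against a packet)
    ("REALTIME",  "any", [lambda p: p["dscp"] == EF, lambda p: p["cos"] == 5]),
    ("SIGNALING", "all", [lambda p: p["dscp"] == CS3, lambda p: p["proto"] == "tcp"]),
    ("CRITICAL",  "any", [lambda p: p["dscp"] == AF21]),
]

def classify(packet):
    for name, mode, criteria in CLASSES:
        results = [criterion(packet) for criterion in criteria]
        if (mode == "any" and any(results)) or (mode == "all" and all(results)):
            return name
    return "class-default"   # traffic that fails to match any class

print(classify({"dscp": EF, "cos": 0, "proto": "udp"}))   # -> REALTIME
print(classify({"dscp": 0, "cos": 0, "proto": "udp"}))    # -> class-default
```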
Normally there are four classes of traffic (Real-time, Signaling/Control, Critical, and Best Effort) within an SP network. This does not mean that only four types of traffic can be defined or that you cannot define QoS in a more granular fashion. In general, service providers define the maximum number of QoS classes at the customer edge (that is, at the CPE device on the Cisco HCS end-customer premises) to utilize the WAN bandwidth efficiently without compromising critical data. As the traffic moves toward the SP cloud and data center, it is aggregated into larger classes based on the SLAs and bandwidth requirements.
When deploying hosted collaboration services in the cloud, network management traffic plays a key role in monitoring, fulfillment, and so on, and needs to be prioritized within the HCS data center and within the SP cloud, because management applications may reside in one data center while monitoring HCS applications in another data center.
RFC 4594 has some differences from the Cisco baseline, which you should know so that you can understand how the classes are differentiated and how the various PHB values are assigned.
The following is a list of nomenclature changes between the Cisco baseline and the RFC 4594.
Table 12: Nomenclature Changes Between Cisco Baseline and RFC 4594
Note In a Cisco HCS deployment, we recommend that you follow the Cisco baseline table for all QoS configurations. There are some minor and some significant differences between the Cisco baseline and the industry baseline RFC 4594, but RFC 4594 is informational, meaning it is recommended but not required. For example, in RFC 4594 streaming video has changed from CS4 to AF31 (drop precedence of 1) and is named Multimedia Streaming.
Another difference is that the QoS baseline marking recommendation of CS3 for Call Signaling was changed in RFC 4594 to mark Call Signaling to CS5.
Note Providing the Cisco baseline guideline and the RFC reference does not mean it is mandatory to use those classes. This is a baseline, and every deployment may differ because eight code points simply do not give enough granularity; for example, although the Cisco baseline recommends CS2 for OAM, in NGN deployments we recommend CS7 for OAM.
A new application class has been added to RFC 4594: Real-Time Interactive. This addition allows for a service differentiation between elastic conferencing applications (which would be assigned to the Multimedia Conferencing class) and inelastic conferencing applications (which would include high-definition applications, like Cisco TelePresence, in the Real-Time Interactive class). Elasticity refers to the ability of the application to function despite experiencing minor packet loss. Multimedia Conferencing uses the AF4 class and is subject to markdown (and potential dropping) policies, while the Real-Time Interactive class uses CS4 and is not subject to markdown or dropping policies.
A second new application class was added to RFC 4594: Broadcast Video. This addition allows for a service differentiation between elastic and inelastic streaming media applications. Multimedia Streaming uses the AF3 class and is subject to markdown (and potential dropping) policies, while Broadcast Video uses the CS3 class and is not subject to markdown or dropping policies.
Note The most significant of the differences between Cisco's QoS baseline and RFC 4594 is the recommendation to mark Call Signaling to CS5. Cisco has not adopted this change, and we recommend that you use the value of CS3 for call signaling.
Classification and marking of traffic flows creates a trust boundary at the network edges. Within the trust boundaries, received CoS or DSCP values are simply accepted and matched rather than remarked. Classification and marking are applied at the network edge, close to the traffic source; in the Service Provider Cisco HCS data center design, this is at the Nexus 1000V virtual access switch for traffic originating from Unified Communications applications, and at the MPLS WAN edge for traffic entering the Service Provider Cisco HCS data center infrastructure. The trust boundary in the Service Provider Cisco HCS data center is at the Nexus 7000 access/aggregation device connecting to the UCS (and Nexus 1000V), and at the Nexus 7000 DC core connecting to the MPLS WAN edge router, as follows:
Figure 40: Trust Boundaries and Policy Enforcement Points From Cisco HCS Customer to Service Provider Data Center
Figure 41: Trust Boundaries and Policy Enforcement Points - Service Provider Data Center to Cisco HCS Customer Site
Traditionally, network and bandwidth resource provisioning for VPN networks was implemented based on
the concept of specifying traffic demand for each node pair belonging to the VPN and reserving resources for
these point-to-point pipes between the VPN endpoints. This is what has come to be termed the resource "pipe"
model. The more recently introduced "hose" model for point-to-cloud services defines a point-to-multipoint
resource provisioning model for VPN QoS, and is specified in terms of ingress committed rate and egress
committed rate with edge conditioning. In this model, the focus is on the total amount of traffic that a node
receives from the network (that is, customer aggregate) and the total amount of traffic it injects into the
network.
Figure 42: Point to Multipoint Resource Provisioning Model for VPN QoS
Any SLAs that are applied would be committed across each domain; thus, SP end-to-end SLAs would be a concatenation of domain SLAs (IP/NGN + SP DC). Within the VMDC SP DC QoS domain, SLAs must be committed from DC edge to edge: at the PE southbound (into the DC), in practice, there would be an SLA per customer per class, aligning with the IP/NGN SLA; and at the Nexus 1000V northbound there would be an SLA per vNIC per VM (optionally per class per vNIC per VM). Because this model requires per-customer configuration at the DC edges only (that is, the PE and the Nexus 1000V), there is no per-customer QoS requirement at the core/aggregation/access layers of the infrastructure, as shown below:
Figure 43: Per-Customer QoS Configuration
Note Inter-customer or off-net traffic goes through the SBC, which means all the signaling and media are terminated and re-originated by the SBC. This step erases the QoS settings of all the outgoing traffic. Make sure the SBC QoS policy is similar to what is set by the applications or the DC edge (Nexus 1000V); otherwise, the policy may be changed by the SBC.
The following table shows how the VMDC and Cisco HCS class models align across platforms:
VMDC 8 Class Model | CoS | VMDC HCS Aligned 8 Class Model | VMDC NGN Aligned 8 Class Model | VMDC (Unified Communications System 6xx0) 6 Class Model | Cisco HCS 6 Class Model | Nexus 7000 Fabric 4 Class Model
Network Mgmt + Service control | 7 | Network Mgmt + VM control | Network Mgmt + VM control | Network Mgmt (CoS 7) + Service control (CoS 7) + Network control (CoS 6) | Network Mgmt (CoS 7) + Service control (CoS 7) + Network control (CoS 6) | Queue 1
Network control | 6 | Network control | Network control | (see CoS 7 row) | (see CoS 7 row) |
Priority #1 | 5 | Voice bearer | Res VoIP / Bus Real-time | Priority #1 | Voice bearer |
Bandwidth #1 | 4 | Interactive Video | Video streaming | Bandwidth #1 | Interactive Video | Queue 2 (Priority 2)
Bandwidth #2 | 3 | Call Control / FCoE | Video interactive / FCoE | FCoE (Bandwidth #2) | Call Control / FCoE |
Bandwidth #3 "Gold" | 2, 1 | Business Critical | Bus critical in-contract (CoS 2) + Bus critical out-of-contract (CoS 1) | Bus critical in-contract (CoS 2) + Bus critical out-of-contract (CoS 1) | Business Critical | Queue 3
The number of classes supported within the SP DC QoS domain is limited by the number of CoS markings
available (up to eight), and the number of queues/thresholds supported by each DC platform. To ensure a
seamless extension of NGN services, the number of classes would ideally (at a minimum) match the number
available across the IP/NGN.
The following table shows all the classes with PHB values, with admission requirements for some classes,
and maps to various applications.
In general, four classes (sometimes counted as five classes, because signaling and control may be defined separately) are the recommended model for provisioning QoS for voice, video, and data. Some of these classes can be gradually split into more granular classes, as shown in the following figure. Classification recommendations remain the same, but you can combine multiple DSCPs into a single queuing class.
• The Real-Time queue is for voice and video traffic in general, because they are time-sensitive applications.
• Signaling/Control includes all control signaling (call signaling) and also management control traffic, including vMotion traffic.
• Critical Data includes any bulk data transfer, which may include databases and so on.
• The last class, Best Effort, includes any traffic not described above, for example, Internet traffic.
An example of queuing policy on the Nexus 7000 in the HCS data center is as follows:
Figure 45: Example Queuing Policy
The Cisco NX-OS device processes the QoS policies that you define based on whether they are applied to
ingress or egress packets. The system performs actions for QoS policies only if you define them under the
type qos service policies.
The recommended Cisco HCS QoS model appears in the following table.
HCS Traffic | EXP/CoS | DSCP | PHB | BW Res (N5000, ASA, N7000-Ingress) | Nexus 1000V | Unified Communications System | Nexus 7000-Egress | ASR9000
Network Mgmt | 7 | CS7 | AF | 6% (VMDC), WRED | 6% | Default | 1p7q4t-out-q7 |
Network Control + vMotion + VM Control | 6 | CS6 | AF | 4% (VMDC), WRED | 10% | Platinum (10%) | 1p7q4t-out-q6 |
Voice Bearer | 5 | CS5 | EF | 15% (VMDC), no drop | 15% (cir=50 Mbps, bc=200 per VM) | Gold (15%) | 1p7q4t-out-q5 | cir=50 per VM, 100 per customer
Interactive Video (WebEx, SPT) | 4 | CS4 | AF41 | 15%, no drop | 15% (cir=50 Mbps, bc=200 per VM) | Silver (15%) | 1p7q4t-out-q4 |
Call Control + FCoE | 3 | CS3 | AF42, AF43 | 3% (VMDC) | N/A | FC (40%) | 1p7q4t-out-q3 |
WebEx Data, other critical data | 1, 2 | CS1, CS2 | AF | 42% | 44% | Bronze (10%) | 1p7q4t-out-q2 | 250 Mbps per VM, 500 Mbps per customer / 3G burst
Standard | 0 | CS0 | Default | 15% (VMDC) | 10% | Best Effort (10%) | 1p7q4t-out-q-default |
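The table above drives marking at the application and access edge. As a simple illustration (a sketch only, using the standard EF and AF41 code points rather than values mandated by this guide), an application can set the DSCP on its media sockets as follows:

```
# Illustrative DSCP marking of UDP media sockets.
import socket

DSCP_EF = 46     # commonly used for voice bearer traffic
DSCP_AF41 = 34   # commonly used for interactive video

def open_marked_socket(dscp):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The IP TOS byte carries the 6-bit DSCP in its upper bits, so shift left by 2.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock

voice_media = open_marked_socket(DSCP_EF)
video_media = open_marked_socket(DSCP_AF41)
```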
As shown in the preceding table, Cisco HCS uses CoS-based marking within the data center and maps CoS to DSCP. You can use a similar approach in the UCS, combined with enabling flow control between the UCS network port and the uplink port, to protect against dropped data in case of congestion at the UCS uplink. You can achieve this by using the DCE pause frame technique, which sends pause frames to the uplink port to hold the traffic for a few milliseconds while the congestion at the UCS level is cleared.
For more information, see: http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/cli/config/guide/2.0/
b_UCSM_CLI_Configuration_Guide_2_0_chapter_010010.html
Normally in Cisco HCS, the traffic that flows through the UCS is only Cisco HCS application traffic, which is mostly signaling traffic and therefore does not require much bandwidth. Also, because 10GE links are used between all the uplink and network ports, a Cisco HCS deployment should have enough bandwidth and may not need to enable the pause frame flow control technique.
Note You can apply only ingress traffic actions for QoS policies on Layer 2 interfaces. You can apply both ingress
and egress traffic actions on Layer 3 interfaces.
Quality of Service for Audio and Video Media from Softphones
Note With Cisco UC Integration for Microsoft Lync, Microsoft provides instant messaging and presence services.
Client Services Framework – Contact Management
Contacts can also be stored and retrieved locally using either of the following:
• Client Services Framework Cache
• Local address books and contact lists
The Client Services Framework uses reverse number lookup to map an incoming telephone number to a contact and to retrieve the contact's photo. The Client Services Framework contact management allows up to five search bases to be defined for LDAP queries.
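A simplified sketch of reverse number lookup follows; the contact data and normalization are illustrative only and do not reflect the Client Services Framework implementation.

```
# Illustrative reverse number lookup: map an incoming calling number to a cached contact.
CONTACTS = {
    "+14085551234": {"name": "Avery Chen", "photo": "avery.jpg"},
    "+14085555678": {"name": "Sam Rivera", "photo": "sam.jpg"},
}

def normalize(number: str) -> str:
    """Reduce a calling number to a bare +E.164 string for matching."""
    digits = "".join(ch for ch in number if ch.isdigit())
    return "+" + digits

def reverse_lookup(calling_number: str):
    return CONTACTS.get(normalize(calling_number))

print(reverse_lookup("1 (408) 555-1234"))
# -> {'name': 'Avery Chen', 'photo': 'avery.jpg'}
```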