iDirect Technical Reference Guide
Purpose
The Technical Reference Guide provides detailed technical information on iDirect technology
and major features as implemented in iDX Release 2.0.
Intended Audience
The intended audience for this guide includes network operators using the iDirect iDS system,
network architects, and anyone upgrading to iDX Release 2.0.
Note: It is expected that the user of this material has attended the iDirect IOM
training course and is familiar with the iDirect network solution and associated
equipment.
Document Conventions
This section illustrates and describes the conventions used throughout the manual. Take a
look now, before you begin using this manual, so that you’ll know how to interpret the
information presented.
Getting Help
The iDirect Technical Assistance Center (TAC) is available to help you 24 hours a day, 365 days
a year. Software user guides, installation procedures, a FAQ page, and other documentation
that supports our products are available on the TAC webpage. Please access our TAC webpage
at: http://tac.idirect.net.
If you are unable to find the answers or information that you need, you can contact the TAC at
(703) 648-8151.
If you are interested in purchasing iDirect products, please contact iDirect Corporate Sales by
telephone or email.
Telephone: (703) 648-8000
Email: [email protected]
iDirect strives to produce documentation that is technically accurate, easy to use, and helpful
to our customers. Your feedback is welcomed! Send your comments to [email protected].
This chapter presents a high-level overview of iDirect Networks. It provides a sample iDirect
network and describes the IP network architectures supported by iDirect.
System Overview
An iDirect network is a satellite based TCP/IP network with a Star topology in which a Time
Division Multiplexed (TDM) broadcast downstream channel from a central hub location is
shared by a number of remote nodes. iDX Release 2.0 supports both iDirect SCPC downstream
carriers and DVB-S2 downstream carriers. An example iDirect network is shown in Figure 1.
iDX 2.0 does not support iSCPC or Mesh networks.
The iDirect Hub equipment consists of an iDirect Hub Chassis with Hub Line Cards, a Protocol
Processor (PP), a Network Management System (NMS) and the appropriate RF equipment. Each
remote node consists of an iDirect broadband router and the appropriate external VSAT
equipment. The remotes transmit to the hub on one or more shared upstream carriers using
Deterministic Time Division Multiple Access (D-TDMA), based on dynamic timeplan slot
assignment generated at the Protocol Processor.
The selection of an upstream carrier by a remote is determined either at network acquisition
time or dynamically at run-time, based on a network configuration setting. iDirect software
has features and controls that allow the system to be configured to provide QoS and other
traffic engineered solutions to remote users. All network configuration, control, and
monitoring functions are provided via the integrated NMS.
The iDirect software provides:
• Packet-based and network-based QoS
• TCP acceleration
• AES link encryption
• Local DNS cache on the remote
• End-to-end VLAN tagging
• Dynamic routing protocol support via RIPv2 over the satellite link
• Multicast support via IGMPv2
• VoIP support via voice optimized features such as cRTP
An iDirect network interfaces to the external world through IP over Ethernet ports on the
remote unit and the Protocol Processor at the hub.
IP Network Architecture
The following figures illustrate the basic iDirect IP network architectures.
• Figure 2, “iDirect IP Architecture – Multiple VLANs per Remote”
• Figure 3, “iDirect IP Architecture – VLAN Spanning Remotes”
• Figure 4, “iDirect IP Architecture – Classic IP Configuration”
iDirect allows you to mix traditional IP routing based networks with VLAN based
configurations. This capability directly supports customers that have conflicting IP address
ranges, as well as multiple independent customers at a single remote site, by configuring
multiple VLANs directly on the remote.
In addition to end-to-end VLAN connection, the system supports RIPv2 in an end-to-end
manner, including over the satellite link; RIPv2 can be configured on a per-network-interface basis.
Digital Video Broadcasting (DVB) represents a set of open standards for satellite digital
broadcasting. DVB-S2 is an extension to the widely-used DVB-S standard and was introduced in
March 2005. It provides for:
• Improved inner coding: Low-Density Parity Coding
• Greater variety of modulations: QPSK, 8PSK, 16APSK
• Dynamic variation of the encoding on broadcast channel: Adaptive Coding and Modulation
These improvements lead to greater efficiencies and flexibility in the use of available
bandwidth.
DVB-S2 defines three methods of applying modulation and coding to a data stream:
• CCM (Constant Coding and Modulation) specifies that every BBFRAME is transmitted at the
same MODCOD. Effectively, the iDirect SCPC system is a CCM system.
Note: In iDX Release 2.0, you can simulate a CCM outbound carrier using short frames
by selecting ACM and setting the Maximum and Minimum MODCODs to the same
value. CCM using long frames will be supported in future releases. See the
iBuilder User Guide for details on configuring your carriers.
• ACM (Adaptive Coding and Modulation) specifies that every BBFRAME can be transmitted
on a different MODCOD. Remotes receiving an ACM carrier cannot anticipate the MODCOD
of the next BBFRAME. A DVB-S2 demodulator must be designed to handle dynamic
MODCOD variation.
• VCM (Variable Coding and Modulation) specifies that MODCODs are assigned according to
service type. As in ACM mode, the resulting downstream contains BBFRAMEs transmitted
at different MODCODs. (iDirect does not support VCM on the downstream.)
Figure 5 compares iDirect’s SCPC Mode, CCM Mode and ACM Mode.
In SCPC Mode, all frames are transmitted with a single modulation (QPSK or BPSK) and a
single coding rate (for example, TPC 0.793).
DVB-S2 in iDirect
iDirect DVB-S2 networks support ACM on the downstream carrier with all modulations up to
16APSK. An iDirect DVB-S2 network uses short DVB-S2 BBFRAMEs for ACM. iDirect does not
support VCM on the downstream carrier.
iDX Release 2.0 supports the following DVB-S2 hardware:
• Evolution eM1D1 line card (Tx/Rx; SCPC or DVB-S2)
• Evolution XLC-11 line card (Tx/Rx; SCPC or DVB-S2)
• Evolution XLC-10 line card (Tx-only; DVB-S2 networks only)
• Evolution XLC-M line card (Rx-only; one inbound channel; SCPC or DVB-S2 networks)
• Evolution e8350 remote satellite router (SCPC or DVB-S2 networks)
• Evolution iConnex e800/e850mp remote satellite routers (SCPC or DVB-S2 networks)
• Evolution X3 remote satellite router (DVB-S2 networks only)
• Evolution X5 remote satellite router (SCPC or DVB-S2 networks)
The eM1D1 line card and the XLC-11 line card are Tx/Rx line cards. Both line cards can
transmit either an iDirect SCPC or a DVB-S2 downstream carrier while receiving a TDMA
upstream carrier. An XLC-10 line card is a Tx-only line card that can be deployed in DVB-
S2 networks only. An XLC-M line card is a multi-channel, Rx-only line card that can be deployed
in either DVB-S2 or iDirect SCPC networks.
Note: In iDX Release 2.0, an XLC-M line card only supports a single inbound channel.
Note: The eM1D1, XLC-11, and XLC-M line cards all require the correct corresponding
hub firmware package to operate in a DVB-S2 or iDirect SCPC network. These
line cards require the evo_d_hub firmware for a DVB-S2 network and the
evo_l_hub firmware for an SCPC network. See the iBuilder User Guide chapter
titled “Converting Between SCPC and DVB-S2 Networks” for details.
An Evolution e8350, e800, e850 or X5 remote satellite router can receive either an SCPC or a
DVB-S2 downstream carrier while transmitting on the TDMA upstream carrier. An Evolution X3
remote satellite router can only operate in DVB-S2 networks.
DVB-S2 Downstream
An iDirect SCPC network is effectively CCM on the downstream. At configuration time, a
modulation (such as BPSK) and coding rate (such as TPC 0.79) are selected. These
characteristics of the downstream are fixed for the duration of the operation of the network.
A DVB-S2 downstream can be configured as CCM (future) or ACM. If you configure the
downstream as ACM, it is not constrained to operate at a fixed modulation and coding.
Instead, the modulation and coding of the downstream varies within a configurable range of
MODCODs.
An iDirect DVB-S2 downstream contains a continuous stream of Physical Layer Frames
(PLFRAMEs). The PLHEADER indicates the type of modulation and error correction coding used
on the subsequent data. It also indicates the data format and frame length. Refer to Figure 6.
The PLHEADER always uses π/2 BPSK modulation. Like most DVB-S2 systems, iDirect injects
pilot symbols within the data stream. The overhead of the DVB-S2 downstream varies
between 2.65% and 3.85%.
The symbol rate remains fixed on the DVB-S2 downstream. Variation in throughput is realized
through the variation of MODCODs in ACM Mode. The maximum possible
throughput of the DVB-S2 carrier (calculated at 45 MSps and highest MODCOD 16APSK 8/9) is
approximately 155 Mbps. As with iDirect SCPC networks, multiple protocol processors may be
required to support high traffic to multiple remotes.
iDirect uses DVB-S2 “Generic Streams” for encapsulation of downstream data between the
DVB-S2 line cards and remotes. Although the DVB-S2 standard includes the provision for
generic streams, it is silent on how to encapsulate data in this mode. iDirect uses the
proprietary LEGS (Lightweight Encapsulation for Generic Streams) protocol for this purpose.
LEGS maximizes the efficiency of data packing into BBFRAMES on the downstream. For
example, if a timeplan only takes up 80% of a BBFRAME, the LEGS protocol allows the line
card to include a portion of another packet that is ready for transmission in the same frame.
This results in maximum use of the downstream bandwidth.
ACM Operation
ACM mode allows remotes operating in better signal conditions to receive data on higher
MODCODs. This is accomplished by varying the MODCODs of data targeted to specific remotes
to match their current receive capabilities.
Not all data is sent to a remote at its best MODCOD. Important system information (such as
timeplan messages), as well as broadcast traffic, is transmitted at the minimum MODCOD
configured for the outbound carrier. This allows all remotes in the network, even those
operating at the worst MODCOD, to reliably receive this information.
The protocol processor determines the maximum MODCOD for all data sent to the DVB-S2 line
card for transmission over the outbound carrier. However, the line card does not necessarily
respect these MODCOD assignments. In the interest of downstream efficiency, some data
scheduled for a high MODCOD may be transmitted at a lower one as an alternative to inserting
padding bytes into a BBFRAME. When assembling a BBFRAME for transmission, the line card
first packs all available data for the chosen MODCOD into the frame. If there is space left in
the BBFRAME, and no data left for transmission at that MODCOD, the line card attempts to
pack the remainder of the frame with data for higher MODCODs. This takes advantage of the
fact that a remote can demodulate any MODCOD in the range between the carrier’s minimum
MODCOD and the remote’s current maximum MODCOD.
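The following Python sketch illustrates this packing rule under simplified assumptions; it is
not iDirect's implementation, and the queue structure, packet sizes, and function names are
hypothetical. The partial-packet packing provided by LEGS is omitted for brevity.

def pack_bbframe(queues, chosen_modcod, frame_capacity):
    """queues: dict mapping MODCOD index -> list of packet sizes (bytes),
    ordered from lowest (most robust) to highest MODCOD index."""
    frame, remaining = [], frame_capacity
    # First take everything waiting at the chosen MODCOD that fits, then fall
    # back to progressively higher MODCODs to avoid padding, since those
    # remotes can also demodulate the chosen (lower) MODCOD.
    for modcod in [chosen_modcod] + [m for m in sorted(queues) if m > chosen_modcod]:
        while queues.get(modcod) and queues[modcod][0] <= remaining:
            pkt = queues[modcod].pop(0)
            frame.append((modcod, pkt))
            remaining -= pkt
        if remaining == 0:
            break
    return frame, remaining   # any remaining bytes would become padding

# Example: a 1,000-byte frame assembled for MODCOD index 3
example_queues = {3: [600], 5: [300, 200]}
print(pack_bbframe(example_queues, 3, 1000))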
The maximum MODCOD of a remote is based on the latest Signal-to-Noise Ratio (SNR)
reported by the remote to the protocol processor. The table in Figure 7 shows the SNR
thresholds per MODCOD for the Evolution X3 and X5 remotes. The table in Figure 8 shows the
SNR thresholds per MODCOD for the Evolution e8350 remote. These values are determined
during hardware qualification. The graph shows how spectral efficiency increases as the
MODCOD changes.
Figure 8. SNR Threshold vs. MODCOD for the Evolution e8350 Remote
The hub adjusts the MODCODs of the transmissions to the remotes by means of the feedback
loop shown in Figure 9 on page 14. Each remote continually measures its downstream SNR and
reports the current value to the protocol processor. When the protocol processor assigns data
to an individual remote, it uses the last reported SNR value to determine the highest MODCOD
on which that remote can receive data without exceeding a specified BER. The protocol
processor includes this information when sending outbound data to the line card. The line
card then adjusts the MODCOD of the BBFRAMES to the targeted remotes accordingly.
Note: The line card may adjust the MODCOD of the BBFRAMEs downward for reasons
of downstream packing efficiency.
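As an illustration of the MODCOD selection step in this loop, the following Python sketch
picks the highest MODCOD whose SNR threshold the remote's last reported SNR still meets. The
threshold values shown are placeholders, not the qualified values in Figures 7 and 8.

SNR_THRESHOLDS_DB = [          # (MODCOD, required SNR in dB) - illustrative values only
    ("QPSK 1/4", -2.0),
    ("QPSK 1/2", 1.0),
    ("QPSK 3/4", 4.0),
    ("8PSK 3/4", 7.9),
    ("16APSK 8/9", 13.0),
]

def max_modcod(reported_snr_db, carrier_min="QPSK 1/4"):
    """Return the highest MODCOD the remote can currently receive,
    never lower than the carrier's configured minimum MODCOD."""
    best = carrier_min
    for name, threshold in SNR_THRESHOLDS_DB:
        if reported_snr_db >= threshold:
            best = name
    return best

print(max_modcod(8.3))   # -> "8PSK 3/4" with these placeholder thresholds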
Figure 9 and Figure 10 show the operation of the SNR feedback loop and the behavior of the
line card and remote during fast fade conditions. Figure 9 shows the basic SNR reporting loop
described above. The example shows an XLC-10 line card transmitting to an X3 remote.
However, the feedback loop discussion applies to any Evolution line card that is transmitting a
DVB-S2 carrier to any Evolution remote.
Figure 10 shows the backoff mechanism that exists between the line card and protocol
processor to prevent data loss. The protocol processor decreases the maximum data sent to
the line card for transmission based on a measure of the number of remaining untransmitted
bytes on the line card. These bytes are scaled according to the MODCOD on which they are to
be transmitted, since bytes destined to be transmitted at lower MODCODs will take longer to
transmit than bytes destined to be transmitted on a higher MODCODs.
Figure 10. Feedback Loop with Backoff from Line Card to Protocol Processor
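A minimal sketch of the backoff idea follows, assuming the untransmitted bytes on the line
card are weighted by the per-MODCOD scaling factors listed later in this chapter. The function
names, backlog limit, and throttling rule are illustrative, not the actual protocol processor
algorithm.

def scaled_backlog(pending_bytes_by_modcod, scaling_factor):
    """pending_bytes_by_modcod: dict MODCOD -> untransmitted bytes on the line card.
    scaling_factor: dict MODCOD -> relative airtime per byte (larger = slower)."""
    return sum(nbytes * scaling_factor[mc] for mc, nbytes in pending_bytes_by_modcod.items())

def throttle(pending_bytes_by_modcod, scaling_factor, nominal_rate, backlog_limit):
    """Reduce the rate at which data is fed to the line card as the scaled
    backlog grows, so the line card queue cannot overflow."""
    backlog = scaled_backlog(pending_bytes_by_modcod, scaling_factor)
    if backlog <= backlog_limit:
        return nominal_rate
    return nominal_rate * backlog_limit / backlog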
Figure 11. Total Bandwidth vs. Information Rate in Fixed Bandwidth Operation
EIR is only enabled in the range of MODCODs from the remote’s Nominal MODCOD down to the
configured EIR Minimum MODCOD. Within this range, the system always attempts to allocate
requested bandwidth in accordance with the CIR and MIR settings, regardless of the current
MODCOD at which the remote is operating. Since higher MODCODs contain more information
bits per second, as the remote’s MODCOD increases, so does the capacity of the outbound
channel to carry additional information.
As signal conditions worsen, and the MODCOD assigned to the remote drops, the system
attempts to maintain CIR and MIR only down to the configured EIR Minimum MODCOD. If the
remote drops below this EIR Minimum MODCOD, it is allocated bandwidth based on the
remote’s Nominal MODCOD with the rate scaled to the MODCOD actually assigned to the
remote. The net result is that the remote receives the CIR or MIR as long as the current
MODCOD of the remote does not fall below the EIR Minimum MODCOD. Below the EIR
minimum MODCOD, the information rate achieved by the remote falls below the configured
settings.
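The following sketch expresses this EIR behavior using the scaling formula given later in this
chapter (IRa = IRn x Sb / Sa). The scaling factors come from the MODCOD table below; the
function and variable names are our own.

def eir_information_rate(cir_kbps, current_mc, nominal_mc, eir_min_mc, scale):
    """scale: dict MODCOD -> scaling factor (larger = less efficient MODCOD).
    Between the Nominal and EIR Minimum MODCODs the configured CIR is maintained;
    below the EIR Minimum the satellite bandwidth is frozen at the Nominal-MODCOD
    amount, so the delivered information rate drops."""
    if scale[current_mc] <= scale[eir_min_mc]:
        return cir_kbps                    # CIR maintained within the EIR range
    return cir_kbps * scale[nominal_mc] / scale[current_mc]

scale = {"16APSK 8/9": 1.2382, "QPSK 8/9": 2.4605, "QPSK 1/2": 5.0596}
print(eir_information_rate(1024, "QPSK 1/2", "16APSK 8/9", "QPSK 8/9", scale))
# ~251 kbps: below the EIR Minimum MODCOD, the delivered rate falls below the CIR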
The system behavior in EIR mode is shown in Figure 12. The remote’s Nominal MODCOD is
labeled “Nominal” in the figure. The system maintains the CIR and MIR down to the EIR
Minimum MODCOD. Notice in the figure that when the remote is operating below EIR Minimum
MODCOD, it is granted the same amount of satellite bandwidth as at the remote’s Nominal
MODCOD.
Figure 12. EIR: Total Bandwidth vs. Information Rate as MODCOD Varies
MODCOD        Scaling Factor   Comments
16APSK 8/9    1.2382           Best MODCOD
16APSK 5/6    1.3415
16APSK 4/5    1.4206
16APSK 3/4    1.5096
16APSK 2/3    1.6661
8PSK 8/9      1.6456
8PSK 5/6      1.7830
8PSK 3/4      2.0063
8PSK 2/3      2.2143
8PSK 3/5      2.4705
QPSK 8/9      2.4605
QPSK 5/6      2.6659
QPSK 4/5      2.8230
QPSK 3/4      2.9998
QPSK 2/3      3.3109
QPSK 3/5      3.6939
QPSK 1/2      5.0596
QPSK 2/5      5.6572
QPSK 1/3      6.8752
QPSK 1/4      12.0749          Worst MODCOD
The following formula can be used to determine the information rate at which data is sent
when that data is scaled to the remote’s Nominal MODCOD:
IRa = IRn x Sb / Sa
where:
• IRa is the actual information rate at which the data is sent
• IRn is the nominal information rate (for example, the configured CIR)
• Sb is the scaling factor for the remote’s Nominal MODCOD
• Sa is the scaling factor for the MODCOD at which the data is sent
For example, assume that a remote is configured with a CIR of 1024 kbps and a Nominal
MODCOD of 16APSK 8/9. If EIR is not in effect, and data is being sent to the remote at
MODCOD QPSK 8/9, then the resulting information rate is:
IRa = IRn x Sb / Sa
IRa = 1024 kbps x 1.2382 / 2.4605 = 515 kbps
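The same calculation can be expressed as a small helper function; the values below match the
worked example above and serve only as a sanity check.

def actual_information_rate(ir_nominal, s_nominal, s_actual):
    """IRa = IRn * Sb / Sa, where Sb is the scaling factor of the remote's
    Nominal MODCOD and Sa that of the MODCOD the data is actually sent at."""
    return ir_nominal * s_nominal / s_actual

print(actual_information_rate(1024, 1.2382, 2.4605))   # ~515 kbps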
For two scenarios showing how CIR and MIR are allocated for a DVB-S2 network in ACM mode,
see page 44 and page 46.
Note: When bandwidth is allocated for a remote, the CIR and MIR are scaled to the
remote’s Nominal MODCOD. At higher levels of the Group QoS tree (Bandwidth
Group, Service Group, etc.), CIR and MIR are scaled to the network’s best
MODCOD.
DVB-S2 Configuration
The iBuilder GUI allows you to configure various parameters that affect the operation of your
DVB-S2 networks. For details on configuring DVB-S2, see the iBuilder User Guide. The
following areas are affected:
• Downstream Carrier Definition: When you add an ACM DVB-S2 downstream carrier, you
must specify a range of MODCODs over which the carrier will operate. Error correction for
the carrier is fixed to LDPC and BCH. In addition, you cannot select an information rate or
transmission rate for a DVB-S2 carrier as an alternative to the symbol rate, since these
rates will vary dynamically with changing MODCODs.
However, beginning with iDX Release 2.0, iBuilder provides a MODCOD Distribution
Calculator that allows you to estimate the overall IP Data Rate for your carrier based on
the distribution of the Nominal MODCODs of the remotes in your network. You can access
this calculator by clicking the MODCOD Distribution button on the DVB-S2 Downstream
Carrier dialog box. A similar button allows you to estimate CIR and MIR bandwidth
requirements at various levels of the Group QoS tree.
This chapter describes the modulation modes and Forward Error Correction (FEC) rates that
are supported in iDX Release 2.0.
Note: For specific Eb/No values for each FEC rate and Modulation combination, refer
to the iDX 2.0 Link Budget Analysis Guide, which is available for download from
the TAC web page located at http://tac.idirect.net.
Table 3 on page 22 shows the upstream and downstream Modulation Modes and FEC Rates for
Evolution and iNFINITI hardware. iDirect also supports 2D 16-State Inbound Coding on
upstream TDMA carriers in DVB-S2 networks only. For details see “2D 16-State Inbound Coding
for DVB-S2 Networks” on page 23.
In addition to the advantages offered by 2D 16-State Inbound Coding, Evolution line cards
have much greater FPGA resources than iNFINITI line cards, allowing improved demodulator
performance for existing TPC FEC rates even for SCPC networks containing iNFINITI remotes.
For example:
• QPSK Rate 0.533 TPC has a 1 dB improvement in C/N and Ebi/No threshold on Evolution
line cards when compared to iNFINITI line cards.
• 8PSK Rate 0.66 TPC has a 0.8 dB improvement in C/N and Ebi/No threshold on Evolution
line cards when compared to iNFINITI line cards.
Note: For specific Eb/No values for each FEC rate and Modulation combination, refer
to the iDirect Link Budget Analysis Guide, which is available for download from
the TAC web page located at http://tac.idirect.net.
Note: For the list of DVB-S2 downstream MODCODs supported in iDX 2.0, see Table 1
on page 8.
Note: For specific Eb/No values for each FEC rate and Modulation combination, refer
to the iDX 2.0 Link Budget Analysis Guide.
Table 4. Modulation Modes and FEC Rates for 2D 16-State Inbound Coding
Table 5. Block Sizes and IP Payload Sizes for 2D 16-State Inbound Coding
This section provides information about Spread Spectrum technology in an iDirect network. It
discusses the following topics:
• “What is Spread Spectrum?” on page 25
• “Downstream Specifications” on page 27
• “Upstream Specifications” on page 28
Spreading takes place when the input data (dt) is multiplied with the PN code (pnt) which
results in the transmit baseband signal (txb). The baseband signal is then modulated and
transmitted to the receiving station. Despreading takes place at the receiving station when
the baseband signal is demodulated (rxb) and correlated with the replica PN (pnr) which
results in the data output (dr).
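The following Python sketch illustrates the spreading and despreading steps just described,
using +1/-1 chips and a hypothetical spreading-factor-4 PN code. It only demonstrates the
dt x pnt = txb and rxb correlated with pnr = dr relationships; real PN sequences, chip rates,
and modulation are not shown.

import numpy as np

def spread(data_bits, pn_code):
    """Each data symbol (+1/-1) is multiplied by the whole PN sequence."""
    return np.concatenate([bit * pn_code for bit in data_bits])

def despread(rx_baseband, pn_code):
    """Correlate each chip group with the replica PN code and take the sign."""
    chips = rx_baseband.reshape(-1, len(pn_code))
    return np.sign(chips @ pn_code)

pn = np.array([1, -1, 1, 1])      # hypothetical spreading factor 4 code
data = np.array([1, -1, 1])
tx = spread(data, pn)
print(despread(tx, pn))           # recovers [ 1. -1.  1.]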
Spread Spectrum transmission is supported in both TDMA and SCPC configurations. Spread
spectrum is not available on DVB-S2 downstream carriers. SS mode is employed in iDirect
networks to minimize adjacent satellite interference (ASI). ASI can occur in applications such
Note: A Downstream Spreading Factor of 8 is only available for Evolution hub line
cards transmitting to Evolution Remotes. Upstream Spreading Factors of 8 and
16 are only available for Evolution Remotes transmitting to Evolution hub line
cards.
Note: The following uses of Spread Spectrum require a license from iDirect: Upstream
Spread Spectrum for the Evolution X5 remote; Upstream Spread Spectrum for
the XLC-11 line card; and Downstream Spread Spectrum for the XLC-11 line
card.
Each symbol in the spreading code is called a “chip”, and the spread rate is the rate at which
chips are transmitted. For example, selecting an SF of 1 means that the spread rate is one
chip per symbol (which is equivalent to regular BPSK, and therefore, there is no spreading).
Selecting an SF of 4 means that the spread rate is four chips per symbol.
An additional Spreading Factor, COTM SF=1, is for upstream TDMA carriers only. Like an SF of
1, if you select COTM SF=1, there is no spreading. However, the size of the carrier unique
word is increased, allowing mobile remotes to remain in the network when they might
otherwise drop out. An advantage of this spreading factor is that you can receive error-free
data at a slightly lower C/N compared to regular BPSK. However, carriers with COTM SF=1
transmit at a slightly lower information rate.
COTM SF=1 is primarily intended for use by fast moving mobile remotes. The additional unique
word overhead allows the remote to tolerate more than 10 times as much frequency offset as
can be tolerated by regular BPSK. That makes COTM SF=1 the appropriate choice when the
Doppler effect caused by vehicle speed and acceleration is significant even though the link
budget does not require spreading. Examples include small maritime vessels, motor vehicles,
trains, and aircraft. Slow moving, large maritime vessels generally do not require COTM SF=1.
Spread Spectrum can also be used to hide a carrier in the noise of an empty transponder.
However, SS should not be confused with Code Division Multiple Access (CDMA), which is the
process of transmitting multiple SS channels simultaneously on the same bandwidth.
Spread Spectrum may also be useful in situations where local or RF interference is
unavoidable, such as hostile jamming. However, iDirect designed the Spread Spectrum feature
primarily for COTM and ASI mitigation. iDirect SS may be a good solution for overcoming some
instances of interference or jamming, but it is recommended that you discuss your particular
application with iDirect sales engineering.
Note: You must install the M1D1-TSS HLC in a slot that has one empty slot to the right.
For example, if you want to install the HLC in slot 4, slot 5 must be empty. Be sure
that you also check chassis slot configuration in iBuilder to verify that you are not
installing the HLC in a reserved slot.
The remotes that support spread spectrum are the iNFINITI 8350, the Evolution e8350, and
the iConnex e800 and e850mp. The Evolution X5 supports upstream Spread Spectrum if Spread
Spectrum is licensed on the remote. Other remotes do not currently support spread spectrum.
Downstream Specifications
The specifications for the spread spectrum downstream channel are outlined in Table 6.
Upstream Specifications
The specifications for the spread spectrum upstream channel are outlined in Table 7. The
Spreading Factor COTM 1, used in fast moving mobile applications, is described on page 26.
This chapter describes how you can configure Quality of Service definitions to achieve
maximum efficiency by prioritizing traffic.
QoS Measures
When discussing QoS, at least four interrelated measures are considered. These are
Throughput, Latency, Jitter, and Packet Loss. This section describes these parameters in
general terms, without specific regard to an iDirect network.
Throughput. Throughput is a measure of capacity and indicates the amount of user data that
is received by the end user application. For example, a G.729 voice call without additional
compression (such as cRTP) or voice suppression requires a constant 24 Kbps of application
level RTP data to achieve acceptable voice quality for the duration of the call. Therefore this
application requires 24 Kbps of throughput. When adequate throughput cannot be achieved
on a continuous basis to support a particular application, QoS can be adversely affected.
Latency. Latency is a measure of the amount of time between events. Unqualified latency is
the amount of time between the transmission of a packet from its source and the receipt of
that packet at the destination. If explicitly qualified, it may also mean the amount of time
between a request for a network resource and the time when that resource is received. In
general, latency accounts for the total delay between events and it includes transit time,
queuing, and processing delays. Keeping latency to a minimum is very important for VoIP
applications for human factor reasons.
Packet Loss. Packet Loss is a measure of the number of packets that are transmitted by a
source, but not received by the destination. The most common cause of packet loss on a
network is network congestion. Congestion occurs whenever the volume of traffic exceeds the
available bandwidth. In these cases, packets fill queues internal to network devices at
a rate faster than those packets can be transmitted from the device. When this condition
exists, network devices drop packets to keep the network in a stable condition. Applications
that are built on a TCP transport interpret the absence of these packets (and the absence of
their related ACKs) as congestion and they invoke standard TCP Slow Start and congestion
avoidance techniques. With real time applications, such as VoIP or streaming video, it is often
impossible to gracefully recover these lost packets because there is not enough time to
retransmit them. Packet loss may affect the application in adverse ways. For example,
parts of words in a voice call may be missing or there may be an echo; video images may break
up or become block-like (pixelation effects).
Service Levels
A Service Level may represent a single application (such as VoIP traffic from a single IP
address) or a broad class of applications (such as all TCP based applications). Each Service
Level is defined by one or more packet-matching rules. The set of rules for a Service Level
allows logical combinations of comparisons to be made between the following IP packet
fields:
• Source IP address
• Destination IP address
• Source port
• Destination port
• Protocol (such as DiffServ DSCP)
• TOS priority
• TOS precedence
• VLAN ID
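As an illustration only, the following sketch shows how a Service Level rule might match
packets against the fields listed above. The rule structure and field names are hypothetical;
in practice these rules are defined through the iBuilder GUI rather than in code.

def matches(packet, rule):
    """packet and rule are dicts; a rule field of None means 'any value'."""
    for field in ("src_ip", "dst_ip", "src_port", "dst_port",
                  "protocol", "tos_priority", "tos_precedence", "vlan_id"):
        if rule.get(field) is not None and packet.get(field) != rule[field]:
            return False
    return True

voip_rule = {"protocol": "UDP", "dst_port": 5060, "vlan_id": 10}
pkt = {"protocol": "UDP", "src_ip": "10.0.0.5", "dst_ip": "10.0.1.9",
       "src_port": 32000, "dst_port": 5060, "vlan_id": 10}
print(matches(pkt, voip_rule))    # True: the packet satisfies every rule field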
Packet Scheduling
Packet Scheduling is a method used to transmit traffic according to priority and classification.
In a network that has a remote that always has enough bandwidth for all of its applications,
packets are transmitted in the order that they are received without significant delay.
Application priority makes little difference since the remote never has to select which packet
to transmit next.
In a network where there are periods of time in which a remote does not have sufficient
bandwidth to transmit all queued packets, the remote scheduling algorithm must determine
which packet from a set of queued packets across a number of service levels to transmit next.
For each service level you define in iBuilder, you can select any one of three queue types to
determine how packets using that service level are to be selected for transmission. These are
Priority Queue, Class-Based Weighted Fair Queue (CBWFQ), and Best-Effort Queue.
The procedures for defining profiles and service levels are detailed in the chapter titled
“Configuring Quality of Service for iDirect Networks” of the iBuilder User Guide.
Priority Queues are emptied before CBWFQ queues are serviced and CBWFQ queues are in
turn emptied before Best Effort queues are serviced. Figure 15 on page 33 presents an
overview of the iDirect packet scheduling algorithm.
The packet scheduling algorithm (Figure 15) first services packets from Priority Queues in
order of priority, P1 being the highest priority for non-multicast traffic. It selects CBWFQ
packets only after all Priority Queues are empty. Similarly, packets are taken from Best Effort
Queues only after all CBWFQ packets are serviced.
You can define multiple service levels using any combination of the three queue types. For
example, you can use a combination of Priority and Best Effort Queues only.
Priority Queues
There are four levels of user Priority Queues:
• Multicast: (Highest priority. Only for downstream multicast traffic.)
• Level 1: P1
• Level 2: P2
• Level 3: P3
• Level 4: P4 (Lowest priority)
All queues of higher priority must be empty before any lower-priority queues are serviced. If
two or more queues are set to the same priority level, then all queues of equal priority are
emptied using a round-robin selection algorithm prior to selecting any packets from lower
priority queues.
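The selection rule can be sketched as follows: service strictly by priority level, and
round-robin among queues configured at the same level. This is an illustrative simplification,
not the iDirect scheduler itself; queue contents and round-robin state handling are reduced to
a minimum.

from collections import deque

def next_packet(queues_by_priority, rr_state):
    """queues_by_priority: dict priority (0 = Multicast/highest) -> list of deques.
    rr_state: dict priority -> index of the next queue to try (round robin)."""
    for prio in sorted(queues_by_priority):
        queues = queues_by_priority[prio]
        if not any(queues):
            continue                       # this level is empty; fall to a lower level
        start = rr_state.get(prio, 0)
        for offset in range(len(queues)):
            idx = (start + offset) % len(queues)
            if queues[idx]:
                rr_state[prio] = (idx + 1) % len(queues)
                return queues[idx].popleft()
    return None                            # nothing queued anywhere

queues = {1: [deque(["p1-a"]), deque(["p1-b"])], 4: [deque(["p4-a"])]}
state = {}
print(next_packet(queues, state), next_packet(queues, state), next_packet(queues, state))
# -> p1-a p1-b p4-a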
Group QoS
Group QoS (GQoS), introduced in iDS Release 8.0, enhances the power and flexibility of
iDirect’s QoS feature for TDMA networks. It allows advanced network operators a high degree
of flexibility in creating subnetworks and groups of remotes with various levels of service
tailored to the characteristics of the user applications being supported.
Group QoS is built on the Group QoS tree: a hierarchical construct within which containership
and inheritance rules allow the iterative application of basic allocation methods across groups
and subgroups. QoS properties configured at each level of the Group QoS tree determine how
bandwidth is distributed when demand exceeds availability.
Group QoS enables the construction of very sophisticated and complex allocation models. It
allows network operators to create network subgroups with various levels of service on the
same outbound carrier or inroute group. It allows bandwidth to be subdivided among
customers or Service Providers, while also allowing oversubscription of one group’s configured
capacity when bandwidth belonging to another group is available.
Note: Group QoS applies only to TDMA networks. It does not apply to iDirect iSCPC
connections.
For details on using the Group QoS feature, see the chapter titled “Configuring Quality of
Service for iDirect Networks” in the iBuilder User Guide.
Bandwidth Pool
A Bandwidth Pool is the highest node in the Group QoS hierarchy. As such, all sub-nodes of a
Bandwidth Pool represent subdivisions of the bandwidth within that Bandwidth Pool. In the
iDirect network, a Bandwidth Pool consists of an outbound carrier or an inroute group.
Bandwidth Group
A Bandwidth Pool can be divided into multiple Bandwidth Groups. Bandwidth Groups allow a
network operator to subdivide the bandwidth of an outroute or inroute group. Different
Bandwidth Groups can then be assigned to different Service Providers or Virtual Network
Operators (VNO).
Bandwidth Groups can be configured with any of the following:
• CIR and MIR: Typically, the sum of the CIR bandwidth of all Bandwidth Groups equals the
total bandwidth. When MIR is larger than CIR, the Bandwidth Group is allowed to exceed
its CIR when bandwidth is available.
• Priority: A group with highest priority receives its bandwidth before lower-priority groups.
• Cost: Cost allows bandwidth allocations to different groups to be unequally apportioned
within the same priority. For equal requests, lower cost nodes are granted more
bandwidth than higher cost nodes.
Bandwidth Groups are typically configured using CIR and MIR for a strict division of the total
bandwidth among the groups. By default, any Bandwidth Pool is configured with a single
Bandwidth Group.
Service Group
A Service Provider or a Virtual Network Operator can further divide a Bandwidth Group into
sub-groups called Service Groups. A Service Group can be used strictly to group remotes into
sub-groups or, more typically, to differentiate groups by class of service. For example, a
platinum, gold, silver and best effort service could be defined as Service Groups under the
same Bandwidth Group.
Like Bandwidth Groups, Service Groups can be configured with CIR, MIR, Priority and Cost.
Service Groups are typically configured with either a CIR and MIR for a physical separation of
the groups, or with a combination of Priority, Cost and CIR/MIR to create tiered service. By
default, a single Service Group is created for each Bandwidth Group.
Application Group
An Application defines a specific service available to the end user. Application Groups can be
associated with any Service Group. The following are examples:
• VoIP
• Video
• Oracle
• Citrix
• VLAN
• NMS Traffic
• Default
Each Application List can have one or more matching rules such as:
• Protocol: TCP, UDP, and ICMP
• Source and/or Destination IP or IP Subnet
• Source and/or Destination Port Number
• DSCP Value or DSCP Ranges
• VLAN
Each Application List can be configured with any of the following:
• CIR/MIR
• Priority
• Cost
Service Profiles
Service Profiles are derived from the Application Group by selecting Applications and
matching rules and assigning per remote CIR and MIR when applicable. While the Application
Group specifies the CIR/MIR by Application for the whole Service Group, the Service Profile
specifies the per-remote CIR/MIR by Application. For example, the VoIP Application could be
configured with a CIR of 1 Mbps for the Service Group in the Application Group and a CIR of 14
Kbps per-remote in the Service Profile.
Typically, all remotes in a Service Group use the Default Profile for that Service Group. When
a remote is created under an inroute group, the QoS Tab allows the operator to assign the
remote to a Bandwidth Group and Service Group. The new remote automatically receives the
default profile for the Service Group. The Group QoS interface can also be used to assign a
remote to a Service Group or change the assignment of the remote from one Service Group to
another.
In order to accommodate special cases, however, additional profiles (other than the Default
Profile) can be created. For example, profiles can be used by a specific remote to prioritize
an Application that is not used by other remotes; to prioritize a specific VLAN on a remote; or
to prioritize traffic to a specific IP address (such as a file server) connected to a specific
remote in the Service Group. Or a Network Operator may want to configure some remotes for
a single VoIP call and others for two VoIP calls. This can be accomplished by assigning
different profiles to each group of remotes.
Note: Another solution would be to create a single Bandwidth Group with two Service
Groups. This solution would limit the flexibility, however, if the satellite
provider decides in the future to further split each group into sub-groups.
VoIP could also be configured as priority 1 traffic. In that case, demand for VoIP must be fully
satisfied before serving lower priority applications. Therefore, it is important to configure an
MIR to avoid having VoIP consume all available bandwidth.
Note that cost could be used instead of priority if the intention were to have a fair allocation
rather than to satisfy the Platinum service before any bandwidth is allocated to Gold; and
then satisfy the Gold service before any bandwidth is allocated to Silver. For example:
• Platinum – Cost 0.1 - CIR 6 Mbps, MIR 12 Mbps
• Gold – Cost 0.2 - CIR 6 Mbps, MIR 18 Mbps
• Silver – Cost 0.3 - No CIR, No MIR Defined
Note: When bandwidth is allocated for a remote, the CIR and MIR are scaled to the
remote’s Nominal MODCOD. At higher levels of the Group QoS tree (Bandwidth
Group, Service Group, etc.), CIR and MIR are scaled to the network’s best
MODCOD.
Referring to Figure 22:
• The Scaled CIR for Remote 1 = 1 Mbps * 1.6456 / 1.2382 = 1.33 Mbps
• The Scaled CIR for Remote 2 = 1 Mbps * 2.4605 / 1.2382 = 1.99 Mbps
• The Scaled CIR for Remote 3 = 1 Mbps * 3.6939 / 1.2382 = 2.98 Mbps
• The Scaled Aggregate CIR for the three remotes is 6.3 Mbps. Since the Scaled Aggregate
CIR is less than the Service Group CIR (6.5 Mbps), all three remotes get their full CIR of 1
Mbps.
• The remaining 900 Kbps (Service Group MIR of 7.2 Mbps minus 6.3 Mbps required for CIRs)
are divided equally between the three remotes which gives each remote 300 Kbps based
on the Nominal MODCODs.
• Remote 1 receives 300 Kbps * 1.2382 / 1.6456 = 226 Kbps of Best Effort for a Total of 1.226
Mbps
• Remote 2 receives 300 Kbps * 1.2382 / 2.4605 = 150 Kbps of Best Effort for a Total of 1.151
Mbps
• Remote 3 receives 300 Kbps * 1.2382 / 3.6939 = 101 Kbps of Best Effort for a Total of 1.101
Mbps
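The Figure 22 arithmetic above can be reproduced with a short script. The group CIR
(6.5 Mbps), group MIR (7.2 Mbps), per-remote CIR (1 Mbps), and scaling factors are taken from
the example; the script itself is only an illustration of the scaling steps.

BEST = 1.2382                                    # scaling factor of 16APSK 8/9 (best MODCOD)
remotes = {"Remote 1": 1.6456, "Remote 2": 2.4605, "Remote 3": 3.6939}
cir_kbps = 1000

scaled_cirs = {r: cir_kbps * s / BEST for r, s in remotes.items()}
aggregate = sum(scaled_cirs.values())            # ~6.3 Mbps, below the 6.5 Mbps group CIR
leftover = 7200 - aggregate                      # group MIR minus the aggregate scaled CIR
share = leftover / len(remotes)                  # ~300 kbps each at the best MODCOD

for r, s in remotes.items():
    best_effort = share * BEST / s               # scale back to the remote's Nominal MODCOD
    print(r, round(scaled_cirs[r]), round(cir_kbps + best_effort))
# -> Remote 1: ~1226 kbps total, Remote 2: ~1151 kbps, Remote 3: ~1101 kbps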
Figure 24 shows two remotes, Remote 1 and Remote 2. Remote 1 is configured with a CIR of
256 Kbps while Remote 2 is configured with a CIR of 512 Kbps. Both remotes are requesting
their full CIR, but only 450 Kbps of bandwidth is available.
The tree on the left-hand side of Figure 24 shows the result of disabling Bandwidth Allocation
Fairness Relative to CIR for the Service Group. The bandwidth is split equally between Remote
1 and Remote 2 until the bandwidth is exhausted. Both remotes receive 225 Kbps of
bandwidth. (If Remote 1’s CIR could be fully satisfied, any remaining bandwidth would be
granted to Remote 2. For example, if Remote 1 had only 200 Kbps of configured CIR, Remote
1 would be granted 200 Kbps of bandwidth and Remote 2 would be granted 250 Kbps of
bandwidth.)
The tree on the right-hand side of Figure 24 shows the result of enabling Bandwidth Allocation
Fairness Relative to CIR for the Service Group. In that case, Remote 1 receives 150 Kbps of
bandwidth, half that of Remote 2, since Remote 1 has half the configured CIR of Remote 2.
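A minimal sketch of the two allocation modes compared in Figure 24 follows. The allocator
shown is a simplification limited to a single Service Group with every remote requesting its
full CIR; the function and remote names are our own.

def allocate(cir_kbps, available_kbps, fairness_relative_to_cir):
    """cir_kbps: dict remote -> configured CIR; all remotes request their full CIR."""
    if fairness_relative_to_cir:
        # Grants proportional to configured CIR (right-hand tree in Figure 24).
        total = sum(cir_kbps.values())
        return {r: min(c, available_kbps * c / total) for r, c in cir_kbps.items()}
    # Equal split capped at each remote's CIR; any remainder is re-shared among
    # remotes whose CIR is not yet satisfied (left-hand tree in Figure 24).
    grants = {r: 0.0 for r in cir_kbps}
    remaining, unsatisfied = available_kbps, set(cir_kbps)
    while remaining > 1e-9 and unsatisfied:
        share = remaining / len(unsatisfied)
        remaining = 0.0
        for r in list(unsatisfied):
            take = min(share, cir_kbps[r] - grants[r])
            grants[r] += take
            remaining += share - take
            if grants[r] >= cir_kbps[r]:
                unsatisfied.discard(r)
    return grants

print(allocate({"Remote 1": 256, "Remote 2": 512}, 450, False))   # ~225 kbps each
print(allocate({"Remote 1": 256, "Remote 2": 512}, 450, True))    # ~150 and ~300 kbps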
Figure 25 shows two remotes, Remote 1 and Remote 2, each configured with a CIR of 1 Mbps.
Remote 1 is operating at a Nominal MODCOD of 8PSK 3/4. Remote 2 is operating at a Nominal
MODCOD of QPSK 3/4. Both remotes are requesting their full CIR, but only enough bandwidth
to satisfy 1.65 Mbps of CIR at 8PSK 3/4 is available. Note that QPSK 3/4 requires about 1.5
times the raw satellite bandwidth of 8PSK 3/4 to deliver the same CIR.
The tree on the left-hand side of Figure 25 shows the result of disabling Bandwidth Allocation
Fairness Relative to MODCOD for the Service Group. The satellite bandwidth is split equally
between Remote 1 and Remote 2 until the bandwidth is exhausted. This results in Remote 1
receiving 825 Kbps of CIR and Remote 2 receiving 550 Kbps of CIR.
The tree on the right-hand side of Figure 25 shows the result of enabling Bandwidth Allocation
Fairness Relative to MODCOD for the Service Group. Each remote receives enough bandwidth
to carry 660 Kbps CIR. To accomplish this, Remote 2 must be granted 1.5 times the satellite
bandwidth of Remote 1.
Application Throughput
Application throughput depends on traffic being properly classified and prioritized for QoS
and on available bandwidth being properly managed. For example, if a VoIP application
requires 16 Kbps and a remote is only given 10 Kbps, the application fails regardless of
priority, since there is not enough available bandwidth.
Bandwidth assignment is controlled by the Protocol Processor. As a result of the various
network topologies (for example, a shared TDM downstream with a deterministic TDMA
upstream), the Protocol Processor has different mechanisms for downstream control versus
upstream control. Downstream control of bandwidth is provided by continuously evaluating
network traffic flow and assigning bandwidth to remotes as needed. The Protocol Processor
assigns bandwidth and controls the transmission of packets for each remote according to the
QoS parameters defined for the remote’s downstream.
Upstream bandwidth is requested continuously with each TDMA burst from each remote. A
centralized bandwidth manager integrates the information contained in each request and
produces a TDMA burst time plan which assigns individual bursts to specific remotes. The
burst time plan is produced once per TDMA frame (typically 125 ms or 8 times per second).
Note: There is a 250 ms delay between the time that the remote makes a request for
bandwidth and the time that the Protocol Processor transmits the burst time plan to it.
iDirect has developed a number of features to address the challenges of providing adequate
bandwidth for a given application. These features are discussed in the sections that follow.
QoS Properties
There are several QoS properties that you can configure based on your traffic throughput
requirements. These are discussed in the sections that follow. For information on configuring
these properties, see the chapter titled, “Configuring Quality of Service for iDirect Networks”
of the iBuilder User Guide.
Static CIR
You can configure a static Committed Information Rate (CIR) or an upstream minimum
information rate for any upstream (TDMA) channel. Static CIR is bandwidth that is guaranteed
even if the remote does not need the capacity. By default, a remote is configured with a
single slot per TDMA frame. Increasing this value is an inefficient configuration because
these slots are wasted if the remote is inactive. No other remote can be given these
slots unless the remote with the static CIR has not been acquired into the network. A static
CIR is treated as the highest priority upstream bandwidth. Static CIR only applies in the
upstream direction. The downstream does not need or support the concept of a static CIR.
Dynamic CIR
You can configure Dynamic CIR values for remotes in both the downstream and upstream
directions. Dynamic CIR is not statically committed and is granted only when demand is
actually present. This allows you to support CIR based service level agreements and, based on
statistical analysis, oversubscribe networks with respect to CIR. If a remote has a CIR but
demand is less than the CIR, only the actual demanded bandwidth is granted. It is also
possible to indicate that only certain QoS service levels “trigger” a CIR request. In these
cases, traffic must be present in a triggering service level before the CIR is granted. Triggering
is specified on a per-service level basis.
By default, additional burst bandwidth is assigned evenly among all remotes requesting
bandwidth. All available burstable bandwidth (BW) is equally divided between all remotes
requesting additional BW, regardless of already allocated CIR.
Previously, a remote in a highly congested network would often not get burst bandwidth
above its CIR. For example, consider a network with a 3 Mbps upstream and three remotes,
R1, R2, and R3. R1 and R2 are assigned a CIR of 1 Mbps each and R3 has no CIR. In older
releases, if all remotes requested 2 Mbps each, 1 Mbps was given to R3, making the total used
BW 3 Mbps. In that case, R1 and R2 received no additional BW.
Using the same example network, the additional 1 Mbps BW is evenly distributed by giving
each remote an additional 333 Kbps. The default configuration is to allow even bandwidth
distribution.
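The example above can be expressed as a short sketch of the default even distribution. The
function and remote names are hypothetical and the allocator is greatly simplified.

def distribute_burst(total_kbps, cir_kbps, demand_kbps):
    """Grant each remote its CIR (up to its demand), then split the leftover
    burst bandwidth evenly among all remotes still wanting more, regardless of CIR."""
    grants = {r: min(cir_kbps.get(r, 0), demand_kbps[r]) for r in demand_kbps}
    leftover = total_kbps - sum(grants.values())
    wanting = [r for r in demand_kbps if demand_kbps[r] > grants[r]]
    for r in wanting:
        grants[r] += leftover / len(wanting)
    return grants

print(distribute_burst(3000, {"R1": 1000, "R2": 1000}, {"R1": 2000, "R2": 2000, "R3": 2000}))
# -> roughly {'R1': 1333, 'R2': 1333, 'R3': 333}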
Using Group QoS, you can alter the “fairness” algorithm used to apportion both the CIR
bandwidth and the best-effort bandwidth. “Allocation Fairness Relative to CIR” and
“Allocation Fairness Relative to MODCOD” can be selected at various levels of the group QoS
tree.
Further information and QoS configuration procedures can be found in the chapter titled,
“Configuring Quality of Service for iDirect Networks” of the iBuilder User Guide.
Reducing a remote’s minimum statically committed CIR increases ramp latency.
Ramp latency is the amount of time it takes a remote to acquire the necessary bandwidth.
The lower the upstream static CIR, the fewer TDMA time plans contain a burst dedicated to
that remote, and the greater the ramp latency. Some applications may be sensitive to this
latency, resulting in a poor user experience. iDirect recommends that this feature be
used with care. The iBuilder GUI enforces a minimum of one slot per remote every two
seconds. For more information, see the section titled “Upstream and Downstream Rate
Shaping” in the chapter titled “Configuring Remotes” of the iBuilder User Guide.
Sticky CIR
Sticky CIR is activated only when CIR is over-subscribed on the downstream or on the
upstream. When enabled, Sticky CIR favors remotes that have already received their CIR over
remotes that are currently asking for it. When disabled (the default setting), the Protocol
Processor reduces assigned bandwidth to all remotes to accommodate a new remote in the
network. Sticky CIR can be configured in the Bandwidth Group and Service Group level
interfaces in iBuilder.
Application Jitter
Jitter is the variation of latency on a packet-by-packet basis of application traffic. For an
application like VoIP, the transmitting equipment spaces each packet at a known fixed interval
(every 20 ms, for example). However, in a packet switched network, there is no guarantee
that the packets will arrive at their destination with the same interval rate. To compensate
for this, the receiving equipment employs a jitter buffer that attempts to play out the
arriving packets at the desired perfect interval rate. To do this it must introduce latency by
buffering packets for a certain amount of time and then playing them out at the fixed
interval.
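As a simple illustration of the jitter buffer concept (not of any iDirect implementation),
the following sketch schedules jittered arrivals for playout at a fixed 20 ms interval after
an assumed 60 ms buffering delay; both values are illustrative.

def playout_times(arrival_ms, interval_ms=20, buffer_ms=60):
    """Given packet arrival times, return the times each packet is played out.
    Packet i is scheduled at first_arrival + buffer + i * interval; a real jitter
    buffer would treat packets arriving after their slot as lost."""
    base = arrival_ms[0] + buffer_ms
    return [base + i * interval_ms for i in range(len(arrival_ms))]

arrivals = [0, 23, 38, 65, 79]            # jittered arrivals from the network
print(playout_times(arrivals))            # [60, 80, 100, 120, 140] - even spacing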
While jitter plays a role in both downstream and upstream directions, a TDMA network tends
to introduce more jitter in the upstream direction. This is due to the discrete nature of the
TDMA time plan where a remote may only burst in an assigned slot. The inter-slot times
assigned to a particular remote do not match the desired play out rate, which results in jitter.
Another source of jitter is other traffic that a node transmits between (or in front of)
successive packets in the real-time stream. In situations where a large packet needs to be
transmitted in front of a real-time packet, jitter is introduced because the node must wait
longer than normal before transmission.
The iDirect system offers features that limit the effect of such problems; these features are
described in the sections that follow.
Packet Segmentation
Beginning with iDS Release 8.2, Segmentation and Reassembly (SAR) and Packet Assembly and
Disassembly (PAD) have been replaced by a more efficient iDirect application. Although you
can continue to configure the downstream segment size in iBuilder, all upstream packet
segmentation is handled internally and optimized automatically.
You may wish to change the downstream segment size if you have a small outbound carrier
and need to reduce jitter in your downstream packets. Typically, this is not required. For
details on configuring the downstream segment size, see the chapter on “Configuring
Remotes” in the iBuilder User Guide.
Application Latency
Application latency is typically a concern for transaction-based applications such as credit
card verification systems. For applications like these, it is important that the priority traffic
be expedited through the system and sent, regardless of the less important background
traffic. This is especially important in bandwidth-limited conditions where a remote may only
have a single or a few TDMA slots. In this case, it is important to minimize latency as much as
possible after the distributor’s QoS decision. This allows a highly prioritized packet to make
its way immediately to the front of the transmit queue.
During acquisition, the iNFINITI remote attempts to join the network according to the burst
plan assigned to the remote by the hub. The initial transmit power must be set correctly so
that the remote can join the network and stay in the network. This chapter describes the best
practices for setting Transmit (TX) Initial Power in an iDirect network.
At any time after site commissioning, you can check the TX Initial Power setting by observing
the Remote Status and UCP tabs in iMonitor. If the remote modem is in a “steady state” and
no power adjustments are being made, you can compare the current TX Power to the TX
Initial Power parameter to verify that TX Initial Power is 3 dB higher than the TX Power. For
detailed information on how to set TX Initial Power, refer to the “Remote Installation and
Commissioning Guide”.
Note: Best nominal Tx Power measurements are made during clear sky conditions at
the hub and remote sites.
Ideal Case: Optimal Detection Range. Under ideal circumstances, the average C/N of all
remotes on the upstream channel is equal to the center of the UCP adjustment range.
Therefore the optimal detection range extends to below the threshold C/N. (This example
illustrates the TPC Rate 0.66 threshold.)
TX Initial Power Too High: Skewed Detection Range. When the TX Initial Power is set too
high, remotes entering the network skew the average C/N to be above the center of the UCP
Adjustment Range. Therefore, during this period the optimal detection range does not include
the threshold C/N, and remotes experiencing rain fade may experience a performance
degradation.
Bursts can still be detected below threshold but the probability of detection and
demodulation reduces. This can lead to long acquisition times (Figure 28).
TX Initial Power Too Low: Skewed Detection Range. When the TX Initial Power is set too low,
remotes entering the network skew the average C/N to be below the center of the UCP
Adjustment Range. This could cause remotes coming in at the higher end (e.g. 14 dB) to
experience some distortion in the demodulation process. Additionally, a remote acquiring at a
low C/N (below threshold) experiences a large number of CRC errors when it enters the
network until its power is increased.
This chapter describes how the Global NMS works in a global architecture and presents a
sample Global NMS architecture.
In this example, there are four different networks connected to three different Regional
Network Control Centers (RNCCs). A group of remote terminals has been configured to roam
among the four networks.
Note: This diagram shows only one example from the set of possible network
configurations. In practice, there may be any number of RNCCs and any number of
protocol processors at each RNCC.
On the left side of the diagram, a single NMS installed at the Global Network Control Center
(GNCC) manages all the RNCC components and the group of roaming remotes. Network
operators, both remote and local, can share the NMS server simultaneously with any number
of VNOs. (Only one VNO is shown in Figure 30.) All users can run iBuilder, iMonitor, or both
on their PCs.
The connection between the GNCC and each RNCC must be a dedicated high-speed link.
Connections between NOC stations and the NMS server are typically standard Ethernet.
Remote NMS connections are made either over the public Internet protected by a VPN, port
forwarding, or a dedicated leased line.
This chapter describes basic recommended security measures to ensure that the NMS and
Protocol Processor servers are secure when connected to the public Internet. iDirect
recommends that you implement additional security measures over and above these minimal
steps.
Root Passwords
Root password access to the NMS and Protocol Processor servers should be reserved for only
those you want to have administrator-level access to your network. Restrict the distribution
of this password information.
Servers are shipped with default passwords. Change the default passwords after the
installation is complete and make sure these passwords are changed on a regular basis and
when an employee leaves your company.
When selecting your new passwords, iDirect recommends that you follow these practices for
constructing difficult-to-guess passwords:
• Use passwords that are at least 8 characters in length.
• Do not base passwords on dictionary words.
• Use passwords that contain a mixture of letters, numbers, and symbols.
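A minimal check against these guidelines might look like the following sketch. The word list
is a placeholder; a real deployment would check candidate passwords against a full dictionary.

import string

COMMON_WORDS = {"password", "idirect", "admin", "root"}   # illustrative only

def is_acceptable(pw):
    long_enough = len(pw) >= 8
    not_dictionary = pw.lower() not in COMMON_WORDS
    mixed = (any(c in string.ascii_letters for c in pw)
             and any(c in string.digits for c in pw)
             and any(c in string.punctuation for c in pw))
    return long_enough and not_dictionary and mixed

print(is_acceptable("Satell1te!Hub"))   # True
print(is_acceptable("password"))        # False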
This chapter describes how the Protocol Processor works in a global architecture. Specifically
it contains “Remote Distribution,” which describes how the Protocol Processor balances
remote traffic loading and “De-coupling of NMS and Data Path Components,” which describes
how the Protocol Processor Blades continue to function in the event of a Protocol Processor
Controller failure.
Remote Distribution
The actual distribution of remotes and processes across a blade set is determined by the
Protocol Processor controller dynamically in the following situations:
• At system Startup, the Protocol Processor Controller determines the distribution of
processes based on the number of remotes in the network(s).
• When a new remote is added in iBuilder, the Protocol Processor Controller analyzes the
current system load and adds the new remote to the blade with the least load.
• When a blade fails, the Protocol Processor Controller re-distributes the load across the
remaining blades, ensuring that each remaining blade takes a portion of the load.
The Protocol Processor controller does not perform dynamic load-balancing on remotes. Once
a remote is assigned to a particular blade, it remains there unless it is moved due to one of
the situations described above.
A high-level architecture of the Protocol Processor, with one possible configuration of
processes across two blades, is shown in Figure 31.
Figure 31. Protocol Processor Architecture, showing the NMS servers and the pp_controller
process spawning, monitoring, and controlling the data path processes (samnc, sarmt,
sarouter, sana) on PP Blade 1 and PP Blade 2.
This chapter describes how you can design your network through a Distributed NMS server,
manage it through iDS supporting software, and back up or restore the configuration.
You can distribute your NMS server processes across multiple server machines. The primary
benefits of machine distribution are improved server performance and better utilization of
disk space.
iDirect recommends a distributed NMS server configuration once the number of remotes being
controlled by a single NMS exceeds 500-600. iDirect has tested the new distributed platform
with over 3000 remotes with iDS 7.0.0. Future releases continue to push this number higher.
The most common distribution scheme for larger networks is shown in Figure 32.
This section describes how TRANSEC and FIPS are implemented in an iDirect Network. It
includes the following sections:
• “What is TRANSEC?” defines Transmission Security.
• “iDirect TRANSEC” describes protocol implementation.
• “TRANSEC Downstream” describes the data path from the hub to the remote.
• “TRANSEC Upstream” describes the data path from the remote to the hub.
• “TRANSEC Key Management” describes public and private key usage.
• “TRANSEC Remote Admission Protocol” describes acquisition and authentication.
• “Reconfiguring the Network for TRANSEC” describes conversion requirements.
What is TRANSEC?
Transmission Security (TRANSEC) prevents an adversary from exploiting information available
in a communications channel without necessarily having defeated the encryption inherent in
the channel. Even if an encrypted wireless transmission is not compromised, information such
as timing and traffic volumes can be determined by using basic signal processing techniques.
This information could give someone monitoring the network significant insight into unit activity. For example, even if an adversary cannot defeat the encryption placed on individual packets, it might be able to answer questions such as:
• What types of applications are currently active on the network?
• Who is talking to whom?
• Is the network or a particular remote site active now?
• Can network activity be correlated with real-world activity, based on traffic analysis?
There are a number of components to TRANSEC, one of them being activity detection. With
current VSAT systems an adversary can determine traffic volumes and communications
activities with a simple spectrum analyzer. With a TRANSEC compliant VSAT system an
adversary is presented with a strongly encrypted and constant wall of data. Other
components of TRANSEC include remote and hub authentication. TRANSEC eliminates the
ability of an adversary to bring a non-authorized remote into a secured network.
iDirect TRANSEC
iDirect achieves full TRANSEC compliance by presenting to an adversary who may be eavesdropping on the RF link a constant “wall” of fixed-size traffic segments, strongly encrypted using the Advanced Encryption Standard (AES) with a 256-bit key in Cipher Block Chaining (CBC) mode, which do not vary in frequency in response to network utilization.
Other than network messages that control the admission of a remote terminal into the
network, all portions of all packets are encrypted, and their original size is hidden. The
content and size of all user traffic (Layer 3 and above), as well as network link layer (Layer 2)
traffic is completely indeterminate from an adversary’s perspective. Further, no higher layer
information is revealed by monitoring the physical layer (Layer 1) signal.
The solution includes a remote-to-hub and a hub-to-remote authentication protocol based on
standard X.509 certificates designed to prevent man-in-the-middle attacks. This
authentication mechanism prevents an adversary’s remote from joining an iDirect TRANSEC
secured network. In a similar manner, it prevents an adversary from coercing a TRANSEC
remote into joining the adversary’s network. While these types of attacks are extremely
difficult to achieve even on a non-TRANSEC iDirect network, the mechanisms put in place for
the TRANSEC feature render them completely impossible.
Note: In this release, HiFin encryption cards are no longer required on your protocol
processor blades for TRANSEC key management.
All hub line cards and remote model types associated with a protocol processor must be TRANSEC compatible. The only iDirect hardware that operates in TRANSEC mode is the M1D1-T, M1D1-TSS, and eM1D1 Hub Line Cards; the iNFINITI 7350, 8350, and Evolution e8350 remotes; and the iConnex 700 and iConnex e800/e850mp remotes. Therefore these are the only iDirect products that are capable of operating in a FIPS 140-2 Level 1 compliant mode.
For more information, see the chapter “Converting an Existing Network to TRANSEC” of the
iBuilder User Guide.
Note: TRANSEC is not supported on DVB-S2 outbound carriers. The eM1D1 line card
only supports TRANSEC when transmitting an iDirect SCPC outbound carrier.
TRANSEC Downstream
A simplified block diagram for the iDirect TRANSEC downstream data path is shown in Figure
34. Each function represented in the diagram is implemented in software and firmware on a
TRANSEC capable line card.
Consider the diagram from left to right with variable length packets arriving on the far left
into the block named Packet Ingest. In this diagram, the encrypted path is shown as solid
black, and the unencrypted (clear) path is shown in dashed red. The Packet Ingest function
receives variable length packets which can belong to four logical classes: User Data, Bypass
Burst Time plan (BTP), Encrypted BTP, and Bypass Queue. All packets arriving at the transmit
Hub Line Card have this indication present as a pre-pended header placed there by the
protocol processor (not shown). The Packet Ingest function determines the message type and
places the packet in the appropriate queue. If the packet is not valid, it is not placed in any
queue and it is dropped.
Packets extracted from the Data Queue are always encrypted. Packets extracted from the
Clear Queue are always sent unencrypted, and time-sensitive BTP messages from the BTP
Queue can be sent in either mode. A BTP sent in the clear contains minimal traffic analysis
information for an adversary and is only utilized to allow remotes attempting to exchange
admission control messages with the hub to do so. Traffic sent in the clear bypasses the
Segmentation Engine and the AES Encryption Engine, and proceeds directly to the physical framing and
FEC engines for transmission. Clear, unencrypted packets are transmitted without regard to
segmentation; they are allowed to exist on the RF link with variable sized framing.
Encrypted traffic next enters the Segmentation Engine. The Segmentation Engine segments
incoming packets based on a configured size and provides fill-packets when necessary. The
Segmentation Engine allows the iDirect TRANSEC downstream to transmit a configurable,
fixed size TDM packet segment on a continuous basis.
After segmentation, fixed sized packets enter the Encryption Engine. The encryption
algorithm utilizes the AES algorithm with a 256 bit key and operates in CBC Mode. Packets exit
the Encryption Engine with a pre-pended header as shown in Figure 35.
The Encryption Header consists of five 32 bit words with four fields. The fields are:
• Code. This field indicates if the frame is encrypted or not, and if encrypted indicates the
entry within the key ring (described under the key management section later in this
document) to be utilized for this frame. The Code field is one byte in length.
• Seq. This field is a sequence number that increments with each segment. The Seq field is
two bytes in length (16 bits, unsigned).
• Rsvd. This field is 1 byte and is reserved for future use.
• Initialization Vector (IV). IV is utilized by the encryption/decryption algorithm and
contains random data. The IV field is 16 bytes in length (128 bits unsigned).
A new IV is generated for each segment. The first IV is generated from the cipher text of the
initial Known Answer Test (KAT) conducted at system boot time. Subsequent IVs are taken
from the last 128 bits of the cipher text of the previously encrypted segment. IVs are
continuously updated regardless of key rotations and they are independent of the key rotation
process. They are also continuously updated regardless of the presence of user traffic since
the filler segments are encrypted. While no logic is included to ensure that IVs do not repeat,
the chance of repetition is very small; estimates place the probability of an IV repeating at
1 in 2^102 at the maximum iDirect downstream data rate.
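The segment encryption and IV chaining described above can be sketched as follows. This is a conceptual illustration only, written in Python with the third-party cryptography package; the 1024-byte segment size and the random initial IV are stand-ins, not the actual line card firmware values.

    # Conceptual sketch: AES-256 in CBC mode, where each segment's IV is the
    # last cipher block of the previously encrypted segment.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    SEGMENT_SIZE = 1024              # fixed, configurable size (hypothetical value)
    key = os.urandom(32)             # 256-bit AES key
    iv = os.urandom(16)              # stand-in for the KAT-derived initial IV

    def encrypt_segment(segment, key, iv):
        """Encrypt one fixed-size segment and return (ciphertext, next_iv)."""
        assert len(segment) == SEGMENT_SIZE
        encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        ciphertext = encryptor.update(segment) + encryptor.finalize()
        return ciphertext, ciphertext[-16:]   # last cipher block seeds the next IV

    # Filler segments keep the IV chain (and the constant RF "wall of data")
    # moving even when no user traffic is present.
    for _ in range(3):
        segment = os.urandom(SEGMENT_SIZE)    # user data or filler
        ciphertext, iv = encrypt_segment(segment, key, iv)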
The Segment is of fixed, configurable length and consists of a series of fixed length Fragment
Headers (FH) followed by variable length data Fragments (F). The entire Segment is
encrypted in a single operation by the encryption engine. The FH contains sufficient
information for the source packet stream, post decryption on the receiver, to be
reconstructed. Each Fragment contains a portion of a source packet.
The Encryption Header is transmitted unencrypted but contains only enough information for a
receiver to decrypt the segment if it is in possession of the symmetric key.
Once an encrypted packet exits the Encryption Engine it undergoes normal processing such as framing and forward error correction coding. These functions are essentially independent of TRANSEC but complete the downstream transmission chain and are thus depicted in Figure 34.
TRANSEC Upstream
A simplified block diagram for the iDirect TRANSEC upstream data path is shown in Figure 36.
The functions represented in this diagram are implemented in software and firmware on a
TRANSEC capable remote.
The encrypted path is shown in solid black, and the unencrypted (clear) path is shown in
dashed red. The Packet Ingest function determines the message type and places the packet in
the appropriate queue or drops it if it is not valid.
Consider the diagram from left to right with variable length packets arriving on the far left
into the block named Packet Ingest. The upstream (remote to hub) path differs from the
downstream (hub to remote) in that the upstream is configured for TDMA. Variable length
packets from a remote LAN are segmented in software, and can be considered as part of the
Packet Ingest function. Therefore there is no need for the firmware level segmentation
present in the downstream. Additionally, since the remote is not responsible for the
generation of BTPs, there is no need for the additional queues present in the downstream.
Packets extracted from the Data Queue are always encrypted. Packets extracted from the Clear Queue are always sent unencrypted. The overwhelming majority of traffic is extracted from the Data Queue. Traffic sent in the clear bypasses the Encryption Engine and proceeds directly to the FEC engine for transmission.
The encryption algorithm utilizes the AES algorithm with a 256 bit key and operates in CBC Mode. Packets exit the Encryption Engine with a pre-pended header as shown in Figure 37.
Note: TRANSEC overhead reduces the payload size shown in Table 3 on page 22 by the following amounts for each FEC rate: .431: 7 bytes; .533: 4 bytes; .660: 4 bytes; .793: 6 bytes.
The Encryption Header consists of a single 32 bit word with three fields. The fields are:
• IV Seed. This 29 bit field is utilized to generate a 128 bit IV. The IV Seed starts at zero and increments for each transmitted burst. The full 128 bit IV is generated by padding the seed and encrypting it with the current AES key for the inroute. Remotes can therefore expand the same seed into the same full IV. However, this does not create any problems because, due to addressing requirements, it is impossible for any two remotes within the same upstream to generate the same plain text data. While no logic is included to ensure that IVs do not repeat for a single terminal, repetition is prevented in practice because the key rotates every two hours by default. Since the seed increments for each transmission burst, the number of total bursts prior to the seed wrapping around is 2^29, or 536,870,912. Given the two-hour key rotation period, a single terminal would need to send nearly 75,000 TDMA bursts per second to exhaust the range of the seed. This exceeds any possible iDirect upstream data rate by far.
• Key ID. This field indicates the entry within the key ring (described under the key management section later in this document) to be utilized for this frame.
• Enc. This field indicates whether or not the frame is encrypted.
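The IV Seed expansion described above can be sketched as follows. This is a conceptual illustration only; the exact padding layout of the 29-bit seed within the 128-bit block is an assumption.

    # Conceptual sketch: the 29-bit IV Seed is zero-padded to one 128-bit block
    # and passed through AES with the current inroute key to produce the burst IV.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)                       # current 256-bit inroute key

    def iv_from_seed(seed, key):
        """Expand a 29-bit seed into a 128-bit IV by encrypting one AES block."""
        assert 0 <= seed < 2**29
        block = seed.to_bytes(16, "big")       # zero-padded 128-bit block (assumed layout)
        encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return encryptor.update(block) + encryptor.finalize()

    seed = 0
    for burst in range(3):
        iv = iv_from_seed(seed, key)
        seed = (seed + 1) % 2**29              # seed increments per burst, wraps at 2^29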
The Segment is of fixed, configurable length and consists of what we might call the standard iDirect TDMA frame. A detailed description of the standard frame is beyond the scope of this document; in general terms, it consists of a Demand Header, which indicates the amount of bandwidth a remote is requesting, the iDirect Link Layer (LL) Header, and finally the actual Payload. This Segment is encrypted. The Encryption Header is transmitted unencrypted but contains only enough information for a receiver to decrypt the segment if it is in possession of the symmetric key.
Once an encrypted packet exits the Encryption Engine it undergoes normal processing such as forward error correction coding. This function is essentially independent of TRANSEC but completes the upstream transmission chain, as shown in Figure 36.
A remote will always burst in its assigned slots even when traffic is not present by generating
encrypted fill payloads as needed. The iDirect Hub dynamic allocation algorithm will always
operate in a mode whereby all available time slots within all time plans are filled.
TRANSEC Key Management
The Key Distribution Protocol assumes that, upon receipt of a certificate from a peer, the host is able to validate the certificate and establish a chain of trust based on its contents. iDirect TRANSEC utilizes standard X.509 certificates and methodologies to verify the peer’s certificate.
After the completion of the sequence shown in Figure 38, a peer may provide a key update
message again in an unsolicited fashion as needed. The data structure utilized to complete
key update (also called a key roll) is shown in Figure 39.
This data structure conceptually consists of a set of pointers (Current, Next, Fallow), a two
bit identification field (utilized in the Encryption Headers described above), and the actual
symmetric keys themselves. A key update consists of generating a new key, placing it in the
last fallow slot just prior to the Current pointer, updating the next pointers (circular update
so 11 rolls to 00) and current pointers and generating a Key Update message reflecting these
changes. The key roll mechanism allows for multiple keys to be “in play” simultaneously so
that seamless key rolls can be achieved. By default the iDirect TRANSEC solution rolls any
symmetric key every two hours, but this is a user configurable parameter. The iDirect Host
Keying Protocol is shown in Figure 40.
This protocol describes how hosts are originally provided an X.509 certificate from a
Certificate Authority. iDirect provides a Certificate Authority Foundry module with its
TRANSEC hub. Host key generation is done on the host in all cases.
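The key ring and key roll operation described above can be illustrated with a short sketch. This is a simplified illustration only, assuming a four-slot ring addressed by the two-bit ID field; the handling of the Next and Fallow pointers and the distribution of the Key Update message are condensed.

    # Simplified sketch of a four-slot key ring keyed by the two-bit ID carried
    # in the Encryption Header.
    import os

    ring = [None, None, None, None]     # slots addressed by the 2-bit key ID
    current = 0
    ring[current] = os.urandom(32)      # initial symmetric key

    def roll_key():
        """Place a new key in the fallow slot just prior to Current, then advance."""
        global current
        fallow = (current - 1) % 4      # circular update: ID 0b11 rolls over to 0b00
        ring[fallow] = os.urandom(32)
        current = fallow                # new key becomes Current
        return current                  # ID advertised in subsequent headers

    # Keys already "in play" stay in the ring during the roll, so remotes that
    # have not yet processed the Key Update can still decrypt with the old ID.
    for _ in range(3):
        roll_key()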
TRANSEC Remote Admission Protocol
To admit new remotes, the Protocol Processor generates two burst time plans: an encrypted time plan for acquired remotes, and an unencrypted time plan that specifies the acquisition slot and which remotes may burst in the clear (unencrypted) on selected slots. The union of the two time plans covers all slots in all inroutes.
The time plans are then forwarded and broadcast to all remotes in the normal method.
Remotes that are not yet acquired receive the unencrypted time plan and wait for an
invitation to join the network via this unencrypted message.
The remote designated in the acquisition slot acquires in the normal fashion by sending an
unencrypted response in the acquisition slot of a specific inroute.
Once physical layer acquisition occurs, the remote must follow the key distribution protocol before the network trusts it, and before it can trust the network it is joining. This step must be carried out in the clear. Therefore, remotes in this state request bandwidth normally and are granted unencrypted TDMA slots. The hub and remotes exchange key negotiation messages in the cleartext channel. Three message types exist:
• Solicitations, which are used to synchronize, request, inform, and acknowledge a peer.
• Certificate Presentations, which contain X.509 certificates.
• Key Updates, which contain AES key information that is signed and RSA encrypted; the
RSA encryption is accomplished by using the remote’s public key and the signature is
created by using the hub’s private key.
After authentication, the key update message must also be completed in the clear. The actual
symmetric keys are encrypted using the remote’s public key information obtained in the
exchanged certificate. Once the symmetric key is exchanged, the remote enters the network
as a trusted entity, and begins normal operation in an encrypted mode.
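The Key Update protection described above (signed with the hub’s private key, RSA encrypted with the remote’s public key) can be sketched as follows. The padding schemes and message layout shown here are illustrative assumptions, not the actual iDirect wire format.

    # Illustrative sketch: wrap a new symmetric key for one remote and sign it.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    hub_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    remote_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    aes_key = os.urandom(32)                                  # new symmetric key
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    wrapped = remote_key.public_key().encrypt(aes_key, oaep)  # only the remote can unwrap
    signature = hub_key.sign(wrapped, pss, hashes.SHA256())   # remote verifies hub origin

    # Remote side: verify the hub's signature, then recover the symmetric key.
    hub_key.public_key().verify(signature, wrapped, pss, hashes.SHA256())
    recovered = remote_key.decrypt(wrapped, oaep)
    assert recovered == aes_key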
The Fast Acquisition feature reduces the average acquisition time for remotes, particularly in
large networks with hundreds or thousands of remotes. The acquisition messaging process
used in prior versions is included in this release. However, the Protocol Processor now makes
better use of the information available regarding hub receive frequency offsets common to all
remotes to reduce the overall network acquisition time. No additional license is required for this feature.
Feature Description
Fast Acquisition is configured on a per-remote basis. When a remote is attempting to acquire
the network, the Protocol Processor determines the frequency offset at which a remote
should transmit and conveys it to the remote in a time plan message. From the time plan
message, the remote learns when to transmit and at what frequency offset. The remote
transmit power level is configured in the option file. Based on the time plan message, the
remote calculates the correct Frame Start Delay (FSD). The fundamental aspects of
acquisition are how often a remote gets an opportunity to come into the network, and how
many frequency offsets need to be tried for each remote before it acquires the network.
If a remote can acquire the network more quickly by trying fewer frequency offsets, the number of remotes that are out of the network at any one time is reduced, which in turn increases how often other remotes get a chance to acquire. This feature reduces the number of frequency offsets that must be tried for each remote.
By using a common hub receive frequency offset, the fast acquisition algorithm can determine
an anticipated range smaller than the complete frequency sweep space configured for each
remote. As the common receive frequency offset is updated and refined, the sweep window is
reduced.
If an acquisition attempt fails within the reduced sweep window, the sweep window is
widened to include the entire sweep range. Fast Acquisition is enabled by default. You can
disable it by applying a custom key.
For a given ratio x:y, the hub informs the remote to acquire using the smaller frequency offset
range calculated based on the Fast Acquisition scheme. After x attempts, the remote sweeps the entire range y times before returning to the narrower acquisition range.
The default ratio is 100:1. That is, try 100 frequency offsets within the reduced (common)
range before resorting to one full sweep of the remote’s frequency offsets.
If you want to modify the ratio, use the following custom keys to override the defaults. You must apply the custom keys on the hub side for each remote in the network. A sketch of the resulting sweep order follows the key listing below.
[REMOTE_DEFINITION]
sweep_freq_fast = 100
sweep_freq_entire_range = 1
sweep_method = 1 (Fast Acquisition enabled)
sweep_method = 0 (Fast Acquisition disabled)
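The x:y sweep ratio can be illustrated with this short sketch. The frequency windows shown are hypothetical values; the real acquisition algorithm derives the reduced window from the common hub receive frequency offset.

    # Sketch of the x:y sweep ratio (default 100:1): x attempts in the reduced
    # common window, then y sweeps of the full configured range, repeated.
    def sweep_schedule(reduced_range, full_range, x=100, y=1):
        """Yield (attempt_number, frequency_window) pairs in fast-acquisition order."""
        attempt = 0
        while True:
            for _ in range(x):                 # narrow window derived from the
                attempt += 1                   # common hub receive offset
                yield attempt, reduced_range
            for _ in range(y):                 # fall back to the entire sweep range
                attempt += 1
                yield attempt, full_range

    schedule = sweep_schedule(reduced_range=(-2_000, 2_000),     # Hz, hypothetical
                              full_range=(-25_000, 25_000))
    for attempt, window in schedule:
        if attempt > 102:                      # stop the demo after one full cycle
            break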
Fast Acquisition cannot be used on 3100 series remotes when the upstream symbol rate is less
than 260 Ksym/s. This is because the FLL on 3100 series remotes is disabled for upstream
rates less than 260 Ksym/s.
The NMS disables Fast Acquisition for any remote that is enabled for an iDirect Music Box and
for any remote that is not configured to utilize the 10 MHz reference clock. In IF-only
networks, such as a test environment, the 10 MHz reference clock is not used.
The Remote Sleep Mode feature conserves remote power consumption during periods of
network inactivity. This chapter explains how Remote Sleep Mode is implemented. It includes
the following sections:
• “Feature Description” on page 79
• “Awakening Methods” on page 80
• “Enabling Remote Sleep Mode” on page 80
• “Power Consumption” on page 81
Note: Not all versions of iDX 2.0 support Sleep Mode. This feature requires iDX 2.0.1 or later.
Feature Description
Remote Sleep Mode is supported on all iNFINITI and Evolution series remotes. In this mode, the BUC is powered down, thus reducing power consumption.
When Sleep Mode is enabled on the iBuilder GUI for a remote, the remote enters Remote
Sleep Mode after a configurable period elapses with no data to transmit. By default, the
remote exits Remote Sleep Mode whenever packets arrive on the local LAN for transmission on
the inbound carrier.
Note: You can use the powermgmt mode set sleep console command to enable Remote Sleep Mode, or powermgmt mode set wakeup to disable it.
The stimulus for a remote to exit sleep mode is also configurable in iBuilder. You can select which types of traffic automatically “trigger wakeup” on the remote by selecting or clearing a check box for any of the QoS service levels used by the remote. If no service levels are configured to trigger wakeup, you can manually force the remote to exit sleep mode by disabling sleep mode on the remote configuration screen.
Until a remote enters sleep mode, the protocol processor continues to allocate traffic slots (including minimum CIR) to it. Before entering sleep mode, the remote notifies the NMS, and the real-time state of the remote is updated in iMonitor. Once the remote enters sleep mode, as far as the protocol processor is concerned, the remote is out of the network.
Therefore, no traffic slots are allocated to the remote while it is in sleep mode. When the
remote receives traffic that triggers wakeup, the remote returns to the network and traffic
slots are allocated as normal by the protocol processor.
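The sleep and wakeup behavior described above can be summarized with a short sketch. The idle timeout value and the service level names are hypothetical; in the real system, wakeup triggers are configured per QoS service level in iBuilder.

    # Sketch of the sleep/wakeup decision: enter sleep after an idle period,
    # exit when a packet arrives on a service level configured to trigger wakeup.
    IDLE_TIMEOUT = 60.0                        # configurable idle period, seconds (hypothetical)

    def next_state(state, idle_seconds, packet_service_level, wakeup_levels):
        if state == "awake" and idle_seconds >= IDLE_TIMEOUT:
            return "sleep"                     # BUC powered down, out of the network
        if state == "sleep" and packet_service_level in wakeup_levels:
            return "awake"                     # rejoin network, slots re-allocated
        return state

    assert next_state("awake", 120.0, None, {"VoIP"}) == "sleep"
    assert next_state("sleep", 0.0, "VoIP", {"VoIP"}) == "awake"
    assert next_state("sleep", 0.0, "Bulk", {"VoIP"}) == "sleep"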
Awakening Methods
There are two methods by which a remote is “awakened” from Sleep Mode: “Operator-Commanded Awakening” and “Activity-Related Awakening”.
Operator-Commanded Awakening
With Operator-Commanded Awakening, you can manually force a remote into Remote Sleep Mode and subsequently “awaken” it via the NMS. This can be done remotely from the Hub since
the remote continues to receive the downstream while in sleep mode.
Note: When Sleep Mode is enabled, a remote with RIP enabled will always advertise
the satellite route as available on the local LAN, even if the satellite link is
down. Therefore, the Sleep Mode feature is not compatible with configurations
that rely on the ability of the local router to detect loss of the satellite link.
To enable Remote Sleep Mode, see the chapter on configuring remotes in the iBuilder User
Guide. To configure service level based wake up, see the QoS Chapter in the iBuilder User
Guide.
Power Consumption
Power consumed by typical remote terminals during both normal operation and sleep mode is
shown in Table 8.
Table 8. Power Consumption: Normal Operations vs. Remote Sleep Mode
This section contains information pertaining to Automatic Beam Selection (ABS) for roaming
remotes in a maritime environment.
Theory of Operation
Since the term “network” is used in many ways, the term “beam” is used rather than the
term “network” to refer to an outroute and its associated inroutes.
ABS is built on iDirect’s existing mobile remote functionality. When a modem is in a particular
beam, it operates as a traditional mobile remote in that beam.
In a maritime environment, a roaming remote terminal consists of an iDirect modem and a
controllable, steerable, stabilized antenna. The ABS software in the modem can command the
antenna to find and lock to any satellite. Using iBuilder, you can define an instance of the
remote in each beam that the modem is permitted to use. You can also configure and monitor
all instances of the remote as a single entity. The remote options file (which conveys
configuration parameters to the remote from the NMS) contains the definition of each of the
remote’s beams. Options files for roaming remotes, called “consolidated” options files, are
described in detail in the iBuilder User Guide.
As a vessel moves from the footprint of one beam into the footprint of another, the remote
must shift from the old beam to the new beam. Automatic Beam Selection enables the remote
to select a new beam, decide when to switch, and to perform the switch-over, without human
intervention. ABS logic in the modem reads the current location from the antenna and decides
which beam will provide optimal performance for that location. This decision is made by the
remote, rather than by the NMS, because the remote must be able to select a beam even if it
is not communicating with the network.
To determine the best beam for the current location, the remote relies on a beam map file
that is downloaded from the NMS to the remote and stored in memory. The beam map file is a
large data file containing beam quality information for each point on the Earth's surface as
computed by the satellite provider. Whenever a new beam is required by remotes using ABS,
the satellite provider must generate new map data in a pre-defined format referred to as a
“conveyance beam map file.” iDirect provides a utility that converts the conveyance beam
map file from the satellite provider into a beam map file that can be used by the iDirect
system.
Note: In order to use the iDirect ABS feature, the satellite provider must enter into an
agreement with iDirect to provide the beam map data in a specified format.
By default, a remote modem always attempts to join any beam included in the beam map file
if that beam is determined to be the best choice available. This includes beams with a quality
value of zero for the remote’s current location. Beginning with iDX Release 2.0.1, you can
configure a custom key for your remotes so that they never attempt to join a beam if the
quality of the beam at the current location is zero. See the Automatic Beam Selection
appendix of the iBuilder User Guide for instructions on configuring the custom key.
The iDirect NMS software consists of multiple server applications. One such server application, known as the map server, manages the iDirect beam maps for remotes in its
networks. The map server reads the beam maps and waits for map requests from remote
modems.
A modem has a limited amount of non-volatile storage, so it cannot save an entire map of all
beams. Instead, the remote asks the map server to send a map of a smaller area (called a
beam “maplet”) that encompasses its current location. When the vessel nears the edge of its
current maplet, the remote asks for another beam maplet centered on its new location. The
geographical size of these beam maplets varies in order to keep the file size approximately
constant. A beam maplet typically covers a 1000 km square.
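The beam selection decision described above can be sketched as follows. The maplet structure shown here is hypothetical; the sketch simply picks the highest-quality beam at the current location and optionally skips zero-quality beams (the iDX 2.0.1 custom key behavior).

    # Sketch of best-beam selection from a maplet of per-beam quality values.
    def select_beam(maplet, lat, lon, skip_zero_quality=False):
        """Return the beam id with the best quality at (lat, lon), or None."""
        best_beam, best_quality = None, 0 if skip_zero_quality else -1
        for beam_id, quality_lookup in maplet.items():
            quality = quality_lookup(lat, lon)
            if quality > best_quality:
                best_beam, best_quality = beam_id, quality
        return best_beam

    # Toy maplet: two beams with constant quality over the maplet area.
    maplet = {"beam_east": lambda lat, lon: 7, "beam_west": lambda lat, lon: 0}
    assert select_beam(maplet, 10.0, -45.0) == "beam_east"
    assert select_beam(maplet, 10.0, -45.0, skip_zero_quality=True) == "beam_east"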
A beam can become unusable for a variety of reasons; any such failure ultimately results in the inability of the remote to communicate with the outside world using the beam. Therefore the only usability check is based on the “layer 3 state” of the satellite link; that is, whether or not the remote can exchange IP data with the upstream router.
Examples of causes that might result in a beam becoming unusable include:
• The NMS operator disables the modem instance.
• A Hub Line Card fails with no available backup.
• The Protocol Processor fails with no backup.
• A component in the upstream or downstream RF chain fails.
• The satellite fails.
• The beam is reconfigured.
• The remote cannot lock to the downstream carrier.
• The receive line card stops receiving the modem.
Anything that causes the remote to inhibit its transmitter causes the receive line card to stop
receiving the modem, which eventually causes Layer 3 to fail. The modem stops transmitting
if it loses downstream lock. A mobile remote will also stop transmitting under the following
conditions:
• The remote has not acquired and no GPS information is available.
• The remote antenna declares loss-of-lock.
• The antenna declares a blockage.
The ABS software supports the following antenna controller interfaces:
• Orbit-Marine AL-7104
• Schlumberger SpaceTrack 4000
• SeaTel DAC
• Open AMIP
A steerable, stabilized antenna must know its geographical location in order to point to the satellite. The antenna includes a GPS receiver for this purpose. The remote must also know its
geographical location to select the correct beam and to compute its distance from the
satellite. The remote periodically commands the antenna controller to send the current
location to the modem.
IP Mobility
Communications to the customer intranet (or to the Internet) are automatically re-
established after a beam switch-over. The process of joining the network after a new beam is
selected uses the same internet routing protocols that are already established in the iDirect
system. When a remote joins a beam, the Protocol Processor for that beam begins advertising
the remote's IP addresses to the upstream router using the RIP protocol. When a remote
leaves a beam, the Protocol Processor for that beam withdraws the advertisement for the
remote's IP addresses. When the upstream routers see these advertisements and withdrawals,
they communicate with each other using the appropriate IP protocols to determine their
routing tables. This permits other devices on the Internet to send data to the remote over the
new path with no manual intervention.
Operational Scenarios
This section presents a series of top-level operational scenarios that can be followed when
configuring and managing iDirect networks that contain roaming remotes using Automatic
Beam Selection. Steps for configuring network elements such as iDirect networks (beams) and
roaming remotes are documented in the iBuilder User Guide. Steps specific to configuring ABS
functionality, such as adding an ABS-capable antenna or converting a conveyance beam map
file, are described in “Appendix C, Configuring Networks for Automatic Beam Selection” of
the iBuilder User Guide.
6. The customer orders and installs all required equipment and an NMS.
7. The NMS operator configures the beams (iDirect networks).
8. The NMS operator runs the conversion program to create the server beam map file from
the conveyance beam map file or files.
9. The NMS operator runs the map server as part of the NMS.
Adding a Vessel
This scenario outlines the steps required to add a roaming remote using ABS to all available
beams.
1. The NMS operator configures the remote modem in one beam.
2. The NMS operator adds the remote to the remaining beams.
3. The NMS operator saves the modem's options file and delivers it to the installer.
4. The installer installs the modem aboard a ship.
5. The installer copies the options file to the modem using iSite.
6. The installer manually selects a beam for commissioning.
7. The modem commands the antenna to point to the satellite.
8. The modem receives the current location from the antenna.
9. The installer commissions the remote in the initial beam.
10. The modem enters the network and requests a maplet from the NMS map server.
11. The modem checks the maplet. If the commissioning beam is not the best beam, the
modem switches to the best beam as indicated in the maplet. This beam is then assigned
a high preference rating by the modem to prevent the modem from switching between
overlapping beams of similar quality.
12. Assuming center beam in clear sky conditions:
13. The installer sets the initial transmit power to 3 dB above the nominal transmit power.
14. The installer sets the maximum power to 6 dB above the nominal transmit power.
Note: Check the levels the first time the remote enters each new beam and adjust the
transmit power settings if necessary.
Normal Operations
This scenario describes the events that occur during normal operations when a modem is
receiving map information from the NMS.
1. The ship leaves port and travels to the next destination.
2. The modem receives the current location from the antenna every five minutes.
3. While in the beam, the antenna automatically tracks the satellite.
4. As the ship approaches the edge of the current maplet, the modem requests a new
maplet from the map server.
5. When the ship reaches a location where the maplet shows a better beam, the remote
switches by doing the following:
a. Computes the best beam.
b. Saves the best beam to non-volatile storage.
c. Reboots.
d. Reads the new best beam from non-volatile storage.
e. Commands the antenna to move to the correct satellite and beam.
f. Joins the new beam.
Mapless Operations
This scenario describes the events that occur during operations when a modem is not
receiving beam mapping information from the NMS.
1. While operational in a beam, the remote periodically asks the map server for a maplet.
The remote does not attempt to switch to a new beam unless one of the following
conditions is true:
a. The remote drops out of the network.
b. The remote receives a maplet indicating that a better beam exists.
c. The satellite drops below the minimum look elevation defined for that beam.
2. If not acquired, the remote selects a visible, usable beam based only on satellite
longitude and attempts to switch to that beam.
3. After five minutes, if the remote is still not acquired, it marks the new beam as unusable
and selects the best beam from the remaining visible, usable beams in the options file.
This step is repeated until the remote is acquired in a beam, or all visible beams are
marked as unusable.
4. If all visible beams are unusable, the remote marks them all as usable, and continues to
attempt to use each beam in a round-robin fashion as described in step 3.
Error Recovery
This section describes the actions taken by the modem under certain error conditions.
1. If the remote cannot communicate with the antenna and is not acquired into the network,
it will reboot after five minutes.
2. If the antenna is initializing, the remote waits for the initialization to complete. It will
not attempt to switch beams during this time.
This chapter describes how you can establish a primary and backup hub that are
geographically diverse. It includes the following sections:
• “Feature Description” describes how geographic redundancy is accomplished.
• “Configuring Wait Time Interval for an Out-of-Network Remote” describes how you can set
the wait period before switchover.
Feature Description
The Hub Geographic Redundancy feature builds on the previously developed Global NMS
feature and the existing dbBackup/dbRestore utility. You configure the Hub Geographic
Redundancy feature by defining all the network information for both the Primary and Backup
Teleports in the Primary NMS. All remotes are configured as roaming remotes and they are
defined identically in both the Primary and Backup Teleport network configurations.
During normal (non-failure) operations, carrier transmission is inhibited on the Backup
Teleport. During failover conditions (when roaming network remotes fail to see the
downstream carrier through the Primary Teleport NMS) you can manually enable the
downstream transmission on the Backup Teleport, allowing the remotes to automatically
(after the configured default wait period of five minutes) acquire the downstream
transmission through the Backup Teleport NMS.
iDirect recommends the following for most efficient switchover:
• A separate IP connection (at least 128 Kbps) between the Primary and Backup Teleport
NMS for database backup and restore operations. A higher rate line can be employed to
reduce this database archive time.
• The downstream carrier characteristics for the Primary and Backup Teleports MUST be
different. For example, either the FEC, frequency, frame length, or data rate values must
be different.
• On a periodic basis, backup and restore your NMS configuration database between your
Primary and Backup Teleports. See the NMS Redundancy and Failover Technical Note for
complete NMS redundancy procedures.
This chapter describes carrier bandwidth optimization and carrier spacing. It includes the
following sections:
• “Overview" describes how reducing carrier spacing increases overall available bandwidth.
• “Increasing User Data Rate" provides an example of how you can increase user data rates
without increasing occupied bandwidth.
• “Decreasing Channel Spacing to Gain Additional Bandwidth" provides an example of how
you can increase occupied bandwidth.
Overview
The Field Programmable Gate Array (FPGA) firmware uses optimized digital filtering which
reduces the amount of satellite bandwidth required for an iDirect carrier. Instead of using a
40% guard band between carriers, now the guard band may be reduced to as low as 20% on
both the broadcast Downstream channel and the TDMA Upstream. Figure 41 shows an overlay
of the original spectrum and the optimized spectrum.
This optimization translates directly into a cost savings for existing and future networks
deployed with iDirect remote modems.
The spectral shape of the carrier is not the only factor contributing to the guard band
requirement. Frequency stability parameters of a system may result in the need for a guard
band of slightly greater than 20% to be used. iDirect complies with the adjacent channel
interference specification in IESS 308 which accounts for adjacent channels on either side
with +7 dB higher power.
Be sure to consult the designer of your satellite link prior to changing any carrier parameters
to verify that they do not violate the policy of your satellite operator.
The frequency stability of the hub equipment must also be considered, because the automatic frequency control algorithm uses the hub receiver’s estimate of
frequency offset to adjust each remote transmitter frequency. Hub stations which use a
feedback control system to lock their downconverter to an accurate reference may have
negligible offsets. Hub stations using a locked LNB will have a finite frequency stability range.
Another reason to add guard band is to account for frequency stability of other carriers
directly adjacent on the satellite which are not part of an iDirect network. Be sure to review
this situation with your satellite link designer before changing carrier parameters.
The example that follows accounts for a frequency stability range for systems using
equipment with more significant stability concerns. Given the “Current Carrier Parameters” from the previous example and a total frequency stability of +/-5 kHz, compute the new carrier parameters:
Solution:
• Subtract the total frequency uncertainty from the available bandwidth to determine the
amount of bandwidth left for the carrier (882.724 kHz – 10 kHz = 872.724 kHz).
• Divide this result by the minimum channel spacing (872.724 kHz / 1.2 = 727.270 ksps).
• Use the result as the carrier symbol rate and compute the remaining parameters.
New Carrier Parameters
• User Bit (info) Rate: 1153.450 kbps
• Carrier Bit Rate: 1454.540 kbps
• Carrier Symbol Rate: 727.270 ksps
• Occupied Bandwidth: 882.724 kHz
• Guard Band Between Carriers: 21.375% (Channel Spacing = 1.21375)
A 15.345% improvement in user bit rate was achieved at no additional cost.
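The calculation above can be reproduced with a short sketch. QPSK modulation (2 bits per symbol) and the .793 FEC rate are inferred from the bit and symbol rates in the example; confirm the actual modulation and FEC for your carrier before applying it.

    # Sketch of the guard-band calculation worked in the example above.
    def new_carrier(occupied_khz, total_uncertainty_khz,
                    min_spacing=1.2, fec=0.793, bits_per_sym=2):
        usable_khz = occupied_khz - total_uncertainty_khz   # 882.724 - 10 = 872.724
        symbol_rate = usable_khz / min_spacing              # 727.270 ksps
        carrier_bit_rate = symbol_rate * bits_per_sym       # 1454.540 kbps
        info_rate = carrier_bit_rate * fec                  # 1153.450 kbps
        spacing = occupied_khz / symbol_rate                # 1.21375 -> 21.375% guard band
        return symbol_rate, carrier_bit_rate, info_rate, spacing

    print(new_carrier(882.724, 10.0))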
This chapter provides information about iDirect’s Alternate Downstream Carrier feature. It
contains the following sections:
• “Background” on page 97
• “Feature Description” on page 97
Background
The Alternate Downstream Carrier feature is intended to make it easier to move your iDirect
network to a new transmit carrier and to eliminate the danger of stranding remotes that have
not received the new carrier definition when the carriers are switched. If, for example, you
want to move your network to a larger transmit carrier, or you want to switch from SCPC to
DVB-S2, you can use the Alternate Downstream Carrier feature to facilitate the transition. In
earlier releases, if you changed your downstream carrier, a site visit was required to recover
any remotes that were not in the network at the time that the carrier was changed.
The Alternate Downstream Carrier feature is disabled if your NMS server is licensed for the
Global NMS feature. However, the Global NMS feature allows you to accomplish the same goal
by creating an alternate network containing the new downstream carrier and configuring
instances of your roaming remotes in both the existing network and the new network. Like the
Alternate Downstream Carrier feature, this allows you to ensure that all remotes have the
new downstream carrier definition prior to the actual upgrade.
Feature Description
Beginning in iDX Release 2.0, iBuilder provides the capability of selecting an alternate
downstream carrier on the Line Card dialog box of your transmit line card. (See the chapter
titled “Defining Networks, Line Cards, and Inroute Groups” in the iBuilder User Guide for
details). The configuration includes all necessary parameters for the remote to acquire the
alternate downstream. You should configure the alternate carrier for your network well in
advance of the carrier change to ensure that all remotes have the alternate carrier definition
when you change carriers.
If a remote is not in the network at the time of the carrier change it will attempt to acquire
the old primary carrier unsuccessfully when it first tries to rejoin the network. Since the old
primary carrier is no longer being transmitted, the remote will then attempt to acquire its
configured alternate downstream carrier which is the new primary carrier. At that point the
remote will acquire the network on the new carrier.
iDirect supports two types of downstream carriers: DVB-S2 and SCPC. A DVB-S2 downstream
carrier can serve as the alternate carrier for an SCPC primary carrier. Similarly, an SCPC
downstream carrier can serve as the alternate carrier for a DVB-S2 primary carrier. However,
this only works if your Tx line card and all remotes in your network support both downstream
carrier types. For example, an Evolution XLC-11 line card can transmit either a DVB-S2 or an
SCPC carrier and an Evolution X5 remote can receive either a DVB-S2 or an SCPC carrier.
Therefore, you can configure a network containing an XLC-11 transmit line card and X5
remotes with one type of carrier as the primary downstream carrier and the other type of
carrier as the alternate downstream carrier.
Note: An Evolution line card that is capable of transmitting either SCPC or DVB-S2
requires one firmware package for SCPC and another firmware package for
DVB-S2. If you plan to use the Alternate Downstream Carrier feature to switch
between SCPC and DVB-S2, you should load both packages onto your line card.
See the chapter titled “Converting Between SCPC and DVB-S2 Networks” in the
iBuilder User Guide for details.
When a remote joins a network with a configured Alternate Downstream Carrier, it first
attempts to acquire the last downstream carrier to which it was locked before it attempts to
acquire the other carrier. Therefore, if the remote was last locked to the primary carrier, it
attempts to lock to the primary carrier again when it tries to rejoin the network. Similarly, if
the remote was last locked to the alternate carrier, it attempts to lock to the alternate
carrier again when it tries to rejoin the network.
By default, a remote tries for five minutes (300 seconds) to find the last carrier before
switching to the other carrier. However, this timeout can be changed by defining the
net_state_timeout remote-side custom key on the Remote Custom tab in iBuilder as
follows:
[BEAMS]
net_state_timeout = <timeout>
where <timeout> is the number of seconds that the remote tries to acquire the primary
carrier before switching to the alternate carrier.
Note: If a new remote has never locked to any carrier, it always attempts to lock to
the primary downstream carrier first. Therefore, when commissioning a new
remote, it will first look for the primary carrier even if an alternate carrier is
configured.
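The acquisition behavior described above can be sketched as follows. The try_lock function is a stand-in for the remote’s actual carrier acquisition attempt, and the timeout handling is simplified.

    # Sketch: try the last-locked carrier for net_state_timeout seconds, then
    # alternate to the other configured downstream carrier, repeating as needed.
    import time

    def acquire_downstream(last_locked, other, try_lock, net_state_timeout=300):
        """Alternate between the last-locked carrier and the other carrier."""
        while True:
            for carrier in (last_locked, other):
                deadline = time.monotonic() + net_state_timeout
                while time.monotonic() < deadline:
                    if try_lock(carrier):
                        return carrier

    # Example: the old primary carrier is gone, so the remote ends up on the
    # alternate carrier (a short timeout keeps the demo fast).
    locked = acquire_downstream("primary", "alternate",
                                try_lock=lambda carrier: carrier == "alternate",
                                net_state_timeout=0.01)
    assert locked == "alternate"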
Primary and alternate downstream carriers cannot co-exist as active carriers in an iDirect
system. In addition, the Alternate Downstream Carrier feature is not intended to be used as a
recovery channel. If you have selected an Alternate Downstream Carrier for one Tx line card,
iBuilder does not allow you to assign that carrier to another line card, either as the primary or
alternate carrier.
The procedure for moving your network to the Alternate Downstream Carrier is documented
in the iBuilder User Guide. See “Changing to an Alternate Downstream Carrier” in the chapter
titled “Defining Networks, Line Cards, and Inroute Groups.”
Beginning with iDX Release 2.0, you must license your chassis slots and certain iDirect
features before you can enable them in iBuilder.
Licensed Features
In addition to requiring chassis slots to be licensed, iBuilder requires licenses for the following
features:
• Evolution X3 AES Link Encryption
• Evolution X5 AES Link Encryption
• Evolution X5 Upstream Spread Spectrum
• XLC-11 Upstream Spread Spectrum
• XLC-11 Downstream Spread Spectrum
License Files
When you license your chassis slots or any of the features listed above, iDirect will send you a
license file. Using the iBuilder License Toolbar, you must then import the license file to enable
the configuration of the chassis or feature on the iBuilder GUI.
For information on importing your license files into iBuilder and for validating your chassis licenses in iBuilder, see the iBuilder User Guide.
For general information on licensing (including obtaining licenses from iDirect), see the
iDirect Features and Chassis Licensing Guide.
This chapter describes basic hub line card failover concepts, transmit/receive versus receive-only line card failover, failover sequence of events, and failover operation from a user’s point
of view.
For information about configuring your line cards for failover, refer to the “Networks, Line
Cards, and Inroute Groups” chapter of the iBuilder User Guide.
Note: If your Tx line card fails, or you only have a single Rx line card and it fails, all
remotes must re-acquire into the network after failover is complete.
A warm standby line card has all of its operating parameters loaded into memory. The only difference between the active Tx(Rx) card and the warm standby is that the standby mutes its transmitter (and receiver). When the NMS detects a Tx(Rx) line card failure, it sends a command to the warm standby to un-mute its transmitter (and receiver), and the standby immediately assumes the role of the Tx(Rx) card.
Cold standby line cards take longer to failover than warm standby line cards because they
need to receive a new options file, flash it, and reset.
[Figure: Line card failover sequence. The Event Server determines that a line card has failed and notifies the Configuration Server. If the spare is a warm standby, the NMS sends it the ACTIVE options file of the failed card but does not reset it. If the spare is a cold standby, the NMS commands it to switch its role from Standby to Primary, sends it the ACTIVE options file of the failed card, resets it, and applies the necessary changes (such as the puma serial number).]
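The failover decision shown in the figure can be summarized with a short sketch. The class and method names are illustrative only, not actual NMS interfaces.

    # Sketch: warm standby takes over without a reset; cold standby switches
    # roles, loads the options file, and is reset.
    class SpareLineCard:
        def __init__(self, warm_standby):
            self.warm_standby = warm_standby
            self.role = "Standby"
            self.options = None
            self.muted = True

        def fail_over(self, failed_card_options):
            if self.warm_standby:
                self.options = failed_card_options   # already running: do NOT reset
                self.muted = False                   # un-mute Tx/Rx and take over
            else:
                self.role = "Primary"                # cold standby switches roles
                self.options = failed_card_options
                self.reset()                         # flash the options file and reboot

        def reset(self):
            pass                                     # placeholder for the actual reboot

    SpareLineCard(warm_standby=True).fail_over({"carrier": "primary"})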