High Performance Spaceflight Computing (HPSC)

The document discusses NASA's High-Performance Spaceflight Computing (HPSC) Program. The goal of HPSC is to dramatically advance spaceflight computing capabilities, with nearly two orders of magnitude improvement over current performance, providing significant benefits for future NASA and Air Force space missions. Under the HPSC contract, Boeing will develop prototype radiation-hardened multi-core processors called "Chiplets," along with system software and evaluation boards, advancing the state of the art for spaceflight computing.

National Aeronautics and Space Administration

High-Performance Spaceflight Computing (HPSC) Program Overview

Wesley Powell
Assistant Chief for Technology
NASA Goddard Space Flight Center
Electrical Engineering Division (Code 560)
[email protected]
301-286-6069

To be presented at Space Computing & Connected Enterprise Resiliency Conference (SCCERC), Bedford, MA, June 4-8, 2018.

www.nasa.gov/spacetech
Acronym List

AFRL – Air Force Research Laboratory
AMBA – ARM Advanced Microcontroller Bus Architecture
ASIC – Application Specific Integrated Circuit
BW – Bandwidth
CFS – Core Flight Software
CPU – Central Processing Unit
C&DH – Command and Data Handling
DDR – Double Data Rate
DMR – Dual Modular Redundancy
DRAM – Dynamic Random Access Memory
EEPROM – Electrically Erasable Programmable Read-Only Memory
FCR – Fault Containment Region
FPGA – Field Programmable Gate Array
FSW – Flight Software
Gb/s – Gigabits Per Second
GB/s – Gigabytes Per Second
GNC – Guidance Navigation and Control
GOPS – Giga Operations Per Second
GSFC – Goddard Space Flight Center
HEOMD – Human Exploration and Operations Mission Directorate
HPSC – High Performance Spaceflight Computing
JPL – Jet Propulsion Laboratory
KHz – Kilohertz
Kpps – Kilo Packets Per Second
Mbps – Megabits Per Second
MCM – Multi Chip Module
MRAM – Magnetoresistive Random Access Memory
NASA – National Aeronautics and Space Administration
NVRAM – Nonvolatile Random Access Memory
PCB – Printed Circuit Board
RTOS – Real Time Operating System
S/C – Spacecraft
SCP – Self Checking Pair
SMD – Science Mission Directorate
SpW – SpaceWire
SRAM – Static Random Access Memory
SRIO – Serial RapidIO
SSR – Solid State Recorder
STMD – Space Technology Mission Directorate
TMR – Triple Modular Redundancy
TRCH – Timing Reset Configuration and Health
TTE – Time Triggered Ethernet
TTGbE – Time Triggered Gigabit Ethernet
VMC – Vehicle Management Computer
XAUI – 10 Gigabit Attachment Unit Interface

Outline

• HPSC Overview
• HPSC Contract
• Chiplet Architecture
• HPSC Middleware
• NASA HPSC Use Cases

High Performance Spaceflight Computing (HPSC) Overview

• The goal of the HPSC program is to dramatically advance the state of the art for spaceflight computing

• HPSC will provide a nearly two orders-of-magnitude improvement over the current state of the art for spaceflight processors, while also providing unprecedented flexibility to tailor performance, power consumption, and fault tolerance to meet widely varying mission needs

• These advancements will provide game-changing improvements in computing performance, power efficiency, and flexibility, which will significantly improve the onboard processing capabilities of future NASA and Air Force space missions

• HPSC is funded by NASA's Space Technology Mission Directorate (STMD), Science Mission Directorate (SMD), and the United States Air Force

• The HPSC project is managed by the Jet Propulsion Laboratory, and the HPSC contract is managed by NASA Goddard Space Flight Center (GSFC)
HPSC Background

• HPSC began with a NASA internal study, which identified several use cases for high performance spaceflight computing

Human Spaceflight (HEOMD) Use Cases:
 Cloud Services
 Advanced Vehicle Health Management
 Crew Knowledge Augmentation Systems
 Improved Displays and Controls
 Augmented Reality for Recognition and Cataloging
 Tele-Presence
 Autonomous & Tele-Robotic Construction
 Automated Guidance, Navigation, and Control (GNC)
 Human Movement Assist
 Immersive Environments for Science Ops / Outreach

Science Mission (SMD) Use Cases:
 Extreme Terrain Landing
 Proximity Operations / Formation Flying
 Fast Traverse
 New Surface Mobility Methods
 Imaging Spectrometers
 Radar
 Low Latency Products for Disaster Response
 Space Weather
 Science Event Detection and Response

• Following this study, an AFRL/NASA Next Generation Space Processor (NGSP) analysis program engaged industry to define and benchmark future multi-core processor architectures

• Based on the results of this program, the Government generated the conceptual reference architecture and detailed requirements for the HPSC "Chiplet"
HPSC Background

• The reference design features eight power-efficient ARM 64-bit processor cores and on-chip interconnects, scalable and extensible in a Multi-Chip Module (MCM) or on a Printed Circuit Board (PCB) via XAUI and SRIO (Serial RapidIO) 3.1 high-speed links

 Multi-Chiplet configurations (tiled or cascaded) provide increased processing throughput and/or increased fault tolerance (e.g. each Chiplet as a separate fault containment region, NMR)
 Chiplets may be connected to other XAUI/SRIO devices, e.g. FPGAs, GPUs, or ASIC co-processors

• Supports multiple hardware-based and software-based fault tolerance techniques (a minimal software sketch follows below)
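
As one concrete illustration of a software-based technique, triple modular redundancy (TMR) can run three copies of a computation on separate cores or Chiplets and vote on the results. The C sketch below shows a minimal bitwise majority voter; it is illustrative only and not taken from the HPSC software baseline.

/* Minimal software TMR voter sketch (illustrative only; not from the
 * HPSC baseline). Assumes three redundant copies of a computation ran
 * on separate cores or Chiplets acting as fault containment regions. */
#include <stdint.h>
#include <stdio.h>

/* Bitwise majority: each result bit takes the value that at least two
 * of the three copies agree on, masking a single corrupted copy. */
static uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
{
    return (a & b) | (a & c) | (b & c);
}

int main(void)
{
    uint32_t good = 0xCAFEF00Du;
    uint32_t upset = good ^ 0x00000100u; /* single-event upset in one copy */
    printf("voted: 0x%08X\n", tmr_vote(good, upset, good)); /* 0xCAFEF00D */
    return 0;
}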

[Figure: Multi-Chiplet Configuration]

[Figure: HPSC "Chiplet" Reference Design]


HPSC Contract

• Following a competitive procurement, the HPSC cost-plus-fixed-fee contract was awarded to Boeing

• Under the base contract, Boeing will provide:
 Prototype radiation-hardened multi-core computing processors (Chiplets), both as bare die and as packaged parts
 Prototype system software which will operate on the Chiplets
 Evaluation boards to allow Chiplet test and characterization
 Chiplet emulators to enable early software development

• Five contract options have been executed to enhance the capability of the Chiplet:
 On-chip Level 3 cache memory
 Dual real-time processors
 Dual Time Triggered Ethernet (TTE) interfaces
 Dual SpaceWire interfaces
 Package amenable to spaceflight qualification

• Contract deliverables are due April 2021

Chiplet Architecture

• With the contract options awarded and the preliminary design completed, the Chiplet architecture has evolved from the original reference architecture

[Figure: HPSC Chiplet Architecture]

HPSC Middleware

• AFRL is funding JPL and NASA GSFC to develop the HPSC Middleware

• The Middleware will provide a software layer that provides services to the higher-level application software to achieve:
 Configuration management
 Resource allocation
 Power/performance management
 Fault tolerance capabilities of the HPSC Chiplet

• Serving as a bridge between the upper application layer and the lower operating system or hypervisor, the Middleware will significantly reduce the complexity of developing applications for the HPSC Chiplet (a hypothetical allocation-API sketch follows the stack outline below)

Integrated Stack Concept (top to bottom):
 Mission Applications
 FSW Product Lines – Core S/C Bus Functions
 GSFC and JPL Core Flight Software (CFS)
 HPSC Middleware – Resource Management: a mission-friendly interface for managing/allocating cores for performance vs. power vs. fault tolerance
 Traditional System Software – RTOS or Hypervisor, FSW Development Environment
 Hardware – Multi-core Processor Chips, Evaluation Boards
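
As a rough illustration of the kind of interface such a resource-management layer could expose, the C sketch below lets an application request cores at a chosen performance/power/fault-tolerance operating point. All hpsc_* names, types, and the stub implementation are assumptions made for illustration; the actual HPSC Middleware API is not described in this overview.

/* Hypothetical middleware allocation call (names and types invented
 * for illustration; not the actual HPSC Middleware API). */
#include <stdint.h>
#include <stdio.h>

typedef enum {
    HPSC_MODE_PERFORMANCE,   /* all requested cores at full clock    */
    HPSC_MODE_LOW_POWER,     /* fewer cores, reduced clock           */
    HPSC_MODE_FAULT_TOLERANT /* redundant cores, e.g. DMR/TMR        */
} hpsc_mode_t;

typedef struct {
    uint8_t     num_cores; /* cores requested for the application */
    hpsc_mode_t mode;      /* requested operating point           */
} hpsc_alloc_request_t;

/* Stub standing in for the middleware's resource manager. */
static int hpsc_alloc_cores(const hpsc_alloc_request_t *req)
{
    printf("allocating %u core(s) in mode %d\n", req->num_cores, req->mode);
    return 0; /* 0 = success */
}

int main(void)
{
    /* A GNC task might favor fault tolerance over raw throughput. */
    hpsc_alloc_request_t gnc = { .num_cores = 2,
                                 .mode = HPSC_MODE_FAULT_TOLERANT };
    return hpsc_alloc_cores(&gnc);
}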

HPSC Use Cases

Rover

Compute Needs:
• Vision Processing
• Motion/Motor Control
• GNC/C&DH
• Planning
• Science Instruments
• Communication
• Power Management
• Thermal Management
• Fault detection/recovery

System Metrics:
• 2-4 GOPS for mobility (10x RAD750)
• >1 Gb/s science instrument data
• 5-10 GOPS science data processing
• >10 KHz control loops
• 5-10 GOPS and 1 GB/s memory BW for model-based reasoning for planning

Lander

Compute Needs:
• Hard real-time compute
• High-rate sensors with zero data loss
• High level of fault protection / fail-over

System Metrics:
• >10 GOPS compute
• 10+ Gb/s sensor rates
• Microsecond I/O latency
• Control packet rates >1 Kpps
• Time tagging to microsecond accuracy

HPSC – High Bandwidth Instrument and SmallSats / Constellations Use Cases

High Bandwidth Instrument

Compute Needs:
• Soft real time
• Non-mission critical
• High-rate sensors
• Large calibration sets in NV memory

System Metrics:
• 10-20 GOPS compute
• >10 GB/s memory bandwidth
• >20 Gbps sensor IO data rates (TBD)

[Figure: an Imager feeds an FPGA, which connects over SRIO to three cascaded Chiplets, each with its own DDR and NVRAM; a SpaceWire link connects to an SSR]

Smallsat

Compute Needs:
• Hard and soft real time
• GNC/C&DH
• Autonomy and constellation (cross-link comm)
• Sensor data processing
• Autonomous science

System Metrics:
• 2-5 Gbps sensor IO
• 1-10 GOPS instrument processing
• 1 GB/s memory bandwidth
• 250 Mbps cross-link bandwidth

[Figure: a single Chiplet with DDR and NVRAM, connected over SRIO and SpW to a SpaceWire router, which links to an SSR or Comm]

HPSC – HEO Habitat/Gateway Use Case

• Similar to the Orion two-fault-tolerant architecture

[Figure: four fault containment regions (FCRs), each with its own sensors (cameras, lidars, etc.), interconnected by triple-redundant Time Triggered Gigabit Ethernet (TTGbE x3)]

[Figure: Existing Orion Vehicle Management Computer (VMC)]

• A single HPSC exceeds the performance metrics of an Orion Vehicle Management Computer (VMC)
• A VMC contains three Self-Checking Pairs (SCPs); a conceptual sketch of a pair's comparison logic follows below
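
For context, a self-checking pair runs the same computation on two lanes and compares the outputs, failing silent on a miscompare so that vehicle-level redundancy management can switch to another fault containment region. In the VMC this comparison is performed in lockstep hardware; the C fragment below is only a conceptual software analogue, with all names invented for illustration.

/* Conceptual self-checking pair (SCP) comparison, as a software
 * analogue of the hardware lockstep compare (illustrative only). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Compare the two lane outputs; on disagreement the pair declares
 * itself failed (fail-silent) rather than emitting a bad output. */
static bool scp_agree(uint32_t lane_a, uint32_t lane_b)
{
    return lane_a == lane_b;
}

int main(void)
{
    uint32_t a = 42u, b = 42u; /* outputs of the two redundant lanes */
    if (!scp_agree(a, b)) {
        puts("pair miscompare: fail silent, switch to backup FCR");
        return 1;
    }
    puts("pair agrees: output is trusted");
    return 0;
}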
Conclusion

• Future space mission scenarios call for significantly improved spaceflight computing capability

• Improved spaceflight computing means enhanced computational performance, energy efficiency, and fault tolerance

• With the ongoing HPSC development, we are well underway to meeting future spaceflight computing needs

• The NASA-developed Middleware will allow the efficient infusion of the HPSC Chiplet into those missions

• As illustrated by the NASA use cases, our future missions demand the capabilities of HPSC

Acknowledgements: Rich Doyle (JPL), Rafi Some (JPL), Jim Butler (JPL), Irene Bibyk (GSFC), and Jonathan Wilmot (GSFC) for diagrams and use case definitions

