
Vice President of Technology, RISC-V International
Industry veteran Tom shapes the overall technical strategy for the RISC-V Instruction Set Architecture (ISA). An alum of IBM and Linaro, he has built a career on fostering partnerships and deepening stakeholder engagement, and now helps RISC-V International’s members drive ever-greater collaboration.
As the dust settles on RISC-V Summit North America 2025, I look back on my first RISC-V Summit since joining as VP of Technology – and the packed programme of keynotes, panels, and talks from industry leaders and subject-matter experts.
Sessions from RISC-V Summit North America 2025 are now freely available to watch on YouTube, so whether you joined us in person or are catching up from afar, you can dive into the full catalogue at your own pace. To make that easier, I’ve pulled together some of our favourite moments: the talks that surprised us, challenged our assumptions, or revealed things we simply hadn’t realised about the depth and diversity of this ecosystem.
From insights that stretch back to the earliest days of reduced instruction set computing (RISC), through to bold visions for the future of open-standard computing, this collection brings together the very best of what Summit had to offer in 2025…
1. Dave Patterson’s Radical RISC Wager Defied the Entire Semiconductor Playbook
Many of us wouldn’t be here, doing what we’re doing now, if it wasn’t for the work of Dave Patterson and his team of visionary computer scientists in the early 1980s. Yet as the man himself explains in this striking account of the early days of his career, Patterson’s plan to strip the instruction set down, simplify the microcode, and rely more heavily on the compiler went against everything the semiconductor industry had been working towards at the time.
Competitors scoffed that this proposed ‘reduced instruction set computer’ (RISC) architecture would only bloat code, hurt cache behaviour and increase design complexity, not improve performance. To prove them wrong, Patterson’s team would have to build it – but what chance did a university lab have of out-engineering the likes of DEC’s VAX?
A RISCy Approach to Microprocessor Technology – David Patterson, Pardee Professor of CS
2. Space Is Outpacing the Incumbent Markets, and RISC-V Is Poised to Own It
We’re all looking forward to a future of RISC-V smartphones, PCs, and data centers. Yet some of RISC-V’s most interesting breakout wins, says Ted Speers, Technical Fellow at Microchip Technology, are more likely to come from beyond those incumbent markets – in segments where the field is still wide open.
These are the markets to the right of the size/growth curve, from automotive to critical infrastructure. Then, right at the far end of this scale we have space – a market in hypergrowth, as reusable launch and private satellite constellations reshape the economics of orbit, creating unprecedented demand for radiation-tolerant, ultra-efficient, highly customised compute on every satellite and spacecraft.
These are the markets, says Speers, that are today holding their own conferences about RISC-V.
Keynote: Securing the Final Frontier: RISC-V® in Space and Critical Infrastructure – Ted Speers
3. With Cloud AI at a Sustainability Precipice, RISC-V’s Next Big AI Opportunity Lies at the Edge
The Edge AI Foundation’s Ed Doran opened with some sobering stats: data centers can consume up to 40% of a community’s electricity budget, and a single data center can consume up to 5 million gallons of water per day.
There will always be a vital and perfectly sensible role for big, centralized data centers, but the real takeaway for me is that we urgently need to rethink where AI runs. That’s precisely the mission of the Edge AI Foundation, which, in partnership with RISC-V International, is championing the edge as RISC-V’s next major proving ground.
Doran then hands over to Makeljana Shkurti, Chair of the RISC-V AI Market Development Committee, to spell out why rigid legacy architectures can’t deliver the flexible determinism AI needs at the edge, and why RISC-V is such a natural fit. “It’s scalable, it’s modular, it’s secure,” she says, before walking us through concrete real-world examples of how this plays out in practice.
Keynote: RISC-V Opportunities at the Edge of AI – Makeljana Shkurti & Ed Doran
4. NVIDIA’s CUDA Will Turbocharge RISC-V Supercomputing
“NVIDIA’s announcement to port CUDA to RISC-V is critical for us in HPC”, said Dr Nick Brown, Senior Research Fellow at EPCC Edinburgh. “I think it’s then going to be a really interesting journey to kick the tires and really see what we can get out of it on RISC-V”.
Of course, CUDA is only one piece of the puzzle. Alongside GPU support, Brown lists applications, libraries, tooling, filesystems, networking and matrix extensions as key areas of work on the road to a truly RISC-V-based supercomputer. Not that those gaps have stopped him: already at EPCC in Edinburgh, he notes, there is a RISC-V test bed that “feels like” a supercomputer, with every node powered by RISC-V.
Keynote: Reimagining the Future of High Performance Computing Catalysed by RISC-V – Nick Brown, Senior Research Fellow, EPCC
5. You Can Run RISC-V on a Cloud FPGA Right Now, via AWS
In his keynote, AWS’ Jeremy Dahan showed us all how easy it is to get access to RISC-V hardware. Sure, physical dev boards are cool, but being able to prototype, test, and benchmark your own RISC-V designs on real programmable logic from anywhere in the world is surely the next best thing.
Today, you can spin up an Amazon EC2 F1 instance and load a RISC-V soft core onto its FPGA. If you’re happy with existing open-source RISC-V cores, you can deploy those; if you want to tweak the microarchitecture yourself, you can do that too – all in a reproducible, scriptable cloud environment.
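As a rough sketch of what that workflow might look like – all instance, AMI, and FPGA-image IDs below are placeholders, not values from the talk, and the exact steps will depend on your account setup:

```shell
# Sketch only: assumes the AWS CLI is configured, and that the FPGA
# Developer AMI and a RISC-V soft-core FPGA image (AFI) are already
# available in your account. Every ID here is a placeholder.

# 1. Launch an EC2 F1 instance from the FPGA Developer AMI.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type f1.2xlarge \
    --key-name my-keypair

# 2. On the running instance, load the RISC-V soft core (packaged as an
#    Amazon FPGA Image) into FPGA slot 0 using the aws-fpga SDK tools.
sudo fpga-load-local-image -S 0 -I agfi-0123456789abcdef0

# 3. Confirm which image the slot is now running.
sudo fpga-describe-local-image -S 0
```

From there, your host-side software talks to the soft core over the shell interfaces provided by the aws-fpga SDK, and the whole flow can be scripted and repeated for each design iteration.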
Keynote: Designing Processors in the Cloud: How Advanced Emulation and AWS Cloud Infrastructure is Reshaping Silicon Development – Jeremy Dahan, Automotive Compute Tech GTM Specialist, AWS
6. Upstreaming Remains a Heated Issue
“Part of your responsibility as a hardware vendor is to work with the software community to get your silicon support software accepted upstream”, retorted Red Hat’s Steve Wanless in response to a question from the floor on hardware compatibility.
“You hardware guys are trying to bring new boards to market quickly, and you’re distributing them with your own kernel, no upstream work,” added Darren Davis from SUSE. “We’ve only recently got graphics working on some boards. You should have relationships with us – we all want to achieve the same goal.”
This is something we covered in detail a few months back in Full-Fat, Kernel-Ready: Why RISC-V Linux Needs Everyone Upstream, and it’s clear it remains a friction point. We all need to come together to ensure every new RISC-V board benefits from the same well-tested, community-maintained stack.
Keynote Panel: Linux and RISC-V: Principles for a Winning Partnership
7. It’s Okay for Standardization to Coexist With Choice
As RISC-V co-founder Krste Asanović noted in his State of the Union address, there are often multiple ways to achieve the same capability – and negotiating different approaches is one of the finer aspects of RISC-V International’s role.
In particular, Asanović highlighted the four (actually 3+1) matrix-extension approaches currently progressing in parallel, from lighter-weight IME approaches for embedded and edge inference to high-end AME designs with separate matrix accumulators for servers and data center AI compute. “All four of these are making good progress and we’ll see how they evolve over time,” he said, highlighting them as a clear example of how this member-driven community is advancing the ISA’s matrix-multiply capabilities across every market segment.
