GPU-Accelerated DFT
the power of modern GPU hardware; GPU4PySCF on Rowan; pricing changes coming in 2026; an interview with Navvye Anand from Bindwell; using Rowan to develop antibacterial PROTACs
Today we are launching GPU-accelerated DFT for all Rowan users with the GPU4PySCF package. GPU4PySCF provides large speedups over CPU-based DFT, allowing significantly larger systems to be run (or more calculations to be run for the same budget). We’re also announcing some upcoming pricing changes, effective January 1, 2026. Read on for more details!
GPU-Accelerated DFT
CPU-based DFT has traditionally been more popular than GPU-based DFT, since the calculation of complicated high-angular-momentum integrals is naturally more amenable to CPUs. However, significant advances in GPU-based algorithms have made it easier to harness the large number of GPU cores. Additionally, significant resources have been poured into GPU development thanks to the rise of ML, leading to huge increases in the number of cores, memory bandwidth, and double-precision throughput (NVIDIA H200s have 16,896 CUDA cores, 141 GB of RAM, 4.89 TB/s of bandwidth, and 30.16 TFLOPS for FP64 computations). All these improvements make it possible to run DFT with incredible speed on modern GPUs.
While there have also been many advances in methods that approximate DFT, such as semiempirical methods and neural network potentials (NNPs), we think that DFT is still quite important. Current NNPs are significantly faster than DFT but are mostly limited to ground-state energy and gradient calculations. They cannot compute many chemically relevant properties, such as excited states, orbitals, and response properties like NMR, Raman, and hyperpolarizability. Additionally, since DFT solves for the density, a large number of density-derived properties come with every calculation, including dipole moments, electrostatic potentials, and atomic charges. Perhaps most importantly, DFT is used to generate the training data for NNPs, and as the demand for better, faster, and more capable NNPs grows, there will be an increased need for DFT for dataset generation.
In practice, GPU4PySCF is much faster than Psi4 or PySCF, often by more than an order of magnitude. We are excited for the possibilities offered by GPU4PySCF and wrote a deeper technical blog for those who are interested, going into the performance across various functionals, basis sets, GPUs, and systems (sneak peek: GPU4PySCF offers even greater speedups for larger basis sets and range-separated hybrids). Even though GPUs are more expensive per hour than CPUs, GPU4PySCF is so fast that it’s cheaper to run than Psi4 or PySCF:

Incorporating GPU4PySCF makes DFT through Rowan significantly faster, more reliable, and more powerful. We run all production GPU4PySCF calculations on top-of-the-line NVIDIA H200 GPUs, making calculations >50x faster in some cases while allowing us to scale to larger systems than before.
Running GPU4PySCF Through Rowan
It’s easy to submit GPU4PySCF calculations through Rowan. We’ve updated the default DFT methods—now, our suggested “routine” and “careful” DFT methods both run through GPU4PySCF.
For full control, you can click “Show Details” to individually modify the functional, basis set, corrections, and choice of DFT engine. Different methods are supported by different engines; while the “3c” methods are not yet available in PySCF or GPU4PySCF, most other functionals and basis sets are available.
Once submitted, calculations run through GPU4PySCF can be monitored, analyzed, and resubmitted just like any other calculation run through Rowan. The main difference is speed—for instance, this imatinib calculation takes only 30 seconds per optimization step at the r2SCAN-D4/vDZP level of theory.
Pricing Changes
As the Rowan platform becomes capable of doing more varied tasks, we’ve noticed that our pricing has become inconsistent with the underlying compute costs. Jobs that run on the biggest Hopper GPUs (like GPU4PySCF) consume credits at the same rate as jobs that run on small CPU workers, and our subscription tiers no longer give users enough credits to fully utilize our most recent features. To address these issues, we’re announcing a number of changes to how we price our offerings.
Starting on January 1, 2026, jobs that run on GPU hardware will use 3–7 credits per GPU minute. We will be increasing the cost of our subscription tiers and changing their credit allocations while decreasing the cost of ad hoc credit purchases. (Current subscribers will retain their existing subscription plans.) Finally, we will start tracking the storage associated with workflows and provide tools to manage it. Each subscription tier will include a certain amount of storage, and additional storage will cost $0.25 per GB per month.
Here are a few more details about these changes.
Compute
Previously, CPU and GPU jobs all cost 1 credit per minute of compute. Moving forward, jobs running on CPU machines will continue to cost 1 credit per minute. Jobs running on GPUs with the Turing (e.g., T4), Ada (e.g., L40S), or Ampere (e.g., A100) architectures will cost 3 credits per minute. Jobs running on Hopper-architecture GPUs (H100s and H200s) will cost 7 credits per minute.
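In code terms, the new rates amount to a simple per-minute multiplier. Here’s a small illustrative helper (the hardware labels are our shorthand for this sketch, not a Rowan API):

```python
# Sketch of the 2026 per-minute credit rates described above.
# The hardware labels are shorthand for this example, not a Rowan API.
CREDITS_PER_MINUTE = {
    "cpu": 1,          # CPU machines
    "gpu-older": 3,    # Turing (T4), Ada (L40S), Ampere (A100) GPUs
    "gpu-hopper": 7,   # Hopper GPUs (H100, H200)
}

def job_cost(hardware: str, minutes: float) -> float:
    """Credits consumed by a job of the given length on the given hardware."""
    return CREDITS_PER_MINUTE[hardware] * minutes

# A 30-minute GPU4PySCF job on an H200 costs 210 credits;
# the same wall time on a CPU worker costs 30 credits.
print(job_cost("gpu-hopper", 30))  # 210
print(job_cost("cpu", 30))         # 30
```

The 7x multiplier for Hopper GPUs is still a net win when a job runs more than 7x faster than it would on CPU, which is typically the case for the DFT workloads described above.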
We will continue to choose the hardware best suited for each job based on careful internal benchmarking, and we plan to indicate what hardware a given job will run on before submission. (Unlike the subscription changes, these new compute rates will also apply to currently subscribed Rowan users.)
New Tiers
As we add more long-running and high-throughput workflows to the Rowan platform like pose-analysis MD, protein binder design, and batch docking, we’ve found that even our business-tier users are frequently short on credits. Additionally, different pricing tiers have dramatically different implied per-credit costs, creating cloud-spend risk for us and strange incentives for our users.
To provide our business-tier users with more credits while working to reduce the cost-per-credit variance across our different plans, we’re modifying our subscription tiers. The following changes will take effect on January 1, 2026:
The cost of additional credit purchases will decrease from $0.05/credit to $0.04/credit.
The cost of our individual plan will increase to $300/month (prev. $200/month). This plan will come with 2,000 credits/week (prev. 1,250 credits/week).
The cost of our business plan will increase to a starting price of $50,000/year (prev. $18,000/year). This plan will come with 35,000 credits/week (prev. 12,500 credits/week).
The cost of our individual and group academic plans will stay the same ($65/month for individuals). These plans will each come with 500 credits/user/week (prev. 1,250 credits/user/week).
Current subscribers will retain their existing subscription plans. These changes will only affect users subscribing to Rowan in 2026 and beyond.
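For the curious, some back-of-the-envelope arithmetic (ours, assuming 52 weeks per year; not official pricing guidance) shows how the new numbers pull the implied per-credit cost of each plan toward the $0.04 ad hoc rate:

```python
# Back-of-the-envelope implied cost per credit under the 2026 plans.
# Our own arithmetic (52 weeks/year assumed), using the figures listed above.
def cost_per_credit(price_per_year: float, credits_per_week: float) -> float:
    return price_per_year / (credits_per_week * 52)

plans_2026 = {
    "ad hoc":           0.04,  # flat rate, no subscription math needed
    "individual":       cost_per_credit(300 * 12, 2_000),
    "business (start)": cost_per_credit(50_000, 35_000),
    "academic":         cost_per_credit(65 * 12, 500),
}
for name, usd in plans_2026.items():
    print(f"{name:>16}: ${usd:.4f}/credit")
```

Every plan now implies roughly $0.027–0.035 per credit; under the previous allocations, the same arithmetic gave the academic plans an implied cost of about $0.012 per credit, nearly 3x cheaper than ad hoc purchases.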
Storage
We’ve noticed that some outputs—MD trajectories, large orbital calculations, etc.—can take up a ton of storage, even when the results are no longer relevant. In 2026, we will cap free-tier users at 1 GB of storage. Non-academic individual users and academic individual & group users will be given 10 GB storage per user. Business users will be given 100 GB storage per user. [A previous version of this newsletter got these numbers wrong—sorry!]
For subscribers, additional storage will be billed at the rate of $0.25 per GB per month. Free-tier users exceeding their storage allocation will be unable to run additional calculations until they have available storage.
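Concretely, the overage charge works like this (a sketch; the 10 GB figure is the individual-tier allocation mentioned above):

```python
# Sketch of the storage overage charge described above: anything beyond
# the plan's included allocation bills at $0.25/GB/month.
OVERAGE_USD_PER_GB_MONTH = 0.25

def monthly_storage_bill(used_gb: float, included_gb: float) -> float:
    """Dollar cost of storage beyond the included allocation for one month."""
    return max(0.0, used_gb - included_gb) * OVERAGE_USD_PER_GB_MONTH

# An individual user (10 GB included) storing 34 GB pays $6.00/month extra.
print(monthly_storage_bill(34, 10))  # 6.0
print(monthly_storage_bill(8, 10))   # 0.0 (under the allocation)
```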
Right now, you can check how much storage you are using by going to your account’s billing page. We plan to add additional tools to help users monitor and understand their storage usage.
Other Rowan Updates
We hosted our first interview! This week Corin interviewed Navvye Anand, co-founder of Bindwell, the company that just raised $6M to build next-generation pesticides with AI. Corin and Navvye talk about Bindwell’s approach, how pesticide discovery differs from drug discovery, and what research Navvye finds most exciting right now. Check out the full interview.
We also posted our first Rowan Research Spotlight in a few months, covering Emilia Taylor’s work on adapting mammalian PROTAC technology to target antibiotic resistance. Emilia shares her thoughts on the potential of “BacPROTACs,” how she’s addressing the challenges of membrane permeability in beyond-rule-of-five chemical space, and how Rowan’s computations have helped her—read the full conversation here.
Happy Thanksgiving to our US-based readers! As always, feel free to reach out to our team if you’re interested in partnering with Rowan; until next time, happy computing.