towards sustained hypersonic flight

What comes after the Space Shuttle?

New Glenn? Falcon??

I believe something more radical is on the horizon…

Summary: a modernized X-30 National Aero-Space Plane with advanced computing under the hood.

About six years ago, I was fortunate to receive hundreds of hours of guidance from the CFD chairman at Boeing (now at Blue Origin). As my startup’s acting VP of Research, he helped us establish technical requirements for a new simulation platform for next-gen systems. He set us on a path, and I worked to bring it all together, pulling from a spectrum of experiences at JPL, Blue Origin, and Virgin Galactic…

Why does hypersonic flight require a new engineering approach?

Banner image, courtesy https://en.wikipedia.org/wiki/Specific_impulse

ABSURD ENERGIES

By definition, “hypersonic” means much faster than sound. There is no formal demarcation between supersonic and hypersonic, but design philosophies start to deviate markedly as kinetics take over. At sufficient speed and conditions, traditional compressible flow theory becomes inaccurate because additional energy modes of excitation, storage, and transmission (not included in the original model) come into play. As specific kinetic energy approaches molecular bond energies, the molecular population begins to dissociate and is slow to recombine, so finite-rate chemistry increasingly limits reaction progress (Damköhler numbers fall). A further transition occurs as radiation comes to dominate the thermal modes. Plasma density increases as free-stream energy density approaches the binding energies of valence electrons. At some point, you can’t extract net positive work because combustion doesn’t progress (until recombination outside the engine).
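To put rough numbers on that (my back-of-the-envelope, not a precise threshold): breaking the O2 bond takes about 498 kJ/mol, or roughly 15.6 MJ/kg, which equals a specific kinetic energy of v^2/2 at v ≈ 5.6 km/s; N2, at about 945 kJ/mol (~34 MJ/kg), corresponds to roughly 8.2 km/s, approaching entry speeds. But the trouble starts much earlier: at Mach 6 (~1.8 km/s) the flow carries about 1.6 MJ/kg, and when that energy is recovered as stagnation enthalpy it already drives temperatures where vibrational excitation and the first oxygen dissociation appear.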

For air, I’d say hypersonic phenomena set in around Mach 6. Very few vehicles to date (or planned) have such capability, obviously.

However, I think it is within our technological grasp to cruise at Mach 15+ with the right configuration and engineering approach, enabling point-to-point travel and booster services for deployables and satellites.

In time, I intend to demonstrate a clear pathway forward. First we must understand the basic principles and underlying processes…

PERFORMANCE

In terms of effective specific impulse: perhaps close to, or slightly worse than, current commercial high-bypass turbofans, but certainly worse than future hydrogen-fueled turbofans!

However, it represents a marked improvement over traditional hydrogen-oxygen rocket performance: not only does the air-breathing vehicle avoid carrying its own oxidizer, it can also control its effective specific impulse by varying the ratio of bypass (air) to heat input (fuel).

To move beyond traditional liquid-fueled rockets for high-speed trans-atmospheric flight, we can extract more thrust per watt from an air-breathing engine by including more air (“working fluid”) in the propulsive process at a lower jet speed (the difference between engine outlet and inlet velocities). We essentially spread the jet power to maximize fuel efficiency (“effective specific impulse”) and match the jet outlet velocity to the free-stream speed to maximize jet kinetic efficiency. (Ideally, once ejected, the exhaust would stand still in the reference frame of the surrounding air. This is not possible at very low speeds, where minimal mass flux through the engine generates minimal net thrust, albeit at very high efficiency! At high Mach numbers, there isn’t enough delta-v in the exhaust to keep up with the vehicle speed, and a gradual drop in thermodynamic efficiency is expected.)
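In first-order terms (the standard ideal relations, neglecting pressure thrust and fuel mass flow):

    F ≈ mdot * (v_jet − v_0),        η_p = 2 / (1 + v_jet / v_0)

Pushing more air (larger mdot) at a jet velocity only slightly above flight speed keeps thrust up while the propulsive (Froude) efficiency η_p approaches one. It is also the low-speed catch noted above: at very low flight speed, matching v_jet to v_0 leaves almost no thrust to work with.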

An example everyone can observe: this is why commercial jets have big engines with big bypasses, so that the majority of the thrust comes from the fan rather than the core engine flow. I think nowadays the bypass ratio is something like 8:1 or more. The exhaust velocity is roughly sonic at Mach 0.85 cruise, all to maximize fuel economy, which is the driving economic factor for air travel and a significant portion of your airfare. Not to mention the ecological impact. Image courtesy https://en.wikipedia.org/wiki/General_Electric_GE90

The kinetic energy of the vehicle scales as the square of its speed, while the power required to sustain flight scales as the cube.
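The cube law follows directly from drag: power to sustain flight is P = D * v, and since aerodynamic drag goes as D = (1/2) * rho * v^2 * C_D * A, the required power goes as (1/2) * rho * v^3 * C_D * A (holding density, drag coefficient, and area fixed). Double the cruise speed and, all else equal, you need roughly eight times the power pushed through the same airframe.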

What does this mean about powered vehicles that fly very fast?

INTEGRATED ENGINES

As vehicle power density scales with speed cubed, propulsion starts to dominate the design of the vehicle in hypersonics. The vehicle becomes a big flying engine as Mach approaches 25, and the project schedule and funding should reflect this. Based on flight profile and lift requirements, a linear “wave-rider” layout may be considered versus a more practical annular layout (which is also more efficient at carrying large thermal-stress loads and propellant storage). Fuel density remains important, but not as much as net specific energy density.

Sub-cooled liquid hydrogen is used as both fuel and coolant, and if pressurized above its critical point it has enormous heat capacity, but at the cost of strongly varying density (and of the Nusselt number used in regen-cooling analysis). Both active and passive cooling strategies are required to offset vehicle and engine heat transfer. An open cycle is unacceptable for overall performance, so boundary-layer coolant (BLC) must be injected on leading surfaces and then ingested and combusted (as part of a turbulent shock-detonation inlet). Combustion takes place in specialized subsonic burners before being mixed with the primary flow as part of a closed staged-combustion cycle. Liquid oxygen supplements the combustors for take-off and LEO injection.
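For the regen-cooling piece, a first-pass channel calculation is often anchored on a classic turbulent-convection correlation before any property-varying corrections are layered on. A minimal sketch in C++ (illustrative only, not our production models; supercritical hydrogen demands property-corrected correlations in practice, and the function names here are just placeholders):

    #include <cmath>

    // Dittus-Boelter correlation for turbulent internal convection, as a first
    // pass at sizing a regen-cooling channel. Supercritical hydrogen has
    // strongly varying properties across the channel, so real analyses replace
    // this with property-corrected correlations.
    double nusseltDittusBoelter(double Re, double Pr) {
        return 0.023 * std::pow(Re, 0.8) * std::pow(Pr, 0.4);   // fluid being heated
    }

    double filmCoefficient(double Nu, double k, double dHydraulic) {
        return Nu * k / dHydraulic;                              // W/(m^2 K)
    }

    // Coolant-side wall heat flux for a given film coefficient and temperatures.
    double wallHeatFlux(double h, double Twall, double Tbulk) {
        return h * (Twall - Tbulk);                              // W/m^2
    }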

Engine length becomes an impediment in smaller vehicles (such as those flown by any research/test program) because of finite combustion reaction time, which requires a longer characteristic chamber length to ensure relatively complete combustion (Damköhler numbers close to one). Net chemical power extraction is balanced against thermal and drag penalties, so the design tools must weigh all of these and resolve finite-rate reacting large eddy simulation (LES), since physical testing has inherent limitations in replicating and measuring the combustion environment. Simulations are used for analysis and optimization, and to characterize transfer functions to be applied in the machine’s advanced onboard control system.
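To make the chamber-length argument concrete (a rough illustration, not our design numbers): the Damköhler number here is the ratio of flow residence time to chemical time, Da = (L/u) / tau_chem. If the combustor through-flow is around 3 km/s and the relevant hydrogen-air reaction time is on the order of 0.1 ms, the chamber needs L of at least u * tau_chem ≈ 0.3 m just to reach Da ~ 1, before any margin for mixing. Shrink the test article and the chemistry simply doesn’t finish inside it.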

Although a hypersonic compressor and diffuser does not use rotating turbomachinery (owing to excessive thermal stresses), the supporting cooling and fluid-control systems remain a large-scale systems engineering challenge. The technical scope is akin to a nuclear power plant that can fly and requires multiple modes of operation. Structural engineering must make no assumptions regarding thermal and acoustic environments, as the vehicle will pass through many regimes, expected and off-nominal. Quantifying dynamic load environments requires experiment or flight experience, as the computing resources needed to resolve turbulent micro-structures scale as the Reynolds number to the 9/4 power, more than the square of speed!
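That 9/4 exponent is brutal in practice: the grid needed to resolve turbulence down to the dissipative scales grows roughly as N ~ Re^(9/4), so merely doubling the Reynolds number multiplies the point count by 2^(9/4) ≈ 4.8. At flight Reynolds numbers of 10^7 to 10^8, direct resolution is simply out of reach, which is why the plan leans on LES with modeling at the small scales plus targeted experiments.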

To have any hope of getting this right, we must have a very strong concept and technology basis. We need a good initial vector and a structured yet flexible approach, so defining the problem by systems and subsystems provides exactly the encapsulation and recursive definition required to be infinitely interchangeable and expandable (limited only by computing resources). These tools must be intuitive and powerful enough to fully leverage parallel computing so that analysis doesn’t continue to be the bottleneck.

From a project cost and schedule perspective, it is imperative that the concept and its infrastructure form a suitable architecture, as more than two-thirds of project costs are locked in by the time the first design decisions are made. I’ve heard officials from DARPA claim, from their experience, that problems cost 1000x more to fix in operations than if caught in pre-acquisition stages.

START WITH THE DATA LAYERS

There are obviously a lot of competing factors in advanced aerospace and energy systems. To integrate these different domains (fluid, thermal, mechanical, electronic), we need an alternative to the current isolated, unidirectional “waterfall” engineering process. We need a unified HPC platform everyone can use to integrate systems, not just fluids or solids.

To take steps beyond theory into practice, to actually conceptualize, design, analyze, and build these systems, we need some amazing software and sustained discipline across many teams. Realistically, the problem must be approached with a strong systems framework and restraint on exotics. (“Can I personally actually build this?”) I’ve been participating in various AIAA and peer-reviewed conferences over the past few years, and there is certainly some impressive work out there. I think the CREATE suite from the DoD has taken a real but ambitious approach to giving the military turn-key analysis tools. However, I haven’t seen many commercial or academic groups with their eye (or checkbook) on the systems challenge of next-gen engineering, let alone an architecture that both demonstrates multi-disciplinary functionality now (CFD, FEA, etc.) and remains relevant to future computing.

I pulled away from the aerospace industry to dedicate just under 20,000 hours to this software infrastructure, collaborating with a few bright graduate researchers at Stanford, MIT, and the Von Karman Institute. We made hundreds of thousands of code contributions across more than two thousand commits. We burned through a small fortune of friends-and-family investment and leveraged technology to work more efficiently towards NASA’s decadal objectives: things, we have reason to believe, few others are attempting. It is now getting exciting…

Despite funding obstacles, we’ve broken through major barriers and are ready to apply our new advanced engineering platform to new projects — leveraging modern software machinery (C++14, OpenCL) and processing hardware (CPU, GPU, FPGA). Our integrated engineering environment provides end-to-end capabilities for such grand challenges. We can now build simulations out of different systems and algorithms and dispatch them to any processor. Aerospace is only the first use case.

You’ve really got to be a fearless generalist to take on something like this. But you’ve also got to be able to dive deep into key areas and understand the process from first (and zeroth) principles. Many fields of mathematics and technical practice, concentrated into one applied real-world problem. Since you can’t rely on books for answers to new questions, you must interrogate the fundamental laws and stay cognizant of our human constructs and the assumptions made therein.

Is it possible to optimize against physics while also providing a practical engineering path?

I’ve pondered such quandaries for many years, but now I think I have a clear path. Over the next few years I hope to demonstrate and share what I can on this blog.

-Graham

P.S. All this talk about jet engine thrust reminds me of the time a senior engineer at Blue Origin emailed a challenge question to the company, along the lines of: if force is the integral of pressure over area, what parts of a jet engine are most responsible for its net thrust generation?

Do you know?

It appears most of the company did not. I took a stab:

It’s the pressure differential across the bypass compressor blades, probably followed by the central jet exit (and its compressor blades and internal cowling).
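My reasoning, in control-volume terms (the standard bookkeeping, simplified by neglecting fuel mass flow): the net thrust is F = mdot * (v_e − v_0) + (p_e − p_0) * A_e, but that momentum change has to show up somewhere as pressure acting on metal. Integrate pressure over every wetted surface and most of the net forward push lands on the fan and compressor blading and the internal cowl and nozzle surfaces, partially cancelled by rearward pressure on other components; the familiar “thrust out the back” is just the sum of all those surface integrals.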

new GPU mesh generator

Everyone hates meshing. Well, that’s about to change…

To give a brief taste of what’s to come (for you scientists and engineers), here’s a short intro to our brand-new hardware-accelerated mesh generator. What’s really cool about it is that it creates near-optimal tetrahedral (and eventually hybrid) meshes essentially automatically; there are only a few knobs to turn. It does this by defining shape with an implicit Signed Distance Field (SDF), and other refinement controls (grading, curvature, feature size) can be derived directly from the background SDF. The SDF can be defined with analytical functions or with triangulated faceted surface references. We describe the process in our recent ICCFD10-145 paper.

Nodes are connected by Delaunay triangulation, which establishes the associativity used to repel neighboring nodes along each interior edge. Pseudo-time integration displaces all node positions, though those near boundaries or SDF surfaces are iteratively projected back (along the normal) to the interface to conform. Once the nodes rearrange and move more than a characteristic radius, the set is re-triangulated to update associativity and the process repeats. After many iterations, all elements are annealed to their local size function and the shape is resolved. The last step is to remove illegal elements from the Delaunay-defined convex hull in a surface-unwrapping procedure.
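To make the loop concrete, here is a stripped-down C++ sketch of a single relaxation pass, with the Delaunay step abstracted away (the edge list is taken as given) and a unit-sphere SDF standing in for a real background field. It is illustrative only, not the XCOMPUTE implementation, and all names are made up for the example:

    #include <algorithm>
    #include <array>
    #include <cmath>
    #include <utility>
    #include <vector>

    using Vec3 = std::array<double, 3>;

    // Analytic stand-in for the background SDF: signed distance to a unit sphere.
    double sdf(const Vec3& p) {
        return std::sqrt(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]) - 1.0;
    }

    // Outward normal of the SDF by central differences.
    Vec3 sdfNormal(const Vec3& p) {
        const double e = 1e-6;
        Vec3 n{};
        for (int k = 0; k < 3; ++k) {
            Vec3 a = p, b = p;
            a[k] += e; b[k] -= e;
            n[k] = (sdf(a) - sdf(b)) / (2.0 * e);
        }
        const double len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
        for (double& c : n) c /= len;
        return n;
    }

    // One pseudo-time relaxation pass: repel along each (Delaunay) edge shorter
    // than the local target size h, displace all nodes, then project any node
    // that left the domain back onto the zero level set along the normal.
    void relaxOnce(std::vector<Vec3>& nodes,
                   const std::vector<std::pair<int, int>>& edges,
                   double h, double dt) {
        std::vector<Vec3> force(nodes.size(), Vec3{});
        for (const auto& e : edges) {
            const int i = e.first, j = e.second;
            Vec3 d{nodes[j][0]-nodes[i][0], nodes[j][1]-nodes[i][1], nodes[j][2]-nodes[i][2]};
            const double L = std::sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
            const double f = std::max(h - L, 0.0) / std::max(L, 1e-12); // push apart only if too short
            for (int k = 0; k < 3; ++k) { force[i][k] -= f*d[k]; force[j][k] += f*d[k]; }
        }
        for (std::size_t i = 0; i < nodes.size(); ++i) {
            for (int k = 0; k < 3; ++k) nodes[i][k] += dt * force[i][k];  // pseudo-time step
            const double phi = sdf(nodes[i]);
            if (phi > 0.0) {                                              // outside: project back
                const Vec3 n = sdfNormal(nodes[i]);
                for (int k = 0; k < 3; ++k) nodes[i][k] -= phi * n[k];
            }
        }
    }

In the full algorithm this pass alternates with re-triangulation whenever nodes have moved more than a characteristic radius, and continues until elements anneal to the local size function; the convex-hull cleanup and surface unwrapping then finish the job.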

The process is illustrated with the famous Stanford Bunny:

Stochastic meshing process with 20K nodes, unrefined: a) Shape is defined by reference surfaces used to solve SDF background. b) Nodes are injected into valid SDF locations and equalized. c) Triangulation updates topology with over-wrapped mesh (comprised of convex hulls). d) Elements are evaluated and cleaned against SDF while surfaces are unwrapped. e) Original (darker) mesh superimposed over new mesh to visualize spatial deviation. f) Close-up showing error around high-curvature features.

Assuming adequate background SDF resolution, this technique proves fairly agnostic to complexity. It’s a robust method, but it requires some serious parallel computing when converting from discrete geometries. For F reference faces, the winding-number calculation scales on the order of F log F to determine inside versus outside, something very easy for humans but heavy for machines! Here’s a slightly more realistic engineering-component example:

In this test, a tet mesh of 40,000 nodes (about 200,000 cells) is generated for finite element analysis (FEA) of a liquid-hydrocarbon rocket engine injector (quarter section). Slices through the SDF-derived size field are shown, followed by the resulting mesh. Curvature refinement is not enabled here, hence the lack of size refinement around fine features…
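Coming back to the winding-number cost mentioned above: one classical way to write the per-point inside/outside test is the Van Oosterom-Strackee solid-angle sum. A brute-force sketch follows (illustrative, not necessarily what our accelerated production path does, which is exactly where the parallel hardware earns its keep):

    #include <array>
    #include <cmath>

    using Vec3 = std::array<double, 3>;
    struct Tri { Vec3 a, b, c; };

    namespace {
    Vec3 sub(const Vec3& u, const Vec3& v) { return {u[0]-v[0], u[1]-v[1], u[2]-v[2]}; }
    double dot(const Vec3& u, const Vec3& v) { return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]; }
    Vec3 cross(const Vec3& u, const Vec3& v) {
        return {u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0]};
    }
    double norm(const Vec3& u) { return std::sqrt(dot(u, u)); }
    }

    // Generalized winding number of point p with respect to a closed triangulated
    // surface: sum the signed solid angle of every face and normalize by 4*pi.
    // Values near 1 are inside, near 0 outside.
    double windingNumber(const Vec3& p, const Tri* tris, int count) {
        const double kPi = 3.14159265358979323846;
        double total = 0.0;
        for (int t = 0; t < count; ++t) {
            const Vec3 r1 = sub(tris[t].a, p), r2 = sub(tris[t].b, p), r3 = sub(tris[t].c, p);
            const double l1 = norm(r1), l2 = norm(r2), l3 = norm(r3);
            const double num = dot(r1, cross(r2, r3));
            const double den = l1*l2*l3 + dot(r1, r2)*l3 + dot(r2, r3)*l1 + dot(r3, r1)*l2;
            total += 2.0 * std::atan2(num, den);     // signed solid angle of this face
        }
        return total / (4.0 * kPi);
    }

Naively this is one pass over all F faces per query point; hierarchical acceleration and many-core hardware are what keep the total cost manageable across millions of background samples.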

To learn more, please visit www.xplicitcomputing.com/geometry


coming soon — adventures in tech

Welcome to my new blog!

I hope to share some fun stuff in the realm of numerical simulation, machine design, and aerospace systems. Realistically, lots of other crazy stuff will pop up along the way…

A bit about me: educated at Harvey Mudd College in general engineering. Had a few internships at NASA JPL/Caltech supporting Dawn, MSL, and hypervelocity impactor programs. Then a short stint at Blue Origin developing engine infrastructure. Found myself reinventing CFD numerics on my laptop in MATLAB to address engine and facility challenges at Virgin Galactic. Then founded Xplicit Computing and worked very hard to bring all the best ideas and people together. Broke code and new ground in HPC…

Over the last five years I’ve been very focused on building the software data layers to enable next-gen engines and power systems. XCOMPUTE enables us to define and simulate complex systems as building blocks for heterogeneous (CPU/GPU) algorithm processing. This means we can solve fluid, solid, or any other problem in a unified architecture, leveraging the latest in C++ and OpenCL. Powerful advanced simulation is now available on desktop computers! Computing tools are now accessible to many more people, a huge impact on businesses small and large.
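As a flavor of what “heterogeneous” means at the lowest level, here is a deliberately tiny sketch of the standard OpenCL host-side pattern (not the XCOMPUTE API itself): the same kernel source gets compiled for and dispatched to whichever device type is available.

    #include <CL/cl.h>
    #include <cstdio>
    #include <vector>

    // Same kernel source, dispatched to CPU or GPU. Error handling is trimmed
    // to keep the sketch short; real code checks every cl_int return value.
    static const char* kSource =
        "__kernel void scale(__global float* x, const float a) {\n"
        "    x[get_global_id(0)] *= a;\n"
        "}\n";

    bool runOn(cl_device_type type) {
        cl_platform_id platform = nullptr;
        cl_device_id device = nullptr;
        if (clGetPlatformIDs(1, &platform, nullptr) != CL_SUCCESS) return false;
        if (clGetDeviceIDs(platform, type, 1, &device, nullptr) != CL_SUCCESS) return false;

        cl_int err = CL_SUCCESS;
        cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
        cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

        std::vector<float> host(1024, 1.0f);
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                    host.size() * sizeof(float), host.data(), &err);

        cl_program program = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, &err);
        clBuildProgram(program, 1, &device, "", nullptr, nullptr);
        cl_kernel kernel = clCreateKernel(program, "scale", &err);

        const float a = 2.0f;
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
        clSetKernelArg(kernel, 1, sizeof(float), &a);

        size_t global = host.size();
        clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
        clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, host.size() * sizeof(float),
                            host.data(), 0, nullptr, nullptr);
        std::printf("first element after kernel: %f\n", host[0]);

        clReleaseMemObject(buf);
        clReleaseKernel(kernel);
        clReleaseProgram(program);
        clReleaseCommandQueue(queue);
        clReleaseContext(ctx);
        return true;
    }

    int main() {
        if (!runOn(CL_DEVICE_TYPE_GPU)) runOn(CL_DEVICE_TYPE_CPU);  // fall back if no GPU
    }

The real value, of course, is in the layers above this boilerplate: shared data structures and dispatch so that a fluid solver, a structural solver, or a mesher all ride the same machinery.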

I’m also into a variety of music (piano improv, percussion, etc), cultural foods (many types), and philosophy (Spinoza-Einstein).

Very exciting new things on the horizon. Stay tuned…