the π theorem

Yes, it’s π day (3.14) and so I am obliged to write a short something on that topic.

Most people are celebrating the free or $3.14 pizza today. My mother is a little ticked that I didn’t tell her until the evening.

Academic types are geeking out over the proximity of the base-10 decimal number to today’s Gregorian date. Some might even throw a party at 1:59 local time.

I'm more excited about Buckingham-π.

Yes, that is lovely. I'd take that over more math any day. BTW, do you think they serve Buckingham-Pi at Buckingham Palace? I'll take two stones for 3.14 pounds please. [source]

Unfortunately, no. It's a math thing, again! But it has awesome applications in science and engineering through dimensional analysis (just using units!):

A π-group is a product of physical variables (with units), each raised to some power, such that the result is dimensionless (it does not depend on length, time, mass, temperature, etc.).

These are used to construct a basis for all physically consistent, dimensionless combinations of the system's variables, hinting at possible underlying relationships or mechanisms.
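For reference, the theorem in its standard form: if a physically meaningful relation

$$f(q_1, q_2, \ldots, q_n) = 0$$

involves n variables spanning k independent fundamental dimensions, it can be rewritten as

$$F(\pi_1, \pi_2, \ldots, \pi_{n-k}) = 0$$

in terms of j = n - k dimensionless π-groups.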

Buckingham-π is recommended in multi-dimensional regression studies on instrumented data where there isn't a clear relationship between the measurable physical variables. These groupings imply a similar solution space if the relative relationships are enforced (asserting analogous conditions). This lets us test miniature bridges and wings in the lab, and infer fundamental relationships between dimensional quantities, like E=mc^2.

Shamelessly stolen from MIT notes; the source for everything you don’t know.

We do this all the time in our heads but less rigorously. We know not to add length and time, but can divide them to make the units of speed. We know you can only add energy to energy, etc. Energy divided by distance has units of force — one of innumerable dimensional truisms.

As an example, the aspect ratio of your computer screen is a π-group, as width divided by height is dimensionless. With n=2 variables and k=1 dimension (length), there is j = 2-1 = 1 possible π-group.

Non-dimensional parameters such as the Reynolds number comprise the π-group formed from density, velocity, length, and viscosity. We expect self-similar relationships (and system behavior) if Re is held constant:
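In the usual notation, with density ρ, velocity U, characteristic length L, and dynamic viscosity μ:

$$\mathrm{Re} = \frac{\rho U L}{\mu}$$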

In terms of SI (kg-m-s) units, this equates to dimensional nullity:
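Writing out the units (with μ in kg m⁻¹ s⁻¹):

$$\left[\frac{\rho U L}{\mu}\right] = \frac{\mathrm{kg}}{\mathrm{m}^3}\cdot\frac{\mathrm{m}}{\mathrm{s}}\cdot\mathrm{m}\cdot\frac{\mathrm{m\,s}}{\mathrm{kg}} = \mathrm{kg}^{0}\,\mathrm{m}^{0}\,\mathrm{s}^{0} = 1$$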

The above has n=4 variables and k=3 dimensions, so j = n-k = 1: this is again the only possible independent combination for this group. If we had more variables, we would have more possible π-groups and more exponents to determine.
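Here is a minimal sketch of how one could automate finding these exponents, using SymPy (my illustration, not part of the original post). The columns of the dimension matrix are ρ, U, L, μ and the rows are the exponents of kg, m, s in each variable's units; the null space gives the π-group exponents:

```python
# Recover the Reynolds pi-group as the null space of the dimension matrix.
# Columns: rho, U, L, mu.  Rows: exponents of kg, m, s in each variable's units.
from sympy import Matrix

D = Matrix([
    [ 1,  0, 0,  1],   # kg: rho ~ kg m^-3,  mu ~ kg m^-1 s^-1
    [-3,  1, 1, -1],   # m
    [ 0, -1, 0, -1],   # s
])

basis = D.nullspace()   # expect j = n - k = 4 - 3 = 1 basis vector
print(basis[0].T)       # [-1, -1, -1, 1]  ->  mu/(rho U L) = 1/Re
```

SymPy happens to return the exponents of 1/Re rather than Re, which is the same π-group since any power of a π-group is still dimensionless.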

While integer exponents are expected on the dimensional variables, non-dimensional π-groups are technically allowed to have non-integer (even irrational) exponents.

Since 1 raised to any power is still 1, a π-group raised to any power is still a π-group. The exponents relating the weightings between multiple π-groups are constant for a specific configuration; those coefficients are evaluated using an experimental test matrix over each independent variable.

Usually, integer exponents can be recovered for all variables by scaling every exponent by a common factor, since the ratios often divide cleanly. In highly empirical regressions, though, irregular decimals such as 0.37 are not uncommon, and they cannot be factored away.

Read more about Buckingham-π on Wikipedia

the problem with technical blogging

Now that I think about it — there are many reasons to make no attempt. I’m still going to do it…

Following my last post 'towards sustained hypersonic flight', I planned to launch into the 'thermodynamics of propulsion'. Initially I had modest ambitions, but somewhere around 20 hours of derivations and eight pages of differential thermodynamics, I realized that my efforts were futile: one could just go to https://en.wikipedia.org/wiki/Propulsive_efficiency and read a more complete version with references.

I'm refocusing the work to provide my angle on the analysis, and perhaps have a few entertaining Graham-isms in there. The value being: things that are not found elsewhere. I need to establish some technical foundations so that later statements and diagrams will have merit.

This got me thinking about what I'm up against in getting the ideas and feelings across. People act on emotions. Eventually, we need action.

Here’s an example of typical accurate technical writing, in ‘robot mode’:
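Something like the following semi-discrete update, written in generic form (my notation, not the exact equation: cell average $\bar{U}_i$, cell volume $V_i$, neighbor cell $j$ across each face $f$ with area $A_f$, numerical flux $\hat{F}$, and a linear reconstruction to the face for second-order accuracy):

$$\frac{d\bar{U}_i}{dt} = -\frac{1}{V_i}\sum_{f \in \partial\Omega_i} \hat{F}\!\left(\bar{U}_i + \nabla U_i \cdot \mathbf{r}_{if},\ \bar{U}_j + \nabla U_j \cdot \mathbf{r}_{jf}\right) A_f$$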

A typical 2nd-order finite-volume scheme, expressed as a single equation. You'd probably vomit if we posted the actual algorithm code here. It's certainly elegant, but not good dinner or date conversation. Therefore it doesn't belong on a blog and should only be discussed in a study or toilet.

MATH IS HARD

Who wants to read about finite math schemes after (or during) work?

Against my hopes and dreams, I've learned this to be true for most readers. If you're going to create a blog that can be enjoyed by many, rather than by a few experts in the field, the discussion has to be direct and interesting, not lost in the details of mathematical schemes. At the same time, the content must not ignore those who have some authority in the field…so consider the blog an entry point for other articles and papers.

The balance is conveying the main technical idea without sacrificing readability, accuracy, or applicability to professional work!

The original Apple Phone. Talk is cheap, unless it provides enabling information….

IDEAS ARE CHEAP

Like patents, it comes down to: "Does the disclosure enable the tech?" There is a big gap between understanding how something works and being able to actually do it. For instance, Intel, Nvidia, AMD, and other chip manufacturers expect a certain amount of free information and technical literacy in the field of semiconductors; it's good business, as it keeps customers and prospective employees engaged.

Industrial know-how is deep and guarded. Some is documented, some is defined as process, and some is locked inside people's brains (a strategy to remain a valuable asset). Currently, what I've put online perhaps accounts for about 1% of my internal scientific memos and documents. As more is curated for public consumption, perhaps this figure will approach 10%. This is to be expected for long-term, high-tech R&D as the information coalesces.

For instance, the technical documentation for XCOMPUTE is over 1000 pages, which does a decent job describing the structure of the code, but there is no substitute for its dense 60,000 lines of C/C++. This is the living embodiment of thousands of ideas operating in collective harmony. It was fascinating to guide its natural evolution from text document to bona fide library. Talking about it is inherently reductionist, yet we still write papers and do our best to describe key concepts and processes in a universal format.

I wish I could openly share everything…aerospace has certain inherent limitations. I have liberties on some matters…but the most important things I’m going to keep to myself. It’s a delicate act to share without helping competitors — lessons learned in recent years. I’ve been working towards a big plan for about a decade, and it will have to come out in phases as it unfolds. Lots of twists and turns!

A map of the internet, March 2019. Now that much knowledge is free and common, the really valuable stuff is off-line; specialized abilities become more rare as they deviate from this common denominator. A recent NY Times article fears that this could exacerbate socioeconomic issues.

OMNISCIENT INTER-WEB (maybe?)

High-quality information is readily available on long-standing websites such as Wikipedia, and in publications out of major universities and scientific proceedings. If you're an expert or produce new information on a topic, you've probably contributed material online…millions of articles have become the new digital encyclopedic compendium.

My writing and mathematical escapades here cannot match that; they're intended to be a technical exploration rather than a reference. There may even be math errors! It would be a waste of our time to do anything else.

It’s not that I think I’m super original, but one has hardly any chance of originality if they aren’t allowed to re-synthesize a field and make some mistakes along the way. (Of course, you’re going to need to read my papers and/or come work with me to really understand the technical approach.)

A loose approach to R&D is only appropriate in early phases: one can't afford big mistakes in critical engineering applications. Part of the art is slowly developing a deterministic (and stable) design and analysis process that uses analytical methods, computation, and experiment to converge on design decisions. Of course, as the project matures, we reference standard documents such as MMPDS and AISC to refine engineering data. Naturally, conservative estimates are used where uncertainty is high, perhaps uncertainty that reflects a lack of previous failures to learn from?

EVERYONE’S AN EXPERT (ha!)

  • Success is built on top of failures
  • If you haven’t failed, you haven’t pushed hard enough.
  • Worthy things tend to be difficult, and thus require many failures.
Dunning-Kruger Effect with superimposed population densities: We’ve all been there. Well, most of us… the black line is the “wisdom-confidence curve” showing that inexperienced persons tend to think they know a lot. After doing some stupid shit on the peak of Mt Stupid, they plunge into the valley of despair. It’s a long climb to gain all that knowledge and eventually confidence returns. The colored lines are log-normal population densities for varying distribution widths. The red line is a proficient homogeneous workforce. The green line is a bit more diverse with more gurus and more idiots. The blue is a widely-diverse population with a few more geniuses but at the cost of many more ass-hats. You know, like employees who drive box trucks into doors or bridges.

Everyone has an opinion — and now we can voice it online without social accountability! Further, with Google search we can easily find the “facts”.

We can all talk about causality and professional judgment (in retrospect), but few are adept at practicing and managing the inherent risks when there are many competing real factors at play. The process becomes somewhat of a personal art, pulling from a myriad of experiences. Mastering this art often requires major tribulations and experience that cannot be emulated by AI or a novice. Even the best had to crawl through a lot of pain and anguish, and I think those who don't settle continuously find themselves at odds with the status quo.

It's important to note that there is no objective authority in science as to who is right and who is wrong! Certain institutions and individuals certainly lead in credibility, but that should remain open to challenge. Even when it comes to the Standard Model, there is room for improvement. Therefore it's imperative not just to explore "local optimizations in knowledge", but to understand the underlying principles, enabling one to extrapolate beyond the well-known.

I guess I'm saying: I'm not really the kind of engineer or scientist to shoot from the hip. However, as you try to do more ambitious things, more situations require it, ideally with tempered composure. I'm old-school; I wish everything could be solved with analytical closed-form solutions. I've since also experienced the beauty and power of computation, an emerging pillar of science. But at the end of the day, none of that means anything if the experiment or test data says otherwise.

The challenge is not doing it; it is doing it well…or better than before.

towards sustained hypersonic flight

What comes after the Space Shuttle?

New Glenn? Falcon??

I believe something more radical is on the horizon…

Summary: a modernized X30 National Aero-Space Plane with advanced computing under the hood.

About six years ago, I was fortunate to receive hundreds of hours of guidance from the CFD chairman at Boeing (now at Blue Origin). As my startup’s acting VP of Research, he helped us establish technical requirements for a new simulation platform for next-gen systems. He set us on a path, and I worked to bring it all together, pulling from a spectrum of experiences at JPL, Blue Origin, and Virgin Galactic…

Why does hypersonic flight require a new engineering approach?

Banner image, courtesy https://en.wikipedia.org/wiki/Specific_impulse

ABSURD ENERGIES

By definition, "hypersonic" means much faster than sound. There is no formal demarcation between supersonic and hypersonic, but design philosophies start to deviate markedly as kinetics take over. At sufficient speeds and conditions, traditional compressible flow theory becomes inaccurate due to additional energy modes of excitation, storage, and transmission that were not included in the original model. As the specific kinetic energy approaches molecular bond energies, a growing fraction of the gas dissociates, inhibiting chemical recombination and further limiting reaction progress (characterized by Damkohler numbers). A transition occurs as radiation comes to dominate the thermal modes. Plasma density increases as the free-stream energy density approaches the Gibbs potentials of the valence electrons. At some point, you can't extract net positive work because combustion doesn't progress (until recombination outside the engine).
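A rough order-of-magnitude check (my numbers, assuming a speed of sound near 300 m/s at altitude): at Mach 15 the specific kinetic energy is

$$\tfrac{1}{2}v^2 \approx \tfrac{1}{2}(4500\ \mathrm{m/s})^2 \approx 10\ \mathrm{MJ/kg},$$

the same order as the O₂ bond dissociation energy of roughly 498 kJ/mol ÷ 0.032 kg/mol ≈ 15 MJ/kg.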

For air, I'd say hypersonic phenomena set in around M~6. Very few vehicles to date (or planned) have such capabilities, obviously.

However, I think it is within our technological grasp to cruise at Mach 15+ with the right configuration and engineering approach, enabling point-to-point travel and booster services for deployables and satellites.

In time, I intend to demonstrate a clear pathway forward. First we must understand the basic principles and underlying processes…

PERFORMANCE

Perhaps close to, or slightly worse than, current commercial high-bypass turbofan engines, and certainly worse than future hydrogen turbofans!

However, it is a marked improvement over traditional hydrogen-oxygen rocket performance: not only does the air-breathing vehicle not have to carry its own oxidizer, it can also control its effective specific impulse by varying the ratio of bypass (air) to heat input (fuel).

To move beyond traditional liquid-fueled rockets for high-speed trans-atmospheric flight, we can extract more thrust per watt out of an air-breathing engine by including more air (the "working fluid") in the propulsive process at a lower jet speed (the difference between engine outlet and inlet velocities). We essentially spread out the jet power, both to maximize fuel efficiency ("effective specific impulse") and to have the jet outlet velocity match the free-stream speed to maximize jet kinetic efficiency. (Ideally, once ejected, the exhaust would stand still in the reference frame of the surrounding fluid. However, this is not possible at very low speeds because the minimal mass flux through the engine generates minimal net thrust, albeit at very high efficiency! At high Mach numbers, there isn't enough delta-v in the exhaust to keep up with the vehicle speed, and a gradual drop in thermodynamic efficiency is expected.)
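This trade is captured by the classic Froude propulsive-efficiency relation (my notation: free-stream speed $v_0$, jet exit speed $v_j$, mass flow $\dot{m}$):

$$F = \dot{m}\,(v_j - v_0), \qquad \eta_p = \frac{F\,v_0}{F\,v_0 + \tfrac{1}{2}\dot{m}\,(v_j - v_0)^2} = \frac{2}{1 + v_j/v_0}$$

Pushing $v_j$ toward $v_0$ drives $\eta_p$ toward 1, but the thrust per unit mass flow goes to zero, which is the low-speed limitation noted in the parenthetical above.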

An example everyone can observe: this is why commercial jets have big engines with big bypasses, so that the majority of the thrust comes from the fan rather than the core engine flow. I think nowadays the bypass ratio is something like 8:1. The exhaust velocity is roughly sonic at Mach 0.85 cruise, all to maximize fuel economy, the driving economic factor for air travel and a significant portion of your airfare. Not to mention the ecological impact. Image courtesy https://en.wikipedia.org/wiki/General_Electric_GE90

The average kinetic energy of the vehicle scales as the square of its speed, while the power required to sustain flight scales as the cube.
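The cube comes from drag: at a roughly constant drag coefficient $C_D$ and reference area $A$ (ignoring how $\rho$ and $C_D$ vary with altitude and Mach number),

$$D = \tfrac{1}{2}\rho v^2 C_D A, \qquad P = D\,v = \tfrac{1}{2}\rho v^3 C_D A.$$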

What does this mean about powered vehicles that fly very fast?

INTEGRATED ENGINES

As vehicle power density scales as speed cubed, propulsion starts to dominate the design of the vehicle in hypersonics. The vehicle becomes a big flying engine as M→25, and the project schedule and funding should reflect this. Based on flight profile and lift requirements, a linear "wave-rider" design may be considered versus a more practical annular layout (which is also more efficient at carrying large thermal-stress loads and propellant storage). Fuel density remains important, but not as much as net specific energy density.

Sub-cooled liquid hydrogen is used as both fuel and coolant, and if pressurized above its critical point it has insane heat capacity, but at the cost of varying density (and the Nusselt number used in regenerative cooling analysis). Both active and passive cooling strategies are required to offset vehicle and engine heat transfer. An open cycle is unacceptable for overall performance, so boundary layer coolant (BLC) must be injected on leading surfaces and then ingested and combusted (as part of a turbulent shock-detonation inlet). Combustion takes place in specialized subsonic burners before being mixed with the primary flow as part of a closed staged-combustion cycle. Liquid oxygen is supplied to the combustors for take-off and LEO injection.

Engine length becomes an impediment in smaller vehicles (such as those encountered in any research/test program) due to finite combustion reaction time, which requires a longer characteristic chamber length to ensure relatively complete combustion (Damkohler numbers close to one). Net chemical power extraction is balanced against thermal and drag penalties, so the analysis must weigh all of these and resolve rate-limited reacting large eddy simulation (LES), since physical testing has inherent limitations in replicating and measuring the combustion environment. Simulations are used for analysis and optimization, and to characterize transfer functions to be applied in the machine's advanced onboard control system.

Although a hypersonic compressor and diffuser does not use rotating turbomachinery (owing to excessive thermal stresses), the supporting cooling and fluid-control systems remain a large-scale systems engineering challenge. The technical scope is akin to a nuclear power plant that can fly and requires multiple modes of operation. Structural engineering must make no assumptions regarding thermal and acoustic environments, as the vehicle will pass through many regimes, expected and off-nominal. Quantifying dynamic load environments requires experiment or flight experience, as the computing resources needed to resolve turbulent micro-structures scale as the Reynolds number to the 9/4 power, more than the square of speed!

To have any hope of getting this right, we must have a very strong concept and technology basis. We need a good initial vector and a structured yet flexible approach…defining the problem by systems and subsystems provides exactly the encapsulation and recursive definition required to be infinitely interchangeable and expandable (limited only by computing resources). These tools must be intuitive and powerful enough to fully leverage parallel computing so analysis doesn't continue to be the bottleneck:

From a project cost and schedule perspective, it is imperative that the concept and its infrastructure form a suitable architecture, as more than 2/3 of project costs are locked in by the time the first design decision is made. I've heard officials from DARPA claim, from their experience, that problems cost 1000x more to fix in operations than if caught in pre-acquisition stages.

START WITH THE DATA LAYERS

There are obviously a lot of competing factors in advanced aerospace and energy systems. To integrate these different domains (fluid, thermal, mechanical, electronic), we need an alternative to the current isolated, unidirectional "waterfall" engineering process. We need a unified HPC platform everyone can use to integrate systems, not just fluids or solids.

To take steps beyond theory into practice (to actually conceptualize, design, analyze, and build these systems), we need some amazing software and sustained discipline across many teams. Realistically, the problem must be approached with a strong systems framework and restraint on exotics. ("Can I personally actually build this?") I've been participating in various AIAA and peer-reviewed conferences over the past few years, and there is certainly some impressive work out there. I think the CREATE suite from the DoD has taken a real but ambitious approach to give the military turn-key analysis tools. However, I haven't seen many commercial or academic groups with their eye (or checkbook) on the systems challenge of next-gen engineering, let alone an architecture that demonstrates multi-disciplinary functionality now (CFD, FEA, etc.) while remaining relevant to future computing.

I pulled away from the aerospace industry to dedicate just under 20,000 hours to this software infrastructure, collaborating with a few bright graduate researchers at Stanford, MIT, and the Von Karman Institute. We made hundreds of thousands of code contributions across more than two thousand commits. We burned through a small fortune in friends-and-family investment and leveraged technology to work more efficiently towards NASA's decadal objectives; things, we have reason to believe, few are attempting. It is now getting exciting…

Despite funding obstacles, we’ve broken through major barriers and are ready to apply our new advanced engineering platform to new projects — leveraging modern software machinery (C++14, OpenCL) and processing hardware (CPU, GPU, FPGA). Our integrated engineering environment provides end-to-end capabilities for such grand challenges. We can now build simulations out of different systems and algorithms and dispatch them to any processor. Aerospace is only the first use case.

You've really got to be a fearless generalist to take on something like this. But you've also got to be able to dive deep into key areas and understand the process from first (and zeroth) principles. Many fields of mathematics and technical practice, concentrated into one applied real-world problem. Since you can't rely on books for answers to new questions, you must interrogate the fundamental laws and be cognizant of our human constructs and the assumptions made therein.

Is it possible to optimize against physics while also providing a practical engineering path?

I’ve pondered such quandaries for many years, but now I think I have a clear path. Over the next few years I hope to demonstrate and share what I can on this blog.

-Graham

P.S. All this talk about jet engine thrust reminds me of this time a senior engineer at Blue Origin emailed a challenge question to the company along the lines of – if force is the integral of pressure times area, what parts of a jet engine are most responsible for its net thrust generation?

Do you know?

It appears most of the company did not. I took a stab:

It’s the pressure differential across the bypass compressor blades, probably followed by the central jet exit (and its compressor blades and internal cowling).

new GPU mesh generator

Everyone hates meshing. Well that’s about to change…

To give a brief taste of what's to come (for you scientists and engineers), here's a short intro to our brand-new hardware-accelerated mesh generator. What's really cool about it is that it creates near-optimal tetrahedral (and eventually hybrid) meshes essentially automatically. There are only a few knobs to turn. It does this by defining shape with an implicit Signed Distance Field (SDF); other refinement controls (grading, curvature, feature size) can be derived directly from the background SDF. The SDF can be defined with analytical functions or with triangulated, faceted reference surfaces. We describe the process in our recent ICCFD10-145 paper:

Nodes use Delaunay triangulation to establish associativity, then repel neighboring nodes along each interior edge. Pseudo-time integration displaces all node positions…though those near boundaries or SDF surfaces are iteratively projected back (along the normal) to the interface to conform. Once the nodes rearrange and move more than a characteristic radius, the set is re-triangulated to update associativity and the process repeats. After many iterations, all elements are annealed to their local size function and the shape is resolved. The last step is a surface-unwrapping procedure that removes illegal hulls from the Delaunay-defined convex hull.
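For intuition, here is a heavily simplified sketch of that relaxation loop, in the spirit of DistMesh-style smoothing (my own toy version, not the XCOMPUTE implementation). It assumes a callable sdf(p) that is negative inside the shape and a target edge-length function h(p):

```python
# Toy sketch of SDF-driven node relaxation (illustrative only).
import numpy as np
from scipy.spatial import Delaunay


def sdf_normals(sdf, pts, eps=1e-6):
    """Unit gradient of the SDF at each point, by central differences."""
    grad = np.zeros_like(pts)
    for k in range(pts.shape[1]):
        step = np.zeros(pts.shape[1])
        step[k] = eps
        grad[:, k] = [(sdf(p + step) - sdf(p - step)) / (2 * eps) for p in pts]
    norms = np.linalg.norm(grad, axis=1, keepdims=True)
    return grad / np.maximum(norms, 1e-12)


def relax(pts, sdf, h, dt=0.2, n_iter=200, retri_tol=0.1):
    pts = np.asarray(pts, dtype=float)
    tri = Delaunay(pts)
    last = pts.copy()
    for _ in range(n_iter):
        # Re-triangulate once nodes have drifted a characteristic distance
        if np.max(np.linalg.norm(pts - last, axis=1)) > retri_tol * h(pts.mean(axis=0)):
            tri = Delaunay(pts)
            last = pts.copy()
        # Unique interior edges of the current triangulation
        edges = {tuple(sorted((s[i], s[j])))
                 for s in tri.simplices
                 for i in range(len(s)) for j in range(i + 1, len(s))}
        force = np.zeros_like(pts)
        for a, b in edges:
            d = pts[b] - pts[a]
            length = np.linalg.norm(d)
            target = h(0.5 * (pts[a] + pts[b]))
            # Repulsive-only "spring": push nodes apart when closer than the target
            f = max(target - length, 0.0) * d / max(length, 1e-12)
            force[a] -= f
            force[b] += f
        pts = pts + dt * force            # pseudo-time integration
        # Project escaped nodes back onto the zero level set along the SDF normal
        dist = np.array([sdf(p) for p in pts])
        out = dist > 0
        if np.any(out):
            pts[out] -= dist[out, None] * sdf_normals(sdf, pts[out])
    return pts, Delaunay(pts)
```

The real method also handles the size-field evaluation from the background SDF, boundary conformity, and the final unwrapping of illegal hulls described above, all of which this sketch omits.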

The process is illustrated with the infamous Stanford Bunny:

Stochastic meshing process with 20K nodes, unrefined: a) Shape is defined by reference surfaces used to solve SDF background. b) Nodes are injected into valid SDF locations and equalized. c) Triangulation updates topology with over-wrapped mesh (comprised of convex hulls). d) Elements are evaluated and cleaned against SDF while surfaces are unwrapped. e) Original (darker) mesh superimposed over new mesh to visualize spatial deviation. f) Close-up showing error around high-curvature features.

Assuming adequate background SDF resolution, this technique proves fairly agnostic to complexity. It's a robust method, but it requires some serious parallel computing when converting from discrete geometries. For F reference faces, the winding-number calculation scales on the order of F log F to determine inside versus outside, something very easy for humans but heavy for machines! Here's a slightly more realistic engineering component example:

In this test, a tet mesh of 40,000 nodes (about 200,000 cells) is generated for finite element analysis (FEA) of a liquid hydrocarbon rocket engine injector (quarter section). Slices through the SDF-derived size field are shown, followed by the resulting mesh. Curvature refinement is not enabled here, hence the lack of size refinement around fine features…

To learn more, please visit www.xplicitcomputing.com/geometry


coming soon — adventures in tech

Welcome to my new blog!

I hope to share some fun stuff in the realm of numerical simulation, machine design, and aerospace systems. Realistically, lots of other crazy stuff will pop up along the way….

A bit about me: educated at Harvey Mudd College in general engineering. Had a few internships at NASA JPL/Caltech supporting Dawn, MSL, and hypervelocity impactor programs. Then a short stint at Blue Origin developing engine infrastructure. Found myself reinventing CFD numerics on my laptop in Matlab to address engine and facility challenges at Virgin Galactic. Then founded Xplicit Computing and worked very hard to bring all the best ideas and people together. Broke code and new ground in HPC…

Over the last five years I've been very focused on building the software data layers to enable next-gen engines and power systems. XCOMPUTE lets us define and simulate complex systems as building blocks for heterogeneous (CPU/GPU) algorithm processing. This means we can solve fluid, solid, or any other problem in a unified architecture, leveraging the latest in C++ and OpenCL. Powerful advanced simulation is now available on desktop computers! Computing tools are now accessible to many more people…a huge impact on small and big businesses alike.

I’m also into a variety of music (piano improv, percussion, etc), cultural foods (many types), and philosophy (Spinoza-Einstein).

Very exciting new things on the horizon. Stay tuned…