the π theorem

Yes, it’s π day (3.14) and so I am obliged to write a short something on that topic.

Most people are celebrating the free or $3.14 pizza today. My mother is a little ticked that I didn’t tell her until the evening.

Academic types are geeking out over the proximity of the base-10 decimal number to today’s Gregorian date. Some might even throw a party at 1:59 local time.

I’m more excited about Buckingham-π.

Yes, that is lovely. I’d take that over more math any day. BTW, do you think they serve Buckingham-π at Buckingham Palace? I’ll take two stones for 3.14 pounds, please. [source]

Unfortunately, no. It is a math thing, again! It has awesome science and engineering dimensional-analysis applications (just using units!):

π-groups are products of physical variables (with units) raised to powers chosen so that the group product is devoid of dimensions. (It does not depend on length, time, mass, temperature, etc.)
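
In symbols, a π-group built from physical variables $q_1, \dots, q_n$ has the form:

$$\Pi = q_1^{a_1} \, q_2^{a_2} \cdots q_n^{a_n}, \qquad [\Pi] = 1$$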

This is used to construct an invariant vector space of all physically-consistent combinations of the system’s variables, hinting at possible underlying relationships or mechanisms.

Buckingham-π is recommended in multi-dimensional regression studies on instrumented data where there isn’t a clear relationship between measurable physical variables. These groupings imply a similar solution space if relative relationships are enforced (asserting analogous conditions). This allows us to test miniature bridges and wings in the lab, and to infer fundamental relationships between dimensional things, like E=mc^2.

Shamelessly stolen from MIT notes, the source for everything you don’t know.

We do this all the time in our heads but less rigorously. We know not to add length and time, but can divide them to make the units of speed. We know you can only add energy to energy, etc. Energy divided by distance has units of force — one of innumerable dimensional truisms.
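
For instance, in SI units:

$$\frac{[E]}{[L]} = \frac{\mathrm{kg\,m^2\,s^{-2}}}{\mathrm{m}} = \mathrm{kg\,m\,s^{-2}} = [F]$$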

As an example, the aspect ratio of your computer screen is a π-group, as width divided by height is dimensionless. With n=2 variables and k=1 dimension type (length), there is j = n-k = 2-1 = 1 possible π-group.
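
Explicitly:

$$\Pi = \frac{w}{h}, \qquad [\Pi] = \frac{\mathrm{m}}{\mathrm{m}} = 1$$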

Non-dimensional parameters such as the Reynolds number comprise a π-group of density, velocity, length, and viscosity. We expect self-similar relationships (and system behavior) if Re is held constant:
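
$$\mathrm{Re} = \frac{\rho\, u\, L}{\mu}$$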

In terms of SI (kg-m-s) units, this equates to dimensional nullity:
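
$$[\mathrm{Re}] = \frac{(\mathrm{kg\,m^{-3}})\,(\mathrm{m\,s^{-1}})\,(\mathrm{m})}{\mathrm{kg\,m^{-1}\,s^{-1}}} = \mathrm{kg}^{0}\,\mathrm{m}^{0}\,\mathrm{s}^{0} = 1$$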

The above has n=4 variables and k=3 dimensions, so j = (n-k) = 1 means this is also the only possible combination for this group. If we had more variables, we would have more possible π-groups and more exponents to determine.
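
For the mechanically inclined, here’s a minimal sketch (my own illustration, not from the MIT notes) of how the exponents can be found: stack the SI exponents of each variable into a dimensional matrix, and each nullspace basis vector is a π-group. The nullspace dimension is exactly j = n-k.

```python
# Minimal sketch: recover the Reynolds-number pi-group as a nullspace
# vector of the dimensional matrix. Rows are SI base dimensions
# (kg, m, s); columns are the variables rho, u, L, mu.
from sympy import Matrix

#                  rho      u      L        mu
#                kg m^-3  m s^-1   m    kg m^-1 s^-1
D = Matrix([
    [ 1,  0,  0,  1],   # kg exponents
    [-3,  1,  1, -1],   # m  exponents
    [ 0, -1,  0, -1],   # s  exponents
])

# Each nullspace basis vector gives variable exponents whose product
# is dimensionless, i.e. a pi-group.
for vec in D.nullspace():
    print(vec.T)   # -> Matrix([[-1, -1, -1, 1]])

# Exponents (-1, -1, -1, 1) give mu / (rho * u * L) = 1/Re; since any
# power of a pi-group is still a pi-group, its inverse is Re itself.
```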

While integer exponents are expected on the dimensional variables, non-dimensional π-groups are technically allowed to carry irrational (non-integer) exponents.

Since a dimensionless quantity raised to any power is still dimensionless, π-groups are invariant under exponentiation. The exponents relating multiple π-groups are constant for a specific configuration; they are evaluated using an experimental test matrix over each independent variable.

Usually, integer exponents can be obtained for all groups by scaling every exponent by a common factor, as they often divide out cleanly. In highly-empirical regressions, odd irregular decimals such as 0.37 are not uncommon, as they are not factorable.

Read more about Buckingham-π on Wikipedia

the problem with technical blogging

Now that I think about it, there are many reasons to make no attempt. I’m still going to do it…

Following my last post ‘towards sustained hypersonic flight’, I planned to launch into the ‘thermodynamics of propulsion’. Initially I had modest ambitions, but somewhere around 20 hours of derivations and eight pages of differential thermodynamics, I realized that my efforts were futile: one could just go to https://en.wikipedia.org/wiki/Propulsive_efficiency and read a more complete version with references.

I’m refocusing the work to provide my angle on the analysis, and perhaps include a few entertaining Graham-isms. The value lies in things that are not found elsewhere. I need to establish some technical foundations so that later statements and diagrams will have merit.

This got me thinking about what I’m up against in getting the ideas and feelings across. People act on emotions. Eventually, we need action.

Here’s an example of typical accurate technical writing, in ‘robot mode’:
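
Something like this semi-discrete, second-order finite volume update (a representative textbook form, not any particular solver’s exact scheme):

$$\frac{d\bar{u}_i}{dt} = -\frac{1}{\Delta x}\left(\hat{F}_{i+1/2} - \hat{F}_{i-1/2}\right), \qquad \hat{F}_{i+1/2} = \hat{F}\!\left(u^{L}_{i+1/2},\, u^{R}_{i+1/2}\right)$$

where the left and right interface states come from slope-limited linear reconstruction within each cell.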

A typical 2nd-order finite volume scheme, expressed as a single equation. You’d probably vomit if we posted the actual algorithm code here. It’s certainly elegant, but not good dinner or date conversation. Therefore it doesn’t belong on a blog and should only be discussed in a study or toilet.

MATH IS HARD

Who wants to read about finite volume schemes after (or during) work?

Against my hopes and dreams, I’ve learned this to be true for most readers. If you’re going to create a blog that can be enjoyed by many, rather than by a few experts in the field, the discussion has to be direct and interesting, not lost in the details of mathematical schemes. Content must not ignore those who have some authority in the field… so consider the blog an entry point for other articles and papers.

The balance: convey the main technical idea without sacrificing readability, accuracy, or applicability to professional work!

The original Apple iPhone. Talk is cheap, unless it provides enabling information…

IDEAS ARE CHEAP

Like patents, it comes down to: “does the disclosure enable the tech?” There is a big gap between understanding how something works and being able to actually do it. For instance, Intel, Nvidia, AMD, and other chip manufacturers expect a certain amount of free information and technical literacy in the field of semiconductors; it’s good business, as it keeps customers and prospective employees engaged.

Industrial know-how is deep and guarded. Some is documented, some is defined as process, and some is locked inside people’s brains (a strategy for remaining a valuable asset). Currently, my work online accounts for perhaps 1% of internal scientific memos and documents. As more is curated for public consumption, this figure may approach 10%. This is to be expected for long-term, high-tech R&D as the information coalesces.

For instance, the technical documentation for XCOMPUTE is over 1000 pages, which does a decent job describing the structure of the code, but there is no equivalent to its dense 60,000 lines of C/C++. This is the living embodiment of thousands of ideas operating in collective harmony. It was fascinating to guide its natural evolution from text document to bona fide library. Talking about it is inherently reductionist, yet we still write papers and do our best to describe key concepts and processes in a universal format.

I wish I could openly share everything…aerospace has certain inherent limitations. I have liberties on some matters…but the most important things I’m going to keep to myself. It’s a delicate act to share without helping competitors — lessons learned in recent years. I’ve been working towards a big plan for about a decade, and it will have to come out in phases as it unfolds. Lots of twists and turns!

A map of the internet, March 2019. Now that much knowledge is free and common, the really valuable stuff is off-line; specialized abilities become more rare as they deviate from this common denominator. A recent NY Times article fears that this could exacerbate socioeconomic issues.

OMNISCIENT INTER-WEB (maybe?)

High-quality information is readily available on long-standing websites including Wikipedia, as well as publications out of major universities and scientific proceedings. If you’re an expert or produce new information on a topic, you have probably provided material online… millions of articles have become the new digital encyclopedic compendium.

My writing and mathematical escapades here cannot match that; they’re intended to be technical exploration rather than reference. There may even be math errors! It would be a waste of everyone’s time to attempt anything else.

It’s not that I think I’m super original, but one has hardly any chance at originality without being allowed to re-synthesize a field and make some mistakes along the way. (Of course, you’re going to need to read my papers and/or come work with me to really understand the technical approach.)

A loose approach to R&D is only appropriate in early phases: one can’t afford big mistakes in critical engineering applications. Part of the art is slowly developing a deterministic (and stable) design and analysis process that utilizes analysis, computation, and experiment to converge on design decisions. Of course, as the project matures, we reference standard documents such as MMPDS and AISC to refine engineering data. Naturally, conservative estimates are used where uncertainty is high, perhaps uncertainty that reflects a lack of previous failures?

EVERYONE’S AN EXPERT (ha!)

  • Success is built on top of failures.
  • If you haven’t failed, you haven’t pushed hard enough.
  • Worthy things tend to be difficult, and thus require many failures.
Dunning-Kruger Effect with superimposed population densities: We’ve all been there. Well, most of us… The black line is the “wisdom-confidence curve,” showing that inexperienced persons tend to think they know a lot. After doing some stupid shit on the peak of Mt. Stupid, they plunge into the valley of despair. It’s a long climb to gain all that knowledge, and eventually confidence returns. The colored lines are log-normal population densities for varying distribution widths. The red line is a proficient, homogeneous workforce. The green line is a bit more diverse, with more gurus and more idiots. The blue line is a widely-diverse population with a few more geniuses, but at the cost of many more ass-hats. You know, like employees who drive box trucks into doors or bridges.

Everyone has an opinion — and now we can voice it online without social accountability! Further, with Google search we can easily find the “facts”.

We can all talk about causality and professional judgment (in retrospect), but few are adept at practicing and managing the inherent risks when there are many competing real factors at play. The process becomes somewhat of a personal art, pulling from a myriad of experiences. Mastering this art often requires major tribulations and experience that cannot be emulated by AI or a novice. Even the best had to crawl through a lot of pain and anguish, and I think those who don’t settle continuously find themselves at odds with the status quo.

It’s important to note that there is no objective authority in science as to who is right and who is wrong! Certain institutions and individuals certainly lead in credibility, but that should remain open to challenge. Even when it comes to the Standard Model, there is room for improvement. Therefore it’s imperative not just to explore “local optimizations in knowledge”, but to understand the underlying principles, enabling one to extrapolate beyond the well-known.

I guess I’m saying: I’m not really the kind of engineer or scientist to shoot from the hip. However, as you try to do more ambitious things, more situations require it, with tamed composure. I’m old-school; I wish everything could be solved with analytical closed-form solutions. I’ve since also experienced the beauty and power of computation, an emerging pillar of science. However, at the end of the day, none of that means anything if the experiment or test data says otherwise.

The challenge is not doing it; it is doing it well…or better than before.