The world has been waiting.
It’s finally here.
Perhaps, the world’s most powerful systems platform.
PSA: Abstracts for ICCFD11 are due on 18 Feb 2020!
Amazing destination conference opportunity in Maui, Hawaii
Every two years, the International Conference on Computational Fluid Dynamics (ICCFD) is held in a different iconic city across the globe. According to the ICCFD homepage, the conference series began in 2000 as the merger of the International Conference on Numerical Methods in Fluid Dynamics (ICNMFD) and the International Symposium on Computational Fluid Dynamics (ISCFD), which had been running since 1969 and 1985, respectively.
In early 2018, I submitted two abstracts and was lucky enough to be invited to present a poster on “Scalable HPC Building-Block for Multi-Disciplinary Systems” and a complete paper on “Unified Geometries for Dynamic HPC Modeling”. I’ve published a few papers before, but this felt like a new experience: here I was submitting for peer review intimate technical details on the inner workings of some of my original innovations. In the past it had always been as part of an academic group. I worked for two months to bring together my 20-page paper, leaving the presentation until the week of the event…
Food. Okay, let me just say that perhaps my favorite thing about Spain was the food. Fresh, flavorful, and affordable. Tapas are great, but we made a point to try many different types of restaurants, ranging from well-known tourist spots to tiny places that only open for a few hours in the late evening to serve a couple of families. Honestly, almost every place we tried had something special about it, and several of them we just had to come back to for a second or third time. Really, we should have taken more food pictures.
To make up for it, here are a few treats:
Hope to see you at ICCFD11 Summer 2020 in Maui, Hawaii!
It’s advanced and universal?
Easy to use and FREE??
YES!!!
Wouldn’t it be great to be able to access data easily with any of your favorite languages?
Build advanced apps and workflows?
XCOMPUTE utilizes a strategy originally developed by Google to express complex data between computers and sessions as protocol buffers.
When you save to or load from disk, or transmit something over a network, the associative data structures present in your computer’s RAM must be flattened (aka serialized), buffered, and eventually reconstructed (aka deserialized) so that they can be transmitted in linear fashion across a wire or into a storage device and back again.
There are many ways to do this, but most are not suitable for big data.
We’ve elected to use a special protoc compiler to auto-generate compatible interfaces that provide native access across many languages. They’re essentially featherweight code headers or libraries that allow you to tie into xcompute.
They also sport speeds approaching the theoretical limits of the attached devices and channels (PCIe, etc).
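To make the flatten/buffer/reconstruct cycle concrete, here is a hand-rolled sketch (not XCOMPUTE’s actual wire format) that serializes a trivially-copyable struct into a linear byte buffer and reconstructs it. A naive memcpy like this only works for fixed-layout types on a single architecture, which is precisely the portability gap that protocol buffers close.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// A trivially-copyable record standing in for one entry of an
// associative structure living in RAM.
struct Sample {
    std::uint32_t id;
    double value;
};

// Flatten (serialize) a record into a linear byte buffer...
std::vector<std::byte> serialize(const Sample& s) {
    std::vector<std::byte> buf(sizeof(Sample));
    std::memcpy(buf.data(), &s, sizeof(Sample));
    return buf;
}

// ...and reconstruct (deserialize) it on the receiving side.
Sample deserialize(const std::vector<std::byte>& buf) {
    Sample s{};
    std::memcpy(&s, buf.data(), sizeof(Sample));
    return s;
}
```

The buffer can then be written to disk or pushed across a socket as-is; a schema-driven format adds the cross-language and cross-architecture guarantees this sketch lacks.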
Messages™ by Xplicit Computing provides standard support for:
While xcompute-server remains a proprietary centerpiece of the XC ecosystem, we’re excited to announce our plan to release our other official Apps, free & open!
This way, everyday users do not have to worry about a subscription to xcompute-client, which makes collaboration that much easier.
Hosts maintain their xcompute-server subscriptions and can now invite friends and colleagues freely, sharing results as they please with said Apps.
You own and control your data, while Xplicit continues to focus on providing high-quality, unified technologies.
For a technical overview, please read the excerpt below from the README provided with the Messages™ bundle:
// set and get scalar fields
msg.set_something(value);
auto some_value = msg.something();

// iterate over a repeated field
for (auto entry : msg.vector())
    something = entry;

// copy a repeated field into a local container, in parallel
auto N = msg.vector_size();
something.resize(N);
#pragma omp parallel for
for (int n = 0; n < N; n++)
    something[n] = msg.vector(n);

// copy between messages via a mutable reference
auto N = other.vector_size();
// get a reference to the mutable repeated field
auto& vec = *msg.mutable_vector();
vec.Resize(N, {}); // protobuf's RepeatedField uses Resize(new_size, fill_value)
#pragma omp parallel for
for (int n = 0; n < N; n++)
    vec[n] = other.vector(n);
> mkdir -p cpp python java javascript ruby objc csharp go
> protoc --cpp_out=cpp --python_out=python --java_out=java --js_out=javascript --ruby_out=ruby --objc_out=objc --csharp_out=csharp vector.proto system.proto spatial.proto meta.proto
(Go output additionally requires the protoc-gen-go plugin and a --go_out=go flag.)
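The parallel copy pattern from the README excerpt can be tried stand-alone; here is a minimal sketch with std::vector standing in for a generated message class (the names are illustrative, not XCOMPUTE’s API). The OpenMP pragma is simply ignored when OpenMP is not enabled.

```cpp
#include <vector>

// Stand-alone analogue of the README's parallel copy pattern:
// size the destination once, then fill it element-wise so the
// loop iterations are independent and safe to parallelize.
std::vector<double> copy_field(const std::vector<double>& other) {
    const long long N = static_cast<long long>(other.size());
    std::vector<double> vec(N);
    #pragma omp parallel for
    for (long long n = 0; n < N; n++)
        vec[n] = other[n];
    return vec;
}
```

Pre-sizing before the loop mirrors the Resize call in the excerpt: no allocation happens inside the parallel region.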
Obviously, a big anniversary for aerospace and humankind.
We are indeed taking this as a cue to look forward into the next 50 years.
Our ambitions lie not only in space, but in saving Earth…
Once a year, a pilgrimage occurs to the mecca of amateur and commercial rocketry, nestled in the desert north of Las Cruces, New Mexico. University-sponsored teams from around the world converge on Spaceport America to demonstrate their rocket design, analysis, building, and launching skills… one of the greatest defining moments of their collegiate careers. Teams compete in categories targeting altitudes of 10,000 ft and 30,000 ft. However, some teams elect to attempt flights beyond, to 50,000 ft and up to 100,000 ft, using solid, hybrid, and liquid propulsion systems.
This facility is uniquely positioned just west of White Sands Missile Range, so it is possible to obtain waivers to fly all the way to space on a regular basis if required… as are the plans of Virgin Galactic and other emerging spaceflight companies.
That’s the pot calling the kettle black!
Thank you for the kind words and interest! We had a BLAST at Space Access 2019!
https://thespaceshow.com/show/13-may-2019/broadcast-3314-kim-holder-rick-kwan-john-jossy
Xplicit Computing gets discussed from 46:21 to 52:20. Here’s the segment set to some of the presentation material:
The most interesting space tech conference you’ve never heard about.
Turns out crazy comes in many different forms:
XCOMPUTE’s graphics architecture is built on OpenGL 3.3 with some basic GLSL shaders. The focus has always been on efficiency and usefulness with large engineering data sets – it is meant to visualize systems.
However, along the way we recognized that we could unify all graphics objects (technically, vertex array objects) in our render pipeline so as to not only handle 3D objects, topologies, and point clouds, but also provide a powerful framework for in-scene widgets and helpers. We’ve barely started on that:
As we’re getting ready to launch the product, I’m connecting modules that perhaps didn’t have priority in the past. The other night, I spent a few hours looking at what easy things we could do with a unified “appearance” widget, built in the client with Qt in about 130 lines:
I then imported a complex bracket geometry and applied a wood PNG texture with RGBA channels projected in the Z-direction:
This looks pretty good for rasterization (60 fps @ 1440×2560), but it isn’t perfect… there are a few artifacts and shadowing is simplified. I think the space between the wood slats is really cool and makes me want to grab this thing and pull it apart. Those gaps come simply from the alpha channel of the PNG image… just for fun. We’ll expose more bells and whistles eventually.
Soon, I’ll show the next step of analyzing such a component including semi-realistic displacement animations.
In the future (as we mature our signed-distance infrastructure), we may look at ray-tracing techniques, but for now the focus is on efficiency for practical engineering analyses.
It’s no secret around here that I’ve been burning the candle at both ends in order to complete “The Great Server-Client Divide”, as we call this year-long task — a task that has been in planning since the very start.
With big-data applications, it’s challenging to get a server (a simulation state machine) to interact somewhat generically with any number of clients without compromising on performance. We studied the principles and mechanics of this issue and slowly arrived at a viable solution requiring extreme software-engineering care.
For our engineering analysis software, we navigated many performance compromises. One notable compromise (compared to game engines) has been maintaining both high-precision (FP64) and low-precision (FP32) data sets for computation versus rendering — every iteration we must convert and buffer relevant results from device to host in order to maintain a global state with which clients can interact.
(Still, we are finding that proper software design yields a compute bottleneck in GPU-like devices, rather than I/O bandwidth limitation over PCIe — so this extra process is not responsible for any slowdown. We’re measuring and reporting more than 25x speed-up over CPU-only).
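As a sketch of the precision conversion described above (the function and names are illustrative, not XCOMPUTE’s actual API), the per-iteration FP64-to-FP32 down-conversion amounts to something like:

```cpp
#include <vector>

// Down-convert FP64 simulation results into an FP32 buffer for rendering.
// In a real pipeline this feeds the device-to-host copy each iteration;
// here it is a plain loop that OpenMP can parallelize when enabled.
std::vector<float> to_render_precision(const std::vector<double>& sim) {
    std::vector<float> render(sim.size());
    const long long N = static_cast<long long>(sim.size());
    #pragma omp parallel for
    for (long long i = 0; i < N; i++)
        render[i] = static_cast<float>(sim[i]);
    return render;
}
```

The conversion halves the bandwidth needed on the render path while the solver keeps full FP64 state.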
XCOMPUTE has gone through several thousand iterations to get where we are, and along the way we developed high-level and low-level optimizations and generalizations to further expand our capabilities and performance. For instance, we are approaching the minimum number of operations to synchronize arbitrary numerical data — and our C++ code syntax makes all these operations very clear and human-readable.
It should come as little surprise that eventually there would be a high degree of data-structure unification (via dynamic compile-time and run-time tricks), and that the messages required to save/load could be reused in wide-scale communication protocols. After all, both require serialization and deserialization infrastructure, so if the encoding/decoding format is flexible and nearly run-time optimal, why not unify all I/O? Especially if it is easily parallelized and permits flexible usage and sharing with users.
That is exactly what we did: we implemented protocol buffers, using schema definition files to build an array of sources, headers, and libraries that are later linked by the larger application during compilation. There are no run-time libraries… it’s essentially a code generator.
The protobuf definition file assigns each variable name a type and a specific integer tag; repeated and embedded messages are also possible. Developers have a clear way to package messages, and the proto definition file can be made publicly available to bind external applications (in almost any language) to a native interface without compromising the legal intellectual property of the actual code-base. It’s just an interface.
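As a purely illustrative example (not one of XCOMPUTE’s actual definition files), a schema in this style might look like:

```proto
syntax = "proto3";

message Vector {
  string name = 1;            // variable name bound to an integer tag
  repeated double values = 2; // repeated fields hold arrays
}

message System {
  string title = 1;
  repeated Vector fields = 2; // messages can embed other messages
}
```

Each field name is bound to a stable integer tag, so the wire format survives renames, and repeated and nested messages compose into larger schemas.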
I’m only aware of two good protocol buffer libraries, both authored by the same person (first at Google, then on his own). The only major limitation I’ve encountered is that in both libraries (for various reasons), the maximum message size is limited to about 2^30 bytes, or about 1 GB. This constrains the size of any one system, but should work well for most cases, as large problems should be decomposed into manageable systems rather than one huge homogeneous domain with poor numerical complexity.
I could talk for days about message design and how it sort-of parallels your class structures — and how it also is sort of its own thing! Being introspective on “what constitutes a message” can yield huge optimizations across your application in practice. This is because if messages are not well-encapsulated, they will tend to have repetitive or unnecessary data per the context. Ideally, you’d only transmit what is needed, especially given bandwidth constraints. If you can constrain this to a finite set of messages, you’re off to a great start.
Another really neat byproduct of server-client message unification is that servers already expect self-contained protobuf messages in order to perform operations, such as creating new objects (geometries, algorithms, etc.). A command-line interface (CLI) could also construct protobuf messages and invoke macro-level commands, just like a client. One could access a simulation via client, CLI, or through files on disk.
Applied to numerical computing, we developed four protocol buffer definition files, each applicable to specific contexts:
XCOMPUTE has implemented these messages for finite-element and finite-volume methods, and we are formalizing support for finite-difference, lattice-Boltzmann, and advanced geometric representations. The following unified XCOMPUTE file types somewhat correspond to the aforementioned messages:
RSA or other encryption can wrap the serialized byte-stream as necessary. When you purchase an XCOMPUTE license, you receive a copy of these definitions along with a Creative Commons Attribution Non-Derivatives license to allow anyone to use them for their own projects and hopefully integrate with ours!