Rather than longer times, what about short times? I did some work on fast fading and you can see rapid swings in fade over <5s. That is hard for automated systems to respond to, so you normally respond by increasing the link margin. If you can predict this you could reduce the margin needed. That could potentially be very valuable.
Spot on. We categorize that <5s window as tactical fade mitigation.
Our current 3-5 minute window is for topology/routing, but the sub-5s window is for Dynamic Link Margin (DLM). If we can predict fast-fading signatures (like tropospheric scintillation or edge-of-cloud diffraction), we can move from reactive to proactive ACM.
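A minimal sketch of that reactive-vs-proactive ACM tradeoff (the MODCOD table, margins, and fade numbers below are illustrative assumptions, not product values):

```python
# Toy ACM selection: reactive ACM pads the link budget with a fixed margin to absorb
# unpredicted fast fades; proactive ACM only needs headroom for the predicted fade plus
# the forecast's error bar, so it can usually hold a denser MODCOD.

# (spectral efficiency in bit/s/Hz, required Es/N0 in dB) -- illustrative DVB-S2-like entries
MODCODS = [(0.5, 1.0), (1.0, 4.0), (2.0, 8.0), (2.5, 10.0), (3.0, 12.0)]

def pick_modcod(esn0_db: float, headroom_db: float) -> tuple[float, float]:
    """Densest MODCOD whose threshold still fits under the available Es/N0 minus headroom."""
    usable = esn0_db - headroom_db
    feasible = [mc for mc in MODCODS if mc[1] <= usable]
    return max(feasible, key=lambda mc: mc[0]) if feasible else MODCODS[0]

clear_sky_esn0_db = 14.0     # assumed current link quality
static_margin_db = 6.0       # reactive: cover the worst plausible <5s fade blindly
predicted_fade_db = 2.5      # proactive: forecast fade over the next ~5s
forecast_error_db = 1.0      # proactive: uncertainty of that forecast

reactive = pick_modcod(clear_sky_esn0_db, static_margin_db)
proactive = pick_modcod(clear_sky_esn0_db, predicted_fade_db + forecast_error_db)
print(f"reactive ACM : {reactive[0]} bit/s/Hz")   # 2.0 with these numbers
print(f"proactive ACM: {proactive[0]} bit/s/Hz")  # 2.5 with these numbers
```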
Pretty intriguing demo video. How do you ensure your telemetry ingestion holds up operationally? That will be a daunting task. The output will only be as good as your telemetry; with any delay or break in the data, everything is bound to break.
Great point: telemetry reliability is the biggest hurdle for any mission-critical system. We address the "garbage in, garbage out" risk by prioritizing freshness (our pipeline treats latency as a failure).
We use three mechanisms (rough sketch below):
- A "leaky" buffer strategy: if data is too old to be actionable for a 3-minute forecast, we drop it so the models aren't lagging behind the physical reality of the link.
- Graceful degradation: when telemetry is delayed or broken, the system automatically falls back to physics-only models (orbital propagation and ITU standards).
- Edge validation: we validate and normalize data at the ingestion point; if a stream becomes corrupted or "noisy," the system flags that specific sensor as unreliable and adjusts the prediction confidence scores in real time.
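A minimal sketch of how those three pieces could fit together (the staleness limit, plausibility range, and `itu_clear_sky_fade` placeholder are illustrative assumptions, not the actual pipeline):

```python
from dataclasses import dataclass

STALENESS_LIMIT_S = 30.0  # assumed: samples older than this can't inform a 3-minute forecast

@dataclass
class Sample:
    sensor_id: str
    timestamp: float  # unix seconds
    value: float      # e.g. a measured excess attenuation in dB

class LeakyTelemetryBuffer:
    """Keeps only samples fresh enough to be actionable; everything older is dropped."""

    def __init__(self, staleness_limit_s: float = STALENESS_LIMIT_S):
        self.staleness_limit_s = staleness_limit_s
        self.samples: list[Sample] = []

    def ingest(self, sample: Sample) -> bool:
        # Edge validation: reject obviously corrupted readings at the ingestion point.
        if not (-200.0 < sample.value < 200.0):  # assumed plausibility range
            return False
        self.samples.append(sample)
        return True

    def fresh(self, now: float) -> list[Sample]:
        # The "leak": drop anything too old to matter for the forecast horizon.
        self.samples = [s for s in self.samples if now - s.timestamp <= self.staleness_limit_s]
        return self.samples

def itu_clear_sky_fade() -> float:
    # Stand-in constant; a real system would evaluate orbital propagation + ITU-R style models.
    return 2.0

def predict_fade_db(now: float, buffer: LeakyTelemetryBuffer) -> tuple[float, float]:
    """Return (predicted fade in dB, confidence); degrades to physics-only when telemetry is stale."""
    fresh = buffer.fresh(now)
    physics_estimate = itu_clear_sky_fade()
    if not fresh:
        # Graceful degradation: physics-only baseline, with reduced confidence.
        return physics_estimate, 0.5
    telemetry_correction = sum(s.value for s in fresh) / len(fresh)
    confidence = min(1.0, 0.5 + 0.05 * len(fresh))  # assumed: confidence grows with fresh samples
    return physics_estimate + telemetry_correction, confidence
```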
I'm wondering how the physics models handle the state discontinuity if you're dropping intermediate telemetry. Typically those propagators rely on continuous integration steps, so if the buffer leaks data to catch up, I'd expect significant drift unless you're constantly re-seeding the state vector. How do you manage the handoff between the dropped data and the physics fallback without a jump in the prediction?
We prevent discontinuities by using a Continuous Extended Kalman Filter where the physics model serves as the persistent baseline and telemetry acts only as a corrective update. When the buffer leaks, the system doesn't snap to a new position; instead, it continues propagating the state via physics while the uncertainty covariance grows smoothly. When fresh data eventually arrives, we use the innovation delta to gradually steer the state back to reality, ensuring a seamless transition rather than a coordinate jump.
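A minimal sketch of that predict-always, correct-when-fresh pattern, using a linear 1-D toy filter rather than their actual EKF (all matrices and noise values below are made-up assumptions):

```python
import numpy as np

# Toy continuous-discrete Kalman filter: the physics model propagates the state on every
# step, so a telemetry dropout never causes a jump; fresh measurements only nudge the
# estimate back via the innovation while the covariance shrinks again.

F = np.array([[1.0, 0.1], [0.0, 1.0]])  # assumed linearized dynamics (value, rate), dt = 0.1 s
Q = np.diag([1e-4, 1e-3])               # process noise: how fast uncertainty grows without data
H = np.array([[1.0, 0.0]])              # we only measure the value itself
R = np.array([[0.05]])                  # assumed measurement noise

def step(x, P, z=None):
    # Predict: physics is always the baseline, whether or not telemetry arrived.
    x = F @ x
    P = F @ P @ F.T + Q                 # covariance grows smoothly while data is being dropped
    if z is not None:
        # Update: the innovation (z - Hx) steers the estimate back toward reality.
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x, P

x = np.array([[0.0], [1.0]])            # initial state estimate
P = np.eye(2) * 0.1                     # initial covariance

# Telemetry for 5 steps, a 10-step dropout, then fresh data again.
for k in range(20):
    z = np.array([[0.1 * k + np.random.normal(0.0, 0.2)]]) if (k < 5 or k >= 15) else None
    x, P = step(x, P, z)
    mode = "telemetry" if z is not None else "physics only"
    print(f"t={k:2d}  est={x[0, 0]:6.2f}  var={P[0, 0]:.3f}  ({mode})")
```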
Is the inference running on-orbit or ground-side? I guess SWaP is a major constraint for the former. Not sure if you are using FPGAs or something like a Jetson?
Primary inference runs ground-side (K8s/GovCloud) to aggregate global data for routing. We do see the need for something like a hybrid-edge approach for tactical, sub-5s mitigation. We would target FPGAs (like Xilinx Versal) for production flight hardware to meet strict SWaP and radiation-hardening requirements.
Very cool company! Are y’all hiring?
Not right now but we will be soon! Send over your resume to hello@constellation-io.com if you're interested in joining.
Are you raising?
Not currently; we're planning on opening up our seed round in 4 weeks. Feel free to shoot us a note at hello@constellation-io.com if you're interested in learning more.
Done (XX:56).
Do you plan to work on orbital weapon systems like Golden Dome?
We're big believers in American Dynamism.
American Dynamism is a term the investors of Castelion made up. That's the company, founded by SpaceX executives, that's mass-producing hypersonic weapons to put into orbit.
You could have used a one-word answer: yes. The extra words could have been "if we can get it".
In other words, you're not opposed to working in the military-industrial complex. Your reply walks the line of weasel words, trying not to offend those against while nodding to those that approve. You'll do fine as a spokesperson.
You get it!