Whenever I hear about neuromorphic computing, I think about the guy who wrote this article, who was working in the field:
Thermodynamic Computing https://knowm.org/thermodynamic-computing/
It's the most high-influence, low-exposure essay I've ever read. As far as I'm concerned, this dude is a silent, prescient genius working quietly for DARPA, and I had a sneak peek into future science when I read it. It's affected my thinking and trajectory for the past 8 years.
Isn't this just simulated annealing in hardware attached to a grandiose restatement of the second law of thermodynamics?
Yes. This keeps showing up in hardware engineering labs, and never holds up in real tasks.
It's not.
What an extremely uneducated guess.
Educate, then.
Is this what Extropic (https://www.extropic.ai/) is aiming to commercialize and bring to market?
This is literally a borderline crank article. "A new framework for physics and computing" turns out to be quantum annealing for SAT lol
The explanations about quantum mechanics are also imprecise and go nowhere towards the point of the article. Add a couple janky images and the "crank rant" impression is complete.
Interesting read, more so than the OP. Thank you.
I will say that the philosophical remarks are pretty obtuse and detract from the post. For example...
"Physics–and more broadly the pursuit of science–has been a remarkably successful methodology for understanding how the gears of reality turn. We really have no other methods–and based on humanity’s success so far we have no reason to believe we need any."
Physics, which is to say physical methods, has indeed been remarkably successful... for the types of things physical methods select for! To say it is exhaustive not only begs the question; the claim itself is not even demonstrable by those methods.
The second claim contains the same error, but with more emphasis. This is just off-the-shelf scientism, and scientism, quite apart from the withering refutations it has received, should be obviously self-refuting. Is the claim that "we have no other methods but physics" (where physics is the paradigmatic empirical science; substitute accordingly) a scientific claim? Obviously not. It is a philosophical claim. That alone refutes it.
Thus, philosophy has entered the chat, and this is no small concession.
I’m not sure I understand what you’re trying to say. It’s not really questionable that science and math are the only things to come out of philosophy or any other academic pursuit that have actually shown us how to objectively understand reality.
Now, physics vs. other scientific disciplines, sure. Physicists love to claim dominion just like mathematicians do. It is generally true, however, that physics = math + reality, and we don't actually have any evidence of anything in this world existing outside a physical description (e.g. a lot of physics combined = chemistry, a lot of chemistry = biology, a lot of biology = sociology, etc.). Thus it's reasonable to assume that the chemistry in this world is 100% governed by the laws of physics, and transitively this is true for sociology too (indeed, game theory is one way we quantifiably explain the physical reality of why people behave the way they do). We also see this in math, where different disciplines have different "bridges" between them. Does that mean they're actually separate disciplines, or just that we've chosen to name features on the topology as such?
It's just not that simple. The best way I can dovetail with the author is this: you are thinking in terms of the abstraction, but you have mistaken the abstraction for reality.
Physics, the biological sciences: these are tools the mind uses to make guesses about the future based on past events. But the abstraction isn't perfect, and it's questionable whether it could or should one day be.
The clear example is that large breakthroughs in science often come from rethinking this fundamental abstraction to explain problems the old implementation had trouble with. Case in point: quantum physics, which warped how we originally understood Newtonian physics. Einstein fucking hated quantum mechanics because he felt it undermined the idea of objective reality.
The reality (pun intended) is that it is much more complex than our abstractions like science, and we would do well to remember that they are pragmatic tools, ultimately unconcerned with metaphysics, the underlying nature of reality.
This all seems like philosophical rambling until we get to little lines like this. Scientism, the belief that science is the primary and only necessary lens for understanding the world, falls into the same trap as religion: thinking that you have the answer to reality, so anything outside it is either unnecessary or even dangerous to one who holds these views.
Man, this article is incredible. So many ideas resonate with me, but I can never quite formulate them myself. Thanks for sharing; all my friends have to read this.
If you like this article, you’ll probably enjoy reading most publications from the Santa Fe Institute.
I'd be interested to learn who paid for this machine!
Did Sandia pay list price? Or did SpiNNcloud Systems give it to Sandia for free (or at least at a heavily subsidised price)? I conjecture the latter. Maybe someone from Sandia is on the list here and can provide details?
SpiNNcloud Systems is known for making misleading claims, e.g. their home page https://spinncloud.com/ lists DeepMind, DeepSeek, Meta and Microsoft as "Examples of algorithms already leveraging dynamic sparsity", giving the false impression that those companies use SpiNNcloud Systems machines, or the specific computer architecture SpiNNcloud Systems sells.
Their claims about energy efficiency (like "78x more energy efficient than current GPUs") also seem sketchy. How do they measure energy consumption and trade it off against compute capacity? E.g. a Raspberry Pi uses less absolute energy than an NVIDIA Blackwell, but is that a meaningful comparison?
I'd also like to know how to program this machine. Neuromorphic computers have so far been terribly difficult to program. E.g. have JAX, TensorFlow and PyTorch been ported to SpiNNaker 2? I doubt it.
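To make that comparison point concrete, the fairer metric is energy per unit of work rather than raw power draw. A toy illustration follows; all numbers are invented placeholders, not measurements of any real device:

    # Toy comparison: what matters is joules per operation, not watts.
    # Both devices below use made-up placeholder figures, not benchmarks.
    def joules_per_op(power_watts: float, ops_per_second: float) -> float:
        return power_watts / ops_per_second

    small_board = joules_per_op(power_watts=5, ops_per_second=1e9)     # low power, low throughput
    accelerator = joules_per_op(power_watts=700, ops_per_second=1e15)  # high power, high throughput
    print(small_board, accelerator)  # the "hungrier" device can still win per unit of work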
As an ex-employee (and I even did some HPC) I am not aware of any instances of Sandia receiving computing hardware for free.
no but sometimes they are for demonstration/evaluation, though that wouldn’t usually make a press release
Unless the manufacturer makes it.
Deep Mind (Google’s reinforcement learning lab), Deep Seek (High-Flyer’s LLM initiative), Deep Crack (EFF’s DES cracker), Deep Blue (IBM’s chess computer), and Deep Thought (Douglas Adams’ universal brain) all set the stage...
So naturally, this thing should be called Deep Spike, Deep Spin, Deep Discount, or -- given its storage-free design -- Deep Void.
If it can accelerate nested 2D FORTRAN loops, you could even call it Deep DO DO, and the next deeper version would naturally be called Deep #2.
JD Vance and Peter Thiel could gang up, think long and hard, go all in, and totally get behind vigorously pushing and fully funding a sexy supercomputer with more comfortably upholstered, luxuriously lubricated, passively penetrable cushioned seating than even a Cray-1, called Deep Couch. And the inevitable jealous break-up would be more fun to watch than the Musk-Trump Bromance!
I question how viable these architectures are when considering that accurate simulation of a spiking neural network requires maintaining strict causality between spikes.
If you don't handle effects in precisely the correct order, the simulation becomes more about the architecture, network topology, and how race conditions resolve than about the model itself. We need to simulate the behavior of a spike preceding another spike in exactly the right way, or things like STDP will wildly misfire. The "online learning" promised land will turn into a slip & slide.
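For reference, here is a minimal pair-based STDP sketch (the standard textbook rule, not anyone's production code). It shows why the relative order of two spikes flips the sign of the weight update, and hence why out-of-order delivery corrupts learning; amplitudes and time constants are illustrative assumptions:

    # Pair-based STDP: the sign of the update depends on whether the
    # presynaptic spike precedes the postsynaptic one.
    import math

    A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes (assumed)
    TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms (assumed)

    def stdp_dw(t_pre: float, t_post: float) -> float:
        dt = t_post - t_pre
        if dt > 0:    # pre before post -> potentiation
            return A_PLUS * math.exp(-dt / TAU_PLUS)
        if dt < 0:    # post before pre -> depression
            return -A_MINUS * math.exp(dt / TAU_MINUS)
        return 0.0

    print(stdp_dw(t_pre=10.0, t_post=12.0))  # positive: strengthen synapse
    print(stdp_dw(t_pre=12.0, t_post=10.0))  # negative: weaken synapse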
A priority queue using a quaternary min-heap implementation is approximately the fastest way I've found to serialize spikes on typical hardware. This obviously isn't how it works in biology, but we are trying to simulate biology on a different substrate so we must make some compromises.
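A minimal sketch of the kind of event queue being described, i.e. a 4-ary (quaternary) min-heap keyed on spike time with a deterministic tie-break. This is an illustration of the idea, not the commenter's actual implementation:

    # Quaternary (4-ary) min-heap used as a spike event queue. Entries are
    # ordered by (time, neuron_id, seq) so simultaneous spikes always resolve
    # in the same order, keeping the simulation deterministic.
    import itertools

    class SpikeQueue:
        D = 4  # branching factor: shallower tree, fewer hops per operation

        def __init__(self):
            self._heap = []                 # (time, neuron_id, seq, payload)
            self._seq = itertools.count()   # insertion counter for tie-breaks

        def push(self, time, neuron_id, payload=None):
            self._heap.append((time, neuron_id, next(self._seq), payload))
            self._sift_up(len(self._heap) - 1)

        def pop(self):
            heap = self._heap
            heap[0], heap[-1] = heap[-1], heap[0]
            item = heap.pop()
            if heap:
                self._sift_down(0)
            return item  # earliest spike, ties broken deterministically

        def _sift_up(self, i):
            heap = self._heap
            while i > 0:
                parent = (i - 1) // self.D
                if heap[i] < heap[parent]:
                    heap[i], heap[parent] = heap[parent], heap[i]
                    i = parent
                else:
                    break

        def _sift_down(self, i):
            heap, n = self._heap, len(self._heap)
            while True:
                first = self.D * i + 1
                if first >= n:
                    break
                child = min(range(first, min(first + self.D, n)), key=heap.__getitem__)
                if heap[child] < heap[i]:
                    heap[i], heap[child] = heap[child], heap[i]
                    i = child
                else:
                    break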
I wouldn't argue that you couldn't achieve wild success in a distributed & more non-deterministic architecture, but I think it is a very difficult battle that should be fought after winning some easier ones.
Artem Kirsanov provides some insights into the neurochemistry and types of neurons in his latest analysis [0] of distinct synaptic plasticity rules that operate across dendritic compartments. When simulating neurons in a more realistic approach, the timing can be deterministic.
[0] https://www.youtube.com/watch?v=9StHNcGs-JM
I see "storage-free"... and then learn it still has RAM (which IS storage) ugh.
John von Neumann's concept of the instruction counter was great for the short run, but eventually we'll all learn it was a premature optimization. All those transistors tied up as RAM just waiting to be used most of the time, a huge waste.
In the end, high speed computing will be done on an evolution of FPGAs, where everything is pipelined and parallel as heck.
FPGAs are implemented as tons of lookup-tables (LUTs). Basically a special kind of SRAM.
The thing about the LUT memory is that it's all accessed in parallel, not just 64 bits at a time or so.
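A toy software model of what a k-input LUT is (purely illustrative, nothing vendor-specific): a 2^k-entry truth table addressed by the input bits, which is why LUT fabric is essentially SRAM read in parallel across the whole device:

    # A k-input LUT modelled as a 2**k entry truth table. Configuring the FPGA
    # amounts to filling these tables; evaluating logic is just a memory read.
    def make_lut(k, func):
        table = [func(i) & 1 for i in range(2 ** k)]   # the "configuration bits"
        return lambda inputs: table[inputs & (2 ** k - 1)]

    # A 4-input LUT configured as XOR of its inputs:
    xor4 = make_lut(4, lambda bits: bin(bits).count("1") % 2)
    print(xor4(0b1011))  # 1 (odd number of set bits)
    print(xor4(0b1001))  # 0 (even number of set bits)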
Not all, not always. FPGAs usually have more memory than is accessible in parallel (because memory cells are a lot cheaper than routing grid), and most customers want some block RAM anyway. So with very high LUT usage, your synthesis tool will do input or output multiplexing, or even halve your effective clock and double your "number" of LUTs by doing multi-step lookups in what is then non-parallel memory.
Interesting that they converged on a memory/network architecture similar to a rack of GPUs.
- 152 cores per chip, equivalent to ~128 CUDA cores per SM
- per-chip SRAM (20 MB) equivalent to SM high-speed shared memory
- per-board DRAM (96 GB across 48 chips) equivalent to GPU global memory
- boards networked together with something akin to NVLink
I wonder if they use HBM for the DRAM, or do anything like coalescing memory accesses.
Doesn't give a lot of information about what this is for or how it works :/
https://arxiv.org/abs/2401.04491
Love to see a simulator where you can at least run a plodding version of some code.
You don’t have to write anything down if you can keep it in your memory…
I love bio-inspired stuff, but can we (collectively) temper our usage of it? A better name for this would probably be something like a distributed storage and computing architecture (or someone who really understands what this thing is, please come up with a better name). If they want to say that part of the architecture is bio-inspired, or mammalian-brain inspired, then fine, but let's be parsimonious.
The original intent for this architecture was for modelling large spiking neural networks in real-time, although the hardware is really not that specialized - basically a bunch of ARM chips with high speed interconnect for message passing.
It's interesting that the article doesn't say that's what it's actually going to be used for - just event driven (message passing) simulations, with application to defense.
Probably Ising models, phase transitions, condensed matter stuff, all to help make a bigger boom.
No storage? Wow!
Oh... 138240 Terabytes of RAM.
Ok.
> In Sandia’s case, it has taken delivery of a 24 board, 175,000 core system
So a paltry 2,304 GB RAM
Am I reading it wrong, or does the math not add up? Shouldn't it be 138,240 GB, not 138,240 TB?
You’re right, OP got the math wrong. It should be 138,240 GB, not TB.
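For what it's worth, a quick back-of-the-envelope check on the figures quoted elsewhere in this thread (152 cores per chip, 48 chips and 96 GB of DRAM per board, 24 boards delivered). These are the thread's numbers, not official SpiNNcloud specs:

    # Sanity check on the 24-board Sandia system using per-board figures
    # quoted above (assumptions taken from this thread, not a datasheet).
    chips_per_board = 48
    cores_per_chip = 152
    dram_per_board_gb = 96
    boards = 24

    print(chips_per_board * cores_per_chip * boards)  # 175104 -> the "175,000 core" figure
    print(dram_per_board_gb * boards)                 # 2304   -> the "paltry 2,304 GB" above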
Either way, that doesn't exactly sound like a "storage-free" solution to me.
Just whatever you do, don't turn it off!
"What does this button do?" Bmmmfff.
On the TRS-80 Model III, the reset button was a bright red recessed square to the right of the attached keyboard.
It was irresistible to anyone who had no idea what you were doing as you worked, lost in the flow, insensitive to the presence of another human being, until...
--
Then there was the Kaypro. Many of their systems had a bug, software or hardware, that would occasionally cause an unplanned reset the first time you tried writing to the disk after you turned it on. Exactly the wrong moment.
Oh, the Apple ][ reset button beat the TRS-80 Model III "hands down" many years earlier, with its "Big Beautiful Reset Button" on the upper right corner of the keyboard.
It was comically vulnerable -- just begging to be pressed. The early models had one so soft and easy to trigger that your cat could reboot your Apple ][ with a single curious paw. Later revisions stiffened the spring a bit, but it was still a menace.
There was a whole cottage industry aftermarket of defensive accessories: plastic shields that slid over the reset key, mail-order kits to reroute it through an auxiliary switch, or firmware mods to require CTRL-RESET. You’d find those in the classified ads in Nibble or Apple Orchard magazines, nestled between ASCII art of wizards and promises to triple your RAM.
Because nothing says "I live dangerously" like writing your 6502 assembly in memory with the mini assembler without saving, then letting your little brother near the keyboard.
I got sweet sweet revenge by writing a "Flakey Keyboard Simulator" in assembly that hooked into the keyboard input vector, that drove him bonkers by occasionally missing, mistyping, and repeating keystrokes, indistinguishable from a real flakey keyboard or drunk typist.
> Because nothing says "I live dangerously" like writing your 6502 assembly in memory with the mini assembler without saving, then letting your little brother near the keyboard.
RESET on the Apple II was a warm reset. You could set a value on page zero so that pressing it caused a cold start (many apps did that), but, even then, the memory is not fully erased on startup, so you'd probably be kind of OK.
Well since Neuromorphic methods can show that 138240 = 0, should it come as a surprise that they enable blockchain on Mars?
https://cointelegraph.com/news/neuromorphic-computing-breakt...
Just don’t turn it off I guess…
At least not while it's computing something. It should be fine to turn it off after whatever results have been transferred to another computer.
I hear Georges Leclanché is getting close to a sort of electro-chemical discovery for this conundrum.
I feel like there is a straightforward biological analogue for this.
But at least in this case, one wouldn't be subject to macro-scale nonlinear effects arising from the uncertainty principle when trying to "restore" the system.
So if I understand correctly, the hardware paradigm is shifting to align with the now-dominant neural-based software model. This marks a major shift, from the traditional CPU + OS + UI stack to a fully neural-based architecture. Am I getting this right?
I feel like we're just trading one bottleneck for another here. So instead of slow storage, we now have a system that's hyper-sensitive to any interruption and probably requires a dedicated power plant to run.
Cool experiment, but is this actually a practical path forward or just a dead end with a great headline? Someone convince me I'm wrong...
Sandia National Labs is one of the few places in the country (on the planet?) doing blue-sky research. My first thought was similar to yours--If it doesn't have storage, what can I realistically even do with it!?
But sometimes you just have to let the academics cook for a few decades and then something fantastical pops out the other end. If we ever make something that is truly AGI, its architecture is probably going to look more like this SpiNNaker machine than anything we are currently using.
> what can I realistically even do with it!?
It doesn't have built-in storage, but that doesn't mean it can't connect to external storage, or that its memory cannot be retrieved from a front-end computer.
HPC jobs generally don't stream data to disk in the first place. They write out (huge) snapshots periodically. So mount a network filesystem and be done with it. I don't see the issue.
> we're just trading one bottleneck for another
If you have two systems with opposite bottlenecks you can build a composite system with the bottlenecks reduced.
Usually, you get a state with two bottlenecks ...
Think of L2 cache (small access time, small capacity) vs. memory modules (larger access time, large capacity) on a motherboard. You get the large capacity and an access time somewhere in between, depending on the hit rate.
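A rough effective-access-time model for that analogy, with illustrative latencies rather than measured ones:

    # Effective access time = hit_rate * t_fast + (1 - hit_rate) * t_slow
    t_cache_ns = 4.0    # assumed L2 hit latency
    t_dram_ns = 100.0   # assumed DRAM latency
    for hit_rate in (0.50, 0.90, 0.99):
        t_eff = hit_rate * t_cache_ns + (1 - hit_rate) * t_dram_ns
        print(f"hit rate {hit_rate:.0%}: ~{t_eff:.1f} ns effective access")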
Sounds like you need a massively parallel hardware regexp accelerator (a RePU), so you can have two million problems!
This smells like a VC-derived sentiment: that the only value comes from identifying the be-all-end-all solution.
There's plenty to learn from endeavors like this, even if this particular approach isn't the one that e.g. achieves AGI.
The pessimist in me thinks someone will just use it to mine bitcoin after all the official research is completed.
The title describes this machine as "brain-like" but the article doesn't support that conclusion. Why is it brain-like?
I also don't understand why this machine is interesting. It has a lot of RAM.... ok, and? I could get a consumer-grade PC with a large amount of RAM (admittedly not quite as much), put my applications in a ramdisk, e.g. tmpfs, and get the same benefit.
In short, what is the big deal?
How much did this cost? I'd rather have CUDA cores.
Part of their job is to evaluate novel technologies. I find this quite exciting. CUDA is well understood. This is not.
They already have CUDA cores in production. This is a lab that's looking for the next big thing.
Sandia’s business model is different from NVIDIA for sure.
> this work will explore how neuromorphic computing can be leveraged for the nation’s nuclear deterrence missions.
Wasn't that the plot of the movie War Games?
Calling 138240 TB of DRAM "storage-free" is... impressive.
It's volatile storage. It needs to connect to other systems in order to operate.
If it doesn’t have an OS, how does it…run? Is it just connected to a host machine and used like a giant GPU?
Do GPUs have OSs? Or is it the host computer that sets up memory with data and programs and starts the processing units running?
You can run software on bare metal without an OS. The downside is you have to write everything. That means drivers, networking code, the process abstraction (if you need it), etc.
One thing to remember is an operating system is just another computer program.
At that point, you'd just call that software stack an OS rather than declare the machine not to have one.
How does an OS "run"?
Imagine if this is actually Skynet and the apocalyptic AI is called Sandia instead :)
(Sandia means watermelon in Spanish)
When I first learned that, the prestigious-sounding "sandia national labs" became "watermelon national labs" and I couldn't help but laugh.
> the SpiNNaker 2’s highly parallel architecture has 48 SpiNNaker 2 chips per server board, each of which in turn carries 152 based cores and specialized accelerators.
NVIDIA step up your game. Now I want to run stuff on based cores.