I found this[1] article to give a nice overview of spiking neural networks and their connection to the more "traditional" neural networks of modern fame.
In particular, the link between the typical weighted-sum-plus-activation-function unit and a simplistic spiking model in which the output is simply the spiking rate was illuminating (section 3).
[1]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9313413/ Spiking Neural Networks and Their Applications: A Review
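The section-3 connection can be sketched in a few lines (a toy model of my own, not from the article): drive a leaky integrate-and-fire neuron with a constant current standing in for the weighted sum, and its firing rate behaves like a monotone, saturating activation function. All constants here are illustrative.

```python
def lif_rate(i_in, t_sim=1.0, dt=1e-3, tau=0.02, v_th=1.0):
    """Firing rate (Hz) of a leaky integrate-and-fire neuron
    driven by a constant input current i_in."""
    v, spikes = 0.0, 0
    for _ in range(int(t_sim / dt)):
        v += dt / tau * (i_in - v)   # leaky integration toward i_in
        if v >= v_th:                # threshold crossing -> spike
            spikes += 1
            v = 0.0                  # reset membrane potential
    return spikes / t_sim

# Below threshold the rate is exactly zero; above it, the rate grows
# sublinearly with the input, much like a rectified, saturating activation.
weighted_sum = 2.0   # stand-in for w . x + b
print(lif_rate(weighted_sum))
```

Reading the output rate as the unit's activation is exactly the rate-coding view the article describes: the spike dynamics disappear and only a scalar input-to-rate curve remains.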
I wonder how well this model can typecheck Ruby code.
Context: Sorbet is also the name of a popular Ruby type checker[1], built by Stripe.
[1]: https://sorbet.org
Realistically only at static time
Can’t wait for the follow up paper, RBS
There's no code or weights released => no way to reproduce their results.
They may not have linked the source in the paper, but it's not hard to find: https://github.com/Kaiwen-Tang/Sorbet
Haven't checked if there's enough there to build it.
Thank you very much for posting this! The code is indeed there, that's great.
Results get reproduced without code or weights all the time. I note that in this case training data and evaluation benchmarks are public.
The ML community has historically held itself to a higher standard, IME.
sometimes it seems folks are just making up words.
neuromorphic hardware is just hardware that has biologically inspired designs.
spiking neural networks are artificial neural networks that actually simulate the dynamics of spiking neurons. rather than sums, ramps and squashing, they simulate actual spike trains and the integration of energy that occurs in the dendrites.
neuromorphic hardware can range from specialized asics for doing these simulations efficiently to more experimental hybrid analog-digital systems that use analog elements to do more of the computation.
it's all very cool stuff, but i tend to think of snns as akin to the bat-imitating wings on the avion 3, while simplified unit functions look more like a modern jet wing.
but who knows, maybe the neuromorphic route will open the door to far more efficient computations. personally, i'm very excited about potential wins that could come from novel computational substrates!
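To make the "spike trains plus integration" point above concrete, here's a minimal sketch (my own toy code; the rates, weights, and time constant are made-up values): a leaky integrate-and-fire neuron accumulates weighted input spikes and fires whenever its membrane potential crosses a threshold.

```python
import numpy as np

def lif_respond(pre, w, tau=20.0, v_th=1.0):
    """Output spike times of a leaky integrate-and-fire neuron receiving
    binary input spike trains `pre` (synapses x time bins) through
    synaptic weights `w`. One time bin ~ 1 ms."""
    v, out = 0.0, []
    decay = np.exp(-1.0 / tau)          # membrane leak per bin
    for t in range(pre.shape[1]):
        v = v * decay + w @ pre[:, t]   # leak, then integrate incoming spikes
        if v >= v_th:                   # threshold crossing -> spike + reset
            out.append(t)
            v = 0.0
    return out

rng = np.random.default_rng(0)
# two Poisson-ish presynaptic trains over 500 one-ms bins
pre = rng.random((2, 500)) < np.array([[0.05], [0.02]])
spikes = lif_respond(pre, np.array([0.6, 0.3]))
print(f"{len(spikes)} output spikes in 500 ms")
```

This is the key contrast with a conventional unit: the output is a sequence of spike times shaped by the input's temporal structure, not a single number computed from a sum.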
I wonder how far we will move the goalposts once we have a multimodal transformer type model running on neuromorphic hardware.
Lots of people involved in explaining away AI are labouring under the axiom that intelligence is mysterious. Therefore, if I can understand how a system works, it logically follows that it can't be intelligent.
I predict that many of those people will continue to believe that until human cognition is mechanistically understood, at which point there will be some other reason that humans are "real" thinkers and machines are not. The problem is that principled opposition to the very possibility of AI is incompatible with materialism, and so it doesn't fit our world, which is very much built on the scientific truths that materialism lets us discover.
It is insane to me that views of consciousness and cognition other than physicalism still exist in mainstream scientific and philosophical discourse. As far as I can tell, no matter how much discourse you dress it up in, any alternative boils down to "it's magic, I ain't gotta explain shit".
We… can’t really understand how neural networks work, but we can definitely tell they’re not intelligent beyond producing good-sounding word soup (as demonstrated by their minimal practical reasoning abilities).
I wouldn’t call pagerank intelligent, even though I can give it a text prompt and get relevant information back.
In my view, the only difference between that and an llm is the natural language interface.
I’m no expert on intelligence, but I’d expect being able to introspect and continually learn to be part of it.
You're engaging in explaining away intelligence.
One way to help you notice this is to try and estimate how many billions of people you've defined out of "being intelligent" with your latest goalpost movement.
Be honest, how many people do you think "introspect and continually learn" on a daily basis?
> Be honest, how many people do you think "introspect and continually learn" on a daily basis?
That's wild if you think that isn't quite literally one of the defining features of human consciousness (and many would say other animals as well).
If you think people thinking differently than you means they don't still indeed...think...then I don't know what tell you.
Unfortunately for the intelligence denial crowd, introspection and learning capability is something we can measure, as opposed to the vibes-based discourse you prefer to engage in. If that's what you've picked for your threshold of "intelligence", you've reduced the majority of the bell curve's left side to soulless automatons. Again, your definition, not mine.
> One way to help you notice this is to try and estimate how many billions of people you've defined out of "being intelligent" with your latest goalpost movement.
love this, I will use this in future rants.
the goalposts for what?
Literal goalpost in a game of football of course. Or soccer if you are an American.
To be fair, all words are made up.
Words are useful to the extent they effectively communicate with the intended audience.
This can be accomplished by a mix of familiarity (has this word been already used enough in the target audience with the intended meaning) and the ability to evoke new meanings by intuitive derivation rules (word composition, affixes, ...)
In the case of this title, fwiw, it was perfectly clear to me what this was about because I'm already familiar with related topics and they were using the same terminology
And even with a willingness to make up words, it’s STILL hard to name tech projects uniquely: https://github.com/sorbet/sorbet
I’m flashing back to bapi
I definitely know what "A" and "model" mean.