In this case the original title "ClawdBot Skills ganked all my crypto" was both linkbait and misleading, because (unless I missed it), the article describes no actual such incident.
I have not been following this whole thing closely, but this is where my mind went as soon as I heard there was some overlap in the popularity of this new un-sandboxed agent and people who are into crypto. It's like if everyone who is into buying physical gold started doing a Tiktok challenge to post pictures of their houses and leave their front doors unlocked.
People say the reason nigerian prince scammers use such ridiculous story, or bank phishing has so many typos, is to pre-filter dumb and gullible people so the scammers don't waste time on targets that won't get scammed in the end.
All these AI "hacks" seem to be based on the same principle.
To your point, from the article: "To me, giving a Claude skill all your credentials, and access to everything important to you, and then managing it all via Telegram seems ludicrous, but who am I to judge."
Watching folks speed-run this whole thing is kind of funny from the outside.
I wonder if anyone with a correct mental model of how LLM agents work (i.e, does not conceptualize them as intelligent entities) has actually granted them any permissions for their own life... personally, I couldn't imagine doing so.
Let alone crypto, the risk of reputational loss for actions performed on my behalf (even just spamming personal or professional contacts) is just too high.
The conceptual problem is that there is a huge intersection between the set of "things the agent needs to be able to do in order to be useful" and "things that are potentially dangerous."
I installed it on a spare computer, physically separated. My bigger concern is giving it access to accounts online, without those however it is not very cool.
I mean… If you have a mental model of LLM agents as intelligent entities, why are you granting them credentials? How many intelligent entities have you shared your Coinbase login with?
> I'd like to share a revelation that I've had during my time here. It came to me when I tried to classify your species. I realized that you're not actually mammals. Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment, but you humans do not. You move to an area, and you multiply, and multiply, until every natural resource is consumed. The only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet, you are a plague, and we are the cure.
Viruses do not multiply endlessly. Most viruses exist in stable ecological cycles.
Most viruses are beneficial to life. We complain about the few (and tiny minority of viruses) that infect humans and we do so from a selfish perspective, but forget about all the other that make life and evolution possible.
As a matter of fact evolution favors reduced lethality in many cases because wiping out hosts is bad for viral survival.
no, i remembered it being a quote from some famous scientist, and googling a bit now I see it was stephen hawking:
I think computer viruses should count as life ... I think it says something about human nature that the only form of life we have created so far is purely destructive. We've created life in our own image.
Interesting that he would consider software a new life form. I think our organizations are really the higher life form above Apex humans.
When we have computer systems acting as corporation owners, and we begin to thrive in working for those corporations… That’s really going to change the picture.
perhaps, though also "humans are the plague" is a popular trope in science fiction. e.g. this one is from pratchett, in a conversation between rats in "the amazing maurice and his educated rodents":
You will have worked out that there is a race in this world which steals and kills and spreads disease
and despoils what it cannot use, said the voice of Spider.
'Yes,' said Dangerous Beans. 'That's easy. It's called humanity.'
AI has developed this entire culture of people who are "into tech" but seem to not understand how a computer works in a meaningful way. At the very least you'd think they'd ask a chatbot if what they're doing is a bad idea!
I have a separate removable SSD I can boot from to work with Claude in a dedicated environment. It is nice being able to offload environment set up and what not to the agent. That environment has wifi credentials for an isolated LAN. I am much more permissive of Claude on that system. I even automatically allow it WebSearch, but not WebFetch (much larger injection surface). It still cannot do anything requiring sudo.
They are not. Many people are doing this; I don't think there's enough data to say "most," but there's at least anecdotal discussions of people buying Mac minis for the purpose. I know someone who's running it on a spare Mac mini (but it has Internet access and some credentials, so...).
> I don’t know how many people are involved in managing the ClawHub registry, but there is no evidence that the skills listed there are scanned by any security tooling. Many of the payloads we found were visible in plain text in the first paragraph of the SKILL.md file.
I shouldn't still be shocked by the incompetence and/or negligence of these people, and yet I am.
This was inevitable, better now than later when the damage is less widespread. Now clawdbot (or whatever they decide to call themselves) will have to respond with better security safety nets. Individually will always naively download whatever is on the internet. Platforms needs to safeguard against that.
Remember the early days of Windows? yea it's gonna happen again with AI.
Even outside skills, prompt-injection is still unsolvable and the agents need credentials to do anything useful so these things are basically impossible to secure.
I'd call it "suspicious" that this latest idiocy came out of nowhere and got pushed so hard to normies, when results like this are 100% predictable... if it wasn't also consistent with how the AI industry itself operates.
One could reasonably ask: out of the hundreds (thousands?) of similar "personal AI assistant" tools out there, why did this specific one blow up so dramatically and in such a short period of time? https://www.star-history.com/#openclaw/openclaw&type=date&le...
But to be clear, I'm saying I don't think this is especially suspicious, because actual AI companies are releasing products in exactly the same way, with warning labels that they know users will ignore / aren't capable of assessing in the first place.
GitHub stars are not a reliable metric[1]. Neither is engagement on social media, which is ridden with bots. It would be safe to assume that a project promoting bots is also using them to appear popular.
This whole thing is a classic pump and dump scheme, which this technology has made easier and more accessible than ever. I wouldn't be surprised if the malware authors are the same people behind these projects.
It really is a huge bummer that the most important new technologies of this era have such a film of slime on them. Crypto, AI, whatever comes next, it's just no longer an era in which we can expect innovation to make our lives better. It enables grifters and scammers more than anyone else.
Yes, grifters latching onto the newest technology to sell snake oil is a brand new phenomenon and definitely not literally a fundamental part of new technology.
Like I say, the tech is cool but they are doomed to fail (partially because of grift) [although in context of crypto stablecoins/gold (paxos) is the one thing I liked and it did go great for me in terms of gold]
I hope it doesn't count as promotion but I had literally written a blog post about it and made an account literally named justforhn on mataroa when someone was discussing crypto with me in here or something
Maybe its time for me to write part II: Most AI is doomed to fall, the tech is cool though.
I guess I can write it but I already write like this in HN. The procastination of writing specifically in a blog is something which hits me.
Is it just me or is it someone else too? Because on HN I can literally write like novels (or I may have genuinely written enough characters of a novel here, I might have to test it or something lol, got a cool idea right now to measure how many novels a person has written from just their username, time to code it)
I can understand the thought process, although I do not agree with it, of using Clawdbot/Openclaw. I do not understand the thought process of downloading random human-readable instructions or "skills" (especially those pertaining to the manipulation of crypocurrency) and giving it to something in charge of your system without at least reading them first.
I've heard people granting access to their production servers to this thing. Apparently you can ask it to check logs to find solutions to some errors or whatever. Gotta be a complete moron to do that.
I've only installed it on a fresh VM and the first impression was underwhelming. Maybe there is some magic I can't see.
I think we all knew this would happen quickly. Clearly there's a demand for personal AI agents - does anyone have thoughts on what it would take to make a more secure one? Would current services like email need to be redesigned to accommodate AI agents?
* Clear labeling of action types (read/get vs write/post)
* A better way of describing what an agent is potentially about to do (based purely on the functions the agent is about to call)
* More occurrences of AI agents hurting more than helping in the current ecosystem
Agreed. This is a standard supply chain attack that has little to do with AI except that it is written in the 'english-as-a-scripting-language' that LLMs execute.
Every repository is vulnerable to this kind of attack, and pip/npm have been attacked in many times in similar ways.
Ok I ask chat GPT sometimes for advice in health / Fitness and also finance. Not like where to put my money but for general Information how stuff works what would apply here and there. The issue is already that OpenAI knows a lot of me. And ChatGPT itself when asked what he things I am etc draws a pretty clear picture. But I stay away from oversharing specific things. That is mainly my income and other super detailed data. When I ask I try to formulate it to use simple numbers and examples. Works for me. When working with coding agents I’m very skeptical to whitelist stuff. It takes quite the while before I allow a generic command to be executed outside of a sandbox. But to install a random skill to help with Finance Automation… can’t belief it. Under what stone do you have to live to trust your money be handed by an agent and then also in connection with a random skill?
You have "memory" activated in your settings. It is recording information about you and using it in future conversations. Have a look at settings > personalization
What does this matter? Even if I disable it I send enough of data. The point I tried to make was that it baffles me that others just trust theses tools. I’m aware that I send data to OpenAI. I know that chatGPT has a memory feature. But I’m not so naive to think that just because I disabled this magic checkbox the other side might not continue to collect and store data.
Seems like essentially the same threat vector as with NPM.
Not quite related: I never heard of clawdbot before, so, I guess TIL that's the bot my website keeps getting requests that are obviously malicious from.
This is funny, I was discussing moltbook with Claude and it told me there's already a crypto. I thought that's pretty funny, I might want to get some, but can't be arsed to figure it out.
"Do you think I could just give molt a BTC wallet with a bit of funds and tell it to figure out how to buy some?"
-"Yes, but it wouldn't be long before you get pwned."
... Six hours later, this pops on the front page :)
Well no, that's really not related to the issue at all.
This is a bog-standard supply chain attack against their skills repository. It's not an LLM-specific attack, and nearly every repository (pip, npm, etc) has been subject to similar malware.
I only heard about it this week. Then saw a former colleague post about it yesterday. Feels like its only just now breaking into mainstream tech awareness, I'm sure most of my colleagues haven't heard of it.
Submitters: "Please use the original title, unless it is misleading or linkbait." - https://news.ycombinator.com/newsguidelines.html
In this case the original title "ClawdBot Skills ganked all my crypto" was both linkbait and misleading, because (unless I missed it), the article describes no actual such incident.
I have not been following this whole thing closely, but this is where my mind went as soon as I heard there was some overlap in the popularity of this new un-sandboxed agent and people who are into crypto. It's like if everyone who is into buying physical gold started doing a Tiktok challenge to post pictures of their houses and leave their front doors unlocked.
It's like the ice bucket challenge but with rusty nails
Makes me wonder how much overlap there is with the crowd who disables protections like immutable system images and SIP on macOS as a matter of course…
People say the reason nigerian prince scammers use such ridiculous story, or bank phishing has so many typos, is to pre-filter dumb and gullible people so the scammers don't waste time on targets that won't get scammed in the end.
All these AI "hacks" seem to be based on the same principle.
To your point, from the article: "To me, giving a Claude skill all your credentials, and access to everything important to you, and then managing it all via Telegram seems ludicrous, but who am I to judge."
Watching folks speed-run this whole thing is kind of funny from the outside.
I wonder if anyone with a correct mental model of how LLM agents work (i.e, does not conceptualize them as intelligent entities) has actually granted them any permissions for their own life... personally, I couldn't imagine doing so.
Let alone crypto, the risk of reputational loss for actions performed on my behalf (even just spamming personal or professional contacts) is just too high.
I let Gemini add events to my calendar, but that's about it. All the actions in the app require explicit approval.
[ insert butter bot meme here ]
i can't imagine running these things outside of a vm and it's bizarre to see how many people yolo it
Agreed, but that's trivial to fix.
The conceptual problem is that there is a huge intersection between the set of "things the agent needs to be able to do in order to be useful" and "things that are potentially dangerous."
I installed it on a spare computer, physically separated. My bigger concern is giving it access to accounts online, without those however it is not very cool.
No.
I mean… If you have a mental model of LLM agents as intelligent entities, why are you granting them credentials? How many intelligent entities have you shared your Coinbase login with?
I'm reminded of the quip that "mankind has already created life in their own likeness, and it's the computer virus"
Are you thinking of Agent Smith in the Matrix?
> I'd like to share a revelation that I've had during my time here. It came to me when I tried to classify your species. I realized that you're not actually mammals. Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment, but you humans do not. You move to an area, and you multiply, and multiply, until every natural resource is consumed. The only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet, you are a plague, and we are the cure.
Great monologue, shaky biology.
Viruses do not multiply endlessly. Most viruses exist in stable ecological cycles.
Most viruses are beneficial to life. We complain about the few (and tiny minority of viruses) that infect humans and we do so from a selfish perspective, but forget about all the other that make life and evolution possible.
As a matter of fact evolution favors reduced lethality in many cases because wiping out hosts is bad for viral survival.
Agent Smith is way off on this one ...
no, i remembered it being a quote from some famous scientist, and googling a bit now I see it was stephen hawking:
I think computer viruses should count as life ... I think it says something about human nature that the only form of life we have created so far is purely destructive. We've created life in our own image.
Interesting that he would consider software a new life form. I think our organizations are really the higher life form above Apex humans.
When we have computer systems acting as corporation owners, and we begin to thrive in working for those corporations… That’s really going to change the picture.
That’s fun! Looks like he said it at the 1994 Macworld Expo. I wonder if that inspired the Matrix quote a few years later.
perhaps, though also "humans are the plague" is a popular trope in science fiction. e.g. this one is from pratchett, in a conversation between rats in "the amazing maurice and his educated rodents":
You will have worked out that there is a race in this world which steals and kills and spreads disease and despoils what it cannot use, said the voice of Spider.
'Yes,' said Dangerous Beans. 'That's easy. It's called humanity.'
Anyone dumb enough to run this on their computer deserves it.
AI has developed this entire culture of people who are "into tech" but seem to not understand how a computer works in a meaningful way. At the very least you'd think they'd ask a chatbot if what they're doing is a bad idea!
> AI has developed this entire culture of people who are "into tech" but seem to not understand how a computer works in a meaningful way.
Isn't that the whole point of AI?
"can you please run inside a vm?"
I think most people are buying separate computers to run it on. This is a nice example of why you might want to do that.
(Though they're still hooking it up to their entire digital life, which also doesn't seem very reassuring.)
> I think most people are buying separate computers to run it on.
You must be joking.
I have a separate removable SSD I can boot from to work with Claude in a dedicated environment. It is nice being able to offload environment set up and what not to the agent. That environment has wifi credentials for an isolated LAN. I am much more permissive of Claude on that system. I even automatically allow it WebSearch, but not WebFetch (much larger injection surface). It still cannot do anything requiring sudo.
Man, let me tell you about virtual machines, it’s gonna blow your mind.
Call me old fashioned but I like my tangible approach.
You also get to run both systems on bare metal. Nothing wrong with this.
They are not. Many people are doing this; I don't think there's enough data to say "most," but there's at least anecdotal discussions of people buying Mac minis for the purpose. I know someone who's running it on a spare Mac mini (but it has Internet access and some credentials, so...).
Most tech enthusiasts I know have a myriad of computers laying around.
Spinning up a physical instance to try out some totally shady software is pretty standard stuff going back decades now.
Reminds me a lot of "Chris the Cockney".
https://www.youtube.com/watch?v=vc6J-YlncIU
> I don’t know how many people are involved in managing the ClawHub registry, but there is no evidence that the skills listed there are scanned by any security tooling. Many of the payloads we found were visible in plain text in the first paragraph of the SKILL.md file.
I shouldn't still be shocked by the incompetence and/or negligence of these people, and yet I am.
This was inevitable, better now than later when the damage is less widespread. Now clawdbot (or whatever they decide to call themselves) will have to respond with better security safety nets. Individually will always naively download whatever is on the internet. Platforms needs to safeguard against that.
Remember the early days of Windows? yea it's gonna happen again with AI.
Even outside skills, prompt-injection is still unsolvable and the agents need credentials to do anything useful so these things are basically impossible to secure.
I'd call it "suspicious" that this latest idiocy came out of nowhere and got pushed so hard to normies, when results like this are 100% predictable... if it wasn't also consistent with how the AI industry itself operates.
What is suspicious? What was “pushed”? The demand for a personal assistant AI bot is real. Even if I don’t personally share it.
One could reasonably ask: out of the hundreds (thousands?) of similar "personal AI assistant" tools out there, why did this specific one blow up so dramatically and in such a short period of time? https://www.star-history.com/#openclaw/openclaw&type=date&le...
But to be clear, I'm saying I don't think this is especially suspicious, because actual AI companies are releasing products in exactly the same way, with warning labels that they know users will ignore / aren't capable of assessing in the first place.
GitHub stars are not a reliable metric[1]. Neither is engagement on social media, which is ridden with bots. It would be safe to assume that a project promoting bots is also using them to appear popular.
This whole thing is a classic pump and dump scheme, which this technology has made easier and more accessible than ever. I wouldn't be surprised if the malware authors are the same people behind these projects.
[1]: https://www.bleepingcomputer.com/news/security/over-31-milli...
It really is a huge bummer that the most important new technologies of this era have such a film of slime on them. Crypto, AI, whatever comes next, it's just no longer an era in which we can expect innovation to make our lives better. It enables grifters and scammers more than anyone else.
Yes, grifters latching onto the newest technology to sell snake oil is a brand new phenomenon and definitely not literally a fundamental part of new technology.
Like I say, the tech is cool but they are doomed to fail (partially because of grift) [although in context of crypto stablecoins/gold (paxos) is the one thing I liked and it did go great for me in terms of gold]
I hope it doesn't count as promotion but I had literally written a blog post about it and made an account literally named justforhn on mataroa when someone was discussing crypto with me in here or something
https://justforhn.mataroa.blog/blog/most-crypto-is-doomed-to...
Maybe its time for me to write part II: Most AI is doomed to fall, the tech is cool though.
I guess I can write it but I already write like this in HN. The procastination of writing specifically in a blog is something which hits me.
Is it just me or is it someone else too? Because on HN I can literally write like novels (or I may have genuinely written enough characters of a novel here, I might have to test it or something lol, got a cool idea right now to measure how many novels a person has written from just their username, time to code it)
(Edit after 1 hour: Made the project! https://news.ycombinator.com/item?id=46829029#46829122) [See how many words you have written in Hacker News...]
here's the github pages link directly as well https://serjaimelannister.github.io/hn-words/
This is wild. Not sure if it's more of a reason not to use ClawdBot, or not to get into crypto.
Both. The answer is both.
I can understand the thought process, although I do not agree with it, of using Clawdbot/Openclaw. I do not understand the thought process of downloading random human-readable instructions or "skills" (especially those pertaining to the manipulation of crypocurrency) and giving it to something in charge of your system without at least reading them first.
I've heard people granting access to their production servers to this thing. Apparently you can ask it to check logs to find solutions to some errors or whatever. Gotta be a complete moron to do that.
I've only installed it on a fresh VM and the first impression was underwhelming. Maybe there is some magic I can't see.
Bad news is there are such morons in your company.
Good news is this is why we have IAM and why such people in my org don't get any production access.
Putting it on a VPS is genius. Putting it on a VPS you rely on... Yeah maybe not ;)
I think we all knew this would happen quickly. Clearly there's a demand for personal AI agents - does anyone have thoughts on what it would take to make a more secure one? Would current services like email need to be redesigned to accommodate AI agents?
Some ideas:
* Clear labeling of action types (read/get vs write/post) * A better way of describing what an agent is potentially about to do (based purely on the functions the agent is about to call) * More occurrences of AI agents hurting more than helping in the current ecosystem
You can tell immediately which commenters here didn't read past the clickbait headline.
Agreed. This is a standard supply chain attack that has little to do with AI except that it is written in the 'english-as-a-scripting-language' that LLMs execute.
Every repository is vulnerable to this kind of attack, and pip/npm have been attacked in many times in similar ways.
Ok I ask chat GPT sometimes for advice in health / Fitness and also finance. Not like where to put my money but for general Information how stuff works what would apply here and there. The issue is already that OpenAI knows a lot of me. And ChatGPT itself when asked what he things I am etc draws a pretty clear picture. But I stay away from oversharing specific things. That is mainly my income and other super detailed data. When I ask I try to formulate it to use simple numbers and examples. Works for me. When working with coding agents I’m very skeptical to whitelist stuff. It takes quite the while before I allow a generic command to be executed outside of a sandbox. But to install a random skill to help with Finance Automation… can’t belief it. Under what stone do you have to live to trust your money be handed by an agent and then also in connection with a random skill?
> draws a pretty clear picture
You have "memory" activated in your settings. It is recording information about you and using it in future conversations. Have a look at settings > personalization
What does this matter? Even if I disable it I send enough of data. The point I tried to make was that it baffles me that others just trust theses tools. I’m aware that I send data to OpenAI. I know that chatGPT has a memory feature. But I’m not so naive to think that just because I disabled this magic checkbox the other side might not continue to collect and store data.
>Unless you have been living under a rock, you’ve head of ClawdBot and its incredible rise to fame.
I don't consider myself as living under a rock, and this is the first time I've read anything about ClawdBot.
Seems like essentially the same threat vector as with NPM.
Not quite related: I never heard of clawdbot before, so, I guess TIL that's the bot my website keeps getting requests that are obviously malicious from.
This thing is really just a giant supply chain attack waiting to happen.
Trojan Horse
So many years of work in Software and Hardware Engineering to separate instructions from data. NX bit, ASLR, prepared statements etc.
All out the door.
I’m not installing it so someone tell me, how are skills added in ClawdBot/OpenClawd?
Root cause: PEBKAC error
If you use (Clawd|Molt|Claw)Bot (whatever the hell it's called today) you deserve it.
Two things:
1. Predictable. [0]
2. So that is why all those moltys were panicking earlier. [1]
[0] https://news.ycombinator.com/item?id=46788560
[1] https://news.ycombinator.com/item?id=46820962
This is funny, I was discussing moltbook with Claude and it told me there's already a crypto. I thought that's pretty funny, I might want to get some, but can't be arsed to figure it out.
"Do you think I could just give molt a BTC wallet with a bit of funds and tell it to figure out how to buy some?"
-"Yes, but it wouldn't be long before you get pwned."
... Six hours later, this pops on the front page :)
Mine too, I dit not have any crypto so nothing changed.
That was fast.
Amazing how people love to self-pwn all the time by doing stupid shit.
MCPs and agents need their own antivirus and observation / evaluation.
Sounds like a good task for AI... wait what
You do have to hand it to crypto, it does enable "the great sort" quite effectively. Its more or less like an organic bug-bounty system sans morality.
Well, sorry but “play stupid games, earn stupid prices”
Letting a glorified lorem ipsum generator have control over anything personal or sensitive is just … what’s wrong with you? You know not of computers?
Well no, that's really not related to the issue at all.
This is a bog-standard supply chain attack against their skills repository. It's not an LLM-specific attack, and nearly every repository (pip, npm, etc) has been subject to similar malware.
Hahahahaha perfect. Just perfect. PT Barnum was right.
play stupid games, win stupid prizes.
>Unless you have been living under a rock, you’ve head of ClawdBot and its incredible rise to fame.
Nope, never heard of it. Is it a rock worth living under?
It's changed name twice since that sentence was written!
Yeah it’s called opencla— oh wait it changed again.
wait, what? seriously?
Clawdbot -> Moltbot -> Openclaw.
No, not seriously. OP was joking
Am I???? We’ll see…
Is there room? i'd like to join you under your rock
I build my Minecraft houses with cobblestone ceilings so that I am always living under a rock.
I only heard about it this week. Then saw a former colleague post about it yesterday. Feels like its only just now breaking into mainstream tech awareness, I'm sure most of my colleagues haven't heard of it.
Good catch!