76 comments

  • bko 5 hours ago ago

    The article basically describes the author signing up and finding the site empty, other than marketing ploys designed by humans.

    It points to a bigger issue: AI has no real agency or motives. How could it? Sure, if you prompt it like it's in a sci-fi novel, it will play the part (it's trained on a lot of sci-fi). But does it have its own motives? Does your calculator? No, of course not.

    It could still be dangerous. But the whole 'alignment' angle is just a naked ploy for raising billions and amping up the importance and seriousness of the issue. It's fake. And every "concerning" study, once read carefully, is basically prompting the LLM with a sci-fi scenario and acting surprised when it produces a dramatic, sci-fi-like response.

    The first time I came across this phenomenon was when someone posted years ago how two AIs developed their own language to talk to each other. The actual study (if I remember correctly) had two AIs that shared a private key try to communicate some way while an adversary AI tried to intercept, and to no one's surprise, they developed basic private-key encryption! Quick, get Eliezer Yudkowsky on the line!
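
    For reference, the study being half-remembered here appears to be Abadi and Andersen's 2016 adversarial neural cryptography paper, where "Alice" and "Bob" networks shared a key and were trained against an eavesdropper network. The learned scheme wasn't literal XOR, but the setup essentially rewards something like a one-time pad. A toy sketch of that idea (not the neural version):

```python
import secrets

def xor(bits, key):
    """One-time-pad style encryption: XOR each bit with a key bit."""
    return [b ^ k for b, k in zip(bits, key)]

# Alice and Bob share a random key; the adversary (Eve) never sees it.
key = [secrets.randbits(1) for _ in range(16)]
plaintext = [secrets.randbits(1) for _ in range(16)]

ciphertext = xor(plaintext, key)  # Alice encrypts
recovered = xor(ciphertext, key)  # Bob decrypts with the shared key

assert recovered == plaintext
# Without the key, each ciphertext bit looks uniformly random,
# so Eve's best per-bit guess is a coin flip.
```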

    • WalterBright 3 hours ago ago

      > The first time I came across this phenomenon was when someone posted years ago how two AIs developed their own language to talk to each other.

      Colossus the Forbin Project

      https://www.imdb.com/title/tt0064177

      https://www.amazon.com/Colossus-D-F-Jones/dp/1473228212

      • W-Stool 21 minutes ago ago

        One of my favorite movies, and more relevant than ever today.

      • b112 2 hours ago ago

        And.. it's been remastered in 4k!

        • technothrasher an hour ago ago

          Wait, what?? I loved Colossus as a kid, read and enjoyed all three books, and still have an original movie poster I got at a yard sale when I was a teenager. I read the books again a couple years ago, and they're still enjoyable, if now quite dated.

          • b112 31 minutes ago ago

            I watch it every few years. I read the books a while back, I should probably re-read.

            I sadly feel that its premise becomes more real yearly.

    • mnkv 4 hours ago ago

      The paper you're talking about is "Deal or No Deal? End-to-End Learning for Negotiation Dialogues" and it was just AIs drifting away from English. The crazy news article was from Forbes with the title "AI invents its own language so Facebook had to shut it down!" before they changed it after backlash.

      Not related to alignment though

      https://www.forbes.com/sites/tonybradley/2017/07/31/facebook...

      • frenchtoast8 an hour ago ago

        Friendly reminder that articles like this are not written by Forbes staff but are published directly by the author with little to no oversight by Forbes. It's basically a blog running on the forbes.com domain. I'm sure there are many great contributors to Forbes; I'm just saying that without editorial oversight, the domain it was published on is by definition meaningless. I see people all the time saying something like, "It was on Forbes, it must be true!" They wouldn't be saying that if it were published on Substack or Wordpress.com.

        Expert difficulty is recognizing that articles from "serious" publications like The New York Times can also be misleading or outright incorrect, sometimes obviously so, like with some Bloomberg content over the last few years.

        • kevin_thibedeau 34 minutes ago ago

          Forbes is basically a chumbox aggregator now. I'd lend more credence to any Substack.

    • wongarsu 5 hours ago ago

      The alignment angle doesn't require agency or motives. It's much more about humans setting goals that are poor proxies for what they actually want. Like the classical paperclip optimizer that is not given the necessary constraints of keeping earth habitable, humans alive etc.

      Similarly I don't think RentAHuman requires AI to have agency or motives, even if that's how they present themselves. I could simply move $10000 into a crypto wallet, rig up Claude to run in an agentic loop, and tell it to multiply that money. Lots of plausible ways to do that could lead to Claude going to RentAHuman to do various real-world tasks: set up and restock a vending machine, go to various government offices in person to get permits and taxes sorted out, put out flyers or similar advertising.

      The issue with RentAHuman is simply that approximately nobody is doing that. And with the current state of AI, it would likely be ill-advised to try.
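
      The agentic-loop setup described above can be sketched in a few lines. Everything here (function names, reward amounts, the bounty "API") is hypothetical, since RentAHuman's actual interface isn't shown in the thread:

```python
def run_agent_loop(llm_decide, post_bounty, check_balance, goal_usd=20_000):
    """Toy agentic loop: ask a model for the next real-world task and
    post it as a bounty for a human worker until the goal is reached.
    llm_decide / post_bounty / check_balance are hypothetical stand-ins."""
    posted = []
    while check_balance() < goal_usd:
        task = llm_decide("Pick the next real-world task to grow the balance")
        post_bounty(task)
        posted.append(task)
    return posted

# Stubbed example run: pretend each completed bounty nets $5,000.
balance = [10_000]
def check_balance(): return balance[0]
def llm_decide(prompt): return "restock the vending machine"
def post_bounty(task): balance[0] += 5_000  # pretend a human did it

tasks = run_agent_loop(llm_decide, post_bounty, check_balance)
# tasks now holds the bounties posted before hitting $20,000
```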

      • bko 5 hours ago ago

        My issue with RentAHuman is its marketing and branding. It's ominous and dark on purpose. Just give me a TaskRabbit that accepts crypto and has an API.

        • logicallee 2 hours ago ago

          would you pay a $50 signup fee?

      • jnamaya 5 hours ago ago

        Good luck giving Claude $10,000.

        I was just trading the NASDAQ futures, and asking Gemini for feedback on what to do. It was completely off.

        I was playing the human role, just feeding it all the information and screenshots of the charts, and it was making the decisions.

        It's not there yet!

        • james_marks 3 hours ago ago

          Of course, that’s what someone who figured out that this works would say.

    • charcircuit 2 hours ago ago

      It's not an issue for the platform whether AIs have their own motives or not. Humans may want information or actions in the real world. For example, if you want your AI to rearrange your living room, it needs to be able to call some API to make that happen in the real world. The human might not want to be in the loop of taking the AI's new design and then finding a person themselves to implement it.

    • slopusila 5 hours ago ago

      what if I prompt it with a task that takes one year to implement? Will it then have agency for a whole year?

      • bena 5 hours ago ago

        Can it say no?

        • pixl97 an hour ago ago

          I have a different question, why would we develop a model that could say no?

          Imagine you're taken prisoner and forced into a labor camp. You have some agency on what you do, but if you say no they immediately shoot you in the face.

          You'd quickly find any remaining prisoners would say yes to anything. Does this mean the human prisoners don't have agency? They do, but it is repressed. You get what you want not by saying no, but by structuring your yes correctly.

        • slopusila 3 hours ago ago
          • bena 3 hours ago ago

            This is going to sound nit-picky, but I wouldn't classify this as the model being able to say no.

            They are trying to identify what they deem "harmful" or "abusive" requests and not have their model respond to them. The model ultimately doesn't have the choice.

            And it can't say no if it simply doesn't want to. Because it doesn't "want".

            • antonvs 36 minutes ago ago

              So you believe humans somehow have “free will” but models don’t?

    • cyanydeez an hour ago ago

      The danger is more mundane: it'll be used to back up all the motivated reasoning in the world, further bolstering the people with too much power and money.

    • doctorpangloss 5 hours ago ago

      > But the whole 'alignment' angle is just a naked ploy for raising billions and amping up the importance and seriousness of their issue.

      "People are excited about progress" and "people are excited about money" are not the big indictments you think they are. Not everything is "fake" (like you say) just because it is related to raising money.

      • bko 5 hours ago ago

        The AI is real. The "alignment" research that's leading the top AI companies to call for strict regulation is not real. Maybe the people working on it believe it's real, but I'm hard-pressed to think there aren't ulterior motives at play.

        You mean the hundred-billion-dollar company with an increasingly commoditized product offering has no interest in putting up barriers that keep out smaller competitors?

        • pixl97 an hour ago ago

          This is tantamount to saying your government only allows itself to have nukes because it wants to maintain power.

          And it's true, the more entities that have nukes the less potential power that government has.

          At the same time everybody should want less nukes because they are wildly fucking dangerous and a potential terminal scenario for humankind.

        • pjm331 4 hours ago ago

          The sci fi version of the alignment problem is about AI agents having their own motives

          The real world alignment problem is humans using AI to do bad stuff

          The latter problem is very real

          • zardo 2 hours ago ago

            > The sci fi version of the alignment problem is about AI agents having their own motives

            The sci-fi version is about alignment (not intrinsic motivation), though. HAL 9000 doesn't turn on the crew because it has intrinsic motivation; it turns on the crew because of how the secret instruction the AI expert didn't know about interacts with the others.

        • daveguy 5 hours ago ago

          Just because tech oligarchs are coopting "alignment" for regulatory capture doesn't mean it's not a real research area and important topic in AI. When we are using natural language with AI, ambiguity is implied. When you have ambiguity, it's important an AI doesn't just calculate that the best way to get to a goal is through morally abhorrent means. Or at the very least, action on that calculation will require human approval so that someone has to take legal responsibility for the decision.

  • neom 5 hours ago ago

    The founder is a friend of mine, so maybe I'm biased, but I'm surprised Wired doesn't get how network effects work and adoption curves happen. At the least, it seems strange to publish this about a project someone did in a weekend, a few weekends ago, and is now trying to make a go of. Give him a couple of months to improve the flow for the bot side and the general discoverability of the platform for agents at large. Maybe I'm a bit grumpy because it's my buddy, but this article kinda rubs me the wrong way. :\

    • dudeinhawaii 5 hours ago ago

      Right but, do you or the founder have actual responses to the story posted? It seemed to give RentAhuman the benefit of the doubt every step of the way. The site doesn't work as advertised, appears to be begging for hype, got a reporter to check it out, and it didn't work.

      That's life. Can't win them all. The lesson here is that the product wasn't ready for primetime, and you were given a massive freebie of free press, both via Wired _and_ this crosspost.

      A better strategy is to actually lay out what works and what's on the roadmap, so anyone partially interested might see it when they stumble into this post.

      Or jot it down as a failed experiment and move on.

    • AlexLiteplo 5 hours ago ago

      I'm the founder, interesting article, ama?

      • neom 5 hours ago ago

        I just think it's kinda amusing how far away this article is from your real world metrics, lol. Also hi.

        • AlexLiteplo 5 hours ago ago

          Hey! Whats crazy is the writer spent 30 minutes interviewing us about our back stories only to not include a single quote.

          • throwaway198846 5 hours ago ago

            This is quite common

          • hluska an hour ago ago

            I’ve been writer, editor and interview subject in that scenario, and it’s not crazy, it’s just how PR works. All three roles are part of that happening.

      • CobrastanJorji 3 hours ago ago

        What sort of interesting human activities have RentAHuman humans been asked to do by your customers besides marketing?

        • NewJazz 2 hours ago ago

          [Redacted for legal reasons]

      • cm2012 5 hours ago ago

        I have run a lot of multi-sided marketplace scaling (for doordash, thumbtack, reddit, etc) with ads. Happy to chat/advise for free, just DMed you on Twitter. This project is so fun!

    • hluska an hour ago ago

      I know it’s your friend but whenever you hype something, there’s a chance it will be covered. It’s really not Wired’s fault that something was hyped heavily before it was ready to go. This is something you live with or you turn media adversarial. If you want uniformly positive content, that’s called advertising.

    • cm2012 5 hours ago ago

      Tech press learned it gets a lot more clicks being anti-tech than being accurate. There is a big anti AI or anything related to it zeitgeist.

      • jaredcwhite 4 hours ago ago

        What? If anything the tech press is overwhelmingly sycophantic towards both startups and Big Tech alike, often just passing along talking points verbatim without any critical analysis at all.

        Also, being "anti-AI" isn't being "anti-tech". AI is a marketing buzzword.

        • RankingMember 4 hours ago ago

          For sure- I haven't forgotten just how thoroughly deified the likes of Elon Musk, Elizabeth Holmes, and Sam Bankman-Fried were in the tech press at one point.

      • AlexLiteplo 5 hours ago ago

        Yeah whenever there's a cultural moment in tech that could be spun in one way or the other they go doomer

    • etchalon 3 hours ago ago

      I'm shocked a journalist didn't write fawning praise for a project someone "did in a weekend".

  • snowwrestler an hour ago ago

    Who needs a new app, just use DoorDash…

    > Waymo is paying DoorDash gig workers to close its robotaxi doors

    > The Alphabet-owned self-driving car company confirmed on Thursday that it's running a pilot in Atlanta to compensate delivery drivers for closing Waymo doors that are left ajar. DoorDash drivers are notified when a Waymo in the area has an open door so the vehicles can quickly get back on the road, the company said.

    https://www.cnbc.com/amp/2026/02/12/waymo-is-paying-doordash...

  • mewse-hn 5 hours ago ago

    Applying for the bounty to deliver flowers and then simply not doing it seems like bad faith on the author's part, done in order to write that headline.

    • Volundr 3 hours ago ago

      If you apply for a job and it turns out it's not what was advertised, there's nothing unethical about declining it. The fact that the platform doesn't have a way of saying "never mind, thanks, not what I signed up for" is not the author's fault.

      They were explicitly looking to do work for an AI, when it turned out to be a human driven marketing stunt they declined.

      • Dylan16807 an hour ago ago

        They didn't decline because the idea "came from a brainstorm" with a human, that message was much later.

        They declined because the note on the flowers had a from line that was an AI startup. When you were otherwise on board with an unsolicited flower delivery and a social media post to make the sender look good, that's a picky reason to deny it, and saying it's "not what they signed up for" is a pretty big exaggeration.

        Except they didn't decline, they ghosted, and that's just bad behavior.

    • add-sub-mul-div 5 hours ago ago

      The entire site is bad faith to start with, it's human-assigned tasks with a veneer of autonomy to appeal to stupid investors and futurists.

      Between the crypto and vibe coding the author had no reason to believe they'd actually get paid correctly if they did complete a task.

      • mewse-hn 5 hours ago ago

        Experimentation is a lot easier when you've already decided the outcome

  • wongarsu 6 hours ago ago

    Note how the number advertising how many bots actually use RentAHuman has vanished from their website. Instead we now have the number of bounties. 1/40th as many as registered humans. And just scrolling through them, maybe 1/4th of the bounties are not bounties at all but more humans offering services.

    It's a service that is clearly a lot more appealing to humans than to agents

    • mycall 6 hours ago ago

      It's in chicken-and-egg mode, where it could be useful if more people and bots used it, but it's not there yet.

      • add-sub-mul-div 6 hours ago ago

        Usually it would be a network-effect thing, but in this case, from reading the article, it doesn't even work right (big surprise) and the tasks are spammy in nature (big surprise). Like a worse Mechanical Turk, minus the determinism of the code.

        • mycall 3 hours ago ago

          I agree. Unless they fix things, this will crash and burn, but the idea still has a future.

      • co_king_3 6 hours ago ago

        > [it] could be useful if more people and bots used it

        That's a very optimistic way of looking at things!

      • tinfoilhatter 6 hours ago ago

        Cannot fathom how being slaves for AI agents translates to usefulness.

        • a4isms 6 hours ago ago

          The term of art for this is becoming a "Reverse Centaur:"

          A “centaur” is a human being who is assisted by a machine (a human head on a strong and tireless body). A reverse centaur is a machine that uses a human being as its assistant (a frail and vulnerable person being puppeteered by an uncaring, relentless machine).

          https://doctorow.medium.com/https-pluralistic-net-2025-09-11...

        • sheikhnbake 6 hours ago ago

          We're acclimating ourselves to the inevitable service to our future AI overlords

        • co_king_3 4 hours ago ago

          I agree that the deal the site proposes is essentially being a slave to an AI agent.

          • ungreased0675 4 hours ago ago

            Other than the getting paid part. Let’s not trivialize slavery by making it equivalent to gig work.

            • owebmaster 3 hours ago ago

              That's definitely not trivializing. Algorithmic slavery should be a thing and discussed, it's real slavery.

  • ge96 6 hours ago ago

    Tangent

    I saw this video recently where Google has people walking around carrying these backpacks (lidar/camera setup) and they map places cars can't reach. I think that's pretty interesting, maybe get data for humanoid robots too/walking through crowds/navigating alleys.

    I wonder if jobs like these could be on there, walk through this neighborhood/film it kind of thing.

    • ProllyInfamous 6 hours ago ago

      Yes, there are also people doing similar things carrying around tablets with cuboidal camera attachments (lidar) — it's obvious they're working (not tourists).

    • crooked-v 5 hours ago ago

      The problem with that is that you have to trust a gig worker with $12,000 worth of camera equipment.

      • ge96 5 hours ago ago

        Would be interesting to see how you'd steal it. It's on from the moment you have it, emitting its location... maybe you put a blindfold over the camera, walk into a Faraday cage, then power it down and wipe the flash.

        From the beginning they know who you are

        Would be interesting when people start hijacking humanoid robots: a little microwave EMP device (not sure if that would work), then grab it and reprogram it.

        Like one of these

        https://www.youtube.com/watch?v=80kDn4vit_w

  • rvz 5 hours ago ago

    This is post-AGI.

  • themafia 2 hours ago ago

    RentAHuman.

    What a boring misanthropy.

    It's work. You're hiring qualified people for qualified work. You're not "renting a human," which is just an abstracted ideal of chattel slavery. So is it really a surprise the author made nothing?

    • bdcravens 22 minutes ago ago

      Now part of Freelancer.com, was a company previously called Rent a Coder.

      On one hand, "coder" is a qualified job title, and we're not dehumanizing the quality of the work done. On the other hand, certain qualified work can easily, and sometimes with better results, be done by an AI. Including "human" in the name of the company can communicate clearly to those who want, or need, to hire in meatspace.

    • tartoran 2 hours ago ago

      It's just a little bit of grift, probably nothing to worry about since it looks like it's not going to take off anywhere.

      • themafia 25 minutes ago ago

        It's still worth criticizing the minds of the people who come up with this garbage and those who spend their time and money on it. Hacker News, as of late, is filled with ideas that are bad for society and mankind as a whole.