This might actually be quite nice - the Blender Python API is currently very useful and very touchy. Lots of differences in behavior in headless mode which are hard to debug (because you can't open the GUI to see what's happening, because that changes the behavior).
Yes the blender API feels like it sits on top of the GUI rather than the GUI on top of the API. When you are writing scripts in the blender api you basically mechanically describe the steps you would take in the UI. It can be a little fragile at times.
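To illustrate the difference (a minimal sketch; bpy only exists inside Blender, so the import is guarded here purely so the snippet can be read outside it):

```python
# Sketch of the two scripting styles in Blender's Python API.
# bpy is only available inside Blender, hence the guarded import.
try:
    import bpy
except ImportError:
    bpy = None  # not running inside Blender

def add_cube_operator_style():
    """Operator style: mechanically mimics the UI. Operators depend on the
    current context and selection, which is one reason scripts behave
    differently in headless mode."""
    bpy.ops.mesh.primitive_cube_add(size=2.0, location=(0.0, 0.0, 0.0))

def add_cube_data_style(name="Cube"):
    """Data-API style: builds the same object directly, with no UI context
    involved, so it tends to be less fragile."""
    mesh = bpy.data.meshes.new(name)
    obj = bpy.data.objects.new(name, mesh)
    bpy.context.collection.objects.link(obj)
    return obj
```

The operator style is usually what you get when you translate UI steps one-to-one; the data-API style is more work up front but avoids most of the context-dependent fragility.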
I've used Claude to write some Blender scripts and it's an excellent use case. I look forward to even better Claude/Blender interaction based on this announcement.

I've also used genAI to write scripts. It works splendidly up to a point, then there is absolutely no way to move the needle further. And it's not even close to renders I would ever publish.
That being said, it's about the same for the code it produces for non-creative things, but for artistic work, I doubt an LLM in between gives any gain. After all, we do have an interface. A human interface.
Artists mad about AI art ought to welcome this. This is about making art tools better, instead of replacing them entirely. The alternative to this is AI just generating art directly and making tools like Blender obsolete.
Art generators need to come a long way to completely replace art tools. I dabble, but if I were doing real work with it, there have been times it would have been faster to composite in a 3D model rather than keep trying to prompt an image generator into fixing something.
This is what I do. It’s been really helpful for taking existing FBX files and handing them off to the agent + Python Blender API to analyze the geometry, convert to GLBs, etc.
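The FBX-to-GLB step the agent drives looks roughly like this (a sketch, assuming Blender's bundled Python, e.g. run via `blender --background --python convert.py`; the guarded import is only so the file reads outside Blender):

```python
# Minimal sketch of a headless FBX -> GLB conversion with Blender.
# bpy is Blender-only, so the import is guarded for illustration.
try:
    import bpy
except ImportError:
    bpy = None  # not running inside Blender

def fbx_to_glb(src_path, dst_path):
    """Start from an empty scene, import an FBX, export everything as GLB."""
    bpy.ops.wm.read_factory_settings(use_empty=True)
    bpy.ops.import_scene.fbx(filepath=src_path)
    bpy.ops.export_scene.gltf(filepath=dst_path, export_format='GLB')
```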
What model are you using? Codex with gpt-5.4 set to xhigh (and now gpt-5.5) seems to have zero issues helping me with rigging and fixing GLB/FBX models; works like a charm. One time I instructed it to iterate together with screenshots because it was a gnarly task, but usually it figures everything out even when headless.
I disagree that anyone should need LLMs for Blender, for example, because Blender is designed by people to be understood and used by people, even if it requires a learning curve. It sounds a bit dangerous to build new things we don't understand, or worse, reduce our understanding of what we currently use because (only after studying our use of the same technology) an LLM appears able to replicate it, mostly.
I'm reminded of Sam Altman's performative helplessness on Jimmy Kimmel, when he described being unable to raise a baby without ChatGPT. That's something I believe humanity has been capable of doing for a good portion of its existence, and not something we should hand over to a yet-unproven, yet-unprofitable technology.
Surely there's a middle ground where improved APIs can be leveraged by both people and LLMs alike while keeping those APIs approachable? Why is it necessary that changing the python APIs would lead to "need[ing] LLMs for Blender"? I'm nowhere close to an AI maximalist but this criticism seems grounded in execution concerns. I'm definitely not saying that they won't mess this up and make the APIs overly complex, I just don't think that's necessarily going to be the case.
Regarding whether AI can/could overcome the hurdle of human understanding: I'm not sure if that's really a hurdle. Let's say in theory, a system was crafted by AI to be interacted with exclusively by AI. Broadly, I assume the outcome of the system would be for people, and it would have some purpose or value. Now my question is: how do we verify it functions? If it is a black box that nobody understands, then we can't verify it at all, and we can't debug it if there's something wrong with it. We circle back to the human understanding issue.
(I'm sorry if my tangent about Altman was taken as a personal affront, as I did not mean it to be that. It just muddied the two interesting topics you brought up.)
Not everything is abstract art. Sometimes I want my subsurf modifier to only target certain vertex groups, and if I can use AI to make that happen in a few seconds, that's a huge win for me.
Blender (and CAD programs as well) get in the way of creativity.
I know what I want, no idea how to tool my way there.
I spend two months going through YT tutorials, mucking about in Blender in order to figure out how to put together the model I have in my head [1].
(A year later, a new project idea—and it's back to YouTube because the learning is not only a steep curve but also sometimes so esoteric that it's fleeting.)
Absolutely agree - I was not impressed, but it will be a lot easier to work with the tooling without a 10-month crash course in UI and 3D terminology if I can ask for what I want in plain language instead of having to know which button, buried three levels deep, to press to get my desired results.
Frankly, I love the idea of an automation engine printing out tangible works. I actually build spritesheets that way! Load a bunch of individual gimp files as layers, set them offset by a given parameter, and boom, done!
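The layout math behind that kind of spritesheet build is simple enough to sketch in plain Python (the actual pasting would be done by GIMP's script console or any image library; function names here are made up for illustration):

```python
# Pure-Python sketch of the spritesheet layout step: given N frames of a
# fixed size, compute where each frame lands on the sheet, row-major.
def sheet_layout(n_frames, frame_w, frame_h, columns):
    """Return an (x, y) paste offset for each frame."""
    return [
        ((i % columns) * frame_w, (i // columns) * frame_h)
        for i in range(n_frames)
    ]

def sheet_size(n_frames, frame_w, frame_h, columns):
    """Overall sheet dimensions needed to fit all frames."""
    rows = -(-n_frames // columns)  # ceiling division
    return (columns * frame_w, rows * frame_h)
```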
Would be rad to incorporate some statistical, procedurally generated designs based on my own apparatus.
What I do not want to see is the LLM crowd hijacking decades of hard work and careful thought about integration channels to tailor them toward their LLMs rather than the diligent engineer.
If they want to reach their tentacles as far as they can while making products more difficult to work with for innovation of a different color, they are making enemies; of me, at least.
Honestly, I think this is a stepping stone towards replacing industry CAD modeling tools.
AI _can_ work with 3D models already, but it's really bad at it. CAD requires an extra level of control and I think this is where I could see AI companies wanting to get a foot in the door.
e.g. "Let's build an adapter between 2in BSP male and 3/4in NPT female threads with a third hose barb outlet with the following properties..."
AFAIK Anthropic hasn't built any image or video generation tools yet, just text/code generation. OpenAI/Google/xAI all built image/video generation teams though so it may only be a matter of time.
Saying stuff like this about any AI company is silly and makes critics sound more like stochastic parrots than AI models themselves.
Anthropic hasn't even shipped an image or video model. What is "stealing art", the fact that AI models are trained on data? What constitutes stealing in that?
A lot of those companies likely sponsor it because they use it themselves, and actively benefit from its continued development. The incentives are at least somewhat aligned.
I doubt Anthropic has much use for such a tool internally. They're sponsoring it because they want to inject their slop into it and replace the people who do use it.
I don't think any slop is getting injected into Blender:
> Blender Foundation’s mission remains to empower artists with free/open source technology and tools. Yet, we also maintain APIs for individuals and corporations to extend Blender, also beyond what’s aligned with Blender’s mission. We consider this part of the Software Freedom that’s embodied with Blender’s GNU GPL license.
Unrelated, but what do you all get from this endless speculation about others' motives?
To me it just comes across like the stereotype of a lonely housewife peeking through the blinds, judging the neighbors.
This forum is just as absurd as Reddit but in a subtle way; politically correct language without the zany memes but nonetheless absurd sense of self righteousness and importance and the validity of endless unsubstantiated assertions and qualifications.
As if not posting about Harambe affords legitimacy while posting what boils down to intrusive thoughts about people and motives y'all are removed from.
The nostalgia-fueled appeals to preserve your grasp of reality are just a modern conservatism. Time moves on, and it has as little obligation to stand still for HN doomers as it does for adherents of traditional religions or contemporary American capitalism.
The horse-and-buggy drivers, the rotary-phone makers, and the other engineers screwed out of careers by offshoring are playing a tiny violin for the script kiddies who grew up to become expert Python and DevOps engineers.
Get over yourself. Your efforts are a drop in the ocean of human effort. Ffs this comes off as some fine whine.
> They're sponsoring it because they want to inject their slop into it and replace the people who do use it.
Oh, noes, the horrors of democratising access to an expert tool. What will Onshape do now that the free one is accessible to an order of magnitude more regular people who could use a 3D shape but don't have the time to learn a very complicated yet powerful tool?
I guess people have said the same about game engines / coding tools that help artists turn their vision into working, compiling games, right? Riiight?
It's not democratising access to an expert tool, it's devaluing the skill, expertise, and hard work required to create art.
edit: I seem to be rate limited and unable to reply? I'll paste it here:
I'm sorry but I don't agree. People care about art when it is extraordinary, in the same way people watch professional sport because it is extraordinary, or they watch cooking shows because it's extraordinary. What you call "democratisation" I would call the trivialisation of something which used to take effort into something which does not. People don't watch random people who have never played soccer before at the World Cup, they don't watch someone who can barely cook Kraft dinner cook on MasterChef, and they don't go to museums to look at someone's first sketch. There is no reason to assume that the trivialisation of art wouldn't simply devalue the medium to the point of irrelevance. However since people seek what is extraordinary, you will always have gates which are kept, and for good reason.
edit 2, responding to hbosch:
You don't have to be an extraordinary soccer player to enjoy playing soccer, but that doesn't mean we should develop a pill that makes everyone a great soccer player with no skill development or effort required. We don't watch professional sports just to see a ball move fast, we watch to see what a human is capable of through discipline and hard work. If everyone could take a pill to become an elite athlete, the sport wouldn't be democratized, it would be deleted.
When you remove the effort barrier you don't make art easier, you collapse the meaning of striving for excellence. If the 'expert' and the 'novice' produce the same result with the same button press, we haven't empowered the novice, we’ve just made the expertise irrelevant.
Tools like Blender are force multipliers for human intent, generative AI is a replacement for it. If you use Blender to make a "stupid little game," you’ve gained a skill. If you use AI to generate the assets for that game, you haven't gained a skill, you’ve simply acted as a manager for an automated system. The value of that game to the creator isn't just the code, it’s the fact that they built it. I find it really hard to believe that people find value besides the initial novelty in having a computer generate stupid little games - for what purpose? If nobody is going to play it, and you haven't built it, precisely where does the value in it come from? It's like a simulacrum of human creation.
What I actually see is people who are unwilling to put in the effort but seek the rewards anyway. They want the accolades of creation without the hard work. I don't see the value in enabling this.
Sorry, this is not a good argument. It's sad that some skills are devalued when so many have invested years into them, but it is a net win when more people can create something without having to become an expert. Experts don't deserve to have a moat built around them. I say this as a software engineer with 16yoe who is dealing with the same challenges.
Please explain how this is a net win beyond the extremely narrow-minded belief that more equals better.
Do you believe it's a good thing that all software is becoming noticeably lower quality? Do you believe it's a good thing that open source is on its death bed now that licenses don't mean anything and popular projects are drowning under AI generated PR spam? Even here on HN, Show HN is effectively dead as almost every single submission is some boring garbage generated in 30 minutes that nobody cares about, not even the person who submitted it.
Experts don't need to have a moat built around them, because they build their own moats with their skills and efforts. Just because you get jealous and feel entitled to the fruits of the experts' labor while being unwilling to put in the same work does not mean you have the right to steal their work and mix it up in a computer algorithm so you can later claim it as yours.
The concerns are: the proliferation of slop, en masse, and that artists who live off their work will no longer be able to make a living. It's already quite dire for them.
The upside? A new generation of content creator who may profit from automation.
We never had problems creating art. In fact, what counts as artistic is relative to the effort involved in the creation process, and to the technology available at the time.
To me the argument is valid. It's devaluing the skills of existing artists, and the decade long investment they likely put into their craft.
Unity enabled a flood of slop games long ago. Dreamweaver enabled countless slop websites. Photoshop delivered us heaps of slop images. Amazon delivers thousands of slop products from slop manufacturers every day.
The slop isn't coming, it arrived decades ago. The Pandora's box of slop is already open. Maybe AI widens the aperture, but if you cannot handle the discernment required to separate slop from something useful or meaningful, that is your problem.
If I download Blender today, as a true beginner, is what I make extraordinary? If it's not, does that mean I am not allowed to use Blender? What if I want to use Blender and I am not interested in making anything extraordinary? What if I want to use Blender to make a stupid little iPhone game that no one will ever play? Is that considered extraordinary, or not? What is this criteria?
The truth is, the vast majority of art is not extraordinary, whether it comes from a canvas, a typewriter, Photoshop, or Blender. That is as true for AI as it is for humans. Likewise, the vast majority of people who kick a soccer ball will never be extraordinary soccer players.
I firmly believe that tools which enable people to get closer to their goals are always a good thing. The concept of what makes something "extraordinary" does not come from the maker, or the tool, but from the beholder. It is the audience's job to discern what is and isn't "extraordinary", not the makers'.
Did you really create a 3d model if you didn’t hand type all of the vertex coordinates? Anything less is cheating by using cheating tools and isn’t art. Oh you had to use a deform tool? Pathetic. Can’t calculate your own circle approximations at various details? Good. If you can’t do it, it shouldn’t happen.
> that doesn't mean we should develop a pill that makes everyone a great soccer player with no skill development or effort required
What are you talking about? We should absolutely do this. We should extend this to as many domains of human achievement as possible. By this logic, computers shouldn't have existed because it devalued the skill that scribes and accountants developed before word processors and spreadsheets. Blender itself is a tool that made 3D accessible to thousands of people who previously had to pay for expensive licenses, training, and SGI workstations. Literally the whole point of technology is to make more things possible for people unable to do it naturally or without great effort.
Google has been supporting Blender for over 20 years, initially through Summer of Code [1, 2], and also via corporate sponsorship once Blender started offering it.
And I'm pretty sure I've seen most of the other big names in tech on the sponsors page for many years now.
Can you imagine going to a football match and second-guessing which players merely look human but, skin-deep, are actually androids made in a factory? This is what music and literature feel like right now with so much AI. There are some pockets where you can still say "that's human-made", like 3D-rendered feature films with a particular artistic direction. That, it seems, AI companies also want to go the way of the dodo.
Yesterday I saw a clip that went "viral" of a few hogs chased by a humanoid robot somewhere in Poland. I had to watch it a few times to figure out if it was real or generated. I still wasn't 100% sure. Asked around in a group, and apparently it's been widely reported on regular news, so I guess it's real? But we're slowly getting to the point where you won't be able to tell, especially from a short clip on a phone.
Yes, and thanks for sharing the experience of the hog video. It was recommended to me too, and I chose not to click, as I did not want the frustration of seeing another "tech run amok" example, of tech disrupting YET ANOTHER norm.
Relatedly, IMO "trust" as a word / concept is deserving of being reevaluated nowadays.
E.g. I don't know that you, NitpickLawyer, are a real person. And when I go through the mental exercise of inventing the details, proofs, and evidence I'd need in order to satisfy my doubt, I never succeed until I reach the physical-contact-with-NitpickLawyer condition.
So I think we need to evaluate what is necessary for oneself to operate in society, separate from these untrustworthy things, such as media / news reports, and all the other things I just don't want to worry about right now. :-(
No-one cares dude. People like good enough, convenient things that serve their entertainment needs, which is shaped by said entertainment, so there is not really an issue here.
Since they are up against an insurmountable mountain of capital which will commoditize and optimize whatever it wants, they are in for a pointless fight with an inevitable end. They could save themselves a lot of despair if they saw the writing on the wall and pivoted to something that still has value, or accepted the new reality instead of throwing a fit.
That is too difficult as the concept (of trusting one's perception) is, I believe, intertwined deeply with other aspects of being human, for many people.
It's not reasonable to require that those people be mentally organized in a manner that already mistrusts reality, in a healthy manner.
Maybe it is a pointless fight with an inevitable end but at least I'll die with my humanity and dignity intact rather than being a boot licker for Sam Altman, but you do you.
You can die with your humanity intact on a farm, growing veggies and surrounded by people you love, and still be consistent with what I write. Seeing the inevitable does not equal loving or wanting it.
I care deeply. It is not single-handedly going to destroy humanity. However, we are clearly on a course where people are more isolated, less challenged, less social, and very very very unhappy. Music is one of those things that can really bring people together. If we flood the zone with AI music (or any other art form) we will slowly edge out the humans who are doing that. That is less new music. Less chances to come together. Less chances to dance together. It's a death by a thousand cuts. I, and many others, think it's worth fighting for because we want others to have the amazing experiences we're having.
Every generation has a new baseline. The younger generation will not be able to imagine having anything other than doctors and psychologists in their phone, and they are content with it because it's all they know. Social media might be all the social connection they have, and that will be the place where they have their best experiences; they won't know another baseline. Eventually maybe the best experiences will be had with digital companions, etc.
The only losers here are old or bitter people who have tied up their worldview into their own time and cannot see or comprehend that the world has moved on with a different bound for the experiences and expectations.
> Eventually maybe the best experiences will be had with digital companions, etc.
Obviously I can't speak for all of Gen Z (and I realize we're no longer "the younger generation"), but my friends and I don't want any part of this, and feel optimistic rather than bitter that things won't go the way you're describing. I seldom meet anyone in my age group who isn't talking about moving away from social media, cancelling software subscriptions, all of the things that millennials and Gen X seem to be so excited to continue building and promoting.
Even at my workplace the "older" people are the ones that are excited about stuff like AI jazz remixes of rap songs and AI generated short films, while literally everyone else under 30 finds it pretty cringe and makes fun of them in DMs.
So all that to say, I disagree with your outlook, but I guess time will tell.
Talking about and doing something are different things. What are the social and market structures around your friends that let them avoid having a smartphone, cancel subscriptions, and uninstall everything? Do you see this getting better with media consolidation under Substack (Andreessen), Twitter (Musk), and YouTube channels owned by hyperscalers and billionaires, and questionable mergers like Paramount and Warner Bros?
When the social culture is based around platforms and content that has subscriptions, and when media and what you see is consolidated, you can't just exit without losing a big part of the social context because the people around you are eating the same thing.
I dislike slop as much as anyone else. I think it puts a higher burden on the receiver of information to filter the signal in a pile of trash. I just don't really see an actual way out if you look at it from a societal level with the existing structures and incentives.
> you can't just exit without losing a big part of the social context because the people around you are eating the same thing.
That's exactly it. The goal is to lose a big part of the social context. It's driven by rage bait, AI bots, state actors, and a thousand other influences that are predominantly negative. Of course amazing things happen online. However, the good is not worth the bad. I'm raising my kids and they will never have a smartphone. Will they miss out on some things? Of course! They also won't have their attention span destroyed, or their ability to be bored and creative in the real world destroyed; they won't have body issues, they won't be caught up in the alt-right pipeline, they won't have their brains fried by content like Mr. Beast's, which is designed to be as hyper and addicting as possible. Missing out on the current social context is the entire goal. People were happier before it.
This structure expects all of their friends to live in similar systems. Otherwise their friends will talk about games, memes, and series at school while your kids are isolated, as they are not a part of the culture and not in the loop.
I think this is only possible if you find a community with similar values, like a religious or hippie one, where the focus is put on other things. Otherwise you might deprive your kids of what you want to give them, because they will not feel socially connected.
I am not an idiot. I'm well aware they will pick up things at school. My 5-year-old already knows who Mr. Beast is. He's never watched one of his videos at my home and never will. If he watches one or two at a friend's house, that of course is going to happen. But he won't be consuming that poison regularly every day. My 8-year-old is doing just fine. Happy. Healthy. Active. Lots of friends. And when they're older and fully functioning adults, unlike some of these Gen Z zombies who have had their brains fried, they will thank me.
The pearl-clutching over the pedigree of art is getting tiring. No one has ever really cared. Most mainstream music is written by corporate teams. Elvis didn't write his own music. Frank Sinatra didn't write his own music. Nearly all pop artists don't. But suddenly people are clamoring for art, when they never gave a shit to begin with. Most people can't tell AI-written music from anything else if a human performer played it. Most of it is better than any local band anyway. Tired of people pretending they care.
It’s subjective, because it’s art. There’s no right answer.
If you like listening to AI generated content, then that’s fine! I’m glad you found something you enjoy.
For me, I consume art because I want to understand other people. For example, when I go to an art museum I want to emotionally connect with the artist: to feel what they were feeling, or understand an idea they’re conveying. I have little desire to emotionally connect with stochastic token sampling. It seems a vapid way to spend time
You still assume the artist in those examples is real. It could be a team, a ghost artist, etc. Yeah, it's less likely than with music, but still. The connection itself is quite difficult too, given the ease with which someone could plagiarize others' work. Sure, they have mechanical skill, but did they really invest in the painting, or was it ripped off from others' ideas?
I suspect your connection to real artists won't be impacted. This, like the music example, just highlights our assumptions.
I'm not defending this AI garbage fwiw, I just don't think it's as interesting as most people make it out to be. I adore music, and I connect with the songs I connect with. I don't typically think about the possible ghostwriters, teams of writers, ghost players, etc. The music either speaks to me or it doesn't.
Though I'm not trying to connect to the musician as a person. However, as I was illustrating, if I really wanted to connect to musicians at face value, that ship sailed many, many years ago. Far before AI.
There are ways to mitigate this, but that balance will always be there - it was before AI, and it will be after. It's an evolution. Not an enjoyable one perhaps, but it is nonetheless.
I arrange gigs with real bands playing music. At least that will take quite a while to replace with AI. I am curious to see if we will get a backlash eventually around the content. It will probably be a mix of everything.
Storytelling didn’t go away when theatre was invented. Theatre didn’t go away when cinema arrived. Cinema wasn’t replaced when radio arrived, and radio wasn’t completely replaced by TV, etc. It is a mix of things these days and it will probably remain that way.
If Frank Sinatra had had AI, he wouldn't have had to perform any of that slop by Cole Porter, Irving Berlin, Kurt Weill, Rodgers & Hammerstein, and other composers no one cares about.
Can you imagine watching a movie and not being able to tell which scenes have CG special effects and which don't? Oh no!!! CG totally ruins all movies!!! Even movies that don't use CG are ruined by the tension of dreading that they might, and wondering if they do, and doubting everything you see on the screen, even if they don't. CG has ruined everything!
> like 3D-rendered feature films with some particular artistic direction.
This is a really interesting example. Why do you foresee artistic direction going away as a result of AI? More importantly: why didn't we lose that with the transitions through the years of special effects - i.e., from practical to 3D-rendered?
It's not an uncommon opinion that we did lose artistic direction and aesthetics by moving to VFX. The ability to edit more and more things in post to change the direction or plot of a film seems, to me, to have enabled more design-by-committee in Marvel films, etc.
1. That looks great and I hope they can make blender even better, or pay people to do that. Raise your rates for corps Blender!
2. What does this have to do with any strategic position for Claude?
Their goals are
“Enterprises pay us a fortune to either
1. Sift through unbelievable volumes of data to pick out fairly easy to find nuggets
2. Tell everyone their jobs are at risk to keep them in line”
There is no other play. AGI is science fiction.
So why are they launching this? Because they don't have a strategic core running through it; it's always been "wow, what else can this do?". That's a research project at the price of billion-dollar data centres.
Unless we do achieve AGI (which we won’t) the price tag is way beyond the returns we are seeing.
I've been using Claude with OpenSCAD to generate some simple models with repetitive geometry (a set of d8 dice with braille on them for a scrabble-like game for blind children). It's really good, though often I have to send a screenshot to Claude or describe a geometry issue.
Having more native integration into Blender, which I'm already much more familiar with, will be fantastic.
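The repetitive-geometry part is exactly where scripting shines; here's a rough sketch of generating the OpenSCAD source for one die face's braille dots from Python (the dot table, radius, and pitch values are placeholders, not the real project's numbers):

```python
# Sketch: emit OpenSCAD source for a braille cell from Python.
# Standard braille numbering: dots 1-3 are the left column top to
# bottom, dots 4-6 the right column.
BRAILLE_DOTS = {  # tiny illustrative subset of the alphabet
    "a": [1], "b": [1, 2], "c": [1, 4],
}

def braille_scad(letter, dot_r=0.6, pitch=2.5):
    """Return OpenSCAD code with one sphere() per raised dot."""
    out = []
    for dot in BRAILLE_DOTS[letter]:
        col, row = divmod(dot - 1, 3)
        out.append(
            f"translate([{col * pitch}, {-row * pitch}, 0]) sphere(r={dot_r});"
        )
    return "\n".join(out)
```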
Similarly, I made an agent that lets Claude puppet OpenSCAD, generate screen shots, change the camera angle, etc. In general Claude seems to have a pretty good vision model that can create usable designs. It's also fun to let it make up new models of its own and then try to 3D print them.
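The render-a-screenshot step of such an agent can be a thin wrapper over the OpenSCAD CLI (a sketch; it assumes `openscad` is on PATH, and the helper name and default camera values are made up):

```python
# Sketch of the screenshot step an agent could call between edits:
# render a .scad file to PNG from a chosen camera angle.
import subprocess

def render_preview(scad_file, png_file,
                   camera=(0, 0, 0, 55, 0, 25, 140), size=(800, 600)):
    """camera is OpenSCAD's translate_x,y,z,rot_x,y,z,distance tuple."""
    subprocess.run(
        [
            "openscad", "-o", png_file,
            "--camera=" + ",".join(map(str, camera)),
            "--imgsize=" + ",".join(map(str, size)),
            scad_file,
        ],
        check=True,
    )
```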
This[0] is the original game. I downloaded the dice, made a list of letters for each die (I can't remember if I did this manually; I don't see a published list, so I must have), and then I fiddled until I got something that looks decent and is also printable. Each die face has the braille letter as well as a small English letter. Here's[1] my repo; I wasn't intending to make it public yet, so it still has the original creator's files in there and the README is autogenerated.
The biggest challenge at this point is figuring out how to make the dice print consistently. With each die face having only a few points of contact, they keep coming unstuck. What I'm trying now is cutting the dice in half, printing the halves, and then joining them with dowels.
I agree that it's not a good look for Blender, but I don't think that something actually bad will come from this. (Other than maybe a negative impact to Blender's reputation.)
I'm thrilled for the world where I can drive more things with an LLM. The big limiting factor for me for little home improvement things was that I'm not very good at modeling so I have to get my wife to go do things for me. That's fine, she's happy to do it, but sometimes you kind of just have to try yourself to see what you're really looking to do.
Recently I've been using Claude Code with `build123d`[0] and it's pretty good, but my wife uses Blender so it would be cool to come up with something at least halfway decent and then have her clean it up.
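For flavor, the kind of build123d script this workflow produces looks roughly like this (a sketch with placeholder dimensions and a made-up part name; the guarded import is only so the snippet reads without build123d installed):

```python
# Sketch of a simple build123d part: a plate with a screw hole.
# build123d may not be installed, so the import is guarded here.
try:
    from build123d import BuildPart, Box, Cylinder, Mode
except ImportError:
    BuildPart = Box = Cylinder = Mode = None

def shelf_bracket(width=40, depth=20, height=10, hole_radius=3):
    """Hypothetical part: a box with a cylindrical hole subtracted."""
    with BuildPart() as bracket:
        Box(width, depth, height)
        Cylinder(radius=hole_radius, height=height, mode=Mode.SUBTRACT)
    return bracket.part
```

The nice part is that a script like this is easy for a human (my wife, in this case) to read and tweak afterwards, unlike a raw mesh.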
Not to be a hater or anything (I'm a hater), but I see people mentioning the potential of LLMs for "grunt work" like retopo, and I can't begin to imagine what the "correct" data representation and Python API calls would even look like in a training set.
Would an LLM really be querying vertices in relation to one another and estimating whether their distances "sound" like good topology?
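For what it's worth, an agent wouldn't have to "sound out" distances; it could compute a crude numeric score and react to that. A naive sketch (this ignores quad flow, poles, and everything else that real retopo quality involves; it only shows that regularity can be probed programmatically):

```python
# Naive topology-regularity probe: statistics over edge lengths.
# A low coefficient of variation suggests evenly sized faces; a high
# one suggests stretched or pinched regions worth inspecting.
import math

def edge_length_stats(verts, edges):
    """verts: list of (x, y, z) tuples; edges: list of (i, j) index pairs.
    Returns (mean edge length, coefficient of variation)."""
    lengths = [math.dist(verts[i], verts[j]) for i, j in edges]
    mean = sum(lengths) / len(lengths)
    var = sum((l - mean) ** 2 for l in lengths) / len(lengths)
    return mean, math.sqrt(var) / mean
```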
Oh yes, I think this part is easy to see and perhaps even logical. But I also don't think this part is the problem. After all, generating 3D models is primarily a technical consideration. I don't think the technical prowess of the software is the issue here.
It's because LLMs will soon start building real-world objects via CAD. This is the first step. Look at things like the Adam plugin for Onshape. Works great with Opus. It built a toy car for me with one prompt.
If this is the case, they’ll want to improve the NURBS support within Blender. You can get some amazing results with subd, but digital twins require accuracy and you get that with NURBS geometry. Fortunately, Blender supports it already, it just needs some attention to tooling.
All the CAD and modelings tools have their own scripting languages that LLMs can write to, so you can just use that directly without any built-in LLM support. There will probably be someone doing a pelican-on-a-bicycle for CAD.
Maya has extensive NURBS tools, which means it can import and export CAD data natively. While Blender does support basic NURBS geometry, it lacks the tooling to fully support it.
If the idea is to support Blender for use with “Digital Twins” or “World Models” then the first step is to start with accurate geometry. Anything less is slop.
Or it might allow proficient blender users to become more productive, resulting in higher detailed scenes for the same budget.
We'll see how it shakes out. As a non-proficient Blender user, I'm kinda keen on this since I have had a lot of ideas that I haven't been able to realize in Blender.
So reducing budgets and suppressing wages, then. What a great deal for the workers who have specialised in this field, and whose work and effort the LLM has been trained on to replace them!
This is like arguing we should only have manual looms because the mechanical looms suppress wages and destroy the livelihoods of those expert loom operators.
The tech is here. We can fight it, or adapt and embrace it.
If previous examples from the industrial revolution are anything to go by, fighting automation is a losing battle.
Difference is that those tools modernised the work and actually created jobs. The ultimate aim of these AI sociopaths is to remove all work in all areas so they can hoover up all the money and let people starve.
But, there is plenty of open source stuff out there to enable people to have their own models, running on their own hardware. Business does not need to go to the big 3.
Business doesn't need to go to the big 3, but it will. The big companies will ensure that smaller, specialised, or open models get restricted by laws paid for by their lobbying budgets, so that they can pull up the ladder after them and solidify their position. They've invested billions and will never allow the world to become some tech utopia where we all have a personalised free AI in our pocket; they will guarantee their own dominance.
I don't think they can tell Blender what to do. As such it's just more money for Blender! Yes, Anthropic can use the Python API to do their AI BS, but an improved Python API is also good for anyone else. This doesn't mean that Blender themselves are integrating any gen AI (if you don't already count the denoise filters). Do you really think Blender should have denied the donation?
Shame that we have to choose between better financing of Blender for features we already want (Python API quality) and placating imo overly dramatic artists.
I think the worries of artists over gen AI are valid. I guess all the better that some of the money of those "not yet" profitable AI companies goes to a good open source project and not to some of their usual practices.
Yep. Anthropic's motives are obviously self-interested (Claude <-> Blender integration), but I'm not donating to Blender, are you? That's the problem: we all want Blender to be able to pick and choose donations, but when all OSS is cash-strapped, it is easier said than done.
I'd prefer Blender get some additional funding out of this AI bubble at least.
Not to be a shill, but Hunyuan 3d studio is pretty good. You get 20 free credits/day which is pretty generous.
Texturing still is subpar. But I've found that using Hunyuan for modeling+retopo+unwrap -> clean up in blender -> texture in substance painter is actually a pretty nice workflow for some stuff.
The latest ChatGPT image generation model is producing really nice results for turning sprites into sprite animations, which is something that felt impossible to get right a year ago. But with 3D, it has been impossible for me to get anything good.
Mixed feelings about creative work and AI, but if it wasn't for LLMs I would have chosen a different software than Blender for my hobby-level 2D animation.
Made a Blender plugin w/ Claude, and it's saving me so much time (:
The press-watching side of me only has questions. Why was this published by Blender and not Anthropic? What does this actually mean? That the Blender team gets free Claude Code Max subscriptions?
What it means is here[1]. Anthropic is paying €240k a year and in return they get some marketing in the form of a press release and a website mention, as well as someone to talk to.
Presumably because they think agents will become the dominant primary users of tools like Blender, and want a seat at the architectural table to help accelerate that & create useful synergies with Anthropic products and models?
The press release calls out the Blender Python API, specifically, which makes sense for agentic use.
> This support will be dedicated towards Blender core development, to maintain and continuously improve foundational features like the Blender Python API
Pretty much spells it out. They have an interest in extending/supporting the ability for Claude/CC to use and interact with Blender. There may be gaps in endpoints that Anthropic needs to enable certain patterns of automated usage.
As literally stated in the second paragraph of the blog post:
> This support will be dedicated towards Blender core development, to maintain and continuously improve foundational features like the Blender Python API, which enables developers and artists alike to extend and improve the software for custom workflows.
Tbh, I am unsure. The things I needed were extremely simple and I could have used TinkerCAD or FreeCAD, but I wanted to experiment. It was actually way faster with ChatGPT/OpenSCAD, as I wanted to be able to hot-swap SVGs to create different shaped tokens to print on my Bambu P1S with AMS.
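The hot-swap part of that kind of workflow is mostly string templating. A minimal plain-Python sketch (the helper name, filenames, and dimensions are all made up for illustration) that emits OpenSCAD source extruding whichever SVG you point it at:

```python
def token_scad(svg_path, height_mm=3.0, scale=1.0):
    """Emit OpenSCAD source that extrudes a 2D SVG outline into a
    printable token. Swapping svg_path swaps the token shape."""
    return (
        f"linear_extrude(height = {height_mm})\n"
        f"    scale({scale})\n"
        f'        import("{svg_path}", center = true);\n'
    )

# Hot-swap shapes by regenerating token.scad from different SVGs:
with open("token.scad", "w") as f:
    f.write(token_scad("hex.svg", height_mm=2.4))
```

From there, rendering to STL is a separate headless OpenSCAD invocation; the helper only regenerates the `.scad` text.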
They love art so much they want to take the humanity out of it. The funny thing is these sociopaths at the AI companies can't even comprehend something that cannot be quantified in a spreadsheet.
I think you missed my point - up until Anthropic decides there's a pressing need to replace them to furnish their wallets, the awesome people who create incredible art using Blender have learned how to use the tool and use it to convert their imagination into something we can all see.
I once thought the same about all the copyrighted works on which LLMs are currently trained. Surely they can't just hoover everything up? Haha, silly me.
I understand that creating an LLM is itself transformative, but an LLM trained on copyrighted works remains capable of generating derivative works, which will eventually result in successful copyright lawsuits against LLM users who redistribute those works.
In advance of that day, the great race is to build a licensed corpus as aggressively as possible (see GitHub's latest decision to opt in Copilot usage). Even if Blender doesn't send your data on every save, various options can be developed, such as publishing to a Blender-controlled public channel.
I'm relatively sure the source code I've written and stored in my local computer is not sucked up in the LLM training data. And I believe people working with Blender models are pretty much in the same situation: they don't host their data in a third-party service and openly share it.
Aha, so that decision was not a consensus or community-made one? I really don't know right now; no clue about the internals at Blender. But that would be interesting ... I can see the headlines: "Blender community sold to Anthropic. Forks starting in 3 ... 2 ... 1 ..."
Sponsorship decisions are not community polls, never have been.
And the worries about "Blender just being sold to xyz..." have been around forever. Always wrong. People with AMD cards were screaming when Nvidia became a sponsor, and the other way round.
Anthropic seemingly is one of the only profitable AI companies at the moment.
The AI "bubble" is hard to pop, at least for these big companies, which will just receive bailout after bailout from the government if shit hits the fan.
I don't see how Anthropic sponsoring Blender is an "attempt at again stealing IP"
People surprised by Anthropic getting in on Blender funding obviously never saw any of the Blender/ChatGPT integrations a few years ago. This has been coming for a long time.
That was ancient times in LLM terms. I've seen demos that create whole scenes in a single prompt.
There are lots of non-coding use cases for LLMs that don't burn through compute but are still useful. Anthropic is starting to catch on, and it makes sense to focus there with the compute crunch.
They already have corporate sponsorships from Google, Meta, Nvidia, and other big companies. Anthropic is just joining the list. This is actually good for Blender.
Wasn't this one called "machine learning" for denoising and upscaling? That's completely different from an LLM replacing your job (after being trained on your work without permission).
Yes, Claude is the AI doing my denoising. I keep running out of tokens with my 4k renders.
AI is a nebulous term. AI denoisers are not the same thing as an LLM or image gen model, the ire is directed at LLMs and not AI denoisers because they are completely different things.
This makes sense. Blender has had (non-LLM) generative features for a long time, hooking LLMs into the Python API to generate art makes sense (it's probably already done but for it to be sponsored is nice).
I was wondering the other day if AI could do tedious things like retopology and figuring out efficient UV unwrapping.
I was also wondering how it would do things like sculpting. That sounds expensive: either you send millions of polygons for the model to explore, which blows through the context window (or doesn't fit at all), or you're sending tons of screenshots?
I have mixed feelings about this. I guess they need the money ... but still. Data goes to Anthropic here. It will also buy influence in some ways, I am sure of that. We saw this with rubygems.org: when Shopify threatened to cut funding some months ago, chaos suddenly erupted. Money buys influence; it's easy to see how.
And Blender tries to get funding from many different donors so that no single one can have any sway over them. Anthropic, as disgusting as they are, are just one more donor. Epic, Nvidia, Google, CoreWeave are also patrons. I don't worry about that donation.
This is idiotic. Blender runs an open development process. It's all out in the open, and they would never do such a thing. Obviously you know nothing about the Blender developer community, and you're not qualified to speculate without even bothering to do the most basic research, so your posts are much worse than any hallucinating ai slop, just insulting the work and integrity of hard working dedicated people. Your histrionic conspiracy theory posts in this discussion are absolutely off the rails, detached from reality, and your ignorant mindless attacks on Blender are helping nobody but Autodesk.
I think you missed my point completely - I am not attacking Blender, I'm attacking Anthropic. You think Dario woke up this morning and just felt like doing charity? Or do you think there's an underlying business reason for getting into this space? If you believe they're doing it because they're lovely people, I'm afraid we'll disagree about that.
Yes you most certainly are attacking Blender, accusing Blender developers of being in on a conspiracy of checking in code to Blender that sends data to Anthropic.
And your conspiracy theory about the Blender developers selling out to Anthropic and secretly checking in code (that anyone can see since it's public) that sends data to Anthropic is objectively idiotic.
Unity donated a lot of money to Blender too. Do you have any evidence that Blender sends data to Unity?
Can you point to the code in Blender that sends data to Unity? Or is your conspiracy theory that the official Blender builds contain secret spy code that is not checked into the repo? How many Blender developers are in on this conspiracy, and how have they kept it secret until now, now that you publicly announced your accusations that they intentionally spy on their users in exchange for donations from sponsors?
I think you'll find my original parent comment said "You know, for now..." - as in they aren't doing it yet. But do you really think Dario cracked open his wallet for fun without some later expectation of a return on investment? The Blender devs willingly or not are now frogs on the boil until the soulless ghouls over at Anthropic come calling.
Ugh. I love Blender, it's the greatest software of all time according to myself, and I absolutely hate this and I am terrified at what it implies. If they just want name recognition ok fine, but my guess is Anthropic will want changes to Blender itself and I find that totally unacceptable.
Ah well, the online artist community is unusually principled on matters like this, especially compared to here. If they start doing shady stuff it will get forked and probably spell the end of the Blender foundation, which would still be really bad of course.
> improve foundational features like the Blender Python API, which enables developers and artists alike
So they want Claude to be able to talk to Blender.
This might actually be quite nice - the Blender Python API is currently very useful and very touchy. Lots of differences in behavior in headless mode which are hard to debug (because you can't open the GUI to see what's happening, because that changes the behavior).
Yes, the Blender API feels like it sits on top of the GUI rather than the GUI on top of the API. When you are writing scripts with the Blender API you basically mechanically describe the steps you would take in the UI. It can be a little fragile at times.
I've used Claude to write some Blender scripts and it's an excellent use case. I look forward to even better Claude/Blender interaction based on this announcement.
I've also used genAI to write scripts. It works splendidly up to a point, then there is absolutely no way to move the needle further. And it's not even close to renders I would ever publish.
That being said, it's about the same for the code it produces for non purely creative things, but for artistic work, I doubt an LLM in between gives any gain. After all, we do have an interface. A human interface.
Yeah I am using blender to generate models for 3d printing - no rendering and Claude doesn't have to do anything artistic for my use case.
There's already an MCP for it, saw a post on LinkedIn the other day about it.
Not sure if this one was the one I saw, but Google gave me this one. You could use Claude Code to build things with Blender.
https://blender-mcp.com/
Anthropic just posted a video 1 hour ago of their own official MCP integration with Blender:
https://youtu.be/LZMWsZbZU5w
Artists mad about AI art ought to welcome this. This is about making art tools better, instead of replacing them entirely. The alternative to this is AI just generating art directly and making tools like Blender obsolete.
Art generators need to come a long way to completely replace art tools. I dabble, but if I were doing real work with it, there have been times it would have been faster to composite in a 3D model rather than keep trying to prompt an image generator into fixing something.
That's why it's great that it's able to work with existing art tools, like Blender, instead of replacing them.
or use a hybrid approach and have the best of both https://youtu.be/1vB3JXzewx0?si=DWKNgPcJcz5u4Bkp
You don't even need that: if your agent/harness can evaluate Python, it can trivially access the API from there and just `import` it, basically.
This is what I do. It’s been really helpful for taking existing FBX files and handing them off to the agent + Python Blender API to analyze the geometry, convert to GLBs, etc.
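For anyone curious, that hand-off usually boils down to running a bpy script through headless Blender. A sketch of the driver side in plain Python (the script and file names are illustrative; the actual work would happen inside the bpy script, e.g. importing the FBX and calling Blender's glTF exporter):

```python
import subprocess  # for the commented-out run at the bottom

def blender_convert_cmd(bpy_script, src_fbx, dst_glb, blender="blender"):
    """Build the argv for a headless Blender run. Everything after the
    lone '--' is passed through to the bpy script via sys.argv."""
    return [
        blender,
        "--background",          # no GUI; note behavior can differ from GUI runs
        "--factory-startup",     # ignore user prefs/addons for reproducibility
        "--python", bpy_script,
        "--",                    # Blender stops parsing its own args here
        src_fbx, dst_glb,
    ]

cmd = blender_convert_cmd("fbx_to_glb.py", "model.fbx", "model.glb")
# subprocess.run(cmd, check=True)  # uncomment if Blender is on your PATH
```

`--factory-startup` is optional but helps make headless runs reproducible, which matters given how differently Blender can behave without the GUI.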
Me too. I used Codex to convert a bunch of riggings between lots of models via the Blender API.
It felt weak at it, like the corpus wasn't strong on Blender/Python work, but it got going fairly fast with some coaxing.
What model are you using? With Codex, gpt-5.4 set to xhigh (and now gpt-5.5) seems to have zero issues helping me with rigging and fixing GLB/FBX models; works like a charm. One time I instructed it to iterate with screenshots because it was a gnarly task, but usually it figures everything out even when headless.
https://www.blender.org/development/blender-lab-activity-rep...
With examples: https://www.blender.org/lab/mcp-server/
+++ Has good examples.
We (I) need that.
"Some software" is approaching levels of complexity where, perhaps, a human is barely able to even use it.
At the same time (brave new world) LLM assisted software opens up the possibility of levels of complexity we would not have considered before.
I disagree that anyone should need LLMs for Blender, for example, because Blender is designed by people to be understood and used by people, even if it has a learning curve. It sounds a bit dangerous to build new things we don't understand, or worse, to reduce our understanding of what we currently use because an LLM (only after studying our use of the same technology) appears able to replicate it, mostly.
I'm reminded of Sam Altman's performative helplessness on Jimmy Kimmel, when he described being unable to raise a baby without ChatGPT. That's something humanity has managed for a good portion of its existence, and not something we should hand over to a yet-unproven, yet-unprofitable technology.
It also sounds like people with little ability can use this argument as a way to say “look how difficult this is for humans”
While it’s just a “you” problem. Some folks have better skills, knowledge and comfort with difficult subjects. And that’s fine.
Surely there's a middle ground where improved APIs can be leveraged by both people and LLMs alike while keeping those APIs approachable? Why is it necessary that changing the python APIs would lead to "need[ing] LLMs for Blender"? I'm nowhere close to an AI maximalist but this criticism seems grounded in execution concerns. I'm definitely not saying that they won't mess this up and make the APIs overly complex, I just don't think that's necessarily going to be the case.
I propose that, for some software, the learning curve is becoming harder to surmount.
Further, I'm suggesting "designed by people to be understood and used by people" might be a hurdle for some future software we might envision.
(Altman's performance is orthogonal as I'm suggesting a new level of software that has not yet been written/conceived.)
Regarding whether AI can/could overcome the hurdle of human understanding: I'm not sure if that's really a hurdle. Let's say in theory, a system was crafted by AI to be interacted with exclusively by AI. Broadly, I assume the outcome of the system would be for people, and it would have some purpose or value. Now my question is: how do we verify it functions? If it is a black box that nobody understands, then we can't verify it at all, and we can't debug it if there's something wrong with it. We circle back to the human understanding issue.
(I'm sorry if my tangent about Altman was taken as a personal affront, as I did not mean it to be that. It just muddied the two interesting topics you brought up.)
Why do we need that?
Art should demand more of the creator than the person experiencing it.
The alternative is 9 billion who cares slop things.
Not everything is abstract art. Sometimes I want my subsurf modifier to only target certain vertex groups, and if I can use AI to make that happen in a few seconds, that's a huge win for me.
Blender (and CAD programs as well) get in the way of creativity.
I know what I want, no idea how to tool my way there.
I spent two months going through YT tutorials, mucking about in Blender in order to figure out how to put together the model I had in my head [1].
(A year later, a new project idea—and it's back to YouTube because the learning is not only a steep curve but also sometimes so esoteric that it's fleeting.)
[1] https://github.com/EngineersNeedArt/Space-Tug_3DModel
It would take you as long or longer to draw with a pen too. Art is hard in general.
Can't wait for GIMP automation, so I can finally start using it!
There is already a Blender MCP. It works-ish! But could be a lot better in understanding 3D space.
As an amateur this is really exciting - but not sure about folks that are real pros at this stuff.
I'm not a pro, but I've been unimpressed by LLMs driving Blender. Was left unexcited. It must be torture for a professional to read this thread.
Absolutely agree - I was not impressed, but it will be a lot easier to work with the tooling without a 10 month crash-course on UI and 3D terminology if I can ask for what I want in plain language instead of knowing which button buried three levels deep to press to get my desired results.
I want Claude to talk to Blender; I personally hate using Blender but love its outputs.
Frankly, I love the idea of an automation engine printing out tangible works. I actually build spritesheets that way! Load a bunch of individual gimp files as layers, set them offset by a given parameter, and boom, done!
Would be rad to incorporate some statistical procedurally generated designs based on my own aparatus.
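The layer-offset arithmetic behind that kind of spritesheet script is pleasingly simple. A plain-Python sketch (the function name and frame sizes are made up; in GIMP this would drive the layer-placement calls):

```python
def frame_offsets(n_frames, frame_w, frame_h, columns):
    """Top-left (x, y) pixel offsets for laying n_frames out in a
    row-major grid - the same arithmetic a layer-offset script applies
    when stacking individual frame files into one spritesheet."""
    return [
        ((i % columns) * frame_w, (i // columns) * frame_h)
        for i in range(n_frames)
    ]

print(frame_offsets(4, 32, 32, columns=2))
# → [(0, 0), (32, 0), (0, 32), (32, 32)]
```

A procedural generator could feed its own parameters straight into this kind of layout step.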
What I do not want to see is LLMs hijacking decades of hard work and consideration that went into integration channels, tailoring them towards the LLMs rather than the diligent engineer.
If they want to spread their tentacles as far as they like while making products more difficult to work with, they are making an enemy of at least me.
Honestly, I think this is a stepping stone towards replacing industry CAD modeling tools.
AI _can_ work with 3D models already, but it's really bad at it. CAD requires an extra level of control and I think this is where I could see AI companies wanting to get a foot in the door.
e.g "Let's build an adapter between 2in BSP Male and 3/4in NPT Female threads with a third Hose Barb outlet with the following properties..."
Yeah, Google has MuJoCo so it seems natural to get hooks into Blender.
MuBlE: MuJoCo and Blender simulation Environment and Benchmark for Task Planning in Robot Manipulation: https://arxiv.org/abs/2503.02834
Not sure why this is getting backlash. Just look at https://fund.blender.org. Other corporate sponsors are Google, Meta, Nvidia, Netflix, even Adidas.
This just means more support for a major OSS project.
Because Blender is a tool for making art and Anthropic makes tools for stealing art
AFAIK Anthropic hasn't built any image or video generation tools yet, just text/code generation. OpenAI/Google/xAI all built image/video generation teams though so it may only be a matter of time.
Surely art also exists in textual realm.
Saying stuff like this about any AI company is silly and makes critics sound more like stochastic parrots than AI models themselves.
Anthropic hasn't even shipped an image or video model. What is "stealing art", the fact that AI models are trained on data? What constitutes stealing in that?
I would probably consider books to be a form of art.
Art is very varied, its not just paintings or photographs. Video games can be art, music can be art (artists!), an eloquent dance can be art.
I agree books are a form of art.
However I wouldn't call AI training on text in books "stealing art"
Do you consider reading a book and remembering what you read to be a form of stealing? How about writing a book review?
A lot of those companies likely sponsor it because they use it themselves, and actively benefit from its continued development. The incentives are at least somewhat aligned.
I doubt Anthropic has much use for such a tool internally. They're sponsoring it because they want to inject their slop into it and replace the people who do use it.
There is no scenario where more people using Blender is bad for Blender.
Or perhaps they're sponsoring it so artists can spend less time fiddling with Blender's UI and more time creating art?
Why would Anthropic want people to "create" art when they can "generate" it instead?
I don't think any slop is getting injected into Blender:
> Blender Foundation’s mission remains to empower artists with free/open source technology and tools. Yet, we also maintain APIs for individuals and corporations to extend Blender, also beyond what’s aligned with Blender’s mission. We consider this part of the Software Freedom that’s embodied with Blender’s GNU GPL license.
Unrelated but what do you all get from this endless speculation about others motives?
To me it just comes across like the stereotype of a lonely housewife peeking through the blinds, judging the neighbors.
This forum is just as absurd as Reddit, but in a subtle way: politically correct language without the zany memes, yet the same absurd sense of self-righteousness and importance, and the same validity granted to endless unsubstantiated assertions and qualifications.
As if not posting about Harambe affords legitimacy while posting what boils down to intrusive thoughts about people and motives y'all are removed from.
The nostalgia-fueled appeals to preserve your grasp of reality are just a modern conservatism. Time moves on and has as little obligation to stand still for HN doomers as it does for adherents of traditional religions or contemporary American capitalism.
The horse-and-buggy makers, the rotary-phone makers, and the other engineers screwed out of careers by offshoring are playing a tiny violin for the script kiddies who grew up to become expert Python and DevOps engineers.
Get over yourself. Your efforts are a drop in the ocean of human effort. Ffs this comes off as some fine whine.
> They're sponsoring it because they want to inject their slop into it and replace the people who do use it.
Oh, noes, the horrors of democratising access to an expert tool. What will Onshape do now that the free one is accessible to an order of magnitude more regular people who could use a 3D shape but don't have the time to learn a very complicated yet powerful tool?
I guess people have said the same about game engines / coding tools that help artists turn their vision into working, compiling games, right? Riiight?
It's not democratising access to an expert tool, it's devaluing the skill, expertise, and hard work required to create art.
edit: I seem to be rate limited and unable to reply? I'll paste it here:
I'm sorry but I don't agree. People care about art when it is extraordinary, in the same way people watch professional sport because it is extraordinary, or they watch cooking shows because it's extraordinary. What you call "democratisation" I would call the trivialisation of something which used to take effort into something which does not. People don't watch random people who have never played soccer before at the World Cup, they don't watch someone who can barely cook Kraft dinner cook on MasterChef, and they don't go to museums to look at someone's first sketch. There is no reason to assume that the trivialisation of art wouldn't simply devalue the medium to the point of irrelevance. However since people seek what is extraordinary, you will always have gates which are kept, and for good reason.
edit 2, responding to hbosch:
You don't have to be an extraordinary soccer player to enjoy playing soccer, but that doesn't mean we should develop a pill that makes everyone a great soccer player with no skill development or effort required. We don't watch professional sports just to see a ball move fast, we watch to see what a human is capable of through discipline and hard work. If everyone could take a pill to become an elite athlete, the sport wouldn't be democratized, it would be deleted.
When you remove the effort barrier you don't make art easier, you collapse the meaning of striving for excellence. If the 'expert' and the 'novice' produce the same result with the same button press, we haven't empowered the novice, we’ve just made the expertise irrelevant.
Tools like Blender are force multipliers for human intent, generative AI is a replacement for it. If you use Blender to make a "stupid little game," you’ve gained a skill. If you use AI to generate the assets for that game, you haven't gained a skill, you’ve simply acted as a manager for an automated system. The value of that game to the creator isn't just the code, it’s the fact that they built it. I find it really hard to believe that people find value besides the initial novelty in having a computer generate stupid little games - for what purpose? If nobody is going to play it, and you haven't built it, precisely where does the value in it come from? It's like a simulacrum of human creation.
What I actually see is people who are unwilling to put in the effort but seek the rewards anyways. They want the accolades from creation but without the hard work. I dont see the value in enabling this.
Sorry, this is not a good argument. It's sad that some skills are devalued when so many have invested years into them, but it is a net win when more people can create something without having to become an expert. Experts don't deserve to have a moat built around them. I say this as a software engineer with 16yoe who is dealing with the same challenges.
Please explain how this is a net win beyond the extremely narrow-minded belief that more equals better.
Do you believe it's a good thing that all software is becoming noticeably lower quality? Do you believe it's a good thing that open source is on its death bed now that licenses don't mean anything and popular projects are drowning under AI generated PR spam? Even here on HN, Show HN is effectively dead as almost every single submission is some boring garbage generated in 30 minutes that nobody cares about, not even the person who submitted it.
Experts don't need to have a moat built around them, because they build their own moats with their skills and efforts. Just because you get jealous and feel entitled to the fruits of the experts' labor while being unwilling to put in the same work does not mean you have the right to steal their work and mix it up in a computer algorithm so you can later claim it as yours.
The concerns are: the proliferation of slop en masse, and the prosperity of artists who live off their work being rendered impossible. It's already quite dire for them.
The upside? A new generation of content creator who may profit from automation.
We never had problems creating art. In fact, what's artistic is relative to the effort involved in the creation process; also, access to technology available at the time.
To me the argument is valid. It's devaluing the skills of existing artists, and the decade long investment they likely put into their craft.
Unity enabled a flood of slop games long ago. Dreamweaver enabled countless slop websites. Photoshop delivered us heaps of slop images. Amazon delivers thousands of slop products from slop manufacturers every day.
The slop isn't coming, it arrived decades ago. The Pandora's box of slop is already open. Maybe AI widens the aperture, but if you cannot handle the discernment required to separate slop from something useful or meaningful, that is your problem.
If I download Blender today, as a true beginner, is what I make extraordinary? If it's not, does that mean I am not allowed to use Blender? What if I want to use Blender and I am not interested in making anything extraordinary? What if I want to use Blender to make a stupid little iPhone game that no one will ever play? Is that considered extraordinary, or not? What is this criteria?
The truth is, the vast majority of art is not extraordinary, whether it comes from a canvas, a typewriter, Photoshop, or Blender. That is as true for AI as it is for humans. Likewise, the vast majority of people who kick a soccer ball will never be extraordinary soccer players.
I firmly believe that tools which enable people to get closer to their goals are always a good thing. The concept of what makes something "extraordinary" does not come from the maker, or the tool, but from the beholder. It is the audience's job to discern what is and isn't "extraordinary", not the makers'.
By that logic, Blender shouldn't exist because it devalues the skill of hand animated art.
Nobody said that. You just made that up.
Did you really create a 3d model if you didn’t hand type all of the vertex coordinates? Anything less is cheating by using cheating tools and isn’t art. Oh you had to use a deform tool? Pathetic. Can’t calculate your own circle approximations at various details? Good. If you can’t do it, it shouldn’t happen.
Honestly these people are just so weird.
Why did you create an imaginary person in your head just to get mad at them?
> that doesn't mean we should develop a pill that makes everyone a great soccer player with no skill development or effort required
What are you talking about? We should absolutely do this. We should extend this to as many domains of human achievement as possible. By this logic, computers shouldn't have existed because it devalued the skill that scribes and accountants developed before word processors and spreadsheets. Blender itself is a tool that made 3D accessible to thousands of people who previously had to pay for expensive licenses, training, and SGI workstations. Literally the whole point of technology is to make more things possible for people unable to do it naturally or without great effort.
We probably shouldn't strive for pure equity of every outcome actually.
That's not the point at all and I think you know that.
Blender didn't have support from big tech for decades and has flourished. It's now on par with top of the line proprietary 3d software.
The upside is meaningless compared to what's at risk when for-profit grows influence.
Google has been supporting Blender for over 20 years, initially through Summer of Code [1, 2], and also via corporate sponsorship once Blender started offering it.
And I'm pretty sure I've seen most of the other big names in tech on the sponsors page for many years now.
[1] https://en.wikipedia.org/wiki/Google_Summer_of_Code
[2] https://code.blender.org/2011/04/google-summer-of-code-2011-...
Can you imagine going to a football match and second-guessing which players merely look human but, skin-deep, are actually androids made at a factory? This is what it feels like with music and literature right now with so much AI. There are some pockets where you can still say "that's human-made", like 3D-rendered feature films with some particular artistic direction. That, it seems, AI companies also want to go the way of the dodo.
Yesterday I saw a clip that went "viral" of a few hogs chased by a humanoid robot somewhere in Poland. I had to watch it a few times to figure out if it was real or generated. I still wasn't 100% sure. Asked around in a group, and apparently it's been widely reported on regular news, so I guess it's real? But we're slowly getting to the point where you won't be able to tell, especially from a short clip on a phone.
Yes, and thanks for sharing the experience of the hog video - it was recommended to me too and I chose not to click, as I did not want the frustration of seeing another "tech run amok" example of tech disrupting YET ANOTHER norm.
Relatedly, IMO "trust" as a word / concept is deserving of being reevaluated nowadays.
E.g. I don't know that you, NitpickLawyer, are a real person. And when I go through the mental exercise of inventing the details, proofs, and evidence I'd need in order to satisfy my doubt, I never succeed until I reach the physical-contact-with-NitpickLawyer condition.
So I think we need to evaluate what is necessary for oneself to operate in society, separate from these untrustable things, such as media / news reports, and all the other things I just don't want to worry about, right now. :-(
No-one cares dude. People like good enough, convenient things that serve their entertainment needs, which is shaped by said entertainment, so there is not really an issue here.
“No one cares” except for all the people bringing up that they care.
Since they are up against an insurmountable mountain of capital which will commoditize and optimize whatever it wants, they are kind of in for a pointless fight with an inevitable end. They could save themselves a lot of despair if they saw the writing on the wall and pivoted to something that still has value, or accepted the new reality instead of throwing a fit.
That is too difficult as the concept (of trusting one's perception) is, I believe, intertwined deeply with other aspects of being human, for many people.
It's not reasonable to require that those people already be mentally organized in a way that healthily mistrusts reality.
Maybe it is a pointless fight with an inevitable end but at least I'll die with my humanity and dignity intact rather than being a boot licker for Sam Altman, but you do you.
You can die with your humanity at a farm growing veggies and being surrounded by people you love and still be consistent with that I write. Seeing the inevitable does not equal loving or wanting it.
I care deeply. It is not single-handedly going to destroy humanity. However, we are clearly on a course where people are more isolated, less challenged, less social, and very very very unhappy. Music is one of those things that can really bring people together. If we flood the zone with AI music (or any other art form) we will slowly edge out the humans who are doing that. That means less new music. Less chances to come together. Less chances to dance together. It's a death by a thousand cuts. I, and many others, think it's worth fighting for because we want others to have the amazing experiences we're having.
Every generation has a new baseline. The younger generation will not be able to imagine having anything other than doctors and psychologists in the phone, and they are content with it because it's all they know. Social media might be all the social connection they have, and that will be the best thing where they will have the best experiences, they won't know another baseline. Eventually maybe the best experiences will be had with digital companions, etc.
The only losers here are old or bitter people who have tied up their worldview into their own time and cannot see or comprehend that the world has moved on, with different bounds for experiences and expectations.
> Eventually maybe the best experiences will be had with digital companions, etc.
Obviously I can't speak for all of Gen Z (and I realize we're no longer "the younger generation"), but my friends and I don't want any part of this, and feel optimistic rather than bitter that things won't go the way you're describing. I seldom meet anyone in my age group that isn't talking about moving away from social media, cancelling software subscriptions, all of the things that millennials and Gen X seem to be so excited to continue building and promoting.
Even at my workplace the "older" people are the ones that are excited about stuff like AI jazz remixes of rap songs and AI generated short films, while literally everyone else under 30 finds it pretty cringe and makes fun of them in DMs.
So all that to say, I disagree with your outlook, but I guess time will tell.
Talking about and doing something are different things. What are the social and market structures around your friends that let them avoid having a smartphone, cancel subscriptions, and uninstall everything? Do you see this getting better with media consolidation from Substack (Andreessen), Twitter (Musk), and YouTube channels owned by hyperscalers/billionaires, and questionable mergers like Paramount and Warner Bros?
When the social culture is based around platforms and content that has subscriptions, and when media and what you see is consolidated, you can't just exit without losing a big part of the social context because the people around you are eating the same thing.
I dislike slop as much as anyone else. I think it puts a higher burden on the receiver of information to filter the signal in a pile of trash. I just don't really see an actual way out if you look at it from a societal level with the existing structures and incentives.
> you can't just exit without losing a big part of the social context because the people around you are eating the same thing.
That's exactly it. The goal is to lose a big part of the social context. It's driven by rage bait, AI bots, state actors, and a thousand other influences that are predominantly negative. Of course amazing things happen online. However, the good is not worth the bad. I'm raising my kids and they will never have a smartphone. Will they miss out on some things? Of course! They also won't have their attention span destroyed, or their ability to be bored and creative in the real world destroyed; they won't have body issues, they won't be caught up in the alt-right pipeline, they won't have their brains fried by content like Mr. Beast which is designed to be as hyper and addicting as possible. Missing out on the current social context is the entire goal. People were happier before it.
This only works if all of their friends live in similar systems. Otherwise their friends will talk about games, memes, and series at school while your kids are isolated, not part of the culture and not in the loop.
I think this is only possible if you find a community with similar values, like religious, or hippie, where the focus is put on other things. Otherwise you might deprive your kids of what you want to give them because they will not feel socially connected.
I am not an idiot. I'm well aware they will pick up things at school. My 5 year old already knows who Mr. Beast is. He's never watched a video of his and never will at my home. If he watches one or two at a friend's house, that of course is going to happen. But he won't be consuming that poison regularly every day. My 8 year old is doing just fine. Happy. Healthy. Active. Lots of friends. And when they're older and fully functioning adults, unlike some of these Gen Z zombies who have had their brains fried, they will thank me.
I hope you will be right. I think the teenage years might be hard for this, good luck.
The pearl clutching over the pedigree of art is getting tiring. No one has really ever cared. Most mainstream music is written by corporate teams. Elvis didn't write his own music. Frank Sinatra didn't write his own music. Nearly all pop artists don't. But suddenly, people are now clamoring for art, but they never gave a shit to begin with. Most people can't tell AI written music from anything else if a human performer played it. Most of it is better than any local bands anyway. Tired of people pretending they care.
It’s subjective, because it’s art. There’s no right answer.
If you like listening to AI generated content, then that’s fine! I’m glad you found something you enjoy.
For me, I consume art because I want to understand other people. For example, when I go to an art museum I want to emotionally connect with the artist: to feel what they were feeling, or understand an idea they’re conveying. I have little desire to emotionally connect with stochastic token sampling. It seems a vapid way to spend time
You still assume the artist in those examples is real. It could be a team, a ghost artist, etc - yea it's less likely than music, but still. The connection itself is quite difficult too, given the ease with which someone could plagiarize others' work - sure they have mechanical skill, but did they really invest in the painting or was it ripped off from others' ideas?
I suspect your connection to real artists won't be impacted. This, like the music example, just highlights our assumptions.
I'm not defending this AI garbage fwiw, i just don't think it's as interesting as most people make it out to be. I adore music, and i connect with songs i connect with. I don't typically think about the possible ghost writers, teams of writers, ghost players, etc. The music either speaks to me or it doesn't.
Though i'm not trying to connect to the musician as a person. However, as i was illustrating - if i really wanted to connect to musicians at face value, that ship sailed many, many years ago. Far before AI.
There are ways to mitigate this, but that balance will always be there - it was before AI, and it will be after. It's an evolution. Not an enjoyable one perhaps, but it is nonetheless.
I arrange gigs with real bands playing music. At least that will take quite a while to replace with AI. I am curious to see if we will get a backlash eventually around the content. It will probably be a mix of everything.
Storytelling didn’t go away when the theatre was invented. Theatre didn’t go away when cinema arrived. Cinema wasn’t replaced when radio arrived, and that wasn’t completely replaced by TV, etc. It is a mix of things these days and it will probably remain that way.
Check out this album, especially Bernard's Boogie, and Horses.
- https://donnybenet.bandcamp.com/album/il-basso
Totally not written by Google.
If Frank Sinatra had AI he wouldn't have had to perform any of that slop by Cole Porter, Irving Berlin, Kurt Weill, Rodgers & Hammerstein and other composers no one cares about
Did Frank Sinatra have an AI write his music? Did Elvis?
If not, doesn't your argument entirely miss the point?
Can you imagine watching a movie, and not being able to tell which scenes have CG special effects and which don't? Oh no!!! CG totally ruins all movies!!! Even movies that don't use CG are ruined by the tension of dreading that they might, and wondering if they do, and doubting everything you see on the screen, even if they don't. CG has ruined everything!
AI is currently the worst it will ever be. 18 months ago it couldn't draw hands.
> like 3D-rendered feature films with some particular artistic direction.
This is a really interesting example. Why do you foresee artistic direction going away as a result of AI? More importantly: why didn't we lose that with the transitions through the years of special effects - i.e., from practical to 3D-rendered?
It's not an uncommon opinion that we did lose artistic direction and aesthetics by moving to VFX - the ability to edit more and more things in post to change the direction or plot of a film seems, to me, like it's enabled more design-by-committee in Marvel films, etc.
I just wanted to note a couple of things
1. That looks great and I hope they can make blender even better, or pay people to do that. Raise your rates for corps Blender!
2. What does this have to do with any strategic position for Claude? Their goals are:
“Enterprises pay us a fortune to either
1. Sift through unbelievable volumes of data to pick out fairly easy to find nuggets
2. Tell everyone their jobs are at risk to keep them in line”
There is no other play. AGI is science fiction.
So why are they launching this? Because they don’t have a strategic core running through - it’s always been “wow what else can this do”. That’s a research project at the price of billion dollar data centres.
Unless we do achieve AGI (which we won’t) the price tag is way beyond the returns we are seeing.
2. Donate to open source projects = be good
I've been using Claude with OpenSCAD to generate some simple models with repetitive geometry (a set of d8 dice with braille on them for a scrabble-like game for blind children). It's really good, though often I have to send a screenshot to Claude or describe a geometry issue.
Having more native integration into Blender, which I'm already much more familiar with, will be fantastic.
Similarly, I made an agent that lets Claude puppet OpenSCAD, generate screen shots, change the camera angle, etc. In general Claude seems to have a pretty good vision model that can create usable designs. It's also fun to let it make up new models of its own and then try to 3D print them.
I'm sure I'm not the only one who would appreciate hearing more about that game :)
This[0] is the original game. I downloaded the dice, made a list of letters for each die (I can't remember if I did this manually, I don't see a published list so I must have), and then I fiddled until I got something that looks decent and also printable. Each die face has the braille letter as well as a small English letter. Here's[1] my repo, I wasn't intending to make it public yet, so it still has the original creator's files in there and the README is autogenerated.
The biggest challenge at this point is figuring out how to make the dice print consistently. With each die face only having a few points of contact, they keep unsticking. What I'm trying now is cutting the dice in half, printing the halves, and then sticking them together with dowels.
[0] https://www.printables.com/model/821177-octobabble-a-word-ba...
[1] https://github.com/PeterFajner/braille_octobabble/
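The core idea behind dice like these is simple enough to sketch. Below is a hypothetical Python generator that emits OpenSCAD source for one die face with raised braille dots; the dot subset, spacing, and dimensions are illustrative and not taken from the linked repo. Dot positions follow the standard 2x3 braille cell (dots 1-3 in the left column, 4-6 in the right).

```python
# Hypothetical sketch: emit OpenSCAD source for one flat die face with a
# braille letter rendered as raised hemispheres. All dimensions are made
# up for illustration; the real project's geometry differs.
BRAILLE = {"a": {1}, "b": {1, 2}, "c": {1, 4}}  # small subset of the alphabet

# (column, row) for each dot in the 2x3 braille cell
DOT_XY = {1: (0, 2), 2: (0, 1), 3: (0, 0), 4: (1, 2), 5: (1, 1), 6: (1, 0)}

def face_scad(letter, face=12.0, pitch=2.5, dot_r=0.6):
    """Return OpenSCAD source: a thin cube face plus one sphere per dot."""
    dots = []
    for d in sorted(BRAILLE[letter]):
        col, row = DOT_XY[d]
        # center the 2x3 cell on the face
        x = (col - 0.5) * pitch
        y = (row - 1.0) * pitch
        dots.append(f"translate([{x}, {y}, 1]) sphere(r={dot_r}, $fn=24);")
    body = f"cube([{face}, {face}, 2], center=true);"
    return "union() {\n  " + body + "\n  " + "\n  ".join(dots) + "\n}\n"

print(face_scad("b"))
```

Generating the `.scad` text from Python like this is one way to keep letter data in one place while OpenSCAD handles the solid geometry.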
Almost every AI lab I talk to that deals with 3D has built their pipelines around Blender.
This is unsurprising as a general development, other than that Anthropic doesn't have a 3D model generation framework of its own.
I don’t think this is to create MCP servers necessarily but rather to improve the blender pipeline further.
People seem not to be aware of this: https://fund.blender.org/funding-policy/
I agree that it's not a good look for Blender, but I don't think that something actually bad will come from this. (Other than maybe a negative impact to Blender's reputation.)
I didn't think about their involvement with the American military. That changes my view on this.
Wait, wasn't that OpenAI, not Anthropic? Lost track of all those AI companies.
I'm thrilled for the world where I can drive more things with an LLM. The big limiting factor for me for little home improvement things was that I'm not very good at modeling so I have to get my wife to go do things for me. That's fine, she's happy to do it, but sometimes you kind of just have to try yourself to see what you're really looking to do.
Recently I've been using Claude Code with `build123d`[0] and it's pretty good, but my wife uses Blender so it would be cool to come up with something at least halfway decent and then have her clean it up.
0: https://wiki.roshangeorge.dev/w/Blog/2026-04-24/Modeling_Bet...
I prefer that outcome to them acquiring OSS projects and shutting them down once the bubble collapses.
I hope this drives the move towards what I have kept on saying is "good use" of generative ai: being able to generate animations.
It's tedious work, not the most fun thing to do especially for units like random enemy #20.
Not to be a hater or anything (I'm a hater), but I'm seeing people mention the potential of LLMs for the "grunt work" like retopo, but I can't really begin to imagine what the "correct" data representation and python api calls would even begin to look like in a training set? Would an LLM really be querying vertices in relation to one another and estimate whether their distance "sounds" like good topology?
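For what it's worth, a crude version of that is at least computable. Here is a minimal, hypothetical sketch of the kind of numeric check an agent could run over mesh data pulled from the Blender Python API: good quad topology tends toward fairly uniform edge lengths, so high variance is a cheap "smell" for bad retopo. Plain tuples stand in for `bpy` types so the sketch runs anywhere.

```python
# Hypothetical heuristic: score edge-length uniformity of a mesh.
# A lower score means more uniform edges; it is NOT a full topology
# analysis, just the sort of scalar an LLM could reason about.
import math
import statistics

def edge_lengths(verts, edges):
    """verts: list of (x, y, z); edges: list of (i, j) index pairs."""
    return [math.dist(verts[i], verts[j]) for i, j in edges]

def uniformity_score(verts, edges):
    """Coefficient of variation of edge length (population stdev / mean)."""
    lengths = edge_lengths(verts, edges)
    return statistics.pstdev(lengths) / statistics.fmean(lengths)

# a unit square: perfectly uniform edges -> score 0.0
quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(uniformity_score(quad, edges))  # 0.0
```

Whether a model can go from scalars like this to *fixing* topology is the open question, but "query vertices and estimate whether distances sound right" is not entirely hand-waving.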
I can imagine they would be interested in creating features to generate 3d models.
Oh yes, I think this part is easy to see and perhaps even logical. But I also don't think this part is the problem. After all, generating 3D models is primarily a technical consideration. I don't think the technical prowess of the software is the issue here.
It's because LLMs will soon start building real-world objects via CAD. This is the first step. Look at things like the Adam plugin for Onshape. Works great with Opus. It built a toy car for me with one prompt.
If this is the case, they’ll want to improve the NURBS support within Blender. You can get some amazing results with subd, but digital twins require accuracy and you get that with NURBS geometry. Fortunately, Blender supports it already, it just needs some attention to tooling.
Haven’t used it but my understanding is that Blender isn’t really CAD. Is there a way to use it for CAD?
All the CAD and modelings tools have their own scripting languages that LLMs can write to, so you can just use that directly without any built-in LLM support. There will probably be someone doing a pelican-on-a-bicycle for CAD.
Blender isn't really made for CAD at all, although there are a few CAD plugins. It's more for artistic modelling, like Maya or Cinema 4D.
There are already LLM plugins for Blender and prompt integrations for model generation, rigging and co.
Maya has extensive NURBS tools, which means it can import and export CAD data natively. While Blender does support basic NURBS geometry, it lacks the tooling to fully support it.
If the idea is to support Blender for use with “Digital Twins” or “World Models” then the first step is to start with accurate geometry. Anything less is slop.
This looks interesting. Anything similar for FreeCAD?
I've been using it with OpenSCAD, which has the advantage of being entirely script-generated and so more easily understandable by AI.
I thought agents were bad at OpenSCAD
There will be community backlash. And it will not be uncalled for. Sad news.
Clearly I'm out of the loop, why would it not be uncalled for?
Because someone might argue blender is taking funding from an entity that wants to make proficient blender users obsolete.
Or it might allow proficient blender users to become more productive, resulting in higher detailed scenes for the same budget.
We'll see how it shakes out. As a non-proficient Blender user, I'm kinda keen on this since I have had a lot of ideas that I haven't been able to realize in Blender.
So reducing budgets and suppressing wages then, what a great deal for the workers who have specialised in this field and whose work and effort the LLM has been trained on to replace them!
This is like arguing we should only have manual looms because the mechanical looms suppress wages and destroy the livelihoods of those expert loom operators.
The tech is here. We can fight it, or adapt and embrace it.
If previous examples from the industrial revolution are anything to go by, fighting automation is a losing battle.
Difference is that those tools modernised the work and actually created jobs. The ultimate aim of these AI sociopaths is to remove all work in all areas so they can hoover up all the money and let people starve.
I won't argue with that.
But, there is plenty of open source stuff out there to enable people to have their own models, running on their own hardware. Business does not need to go to the big 3.
Business doesn't need to go to the big 3, but it will. The big companies will ensure that smaller, specialised or open models get restricted by laws paid for by their lobbying budgets, so that they can pull up the ladder after them and solidify their position. They've invested billions and will never allow the world become some tech utopia where we all have a personalised free AI in our pocket, they will guarantee their own dominance.
That is not possible because that's not how money works. One way you can tell it's not possible is that it's the plot of Atlas Shrugged.
Yes. It is the worst possible match right now.
I don't think they can tell Blender what to do. As such it's just more money for Blender! Yes, Anthropic can use the Python API to do their AI BS, but an improved Python API is also good for anyone else. This doesn't mean that Blender themselves are integrating any gen AI (if you don't already count the denoise filters). Do you really think Blender should have denied the donation?
I think they should have; it does not align with their community. Could they have denied it? I am not sure about the legalities.
Money is good. But not antagonizing your community (as an open source project) is better.
Shame that we have to choose between better financing of Blender for features we already want (Python API quality) and placating imo overly dramatic artists.
I think the worries of artists over gen AI are valid. I guess all the better that some of the money of those "not yet" profitable AI companies goes to a good open source project and not to some of their usual practices.
I don't think it's just placating artists. Most Blender sponsors want more people getting good at Blender. Anthropic benefits if fewer people have to.
What do you mean?
A lot of 3D artists and VFX artists hate AI. Social media is full of them.
Some of them, like the illustrious MrDoob (behind Threejs), love AI and are all-in on it.
The VFX folks at Corridor Crew [1] have been leaning into AI for years now and showing a healthy attitude and path forward to using AI in workflows.
[1] https://www.youtube.com/@CorridorCrew
Ah thanks for clarifying it seemed like that was assumed knowledge in this thread.
Sadly so and it is the people who don't even fund open source projects.
Yep. Anthropic's motives are obviously self-interested (Claude <-> Blender integration), but I'm not donating to Blender, are you? That's the problem, we all want Blender to be able to pick and choose donations, but when all OSS is cash-strapped, it is easier said than done.
I'd prefer Blender get some additional funding out of this AI bubble at least.
> I'm not donating to Blender, are you?
Exactly right. Everyone online is all too happy to proclaim what hill other people should die on, but is rarely willing to go up there themselves.
This inspired me to send them something, and I noticed that they have an activity feed for donations at https://fund.blender.org
It seems pretty active, albeit with small donations at a time.
Has anyone found a really good AI modeling and animation workflow? I've tried a few like Meshy which work, but I am not super happy with the results.
Not to be a shill, but Hunyuan 3d studio is pretty good. You get 20 free credits/day which is pretty generous.
Texturing still is subpar. But I've found that using Hunyuan for modeling+retopo+unwrap -> clean up in blender -> texture in substance painter is actually a pretty nice workflow for some stuff.
No it's a big area of opportunity right now. All the existing solutions are pretty rough.
The latest ChatGPT image generation model is producing really nice results for turning sprites into sprite animations, which is something that a year ago felt impossible to get right. But 3D has been impossible for me to get anything good.
Yeah I almost mentioned this! The recent GPT upgrade might actually be the most helpful tool in the space.
Mixed feelings about creative work and AI, but if it wasn't for LLMs I would have chosen a different software than Blender for my hobby-level 2D animation. Made a Blender plugin w/ Claude, and it's saving me so much time (:
The press watching side of me only has questions. Why was this published by Blender and not Anthropic? What does this actually mean? That the blender team gets free claude code max subscriptions?
What it means is here[1]. Anthropic is paying €240k a year and in return they get some marketing in the form of a press release and a website mention, as well as someone to talk to.
[1]: https://fund.blender.org/corporate-memberships/
Blender [0] and also Autodesk Fusion [1]
[0] https://www.youtube.com/watch?v=LZMWsZbZU5w
[1] https://www.youtube.com/watch?v=Gen8rG40ntA
The backlash is because every time open-source projects get involved with lots of money, everything goes to shit.
Any ideas why anthropic is interested in blender funding?
Presumably because they think agents will become the dominant primary users of tools like Blender, and want a seat at the architectural table to help accelerate that & create useful synergies with Anthropic products and models?
The press release calls out the Blender Python API, specifically, which makes sense for agentic use.
> This support will be dedicated towards Blender core development, to maintain and continuously improve foundational features like the Blender Python API
Pretty much spells it out. They have an interest in extending/supporting the ability for Claude/CC to use and interact with Blender. There may be gaps in endpoints that Anthropic needs to enable certain patterns of automated usage.
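As a concrete illustration of the headless pattern being discussed: an agent can drive Blender entirely from the command line with `--background` and `--python-expr`, using the real `bpy` import/export operators. The sketch below only builds the command (filenames are placeholders); you would run it on a machine with Blender installed.

```python
# Hypothetical sketch: build a headless Blender invocation that converts
# an FBX to GLB. The bpy operators (import_scene.fbx, export_scene.gltf)
# and the --background / --python-expr flags are real Blender features;
# the file paths are illustrative.
import shlex

EXPR = (
    "import bpy; "
    "bpy.ops.wm.read_factory_settings(use_empty=True); "  # start from an empty scene
    "bpy.ops.import_scene.fbx(filepath='model.fbx'); "
    "bpy.ops.export_scene.gltf(filepath='model.glb')"
)

cmd = ["blender", "--background", "--python-expr", EXPR]
print(shlex.join(cmd))  # shell-quoted command, ready for subprocess.run(cmd)
```

Gaps in exactly this kind of scripted, no-GUI workflow are plausibly where the sponsored Python API work would land.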
As literally stated in the second paragraph of the blog post:
> This support will be dedicated towards Blender core development, to maintain and continuously improve foundational features like the Blender Python API, which enables developers and artists alike to extend and improve the software for custom workflows.
https://news.ycombinator.com/item?id=47936552
When in doubt, dogfooding: "make us popular with the Internet crowd, take a look at what popular companies have done. Here's a budget you can use"
Chances are they were expecting the agent to spoon-feed hundreds of influencers.
I think maybe they want to expand 3d creations and modeling for future video gen with avatar type situation or maybe get into 3d game development.
Anthropic CPO was in Figma's board and stayed there a day before Anthropic-Canva Figma-killer came up /s
good PR, probably
3D printer censorship
It's only a matter of time before parametric approaches (think LaTeX for CAD, i.e. via a descriptor language) enable LLMs to start 3D modeling.
Already works well enough for a few of my projects in OpenSCAD.
How far can your LLM go with OpenSCAD?
Tbh, I am unsure. The things I needed were extremely simple and I could have used TinkerCAD or FreeCAD, but I wanted to experiment. It was actually way faster with ChatGPT/OpenSCAD, as I wanted to be able to hot-swap SVGs to create different shaped tokens to print on my Bambu P1S with AMS.
Example: https://makerworld.com/en/models/2456180-token-creator
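The hot-swap part is easy to picture: OpenSCAD can `import()` an SVG and `linear_extrude()` it, so swapping tokens is just swapping a filename. A hypothetical minimal generator (filenames and dimensions invented for illustration, not taken from the linked model):

```python
# Hypothetical sketch: emit OpenSCAD source that extrudes an imported
# SVG outline into a flat printable token. Swap the SVG path to get a
# different token shape.
TEMPLATE = """\
linear_extrude(height = {height})
    scale({scale})
        import("{svg}", center = true);
"""

def token_scad(svg_path, height=3.0, scale=0.5):
    """Return OpenSCAD source for a token extruded from svg_path."""
    return TEMPLATE.format(svg=svg_path, height=height, scale=scale)

print(token_scad("heart.svg"))
```

Because the whole model is a short text template, it is exactly the kind of thing an LLM can write and vary reliably.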
In fact, descriptor languages for everything...
Anthropic right now internally (probably):
“We love art :P”
They love art so much they want to take the humanity out of it. The funny thing is these sociopaths at the AI companies can't even comprehend something that cannot be quantified in a spreadsheet.
Blender has an actual spreadsheet view. Everything model-related can be quantified in it.
https://docs.blender.org/manual/en/latest/editors/spreadshee...
Yes but those numbers get there through human creativity, they don't generate themselves.
they can though https://docs.blender.org/manual/en/latest/modeling/geometry_...
I think you missed my point - up until Anthropic decides there's a pressing need to replace them to furnish their wallets, the awesome people who create incredible art using Blender have learned how to use the tool and use it to convert their imagination into something we can all see.
it’s not that your point is being missed. it’s that some people don’t agree with you, and the personal attacks aren’t helping
Will Blender start allowing Anthropic to train on your art automatically unless you opt-out?
No. You can read about what the various sponsorship levels entails here[1].
Blender already has a ton of other Corporate Patron level sponsors, such as Netflix, Meta, Intel, BMW, Adobe and others.
[1]: https://fund.blender.org/corporate-memberships/
How could that possibly work?
I once thought the same about all the copyrighted works on which LLMs are currently trained. Surely they can't just hoover everything up? Haha, silly me.
I understand that creating an LLM itself is transformative, but an LLM trained on copyrighted works remains capable of generating derivative works, which eventually will result in successful copyright lawsuits against LLM users who redistribute those derivative works.
In advance of that day, the great race is to build a licensed corpus as aggressively as possible (see GitHub's latest decision to opt in Copilot usage). Even if Blender doesn't send your data on every save, various options can be developed, such as publishing to a Blender-controlled public channel.
I'm relatively sure the source code I've written and stored in my local computer is not sucked up in the LLM training data. And I believe people working with Blender models are pretty much in the same situation: they don't host their data in a third-party service and openly share it.
There's absolutely no precedent for Blender Foundation sponsorships leading to such things... So no, they probably won't do that.
Only for the SaaS customers
That's already blowing up on Mastodon. Blender Artists is silent for now. Won't stay like that for very long.
I wonder if Ton was involved in that decision, or if it was only Francesco. It could turn out to be a very unlucky start to the leadership role.
Aha - so that decision was not a consensus or community-made one? I really don't know right now; I have no clue about the internals at Blender. But that would be interesting... I can see the headlines: "Blender community sold to Anthropic. Forks starting in 3... 2... 1..."
Sponsorship decisions are not community polls, never have been.
And the worries about "Blender just being sold to xyz..." have been around forever. Always wrong. People with AMD cards were screaming when Nvidia became a sponsor, and vice versa.
It is more about the signal sent, in this case.
For everyone who is interested, here is the Mastodon thread: https://mastodon.social/@Blender/116482997785333001 (it is just what you'd expect, though)
I doubt a fork would ever happen. Blender, being computer graphics software, has a huge knowledge gap between its developers and its users.
The militant open-source crowd never disappoints - rather burn down than build up.
it would be funny if this was meant as a goodwill gesture from anthropic to counter all the recent bad press, only for it to cause even more drama
Anthropic is BLEEDING cash as we speak while valuations soar, and they have the funds to do charity?
I feel this is a thinly veiled attempt at, again, stealing IP.
Anthropic seemingly is one of the only profitable AI companies at the moment.
The AI "bubble" is hard to pop, at least for these big companies, which will just receive bailout after bailout from the government if shit hits the fan.
I don't see how Anthropic sponsoring Blender is an "attempt at again stealing IP"
People surprised by Anthropic getting in on Blender funding obviously never saw any of the Blender/ChatGPT integrations a few years ago. This has been coming for a long time.
https://www.youtube.com/watch?v=xhN7P7ENu4g
That was ancient times in LLM terms. I've seen demos that create whole scenes in a single prompt.
There are lots of non-coding use cases for LLMs that don't burn through compute but are still useful. Anthropic is starting to catch on, and it makes sense to focus there with the compute crunch.
These AI companies need to be kept away from Open Source projects. A sad day for Blender :(
They already have corporate sponsorships from Google, Meta, Nvidia, and other big companies. Anthropic is just joining the list. This is actually good for Blender.
Weird attitude when 3D artists LOVE AI powered denoising and upscaling. Praying for the day AI makes it so I never have to UV unwrap a mesh again.
The beef between artists and AI is clearly about genAI, not other kinds of AI.
Denoisers and upscalers don't make authorial decisions. This is a stupid comparison.
Wasn't this one called "machine learning" for denoising and upscaling? That's completely different from an LLM replacing your job (after being trained on your work without permission).
Yes, Claude is the AI doing my denoising. I keep running out of tokens with my 4k renders.
AI is a nebulous term. AI denoisers are not the same thing as an LLM or image gen model, the ire is directed at LLMs and not AI denoisers because they are completely different things.
This is not bad news.
If Blender doesn't grow AI capabilities, its utility in the future will be severely degraded.
If you haven't seen the latest 3D mesh, texturing, PBR, and retopo tools, they're getting extremely good.
This makes sense. Blender has had (non-LLM) generative features for a long time, and hooking LLMs into the Python API to generate art makes sense too (it's probably already been done, but having it sponsored is nice).
I was wondering the other day if AI could do tedious things like retopology and figuring out efficient UV unwrapping.
I was also wondering how it would do things like sculpting. That sounds expensive: either you send millions of polygons for the model to explore, which ruins the context window or doesn't even fit, or you're sending tons of screenshots?
This will be a big step up from using Python to generate shitty GIFs.
I love blender. They should get all the money they need.
So Claude will soon do 3D modelling?
I have mixed feelings about this. I guess they can use the money... but still. Data goes to Anthropic here. It will also buy influence in some ways, I am sure of that. We could see this with rubygems.org: when Shopify threatened to cut funding some months ago, chaos suddenly erupted. Money buys influence; it is easy to see how.
No data goes to Anthropic from Blender. At all. See also: https://fund.blender.org/funding-policy/
And Blender tries to get funding from many different donors so that no single one can have any sway over them. Anthropic, as disgusting as they are, are just one more donor. Epic, Nvidia, Google, CoreWeave are also patrons. I don't worry about that donation.
> "No data goes to Anthropic from Blender."
You know, for now...
It simply won't happen. There would be riots, mass exodus of sponsors, and forks that simply patch out any such "feature".
This is idiotic. Blender runs an open development process. It's all out in the open, and they would never do such a thing. Obviously you know nothing about the Blender developer community, and you're not qualified to speculate without even bothering to do the most basic research, so your posts are much worse than any hallucinating ai slop, just insulting the work and integrity of hard working dedicated people. Your histrionic conspiracy theory posts in this discussion are absolutely off the rails, detached from reality, and your ignorant mindless attacks on Blender are helping nobody but Autodesk.
I think you missed my point completely - I am not attacking Blender, I'm attacking Anthropic. You think Dario woke up this morning and just felt like doing charity? Or do you think there's an underlying business reason for getting into this space? If you believe they're doing it because they're lovely people, I'm afraid we'll disagree about that.
Yes, you most certainly are attacking Blender, accusing Blender developers of being in on a conspiracy to check code into Blender that sends data to Anthropic.
And your conspiracy theory about the Blender developers selling out to Anthropic and secretly checking in code (that anyone can see since it's public) that sends data to Anthropic is objectively idiotic.
Unity donated a lot of money to Blender too. Do you have any evidence that Blender sends data to Unity?
Can you point to the code in Blender that sends data to Unity? Or is your conspiracy theory that the official Blender builds contain secret spy code that is not checked into the repo? How many Blender developers are in on this conspiracy, and how have they kept it secret until now, now that you publicly announced your accusations that they intentionally spy on their users in exchange for donations from sponsors?
I think you'll find my original parent comment said "You know, for now..." - as in, they aren't doing it yet. But do you really think Dario cracked open his wallet for fun, without some later expectation of a return on investment? The Blender devs, willingly or not, are now frogs on the boil until the soulless ghouls over at Anthropic come calling.
They didn't enter in any obligations. It's a donation.
Nowhere is that mentioned - perhaps Dario loves making 3D doodles on his days off :).
Is this going to be another app that I have to make sure I opt-out of training for?
I hope this doesn't mean enshittification of Blender.
I'm so sick of it. I'm so fucking sick of it.
Ugh. I love Blender; it's the greatest software of all time, according to myself, and I absolutely hate this and am terrified of what it implies. If they just want name recognition, OK, fine, but my guess is Anthropic will want changes to Blender itself, and I find that totally unacceptable.
Ah well, the online artist community is unusually principled on matters like this, especially compared to here. If they start doing shady stuff, it will get forked and probably spell the end of the Blender Foundation, which would still be really bad, of course.
Sigh. Not a happy Tuesday.
I see we're now entering the "Sam Bankman-Fried" stage of buying goodwill.