If you export your data [0], all your Claude Design chats are in a design_chats directory along with the code, even if your account currently has no access to Claude Design. The export is JSON, but converting that into usable code is easily done, either manually or by asking any fairly modern LLM via OpenCode. Just did it myself; it works. I will say that I'd still prefer if they allowed API use of Claude Design. The way follow-up questions have been implemented has some niceties that I feel make it worth it for very narrow UX experimentation, but I can't justify a whole sub at the moment: for the first time I've been seeing regressions that make Opus unusable via Claude Code on the Max subscription, while the new pretrain in GPT-5.5 is very strong for very specific coding use cases. In fairness, though, its compaction and task adherence can be inferior to GPT-5.4, which did both better than any other model ever, so using both for their specific use cases is my go-to.
Not feeling like commenting on every statement regarding SaaS and expectations, but I will say that some commenters are mistaken, or not considering the law and your rights, when they just tell you it is your fault and (at least) imply the data is lost. It can't be; think about it. Otherwise any temporary subscription cancellation, payment-processing issue, bug on Anthropic's part, etc. would mean permanent data loss. That'd be less than ideal, not least because Anthropic has in the past had trouble processing payments from verifiably covered accounts.
Users in consumer-friendly jurisdictions have the right to export and access their data, including data not exposed via any frontend or API, if it is associated with their account. It doesn't matter whether they pay or not. Of course, manual backups are always preferable, and a provider could still suffer data loss, but as long as they have the data, at least in my neck of the woods they have to give it to you. As it should be.
To end: I generally try not to comment on or downvote others' comments outside of actual spam and bad faith, but if more than one comment was already helpful enough to tell OP that they should have exported/backed up, do we really need it repeated?
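For the JSON-to-readable-text step, here is a minimal sketch. It assumes nothing about the export's exact schema (the field names in Anthropic's export are not documented here, so this just harvests the long string leaves, which is where chat text and generated code tend to live):

```python
import json
from pathlib import Path

def walk_strings(node, out):
    """Recursively collect every string leaf from arbitrary JSON."""
    if isinstance(node, dict):
        for v in node.values():
            walk_strings(v, out)
    elif isinstance(node, list):
        for v in node:
            walk_strings(v, out)
    elif isinstance(node, str):
        out.append(node)

def dump_chat(path: Path) -> str:
    """Turn one exported chat file into readable text."""
    data = json.loads(path.read_text(encoding="utf-8"))
    chunks: list[str] = []
    walk_strings(data, chunks)
    # Short strings are usually ids/metadata; keep the substantial ones.
    return "\n\n".join(c for c in chunks if len(c) > 40)

if __name__ == "__main__":
    export_dir = Path("design_chats")
    if export_dir.is_dir():
        for f in sorted(export_dir.glob("*.json")):
            f.with_suffix(".txt").write_text(dump_chat(f), encoding="utf-8")
```

From there, handing the .txt (or the raw .json) to any coding agent to reconstruct the HTML is straightforward.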
thanks for letting me know this exists for claude designs as well.
yeah.. anyway it will be my coding agent that will be reading these. and if needed, it can show me what they look like.
in an ideal world, I know all these things should be in place, but I wasn't sure they had the bandwidth to implement them all before releasing these things into the wild. but i will use it to download my sessions.
as a dev, building the product is the fun part. implementing entitlements, payment gateways, rate limiting, usage calculation, billing, gdpr stuff, account creation, deletion, export: these are the boring parts. so I wasn't sure they would have implemented this part.
A lot of these things are made fast and loose, and unfortunately this is the reality of using the bleeding edge. Even Figma went through this kind of thing very early on.
To add something else to the discussion, however: I'd encourage people to skip Claude Design for another reason, the inherent limitations of LLMs for visual design. LLMs are blind, and spatial reasoning across layers of nested HTML/CSS is tremendously hard.
If you're early on, I'd recommend starting with diffusion first. GPT-Image-2 is phenomenal at UI design, and especially if you're just starting out it will let you align on a direction more rapidly than an LLM can. The difficulty will be converting from image to HTML, but you'll be able to explore different directions more cheaply and faster than you could with Claude Design.
I will note a bias disclaimer here - I quit Figma to work on my own diffusion-based UI design tool. Not promoting that here, but wanted to at least share my findings in this space.
What do you mean LLMs are blind? All frontier models are multimodal, which means they literally consume images as tokens. They can “see” exactly as well as they can “read”.
Also, GPT-Image-2 is not a diffusion model, it is based on Transformers, like other LLMs are.
I guess they do "see", but more like "see an explanation of the image", not "see" as in experience visually. They're really bad at details and precision when it comes to images, and don't understand things like visual hierarchy, affordances and other fundamental design concepts. Most of them are able to describe those things in words, but don't seem to fundamentally grasp them when asked to do UIs, even when you mention these things explicitly.
Try doing 100% vibe-coding with an agent and loosely specify what kind of application you want, and observe how the resulting UI and UX is a complete mess, unless you specify exactly how the UI and UX should work in practice.
If they actually had spatial understanding, together with being able to visually experience images, then they'd probably be able to build proper UI/UX from the get-go, but since they can only describe what those things are, you end up with the messes even the current SOTAs produce.
the models can accept images directly as tokens. not a description of an image, the actual image itself.
yes, the visual intelligence is limited, but they do actually have vision capabilities.
Yes, I agree, we're saying the same thing. I'm just trying to highlight that the "visual intelligence" really isn't up to par for anything stringent when it comes to UI and UX. Explained further here: https://news.ycombinator.com/item?id=48133641
> I guess they do "see" but more like "see an explanation of the image", not "see" as in experience visually.
Images are tokenized and fed to the exact same model, they can “visually inspect” images, eg “find the 2 differences between two images” and “where’s Waldo”-style things.
So your mental model that they see descriptions is inaccurate.
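A toy sketch of what "tokenized" means here, patterned loosely on ViT-style patch embedding (the 8x8 image and 4x4 patch size are illustrative, not any specific model's): the model receives one vector per patch, so anything finer than patch granularity must survive a lossy projection.

```python
# Toy 8x8 "image" as nested lists, with one bright "dot" at (1, 1).
N, P = 8, 4
img = [[0.0] * N for _ in range(N)]
img[1][1] = 1.0

def to_patch_tokens(image, patch):
    """ViT-style tokenization: slice the image into patch*patch tiles and
    flatten each tile into one vector. The model then sees this sequence
    of vectors (after a learned projection), never the raw pixel grid."""
    tokens = []
    for r in range(0, len(image), patch):
        for c in range(0, len(image), patch):
            tokens.append([image[r + i][c + j]
                           for i in range(patch) for j in range(patch)])
    return tokens

tokens = to_patch_tokens(img, P)
print(len(tokens), len(tokens[0]))  # 4 16: the whole 8x8 image is now 4 "tokens"
```

Whether that counts as "seeing" is the semantic argument in this subthread; mechanically, the image does reach the same transformer as the text.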
> Images are tokenized
Exactly, and here is where the fidelity of an image is lost. They don't "see" visually; they get a representation of the image via tokens, which is why I said they don't see but basically "see an explanation of the image". I don't mean a caption, but in the end they act on and work with tokens internally, not pixels or actual images.
Example from Grok and Claude, with a very simple test case. I made a white image with 7 dots and asked Claude and Grok to count the red dots. The filename is "8-red-dots.png", but the image actually has only 7 dots.
Because they don't receive the image itself, they receive "tokenized images" as you say, they don't seem to be able to see the number of red dots. ChatGPT correctly identified that there are only 7 dots, but only because it ended up using Python to actually count the pixels, it seems.
Original image + what the various LLMs responded: https://imgur.com/a/vh1tU6Y
Again, a very simple (and dumb) test, and I won't claim this is science, but once you start trying to use these vision models for precise and exact UI and UX work, you'll notice over and over how poor their fidelity and spatial awareness actually are when it comes to images.
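The pixel-counting approach ChatGPT reportedly fell back on can be sketched in a few lines; the grid stand-in below replaces real image decoding, but the point is the same: a flood fill over raw pixels is immune to a misleading filename.

```python
def count_dots(grid):
    """Count connected regions of marked pixels (4-neighbour flood fill).
    Operating on raw pixels, a misleading filename changes nothing."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    dots = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and (r, c) not in seen:
                dots += 1
                stack = [(r, c)]
                while stack:  # flood-fill this one dot
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    if not (0 <= y < rows and 0 <= x < cols) or not grid[y][x]:
                        continue
                    seen.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return dots

# 1 marks a "red" pixel on a white background.
grid = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 0],
    [0, 0, 1, 0, 1],
]
print(count_dots(grid))  # 4
```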
This is my experience too, but with all other aspects of the application. If you only loosely describe it, it comes out as a mess. You have to know what you're building to get the LLM to actually build something decent. I don't think this is purely a visual or design constraint.
When I'm using agents for programming, I can have an AGENTS.md outlining exactly what requirements, guidelines and constraints all the code needs to follow, and the agent (codex in my case) will pretty much nail that.
I've tried doing the same for design work, really outlining exactly how the UI and UX need to look and work, but for some reason it struggles a whole bunch with it, regardless of how clear I am. Maybe I'm just worse at explaining and describing what UI and UX I'm actually after, I suppose.
I once worked at a startup where the CEO was originally a designer. He once spent two days huddled with the main designer for the product, trying to pick exactly the right font for the product. I have no idea how you'd have that kind of discussion with an LLM.
But then, I would not spend more than five minutes on this decision, so I'm probably the wrong audience for this ;)
I used to work at a designer-heavy company doing frontend work; one of the founders could spot with the naked eye if you got the alignment of something wrong by 2-3 pixels during reviews.
The UI and UX of the product was amazing, and took some time to get used to actually delivering pixel-perfect designs across three different browsers, but fun times regardless :) Probably takes a certain individual to enjoy that sort of experience though.
Tokens are not a substitute for a numerical measurement.
Ask a LLM how much time has passed. Watch it hallucinate wildly.
Has anyone noticed that Opus has trouble building ascii diagrams (often leaves out spaces so lines are misaligned)?
LLMs are just one mechanical component. One might as well say "Ask your println how much time has passed". That is not a question that makes sense. As an example, I did not construct my agent specifically to answer your question, and when I saw your question I queried the agent. And it is correct. https://imgur.com/a/j8j7hL9
As semiquaver said, modern LLMs are multi-modal; they can reason in image-space and audio-space as well as in text-space. It is not a translate-then-operate kind of situation. Claude Design is not a raw LLM, nor an instruction-tuned LLM. It is an agent harness around an LLM that allows it to do certain things.
Ok? Your comment is in no way responsive to anything I said.
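The harness-around-an-LLM point, e.g. answering "how much time has passed", can be sketched as a minimal tool-dispatch loop. Everything here is hypothetical (the `CALL:` convention and tool names are made up, not Claude's actual protocol): the model emits a tool request, and the harness, not the model, reads the clock.

```python
import datetime

# Hypothetical harness: the model itself cannot know the time, but the
# harness can expose a clock as a tool and splice the result back into
# the conversation.
TOOLS = {
    "current_time": lambda: datetime.datetime.now(datetime.timezone.utc).isoformat(),
}

def run_turn(model_output: str) -> str:
    """If the model emitted a tool call like 'CALL:current_time', execute
    it in the harness; otherwise pass the model's text through."""
    if model_output.startswith("CALL:"):
        name = model_output[len("CALL:"):].strip()
        if name in TOOLS:
            return f"TOOL_RESULT[{name}]: {TOOLS[name]()}"
        return f"TOOL_ERROR: unknown tool {name!r}"
    return model_output

print(run_turn("Hello"))              # passes straight through
print(run_turn("CALL:current_time"))  # the harness, not the model, answers
```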
> Also, GPT-Image-2 is not a diffusion model, it is based on Transformers, like other LLMs are.
Where are you getting this from btw? AFAIK, OpenAI hasn't openly talked about what exactly is powering the Images 2.0 stuff, unless I missed something? I think they've said it's not a diffusion model, but I'm not sure they've said what they're doing instead, have they?
I believe it's an evolution of the technique used in GPT-Image-1 (or whatever they called that), which was derived from their work on making GPT-4o an "omni" model that can directly output images and audio in addition to text.
The 2024 GPT-4o launch post https://openai.com/index/hello-gpt-4o/ hints at how that works:
"With GPT‑4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network."
Yeah, that's my belief as well, but I haven't seen any concrete explanations of how it works, just the marketing/press releases, sadly.
Claude has been kicking ass at code, but I asked it to “sketch” a second floor with a stairway and bedrooms with large closets and it made … something that resembles something akin to not at all what I asked.
This has not been my experience. Claude artifacts at first, then Claude Design after it was released, have been excellent at design! The way I can steer the model, updating the design with different ideas and visions, even adopting different design systems like Material 3 or Apple's HIG, has been phenomenal.
It's also by far the best in my experience at a request like "it's 3:55 and I need a few slides on the topic of the Gettysburg Address for a 4PM meeting."
I wish it was more integrated into PowerPoint but it's still the best slide generator I've used.
Thank you so much for your suggestion regarding UI design. As this isn't my main expertise, I need some tool to depend on to ground my projects somehow. Even though Stitch by Google and Claude Design are not perfect, they give me a starting point, and then, after building the actual working project, I iterate until I like the look of it. That's how I'm using these right now. I can't even iterate in these design LLMs yet; their own UX is very clunky and not very friendly, or it's made more for the design folks.
But I will give GPT-Image-2 a try. Actually, a few months back I remember doing this kind of UX/UI research in the ChatGPT app itself, just asking it to generate what a certain app might look like, etc.
Please let me know your UI design tool. I want to try it out.
Or just use Google's Stitch, it integrates both code via Gemini and image UI generation via Nano Banana which I'd argue is even better than OpenAI's image models.
If you say the image models don't "see" you also have to say the text models don't "read": there's a meaningful case to be made for either claim but then you're left saying "they behave as if they see" or "they behave as if they read".
Yeah, I'm starting to be worried about Anthropic's security controls for customer information.
To say they'd have a firehose of sensitive info from customers would be a massive understatement. Hackers gaining access to that, especially for a non-trivial duration, would be a disaster.
No kidding. You can't even delete a design system, draft or otherwise. "Research Preview" is accurate: it can do some things (though every system I've tried building, it has resorted to the "hero text with key word in a different color" trope, however I vary the prompts), but there's a lot missing. And when you ask Claude Design how to delete a design system, it gives you an absolutely inaccurate, hallucinated answer, and when you say fine, here's the project ID, do it for me: "Sorry, can't, only you can".
The lovely irony of a bleeding edge AI company being afflicted by the most mundane problem of all software engineering—the goldfish attention span of human coders.
Have any of you tried their support bot? It's insane that a bleeding-edge AI company gets away with such an experience. The bot decides the issue is solved and closes the ticket. WTFFF.
similar to claude code, they need to revolutionise customer support. Maybe, from a ticket, if the agent decides it's a real and legitimate bug, it could go on and fix it.
>The lovely irony of a bleeding edge AI company being afflicted by the most mundane problem of all software engineering—the goldfish attention span of management
I was in the same boat, moving my max subscription to Codex instead of Claude, with a Claude Design project going. I was under the impression that Design was pro plans only, so I downloaded basically everything before cancelling.
It could be worth a quick $20 subscription just to grab your stuff, then cancelling. Trying to get support from either Claude or OpenAI seems pretty hopeless. Hopefully this post will get them to see you
So.. you unsubscribed from a SaaS and expected them not to purge your data? Why would that make sense?
Anthropic may be a bunch of skids but it sounds like they did the right thing here. Pretty much all SaaS applications, especially in B2B, are required by compliance to remove customer data within X amount of time at the end of the contractual relationship.
The standard across almost all services is to retain easy-to-retain data when someone leaves. It's just good business: you WANT them to come back.
The only example I can think of are the TV services: Netflix will erase your watched show list if you unsubscribe. But they are very purposefully doing it out of spite: they want to push you towards not unsubscribing at all (so they penalize it even at the cost of discouraging you from coming back ... because they know "subscription hopping" is a thing, and expect you'll come back anyway).
It's 100% a dick move when the TV services do it, but at least it (kind of) makes business sense for them to do it. For Claude it's just alienating their customers needlessly.
I guess it's not a termination, but a downgrade to the "free" tier. But that still makes sense, given Claude Design is limited "to Pro, Max, Team, and Enterprise plans". He's not on that plan anymore so.. what commercial reason could they possibly have to keep his data?
Google Workspace seems to halt access immediately[1] and purge data within 60d[2]. For comparison, Atlassian leaves you access for 15d, and purges data at 60d[3]. 365 gives you 90d[4] before purging.
This is a pretty regular thing across the industry.
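Treating those windows as data (the day counts are the ones cited above; verify against each vendor's current policy before relying on them):

```python
from datetime import date, timedelta

# Days until purge after the subscription ends, per the figures cited
# above; illustrative only.
PURGE_WINDOWS = {
    "Google Workspace": 60,
    "Atlassian": 60,        # access already ends after 15 days
    "Microsoft 365": 90,
}

def purge_date(cancelled_on: date, provider: str) -> date:
    """Last day your data is expected to survive at the provider."""
    return cancelled_on + timedelta(days=PURGE_WINDOWS[provider])

for provider in PURGE_WINDOWS:
    print(provider, purge_date(date(2025, 1, 1), provider))
```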
So if I subscribe again, do I get the previous data back, or will it start afresh?
I want to know whether I should subscribe again to get this data, or shouldn't bother.
You can talk about all these rules, but that was my data from when I was subscribed to their product. I'm not asking for access to generate more, just my past sessions.
Well, in an ideal scenario, you would have exported anything you wanted to keep, and you'd import it if you subscribe again.
How long would you expect them to keep your data for? Do you really expect them to pay storage costs for your data indefinitely just because you paid them $20 once upon a time?
And on the inverse side, would you really want your data to be compromised when they inevitably get breached just because you had a sub there once?
These are the reasons data retention policies exist.
but i still have my account with them. i haven't deleted it. and i was able to access chat sessions in the actual claude app, and obviously my own claude code sessions.
tbh, backups matter. but nobody would accept Word deleting your files when you cancel Office. somewhere along the way we stopped distinguishing backup from custody.
wait, they lock your old projects too after unsubscribing? that’s kinda wild. thanks for sharing this, i was actually planning to cancel my claude subscription too
I've lost access to plenty of Claude stuff without canceling anything. I am careful not to leave anything important in there and back up regularly.
It's funny because sometimes it will remember stuff that is lost and not be able to reference stuff that is clearly visible.
One area where I find ChatGPT superior (and this is just my own experience) is not losing things and also respecting project boundaries. Claude projects just seem to be a way to lose things faster, the model seems to be entirely unaware of projects as a concept.
I also encountered an issue with my credits. I was previously subscribed to the max plan, claimed credits, then downgraded to the pro plan and noticed I lost my credits. I didn't unsubscribe, just downgraded plans as I wasn't using claude enough to justify needing max.
yes, their "contract" is insane right now, with so many edge cases, giving a poor user experience. when they have so many users, these edge cases also compound. They should simplify things to give their engineers some peace.
Did you get a warning about your data being /dev/nulled, with an admonition to download whatever you wanted to keep before unsubscribing? If you did, well... you should'a heeded that warning and made backups. If you did not get a warning, I'd add that it would have been more customer-friendly for Anthropic to warn you about your data disappearing after unsubscription, but I still think you should have made sure you downloaded whatever data you wanted to keep before "throwing the key in the mailbox". Don't ever trust third parties to care for your data like you care for it; keep it somewhere you are sure you can get at no matter what.
>This is a first. I never lost access to any of my past sessions because I unsubscribed in any of the LLM apps.
It's not entirely unprecedented; I've seen these tactics in the Google ecosystem. Google Music: unsubscribing killed (kills?) access to see your playlists, which of course you only learn once it's done. Give them a credit card again and you can see and export them again. Magic!
I resubscribed for one month, exported everything, unsubscribed, and swore never to trust Google Music again. idk why they implement patterns like that, because sure, you extorted $10 in cash out of me, but it makes the brand toxic. There is no way that decision has a net positive future value. Hell, it even got them a pissed HN post years later.
When you lose access to your projects, does Anthropic acquire the intellectual property? It's a real issue when it's in a machine learning system, not passive storage like Github.
I don't think GitHub is a passive storage system anymore, lol. There was an opt-out mechanism we needed to check recently so they wouldn't use our code, or something like that.
There is no intellectual property in AI output, at least as far as legal rulings in the US currently stand. I guess you are asking whether they will use your projects once you unsubscribe. My cynical take: probably they will, but read through their fine print ;)
I have been using Claude Design + Claude Code, and results have been excellent. I have explicit clean-up instructions in Claude Code, and the handoff skill in Claude Design is pretty solid.
I've been on product launches many times, so can drive the design side appropriately and keep things focused. Has been a wonderful addition to my workflow.
As usual with any agent-driven tool - GIGO. If the human driving has no product experience and is blindly accepting designs, well, that's... a choice.
Aside from OP's post, there's another issue with Claude Design worth mentioning. Yes, it makes absolutely beautiful designs, stunningly so, but the actual code is not something a human could ever maintain. So you end up with an opaque blob: write-once, read-never, almost disposable code. This is bad, because code people aren't going to bother to read might contain vulnerabilities.
It's an extreme example of slop code: while LLMs normally produce code that ranges from somewhat okay to utter garbage, the web code Claude makes is awful. On the other hand, you get a single file (even if it is full of 20+ embedded SVGs, scripts, and other such things).
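One way to make such a single-file blob at least reviewable is to split out the embedded assets. A rough sketch (naive string scanning, my own approach, not anything Claude Design provides; it assumes blocks of the same tag are not nested):

```python
def extract_blocks(html: str, tag: str) -> list[str]:
    """Pull every <tag>...</tag> block out of a single-file page so the
    embedded assets can be reviewed (or versioned) separately."""
    open_tag, close_tag = f"<{tag}", f"</{tag}>"
    blocks, pos = [], 0
    while (start := html.find(open_tag, pos)) != -1:
        end = html.find(close_tag, start)
        if end == -1:
            break
        end += len(close_tag)
        blocks.append(html[start:end])
        pos = end
    return blocks

page = ('<html><body><svg viewBox="0 0 1 1"><rect/></svg>'
        '<p>hi</p><svg><circle/></svg><script>init()</script></body></html>')
print(len(extract_blocks(page, "svg")))     # 2 embedded SVGs
print(len(extract_blocks(page, "script")))  # 1 embedded script
```

Writing each block to its own file gives you something a reviewer can actually diff, which mitigates the read-never problem a little.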
I don't use the code directly, actually. It is just for me to understand what the app looks like initially, as a starting point. Previously I used Stitch by Google, and even then it was just to explore product design in the initial stages, to ground myself and see what the product looks like end to end. Also, mostly I will be building in React, so the HTML code isn't very useful; I would rather share the screenshot directly than the HTML code during development.
I actually find Claude models to have superior visual reasoning in their multimodal LLMs (I'm not talking about image-generation models). So I just share the picture to let it understand the layout, go from there, and iterate until I like the final look of it.
Have you actually gotten it to build stunning designs? From what I’ve seen it still falls apart very quickly. They can do a decent job at building blocks but usually not putting them together in a cohesive way in my experience.
Well subjectively I've been extremely happy with what it built for me. This was all made with claude design: www.warpgate.io. It's the website for my NAT traversal library. Even has different theme for it, kek.
I didn't know it was expected behaviour until now. Will keep it in mind, but so far the big companies let us browse our past chats even after unsubscribing. Praying this will not become the norm now.
I'm sorry, that's a horrible situation.
Coming from IT companies myself, I hope they will see your issue at some point and act on it. Unfortunately, with all this fast-paced development and the race after the new shiny thing to win users, companies may sacrifice quality. Only users can make an impact here; as long as we keep buying the new shiny stuff, this will keep happening. Investment in quality is expensive.
It is still there and you may get it easily.
[0] https://claude.ai/settings/data-privacy-controls
Thank you for this suggestion, just downloaded my data and the sessions are in there.
The 2024 GPT-4o launch post https://openai.com/index/hello-gpt-4o/ hints about how that works:
"With GPT‑4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network."
Yeah, that's my belief as well, but haven't seen any concrete explanations about how it works, just the marketing/press releases sadly.
Claude has been kicking ass at code, but I asked it to “sketch” a second floor with a stairway and bedrooms with large closets, and it made… something that resembled nothing like what I asked for.
This has not been my experience. Claude Artifacts at first, and then Claude Design after it was released, have been excellent at design! The way I can steer the model to update the design with different ideas and visions, even adopting design systems like Material 3 or Apple's HIG, has been phenomenal.
It's also by far the best in my experience at a request like "it's 3:55 and I need a few slides on the topic of the Gettysburg Address for a 4PM meeting."
I wish it was more integrated into PowerPoint but it's still the best slide generator I've used.
I found gpt5.5 great at that too
Thank you so much for your suggestion regarding UI design. As this is not my main expertise, I need some tool to depend on to ground my projects somehow. Even though Stitch by Google and Claude Design are not perfect, they give me a starting point. Then, after building the actual working project, I iterate until I like the look of it. That's how I'm using these right now. I can't even iterate within these design LLMs yet; their own UX is very clunky and not very friendly, or it's made more for the design folks.
But I will give GPT-Image-2 a try. A few months back I remember doing this kind of UX/UI research in the ChatGPT app itself, just asking it to generate what a certain app might look like, and so on.
Please let me know your UI design tool. I want to try it out.
Or just use Google's Stitch, it integrates both code via Gemini and image UI generation via Nano Banana which I'd argue is even better than OpenAI's image models.
It's really not; gpt-image-2 is #1 by over 100 Elo.
If you say the image models don't "see" you also have to say the text models don't "read": there's a meaningful case to be made for either claim but then you're left saying "they behave as if they see" or "they behave as if they read".
> A lot of these things are made fast and loose
Yeah, I'm starting to be worried about Anthropic's security controls for customer information.
To say they'd have a firehose of sensitive info from customers would be a massive understatement. Hackers gaining access to that, especially for a non-trivial duration, would be a disaster.
Multimodal LLMs are not blind.
Claude design in my experience is very, very solid.
I’ve only used it for fairly basic stuff, things that are very well represented in the training data. But for that it has made me happy.
Huh, I never thought of asking an image model to prototype a UI. It's a good idea though, I will try it next time.
> A lot of these things are made fast and loose
No kidding - you can't even delete a design system, draft or otherwise. "Research Preview" is accurate. It can do some things, though every design system I've tried building has resorted to the "hero text with a key word in a different color" trope, however I vary the prompts. But there's a lot missing: ask Claude Design how to delete a design system and it gives you an absolutely inaccurate, hallucinated answer, and when you say fine, here's the project ID, do it for me, you get "Sorry, can't, only you can."
> A lot of these things are made fast and loose, and unfortunately this is the reality of using the bleeding edge.
Anthropic lazily calls everything a preview and then pushes it hard on everyone. That feels dishonest.
I cannot help but think the Claude team is busy adding gimmicky side features instead of doing 'real' RSI and bugfixing.
The lovely irony of a bleeding edge AI company being afflicted by the most mundane problem of all software engineering—the goldfish attention span of human coders.
Have any of you tried their support bot? It's insane that a bleeding edge AI company gets away with such an experience. The bot decides the issue is solved and closes the ticket. WTFFF.
Similar to Claude Code, they need to revolutionise customer support. Maybe, starting from a ticket, if the agent decides it's a real and legitimate bug, it will go on and fix it.
>The lovely irony of a bleeding edge AI company being afflicted by the most mundane problem of all software engineering—the goldfish attention span of management
FTFY
Shiny thing syndrome at its finest.
I don't think the product people and the RSI-adjacent people are the same people.
This is actually a form of AI psychosis.
It's really hard not to especially if you enjoy building.
I was in the same boat, moving my max subscription to Codex instead of Claude, with a Claude Design project going. I was under the impression that Design was pro plans only, so I downloaded basically everything before cancelling.
It could be worth a quick $20 subscription just to grab your stuff, then cancelling. Trying to get support from either Claude or OpenAI seems pretty hopeless. Hopefully this post will get them to see you
So.. you unsubscribed from a SaaS and expected them not to purge your data? Why would that make sense?
Anthropic may be a bunch of skids but it sounds like they did the right thing here. Pretty much all SaaS applications, especially in B2B, are required by compliance to remove customer data within X amount of time at the end of the contractual relationship.
The standard across almost all services is to retain easy-to-retain data when someone leaves. It's just good business: you WANT them to come back.
The only example I can think of are the TV services: Netflix will erase your watched show list if you unsubscribe. But they are very purposefully doing it out of spite: they want to push you towards not unsubscribing at all (so they penalize it even at the cost of discouraging you from coming back ... because they know "subscription hopping" is a thing, and expect you'll come back anyway).
It's 100% a dick move when the TV services do it, but at least it (kind of) makes business sense for them to do it. For Claude it's just alienating their customers needlessly.
You get two years of 'free' (readonly) storage if you unsubscribe from google, it's very unusual to just nuke all access immediately.
> are required by compliance to remove customer data within X amount of time at the end of the contractual relationship.
that's a very bullshit justification, we're not talking about the 'delete account' button - especially since claude has a free tier.
I guess it's not a termination, but a downgrade to the "free" tier. But that still makes sense, given Claude Design is limited "to Pro, Max, Team, and Enterprise plans". He's not on that plan anymore so.. what commercial reason could they possibly have to keep his data?
Google Workspace seems to halt access immediately[1] and purge data within 60d[2]. For comparison, Atlassian leaves you access for 15d, and purges data at 60d[3]. 365 gives you 90d[4] before purging.
This is a pretty regular thing across the industry.
[1] https://knowledge.workspace.google.com/admin/billing/cancel-...
[2] https://support.google.com/a/thread/345697828/recovering-dat...
[3] https://support.atlassian.com/security-and-access-policies/d...
[4] https://learn.microsoft.com/en-us/compliance/assurance/assur...
So if I subscribe again, should I expect to have the previous data, or will it start afresh?
I want to know if I should subscribe again to get this data or shouldn't bother.
You can talk about all these rules, but that was my data, created while I was subscribed to their product. I'm not asking for access to generate more, just my past sessions.
Well, in an ideal scenario, you would have exported anything you wanted to keep, and you'd import it if you subscribe again.
How long would you expect them to keep your data for? Do you really expect them to pay storage costs for your data indefinitely just because you paid them $20 once upon a time?
And on the inverse side, would you really want your data to be compromised when they inevitably get breached just because you had a sub there once?
These are the reasons data retention policies exist.
But I still have my account with them; I haven't deleted it. And I was able to access chat sessions in the actual Claude app, and obviously my own Claude Code sessions.
TBH, backups matter. But nobody would accept Word deleting your files when you cancel Office. Somewhere along the way we stopped distinguishing backup from custody.
Isn't that exactly why MS ties O365 to OneDrive like they do?
Backup data that’s important to you.
It’s pretty outrageous to lock out all your history just for canceling the subscription.
wait, they lock your old projects too after unsubscribing? that’s kinda wild. thanks for sharing this, i was actually planning to cancel my claude subscription too
I've lost access to plenty of Claude stuff without canceling anything. I am careful not to leave anything important in there and back up regularly.
It's funny because sometimes it will remember stuff that is lost and not be able to reference stuff that is clearly visible.
One area where I find ChatGPT superior (and this is just my own experience) is not losing things and also respecting project boundaries. Claude projects just seem to be a way to lose things faster, the model seems to be entirely unaware of projects as a concept.
Who ever thought robots designing human interfaces would be a fruitful endeavor?
I hear you lose access to Claude Design the minute you cancel your subscription.
I also encountered an issue with my credits. I was previously subscribed to the max plan, claimed credits, then downgraded to the pro plan and noticed I lost my credits. I didn't unsubscribe, just downgraded plans as I wasn't using claude enough to justify needing max.
Yes, their "contract" is insane right now, with so many edge cases giving a poor user experience. With so many users, these edge cases compound. They should simplify things to give their engineers some peace.
Did you get a warning about your data being /dev/null'd, with an admonition to download whatever you wanted to keep before unsubscribing? If you did, well... should'a heeded that warning and made backups, shouldn't you? If you did not get a warning, I'd say it would have been more customer-friendly for Anthropic to warn you that your data disappears after unsubscription, but I still think you should have made sure you downloaded whatever you wanted to keep before "throwing the key in the mailbox". Don't ever trust third parties to care for your data like you care for it; keep it somewhere you are sure you can get at no matter what.
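And if you do still have an export (or grab one via a quick resubscribe), pulling usable code back out of the chat JSON is a few lines of scripting. A minimal sketch, assuming (hypothetically - the export format isn't documented here) that each .json file under design_chats holds a list of messages with a "text" field containing fenced code blocks:

```python
import json
import re
from pathlib import Path

# Matches a fenced code block (optionally language-tagged) and captures its body.
FENCE = re.compile(r"```[\w-]*\n(.*?)```", re.DOTALL)

def extract_code_blocks(export_dir: str) -> list[str]:
    """Collect every fenced code block from all chat .json files in a directory."""
    blocks = []
    for path in sorted(Path(export_dir).glob("*.json")):
        messages = json.loads(path.read_text(encoding="utf-8"))
        for msg in messages:
            blocks.extend(m.group(1) for m in FENCE.finditer(msg.get("text", "")))
    return blocks
```

Adjust the keys to whatever the real export actually contains; the point is just that the JSON-to-code step is trivial, not that these field names are right.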
>This is a first. I never lost access to any of my past sessions because I unsubscribed in any of the LLM apps.
It's not entirely unprecedented; I've seen these tactics in the Google ecosystem. Google Music: unsubscribing killed (kills?) access to your playlists, which of course you only learn once it's done. Give them a credit card again and you can see and export them again. Magic!
I resubscribed for one month, exported it, unsubscribed, and swore never to trust Google Music again. I don't know why they implement patterns like that: sure, you extorted $10 in cash out of me, but it makes the brand toxic. There is no way that decision has a net positive future value. Hell, it even got them a pissed-off HN post years later.
When you lose access to your projects, does Anthropic acquire the intellectual property? It's a real issue when it's in a machine learning system, not passive storage like Github.
I don't think GitHub is a passive storage system anymore, lol. There was an opt-out mechanism we needed to check recently to keep them from using our code, or something like that.
There is no intellectual property in ai output, at least as far as legal rulings in the US currently stand. I guess you are asking whether they will use them once you unsubscribe. My cynical take would be probably they will, but read through their fine print ;)
I have been using Claude Design + Claude Code, and results have been excellent. I have explicit clean-up instructions in Claude Code, and the handoff skill in Claude Design is pretty solid.
I've been on product launches many times, so can drive the design side appropriately and keep things focused. Has been a wonderful addition to my workflow.
As usual with any agent-driven tool - GIGO. If the human driving has no product experience and is blindly accepting designs, well, that's... a choice.
Did you read the post? They aren't giving me access to my projects, because I unsubscribed from my Max subscription to try out Codex.
And Claude occasionally bans your account, causing record loss too...
Looks like I need to do a daily backup of the artifacts generated.
What
Aside from OP's post, there's another issue with Claude Design worth mentioning. Yes, it makes absolutely beautiful designs, stunningly so, but the actual code is not something a human could ever maintain. So you end up with an opaque blob: write-once, read-never, almost disposable code. That's bad, because code people aren't going to bother reading might contain vulnerabilities.
It's an extreme example of slop code: while LLMs normally produce code that ranges from somewhat-okay to utter garbage, the web code Claude makes is awful. On the other hand, you get a single file (even if it is full of 20+ embedded SVGs, JavaScript, and other such things).
I don't use the code directly, actually. It's just for me to understand what the app initially looks like, as a starting point. Previously I used Stitch by Google, and even then it was just to explore product design in the initial stages: to ground myself and see what the product looks like end to end. Also, I'll mostly be building in React, so the HTML code isn't very useful; during development I'd rather share the screenshot directly than the HTML code.
I actually find Claude models to have superior visual reasoning in their multimodal LLMs (I'm not talking about image generation models). So I just share the picture to let it understand the layout, go from there, and iterate until I like the final look of it.
Have you actually gotten it to build stunning designs? From what I’ve seen it still falls apart very quickly. They can do a decent job at building blocks but usually not putting them together in a cohesive way in my experience.
Well subjectively I've been extremely happy with what it built for me. This was all made with claude design: www.warpgate.io. It's the website for my NAT traversal library. Even has different theme for it, kek.
Sorry but that one is on you. This sounds like expected behavior and I wouldn't blame any company for doing that.
I didn't know it's expected behaviour until now. I'll keep it in mind, but so far the big companies have let us browse our past chats, even after unsubscribing. Praying this won't become the norm now.
And AI hypers suggest to build your whole career/identity on this shit. Already foresee "skill issue", "well you should've x, y, z, obviously", etc.
People often build their career skills on proprietary tech. Photoshop, Figma, Java, AWS architects
Just because people do it does not mean that it is a good idea.
Are you saying that building a career with Adobe Flash wasn't a good idea?
I can still function if any of those are down or gone, unlike LLM addicts.
I'm sorry, that's a horrible situation. Coming from IT companies myself, I hope they will most likely receive your issue at some point and act on it. Unfortunately, with all that fast-paced development and the race after the new shiny thing to win users, companies might sacrifice quality. Only users can make an impact here: while we keep buying new shiny stuff, these things will happen. Investment in quality is expensive.