Who says it sucks at front end? Unlike Stack Overflow, AI does a great job of "center a div." I tend to like working from reference documentation, which is great for Python and Java but challenging for CSS, where you have to navigate roughly 50 documents that relate to each other in complex ways to find answers.
I don't give it 100% responsibility for front-end tasks, but working together with AI I feel like I am really in control of CSS in a way I haven't been before. If I am using something like MUI, it also tends to do really well at answering questions and making layouts.
Thing is, I don't treat AI as an army of 20 slaves that will get "shit" done while I sleep, but rather as a coding buddy. I very much anthropomorphize it, with lots of "thank you" and "that's great!" and "does this make sense?", "do you have any questions for me?" and "how would you go about that?", and if it makes me a prototype of something I will ask pointed questions about how it works, ask it to change things, change the code manually a bit to make it my own, and frequently open up a library like MUI in another IDE window and ask Junie "how do I?" and "how does it work when I set prop B?"
It doesn't 10x my speed and I think the main dividend from using it for me is quality, not compressed schedule, because I will use the speed to do more experiments and get to the bottom of things. Another benefit is that it helps me manage my emotional energy, like in the morning it might be hard for me to get started and a few low-effort spikes are great to warm me up.
CSS has definitely become a breeze to work with since LLMs have become a thing. Conceptually it's very "memorize how a billion possible combinations of obscure parameters interact with one another under various conditions" kind of setup so it's a perfect fit for machines and a terrible fit for humans.
The main limitation I think is that they're blind as a bat and don't understand how things stand visually and render in the end. Even the best VLMs are still complete trash and can't even tell if two lines intersect. Slapping on an encoder post training doesn't do anything to help with visual understanding, it just adds some generic features the text model can react to.
I'll grant that. A lot of times I want to give it a screenshot and say "here is what is wrong" and this is usually useless.
I will say though that multimodal capability varies between models. Like if I show Copilot a picture of a flower and ask for an id it is always wrong, often spectacularly so. If I show them to Google Lens the accuracy is good. Overall I wouldn't try anything multimodal with Copilot.
For that matter I am finding these days that Google's AI mode outperforms Copilot and Junie at many coding questions. Like faced with a Vite problem, Copilot will write a several-line Vite plugin that doesn't work, Google says "use the vite-ignore" attribute.
I still struggle with the design, but once that's locked in getting it to implement things is pretty straightforward. I do have to fight the AI a bit to make sure things are simple and clean, but it's pretty good at that with the right hand-holding.
The design is still a problem though, precisely because I am not a designer. I don't know what's actually good, I only know what's good enough for me. I can't tell the difference between "this is actually good" and "this is vibe-designed slop" but I have enough experience to at least make sure the implementation is robust.
Ngl I’m reading this article after having used ai to build a beautiful front end that is pixel perfect.
Yes, AI can’t see; it only understands numbers. So tell it to use ImageMagick to compare the screenshot to the actual mockup: tell it to get less than 5% difference, and don’t use more than 20% blur. Thank me later.
I built a whole website in like 2 days with this technique.
Everyone seems to have trouble telling ai how to check its work and that’s the real problem imho.
Truly if you took the best dev in the world and had them write 1000 lines of code without stopping to check the result they would also get it wrong. And the machine is only made in a likeness of our image.
PS. You think Christian god was also pissed at how much we lie? :)
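A minimal sketch of that ImageMagick check, for the curious. Everything here is an illustrative stand-in — the file names, the `0x2` blur radius, and the threshold are not the commenter's actual settings — and it assumes ImageMagick 7's `magick` binary is on the PATH:

```python
import os
import re
import subprocess
import tempfile

def parse_rmse(compare_stderr):
    """ImageMagick's `compare` prints e.g. '6543.2 (0.0998)' to stderr;
    the parenthesized value is the RMSE normalized to the 0..1 range."""
    return float(re.search(r"\(([\d.eE+-]+)\)", compare_stderr).group(1))

def visual_diff_percent(mockup, screenshot, blur="0x2"):
    """Blur both images to forgive anti-aliasing noise, then score the
    pair with `compare -metric RMSE`. Returns a 0..100 percentage."""
    with tempfile.TemporaryDirectory() as tmp:
        a = os.path.join(tmp, "a.png")
        b = os.path.join(tmp, "b.png")
        subprocess.run(["magick", mockup, "-blur", blur, a], check=True)
        subprocess.run(["magick", screenshot, "-blur", blur, b], check=True)
        # `compare` exits non-zero when the images differ, so no check=True;
        # the metric itself is written to stderr.
        proc = subprocess.run(
            ["magick", "compare", "-metric", "RMSE", a, b, "null:"],
            capture_output=True, text=True)
    return parse_rmse(proc.stderr) * 100
```

An agent can then loop: render, screenshot, call `visual_diff_percent`, and keep adjusting until the score drops below its 5% budget.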
It's hard to interpret comments like this because we all have different standards and use cases. So it would really help if you could link to it. Even in a roundabout way if you want to avoid the impression of self-promotion.
What’s the point in saying you built something beautiful and not showing it?
Share it. I used Claude earlier to test out its design capabilities and what I got as output was flat and tasteless.
Software developers have been calling their stuff "beautiful" for years now. It's bullshit. Almost none of it is beautiful. They just mean it looks like whatever is trendy at the time.
The last time I tried to make AI build a drag-and-drop UI, it failed miserably. Things wouldn't line up, or didn't work at all. Any tips for that?
Ask it to take control of a browser using something like Playwright and use the UI itself like an end user would and evaluate whether it is a good experience.
> Ngl I’m reading this article after having used ai to build a beautiful front end that is pixel perfect.
Was about to say the same thing
Yes can you share the front end that you created using this technique?
If you are really good at something, you'll find AI sucks at everything.
I think this is correct: it’s mediocre at a lot of things. It’s only 10x when you don’t know what you’re doing or are doing something simple.
It's also more often than not good enough, which for a specialist is bad, and for most everyone else is absolutely sufficient.
This doesn't make sense. I think you mean "If you are really good at something, you'll find AI might not be as good as the something you are really good at"
> If you are really good at something, you'll find AI sucks at everything.
Nah, just at that something :-)
I think the point is, there's always someone good at what you are evaluating. Anyone with expertise in a given domain will recognize how badly it sucks in that domain.
Don't get me wrong, AI can definitely be used as a tool by someone who knows what they're doing to avoid boilerplate. But anyone using it in a domain they aren't already an expert in will unknowingly accept AI f-ups.
I can't even get a good essay out of it; I'm still writing each word myself.
LLMs are just replacing consultants as the #1 generator of sloppy code.
The majority of humans are average, as is the training set
They use a curated training set.
Before I switched over to a career in tech, I made my living from music - playing live, session work, etc.
Honestly, I'm probably one of the biggest skeptics when it comes to GenAI - but at least for music, the recent models (as in the past year) do not suck. They are actually really, really good for what it is.
I have yet to hear anything truly original produced by those models. They seem to converge to the mean, and end up sounding very commercial, very average sounding - but in the sense of average "professional music". Suno can generate music which would have taken real people years to learn, thousands of dollars of equipment to make / produce, and pretty much ready for airplay - most listeners will not bat an eye.
Hell, these "AI artists" have been booked to festivals, since people can't hear the difference, and are enjoying the music.
I figure it will go the same way in other fields. The average consumer loses track of what's human made and what's AI made, and frankly won't care. The people "left behind" are the artists, craftspeople, etc. that are frustrated it came to this point.
Rather than an existential threat, I could see it becoming its own genre rather than infecting every other genre - when in the future people collectively realise it's kinda bad but has its place as an almost retro aesthetic.
Our idea of nostalgia was not that long ago. Also it could be generated on open weight local copyright free models that are super efficient in the future :P
There have been plenty of those “is it AI or a real person?” music tests on the street that you can find on YouTube. Almost no one knows which one is AI. There’s nothing there that lets you put them in different buckets.
People genuinely believe that a "trust me bro" system of denoting use of AI is a viable approach to the problem.
I mean, I go to gigs of people I like; it's not hard to work out if someone is real if they're on stage or meeting up with fans afterwards.
> They seem to converge to the mean
I think that was the point being made; if you're looking at it from the perspective of being really good at something, its tendency towards an averaged result is substandard.
I think this probably says more about music in general and the long tail of people who think good enough is just spectacular, than to the brilliance of LLMs. Most music, just like most art, isn’t particularly original. It’s a shocker, I know, but there it is. Doesn’t mean it’s bad, just not particularly original.
Copying something that exists isn’t particularly difficult. It may require immense skill and incredible dexterity in the case of some musical instruments, but it doesn’t really require much more than time, patience and the ability to follow instructions. The blueprint already exists. With LLMs we now have the ability to skip the time and patience parts of the equation, we can produce mediocrity more or less instantly.
I don’t see this as particularly different from what happened at the turn of the last century and beyond, with machines being able to sew faster, carve wood and metals at a higher pace and precision, moving folks and goods between geographical points faster than ever before, etc. etc. It’s not much different from the IKEAs of the world making mediocre copies of brilliant designs, making fortunes selling to the large masses that think good enough is just great. Because honestly man, most of the time it probably is.
I’m not surprised people go to concerts to hear a recording made by an LLM either. People have been going to see DJs sling records for decades. It’s not the music, or the artist, it’s the community. Beyoncé is an amazing singer, but people don’t necessarily come to her shows to see just her, they come to see everyone else. They might say they want to see her, but they already have a thousand times in tickelitock and myfacespacebookgrams. They come to feel connected to something, to experience community.
LLMs are incredibly good at churning out stuff. Good stuff, bad stuff, just a ton of stuff. Nothing original but that’s ok, most things pre-LLMs weren’t either. We just have more of it now, and fewer trees. The creatives that are able to harness these tools will be able to do more with less. (Ostensibly at least, until the VC subsidies… subside.) Because they are creative they might be able to form an original idea and string together enough mediocrity to realize it. They’ll probably get drowned out in a sea of mediocre copies in the end, but that’s just the same as it always was. It’s just faster now.
The platform owners and hardware manufacturers will remain king until the technology can run on my TI calculator, maybe we’ll get there before the VC money runs out. No wonder Nvidia’s been killing it. Creativity and originality will return once this bubble bursts I’m sure, the world has this amazing ability to correct itself, even if violently so at times. Or we all die perhaps. Either way, all we can do I suppose is ride this wave of mediocrity into the sunset. :o)
No you don't understand, AI is VERY BAD at front-end and CSS to the point you cannot use anything.
It's not passable even slightly.
Everybody with experience knows that FE has always been "harder" than BE - but with BE the stakes are higher, since it's the business. FE is often "just UI," and despite that being very important too, you can throw it away and start over a lot more easily with a UI than you can with a BE platform.
I digress, AI sucks fucking dick at UI.
What is an example UI that AI would fail to create?
Google Stitch is pretty awesome at front end layouts.
After years of writing native code for mobile apps, I'm using Flutter and finding that, if you do things step-wise and check in intermediate results so you can easily roll back failed experiments, agent-assisted coding can accelerate your front-end coding substantially. You can deliver more polished results instead of obviously demo-grade visuals that need refinement, and that makes it easier to communicate with your non-coder colleagues.
If AI really sucked at front end I'd have a job right now.
That assumes all companies care about providing a good front end experience. Many do not. Many are actively hostile to their users.
AI is much better at front-end than me, it has really enabled me to build visual apps as a normally backend/ML guy.
Does this website run at 10fps for anyone else? I'm on a mac M4 w/ safari. Really doesn't help the author's point.
Wow, it's got some issues on Chrome (dropped frames on scroll) but Safari is another level. Selecting text takes time. Looking at Activity Monitor, I see "Safari Graphics and Media" using 200% of my CPU even at rest.
A quick profile on Safari shows some layout recalc happening regularly, but surely that shouldn't cause this bad of perf...
The last time I found something like this, it was because of 100's of box-shadows.
Edit: Sure enough, that cures Safari - it's a combination of box-shadows and gradients.
Edit 2: Ah, they're using shadow DOM for the img reflection, so we can't affect it. Good gravy, is the shadow DOM stuff overwrought - it's 87 elements all told, just for one img.
I thought it was just me. I'm running a M2 MacBook Pro and scrolling down the article on Safari is quite stuttery.
Dunno. It’s really good with Preact + Tailwind. And I have to say that I think most problems can be solved this way and don’t require a special one-of-a-kind UI. In fact, the fewer special UIs I see, the better. I prefer standardized patterns unless they truly don’t fit a domain.
AI ain't gonna perform at the level of a top front-end designer, but it will get you halfway there at significantly less cost.
Exactly, it will do a decent job with designs using an established component library or design system. For most/many sites and web apps it will be better and faster than trying to design from scratch. There will continue to be a place for highly custom, unique designs but most smaller sites don't need to start there.
Aside from the obvious "AI can't see" criticism, AI sucks at frontend because frontend sucks. Why does frontend suck? Churn.
To quote the article:
1. "It trained on ancient garbage", which is the byproduct of massive churn - and this attitude leads to even more churn
2. "It doesn't know WHY we do things" because we don't either... even the paradigms used in frontend dev have needlessly churned
My fix? I switched from React/Next to Vue/Nuxt. The React ecosystem is by far the worst offender.
What about plain JS without React or another framework? I don’t do much front-end these days, but I’d love to toss out as much front-end complexity as possible.
Design is an interesting beast.
Good design is not always logical. Color theory, if followed mechanically, results in pretty bad experiences. And interestingly, good design can't always be explained in natural language.
Main thing is, it's very hard to get AI to have taste, because taste is not always statistically explainable.
The best I've gotten to is to have it use something like shadcn (or another well-documented package that's part of its training) and make sure it does two things: only runs the commands to create components, and doesn't change any stock components or introduce any Tailwind classes for colors and such. Also make it ensure that it maintains the global CSS.
This doesn't make the design look much better than what it is out of the box, but it doesn't turn it into something terrible. Left unprompted on these things, it ends up mixing fonts that it has absolutely no idea look good together or not, bringing serif fonts into body text, and mixing and matching colors that would have looked really, really good in 2005 but just don't work any more.
I thought reinforcement learning with human feedback was meant to get that quantification of "taste"
My first instinct reading an article (especially one about LLMs) is to scroll down to see the structure...
Anyway.
Do people get the impression that LLMs are worse at frontend than not? I'd think it's the same as with other LLM uses: you benefit from having a good understanding of what you're trying to do, and it's probably decent for making a prototype quickly.
Kimi k2.5 has been so far the best model for frontend. At least from my experience so far
Humm... better than Opus 4.6 or 5.4?
What are you using for the frontend? React component libraries?
Sure, but most companies don't seem to value Front End
Really? Front end is the customer-facing part of the web site. It's also the part of the stack that non-technical people, including management, have opinions on.
Or do you mean something else?
They might have opinions about it, but look at the pay for frontend engineers at the same company. It's not uncommon to see the same seniority be 20% lower than a backend role.
AI is great at front end. Scroll based animations are the devil and these "boring" designs it defaults to are (more often than not) super intuitive. Sure, some design quirks it'll guess are annoying, but have you seen the web?
I'm a backend dev and I'm always hearing about how LLMs are dramatically better at frontend because of much more available training data etc. Maybe my perspective isn't as skewed as I've been led to believe and LLMs need close supervision and rework of their output there too.
I would trust an LLM with backend much more than front-end, especially if we're talking a monolith with a good type system, ideally compiled. When I say trust, I mean it's probably not going to break the user-facing API contract, even if internally it's a mess. If you let it do front-end blind, it will almost certainly embarrass you if you care at all about user experience.
If you do backend blind, it will also almost certainly embarrass you. I’ve never had an experience beyond the most basic crud app where I didn’t have to somehow use my engineering experience to dig it out of a hole.
Works mostly fine for me on Rust backends. As long as I'm willing to accept tight contracts at the edges with spaghetti in the middle, or otherwise gate approval for everything it does.
If I want good abstractions, sure, I can set up approvals and babysit it with reprompting, because it will do stupid things that an experienced engineer wouldn't. But the spaghetti also works in the sense that it takes the input types and largely correctly maps them to the output types.
That doesn't embarrass me with customers because they never see the internals. On front-end, obviously they will see and experience whatever abomination it cooks up directly.
One thing that helps with #2 ('It cannot see') -- Try playwright-cli. Your agent can use it to inspect the DOM, see what styles are applied to elements, simulate clicks, etc.
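To sketch what that feedback loop can look like in practice (tool names are real, but the URL, filenames, and thresholds are illustrative): the agent screenshots the page, then diffs it against the design mockup, iterating until the pixel difference is acceptably small.

```shell
# Screenshot the page under test with Playwright's CLI.
npx playwright screenshot --viewport-size "1280,720" http://localhost:3000 shot.png

# Diff it against the mockup with ImageMagick.
# `-metric AE` prints the count of differing pixels to stderr;
# `-fuzz 5%` tolerates small anti-aliasing/color differences.
magick compare -metric AE -fuzz 5% shot.png mockup.png diff.png
```

The agent can read the pixel count, look at `diff.png` to localize the mismatch, and re-prompt itself until the number drops below whatever threshold you set.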
Imo front end is what it’s best at.
Everything is nuanced and generalizations help no one. There are absolutely frontend apps where AI straight up crushes it. Sure, these may be less novel apps, but most of what people work on is a CRUD-esque interface.
I'm wondering what would be a specific example that AI would fail to create front-end wise, since I've been having quite good experiences with it.
The author of the post is known for some pretty fancy CSS wizardry. I’m guessing AI is not great on some very specific, advanced CSS use cases where there isn’t much prior work. But again this is an edge case compared to what the vast majority of us are doing
But even then, what is a fancy CSS wizardry that AI couldn't do for instance, but would be trivial for the author?
Or is it that AI is not as creative?
I'd say UI is mostly a 2D tweaking + state management job, they don't exactly fit in a seq2seq style.
This is something I talk about with friends: how AI does front-end is completely different from how humans do it. Humans can select colors and themes based on their own criteria, while AI only generates what it has learned, as the machine it is. That's not bad, but people who use AI to develop front-ends end up adapting to what the AI generates, whereas a human developer adapts to the client. Those are different approaches.
double down on betting your career on being a css expert? what could go wrong
Site has been slashdotted
Who says it sucks at front end? Unlike Stack Overflow, AI does a great job of "center a div." I tend to like working from reference documentation, which is great for Python and Java but challenging for CSS, where you have to navigate roughly 50 documents that relate to each other in complex ways to find answers.
I don't give it 100% responsibility for front-end tasks, but working together with AI I feel like I'm really in control of CSS in a way I haven't been before. If I'm using something like MUI, it also tends to be really good at answering questions and making layouts.
Thing is, I don't treat AI as an army of 20 slaves that will get "shit" done while I sleep, but rather as a coding buddy. I very much anthropomorphize it, with lots of "thank you" and "that's great!" and "does this make sense?", "do you have any questions for me?", and "how would you go about that?" If it makes me a prototype of something, I will ask pointed questions about how it works, ask it to change things, change the code manually a bit to make it my own, and frequently open up a library like MUI in another IDE window and ask Junie "how do I...?" and "how does it work when I set prop B?"
It doesn't 10x my speed and I think the main dividend from using it for me is quality, not compressed schedule, because I will use the speed to do more experiments and get to the bottom of things. Another benefit is that it helps me manage my emotional energy, like in the morning it might be hard for me to get started and a few low-effort spikes are great to warm me up.
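For what it's worth, the "center a div" case really has become trivial in modern CSS; a minimal sketch of the two common approaches (class names are illustrative):

```css
/* Center a child both horizontally and vertically with flexbox. */
.parent {
  display: flex;
  justify-content: center; /* horizontal axis */
  align-items: center;     /* vertical axis */
}

/* Or with grid, even shorter. */
.parent-grid {
  display: grid;
  place-items: center;
}
```

Both work on any modern browser; the grid version is the shortest fully general answer the LLMs tend to reach for.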
CSS has definitely become a breeze to work with since LLMs have become a thing. Conceptually it's very "memorize how a billion possible combinations of obscure parameters interact with one another under various conditions" kind of setup so it's a perfect fit for machines and a terrible fit for humans.
The main limitation I think is that they're blind as a bat and don't understand how things stand visually and render in the end. Even the best VLMs are still complete trash and can't even tell if two lines intersect. Slapping on an encoder post training doesn't do anything to help with visual understanding, it just adds some generic features the text model can react to.
I'll grant that. A lot of times I want to give it a screenshot and say "here is what is wrong" and this is usually useless.
I will say though that multimodal capability varies between models. Like if I show Copilot a picture of a flower and ask for an id it is always wrong, often spectacularly so. If I show them to Google Lens the accuracy is good. Overall I wouldn't try anything multimodal with Copilot.
For that matter, I am finding these days that Google's AI mode outperforms Copilot and Junie on many coding questions. Faced with a Vite problem, Copilot will write a several-line Vite plugin that doesn't work, while Google says use the "vite-ignore" attribute.
I still struggle with the design, but once that's locked in getting it to implement things is pretty straightforward. I do have to fight the AI a bit to make sure things are simple and clean, but it's pretty good at that with the right hand-holding.
The design is still a problem though, precisely because I am not a designer. I don't know what's actually good, I only know what's good enough for me. I can't tell the difference between "this is actually good" and "this is vibe-designed slop" but I have enough experience to at least make sure the implementation is robust.
No worse than humans then.
>It's notoriously bad at math,
If you are going to criticize LLMs for being out of date, at least make sure your understanding isn't out of date.
...not in my experience. It does what I need it to do; center a div.
...Does AI suck at front-end? This is news to me.
Except... it doesn't