I reckon something like this has only been possible to develop because of how homogeneous the internet has become in terms of design, ever since the glass effect and drop shadows took over in Web 2.0 and Twitter Bootstrap entered the scene.
You'll get a competent UI with little effort but nothing truly unique or mind-blowing.
Impressive technology, but that old skool artisanal weirdness of yore only becomes more valuable and nostalgic.
There's no shame in being homogeneous and obvious, though.
If I'm building out an internal tool for, say, a hospital lawyer to search through malpractice lawsuits, I want my tool to be the most familiar, obvious, least-surprising UI/UX possible. Just stay out of the way and do what it's supposed to do.
The trick is, of course, that the human is still responsible for knowing when homogeneous is fine, and when there's real value in the presentation. If you're making a website for, say, a VST plugin for musicians, your site may need a little more "pizzazz" to make your product attractive to the target audience.
> There's no shame in being homogeneous and obvious, though.
The real world analog is this...
The reason people (especially Americans) stay in Marriott-brand hotels is that they are homogeneous. If all I want to do is travel to Phoenix, AZ for work, I want to know that the hotel room has the same mattress, desk, TV, customer service, etc. There is real, legitimate value in that. So I'll book the Courtyard in Phoenix because I know exactly what I'm going to get.
On the other hand, when I'm traveling the Amalfi Coast in Italy, I want the Airbnb experience. Sure, the bed is stiff, there's no A/C, and the 80-year-old door frame is hard to close, but there is something magical about it.
It is actually a rational choice. It is a defense against extremely bad experiences.
A personal example from a few weeks back. My SO booked a hotel for a weekend as a birthday present. We went there, it had a fantastic spa, dinner was delicious, the room great, clean, and so on. Individually designed, well thought out, friendly staff.
Breakfast came around and the coffee was abysmal. Really, truly abysmal. What did we do? While eating breakfast we looked for a McDonald's, because we knew for sure that, regardless of where you are, you will at least find an okay, drinkable coffee at McDonald's. It is not great coffee, and it never will be. But the likelihood that you will get a shit coffee is very low.
Marriott is basically the same for hotels. Or Motel One in Germany. It is the power of brand: you get a solid 7 out of 10. And to be honest, when I am traveling for work, that is all I want. I want to know that I will have a clean room and a bed that is good to sleep in, and that I will likely wake up rested the next day, when I have to be at my best for my clients.
The risk of ending up in a shithole has gotten smaller because nowadays people write up their experiences. On the other hand, having seen how many of my reviews were deleted by Google, Yelp, TripAdvisor and the like because some lawyer requested it, I don't give a rat's ass about online reviews.
Personally, if I had to go to Phoenix, AZ for work and stay at a Marriott hotel, I think I would rather convince my boss that this business trip could be a Zoom call, and during that Zoom call I'd notice that participants have all sorts of fun virtual backgrounds, filters, emoji in their statuses, etc.
Because it turns out, the types who don't want fun little differences are exactly the types who will gladly go on a business trip to Phoenix, Arizona and stay at a Marriott hotel.
> all sorts of fun virtual backgrounds, filters, emoji in their statuses
I don't want more pieces of flair in my life, thanks
You generally won't get to know someone well enough to appreciate their unique aspects unless you see them in person at least sometimes. The exception is the person who habitually lets their freak flag fly in all circumstances, which has its own downsides.
That's why I miss the days of old fashioned GUI toolkits (before the web thought of itself as an application distribution platform): you would just design any app as a bag of typical controls in typical containers, and you and your users would live with the expectation that they would look and feel just like the rest of the operating system, nothing more, nothing less. Frivolity would be generally frowned upon, with the result that applications were overall more homogeneous, effective, discoverable and efficient (also in dev time).
I remember when people would vigorously complain that Toolkit X was simply unsuitable for any task because it did not conform to the operating system's standard visual appearance.
Now I struggle to even define what an "operating system's standard visual appearance" is. Apple's still the best but not what they used to be on that front even so.
I'll still die on this hill, but I think that the reason there's a computer literacy problem is because we moved away from following OS conventions (when they existed) and into bespoke, branded UIs for everything, and then eventually to web where every site and webapp behaves differently.
In the early days, if you learned the OS, those usage patterns and skills transferred to every app on that OS. They all looked roughly the same and shared the same menus, same shortcuts, same icons, etc. You didn't have to learn how to use apps X, Y, and Z. You just had to learn Windows (to an extent).
Then marketing got involved, and then the web, and then suddenly every piece of software had to stand out and look and behave as unique as possible, throwing years of HIG research out the window.
Notice that several examples in the Claude Design demo video are typing in English things that could be accomplished through UI controls, if the user only knew where to find them.
Not all OSes, unfortunately. I'm firmly in the camp that says conforming to the GNOME HIG is a bad idea.
Just today I had the disk usage analyzer (baobab) open and was navigating into directories. I wanted to go up a directory and clicked the "<-" left arrow in the header bar, which instead went "back" a screen, discarding all the work done scanning the filesystem.
If this app had a traditional menubar and a toolbar this wouldn't have happened.
This is a common type of experience I have every time I use a Gnome app. It almost feels like someone deliberately researched how to make desktop apps as counter-intuitive as possible and implemented that as the policy for some reason.
I miss the days when there was no "standard visual appearance" for the OS (e.g. DOS). I liked the diversity of interfaces.
Years ago, I remarked to a friend that I'd spent half of my (computing) life post-high speed Internet, yet almost all my happy memories are from before that. It was the same for him, and we both explored why that was.
The homogeneity of interfaces was actually one of the reasons we came up with on why doing work at a computer is a lot less appealing.
That may be true, and had you asked me half a lifetime ago, I would have likely said "The old days were better".
But:
I would have still said I enjoyed using computers. And I wouldn't have said "Today's interface sucks" (well, other than my HW not being able to keep up with eye candy...)
I simply don't enjoy using the computer these days. And I do think the interface sucks. Pretty much anything that involves using the web browser sucks - be it a local app or a web app.
I don't remember people complaining about Winamp being a non-standard UI, but if it were slow then there'd be tons of complaints - and many of the "fancy" UIs were terribly slow (or the programs were, hard for a user to tell the difference).
For internal stuff you're absolutely correct. But using "mainstream" design language (the current trend of rounded three-column AI layouts, Corporate Memphis, skeuomorphism, stock photos of help-desk workers, WordArt, etc.) that isn't unique makes your brand forgettable. Sure, it was mind-blowing when it first came out, but it quickly loses its uniqueness and starts becoming a sign of crappiness/scamminess/enshittification.
Your users will never make it to your no-nonsense backend if your marketing is completely cookie cutter.
Of course, there is indeed no shame. There is also no pride.
Standardized interfaces are as exciting as kettle thermal switches or physical knobs in cars. Useful, probably optimal, and sure to be around for decades to come. Also, nobody talks about them, treats them with interest, or pays above market rate to work on them.
The value shifts to the architecture of the tool, not the interface. There is still value being generated, but the need for a highly paid UX designer evaporates and is ultimately replaced by the above.
Not only possible but sometimes necessary, because sometimes you need to sacrifice familiarity and question our assumptions to make truly meaningful improvements.
True, but why would people use yet another lookalike tool over the one they're currently using? Or is the implication that looks don't matter as long as it works? Because if that's the case, why do we need CSS?
A better example might be why we build stairs with a standard riser height and tread run. If you've ever accidentally tripped on an unusual or non-standard stair, you already know this.
Users don't need to think about how to use them; they are ubiquitous and familiar, and therefore intuitive and automatic.
If every set of stairs (or, worse, if every stair in a set) was radically different, every time you approached some stairs you would have to think carefully about how to use them so you don't fall.
> nobody talks about it, treats it with interest, or pays above market rate to work on it.
Definitely needs a citation for this one. For so many products, the user isn't paying for standout design. They're paying for insight, leverage, velocity, convenience, whatever. The market definitely supports this by paying above-market salaries.
Good design can be a useful differentiator but it isn't the only way for a tool or product to "spark joy" and often _fancy_ design (not good design) is used as a crutch for a subpar product.
> They're paying for insight, leverage, velocity, convenience, whatever.
Correct, they are paying for work done by people in other roles, whose title isn't UI or UX designer. It's on the backend person for velocity, on business development for leverage, on data scientists for insight, on logistics for convenience. Those people are paid for solving those problems, not for tweaking CSS. My team, which falls into this category of more invisible work, hasn't hired a UI or UX person at all. Mathematically speaking, that puts pay for that work below the average rate by default. Meanwhile, Apple will easily pay mid six figures for someone in a flashier role.
To prove the above person's point: SAP and Salesforce have some of the most notoriously bad UX on the market, and yes, they make bank.
Design is much harder for power-user tools than for consumer ones. There is far more complexity, and often the expectation is that users must be trained to even use the tool.
Because the people making purchasing decisions for SAP and Salesforce are not people who spend any substantial share of their time using it directly or care about the UX.
I don't take pride in having an original UI for most tasks: I take pride in having one that's easy to use and gets the job done. I am not disrespecting people who are making a creative/artistic UI: That adds fun and life to the world. But it's not required for every project.
You should feel pride when you deliver the easiest-to-use system that the hospital lawyer has ever used. When you get them in and out of the system quickly because it's intuitive and has an appropriate architecture.
I think many companies need a UX professional to stop developers from deploying bespoke interfaces and forcing them to follow whatever idioms and patterns the users are most familiar with.
I think there's something nice about the idea of a store owner who has unnecessarily decorated the store with love, even with the liability of a cat; it doesn't deliver the product any better, and the cat may actually make things worse because of allergies.
A cold American convenience store may be delivering the fundamental value at American prices, but there's something to be said about that "extra" human or creative element. One might say the same thing about the changing nature of the web over time, less individual CSS chaos and more Facebook aesthetics.
There's nothing stopping people from decorating their boutique stores (or personal blogs, portfolios, and fan websites) the way they want. And that's fun and delightful for me, as a visitor, just like boutique shops are IRL.
But I really don't need that quirkiness at Home Depot, the DMV, or my bank (or Amazon, or government websites, or my banking site). I'm there to purchase some screws, register my car, or pick up some checks. I just need a storefront (or a website) that lets me do that as quickly and homogeneously as possible.
99.9% of stores (and UIs) are the latter, not the former.
There's a real problem with everything looking the same though. For a consumer product, you lose brand recognition. For a B2B product, you can confuse your users because Tool A and Tool B look exactly the same. You have to look hard at the name, kind of like prescription pill bottles.
There is little reason to invent a completely new design system if your goal is to encourage brand recognition and prevent an operator from confusing tools.
SwiftUI, for example, has accentColor, which lets you inject a brand colour. This is subtle but effective for UI differentiation: colour is a design primitive that evokes subconscious pattern recognition, and it can be more effective than a complicated design framework that forces a larger context switch in the user's mind.
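The web analogue of that idea is to keep the stock design system and inject the brand as a single token, e.g. a CSS custom property. A minimal sketch, where the `Brand` type and `themeCss` helper are purely illustrative and not taken from any particular framework:

```typescript
// Brand differentiation as a single design token: every component reads
// var(--accent), so swapping one value re-brands an otherwise stock UI.
type Brand = { name: string; accent: string };

// Emit a stylesheet fragment that sets the brand's accent colour globally.
function themeCss(brand: Brand): string {
  return `:root { --accent: ${brand.accent}; }`;
}

// Hypothetical brand; components would then use e.g. `color: var(--accent);`.
const acme: Brand = { name: "Acme", accent: "#7b2ff7" };
console.log(themeCss(acme)); // ":root { --accent: #7b2ff7; }"
```

That's the whole context switch: one colour primitive, everything else platform-standard, which is exactly why it costs the user so little pattern recognition.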
Agreed. I only make internal tools where I work, and homogeneity is great here. These apps should be the most boring apps, yet clear, easy to use, and importantly, consistent across the company.
Bootstrap was great for this. You got a clean web interface that was simple, yet didn't have to be completely ugly. Basic and functional. A form to submit POs doesn't have to stand out, be glassy, or have animations. It needs to be easy to parse and stay out of the way.
There have been studies showing aesthetics matter quite a bit for UX - users perceive things that are attractive as being easier to use and less frustrating.
This reduces the role of design to some Lego-block assembly process, with higher quality seen as just adding 'pizzazz'.
You are right, though. Many products don’t need more than that. But I fear that this will greatly impact design innovation and progress. We might get stuck in the current UI paradigm for a long time.
We can skip Web3... Web 4.0 is twilight gradients, glassmorphism, text size xs in tailwind, and cards and pills for every UI component. Along with self-explanatory help text acting as filler under every header.
Is "design innovation" a thing we really need? I'm not trying to be flippant, but every time I've come across an "innovative" design the only thing it's done is made me spend time learning whatever bespoke conventions the designer put in.
It's no different to people trying to reduce the role of Programming to the same lego-block assembling process. And I believe the same conclusion follows.
And no-one is preventing you from caring about those things. I build UIs with Claude a lot and I still spend a lot of the time thinking about the user experience and working with Claude to make an app as intuitive and easy to use as possible.
I do similar, but I dislike writing CSS because it's practically impossible to keep up with the standards. And because I dislike writing CSS I don't feel like writing HTML that much either.
Web Components were a bit too slow to take off, so the mental model of JSX has stuck with me, even if the ecosystem, with hooks and various approaches to reactive state, is in many ways an inferior solution to a problem Smalltalk already solved back in the day.
The issue is that you actually don't want it to look like the modern ubiquitous UI we see everywhere, because it's some of the most jarring, least-intuitive crap we could possibly design. Even I struggle with it when trying to help my parents out, so of course they have no chance, and if they have no chance neither does the hospital lawyer. Modern UI is garbage, and thus this just outputs garbage. Believe it or not, creating good UI takes real skill and experience. You can't just slop it out and expect your tool to do what it's supposed to do.
> You'll get a competent UI with little effort but nothing truly unique or mind-blowing.
90+% of attempts at making a truly unique or mind-blowing UI produce a mind-blowingly bad UI. For 0.5 seconds of wow factor, you've added substantial unnecessary friction. Outside of art projects where that wow factor is the point, it really should not be attempted, most certainly not by someone without the appropriate skillset.
The old skool artisanal weirdness was not a purposeful stylistic choice, it was a bunch of people trying to do the best they could with crappy tools. There may be some je ne sais quoi which is lost with the shift to mass adoption, but the reason for the mass adoption of these particular design trends was that they were objectively superior.
> You'll get a competent UI with little effort but nothing truly unique or mind-blowing.
Which is exactly what I want. Do you have any idea how hard it is to get a competent UI?
Why do people celebrate consistency and uniformity in desktop apps, wanting to crucify developers for not following platform idioms and guidelines... and then suddenly want things that are "truly unique" or "mind-blowing" or "artisanal weirdness" when it comes to a web app?
The remaining issue is that even an AI-generated UI needs considerable UX input in order to work well, especially when you have to fit it around domain specific knowledge, use-cases, and prior art. Is it for power users or not? All that.
At risk of shifting the goalposts on what I originally said, unique here isn't meant to mean quirky or weird but, simply, something that hasn't been done before, or hasn't been done as effectively.
This is the challenge for B2B startups that are switching to LLM-based development and are trying to offer more than the reselling of cloud compute at a markup with specialised functionality, because AI turns SaaS into a sexy version of MS Access.
There are still SO MANY insanely ugly, hard-to-use, absolutely horrible apps out there though. Sure, in consumer-focused apps, there's a lot of competition and pretty much everything popular is well-designed. But in enterprise? My god, it's still a massive shitshow.
The hilarious thing is that I would be willing to bet that in a decade it's STILL a massive shitshow in enterprise. That's because the problem with enterprise software is not that good design is all that difficult to pull off (it just requires caring!). It's that the people making enterprise software have terrible taste and can't even see (I am convinced) that the thing they built is ugly and hard to use.
> It's that the people making enterprise software have terrible taste and can't even see (I am convinced) that the thing they built is ugly and hard to use.
Generally the issue with enterprise is that it's designed to appeal to the stakeholders who make the purchasing decision, not the person who is actually going to use it. The people making it may have great taste and know damn well what they could do to make it more usable, but if a clean and easy tool doesn't match the purchaser's preconceived notion of what the tool ought to look like, it's not going to fly.
Having just checked my child in for their doctor's appointment: 90% of web software would be dramatically improved by very boring best practices and readable, accessible markup.
It's a legitimate boon for throwing together little internal apps, mind.
I can slap something together with Claude over a few evenings to fill a gap on tooling, or I can wrestle with Jira and CI and all that to tie things together with their own integrations.
No thanks, I'll just take the API keys and build on top, to my exact specifications, and the interface will be passable even if it needs a lot of polish. Tailwind has worked wonders for that.
I'd argue it's relatively unimpressive, given that the ability to create design systems and apply themes to them to produce relatively generic content has existed for a long time now.
Sure, some prototypes will be spun up more quickly. But if this was a real problem large companies faced it would have been solved in software already.
I wasn't suggesting that the problem would be solved by large companies internally. If anything this is worse for smaller companies, who have already solved this problem for decades at this point by simply not caring about design too much and using the web UI framework du jour. We've already seen with Tailwind that moving to "just put money in the AI machine" comes at the expense of open source UI framework sustainability, with the upside of being slightly faster at making a first-pass boring design.
> You'll get a competent UI with little effort but nothing truly unique or mind-blowing.
The shelf-life of unique and mindblowing has reduced to a week (being generous) before it's copied by slop artists looking for a resume booster or funding, and months tops before it's part of training data for everyone. Unless you find it in that small time window everything will seem homogenous.
It could just be a systemic result; unless you deliberately take the lonely road to parts of the internet where other people aren't, you will not see unique and mind blowing things. Which by definition you can't source from a place that has a lot of users, like social media or popular forums.
Music isn't really new either; it's just recombining riffs already created. But the recombinations create new experiences. Might it be the same with design?
AI can already generate music, and it stands out a mile, both in the lyrical content and in how middle-of-the-road it all is. People will create some slop on Suno and lip-sync to it on TikTok, as if it didn't sound exactly like a thousand country singers blended into one.
So it's competent, for sure, but that is damning it with faint praise.
I can see this heading in a direction where the AI model basically serves you everything live. No sites, no front end, just databases and a model embodying them.
I mean, why even code anything in a future where it is cheap and fast enough to just come up with everything each time, based on each user's need?
I am not saying it's good, but it's lazy. And if one thing is certain, it's that laziness prevails. Some even mistake it for progress.
But then, is a human programming language really the optimal way for an AI to steer the silicon? Picture some kind of bare AI OS: a kernel, drivers, and in the middle a fat specialised ASIC AI chip orchestrating everything.
> Impressive technology, but that old skool artisanal weirdness of yore only becomes more valuable and nostalgic.
But does it still exist? Even without AI, everyone is using the same CSS frameworks, the same libraries and templates... design is pretty much boring these days. CSS Zen Garden, anyone?
I hate it so much. Ah, your website/app/program is composed of rounded-corner cards in four colors (color/pale color/white/grey), with a dark theme. Your clickable text isn't visually distinguishable from your non-clickable text. All of your logos are sans-serif SVGs. Your settings and action menus are split across four different primary hidden locations. Your scroll bars disappear even when there's text hidden offscreen. You try to guess what I want to click on by showing a series of competing horizontally-organized pills over the top of the content instead of just giving me a consistent set of action buttons.
AI companies: "good news, everyone! We've automated all those steps so they're even easier to generate!"
I think the same thing is happening in physical construction. Ah, I see you've designed a new box with four primary color tones and slightly offset vertical lines to break up the windows.
What's great is I can take what you just said now and use it as context when generating my new DESIGN.md system and making sure it doesn't look like any of the other stuff. Thank you! Superpowers will show me all the options in their built-in visual companion when brainstorming. :)
I don't know, I looked at their demo video and it was tile/cards all over the place. I haven't seen an old-fashioned user interface like the kind we saw before 2020 in ages.
There was a screenshot of Valve's front page from when Half-Life 2 was released in the early '00s[0]. It was well laid out, straight to the point, and had design flourishes that would have been painful to put together at a time when CSS was new and not supported very well.
Obviously a product of its time, and laid out similarly to how it'd be printed in a magazine (the characters slightly overflowing the borders and such). Accessibility wasn't a thing back then.
If a different company did that in 2018, you'd be seeing the G-Man in Corporate Memphis, downloading about 500 MB of assets, with 178 separate ad trackers in a consent popup, and then you'd be scrolling like mad through all sorts of animations that hijack the scrollbar in order to get to any useful info.
IMO it doesn't flatten design into one thing; it splits it: cheap, obvious work at scale, and a much smaller premium tier for real authorship. The middle is what actually gets crushed.
Sort of. At the moment there is a fad of websites that mess with your scrolling and have very low content density. They are all trying to imitate Apple's marketing pages. Most startup websites do this. It's not at all good design, it's user-hostile, but it's trendy and popular right now.
Everyone else is already pointing out how competent over unique is purely a positive, so I want to criticize the other implicit assumption here.
This comment is just a rehash of the increasingly outdated and incorrect assertion that LLMs can't possibly exhibit any creativity -- and it's also incorrect.
If you're yearning for "old skool artisanal weirdness of yore", look up the trend on Twitter a month or two ago of people asking Claude to make YTPs. They ended up very weird and artisanal in a way distinct from how any human would do it.
That's how I've felt about all AI design. The harnesses get better and cooler, and the outputs raise the baseline from utter crap to "whoa, that doesn't look bad at all!", which works for probably 90% of the web, but anything truly unique still requires a lot of human taste. Maybe that will change one day, but I hope it doesn't.
I am not so sure. I lean towards client work on desktop/mobile/web, and while the initial output is workable, as new requirements come in it starts to fall apart, largely because the vibe coder doesn't understand design basics. It's one of those "you don't know what you don't know" situations, not that AI cannot write workable CSS or whatever.
Homogeneous design is a good thing. The internet isn't nearly homogeneous enough, actually. The mid-90s desktop apps got it right, and we've been regressing ever since, because web designers are like kids with crayons.
Look up in an old city, at the facades of the buildings. They have quirks and uniqueness; they make the city almost a living thing. Every time we shave off another edge, we lose that. Never mind the fact that shoehorning everything into the same patterns is actually an antipattern, and that very good paradigms have been invented since the '90s.
It's not perfect, but I'd rather have a bit of a mess than boring emptiness.
UI design is an art. Like any other art, it's bound to have constant currents and countercurrents. More than designers' whims, it's the population's need for novelty, generational differences, and companies' desire to stand out that drive the wheel.
For my agency this won't replace Figma or designers. It's just a really useful tool to express yourself and communicate intent.
Before these tools, when a client wanted a specific section built, we'd spend hours hunting references across the web. The output always ended up feeling like a mesh of 2-3 sites, never fully unique. Then we'd burn more time explaining the intent to the client's designers and devs, usually with multiple rounds because words don't convey layout well.
Now we throw a quick mockup together in Claude or Lovable and send it. The designer gets the idea in 30 seconds instead of a 45-minute call, then pushes it further with their own taste and the client's branding.
It's not replacing designers. Most clients don't know what they want until they see it. These tools collapse that feedback loop from weeks to minutes, so the designer actually spends their time on the parts that need human taste, not on decoding a vague brief.
Indeed. Kitbashing is a thing, and it was always a thing. Designers I worked with would spend hours doomscrolling pinterest, google images, etc. looking for their, uh... 'spark' when they were given a briefing.
This is just a really cool way of building.
I'm impressed. I tried Google Stitch but it was slow and useless. Sad, because Gemini has a pretty good creative flair, ironically enough.
In Notes on the Synthesis of Form, Alexander defines design as the rationalization of the forces that define a problem. You won't find a better definition. But people tend to think design is the synthesis and its results. This misunderstanding of the role of design and the designer is responsible for all the unfit designs we encounter on a daily basis.

Anyone equipped with a synthesis tool and feeling empowered to quickly and cheaply generate forms will almost inevitably become blind to the very nature of the underlying problems they set out to solve. They'll be fitting the problem to the available forms. They'll skip the understanding, the conversations, the conflicts and disagreements, and happily and wrongly assume a design problem can be solved hermetically, in isolation. They'll think quality is a factor of aesthetics, when in truth aesthetics is an effect; nevertheless, these effects are all they'll have control over, as they're all the tool can do.

The tool will hinder their ability to be rational; to see the inner structures; to find the hidden but essential semantics; to create the ontologies that will support not only the immediate synthesis but sustain the evolution of the design over its lifetime. They'll be denied the enlightenment that comes with gradual, slow construction, the only place and moment where innovative ideas reveal themselves. They'll be impoverished, and confuse output with agency. I feel sorry for anyone who will think using tools equals doing design, because of the truly marvelous human experiences they'll miss, which could never be replaced by the shallow pride of empty achievement.
This is a really verbose way to say that using generative AI has a detrimental effect on the user because one deprives themselves of the learning experience.
Agreed on your take on the parent, although I have to say I feel that AI has had the opposite effect for me. It has only accelerated learning quite significantly. In fact not only is learning more effective/efficient, I have more time for it because I am not spending nearly as much time tracking down stupid issues.
It really really really depends on how you are using it and what you are using it for.
I can get LLMs to write most of the CSS I need by treating them like a slot machine and pulling the handle till they spit out what I need. This doesn't cause me to learn CSS at all.
I find it a lot more useful for diving into bugs involving multiple layers and versions of third-party dependencies. Deep issues where, when I see the answer, I completely understand what it did to find it and what the problem was (so, in essence, I wouldn't have learned anything by diving deep into the issue myself), but it was able to do so much more efficiently than me cross-referencing code across multiple commits on GitHub, docs, etc.
This allows me to focus my attention on important learning endeavors: things I actually want to learn and am not forced to simply because a vendor was sloppy and introduced a bug in v3.4.1.3.
LLMs excel when you can give them a lot of relevant context and they behave like an intelligent search function.
Indeed, many if not most bugs are intellectually dull. They're just lodged within a layered morass of cruft and require a lot of effort to unearth. That work is rarely intellectually stimulating, and even when the methodology is interesting, the knowledge acquired usually isn't.
The real fun of programming is when it becomes a vector for modeling something, communicating that model to others, and talking about that model with others. That is what programming is, modeling. There's a domain you're operating within. Programming is a language you use to talk about part of it. It's annoying when a distracting and unessential detail derails this conversation.
Pure vibe coding is lazy, but I see no problem with AI assistants. They're not a difference in kind, but of degree. No one argues that we should throw away type checking because it reduces the cognitive load of inferring the types of expressions in your head, as you must in dynamic languages. The reduction in wasteful cognitive load is precisely the point.
Quoting Aristotle's Politics, "all paid employments [..] absorb and degrade the mind". There's a scale, arguably. There are intellectual activities that are more worthy and better elevate the mind, and there are those that absorb its attention, mold it according to base concerns, drag it into triviality, and take time away from higher pursuits.
I agree with your definition of programming (and I’ve been saying the same thing here), but
> It's annoying when a distracting and unessential detail derails this conversation
there are no such details.
The model (the program) and the simulation (the process) are intrinsically linked, as the latter is what gives the former its semantics. The simulation apparatus may be noisy (when its own model blends into our own), but corrective and transformative models exist (abstraction).
> No one argues that we should throw away type checking,…
That’s not a good comparison. Type checking helps with the cognitive load of verifying correctness, but it increases that load when you’re not sure of the final shape of the solution. It’s a bit like pen vs. pencil in drawing: pen is more durable and cleaner, while pencil feels more adventurous.
As long as you can pattern-match to get a solution, an LLM can help you, but that does require having encountered the pattern before in order to describe it. It can remove tediousness, but any creative usage is problematic, as it has no restraints.
> Yes but that’s why you ask it to teach you what it just did.
Are you really going to do that though? The whole point of using AI for coding is to crank shit out as fast as possible. If you’re gonna stop and try to “learn” everything, why not take that approach to begin with? You’re fooling yourself if you think “ok, give me the answer first then teach me” is the same as learning and being able to figure out the answer yourself.
This isn’t necessarily a bad thing. I know a little css and have zero desire or motivation to know more; the things I’d like done that need css just wouldn’t have been done without LLMs.
I find it intellectually exhausting to describe to a machine what I want, when I could build something better in the same amount of time, and it isn't for lack of understanding how the LLM works.
It takes a lot of cajoling to get an LLM to produce a result I want to use. It takes no cajoling for me to do it myself.
The only time "AI" helps is in domains that I am unfamiliar with, and even then it's more miss than hit.
> I find it intellectually exhausting to describe to a machine what I want, when I could build something better in the same amount of time, and it isn't for lack of understanding how the LLM works.
I don’t even bother. Most of my use cases have been when I’m sure I’ve done the same type of work before (tests, crud query,…). I describe the structure of the code and let it replicate the pattern.
For any fundamental alteration, I bring out my vim/emacs-fu. But after a while, you start to have good abstractions, and you spend your time more on thinking than on coding (most solutions are a few lines of code).
It is better than doomscrolling on Instagram for hours like the newer generations do. At least the brain is active, creating ideas or reading text nonstop to keep itself active.
Sounds like you're talking about research AI and not generative AI. You can't learn artistic/creative techniques when you're not practicing those techniques. You can have a vision, but the AI will execute that vision, and you only get the end result without learning the techniques used to execute it.
Well, the research is sometimes 10x quicker with AI assistant. But not always. Building phase is maybe 20-100% quicker for me at least, depending on the complexity of the project. Green field without 15 years of legacy that is never allowed to break is many times faster, always has been.
That's a really useful distinction to have explicitly articulated. It's also why plan mode feels like a super power. Research vs Generative AI are different: I'm going to use this.
I guess I was more referring to just using generative AI when learning new subjects and exploring new ideas. It's a really efficient tutor and/or sidekick who can either explain topics in more depth, find better sources, or help me explore new theories. I was thinking beyond just generating code, which is incredibly useful but only mildly interesting.
Okay, this is a pet peeve of mine, so forgive me if I come off a little curt here, but-- I disagree strongly with how this was phrased.
"Generative AI" isn't just an adjective applied to a noun; it's a specific marketing term used as the collective category for language models and image/video models -- things which "generate" content.
What I assume you mean is "I think <term> is misleading, and would prefer to make a distinction".
But how you actually phrased it reads as "<term> doesn't mean <accepted definition of the term>, but rather <definition I made up which contains only the subset of the original definition I dislike>. What you mean is <term made up on the spot to distinguish the 'good' subset of the accepted definition>"
I see this all the time in politics, and it muddies the discussion so much because you can't have a coherent conversation. (And AI is very much a political topic these days.) It's the illusion of nuance -- which actually just serves as an excuse to avoid engaging with the nuance that actually exists in the real category. (Research AI is generative AI; they are not cleanly separable categories which you can define without artificial/external distinctions.)
> I have more time for it because I am not spending nearly as much time tracking down stupid issues.
It is a truism that the majority of a software dev's effort and time is allocated toward boilerplate, plumbing, and other tedious and intellectually uninteresting drudgery. LLMs can alleviate much of that and, if used wisely, function as a tool for aiding the understanding of principles, which is ultimately what knowledge concerns, rather than absorbing the mind in ephemeral and essentially arbitrary fluff. In fact, the occupational hazard is that you'll become so absorbed in some bit of minutiae, you'll forget the context you were operating in. You'll forget what the point of it all was.
Life is short. While knowing how to calculate mentally and/or with pen and paper is good for mastering principles and basic facility (the same is true of programming, btw), no one is clamoring to go back to the days before the calculator. There's a reason physicists would outsource the numerical bullshit to teams of human computers.
Are you sure that is not the illusion of learning? If you don't know the domains, how can you know how much you now know? Especially consider that these models are all Dunning Kruger-inducing machines.
Agree on that too. And I use these as tools. I don't think I'm missing out on anything if I use this drill press to put a hole through an inch of steel instead of trying to spend a day doing it wobbly with a hand-drill.
"Verbose" is the wrong adjective. Yours is a terse projection into a lower space, valid in itself, but lacking the power and precision of its archetype.
The argument is not that only designers can design, nor that everyone should design like a designer. It’s to not confuse shopping for or generating generic solutions with the activity of problem solving. Per Alexander, trivial problems, those that can be solved without balancing interactions between conflicting requirements, are not design problems. So, don’t worry and just pick what you need and like!
Presumably you care about the quality of your marketing; otherwise, why do it at all? Worst case scenario, your marketing turns people off your music who would otherwise have been listeners.
Actually there’s some interesting problems here because a huge part of music marketing is in a visual medium, like a poster or album cover. It is literally impossible to include a clip of your sound.
So you should be really interested in how to capture the “vibe” of your music in a visual medium.
But if you don’t care at all whether ppl actually listen to your music, then yeah you don’t have to deep dive.
"Actually there’s some interesting problems here because a huge part of music marketing is in a visual medium, like a poster or album cover. It is literally impossible to include a clip of your sound."
The term you are looking for is 'aesthetic'.
And indeed.. music is far more than just a sound or whatever simple thing one tries to boil it down to.
I'm convinced many (especially here) really dislike that - they want it to just be a case of typing a few things into an LLM and bam... there you go. They have zero clue about the nature of the economy, what's really going on in various markets, etc.
I think that the beauty of the human experience is that all you need to learn is to practice. You automatically improve at what you're doing. The kinds of skills that atrophy when you use AI are skills that AI can already automate. And nobody is going to pay you to do slowly what a machine can do quickly/cheaply.
When you deploy AI to build something, you wind up doing the work that the AI itself can't do. Holding large amounts of context, maintaining a vision, writing apis and defining interfaces. Alongside like, project management. How much time is spent on features vs refactoring vs testing.
Was thinking similarly... Without the friction, you're unable to explore the space, the space doesn't even exist at all... So it's not even clear where you're going from or where you'll arrive at.
Not really. It’s saying that most people in tech have no fucking idea what designers do, but somehow feel qualified to evaluate their output, and think tools that make things that look nice are designing things. What you reference is one effect of what the comment is about. Another effect is developers, combining this with engineer’s disease, being incredibly irritating to work with because they constantly make reductive comments that completely miss the point while other developers nod and say “yeah that sounds right.” I was a developer for ten years— I’ve seen this from both sides.
> I feel sorry for anyone that will think using tools equals doing design, because of the truly marvelous human experiences that they’ll miss, and that could never be replaced by the shallow pride of empty achievement.
What if you don’t give a shit about design and it’s a means to an end for a project that involves something different that you do care about?
I think maybe how you are conceptualizing design and how the GP meant it are not in agreement, and if you came to agreement on what it meant you wouldn't really disagree about the point either.
For example, I think design, as they mean it, could be described as "how to get that thing we care about". The correct amount of design depends on how exacting the outcome and outputs need to be across different dimensions (how fast, how accurate, how easy to interpret, how easy to utilize as an input for some other system). For generalized things without exacting standards, AI works well. For systems with exacting standards along one or more of those dimensions, the process of design allows for the needed control and accuracy: the person or people doing the work are in a constant feedback loop and can dial in what's needed. If you give up control of the inside of that loop, you lose the fine-grained control required even to know how far you are from the theoretical maximums along those dimensions.
> What if you don’t give a shit about design and it’s a means to an end for a project that involves something different that you do care about?
Thank you for so succinctly demonstrating the problem with using AI for everything. You used to have to either care enough to do the design yourself or find someone who cared and specialized in that to do it for you. Now you quickly and cheaply fill in the parts you don't personally care about with sawdust, and as this becomes normalized you deprive yourself and others from discovering that they care about the design part. You'll ship your thing now, and it'll be fine. The damage is delayed and externalized.
I won't advocate against use of new technology to make yourself more productive, but it's important to at least understand what you're losing.
> You used to have to either care enough to do the design yourself or find someone who cared and specialized in that to do it for you.
You think most UI/UX designers, or the artists creating slop for content marketing spam factories for the past decades, cared? Some, maybe. Most probably had higher ambitions, but are doing what actually pays their bills.
It's similar to software developers. Most of those being paid to code couldn't care less, they're in there for the fat paycheck; everyone else mostly complains the work is boring or dumb (or worse), but once you have those skills, it makes no economic sense to switch careers (unless, of course, you're into management, or into playing the entrepreneurship roulette).
I think the more you industrialize a process, the more those involved become cogs (or get replaced with actual or metaphorical cogs in a machine). This is fine, even desirable, for anything we can produce en masse and apply quality control to. I do not mind that my rivets and screws are not artisanal. We figured out how to make a useful and reliable widget and can churn them out on an industrial scale no problem. I do not see the value in doing the same with software. We already get mass-production for free because the product is bytes. Why are we industrializing the process of making millions of variations of the same thing? Surely the effort would be better spent finding the "screw" of software, perfecting it, and making it trivial for users to accomplish whatever task they want without having to generate the gaps between with untested code. I want modularity and better design, not automated design.
The paychecks weren’t great. Everyone was offering to pay designers with “exposure”. If they didn’t innately care about the field they would have done something more lucrative.
Balancing requirements to achieve something you care about is doing design. I take that by “design” here you mean perhaps a particular interface or media, and you reckon that such element is not critical to your solution. If that’s the case then there’s no conflict at all. By reaching that conclusion you isolated what’s important and are correctly applying energy where it matters. This happens a lot in design, where producing or perfecting media interfaces is not necessary.
> what if you don't give a shit about design and it's a means to an end…
The parent's point is that it doesn't work that way. The point is self-reinforcing. Design is not a thing; it's the earned scars from the process. Fine to disagree, but that only reinforces the point.
Man so much of this thread is full of such high minded philosophizing, it's like we're debating wine instead of talking about interfaces for doing things.
Like, maybe I just want to make an interface to configure my homemade espresso doohickey. Do I have to wear a turtleneck and read Christopher Alexander now? I just wanted a couple of buttons and some sliders.
We don't all have to be experts in everything, some people just need a means to an end, and that's ok. I won't like the wave of slop that's coming, but the antidote certainly isn't this.
Why do you want sliders when a config file would do the same just fine?
It's true that design theory writing is annoyingly verbose and intangible, but that doesn't make it wrong. Give someone a concrete language spec and they will not really know how it feels to use the language, and even once they do experience its use they will not be able to explain that feeling using the language spec. Invariably the language will tend to become intangible and likely very verbose.
But to answer your question: no, it's of course perfectly serviceable to just copy the interface others have created, and if the needs aren't exactly the same you can just put up with the inevitable discomfort from where the original doesn't translate into the copy.
I agree, though I'd offer a counter-point to the implied idea that tools like this stifle exploration and creativity.
I'm an engineer who also loves design. I've read a lot of the books (including the one referenced), I know some concepts and terminology, and I understand the general process — but I'll never be a professional designer. My knowledge is limited, and I find most design tools so complex they actually get in the way of problem exploration and creativity.
For people like me, this tools removes the friction which actually prevents me from being more focused on the valuable parts of the design process. I can more easily discover and learn new concepts, and ultimately spend more time being creative and exploring the problem space.
There’s no conflict here. Using a tool to automate what you have validated to be the trivial parts of a production process is the proper use of the tool. Professional designers also use this bias. For instance, I might recognize that creating a custom font or illustration is not core to my solution, so I can employ an off-the-shelf font or illustration and focus, say, on the written content. Same principle. The problem is most people won’t even acknowledge or validate the essential aspects of the solution and just iterate mindlessly.
The issue is that UI design has different constraints compared to general graphic design, just like product design is not sculpture. Most UI designers only care about the visual aspects while neglecting the interactive aspect.
A whiteboard or a wireframing software would be better, because it lets you focus first on the interactive part. And once that’s solved, the visual part is easier.
This is such a beautiful distillation of everything I believe about the dangers of over-reliance on AI. I implore thee, good sir, to write a longer essay on this.
Creativity is a very big part of design, and these gen-AI tools allow for stepping through a lot of variations and creative ideas very quickly, even creating working artifacts and prototypes on the fly and iterating rapidly.
This speed and variation wins for me. But yes, without a designer's eye, laziness can lead to slop design too.
To me the value of gen AI is as an accelerant (not a slop factory) for ideation and solutions, not a replacement for the human owning the process... but laziness usually wins.
> because of the truly marvelous human experiences that they’ll miss
when people wax philosophical/poetical about what is essentially capital production already i'm always so perplexed - do you not realize that you're not doing art/you're not an artisan? your labor is always actively being transformed into a product sold on a market. there are no "marvelous human experiences", there is only production and consumption.
> They’ll be impoverished and confuse output with agency
> your labor is always actively being transformed into a product sold on a market. there are no "marvelous human experiences", there is only production and consumption.
The first time I used Mac OS X, circa 2004-2005, I was blown away by the design and how they managed to expose the power of the underlying Unix-ish kernel without making it hurt for people who didn't want that experience. My SO couldn't have cared less about Terminal.app, but loved the UI. I also loved the UI and appreciated how they took the time to integrate CLI tools with it.
I would say it was a marvelous human experience _for me_.
Sure it was the Apple engineers' and designers' labor transformed into a product, but it was a fucking great product and something that I'm sure those teams were very proud of. The same was true with the iPod and the iPhone.
I work on niche products, so I've never done something as widely appreciated as those examples, but on the products I've worked on, I can easily say that I really enjoy making things that other people want to use, even if it's just an internal tool. I also enjoy getting paid for my labor. I've found that this is often a win-win situation.
Work doesn't have to be exploitive. Products don't have to exploit their users.
Viewing everything through the lens of production and consumption is like viewing the whole world as a big constraint optimization problem: (1) you end up torturing the meaning of words to fit your preconceived ideas, and (2) by doing so you miss hearing what other people are saying.
> Sure it was the Apple engineers' and designers' labor transformed into a product, but it was a fucking great product and something that I'm sure those teams were very proud of. The same was true with the the iPod and the iPhone.
...
> Work doesn't have to be exploitive. Products don't have to exploit their users.
bruh do people have any idea what they're writing as they write it? you're talking about "work doesn't have to be [exploitative]" in the same breath as Apple which is the third largest market cap company in the world and who's well known for exploiting child labor to produce its products. like has this comment "jumped the shark"?
> Viewing everything through the lens of production and consumption
i don't view everything through any lens - i view work through the lens of work (and therefore production/consumption). i very clearly delineated between this lens and at least one other lens (art).
The guys in Cupertino aren't the ones behind bars so they can't jump to their deaths; for someone who supposedly "clearly delineated", you sure are mixing up those who are being exploited with the people who benefitted.
Ultimately the exploitative pyramid always terminates in a peak, and the guys working up there can for sure be having a hecking great time doing their jobs.
Maybe you'll dismiss it as another poetic waxing but what I understand they're saying is that capitalism hasn't yet captured all the inefficiencies of the human experience.
just repeating the same mistake as the OP: sadness/happiness is completely outside the scope here. these are aspects of a job - "design" explicitly relates to products, not art. and wondering about the sadness/happiness of a job is like wondering about the marketability of a piece of art - it's completely beside the point!
OP never talked about art. Design is not art, it's problem solving. And good design according to Dieter Rams:
1. Good design is innovative
2. Good design makes a product useful
3. Good design is aesthetic
4. Good design makes a product understandable
5. Good design is unobtrusive
6. Good design is honest
7. Good design is long-lasting
8. Good design is thorough down to the last detail
9. Good design is environmentally friendly
10. Good design is as little design as possible
Generative AI just tries to predict based on its training data.
A product can be a piece of art, and design can and does in practice often go hand in hand with art; most designers practice the artistic role alongside the utilitarian one. Whether you would want to group art within design is a matter of definitions.
Whatever the merits or demerits of 'marvelous human experiences' are from the point of view of production and consumption, the OP's conclusion leaves out the important point that Alexander's 'rationalization of forces that define a problem' produces designs that come closer to solving real-life problems (even in production and consumption) than simply putting attractive lipstick on an economic utility pig. If production isn't solving real human problems, consumers will go elsewhere.
> If production isn't solving real human problems, consumers will go elsewhere.
of course, but that's well within the scope of the whole paradigm (as opposed to how it was originally phrased, in relation to a loss of "marvelous human experiences"): if i use a bad tool to solve my customers' problems in an unsatisfactory way, then my customers will no longer be my customers (assuming the all-knowing guiding hand of the free market). so there's no new observation whatsoever in the OP.
If you look at Figma's stock price, it started falling right at 11 AM, as this news was released.
Anyways, this is 100% a shot at Figma, but also catching Lovable in the crossfire. If anybody from Anthropic is reading this, if you keep developing this with features in Figma and other design tools, you'll have a major hit on your hands.
The folks at Wall Street do not understand this does not replace Figma.
Figma is targeted towards designers who create thoughtful design systems and cohesive UIs and who don't code, while this is targeted towards vibe coders who can't design. Two different circles that intersect to some level.
But like you said, if Anthropic adds these tools to Figma, only then can they take customers from Figma, IMO.
>Figma is targeted towards designers who create thoughtful design systems and cohesive UIs and who don't code, while this is targeted towards vibe coders who can't design. Two different circles that intersect to some level.
The challenge is that this sets an expectation of what "design" is, de-valuing the former and shifting us culturally towards the latter and a space where "design" is seen as a subjective visual exercise with little intrinsic value.
I think there's a parallel here in advertising and what AI has done there. It's clearly used nowadays, a seasoned user can probably spot it straight away even if it gets harder over time. Still, it's deemed "good enough". The savings versus having a team and shooting on location etc. can be enormous. Even before this launch, I see it on the web. It's already happening.
Tools like Figma are for an era (and persona) who still wants to have all the various knobs and dials to dial in exactly what they want. And that is one way of working if, like you said people are trying to be more thoughtful and know exactly what they want.
But for the other 95% of people (like me), being able to just say "ok can you make it look more modern" and have 4 variants in 5 minutes is the whole appeal. Figma will lose users like me.
But then again I was never a "designer" – more a builder.
I'm much closer to your persona than a professional designer. 5 years ago if I was going to spin up a landing page for a side project I was probably getting something mediocre together with bootstrap or material UI. Today I'd probably get something marginally better together with a tool like this. In both scenarios I'd end up with an undifferentiated but acceptable end state.
I've never paid for a figma seat. A couple of employers have so that I can collaborate with designers in the product, but I don't think this changes that.
In an era where it's cheaper and more common to end up at that undifferentiated state, the ability for companies to make their products go above and beyond it is more valuable, not less.
I see this across the board with AI. It lowers the bar to get to passable, but as slop fills the internet we're already seeing people place more value in good products, good writing, good art, thoughtful code architecture, etc. Everyone and their cousin's uber driver is vibe coding a SaaS startup no one's going to pay for right now.
Ah, slopper is hilarious. For too long, the title of "builder" has been an excuse to make dog-shit UI and excuse yourself. If you're going to build user-facing tools, good UI/UX is a requirement, not an option. I couldn't imagine this excuse flying in any other industry. Yeah, I just made a chair where all 4 legs are different lengths and the backrest is in the middle of the seat, but "I'm just more of a builder".
Would you like to attempt a more good-faith interpretation of what I meant and address that (you can even imagine doing this in front of a user/client and iterating in minutes with them, ultimately getting even better outcomes), instead of inventing the most ungenerous interpretation of what I said, that I'm just adding AI slop?
I don’t think I can interpret it in better faith. You’re excusing low-quality output by calling yourself a “builder” (a meaningless term, btw). Is “slopper” not an accurate term here? How else would you describe somebody who spends 5 minutes prompting an LLM on one of the most important aspects of a product?
Everyone who creates something is a “builder”, that term doesn’t excuse someone from not putting effort in. I don’t care if you aren’t a designer, it’s about the effort you put into your work :)
The obvious bad faith part of your argument is assuming that it's "low quality output." Another is using a blanket negative and dismissive term like slopper, without taking a chance to actually see the work output (at least in my case).
You also clearly misread what I said. I didn't say I spent 5 minutes prompting an LLM. I say the ability to get FEEDBACK (a revision) in 5 minutes is amazing. And I stand by that. That allows me to do 20 more revisions and do in a couple of hours what would take two weeks.
You seem to be romanticizing the concept of grunt work – that for something to have value or be of good quality, you have to put in some sort of minimum amount of time on it, and it has to be tedious. It's the same concept that nobody can make a good quality piece of furniture unless they used a hand saw and spoke sweet nothings to the tree before it was cut.
There are ways to do things quicker while preserving quality. I had already left a caveat saying that for the 5% of people that really want to push web design forward, totally, go ahead. But for the rest of us (including those of us who have lived and breathed code and engineering principles for decades), these tools are phenomenal for iterating quickly.
Anyway, the term builder is more about separating the goals from a vanilla "programmer" - even though i've programmed my whole life, it's always been in service of an outcome. And the outcome is almost never "good code for the sake of good code" - it has to serve a real outcome in the real world.
By the way, lots of good designers are also using coding agents now, so you can keep romanticizing grunt work while most of the market moves on.
> But for the other 95% of people (like me), being able to just say "ok can you make it look more modern" and have 4 variants in 5 minutes is the whole appeal. Figma will lose users like me.
Perhaps this phrasing is what invited the interpretation you seem to be annoyed with.
There is not much to gain by suggesting everyone is simply bad faith.
No the bad faith part comes from assuming that the output is low quality, and that just because I get _feedback_ in five minutes (read again what I said) it somehow implies that I spent 5 minutes on it and then moved on, never to revisit.
I think you, like the other person, are assuming that 5 minutes = low quality, instead of thinking "5 mins means you can make 8-10 iterations in an hour" or "5 minutes making the front end look pretty good means I can spend more time on the backend."
There are many ways to interpret this, yes. I only mean to disrupt the framing you keep asserting of good and bad faith, I'm still not sure I understand what you are getting at.
No one is assuming the output is strictly low quality, from what I can tell. I am personally evaluating the method you described, which suggested you are championing a sloppy but highly iterative design flow over a seasoned, curated suite for defining design. I don't see any reason to assume the other comment was doing otherwise.
You made a broad generalized strong claim and were met with the opposing force, which is actually acting from their own understanding of good faith, believe it or not (see how this analysis is void of meaning?).
> Figma is targeted towards designers who create thoughtful design systems and cohesive UIs and who don't code, while this is targeted towards vibe coders who can't design. Two different circles that intersect to some level.
This overlap has been widening incredibly quickly; lots of designers are now writing code with the help of Cursor, Claude Code, etc.
Even if you believe "real designers" won't ever use this product, it's not hard to see how a low barrier-of-entry tool could affect Figma's bottom line. Slowing down Figma's adoption among the new wave of entry-level designers who don't already have muscle memory would not surprise me at all.
Figma was never needed. They were useful when enterprises allowed people with no coding experience to mandate how UI should look. It is the PowerPoint of dumb people who wanted a career in tech. Happy to see it dying.
Hard disagree. There's more to UX than pushing pixels around. Usability, accessibility, and capturing the broader customer experience at 40,000 ft isn't a trivial process when you're designing a large product (or suite of products) especially.
These areas obviously tie into engineering very closely, but the thinking that goes into them happens at the design stage, at a lower cost than starting with engineering. AI models suck at getting every facet of this process right, because designers are achieving a balance between branding, usability, standards, taste, and differentiation -- the exact opposite of a model trained to reach for the most average outputs.
I'm not sure they don't care anymore, as much as they experienced the same pressure every company faced when AI went mainstream.
Had they not included support for it, where would they be now? I'd wager a critical mass would be screeching to high heaven for integrations, seeing as a Figma document is effectively a config file that can be translated to real code.
I think they understand that the people running businesses are going to look at this vs a human who uses Figma and realize how much more cost and time efficient it is to pay for a machine than a human.
Just last week, I asked the designer on my team to try working in Codex instead of Figma. It's just not a great workflow to pass a Figma file to a developer to implement. She hasn't wanted to go back yet…
I think the target market for this is small businesses wanting to throw together quick concepts without needing to hire a contractor necessarily. This smells more like Squarespace and what they did for brochure websites / portfolios than anything else, but perhaps more general purpose.
This is 100% a response to "Stitch With Google" which has been doing amazing work in this space for a long time now, and is Google's answer to Lovable.
I'm now pasting all my Stitch output into Claude Design to see what happens.
edit: First impressions are great. It asked me a ton of really great questions about my design aspirations and direction, which were incredibly relevant and insightful. Waiting to see what it makes.
edit2: It did astonishingly well with the first design pass. Really outstanding. This is probably going to be my primary prototyping tool until the Next Best Thing(tm) drops in a few weeks.
I too use Stitch; it seems not many have heard of it. Stitch also asks good questions. Do you have any examples of both side by side with the same prompt or idea?
Really annoying but I can't figure out how to share a link from Claude Design? It seems to only allow you to share the design with other people on your account? It'll let me export all the HTML and assets, but that's a bit rough to have to download it all and upload it to a host somewhere just to send it to a client for review or something. On Stitch you can just grab a link to the design.
Figma's stock has been on a sharp downward trend over the last year. This isn't a noticeable change to their stock price at all. They're down 30% just in the last month, with many days being -5% to -10%.
If we expect Anthropic to try to replace not just all software developers with Claude but all software product companies as well, then I really want to know: why the fuck are we cheering for Anthropic here on Hacker News? What kind of dumb sheep would do this?
We all hope this to be easy, but it is extremely difficult to get two departments with different reporting structures to work together. Getting two companies to work together for mutual benefit is like asking two political parties to work together.
Judging by Anthropic's track record for sloppy, buggy software, I can't see this taking off quite as well as people might think, when compared to Figma and its captive customer base.
Figma actually put the work in to make a great product that performs well and offers anything you could imagine to design just about anything you need, with AI integrations and deep manual editing to sweat the details.
That valuation is more insane than most; I would've loved to hear the arguments for it. It was a given they would have to compete with the companies who provide them with the models; anyone who thought those companies would just leave that market alone is a damn fool (the vendor lock-ins are great too: add some hosting, domain selling, etc. and you've got free money).
I started using Pencil for my UI designs, and they just introduced animations; I thought they had just kicked Figma, and now this. Figma might just become another Blockbuster.
The more I think about it the more this isn't good for design [EDIT], for a few reasons:
- The best design is original, groundbreaking and often counterintuitive. An AI model is incapable of that; it's uninspired, it will absolutely converge to the norm and homogeneity (you see it everywhere now, just scroll through Show HN and take a look at the UIs) and produce the safest design that appeals to its understanding of the ideal user.
- Good designers will reject this. They prefer to be hands-on and draw from multiple sources of inspiration, which is what Figma boards and Canva are good for, and mainly for cross-collaboration. If you've seen how quickly a great design engineer can prototype, you'll know that the "speed" they advertise in this video is not worth the tradeoff.
- Creatives typically have a very very very high aversion to AI.
- Non-designers will not see a purpose for this tool, basic design can already be done through Claude Code and Claude.ai, I fail to see what this could offer unless they leverage a model that is more creative and unique by default (you can not prompt/context/harness engineer creativity believe me I've tried).
- Design is a lot more than just UI. Tools like this ignore so many other important aspects like: motion, typography, images, weight, whitespace, sound, feel.
> The best design is original, groundbreaking and often counterintuitive
Designing a user interface involves thousands of small decisions. When trading off pros and cons for each of these decisions, in 99% of cases the right answer is 'optimize for familiarity'.
That’s why Android and iOS look the same, and why the small differences between them are where contention happens.
If you adopt existing patterns, your users would be instantly familiar with your app, and the design will not get in their way.
You're arguing for familiarity in tactful design, and I agree that for most users this is a good thing: repeating existing patterns does create that immediate familiarity.
HOWEVER, that familiarity is only a virtue because someone, once, deviated hard enough that their deviation became the new familiar. AI can only optimise toward the current snapshot of "familiar". It cannot produce the next one. If designers outsource all their thinking to a model even in tactful design we would never have groundbreaking design concepts like "pull to refresh" or the command palette.
> someone, once, deviated hard enough that their deviation became the new familiar
That’s not necessarily what happened though. Apple innovated not out of sheer daring but because they also had the best metaphysical paradigm for GUIs, one that people could just intuitively grasp. There was a structural correctness to their approach, underlying all the things that we find visually appealing. In the beginning, Google dared and deviated hard from Apple’s design language to establish their own unique identity, but anyone working in the mobile space would have noticed that Android coalesced into roughly the same patterns over time because of that structural correctness.
When you aim for familiarity, you also assume that someone else's judgement and opinion was and is the correct one; only when you question the assumed can you make meaningful improvements. See the iPhone, which was totally different from the "standard" phones of its time.
If you want to be creative, you should make art. I love art. I think it's a great idea for people to make art.
If you want to make a GUI, it should be familiar. Extremely familiar. It shouldn't invent new ways to interact most of the time.
It is well-known that "intuitive" in UX almost always means "what I'm used to". If you're regularly "innovating" in UI design, you may be making the product harder to use, maybe much harder to use.
It certainly isn't unheard of for new ways to interact with computers to be better than the old, but they are usually tied to new physical aspects of our tools: Touchscreens needed new ways to interact, and maybe there's still some room for creativity there, but not much. The mouse obviously required innovative ideas for several years. But, also, the odds of your wacky new idea being the right way to change how people interact with computers are pretty low, unless you're working at FAANG and have a UX research team and budget to test it.
You can get creative in how it looks, but you cannot get creative in how it works.
I agree somewhat, there's a common language for building products that most people understand and expect.
Innovation comes from the ways people differentiate, without straying too far from the tried-and-true patterns. It's the tiny decisions that situate UI elements and yes, reinvent the wheel sometimes, that can tip users over to whatever you're building because you did it better, or in a way "most" (the average) never thought of.
If people aren't creative in how it works, then really they're all just making the same, boring products, without truly competing against anyone in a meaningful way in the problem space. Visual appeal isn't a sole differentiator.
I could see there being an 80/20-style argument for this sort of tool being used for more generic usecases, with "good designers" using Figma et al. for programs where the UI itself is a selling point.
I will stand by the first point unless models start being trained with different objectives than RLHF's usual three: helpfulness, harmlessness, and instruction-following.
I noticed in your list that you didn't mention accessibility. I would personally rather have an accessible design than one which is "original, groundbreaking and often counterintuitive". And here we are.
I should have mentioned accessibility. It supports my argument more than yours. Accessibility features like captions, voice, keyboard nav, and dark mode are all a deviation from the norm by a minority (something AI is completely incapable of doing) and a fight against familiarity that now serves as a great benefit to the majority.
This ... This is simply not true. I use a screen reader. I am using it right now. I can confirm that AI-generated code, by default, is far, far more accessible, cares far more about keyboard nav, about DOM order, about using the right semantic HTML, about the things that I care about than your average human-designed slop.
And no, it doesn't just add ARIA to everything as is so typical by poor practitioners.
I think we're arguing two different points. You're arguing about implementation, AI is great at this given the existing defaults and the right prompting. AI was trained on 30+ years of accessibility standards that a minority of great humans fought to establish as a familiar practice.
I'm arguing about invention. It is extremely unlikely that AI will be the one to invent the next accessibility paradigm, because that requires deviating from the training distribution, which it CAN'T DO.
I'm also arguing that this homogeneity in design will lead to an atrophy in inventive, unique and original thinking.
> It is extremely unlikely that AI will be the one to invent the next accessibility paradigm, because that requires deviating from the training distribution, which it CAN'T DO.
What is it about our own architecture that lets us innovate beyond our training distribution?
I have no idea how everything will play out, but this sounds a lot like the people saying "good programmers will reject this" six months ago.
Quite apart from anything else, it ignores the fact that—particularly within large organisations—designers (and programmers) frequently have very little say in the matter.
This is a great bridge between non-designers with taste and designers who can't fully technically implement their solutions (or want to more rapidly prototype them). Well-done AI implementation is like cosmetic surgery: you can tell the trashiest implementations immediately, and the more tasteful ones are subtle.
IMO AI will make plain the divergence between "good design" and what people actually want. You're absolutely right that from an artistic perspective, it will produce the heat death of UI. I just struggle to think if teams building will actually care. Boring but polished is completely fine for SaaS.
Plus: So much of excellent user interface design is done through iterating on feedback from live humans testing it with their human sensory system.
Until we have embodied AI's with eyes and hands that provide good enough approximations, the aspect of design bottlenecked on human experience will stay bottlenecked.
Web design / digital design is a dying field, as businesses will start paying one person who does 3 to 4 roles (PM, UX research, design, and UI development; though why use a design tool for web stuff when AI tools generate designs in code?), and now tons of people can do this work using AI tools. Further, is the future of digital experiences user interfaces, aka the web, or will there be an AI phone where everything is done and seen on the lock screen (AI generates the visuals as you text or talk to it), a more text-and-voice digital experience with less UI?
Overall, after being laid off in January and a 17-year UX research/design/dev career, I'm starting school in my early 50s to change careers.
>AI Phone where everything is done / seen on the lock screen (AI generates the visuals as you text or talk to it) and or its more of a text and voice digital experience less UI.
I think more expressive UIs are the future, but I disagree with this sort of thing being accomplished with a non-deterministic tool such as AI generating UIs; you are throwing stability and consistency, along with familiarity, out the window.
The idea of tools being almost UI-less, composable, and modular has been a "dream" since Xerox PARC; see for example the book "The Humane Interface", which, ahead of its time, also outlined reasons why such generative interfaces would be a bad idea, especially at such a large scale.
AI can potentially relieve some friction with that paradigm but definitely not in that way or even that extent.
You could have said the same thing about PowerPoint vs high-quality marketing departments. The "pros don't want this" argument doesn't really hold weight.
This is for non-designers to crank out slop with less effort. They can still be swayed by all the shiny knobs to feel in control.
I hate to hand anything to Generative AI tools, but
While great design breaks the mould, very good design is about surfacing the most expected outcome for any action, which reduces friction and lets people get work done. And this generation of generative tools is very good at identifying the most common, most expected response to a prompt.
I use it all day every day with Claude Code. Beyond code, I sometimes wonder if this has had the biggest impact on my day-to-day productivity: previously I either had to make do with semi-bad-looking reports or have a designer design them (which is slow).
Sort of feel sorry for Figma in a way though, given all the "partnerships" (highlighting their MCPs) and case studies they've done with Anthropic and then they release this. I note there isn't a testimonial from them this time.
I'm surprised how poorly Figma have used "AI" in general - given they were the "gold standard" in taking emerging technologies (WASM etc) and making an incredible product. The Figma Make thing was incredibly underwhelming, I managed to extract the system prompt out and it's basically just Gemini 3 Pro with a design prompt. Perhaps the original team has left?
They are extremely exposed imo. While all the UI/UX designers will continue using it for the foreseeable future, I strongly suspect a lot of their (A/M)RR was coming from extra seats for PMs, developers, etc to view, export, and comment on the files - not core designer usage. I think a lot of this just won't happen on Figma as much.
I think you hit the nail on the head - I bet Adobe and Figma get most of their sold seats from people who don’t really need the full tool and are basically just using it as a viewer and to make very small adjustments or notes
>I strongly suspect a lot of their (A/M)RR was coming from extra seats for PMs, developers, etc
Their seats system has always been brutal: it's extremely easy to have the seats balloon if you're not careful, and if they're yearly there is only a 30-day window each year in which you can cancel them, when the banner to do so appears.
I tried Figma again after a few years expecting that they'd surely have a tool that lets me describe a design and then it generates a Figma design file.
Nope. Figma Make first renders an HTML/React app with your design. Then you could convert to a Figma design file if you have a pro plan. Extremely underwhelming.
There's hardly any difference between using Figma and just designing it with Codex and Claude Code. And now, Claude Design seems to get it right.
My feedback for whatever it's worth as a 25yr design veteran.
* Massive token usage, some small tasks burned through $50 of credits and did not offer $50 of value.
* Terrible at logo work. Comically bad. This is something that is "hard" so it could add great value if it could deliver.
* Repeatedly forgot prior feedback - when iterating it would re-implement prior iterations after being told why we didn't want that result which made for a very frustrating UX.
* Prone to adding visual clutter - kept adding extra elements that look "pretty" but add no value to the user.
* Seems better at "pretty" vs user focused / UX.
* Did not do a good job of using my existing design / UI library.
* REALLY wanted to start from scratch. Could not be coaxed into designing part of an application, it wanted to redesign the whole thing.
It really feels like Anthropic's product area is extremely overextended at this point. If they want to extend themselves horizontally in an unlimited fashion, they will need unlimited focus, and agents can't supply that. Things will fall through the cracks. Why should I believe that Anthropic will care about this product in 2, 3 years? Whereas I firmly believe that Figma will care greatly about its product in that time
They are building infrastructure components that they will soon wire together with an orchestration layer. Managed agents, scheduled tasks, workflow webhook automation.
The infrastructure piece is what they do best. I'd be happy if Anthropic became the AWS of AI. But this product is just a mediocre shot at Figma, when no such shot is strictly necessary for them. These kinds of consumer products are not what they do best.
Unifying experiences and tying them together is always harder than net new. It's the GRRM problem - expanding out the universe is easy, wrapping it up on the other hand...
It is officially the 2010 Google era at Anthropic (the era where Google released tons of new products and spread themselves too thin).
Anyone remember Google's social media platform??? Google Plus?
This is a good era to be in! It's the era of product experimentation.
As long as you realize that 90% of the products will not be supported long term if they don't contribute to bottom-line revenue, just appreciate it for what it is: a bunch of smart people trying to create useful products.
Just don't be surprised if Anthropic goes the Google route, which is shutting down the majority of the products that are too small / not successful enough to impact their revenue.
You mean you aren't still using Google Duo and Allo? Google Reader? Playing games on your Stadia? I'd be worried about really locking into a specific Anthropic product at this point other than Claude Code
> Why should I believe that Anthropic will care about this product in 2, 3 years?
There's no reason to believe Anthropic will stop caring about this product--they're not Google [1] after all.
> It really feels like Anthropic's product area is extremely overextended at this point.
I don't think so. They have one core product: the Claude model; they're enabling different ways of accessing it. Claude Code for developers, Cowork for general business tasks, and chat for consumers.
This is their first graphic design product, but it fits nicely because once you create a prototype, you can hand it over to Claude Code to make the website, mobile app, or whatever.
The advantage Anthropic has is their ecosystem. A Claude user will be way more productive using Design because all of their context is with Claude; other AI tools don't "know you" the way Claude does. Claude already knows your style and your preferences; it's much more likely to create designs you'd like.
When you go to an AI you don’t normally use, you essentially have to start from scratch.
I feel like it's a good way to build the best PoC in any vertical. Either they create a product, or the big players will but Anthropic can provide them with the infra.
Ultimately OpenAI and Anthropic are destined to crash and burn. They have gambled a lot on LLMs paving the way toward accelerated research, discovery, etc. But it's not happened and it's not happening; the only 'large' revenue stream they have all converged on is selling tokens in relation to software production, which is pretty anticlimactic given all the hyping.
Keeping the hype alive through to IPO is critical now.
Are they, though? My thinking is that their roadmap is heavily focused on the SDLC and solving problems related to software development, so their model will be optimized for that domain. That leaves room in the market for models that are specialized in other areas of expertise.
Agree. I'm building a software factory at my job right now that leverages Claude Code. It seems like almost daily Anthropic is releasing new products or features on existing products that seem to overlap with the factory I'm building. I still think my project provides a more coherent workflow from a SDLC standpoint, bringing an idea to a ticket, to a prompt, to a plan, and finally coding and creating a PR, but it definitely feels like Anthropic is heading in that direction as well. It'll be interesting if they eventually bring all of this together into a single uber tool.
If that is what they are optimizing for, then they are not doing a great job on that either. Their latest model is more expensive and less impressive, their downtime is abysmal by any comparison, their "Dispatch" product is unworkable (conversation threads don't sync properly, the UX is quite bad), their desktop app is slow and clunky, and Cowork is by all accounts mediocre.
The labs team cooked hard with this. As a designer who's been using Claude Code a lot to make better prototypes, I still go back to mockups for comparing many iterations, collecting precise feedback with comments, and documenting decisions for decks or sharing with other departments like product marketing. This seems to solve for all of these use cases (or at least start to).
It's also just a beautiful product. The interaction model, styling, and integrations via exporting are all super thoughtful.
"create polished visual work like designs, prototypes, slides, one-pagers, and more."
I use Opus to generate Typst for that and I'm already pretty happy with that approach. It gives me a degree of control I do not have with other methods, because
1. Typst is really powerful
2. Opus is really good at surgically modifying Typst
I basically never look at the Typst code for this. Telling Opus visually what I want changed is usually good enough.
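For anyone unfamiliar with Typst, here is a minimal sketch of the kind of document source a model might emit for a one-pager; the heading, fonts, and table contents are purely illustrative, not from the workflow described above:

```typst
// Hypothetical one-pager skeleton (all names and numbers are made up).
#set page(paper: "a4", margin: 2cm)
#set text(size: 11pt)

= Product One-Pager

// Two-column problem/solution layout.
#grid(
  columns: (1fr, 1fr),
  gutter: 1em,
  [*Problem* \ Teams lose context switching between tools.],
  [*Solution* \ A single workspace with shared state.],
)

// A simple metrics table the model can later edit surgically.
#table(
  columns: 3,
  [*Metric*], [*Now*], [*Target*],
  [Activation], [22%], [35%],
  [Retention], [41%], [55%],
)
```

Because the structure lives in small, named function calls like `#grid` and `#table`, a model can change one argument (say, a column width or a cell) without touching the rest of the file, which is plausibly why the "surgical modification" workflow works so well.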
For me it was the missing piece when working with Claude Code. I am a PM by training, so I neither know how to design nor how to code, but I am pretty good at describing what I want and why. I just played with Claude Design for a while, and it made it really easy to explore different solutions, reorganize the interface, adjust little details with the "comment" function, move buttons around, etc. Then I export to Claude Code, including the design system, and I spend way less time writing a spec and can focus more on corner cases and the ugly details.
2 years ago I still had to hire a freelance designer and a developer for small projects; now, for a fraction of the cost, I am totally independent and can iterate as much as I want. We always mention that "the design is not unique" or "the software architecture is not clean and the code too verbose". I get it; I managed Series C startup product teams before I got sick of the VC shit show. Now I am working mostly with lifestyle businesses and SMBs that have the ambition to be profitable, and average is good enough for them. They gain access to custom-designed software for specific use cases, which was completely out of reach for them 3 years ago. Custom solutions meant working with mediocre, overpriced agencies creating "solutions" with Wordpress. Anything I can do with the Claude stack is on a higher level at a fraction of the cost. And as long as it works and looks good, those businesses don't give a ** about unique design and scalable software engineering.
Good for crunching out some prototypes, ideas and getting inspirations I guess. Two prompts - the initial one and one refinement - took about ten minutes and used up 90% of the token budget. I wonder what the real costs are. After the IPO, they will no longer be able to subsidize token costs. The question will then be whether it's still cheap enough just for prototypes, ideas and inspiration.
As long as tokens cost less than humans people will pay for them. If you're human you need to differentiate yourself bigly from what will quickly become mainstream AI slop.
True. I didn't expect it to provide novel designs. Maybe Anthropic should find a better replacement for 'Design'.
In my example, I expected it to create UI elements for a business application / expert system. And it did fine. In fact, I believe its perfect for creating average and functional designs. Its a better way to test variations of UIs for expert systems. But I want to know what the actual costs are.
Seems like Claude is actually building something like a layered Figma wireframe that you can make fine-grained adjustments to afterwards (e.g. adjust font size).
Interesting that Canva provided a quote of support. I'm not familiar with the differentiation, but it seems like this will directly siphon customers from Canva, right?
There's an "export to Canva" button in Claude Design, so perhaps they're hoping this will be another entry point for new users, or that they'll be able to "lock in" as the default design software for Claude users.
Our mission is to empower the world to design, and we believe in making Canva available in every place where ideas begin. Being the most interoperable platform creates mutually better products, more value for community, and more value and growth for our company.
We've been working closely with Anthropic for many years, and we see this as complementary. Our MCP, integrations, and plugins have already introduced millions of new users to the full power of Canva, and we're excited to continue doubling down here.
What’s interesting here is that with AI, all our interfaces should evolve away from previous generation rigid forms / buttons / tables etc. towards something more fluid / dynamic / “natural”. Yet all the AI coding is geared towards producing more of the former.
I think of them as tiers of expertise: you need to master the basics of structure and form before the robot has the learned representations to competently model user interactions with more fluid instantiations (by down-projecting into the overlearned fixed semantics).
I think it’s clear that Anthropic are leading the industry right now and I believe it’s because they are better at using their own tools to develop software than anyone else. I suspect that trend will accelerate.
I think that's the wrong question. What kind of shadow spy network must exist around all these companies such that they all happen to be working on the same features at the same time?
> What kind of shadow spy network must exist around all these companies such that they all happen to be working on the same features at the same time.
Stuff like that happened even before the invention of the telephone; humans within the same geographic location are even more predictable, so surely this shouldn't come as a surprise.
I mean your initial question kind of implies something that is likely to be the wrong answer, so sounds like at least someone could potentially be surprised by that :)
Frequently, two movies with very similar concepts drop the same year. Is that because they're spying, or because the companies make decisions in similar ways based on similar input information?
It's possible and even likely there's industrial espionage going on. But imo, you don't need that. I've worked in cutting edge industries, and even when you don't know what your competition is doing, there are usually only so many logical next steps.
This. / Who remembers the "birth" of crowdfunding? Why did so much seem to happen at all once? The most likely explanation imo is that it was "in the air" -- we share culture and ideas. These ideas don't have to be stolen to co-occur... quite the opposite.
The human brain strikes again. It is built into our cognitive machinery to look for patterns and naively ascribe causation. We're not rational beings that sometimes mess up. We're a clusterf--k of cognitive biases all the way down.*
Cool pattern! Sure, maybe there is something there.** And/or maybe our brain is doing "conspiracy theorizing lite". It's all on the same spectrum: the same flawed cognitive machinery trying to operate in a weird modern world quite different from where we came from.
A better way: write out your favorite hypothesis. But don't stop there... keep going... write out many hypotheses. Then find ways to test them. To tap into our best selves, I recommend The Scout Mindset (book). Here is an infographic summary of part of it: https://imgur.com/qN31PX8
Probably not a better way: floating one's first gut feelings to the Internet phrased as, say, "the better question" and feeding empty calories to our pattern-craving brains. There is a reason some of our brain functions are considered higher order.
* Maybe I'm overstating this. Let me know? I want to read Rationality and the Reflective Mind by Keith Stanovich (https://academic.oup.com/book/5930) as a counterpoint to the usual suspects (such as Tversky & Kahneman)
It's rolling out progressively, it works for me – it actually seems very polished, the examples are really good; and it lets you create your design system from your codebase
So how much of this is fully generated vs AI running through all the knobs on template widgets?
Is that globe made from whole cloth or is there a bespoke "telecom globe" widget that it dropped in? Could I ask for mock up of molecules with the same fidelity of knobs, down to nucleus size and such?
It's generated. When you try it, you can see this is mostly a harness around claude opus 4.7 that helps it create a good design plan. It also supports asking you questions as it goes along, letting you review and give feedback on mockups, etc., but ultimately if you look at what it's generating as it works – it's just code
"It's just code" is meaningless to me. Is the code it's generating using mostly well-known widgets with predefined knobs, or is every element completely custom, with the knobs created on the spot with slightly different naming and function every time?
I actually think I would prefer the more boring "it composes well known widgets" because then there's a chance I could just use this to generate a presentation layer and integrate it instead of new blobs of code I need to essentially reverse engineer or remake.
The Anthropic team looks to be eating all the use cases and the application layer. I personally know of many Figma + AI startups that are going to feel shaken up by this launch.
Anthropic has distribution on their side, and their engineers are excellent (I have run with them across the ggb in the past, and they work 12-plus hours a day regularly.)
I think what actually might be slowing them down is the public releases and pr lol, not ideas or execution
"Hot" companies with stupid managers often have such workdays.
In the case of engineers and programmers, the amount of useful completed "work" has only a very weak correlation with the length of the workdays.
Good engineers or programmers will think anyway most of the time about the problems that they must currently solve, regardless whether they are in the office or at home or in any other place, and regardless whether to an external observer they appear to be "working" or they appear to do nothing.
Programmers who spend all day typing lines of code into a computer are less likely to be competent programmers, because otherwise they would have found ways to automate activities that require continuous physical involvement, which leave no time for thinking about the right solution.
If whatever they do does not require true thinking, then that is the kind of job that can be done by AI agents.
> their engineers are excellent... they work 12 hours plus a day regularly
What? In my experience people who are good at their job can get it done in a reasonable amount of time. Working 12 hours a day is obsession, not competence. There can be overlap, but there is no causation.
Lovable was a TailwindCSS recombinator, that’s it.
Lately it is more and more ShadCN as well.
TailwindCSS is a masterpiece but ironically doesn't really get its fair share while "build on top of TW" frameworks make money.
TailwindCSS is the final evolution; all the other frameworks had their benefits but also massive limitations.
BEM anyone?
TW is really elegant, a new paradigm in its purest sense, and brilliantly executed. No holes could be poked in it for years, and the extensibility shows how brilliant it is.
Bootstrap will always be held dearly but it was about browser quirks etc first. Important milestone but stands no chance against TW.
Personally I went back to bootstrap when LLM vibecoding took over. With 10-year-old patterns, even a weak model can one-shot anything blindly, at scale. react+bootstrap is kind of a cheat mode for frontend dev now, with zero confusion about specific classes or API version changes of selectors and what have you; it's stupid solid after all this time.
As a dev manually typing, I loved tailwind for sure; with LLMs, not so much, and bootstrap in particular nails it best IME. And yeah, one can customize bootstrap quickly to look however it should; just tell your frontier LLM of choice your wishes.
Am I the only one who doesn't like Tailwind? The way it totally pollutes your html makes it seem like, while the benefits are impressive, they're not worth the collateral damage.
Tailwind is fantastic precisely because the biggest benefit (tree-shaking to minimize the CSS that ships) massively outweighs the fact that Tailwind syntax "looks like" an anti-pattern and makes your code "look" ugly. Also, you get used to bundling your styling and JS code in one place with any component-driven framework like Next.js/React, and Tailwind works seamlessly with all of them. I guess I just prefer the benefits to the alternative, and I feel like the collateral damage of the alternative is definitely not worth trying to make front-end design code look simpler.
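For concreteness, the "tree-shaking" benefit comes from Tailwind scanning your source files and emitting only the utility classes it actually finds there, so the shipped CSS stays tiny no matter how many utilities the framework defines. A minimal sketch of a v3-style config (the glob paths are assumptions, not from any comment above):

```typescript
// tailwind.config.ts -- minimal sketch. Tailwind scans the `content`
// globs and generates CSS only for class names that appear in those
// files; unused utilities never reach the bundle.
import type { Config } from "tailwindcss";

const config: Config = {
  // Assumed project layout; point these at wherever your markup lives.
  content: ["./src/**/*.{ts,tsx,html}"],
  theme: {
    extend: {}, // project-specific design tokens would go here
  },
  plugins: [],
};

export default config;
```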
It's pretty much all personal preference. You can do anything with css modules, or use any other CSS extension.
I don't ever recall hearing about the specific, tangible benefits Tailwind brings. Just a loose "it's faster" or "it's easier". It just feels like one of those things in front-end development that are hype-driven rather than actually bringing any demonstrable benefit.
You are not the only one. I also dislike it immensely. For a framework that established itself as "for developers who don't know or want to learn CSS", polluting the HTML in the manner you describe makes no sense. And no company I've worked at figured out how to prevent it from becoming a bloated, impossible-to-maintain mess.
I'm in the same boat. Tailwind always seemed insane to me, even after really giving it the benefit of the doubt and trying it out. I use it now only because it's so easy for the LLMs to use, so I don't need to actually interface with it at all.
Maybe I am using it wrong but it feels much closer to Lovable than Figma. I was expecting this to feel like the two products combined. Certainly better than Lovable though, but a little disappointing.
The design process was completely dictated by Figma. After the demise of Adobe XD, InVision, etc., they were the only option. Happy to see alternatives entering to save us from the curse of Figma!
Let's not fool ourselves: their goal isn't to build a design tool, it's another training tool for a field where training data is hard to collect. We collectively reinforce predictions to sharpen the models. The pattern is clear across all their products. This is training for future one-shot generative UIs. UIs are already generative; this will scale that up to a world where applications are mostly UI-less.
I threw my sales deck at it and asked it to implement our brand guidelines (attaching that as a PDF). It did a great job and then began giving me internal server errors... I'm going to assume this part of their model farm is totally overwhelmed.
Really interesting response to Google's Stitch - and seemingly a better alternative given some of the features shown in the video. If everything actually works at least close to how it's advertised, this'll be useful. I'm sure it's no Dieter Rams, but it wouldn't be a surprise if it's already better than many devs at design work.
So I guess I should just give up on my dream of having a useful AI assistant for day to day "human" tasks. We're just hell bent on replacing humans in jobs.
I've been spending the last two days building a large number of mockups for a new product. Literally the last two days.
I'm wondering how I can CONTINUE that in this design thing: can I import something? Because they show it the other way... you can start and edit, and then export to Claude Code.
> Import from anywhere. Start from a text prompt, upload images and documents (DOCX, PPTX, XLSX), or point Claude at your codebase. You can also use the web capture tool to grab elements directly from your website so prototypes look like the real product.
Having generated ~250k web design images the past few months, I've concluded generative models, circa 2026, still suck at it. Presumably because the difference between an 'OK' design and a 'Great' one is pretty darn small and far too nuanced. That said, Imagen easily takes the cake for workable design creativity, but even then, it takes 1000ish gens to get something decent, maybe 200 if you aren't too picky.
I personally don't think Anthropic thinks this is a hit product per se. It's all about keeping the hype alive - we all saw what the 'SaaS-pocalypse' did.
The amount of hype is too much for me - it's smoke and mirrors. A firm that knew it was on track to change the world (as much as they have been boasting) would not do this. Much like how Apple kept complete silence about the iPhone and then shocked the world.
Unless you want something that looks like it's designed by Anthropic, this is still pretty shit. Amazingly "AI" hasn't replaced the very first target on their radar - design.
I've been using Stitch from Gemini, and just plain zAi, to help redesign my website. You can use the generated code to copy and paste the design to fit your own templates, but that's a pain - unless you are OK with using Tailwind and the dozen or so classes on every element and don't want to edit anything.
What I found valuable is the design.md that was produced. It's a guide for building each component. So using these tools becomes akin to the PSD-to-HTML work we used to do. At least that's when I find them most effective.
I am not an engineer, I can't look at code and determine if it is good, performant, or elegant. However, I can look at designs and make a judgement. I'm curious to see how Claude Design changes how I think about AI and its capabilities.
Non designers will vibe-design a prototype with claude, export it to canva and let the designers finish it up
If code doesn't go this direction soon, I'd be surprised. PM builds a prototype with claude, or designer designs something in figma/canva - claude vibe codes 70% of the solution using your company's frameworks and design system, then hands it off to the developer who finishes it and productionizes it
Google ai studio has been doing this pretty well for mockups for me at least. I do see people running out of their already limited tokens using this though, unless they silo them to another collection of tokens.
Edit: I see they have a new token tier 'Claude Design' so that's good.
To people saying it's replacing Lovable: it's still missing the ability to deploy right from the app with a backend ready to go. I'm sure it will get there tho.
This AI will help vibe coders but won't be better than a UI/UX designer with experience. We saw a huge difference in our app when we started working with one.
Will give it a try but my experience with Claude and browser use so far is that it’s extremely lazy: it rarely notices or cares when something doesn’t look right, it needs lots of pointing out “hey you ignored that broken render” etc
One thing I am wondering: how much logic (in the programming sense) is there in this product, and how much of it is "just" a lean wrapper around the LLM?
We used to have RAD-based IDEs, and now I guess with AI we're going full circle. I'm curious how their mobile thing works, whether it's just React Native-based or what.
Seems to me like Anthropic is desperately trying to find as many product-market fits as possible before they IPO. They're reaching a chaotic weekly release cadence--each new product chock-full of unclear capabilities overlapping with their previous ones.
Combine that with the obvious hackernews manipulation that somehow gets each and every haphazard release instantly to the top, and you can see they're starting to feel some real heat.
It’s interesting to claim that because everything they do goes to the top on hacker news that they must be in trouble. I haven’t heard that particular chain of effect before.
Feeling some heat != in trouble. Just that the pressure cooker is turning to a higher temp.
But, I'll gladly admit that I am biased: I'm tired of seeing blatant astroturfing by a company whose main marketing tactic is to play on societal fear, while simultaneously employing safety theatre to look like the "good guys".
It could also be that this is an exciting new, fast changing technology that happens to directly overlap and significantly impact the core audience of the site. I don’t think any form of maliciousness or secret astroturfing is required at all.
This stuff has changed a ton of what it means to exist in this whole “tech space”. The entire software development lifecycle got put into a stick blender and is in the process of getting mixed up in new and unusual ways.
It’s super cool. I haven’t been this excited about our industry since way back when the universe was just starting to get onto dialup and I grabbed my very first mp3 or wrote my first shitty program in VB or when AJAX was just entering the universe.
I think a lot of people forgot how fast shit changes in this industry and how learning new things is one of the most important skills to being successful. Everything changes all the time.
This is a tech site called hacker news. Where else would something like this be constantly discussed?
I think it's probably both in the end. Anthropic has a lot of fans, and combine that with excited employees and investors, they probably don't need to do much explicit astroturfing to reach top of HN.
But they also desperately need users (and the data those users bring) to build their products, and the people who do have the power to manipulate this site are on their team. And it does get tiring to see a new Claude feature with like 1 comment and 25 points right at the top, multiple times in the last two weeks. Keeping their needs in mind, it has begun to look like manipulation, even if the above effect could explain it.
I'm glad the technology foments excitement for you. The idea that we can share intellectual processes broadly and implement them without the previously requisite skills will obviously change the world. That it could change the world for the better excites me too.
But many of us have our excitement tempered by the messaging, the questionable ethics behind how it has been done, and the fact that a real % of the space is basically driven by eschatological thinking. And it especially annoys me that Anthropic is the company whose messaging simultaneously encourages that eschatological thinking and preys upon the emotional reactions it creates.
I think it is increasingly clear--if you look at recent public sentiment and feel what is in the air--that they are a villain in this aspect. I don't think we want the people who believe they are building the future to be doing so both out of fear--of China--and gaining power through others' fear of what they are doing.
But villains can ultimately do good in the world, despite their villainy. Let's hope that is how it plays out.
Correct, they're trying to bamboozle the stock market.
I'm looking at this product and thinking - so...? Where's the vision?
Oh, there is none. It's about spraying and praying that the hype continues and feeding off analysts who don't really understand most of the firms whose valuations they spend all day studying.
Ultimately, if it's so close to the finished product, you may as well just do it in Cursor rather than have an extra step.
The design problem to solve post-AI isn't this; it's how the space for thinking fits into all this: getting to the end result slower so human ideation can play out. This is just optimized for the first generic output + tweaks.
Reminder that Claude's own AI design skill (which is probably incorporated directly into this product) says things like
>NEVER use generic AI-generated aesthetics like overused font families (Inter, Roboto, Arial, system fonts), cliched color schemes (particularly purple gradients on white backgrounds), predictable layouts and component patterns, and cookie-cutter design that lacks context-specific character.
> Interpret creatively and make unexpected choices that feel genuinely designed for the context. No design should be the same. Vary between light and dark themes, different fonts, different aesthetics. NEVER converge on common choices (Space Grotesk, for example) across generations.
Will be taking this for a spin imminently. What seems to be very unique here vs other canvas tools are the generative UI controls.
If you’re confused about why everyone is shipping their own canvas tool - this is what I wrote when I reviewed Stitch from Google in my newsletter:
“every SOTA (State-of-the-Art) model can already do this. Give it a prompt, it'll spit out an HTML design. Ask for 10 responsive variations, you'll get them. Stitch is a UI and context harness on top of Gemini, in the same way Claude Code is a harness on top of Claude's models. This means every AI lab will likely ship their own version, and they'll all interoperate because at the end of the day it's all HTML and markdown.”
More generally, this is a competition of where the product development work starts and lives. The business value will accrue to those who become a destination, not a plugin.
AI can now power a new sort of tool that supports the entire process (not just coding or just design). So there’s no reason for Anthropic or any other lab to give that up to another tool.
Nice video but the expectations vs reality is brutal: provide it with an actual PDF with a design system, wait for 30min and the entire thing is stuck.
You talk to it, it consumes "usage" and no design system created.
Had a similar experience with their online code sessions.
All this slop generated code is so freaking bad. We'll get drowned in so much slop code that at some point only these garbage machines will be able to "work" with it.
I'm not sure if I'm holding it wrong, but at these usage rates, I can hardly see this being useful for designers in their daily work. In two prompts using the Max 20x plan, it consumed 11% of my weekly limit for Claude Design, which is separate from your normal limits. A day of work would exhaust over four weeks of usage. Is this meant for intermittent use only? Lately I've been getting the feeling that Anthropic is forgetting how absurdly much we are already paying for these tools, compared to conventional development tools, or even competing inference providers.
This seems a bit fiddly... I gave it the ChatGPT iframe html for my (not yet released) Liveclip MCP server that's like tables [https://x.com/firasd/status/2039926453408776649]. And I had this chat in the sidebar and it popped up some questions in a form
> do you see our current html
Hi-fi design
Interactive prototype
html
● Claude
Yes — I can see your html/ folder. It has 7 files:
files-list.html
math-result.html
rows-view.html
table-view.html
text-editor-doc-view.html
trending-topics.html
unique-result.html
Want me to open them up and take a look at what's in there? And what would you like to do — polish them, turn them into an interactive prototype, build something new on top, or something else?
> yeah in table view i'm thinking about how to deal with cells that may be big. i guess we want to do something different than excel or sheets in that auto expand rows with cells that contain big values. we can show like the first -- what does twitter show, 280? so we can show 280 chars and then [...]
● Let me take a look at the current table view to understand what we're working with.
Got it — I see the current table view. Right now every cell is clamped to max-width:220px with white-space:nowrap and text-overflow:ellipsis, so long values just get cut off at whatever ~30 chars fits the column. You're thinking about the opposite end: what if a cell genuinely holds 2,000 chars of text? Neither pure clip nor Excel-style row expansion is quite right.
Before I start sketching, a few quick questions so I don't guess wrong:
> Questions answered:
- truncation_threshold: ~280 chars (new tweet)
- expansion_ux: nothing right now -- it's not interactive
- row_height_policy: Grow freely like Notion/Airtable
- content_types: Not sure yet
- indicator_style: [...] at the end of the text
- newlines: Render them (row grows)
- copy_affordance: no interaction right now
- column_width: Fixed width, all columns equal
- variation_count: 2 variations
- aesthetic_direction: Same bones, but let you push it a bit
● Got it — so: non-interactive, rows can grow freely, 280-char cap with [...], newlines render, same bones but push it a bit, 2 variations. Let me sketch this out.
Set up variations file with two table treatments
V1: Notion/Airtable-style free-grow with [...] cap
V2: pushed aesthetic variant
Wire both to the same sample data w/ long cells
Let me peek at a couple of the sibling files so the variations stay in-family with the rest of the widget set.
Good — now I have the visual family. The shared vocabulary across widgets:
#f5f5f3 page, #fff card, #d0d0cc borders, #f0f0ec headers
System font, 12–14px
Subtle alternating row tints, no drop shadows, flat corners
Color used sparingly; #666 for secondary text
My plan for the two variations (side-by-side in a canvas so you can compare on the same rows):
V1 — "Notion-ish, stay in family"
Keep current palette, borders, zebra striping
Rows grow vertically as needed; white-space: pre-wrap so newlines render
Cap each cell at 280 chars; if longer, slice to 280 + […] inline
Same fixed column widths, everything else identical
V2 — "Same bones, pushed"
Tightens the grid (hairline borders instead of panels, more whitespace, denser typography)
Column rail with sticky row numbers
Soft fade on the bottom of ultra-tall cells instead of mid-text […] when content is really long (hybrid: char cap + visual)
Monospaced tabular digits for row numbers; type column ⌾ / ▤ markers subtly hinting content type; column width slightly wider for breathing room
Let me build it.
Now let me build the two variations. I'll set up realistic sample data with a mix of short values, tweet-length values, long-long values, and ones with newlines, so the behaviors are visible.
Now the two table components.
Clean. Two variations on the canvas, both rendering the same 7 rows so you can compare directly:
V1 — in-family. Your current widget, with white-space: pre-wrap so rows grow, newlines render, and a soft grey […] appended when a cell goes past 280 chars. Row 3's summary (double-long prose) and row 4's URL show the cap.
V2 — same bones, pushed. Hairline borders instead of filled header panels, monospaced tabular row gutter, column letters as uppercased small-caps above the header names row, tighter typographic rhythm, cells > 280 → […] legend in the footer. Row 7 shows how newlines render naturally.
Both slice at the nearest word boundary when one is within ~40 chars of the 280 cap, otherwise hard-cut — so […] never lands mid-word.
A couple of things worth calling out, not changed:
280 is pretty generous when a row has multiple long cells side-by-side — row 3 is ~2 cells × 280 = a very tall row. If that ends up being painful in real usage, dropping to ~180 or adding a per-row max-height with fade might be worth a follow-up.
Neither variation is interactive per your answer; there's no click-to-expand, hover, or copy. Easy to bolt on later.
Fork verifier agent
Both variations are live on the canvas. Let me know which direction feels right and I'll fold it back into html/table-view.html.
-----
It's basically similar to their Artifacts system in a harness I guess
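The word-boundary rule the transcript describes (slice at the nearest word boundary when one falls within ~40 chars of the 280 cap, otherwise hard-cut) is simple to sketch. This is my own illustrative implementation, not Anthropic's generated code; the function name is made up, and only the `[…]` marker and the 280/40 numbers come from the transcript:

```typescript
// Cap a cell at maxLen chars, preferring the nearest word boundary
// when one falls within `slack` chars of the cap so the truncation
// marker never lands mid-word; otherwise hard-cut at maxLen.
function truncateCell(text: string, maxLen = 280, slack = 40): string {
  if (text.length <= maxLen) return text;
  const hard = text.slice(0, maxLen);
  const lastSpace = hard.lastIndexOf(" ");
  // Honor the word boundary only if it is close enough to the cap.
  const cut = lastSpace >= maxLen - slack ? lastSpace : maxLen;
  return text.slice(0, cut).trimEnd() + " […]";
}
```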
This largely appears to be an HTML generator at its core, not necessarily what Figma does with layers/canvases, etc. There's no collaborative nature to it either.
It feels like a lightly designed product that moves the claude CLI to their backend, generates the HTML, and renders it in the browser on the claude.ai website for you. Sure, it accepts your design system as input or imports it from your repo, but you could feed the same into the claude CLI as well?
I'm curious what exactly it gives besides having claude CLI + prompting it well with your design system + skills.
The IBM/Microsoft analogy is a classic. It’s always fascinating to watch these 'frenemy' dynamics play out. In these cases, the one who owns the direct interface with the end-user usually wins the long game, while the 'infrastructure' partner risks becoming just another utility. Will be interesting to see if Canva can maintain its identity or just become a shell for Claude's output.
Yep agree it looks like it’s taking the existing generated artefact, parameterising it within an inch of its life, exposing a pseudo WYSIWYG for the parameters and calling it a day with a few export options. Not a huge leap from what they’ve got already but it’s a clever adjacent step for sure. Same product new chrome.
Until the large model output variety/mode collapse is solved, tools like this one will be generating monotonous slop. Very few people using it will be specifying each detail of what they want.
Anthropic is making the exact same mistake OpenAI did: lack of focus. Release, release, release, all good apps, all full of bugs. This is a massive mess and people aren't talking enough about how bad all of this is.
But what it really shows is there's no vision. If you had a vision and a coherent strategy, you'd concentrate resources - not dilute them across a myriad of projects.
To be fair, the vision isn't achievable yet. More training data is needed. These projects spring up to try and compel users into offering up training information in the niches that need more data. They don't really have to be good, or even long-lasting, if they satisfy the goal of getting the data sought after.
lmao at the idea of figma integrations, categorically. I don't ever want to look at figma again in my life. figma is what people used to ideate quickly because they couldn't build directly in the browser. we can now ideate quickly while working directly in the browser. figma's entire raison d'être is dinosaur-tier, over.
With that said, this seems like it's just claude code + simplified devtools in a container? Will need to spend more time with it to see how e.g. commenting influences things, but 5 minutes in, this seems like needless formalization on top of how I already used claude for design
Just started messing around with this but I like it. It produces better results than just using Claude Code on its own. The initial output has a lot of junk that needs to be removed (just like anything LLMs generate). I suspect it's only good at reproducing content that is relatively cookie-cutter and prominent in the training data. But still, as a non-designer this produces better results than I can and in line with the level of quality of many paid templates.
I think we will have maybe 5 to 10 years of all this crap, devaluing human made art and human made products, vibe coding everything. But eventually it will all fall apart. Long term only a minority will be happy seeing AI generated crap everywhere and most people will pay for quality human made goods.
Unfortunately it's going to be a tough few years until that happens, where it really does feel like the idiots are winning.
I'd be pretty annoyed if it was burning my Claude Code quota.
So far it's burned through 45% of my design quota. On a positive note, I finally have my fonts rendering. I'm still waiting for it to figure out how it mangled the SVG versions of my logo. I've watched it trying a lot of stuff for 20 minutes and now I've got something that's not my logo.
Postmodernists are annoying because they are right about design. "Taste", aesthetics, design, vibes: they're all as socially constructed and post-truth as that annoying sophomore says everything is. The world of design is an isekai manga in which Derrida's delusions are the rules of physics. It's weird.
Part of this weirdness is the continual relativism of design. A taste-meme is good or cringe only relative to the prevailing social environment, never in itself. An AI can never do "design", properly understood, because design is the work done by a reluctance motor, spinning endlessly in a Sisyphean quest to align itself with a moving magnetic field and producing torque as a side effect.
All efforts like this can do is capture the field alignment at an instant in time. It cannot do work. It cannot produce motion, not as long as its weights are as fixed as the field lines of a neodymium magnet. The instant AI design is good, it becomes bad through the act of becoming good.
Producing work through motion of taste may be one of the last human endeavors to be absorbed.
Interesting take. What do you think of using AI exploitatively? I have no doubt it can easily generate drafts or copy the style of one thing onto another, letting us rapidly try out ideas.
Nothing wrong with that. My point is that AI, for this purpose, is a passive component. Passive components are often essential parts of active machines.
I agree with what you're saying, but I imagine products like this one aren't aimed at replacing this. LLMs are partially a dashboard business, and this is just one tool to aim at your boring business data
Interesting take and I pretty much agree (also, I find it funny that now the only interesting comments on HN are all found at the bottom).
However, there is of course a raw mechanical side to design that comes more into play on practical applications like a software UI vs. pure play memes/vibes like branding or a landing page.
Think hierarchy, leading, kerning, scale, contrast, balance, etc. These things remain constant throughout trend cycles and can absolutely get you half way there. Then aesthetic memes can be sprinkled on top by a human to make people believe "this feels cool."
And another step toward a world where product managers/owners/whatever and other boring people can generate what they once needed creative, passionate, and skilled people for. Go ahead, it's just the natural evolution of extreme capitalism.
I disagree with most of the takes here. The reason this will fail has nothing to do with design, designers, or taste.
This will fail because it's already a forgotten side project within anthropic, and anthropic also has pretty bad product DNA as a company. Their headcount is already too large and the culture is already set. They grew revenue so fast they speedran the stage at which you could build software product chops into the company culture (think Google circa 2008 vs. Google circa 2018).
They should focus on what they're good at: the actual AI models and B2B sales. Let OpenAI play early Google and churn through 100 different consumer product experiences to see what sticks, they're better positioned for it anyways.
Maybe they realised it's actually a dead end? Why else would you not double down and concentrate resources? This would be an amateur error if it turned out to be true.
The faster we commodify design the faster we can get back to some sane consistent normal interfaces. Only the very biggest platforms (e.g. Google, Microsoft, Apple) should be spending any time on "design systems".
Imagine if a designer were hired to custom design the lightswitches in every building. We need to get back to off-the-shelf interfaces and stop wasting smart people's time reinventing UI widgets.
I think a big part of the reason AI is so divisive and only declining in popularity is that much of the discourse is shaped by mid- to senior-career SWEs who fantasize about putting everyone who's not a mid- to senior-career SWE out of work. People who want to replace everything we like with streamlined slop, and want everyone we love struggling to get by. Everyone is inessential except the guy who just sits there telling Claude "can confirm, looks good, ship it", who of course isn't deserving of even a pay cut.
I don't agree. For novel use cases, yes there's some truth to that. But consistency is huge in a UX. If basic controls work well for a situation, they should be used. Designers should not be getting "creative" or "original" for those sorts of things.
I don't think you are really disagreeing. I'd guess mikeaskew4 would include the constraints of following platform conventions / HIGs and using native widgets for any GUI. Those constraints do not preclude original thought.
Transparently, I'm not a designer, I'm a biologist. That said, the things I want designed by Claude don't need great design. I need a slide format that is consistent from one study to the next so the reader can follow. I need a tool that tracks the number of mice in each lab and flags if someone is using more resources than we expect. I need a personal site that is easy to work with that tracks my pet geckos' feed and environment.
If I have a product out of my lab that makes it to human trials, there will be a full team of marketers and designers tasked to the brand image.
I doubt you’d be singing the same tune if Anthropic released Claude Researcher and a bunch of people were saying “I’m a designer, not a biologist. I don’t need great research. I just need X”
Ironically I think AI will replace researchers before it replaces artists.
I'm not the one saying my job is uniquely human. Last week, I watched a video of a vendor's new facility that entirely automates an antibody production and screening lab. The machines run literally every step, from picking antibodies through sequencing, scale-up, and in vitro screening, without a human interfering or donning a labcoat. A machine that is essentially a nicer Roomba fetches the flask, drives it to where the shakers are, and puts it on an empty shaker spot. I have no doubt the lab tasks I do can be automated. Fortunately for me, a large portion of my job at the moment is in handling animals, and there are much higher barriers there, but again, I don't think that part of my job makes me irreplaceable.
Having furniture made by a master designer and artisan woodworker is great. But sometimes you don't have the budget for that, and Ikea furniture does the job just fine. Most of the time, actually.
But the mass market (who this is ultimately for) doesn't care about great design. They care about "seeing something on the screen." If they can get something that looks 80-90% aligned with what they observe to be modern design, they won't think twice (even if the end result is clunky or not on par with what a professional designer would produce). It's the Ikea Effect on steroids.
Tell this to the 99% of designers who are designing the 5th page in some RBAC modal or some obscure settings page. Design is like code - there are a few people doing really groundbreaking stuff, but vastly more doing the utilitarian plumbing everyone depends on.
You can let the LLM create slop for you, sure. But only amateurs are using it for that. You’ll be much happier if you treat it as a tool and use it like any other, a force multiplier to take your ideas and creations and pushes them further along faster.
If you treat it like a black box used to outsource your own thinking, you are holding it wrong.
It’s honestly been my experience. I think I’ve vibe coded one thing that I’ve found to be genuinely useful to me, everything else has ended up as useless slop given time.
I think we probably need less software, but higher quality, not more. Unfortunately AI only goes in one direction…
> Great design is original thought. AI is wholly incapable of that
This is entirely accurate, however I fear there's a lack of perspective:
If you're in the middle of the desert and need to sit down, that random rock looks and feels great because there's nothing even close, around!
One issue that a lot of experts fail to recognize is that "great" is relative: It's not apparent to the experts because they are only pulled in when their expertise is needed. Most of the time when experts are pulled in, requirements are clear, you have traction, scale and now you need to optimize.
Once you're spoiled for choices, you have lots of options and then that random rock doesn't look appealing at all: now you're considering other factors like budget - IKEA vs Adirondack.
Where AI is making a huge difference is in places where "great" isn't that valuable:
- people in the desert: Someone wants to track what words their toddler is saying, or their groceries, or how much kitty litter they should buy soon, and Claude will spit out something reasonable even if it makes the skin of experts crawl.
- commodity and bean counters: in cut-throat industries like power or insurance, it's all commodity services competing on price. Most people aren't going to pay a premium for a better-looking, more intuitive insurance app. It just needs to not suck and fall over. Or you're making a knockoff of an existing, well-understood product.
You'll get a competent UI with little effort but nothing truly unique or mind-blowing.
Impressive technology, but that old skool artisanal weirdness of yore only becomes more valuable and nostalgic.
If I'm building out an internal tool for, say, a hospital lawyer to search through malpractice lawsuits, I want my tool to be the most familiar, obvious, least-surprising UI/UX possible. Just stay out of the way and do what it's supposed to do.
The trick is, of course, that the human is still responsible for knowing when homogenous is fine, or when there's real value in the presentation. If you're making a website for, say, a VST plugin for musicians, your site may need to have a little more "pizzazz" to make your product more attractive to the target audience.
The real world analog is this...
The reason people (especially Americans) stay in Marriott property hotels is because they are homogenous. If all I want to do is travel to Phoenix, AZ for work I want to know that the hotel room has the same mattress, desk, TV, customer service, etc. There is real legitimate value to that. So I'll book the Courtyard in Phoenix because I know exactly what I'm going to get.
On the other hand, when I'm traveling the Amalfi Coast in Italy, I want the Airbnb experience. Sure the bed is stiff, there's no A/C, and the 80 year old door frame is hard to close, but there is something magical about it.
A personal example from a few weeks back. My SO booked a hotel for a weekend as a birthday present. We went there, it had a fantastic spa, dinner was delicious, the room great, clean, and so on. Individually designed, well thought out, friendly staff.
Breakfast came around and the coffee was abysmal. Really, truly abysmal. What did we do? While eating breakfast we looked for a McDonald's, as we know for sure that regardless of where you are, you will at least find an okay, drinkable coffee at McDonald's. It is not great coffee. And never will be. But the likelihood is very low that you will find a shit coffee.
Marriott is basically the same for hotels. Or Motel One in Germany. It is the power of brand: you get a solid 7 out of 10. And to be honest, when I am traveling for work, this is all I want. I want to know that I will have a clean room and a bed that is good to sleep in. And the knowledge that I will likely wake up rested the next day, when I have to be at my best for my clients.
The risk of ending up in a shit-hole has gotten smaller because nowadays people write up their experiences. But on the other hand, having seen how many of my reviews were deleted by Google, Yelp, TripAdvisor, and the like because some lawyer requested it, I don't give a rat's shit for online reviews.
Good pizza in Italy, good ramen in Japan, grilled picanha in Brazil: that's why you go there and want it different/original.
But in software UI this is often overdone. I want the pizzazz in my audio software to be in what it produces, not in how the UI looks.
Because it turns out, the type who don’t want fun little differences are exactly the types who will gladly go on a business trip to Phoenix Arizona and stay at a Marriott hotel.
I don't want more pieces of flair in my life, thanks
You generally won't get to know someone well enough to appreciate their unique aspects unless you see them in person at least sometimes, or unless that person has the habit of letting their freak flag fly in all circumstances, which has its own downsides.
Now I struggle to even define what an "operating system's standard visual appearance" is. Apple's still the best but not what they used to be on that front even so.
In the early days, if you learned the OS, those usage patterns and skills transferred to every app on that OS. They all looked roughly the same, shared the same menus, same shortcuts, same icons, etc. You didn't have to learn how to use apps x, y, and z. You just had to learn Windows (to an extent).
Then marketing got involved, and then the web, and then suddenly every piece of software had to stand out and look and behave as unique as possible, throwing years of HIG research out the window.
Just today I had the disk usage analyzer (baobab) open and I was navigating inside directories, so I wanted to go up a directory and clicked the "<-" left arrow in the headerbar, which went "back" a screen, discarding all the work done scanning the filesystem.
If this app had a traditional menubar and a toolbar this wouldn't have happened.
This is a common type of experience I have every time I use a Gnome app. It almost feels like someone deliberately researched how to make desktop apps as counter-intuitive as possible and implemented that as the policy for some reason.
Years ago, I remarked to a friend that I'd spent half of my (computing) life post-high speed Internet, yet almost all my happy memories are from before that. It was the same for him, and we both explored why that was.
The homogeneity of interfaces was actually one of the reasons we came up with on why doing work at a computer is a lot less appealing.
I understand your feelings, but it is extremely typical in human history to keep remembering "the good old times".
But:
I would have still said I enjoyed using computers. And I wouldn't have said "Today's interface sucks" (well, other than my HW not being able to keep up with eye candy...)
I simply don't enjoy using the computer these days. And I do think the interface sucks. Pretty much anything that involves using the web browser sucks - be it a local app or a web app.
Your users will never make it to your no-nonsense backend if your marketing is completely cookie cutter.
Standardized interfaces are as exciting as kettle thermal switches or physical knobs in cars. Useful, probably optimal and will be around for decades to come. Also nobody talks about it, treats it with interest, or pays above market rate to work on it.
The value becomes the architecture of the tool, not the interface. There is still value being generated, but the need for a highly paid UX designer evaporates and is ultimately replaced by the above.
But there is "pride" in making tools people actually use without issue.
why do we build with right angles, straight lines, regular curves, etc? Why not random angles, crooked lines, etc for style and "excitement"?
Why don't we assemble a furniture set from a random assortment of pieces from flea markets? People sense that that is ugly.
Users don't need to think about how to use them; they are ubiquitous and familiar, and therefore intuitive and automatic.
If every set of stairs (or, worse, if every stair in a set) was radically different, every time you approached some stairs you would have to think carefully about how to use them so you don't fall.
Is the pride not in solving the users' problems?
> nobody talks about it, treats it with interest, or pays above market rate to work on it.
Definitely needs a citation for this one. For so many products the user isn't paying for standout design. They're paying for insight, leverage, velocity, convenience, whatever. The market definitely supports this by paying above market salaries.
Good design can be a useful differentiator but it isn't the only way for a tool or product to "spark joy" and often _fancy_ design (not good design) is used as a crutch for a subpar product.
Correct, they are paying for work done by people in other roles, whose title isn't UI or UX designer. It's on the backend person for velocity, it's on business development for leverage, it's on data scientists for insight, it's on logistics for convenience. Those people will be paid for solving those problems, not for tweaking CSS. My team, which falls into this category of more invisible work, has not hired a UI or UX person at all, which, mathematically speaking, by default puts that work below the average rate. Meanwhile Apple will easily pay mid six figures for someone in a more flashy role.
Design is much harder for power user tools compared to consumer. There is far more complexity and the expectation often is users must be trained to even use the tool.
Design only goes so far.
Describe the idea of what you want to do, not the inscrutable steps the application requires to get there.
Why? Since it's so notoriously bad, why have there been no attempts to improve it?
Respectfully disagree.
You should feel pride when you deliver the easiest-to-use system that the hospital lawyer has ever used. When you get them in and out of the system quickly because it's intuitive and has an appropriate architecture.
I disagree completely. The pride should come from the value that is delivered. Specifically, this:
>> Useful, probably optimal and will be around for decades to come.
Is something to be proud of, full stop.
A cold American convenience store may be delivering the fundamental value at American prices, but there's something to be said about that "extra" human or creative element. One might say the same thing about the changing nature of the web over time, less individual CSS chaos and more Facebook aesthetics.
But I really don't need that quirkiness at Home Depot, the DMV or my bank (or Amazon, or government websites, or my banking site). I'm there to purchase some screws, register my car or pick up some checks. I just need a storefront (or a website) that lets me do that as fast and homogenously as possible.
99.9% of stores (and UIs) are the latter, not the former.
Apple/SwiftUI has accentColor for example where you can inject a brand colour. This is subtle but effective for UI differentiation - colour is a design primitive that evokes subconscious pattern recognition and can be more effective than a complicated design framework that forces a larger context switch in the user's mind.
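To make that concrete, here is a minimal sketch of the kind of thing being described, using SwiftUI's `tint` modifier (which supersedes `accentColor` on iOS 15+); the colour value and view names are invented for illustration:

```swift
import SwiftUI

// A hypothetical brand colour applied once near the root of the view tree;
// standard controls (buttons, toggles, links) pick it up automatically,
// so differentiation comes from colour rather than custom widgets.
struct BrandedSettingsView: View {
    var body: some View {
        VStack(spacing: 16) {
            Button("Continue") { /* action */ }
            Toggle("Notifications", isOn: .constant(true))
        }
        .tint(Color(red: 0.85, green: 0.25, blue: 0.20)) // brand colour
    }
}
```

The point is that one line of configuration restyles every stock control consistently, with no bespoke design system needed.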
Bootstrap was great for this. You got a clean web interface that was simple, yet didn't have to be completely ugly. Basic and functional. A form to submit POs doesn't have to stand out, be glassy, or have animations. It needs to be easy to parse and stay out of the way.
There have been studies showing aesthetics matter quite a bit for UX - users perceive things that are attractive as being easier to use and less frustrating.
Maybe it's true that yellow is just the best, and should be used in 99% of circumstances?
You are right, though. Many products don’t need more than that. But I fear that this will greatly impact design innovation and progress. We might get stuck in the current UI paradigm for a long time.
But I reckon nobody cares. Just let Claude decide and go with it... Sad state for UX designers/researchers.
Web Components were a bit too slow to take off, so the mental model of JSX has stuck with me, even if the ecosystem, with hooks and the various approaches to reactive state, is in many ways an inferior solution to a problem Smalltalk already solved back in the day.
90+% of attempts at making a truly unique or mind-blowing UI produce a mind-blowingly bad UI. For 0.5 seconds of wow factor, you've added substantial unnecessary friction. Outside of art projects where that wow factor is the point, it really should not be attempted, most certainly not by someone without the appropriate skillset.
The old skool artisanal weirdness was not a purposeful stylistic choice, it was a bunch of people trying to do the best they could with crappy tools. There may be some je ne sais quoi which is lost with the shift to mass adoption, but the reason for the mass adoption of these particular design trends was that they were objectively superior.
And people sometimes overestimate their designs because beauty is subjective, and because all children are beautiful in the eyes of their parent.
Also, there’s a reason why the mass adopted plastic, monobloc, stackable chair design is worldwide common and is studied as a cornerstone of design.
Which is exactly what I want. Do you have any idea how hard it is to get a competent UI?
Why do people celebrate consistency and uniformity in desktop apps, wanting to crucify developers for not following platform idioms and guidelines... and then suddenly want things that are "truly unique" or "mind-blowing" or "artisanal weirdness" when it comes to a web app?
A competent UI with little effort is a godsend.
This is exactly what I want in a UI.
At risk of shifting the goalposts on what I originally said, unique here isn't meant to mean quirky or weird but, simply, something that hasn't been done before, or hasn't been done as effectively.
This is the challenge for B2B startups that are switching to LLM-based development and are trying to offer more than the reselling of cloud compute at a markup with specialised functionality, because AI turns SaaS into a sexy version of MS Access.
The hilarious thing is that I would be willing to bet than in a decade, it's STILL a massive shitshow in enterprise. That's because the problem with enterprise software is not that good design is all that difficult to pull off (it just requires caring!) It's that the people making enterprise software have terrible taste and can't even see (I am convinced) that the thing they built is ugly and hard-to-use.
Generally the issue with enterprise is that its designed to appeal to the stakeholders who will make the purchasing decision, not the person who is actually going to use it. The people making it may have great taste and know damn well what they could do to make it more usable, but if a clean and easy tool doesn't match someone's preconceived notion of what the purchaser thinks the tool ought to look like then it's not going to fly.
Or “2000s aesthetic” for something before Web 2.0 (although you’ll get a generic 2000s aesthetic unless you provide more detail).
I guess post-IPO, after the insiders cash out once the lockup period ends, it's irrelevant.
I can slap something together with Claude over a few evenings to fill a gap on tooling, or I can wrestle with Jira and CI and all that to tie things together with their own integrations.
No thanks, I'll just take the API keys and build on top, to my exact specifications, and the interface will be passable even if it needs a lot of polish. Tailwind has worked wonders for that.
Sure, some prototypes will be spun up more quickly. But if this was a real problem large companies faced it would have been solved in software already.
Good for everybody who isn't a large company then?
The shelf-life of unique and mindblowing has reduced to a week (being generous) before it's copied by slop artists looking for a resume booster or funding, and months tops before it's part of training data for everyone. Unless you find it in that small time window everything will seem homogenous.
It could just be a systemic result; unless you deliberately take the lonely road to parts of the internet where other people aren't, you will not see unique and mind blowing things. Which by definition you can't source from a place that has a lot of users, like social media or popular forums.
So it's competent, for sure, but that is damning it with faint praise.
In a direction where the AI model basically serves you everything live. No sites, no front end, just databases and model embodying them.
I mean why even code anything in the future where it is cheap and fast enough to just come up with everything each time based on each user need.
I am not saying it's good, but it's lazy. And if one thing is certain, it's that laziness prevails. Some even mistake it for progress.
But then, are human programming languages really the most optimal way for an AI to steer the silicon? Some kind of bare AI OS with kernel, drivers, and in the middle a fat specialised AI ASIC to orchestrate everything.
But does it still exist? Even without AI, everyone is utilizing the same CSS frameworks, same libraries and templates... design is pretty much boring these days. CSS Zen Garden, anyone?
AI companies: "good news, everyone! We've automated all those steps so they're even easier to generate!"
I think the same thing is happening in physical construction. Ah, I see you've designed a new box with four primary color tones and slightly offset vertical lines to break up the windows.
Obviously a product of its time and laid out similar to how it'd be printed in a magazine (the characters slightly overflowing the borders and such like). Accessibility wasn't a thing back then.
If a different company did that in 2018 you'd be seeing the G-man in corporate memphis, downloading about 500mb of assets, with 178 separate ad trackers in a consent popup, and then you'd be scrolling like mad to get through all sorts of animations that hijack the scrollbar, in order to get to any useful info.
[0] https://www.reddit.com/r/HalfLife/comments/10sx4ve/what_stea...
You might just as well bemoan the homogeneity of Windows 95 apps. All those gray buttons in the bottom right of windows.
This comment is just a rehash of the increasingly outdated assertion that LLMs can't possibly exhibit any creativity -- and it's incorrect.
If you're yearning for "old skool artisanal weirdness of yore", look up the trend on Twitter a month or two ago of people asking Claude to make YTPs. They ended up very weird and artisanal in a way distinct from how any human would do it.
I think it's because Steve Jobs killed Flash.
This is most every corporate website.
Look up in an old city, look at the facades of the buildings. They have quirks, uniqueness, it makes the city almost a living thing. Every time we shave off another edge we lose that. Nevermind the fact that shoehorning everything into the same patterns is actually an antipattern and very good paradigms have been invented after the 90s.
It's not perfect, but I'd rather have a bit of a mess than boring emptiness.
Before these tools, when a client wanted a specific section built, we'd spend hours hunting references across the web. The output always ended up feeling like a mesh of 2-3 sites, never fully unique. Then we'd burn more time explaining the intent to the client's designers and devs, usually with multiple rounds because words don't convey layout well.
Now we throw a quick mockup together in Claude or Lovable and send it. The designer gets the idea in 30 seconds instead of a 45-minute call, then pushes it further with their own taste and the client's branding.
It's not replacing designers. Most clients don't know what they want until they see it. These tools collapse that feedback loop from weeks to minutes, so the designer actually spends their time on the parts that need human taste, not on decoding a vague brief.
This is just a really cool way of building.
I'm impressed. I tried Google Stitch but it was slow and useless. Sad, because Gemini has a pretty good creative flair, ironically enough.
But jeez, is it buggy, slow and unintuitive at times.
A complete shift from Google's old engineering culture of high quality - they seem to be prioritizing shipping quickly over stability.
I can get LLMs to write most CSS I need by treating it like a slot machine and pulling the handle till it spits out what I need; this doesn't cause me to learn CSS at all.
This allows me to focus my attention on important learning endeavors, things I actually want to learn and are not forced to simply because a vendor was sloppy and introduced a bug in v3.4.1.3.
LLMs excel when you can give them a lot of relevant context and they behave like an intelligent search function.
The real fun of programming is when it becomes a vector for modeling something, communicating that model to others, and talking about that model with others. That is what programming is, modeling. There's a domain you're operating within. Programming is a language you use to talk about part of it. It's annoying when a distracting and unessential detail derails this conversation.
Pure vibe coding is lazy, but I see no problem with AI assistants. They're not a difference in kind, but of degree. No one argues that we should throw away type checking, because it reduces the cognitive load needed to infer the types of expressions in dynamic languages in your head. The reduction in wasteful cognitive load is precisely the point.
Quoting Aristotle's Politics, "all paid employments [..] absorb and degrade the mind". There's a scale, arguably. There are intellectual activities that are more worthy and better elevate the mind, and there are those that absorb its attention, mold it according to base concerns, drag it into triviality, and take time away from higher pursuits.
> It's annoying when a distracting and unessential detail derails this conversation
There are no such details.
The model (the program) and the simulation (the process) are intrinsically linked, as the latter is what gives the former its semantics. The simulation apparatus may be noisy (when its own model blends into ours), but corrective and transformative models exist (abstraction).
> No one argues that we should throw away type checking,…
That’s not a good comparison. Type checking helps with cognitive load in verifying correctness, but it does increase it, when you’re not sure of the final shape of the solution. It’s a bit like Pen vs Pencil in drawing. Pen is more durable and cleaner, while Pencil feels more adventurous.
As long as you can pattern match to get a solution, an LLM can help you, but that does require having encountered the pattern before in order to describe it. It can remove tediousness, but any creative usage is problematic, as it has no restraints.
Are you really going to do that though? The whole point of using AI for coding is to crank shit out as fast as possible. If you’re gonna stop and try to “learn” everything, why not take that approach to begin with? You’re fooling yourself if you think “ok, give me the answer first then teach me” is the same as learning and being able to figure out the answer yourself.
It takes a lot of cajoling to get an LLM to produce a result I want to use. It takes no cajoling for me to do it myself.
The only time "AI" helps is in domains that I am unfamiliar with, and even then it's more miss than hit.
Quality is a different issue, sure.
I don’t even bother. Most of my use cases have been when I’m sure I’ve done the same type of work before (tests, crud query,…). I describe the structure of the code and let it replicate the pattern.
For any fundamental alteration, I bring out my vim/emacs-fu. But after a while, you start to have good abstractions, and you spend your time more on thinking than on coding (most solutions are a few lines of codes).
"Generative AI" isn't just an adjective applied to a noun, it's a specific marketing term that's used as the collective category for language models and image/video model -- things which "generate" content.
What I assume you mean is "I think <term> is misleading, and would prefer to make a distinction".
But how you actually phrased it reads as "<term> doesn't mean <accepted definition of the term>, but rather <definition I made up which contains only the subset of the original definition I dislike>. What you mean is <term made up on the spot to distinguish the 'good' subset of the accepted definition>"
I see this all the time in politics, and it muddies the discussion so much because you can't have a coherent conversation. (And AI is very much a political topic these days.) It's the illusion of nuance -- which actually just serves as an excuse to avoid engaging with the nuance that actually exists in the real category. (Research AI is generative AI; they are not cleanly separable categories which you can define without artificial/external distinctions.)
It is a truism that the majority of effort and time a software dev spends is allocated toward boilerplate, plumbing, and other tedious and intellectually uninteresting drudgery. LLMs can alleviate much of that and, if used wisely, function as a tool for aiding the understanding of principles, which is ultimately what knowledge concerns, rather than absorbing the mind in ephemeral and essentially arbitrary fluff. In fact, the occupational hazard is that you'll become so absorbed in some bit of minutia, you'll forget the context you were operating in. You'll forget what the point of it all was.
Life is short. While knowing how to calculate mentally and/or with pen and paper is good for mastering principles and basic facility (the same is true of programming, btw), no one is clamoring to go back to the days before the calculator. There's a reason physicists would outsource the numerical bullshit to teams of human computers.
Actually there’s some interesting problems here because a huge part of music marketing is in a visual medium, like a poster or album cover. It is literally impossible to include a clip of your sound.
So you should be really interested in how to capture the “vibe” of your music in a visual medium.
But if you don’t care at all whether ppl actually listen to your music, then yeah you don’t have to deep dive.
The term you are looking for is 'aesthetic'.
And indeed.. music is far more than just a sound or whatever simple thing one tries to boil it down to.
I'm convinced many (especially here) really dislike that - they want it to just be a case of typing a few things into an LLM and bam... there you go. They have zero clue about the nature of the economy, what's really going on in various markets, etc.
When you deploy AI to build something, you wind up doing the work that the AI itself can't do. Holding large amounts of context, maintaining a vision, writing apis and defining interfaces. Alongside like, project management. How much time is spent on features vs refactoring vs testing.
If only all great works could just be an X post!
What if you don’t give a shit about design and it’s a means to an end for a project that involves something different that you do care about?
For example, I think design, as they mean it, could be described as "how to get that thing we care about". The correct amount of design depends on how exacting the outcome and outputs needs to be across different dimensions (how fast, how accurate, how easy to interpret, how easy to utilize as an input for some other system). For generalized things where there's not exacting standards for that, AI works well. For systems with exacting standards along one or more of those aspects, the process of design allows for the needed control and accuracy as the person or people doing the work are in a constant feedback loop and can dial in to what's needed. If you give up control of the inside of that loop, you lose the fine grained control required for even knowing how far you are away from theoretical maximums for those aspects.
Thank you for so succinctly demonstrating the problem with using AI for everything. You used to have to either care enough to do the design yourself or find someone who cared and specialized in that to do it for you. Now you quickly and cheaply fill in the parts you don't personally care about with sawdust, and as this becomes normalized you deprive yourself and others from discovering that they care about the design part. You'll ship your thing now, and it'll be fine. The damage is delayed and externalized.
I won't advocate against use of new technology to make yourself more productive, but it's important to at least understand what you're losing.
Or worse, you gave up because you did not have the time to learn the skill or the money to hire somebody. In this case, your dream just died.
If Grok didn't create the fake nudes users were dreaming about but couldn't create with Photoshop,
would my headstone crumble down?
As "intel" dashboards stay a dream,
the Hollywood wind's a howl
As photos are just still
The Kremlin's falling
As Einstein is not wrong
Radio 4 is static
You think most UI/UX designers, or the artists creating slop for content marketing spam factories for the past decades, cared? Some, maybe. Most probably had higher ambitions, but are doing what actually pays their bills.
It's similar to software developers. Most of those being paid to code couldn't care less, they're in there for the fat paycheck; everyone else mostly complains the work is boring or dumb (or worse), but once you have those skills, it makes no economic sense to switch careers (unless, of course, you're into management, or into playing the entrepreneurship roulette).
The paychecks weren’t great. Everyone was offering to pay designers with “exposure”. If they didn’t innately care about the field they would have done something more lucrative.
The parent's point is that it doesn't work that way. The point is self-reinforcing: design is not a thing, it's the earned scars from the process. Fine to disagree, but that reinforces the point.
Like, maybe I just want to make an interface to configure my homemade espresso doohickey. Do I have to wear a turtleneck and read Christopher Alexander now? I just wanted a couple of buttons and some sliders.
We don't all have to be experts in everything, some people just need a means to an end, and that's ok. I won't like the wave of slop that's coming, but the antidote certainly isn't this.
It's true that design theory writing is annoyingly verbose and intangible, but that doesn't make it wrong. Give someone a concrete language spec and they will not really know how it feels to use the language, and even once they do experience its use they will not be able to explain that feeling using the language spec. Invariably the language will tend to become intangible and likely very verbose.
But to answer your question: no, it's of course perfectly serviceable to just copy the interface others have created, and if the needs aren't exactly the same you can just put up with the inevitable discomfort from where the original doesn't translate into the copy.
I'm an engineer who also loves design. I've read a lot of the books (including the one referenced), I know some concepts and terminology, and I understand the general process — but I'll never be a professional designer. My knowledge is limited, and I find most design tools so complex they actually get in the way of problem exploration and creativity.
For people like me, this tool removes the friction that actually prevents me from focusing on the valuable parts of the design process. I can more easily discover and learn new concepts, and ultimately spend more time being creative and exploring the problem space.
A whiteboard or a wireframing software would be better, because it lets you focus first on the interactive part. And once that’s solved, the visual part is easier.
This speed and variation wins for me. But yes, without a designer's eye, it can get lost in slop design too.
To me, the value of Gen AI is as an accelerant (not a slop factory) for ideation and solutions, not a replacement for the human owning the process... but laziness usually wins.
when people wax philosophical/poetical about what is essentially capital production already i'm always so perplexed - do you not realize that you're not doing art/you're not an artisan? your labor is always actively being transformed into a product sold on a market. there are no "marvelous human experiences", there is only production and consumption.
> They’ll be impoverished and confuse output with agency
ironic.
The first time I used Mac OS X, circa 2004-2005, I was blown away by the design and how they managed to expose the power of the underlying Unix-ish kernel without making it hurt for people who didn't want that experience. My SO couldn't have cared less about Terminal.app, but loved the UI. I also loved the UI and appreciated how they took the time to integrate CLI tools with it.
I would say it was a marvelous human experience _for me_.
Sure it was the Apple engineers' and designers' labor transformed into a product, but it was a fucking great product and something that I'm sure those teams were very proud of. The same was true with the iPod and the iPhone.
I work on niche products, so I've never done something as widely appreciated as those examples, but on the products I've worked on, I can easily say that I really enjoy making things that other people want to use, even if it's just an internal tool. I also enjoy getting paid for my labor. I've found that this is often a win-win situation.
Work doesn't have to be exploitive. Products don't have to exploit their users.
Viewing everything through the lens of production and consumption is like viewing the whole world as a big constraint optimization problem: (1) you end up torturing the meaning of words to fit your preconceived ideas, and (2) by doing so you miss hearing what other people are saying.
...
> Work doesn't have to be exploitive. Products don't have to exploit their users.
bruh do people have any idea what they're writing as they write it? you're talking about "work doesn't have to be [exploitative]" in the same breath as Apple, which is the third largest market cap company in the world and which is well known for exploiting child labor to produce its products. like, has this comment "jumped the shark"?
> Viewing everything through the lens of production and consumption
i don't view everything through any lens - i view work through the lens of work (and therefore production/consumption). i very clearly delineated between this lens and at least one other lens (art).
Ultimately the exploitative pyramid always terminates in a peak, and the guys working up there can for sure be having a hecking great time doing their jobs.
just repeating the same mistake as op: sadness/happiness is completely outside the scope here. these are aspects of a job. "design" explicitly relates to products, not art, and wondering about the sadness/happiness of a job is like wondering about the marketability of a piece of art: it's completely beside the point!
1. Good design is innovative
2. Good design makes a product useful
3. Good design is aesthetic
4. Good design makes a product understandable
5. Good design is unobtrusive
6. Good design is honest
7. Good design is long-lasting
8. Good design is thorough down to the last detail
9. Good design is environmentally friendly
10. Good design is as little design as possible
Generative AI just tries to predict based on its training data.
A product can be a piece of art, and design can and does in practice often go hand in hand with art; practically, most designers also practice the artistic role in addition to the utilitarian one. Whether you would want to group art within design is a matter of definitions.
Of course, but that's well within the scope of the whole paradigm (as opposed to how it was originally phrased, in relation to a loss of "marvelous human experiences"): if I use a bad tool to solve my customers' problems in an unsatisfactory way, then my customers will no longer be my customers (assuming the all-knowing guiding hand of the free market). So there's no new observation whatsoever in the OP.
Anyways, this is 100% a shot at Figma, but also catching Lovable in the crossfire. If anybody from Anthropic is reading this, if you keep developing this with features in Figma and other design tools, you'll have a major hit on your hands.
Figma is targeted towards designers who create thoughtful design systems and cohesive UIs and who don't code, while this is targeted towards vibe coders who can't design. Two different circles that intersect to some level.
But like you said, if Anthropic adds tools like Figma's, only then can they take customers from Figma IMO.
The challenge is that this sets an expectation of what "design" is, de-valuing the former and shifting us culturally towards the latter and a space where "design" is seen as a subjective visual exercise with little intrinsic value.
But for the other 95% of people (like me), being able to just say "ok can you make it look more modern" and have 4 variants in 5 mins is a game changer. Figma will lose users like me.
But then again I was never a "designer" – more a builder.
The Anthropic video on that page at 0:53 literally shows them clicking a "knobs" button and adjusting the pixel CSS value.
I know it's not exactly the same ... but it has that functionality to a degree.
I've never paid for a figma seat. A couple of employers have so that I can collaborate with designers in the product, but I don't think this changes that.
In an era where it's cheaper and more common to end up at that undifferentiated state, the ability for companies to make their products go above and beyond it is more valuable, not less.
I see this across the board with AI. It lowers the bar to get to passable, but as slop fills the internet we're already seeing people place more value in good products, good writing, good art, thoughtful code architecture, etc. Everyone and their cousin's uber driver is vibe coding a SaaS startup no one's going to pay for right now.
If you are talking about a consumer product, one of these is not like the others.
Everyone who creates something is a “builder”, that term doesn’t excuse someone from not putting effort in. I don’t care if you aren’t a designer, it’s about the effort you put into your work :)
You also clearly misread what I said. I didn't say I spent 5 minutes prompting an LLM. I said the ability to get FEEDBACK (a revision) in 5 minutes is amazing. And I stand by that. It allows me to do 20 more revisions and do in a couple of hours what would take two weeks.
You seem to be romanticizing the concept of grunt work – that for something to have value or be of good quality, you have to put in some sort of minimum amount of time on it, and it has to be tedious. It's the same concept that nobody can make a good quality piece of furniture unless they used a hand saw and spoke sweet nothings to the tree before it was cut.
There are ways to do things quicker while preserving quality. I had already left a caveat saying that for the 5% of people that really want to push web design forward, totally, go ahead. But for the rest of us (including those of us who have lived and breathed code and engineering principles for decades), these tools are phenomenal for iterating quickly.
Anyway, the term builder is more about separating the goals from a vanilla "programmer" - even though i've programmed my whole life, it's always been in service of an outcome. And the outcome is almost never "good code for the sake of good code" - it has to serve a real outcome in the real world.
By the way, lots of good designers are also using coding agents now, so you can keep romanticizing grunt work while most of the market moves on.
Perhaps this phrasing is what invited the interpretation you seem to be annoyed with.
There is not much to gain by suggesting everyone is simply bad faith.
I think you, like the other person, are assuming that 5 minutes = low quality. Instead, think "5 mins means you can make 8-10 iterations in an hour" or "5 minutes making the front end look pretty good means I can spend more time on the backend".
There are many good faith ways to interpret this.
No one is assuming the output is strictly low quality, from what I can tell. I am personally evaluating the method you described, which suggested you are championing a sloppy but highly iterative design flow over a seasoned, curated suite for defining design. I don't see any reason to assume the other comment was doing otherwise.
You made a broad generalized strong claim and were met with the opposing force, which is actually acting from their own understanding of good faith, believe it or not (see how this analysis is void of meaning?).
This overlap has been widening incredibly quickly. Lots of designers are now writing code with the help of Cursor, Claude Code, etc.
Even if you believe "real designers" won't ever use this product, it's not hard to see how a low barrier-of-entry tool could affect Figma's bottom line. Slowing down Figma's adoption among the new wave of entry-level designers who don't already have muscle memory would not surprise me at all.
Not convinced Figma cares about traditional design craft anymore.
These areas obviously tie into engineering very closely, but the thinking that goes into them happens at the design stage, at a lower cost than starting with engineering. AI models suck at getting every facet of this process right, because designers are achieving a balance between branding, usability, standards, taste, and differentiation -- the exact opposite of a model trained to reach for the most average outputs.
Had they not included support for it, where would they be now? I'd wager a critical mass would be screeching to High Heaven for integrations, seeing as a Figma document is effectively a config file that can be translated to real code.
> The folks at Wall Street do not understand
Not entirely but I would use this and not Figma. I am passionate about system design not visual design so I don’t want to waste time in figma.
How many such people does the world need? Probably less than 1,000. Not a very big market for Figma.
But for me, I will never use it again.
He should probably go and let someone else take the reins.
https://stitch.withgoogle.com/
I'm now pasting all my Stitch output into Claude Design to see what happens.
edit: First impressions are great. It asked me a ton of really great questions about my design aspirations and direction, which were incredibly relevant and insightful. Waiting to see what it makes.
edit2: It did astonishingly well with the first design pass. Really outstanding. This is probably going to be my primary prototyping tool until the Next Best Thing(tm) drops in a few weeks.
They're down 80% over the last year. Ouch.
Figma actually put the work in to make a great product that performs well and offers anything you could imagine to design just about anything you need, with AI integrations and deep manual editing to sweat the details.
- The best design is original, groundbreaking and often counterintuitive. An AI model is incapable of that, it's uninspired, it will absolutely converge to the norm and homogeneity (you see it everywhere now, just scroll on ShowHN and take a look at the UIs) and produce the safest design that appeals to its understanding of the ideal user.
- Good designers will reject this; they prefer to be hands-on and draw from multiple sources of inspiration, which is what Figma boards and Canva are good for, along with cross-collaboration. If you've seen how quickly a great design engineer can prototype, you'll know that the "speed" they advertise in this video is not worth the tradeoff.
- Creatives typically have a very very very high aversion to AI.
- Non-designers will not see a purpose for this tool; basic design can already be done through Claude Code and Claude.ai. I fail to see what this could offer unless they leverage a model that is more creative and unique by default (you cannot prompt/context/harness-engineer creativity; believe me, I've tried).
- Design is a lot more than just UI. Tools like this ignore so many other important aspects like: motion, typography, images, weight, whitespace, sound, feel.
Designing a user interface involves thousands of small decisions. When trading off pros/cons for each of these decisions, in 99% of the cases the right answer is "optimize for familiarity".
That’s why Android and iOS look the same, and why the small differences between them are where contention happens.
If you adopt existing patterns, your users would be instantly familiar with your app, and the design will not get in their way.
HOWEVER, that familiarity is only a virtue because someone, once, deviated hard enough that their deviation became the new familiar. AI can only optimise toward the current snapshot of "familiar"; it cannot produce the next one. If designers outsource all their thinking to a model, even in tactful design, we would never get groundbreaking design concepts like "pull to refresh" or the command palette.
That’s not necessarily what happened, though. Apple innovated not out of sheer daring but because they also had the best metaphysical paradigm for GUIs, one that people could just intuitively grasp. There was a structural correctness to their approach, underlying all the things that we find visually appealing. In the beginning, Google dared and deviated hard from Apple’s design language to establish their own unique identity, but anyone working in the mobile space will have noticed that Android coalesced into roughly the same patterns over time because of that structural correctness.
Which needs to be done intentionally in context, not homogeneously as a rapid output of a generative tool.
If you want to make a GUI, it should be familiar. Extremely familiar. It shouldn't invent new ways to interact most of the time.
It is well-known that "intuitive" in UX almost always means "what I'm used to". If you're regularly "innovating" in UI design, you may be making the product harder to use, maybe much harder to use.
It certainly isn't unheard of for new ways to interact with computers to be better than the old, but they are usually tied to new physical aspects of our tools: Touchscreens needed new ways to interact, and maybe there's still some room for creativity there, but not much. The mouse obviously required innovative ideas for several years. But, also, the odds of your wacky new idea being the right way to change how people interact with computers are pretty low, unless you're working at FAANG and have a UX research team and budget to test it.
You can get creative in how it looks, but you cannot get creative in how it works.
Innovation comes from the ways people differentiate, without straying too far from the tried-and-true patterns. It's the tiny decisions that situate UI elements and yes, reinvent the wheel sometimes, that can tip users over to whatever you're building because you did it better, or in a way "most" (the average) never thought of.
If people aren't creative in how it works, then really they're all just making the same, boring products, without truly competing against anyone in a meaningful way in the problem space. Visual appeal isn't a sole differentiator.
"Good designers will reject this."
^ Famous last words.
I will very likely be wrong on the second point.
And no, it doesn't just add ARIA to everything, as is so typical of poor practitioners.
I'm arguing about invention. It is extremely unlikely that AI will be the one to invent the next accessibility paradigm, because that requires deviating from the training distribution, which it CAN'T DO.
I'm also arguing that this homogeneity in design will lead to an atrophy in inventive, unique and original thinking.
What is it about our own architecture that lets us innovate beyond our training distribution?
You’re talking about art, not design.
I have no idea how everything will play out, but this sounds a lot like the people saying "good programmers will reject this" six months ago.
Quite apart from anything else, it ignores the fact that—particularly within large organisations—designers (and programmers) frequently have very little say in the matter.
Not everyone is looking for unique design, 70% of the web is still using Wordpress. I would say majority prefer familiarity and appreciate uniqueness.
If you want to talk in absolutes, I'd say the best design is the one that results in the desired behaviour of your audience.
most of those "breakthroughs" were just constraint hacks. no room for a reload button. no room for another menu.
enterprise buyers don't pay for counterintuitive. they pay so the new hire finds save without training.
Until we have embodied AI's with eyes and hands that provide good enough approximations, the aspect of design bottlenecked on human experience will stay bottlenecked.
Overall, after being laid off in January, and after a 17-year UX research/design/dev career, I'm starting school in my early 50s to change careers.
I think more expressive UIs are the future, but I disagree with this sort of thing being accomplished with a non-deterministic tool such as AI generating UIs; you are throwing stability and consistency, along with familiarity, out the window.
The idea of tools being almost UI-less, composable, and modular has been a "dream" since Xerox PARC. See, for example, the book "The Humane Interface", which, ahead of its time, also outlined reasons why such generative interfaces would be a bad idea, especially at such a large scale.
AI can potentially relieve some friction with that paradigm but definitely not in that way or even that extent.
Even the most deluded AI bulls don't say that AI is even meant to replace the best that humanity has to offer
This is for non-designers to crank out slop with less effort. They can still be swayed by all the shiny knobs to feel in control.
While Great design breaks the mould, Very Good design is about surfacing the most expected outcomes for any action which reduces friction and lets people get work done. And this generation of Generative tools is very good at identifying the most common/most expected response to a prompt.
I use it all day every day with Claude Code. I sometimes wonder whether, beyond code, this has had the biggest impact on my day-to-day productivity: previously I either had to make do with semi-bad-looking reports or have a designer design them (which is slow).
Sort of feel sorry for Figma in a way though, given all the "partnerships" (highlighting their MCPs) and case studies they've done with Anthropic and then they release this. I note there isn't a testimonial from them this time.
I'm surprised how poorly Figma have used "AI" in general - given they were the "gold standard" in taking emerging technologies (WASM etc) and making an incredible product. The Figma Make thing was incredibly underwhelming, I managed to extract the system prompt out and it's basically just Gemini 3 Pro with a design prompt. Perhaps the original team has left?
They are extremely exposed IMO. While all the UI/UX designers will continue using it for the foreseeable future, I strongly suspect a lot of their (A/M)RR was coming from extra seats for PMs, developers, etc. to view, export, and comment on the files, not core designer usage. I think a lot of that just won't happen in Figma as much.
Their seat system has always been brutal: it's extremely easy to have the seats balloon if you're not careful, and if they're yearly there is only a 30-day window a year when you can cancel them, when the banner to do so appears.
Nope. Figma Make first renders an HTML/React app with your design. Then you could convert to a Figma design file if you have a pro plan. Extremely underwhelming.
There's hardly any difference between using Figma and just designing it with Codex and Claude Code. And now, Claude Design seems to get it right.
* Massive token usage, some small tasks burned through $50 of credits and did not offer $50 of value.
* Terrible at logo work. Comically bad. This is something that is "hard" so it could add great value if it could deliver.
* Repeatedly forgot prior feedback - when iterating it would re-implement prior iterations after being told why we didn't want that result which made for a very frustrating UX.
* Prone to adding visual clutter - kept adding extra elements that look "pretty" but add no value to the user.
* Seems better at "pretty" vs user focused / UX.
* Did not do a good job at using my existing design / UI library
* REALLY wanted to start from scratch. Could not be coaxed into designing part of an application, it wanted to redesign the whole thing.
OK but what we really want to know, what's it like when it comes to drawing pelicans riding on bicycles?
Anyone remember Google's social media platform??? Google Plus?
This is a good era to be in! It's the era of product experimentation.
As long as you realize that 90% of the products will not be supported long term if they don't contribute to bottom-line revenue, just appreciate it for what it is: a bunch of smart people trying to create useful products.
Just don't be surprised if Anthropic goes the Google route, which is shutting down the majority of the products that are too small / not successful enough to impact their revenue.
Not every Google product release used Google search. Some of them were completely outside of Google's domain.
There's no reason to believe Anthropic will stop caring about this product--they're not Google [1] after all.
> It really feels like Anthropic's product area is extremely overextended at this point.
I don't think so. They have one core product: the Claude model; they're enabling different ways of accessing it. Claude Code for developers, Cowork for general business tasks, and chat for consumers.
This is their first graphic design product, but it fits nicely because once you create a prototype, you can hand it over to Claude Code to make the website, mobile app, or whatever.
The advantage Anthropic has is their ecosystem. A Claude user will be way more productive using Design because all of their context is with Claude; other AI tools don't "know you" the way Claude does. Claude already knows your style and your preferences; it's much more likely to create designs you'd like.
When you go to an AI you don’t normally use, you essentially have to start from scratch.
[1]: https://killedbygoogle.com
Keeping the hype alive through to IPO is critical now.
It's also just a beautiful product. The interaction model, styling and integrations via exporting is all super thoughtful
I use Opus to generate Typst for that and I'm already pretty happy with that approach. It gives me a degree of control I do not have with other methods, because
1. Typst is really powerful
2. Opus is really good at surgically modifying Typst
I basically never look at the Typst code for this. Telling Opus visually what I want changed is usually good enough.
It can't render all your math in exactly the same way. If you need it to do that, it may not be for you.
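The describe-edit-recompile loop above can be sketched in a few lines of Python. This is just an illustration of the loop's structure, not Anthropic's tooling: the model call is stubbed out (in the real workflow it would send the current Typst source plus the visual instruction to Opus and get the full revised source back), and the document snippet and instructions are made up.

```python
def ask_opus_to_edit(source: str, instruction: str) -> str:
    """Stub for the model call. In the real workflow this would send the
    current Typst source and a visual instruction to the model and return
    revised source; here it just appends a marker comment."""
    return source + f"// edited per: {instruction}\n"

def revision_loop(instructions: list[str]) -> str:
    # Stand-in for reading an existing .typ file from disk.
    source = "#set page(margin: 2cm)\n= Report\n"
    for instruction in instructions:
        source = ask_opus_to_edit(source, instruction)
        # After each pass you would recompile and eyeball the PDF, e.g.:
        #   typst compile report.typ
    return source

final = revision_loop(["shrink the headings", "use a navy accent color"])
```

The point of the structure is that you never hand-edit the Typst yourself: you only describe visual changes and recompile.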
In my example, I expected it to create UI elements for a business application / expert system. And it did fine. In fact, I believe it's perfect for creating average, functional designs. It's a better way to test variations of UIs for expert systems. But I want to know what the actual costs are.
Seems like Claude is actually building something like a layered Figma wireframe to which you can make fine-grained adjustments afterwards (e.g. adjusting font size).
Interesting that Canva provided a quote of support. I'm not familiar with the differentiation, but it seems like this will directly siphon customers from Canva, right?
Our mission is to empower the world to design, and we believe in making Canva available in every place where ideas begin. Being the most interoperable platform creates mutually better products, more value for community, and more value and growth for our company.
We've been working closely with Anthropic for many years, and we see this as complementary. Our MCP, integrations, and plugins have already introduced millions of new users to the full power of Canva, and we're excited to continue doubling down here.
Why would we want to move away from hard fought UX design lessons? Dynamic and fluid UX is infuriating.
I wonder what other features they're cooking right now.
Stuff like that happened even before the invention of the telephone; humans within the same geographic location are even more predictable, so surely this shouldn't come as a surprise.
Cool pattern! Sure, maybe there is something there.** And/or maybe our brain is doing "conspiracy theorizing lite". It's all on the same spectrum -- the same flawed cognitive machinery trying to operate in a weird modern world quite different from the one we came from.
A better way: write out your favorite hypothesis. But don't stop there... keep going... write out many hypotheses. Then find ways to test them. To tap into our best selves, I recommend The Scout Mindset (book). Here is an infographic summary of part of it: https://imgur.com/qN31PX8
Probably not a better way: float one's first gut feelings to the Internet phrased as, e.g., "the better question", and feed empty calories to our pattern-craving brains. There is a reason some of our brain functions are considered higher-order.
* Maybe I'm overstating this. Let me know? I want to read Rationality and the Reflective Mind by Keith Stanovich (https://academic.oup.com/book/5930) as a counterpoint to the usual suspects (such as Tversky & Kahneman)
** But what is there. What kind of pattern? What kind(s) of causation could be at work? See Judea Pearl's "ladder of causation". Nice write-up here: https://samuel-book.github.io/causal_inference_notebook/pear...
Previous comment with the prior 3: https://news.ycombinator.com/item?id=47794419
Is that globe made from whole cloth, or is there a bespoke "telecom globe" widget that it dropped in? Could I ask for a mock-up of molecules with the same fidelity of knobs, down to nucleus size and such?
I actually think I would prefer the more boring "it composes well known widgets" because then there's a chance I could just use this to generate a presentation layer and integrate it instead of new blobs of code I need to essentially reverse engineer or remake.
Anthropic has distribution on their side, and their engineers are excellent (I have run with them across the ggb in the past, and they regularly work 12-plus-hour days).
I think what actually might be slowing them down is the public releases and pr lol, not ideas or execution
What a toxic workplace :/
And money aside, it is certainly one of the most exciting companies in the world to work for.
In the case of engineers and programmers, the amount of useful completed "work" has only a very weak correlation with the length of the workdays.
Good engineers or programmers will think anyway most of the time about the problems that they must currently solve, regardless whether they are in the office or at home or in any other place, and regardless whether to an external observer they appear to be "working" or they appear to do nothing.
Programmers who spend all day typing lines of code into a computer are more likely to be incompetent programmers, because otherwise they would have found ways to automate such activities that require continuous physical involvement, which make it impossible to allocate enough time for thinking about the right solution.
If whatever they do does not require true thinking, then that is the kind of job that can be done by AI agents.
What? In my experience, people who are good at their job can get it done in a reasonable amount of time. Working 12 hours a day is obsession, not competence. There can be overlap, but there is no causation.
Then look at the changelog of Claude Code. They are releasing daily.
/s
Lately it is more and more ShadCN as well.
TailwindCSS is a masterpiece, but ironically it doesn't really get its fair share while "built on top of TW" frameworks make money.
TailwindCSS is the final evolution: all the frameworks before it had their benefits but also massive limitations.
BEM anyone?
TW is really elegant: a new paradigm in its purest sense, brilliantly executed. No holes could be poked in it for years, and its extensibility shows how brilliant it is.
Bootstrap will always be held dearly but it was about browser quirks etc first. Important milestone but stands no chance against TW.
As a dev typing manually I loved Tailwind for sure; with LLMs, not so much, and Bootstrap in particular nails it best IME. And yeah, one can customize Bootstrap quickly to look however it should; just tell your frontier LLM of choice your wishes.
I don't ever recall hearing about the specific, tangible benefits Tailwind brings. Just a loose "it's faster" or "it's easier". It feels like one of those things in front-end development that are hype-driven rather than actually bringing any demonstrable benefit.
I'm not a big tailwind fan, but keeping styling in a separate file feels like a net negative
A little misleading, as I thought they would showcase different types of site design entirely.
Very interesting though
nature beckons its creatures to become whatever they must become to be useful in the greater living ecosystem (the loop)
when you can't be useful in the loop you get flushed out of existence
humans who can't be useful will also thus be buffered out of existence
I'm wondering how i can CONTINUE that in this design thing, can i import something? Because they show it the other way... you can start and edit, and then export to claude code.
Until then, I guess it's back to just using CC
> Import from anywhere. Start from a text prompt, upload images and documents (DOCX, PPTX, XLSX), or point Claude at your codebase. You can also use the web capture tool to grab elements directly from your website so prototypes look like the real product.
The amount of hype is too much for me; it's smoke and mirrors. A firm that knew it was on track to change the world (as much as they have been boasting) would not do this. Much like how Apple kept complete silence about the iPhone and then shocked the world.
As someone who's thinking about side project-ing a game, this caught my eye.
I am curious to explore what Claude can yolo in terms of a retro-style indie game... one whose audience might only be me.
What I found valuable is the design.md that was produced. It's a guide for building each component. So using these tools becomes akin to the PSD-to-HTML work we used to do. At least that's when I find them most effective.
> Page not found. Claude can help with many things, but finding this page isn’t one of them.
when logged in.
It is that in one case it is obvious for you that you don’t understand what you are looking at, and in another, for some reason, it is not.
Very true.
This app is pretty slick, this will funnel a huge number of customers away from Figma + Canva imo.
If code doesn't go this direction soon, I'd be surprised. PM builds a prototype with claude, or designer designs something in figma/canva - claude vibe codes 70% of the solution using your company's frameworks and design system, then hands it off to the developer who finishes it and productionizes it
Just like with code, AI will give you something to start with, but it's still going to require tweaking and manual input.
Edit: I see they have a new token tier 'Claude Design' so that's good.
Fed my Figma file to it to create a design system. And yeah, it looks like it, but it's still wayyy off at the same time.
```
<YOUR APP IDEA HERE>
----
To get your credentials and learn about InstantDB, fetch this URL:
https://instantdb.com/llm-rules/create-app/AGENTS.md
```
Every cURL request to AGENTS.md spins up a new backend and splices in the credentials.
I used it to build this multiplayer pelican game:
https://349b7470-3747-49e7-8ece-960a45b0b2da.claudeuserconte...
Disclaimer: I am the founder of InstantDB
I have been doing fine just instructing Claude code to use Tailwind and reference design documents
Combine that with the obvious hackernews manipulation that somehow gets each and every haphazard release instantly to the top, and you can see they're starting to feel some real heat.
But, I'll gladly admit that I am biased: I'm tired of seeing blatant astroturfing by a company whose main marketing tactic is to play on societal fear, while simultaneously employing safety theatre to look like the "good guys".
So take my opinion with a grain of salt :)
This stuff has changed a ton of what it means to exist in this whole “tech space”. The entire software development lifecycle got put into a stick blender and is in the process of getting mixed up in new and unusual ways.
It’s super cool. I haven’t been this excited about our industry since way back when the universe was just starting to get onto dialup and I grabbed my very first mp3 or wrote my first shitty program in VB or when AJAX was just entering the universe.
I think a lot of people forgot how fast shit changes in this industry and how learning new things is one of the most important skills to being successful. Everything changes all the time.
This is a tech site called hacker news. Where else would something like this be constantly discussed?
But they also desperately need users (and the data those users bring) to build their products, and the people who do have the power to manipulate this site are on their team. And it does get tiring to see a new Claude feature with like 1 comment and 25 points right at the top, multiple times in the last two weeks. Keeping their needs in mind, it has begun to look like manipulation, even if the above effect could explain it.
I'm glad the technology foments excitement for you. The idea that we can share intellectual processes broadly and implement them without the previously requisite skills will obviously change the world. That it could change the world for the better excites me too.
But many of us have our excitement tempered by the messaging, the questionable ethics behind how it has been done, and the fact that a real % of the space is basically driven by eschatological thinking. And it especially annoys me that Anthropic is the company whose messaging simultaneously encourages that eschatological thinking and preys upon the emotional reactions it creates.
I think it is increasingly clear--if you look at recent public sentiment and feel what is in the air--that they are a villain in this aspect. I don't think we want the people who believe they are building the future to be doing so both out of fear--of China--and gaining power through others' fear of what they are doing.
But villains can ultimately do good in the world, despite their villainy. Let's hope that is how it plays out.
I'm looking at this product and thinking: so...? Where's the vision?
Oh, there is none. It's about spraying and praying that the hype continues and feeding off analysts who don't really understand most of the firms whose valuations they spend all day studying.
The design problem to solve post-AI isn't this; it's how the space for thinking fits into all this: getting to the end result slower so human ideation can play out. This is just optimized for the first generic output plus tweaks.
>NEVER use generic AI-generated aesthetics like overused font families (Inter, Roboto, Arial, system fonts), cliched color schemes (particularly purple gradients on white backgrounds), predictable layouts and component patterns, and cookie-cutter design that lacks context-specific character.
> Interpret creatively and make unexpected choices that feel genuinely designed for the context. No design should be the same. Vary between light and dark themes, different fonts, different aesthetics. NEVER converge on common choices (Space Grotesk, for example) across generations.
[1] https://github.com/anthropics/claude-code/blob/main/plugins/...
That link is redirecting me to https://claude.ai/404, anyone else?
If you’re confused about why everyone is shipping their own canvas tool - this is what I wrote when I reviewed Stitch from Google in my newsletter:
“every SOTA (State-of-the-Art) model can already do this. Give it a prompt, it'll spit out an HTML design. Ask for 10 responsive variations, you'll get them. Stitch is a UI and context harness on top of Gemini, in the same way Claude Code is a harness on top of Claude's models. This means every AI lab will likely ship their own version, and they'll all interoperate because at the end of the day it's all HTML and markdown.”
More generally, this is a competition of where the product development work starts and lives. The business value will accrue to those who become a destination, not a plugin.
AI can now power a new sort of tool that supports the entire process (not just coding or just design). So there’s no reason for Anthropic or any other lab to give that up to another tool.
Deriving a bland average of creativity is the saddest thing you could do. I don’t even enjoy design and I find this offensive.
You talk to it, it consumes "usage", and no design system gets created.
Had a similar experience with their online code sessions.
All this slop generated code is so freaking bad. We'll get drowned in so much slop code that at some point only these garbage machines will be able to "work" with it.
> do you see our current html
● Claude Yes — I can see your html/ folder. It has 7 files:
files-list.html, math-result.html, rows-view.html, table-view.html, text-editor-doc-view.html, trending-topics.html, unique-result.html

Want me to open them up and take a look at what's in there? And what would you like to do — polish them, turn them into an interactive prototype, build something new on top, or something else?
> yeah in table view i'm thinking about how to deal with cells that may be big. i guess we want to do something different than excel or sheets in that auto expand rows with cells that contain big values. we can show like the first -- what does twitter show, 280? so we can show 280 chars and then [...]
● Let me take a look at the current table view to understand what we're working with.
Got it — I see the current table view. Right now every cell is clamped to max-width:220px with white-space:nowrap and text-overflow:ellipsis, so long values just get cut off at whatever ~30 chars fits the column. You're thinking about the opposite end: what if a cell genuinely holds 2,000 chars of text? Neither pure clip nor Excel-style row expansion is quite right.
Before I start sketching, a few quick questions so I don't guess wrong:
> Questions answered:
> - truncation_threshold: ~280 chars (new tweet)
> - expansion_ux: nothing right now -- it's not interactive
> - row_height_policy: Grow freely like Notion/Airtable
> - content_types: Not sure yet
> - indicator_style: [...] at the end of the text
> - newlines: Render them (row grows)
> - copy_affordance: no interaction right now
> - column_width: Fixed width, all columns equal
> - variation_count: 2 variations
> - aesthetic_direction: Same bones, but let you push it a bit
● Got it — so: non-interactive, rows can grow freely, 280-char cap with [...], newlines render, same bones but push it a bit, 2 variations. Let me sketch this out.
- Set up variations file with two table treatments (V1: Notion/Airtable-style free-grow with [...] cap; V2: pushed aesthetic variant)
- Wire both to the same sample data w/ long cells

Let me peek at a couple of the sibling files so the variations stay in-family with the rest of the widget set.
Good — now I have the visual family. The shared vocabulary across widgets:
- #f5f5f3 page, #fff card, #d0d0cc borders, #f0f0ec headers
- System font, 12–14px
- Subtle alternating row tints, no drop shadows, flat corners
- Color used sparingly; #666 for secondary text

My plan for the two variations (side-by-side in a canvas so you can compare on the same rows):
V1 — "Notion-ish, stay in family"
- Keep current palette, borders, zebra striping
- Rows grow vertically as needed; white-space: pre-wrap so newlines render
- Cap each cell at 280 chars; if longer, slice to 280 + […] inline
- Same fixed column widths, everything else identical

V2 — "Same bones, pushed"
- Tightens the grid (hairline borders instead of panels, more whitespace, denser typography)
- Column rail with sticky row numbers
- Soft fade on the bottom of ultra-tall cells instead of mid-text […] when content is really long (hybrid: char cap + visual)
- Monospaced tabular digits for row numbers; type column ⌾ / ▤ markers subtly hinting content type; column width slightly wider for breathing room

Let me build it.
Now let me build the two variations. I'll set up realistic sample data with a mix of short values, tweet-length values, long-long values, and ones with newlines, so the behaviors are visible.
Now the two table components.
Clean. Two variations on the canvas, both rendering the same 7 rows so you can compare directly:
- V1 — in-family. Your current widget, with white-space: pre-wrap so rows grow, newlines render, and a soft grey […] appended when a cell goes past 280 chars. Row 3's summary (double-long prose) and row 4's URL show the cap.
- V2 — same bones, pushed. Hairline borders instead of filled header panels, monospaced tabular row gutter, column letters as uppercased small-caps above the header names row, tighter typographic rhythm, cells > 280 → […] legend in the footer. Row 7 shows how newlines render naturally.

Both slice at the nearest word boundary when one is within ~40 chars of the 280 cap, otherwise hard-cut — so […] never lands mid-word.
A couple of things worth calling out, not changed:
- 280 is pretty generous when a row has multiple long cells side-by-side — row 3 is ~2 cells × 280 = a very tall row. If that ends up being painful in real usage, dropping to ~180 or adding a per-row max-height with fade might be worth a follow-up.
- Neither variation is interactive per your answer; there's no click-to-expand, hover, or copy. Easy to bolt on later.

Both variations are live on the canvas. Let me know which direction feels right and I'll fold it back into html/table-view.html.
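For what it's worth, the word-boundary slicing the transcript describes is easy to reproduce. Here's a rough sketch (my own reconstruction using the ~280-char cap and ~40-char boundary window mentioned in the transcript, not Claude's actual generated code):

```javascript
// Cap cell text at 280 chars. If a word boundary falls within ~40 chars
// of the cap, slice there; otherwise hard-cut, so very long unbroken
// tokens (like URLs) still truncate instead of escaping the cap.
const CAP = 280;
const BOUNDARY_WINDOW = 40;

function truncateCell(text, cap = CAP) {
  if (text.length <= cap) return text;
  const lastSpace = text.slice(0, cap).lastIndexOf(" ");
  // Honor the word boundary only if it's close enough to the cap.
  const cut = lastSpace >= cap - BOUNDARY_WINDOW ? lastSpace : cap;
  return text.slice(0, cut).trimEnd() + " […]";
}
```

Paired with `white-space: pre-wrap` on the cell, that would give the V1 behavior: rows grow freely and the […] indicator never lands mid-word.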
It's basically similar to their Artifacts system in a harness I guess
It feels like a lightly designed product that moves claude CLI to their backend, generates the HTMLs and renders them in browser on claude.ai website for you. Sure, it accepts your design system as an input from you or imports from your repo, but you could feed the same into claude CLI as well?
I'm curious what exactly it gives besides having claude CLI + prompting it well with your design system + skills.
Then Claude came for the designers with Claude Design, and I did not speak out— Because I was not a designer.
...
https://en.wikipedia.org/wiki/First_They_Came
But what it really shows is that there's no vision. If you had a vision and a coherent strategy you'd concentrate resources, not dilute them across a myriad of projects.
With that said, this seems like it's just Claude Code + simplified devtools in a container? Will need to spend more time with it to see how, e.g., commenting influences things, but 5 minutes in, this seems like needless formalization on top of how I already used Claude for design.
Unfortunately it's going to be a tough few years until that happens, where it really does feel like the idiots are winning.
THE IDIOTS ARE WINNING
So far it's burned through 45% of my design quota. On a positive note, I finally have my fonts rendering. I'm still waiting for it to figure out how it mangled the SVG versions of my logo. I've watched it trying a lot of stuff for 20 minutes and now I've got something that's not my logo.
Now it's 5-Hour Usage, Weekly Usage, Claude Design, Daily Routine Runs, and the Extra Usage portion...
Maybe AI is not good at everything, yet.
Part of this weirdness is the continual relativism of design. A taste-meme is good or cringe only relative to the prevailing social environment, never in itself. An AI can never do "design", properly understood, because design is the work done by a reluctance motor, spinning endlessly in a Sisyphean quest to align itself with a moving magnetic field and producing torque as a side effect.
All efforts like this can do is capture the field alignment at an instant in time. It cannot do work. It cannot produce motion, not as long as its weights are as fixed as the field lines of a neodymium magnet. The instant AI design is good, it becomes bad through the act of becoming good.
Producing work through motion of taste may be one of the last human endeavors to be absorbed.
However, there is of course a raw mechanical side to design that comes more into play on practical applications like a software UI vs. pure play memes/vibes like branding or a landing page.
Think hierarchy, leading, kerning, scale, contrast, balance, etc. These things remain constant throughout trend cycles and can absolutely get you half way there. Then aesthetic memes can be sprinkled on top by a human to make people believe "this feels cool."
This will fail because it's already a forgotten side project within anthropic, and anthropic also has pretty bad product DNA as a company. Their headcount is already too large and the culture is already set. They grew revenue so fast they speedran the stage at which you could build software product chops into the company culture (think Google circa 2008 vs. Google circa 2018).
They should focus on what they're good at: the actual AI models and B2B sales. Let OpenAI play early Google and churn through 100 different consumer product experiences to see what sticks, they're better positioned for it anyways.
Maybe they realised it's actually a dead end? Why else would you not double down and concentrate resources? This would be an amateur error if it turned out to be true.
Imagine if a designer were hired to custom design the lightswitches in every building. We need to get back to off-the-shelf interfaces and stop wasting smart people's time reinventing UI widgets.
Like what, they have some superior knowledge or ability to design?
Let me remind you they weren't always tech giants; what if we had said let's stop innovating before Apple or any of these became a thing?
Go ahead and roast me.
I don't agree. For novel use cases, yes there's some truth to that. But consistency is huge in a UX. If basic controls work well for a situation, they should be used. Designers should not be getting "creative" or "original" for those sorts of things.
If I have a product out of my lab that makes it to human trials, there will be a full team of marketers and designers tasked to the brand image.
Ironically I think AI will replace researchers before it replaces artists.
Also, a lot of very good software developers are bad at design and unwilling or unable to pay for a designer. This will be an improvement for them.
But the mass market (who this is ultimately for) doesn't care about great design. They care about "seeing something on the screen." If they can get something that looks 80-90% aligned with what they observe to be modern design, they won't think twice (even if the end result is clunky or not on par with what a professional designer would produce). It's the Ikea Effect on steroids.
If you treat it like a black box used to outsource your own thinking, you are holding it wrong.
I think we probably need less software, but higher quality, not more. Unfortunately AI only goes in one direction…
Is there also a place in the world for not-great-but-good-enough design?
Not really; great design in a web application means no surprises.
Still human?
This is entirely accurate; however, I fear there's a lack of perspective:
If you're in the middle of the desert and need to sit down, that random rock looks and feels great, because there's nothing even close around!
One issue that a lot of experts fail to recognize is that "great" is relative: It's not apparent to the experts because they are only pulled in when their expertise is needed. Most of the time when experts are pulled in, requirements are clear, you have traction, scale and now you need to optimize.
Once you're spoiled for choices, you have lots of options and then that random rock doesn't look appealing at all: now you're considering other factors like budget - IKEA vs Adirondack.
Where AI is making a huge difference is in places where "great" isn't that valuable:
- people in the desert: Someone wants to track what words their toddler is saying, or their groceries, or how much kitty litter they should buy soon, and Claude will spit out something reasonable even if it makes experts' skin crawl.
- commodity and bean counters: in cutthroat industries like power or insurance, it's all commodity services competing on price. Most people aren't going to pay a premium for a better-looking, more intuitive insurance app. It just needs to not suck and fall over. Or you're making a knockoff of an existing, well-understood product.
The catch is that the person making the decision might not know or care about the difference.