Category: Uncategorized

  • Intent Prototyping: The Allure And Danger Of Pure Vibe Coding In Enterprise UX (Part 1)

    There is a spectrum of opinions on how dramatically all creative professions will be changed by the coming wave of agentic AI, from the very skeptical to the wildly optimistic and even apocalyptic. I think that even if you are on the “skeptical” end of the spectrum, it makes sense to explore ways this new technology can help with your everyday work. As for my everyday work, I’ve been doing UX and product design for about 25 years now, and I’m always keen to learn new tricks and share them with colleagues. Right now, I’m interested in AI-assisted prototyping, and I’m here to share my thoughts on how it can change the process of designing digital products.

    To set your expectations up front: this exploration focuses on a specific part of the product design lifecycle. Many people know about the Double Diamond framework, which shows the path from problem to solution. However, I think it’s the Triple Diamond model that makes an important point for our needs. It explicitly separates the solution space into two phases: Solution Discovery (ideating and validating the right concept) and Solution Delivery (engineering the validated concept into a final product). This article is focused squarely on that middle diamond: Solution Discovery.

    How AI can help with the preceding (Problem Discovery) and the following (Solution Delivery) stages is out of the scope of this article. Problem Discovery is less about prototyping and more about research, and while I believe AI can revolutionize the research process as well, I’ll leave that to people more knowledgeable in the field. As for Solution Delivery, it is more about engineering optimization. There’s no doubt that software engineering in the AI era is undergoing dramatic changes, but I’m not an engineer — I’m a designer, so let me focus on my “sweet spot”.

    And my “sweet spot” has a specific flavor: designing enterprise applications. In this world, the main challenge is taming complexity: dealing with complicated data models and guiding users through non-linear workflows. This background has had a big impact on my approach to design, putting a lot of emphasis on the underlying logic and structure. This article explores the potential of AI through this lens.

    I’ll start by outlining the typical artifacts designers create during Solution Discovery. Then, I’ll examine the problems with how this part of the process often plays out in practice. Finally, we’ll explore whether AI-powered prototyping can offer a better approach, and if so, whether it aligns with what people call “vibe coding,” or calls for a more deliberate and disciplined way of working.

    What We Create During Solution Discovery

    The Solution Discovery phase begins with the key output from the preceding research: a well-defined problem and a core hypothesis for a solution. This is our starting point. The artifacts we create from here are all aimed at turning that initial hypothesis into a tangible, testable concept.

    Traditionally, at this stage, designers can produce artifacts of different kinds, progressively increasing fidelity: from napkin sketches, boxes-and-arrows, and conceptual diagrams to hi-fi mockups, then to interactive prototypes, and in some cases even live prototypes. Artifacts of lower fidelity allow fast iteration and enable the exploration of many alternatives, while artifacts of higher fidelity help to understand, explain, and validate the concept in all its details.

    It’s important to think holistically, considering different aspects of the solution. I would highlight three dimensions:

    1. Conceptual model: Objects, relations, attributes, actions;
    2. Visualization: Screens, from rough sketches to hi-fi mockups;
    3. Flow: From the very high-level user journeys to more detailed ones.

    One can argue that these are layers rather than dimensions, each building on the previous one (for example, in Semantic IxD by Daniel Rosenberg), but I see them more as different facets of the same thing. The design process across them is therefore not necessarily linear: you may need to switch from one perspective to another many times.

    This is how different types of design artifacts map to these dimensions:

    As Solution Discovery progresses, designers move from the left part of this map to the right, from low-fidelity to high-fidelity, from ideating to validating, from diverging to converging.

    Note that at the beginning of the process, different dimensions are supported by artifacts of different types (boxes-and-arrows, sketches, class diagrams, etc.), and only closer to the end can you build a live prototype that encompasses all three dimensions: conceptual model, visualization, and flow.

    This progression shows a classic trade-off, like the difference between a pencil drawing and an oil painting. The drawing lets you explore ideas in the most flexible way, whereas the painting has a lot of detail and overall looks much more realistic, but is hard to adjust. Similarly, as we go towards artifacts that integrate all three dimensions at higher fidelity, our ability to iterate quickly and explore divergent ideas goes down. This inverse relationship has long been an accepted, almost unchallenged, limitation of the design process.

    The Problem With The Mockup-Centric Approach

    Faced with this difficult trade-off, often teams opt for the easiest way out. On the one hand, they need to show that they are making progress and create things that appear detailed. On the other hand, they rarely can afford to build interactive or live prototypes. This leads them to over-invest in one type of artifact that seems to offer the best of both worlds. As a result, the neatly organized “bento box” of design artifacts we saw previously gets shrunk down to just one compartment: creating static high-fidelity mockups.

    This choice is understandable, as several forces push designers in this direction. Stakeholders are always eager to see nice pictures, while artifacts representing user flows and conceptual models receive far less attention and priority: they seem too abstract, are hardly usable for validation, and not everyone can read them.

    On the other side of the fidelity spectrum, interactive prototypes require too much effort to create and maintain, and creating live prototypes in code used to require special skills (and again, effort). And even when teams make this investment, they do so at the end of Solution Discovery, during the convergence stage, when it is often too late to experiment with fundamentally different ideas. With so much effort already sunk, there is little appetite to go back to the drawing board.

    It’s no surprise, then, that many teams default to the perceived safety of static mockups, seeing them as a middle ground between the roughness of the sketches and the overwhelming complexity and fragility that prototypes can have.

    As a result, validation with users doesn’t provide enough confidence that the solution will actually solve the problem, and teams are forced to make a leap of faith to start building. To make matters worse, they do so without a clear understanding of the conceptual model, the user flows, and the interactions, because from the very beginning, designers’ attention has been heavily skewed toward visualization.

    The result is often a design artifact that resembles the famous “horse drawing” meme: beautifully rendered in the parts everyone sees first (the mockups), but dangerously underdeveloped in its underlying structure (the conceptual model and flows).

    While this is a familiar problem across the industry, its severity depends on the nature of the project. If your core challenge is to optimize a well-understood, linear flow (like many B2C products), a mockup-centric approach can be perfectly adequate. The risks are contained, and the “lopsided horse” problem is unlikely to be fatal.

    However, it’s different for the systems I specialize in: complex applications defined by intricate data models and non-linear, interconnected user flows. Here, the biggest risks are not on the surface but in the underlying structure, and a lack of attention to the latter would be a recipe for disaster.

    Transforming The Design Process

    This situation makes me wonder:

    How might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one?

    If we were able to answer this question, we would:

    • Learn faster.
      By going straight from intent to a testable artifact, we cut the feedback loop from weeks to days.
    • Gain more confidence.
      Users interact with real logic, which gives us more proof that the idea works.
    • Enforce conceptual clarity.
      A live prototype cannot hide a flawed or ambiguous conceptual model.
    • Establish a clear and lasting source of truth.
      A live prototype, combined with a clearly documented design intent, provides the engineering team with an unambiguous specification.

    Of course, the desire for such a process is not new. This vision of a truly prototype-driven workflow is especially compelling for enterprise applications, where the benefits of faster learning and forced conceptual clarity are the best defense against costly structural flaws. But this ideal remained out of reach because prototyping in code demanded so much effort and such specialized skills. Now, the rise of powerful AI coding assistants changes this equation in a big way.

    The Seductive Promise Of “Vibe Coding”

    And the answer seems to be obvious: vibe coding!

    “Vibe coding is an artificial intelligence-assisted software development style popularized by Andrej Karpathy in early 2025. It describes a fast, improvisational, collaborative approach to creating software where the developer and a large language model (LLM) tuned for coding is acting rather like pair programmers in a conversational loop.”

    Wikipedia


    The allure of this approach is undeniable. If you are not a developer, you are bound to feel awe when you describe a solution in plain language, and moments later, you can interact with it. This seems to be the ultimate fulfillment of our goal: a direct, frictionless path from an idea to a live prototype. But is this method reliable enough to build our new design process around it?

    The Trap: A Process Without A Blueprint

    Vibe coding mixes up a description of the UI with a description of the system itself, resulting in a prototype based on changing assumptions rather than a clear, solid model.

    The pitfall of vibe coding is that it encourages us to express our intent in the most ambiguous way possible: by having a conversation.

    This is like hiring a builder and telling them what to do one sentence at a time, without ever showing them a blueprint. They might build a wall that looks great, but you can't be sure it will bear weight.

    I'll give you one example of the problems you may face if you try to leap across the chasm between your idea and a live prototype relying on pure vibe coding in the spirit of Andrej Karpathy's approach. Imagine I want to prototype a solution for keeping track of the tests used to validate product ideas. I open my vibe coding tool of choice (I intentionally don't disclose its name, as I believe they are all awesome yet prone to similar pitfalls) and start with the following prompt:

    I need an app to track tests. For every test, I need to fill out the following data:
    - Hypothesis (we believe that...) 
    - Experiment (to verify that, we will...)
    - When (a single date, or a period) 
    - Status (New/Planned/In Progress/Proven/Disproven)
    

    And in a minute or so, I get a working prototype:

    Inspired by success, I go further:

    Please add the ability to specify a product idea for every test. Also, I want to filter tests by product ideas and see how many tests each product idea has in each status.
    

    And the result is still pretty good:

    But then I want to extend the functionality related to product ideas:

    Okay, one more thing. For every product idea, I want to assess the impact score, the confidence score, and the ease score, and get the overall ICE score. Perhaps I need a separate page focused on the product idea, with all the relevant information and related tests.
    

    And from this point on, the results are getting more and more confusing.

    The flow of creating tests hasn’t changed much. I can still create a bunch of tests, and they seem to be organized by product ideas. But when I click “Product Ideas” in the top navigation, I see nothing:

    I need to create my ideas from scratch, and they are not connected to the tests I created before:

    Moreover, when I go back to “Tests”, I see that they are all gone. Clearly something went wrong, and my AI assistant confirms that:

    No, this is not expected behavior — it’s a bug! The issue is that tests are being stored in two separate places (local state in the Index page and App state), so tests created on the main page don’t sync with the product ideas page.

    Sure, eventually it fixed that bug, but note that we encountered this just on the third step, when we asked to slightly extend the functionality of a very simple app. The more layers of complexity we add, the more roadblocks of this sort we are bound to face.

    Also note that this specific problem, an under-specified relationship between two entities (product ideas and tests), is not confined to the technical level, and therefore it didn't go away once the technical bug was fixed. The underlying conceptual model is still broken, and that brokenness manifests in the UI as well.

    For example, you can still create “orphan” tests that are not connected to any item from the “Product Ideas” page. As a result, you may end up with different numbers of ideas and tests on different pages of the app:

    Let’s diagnose what really happened here. The AI’s response that this is a “bug” is only half the story. The true root cause is a conceptual model failure. My prompts never explicitly defined the relationship between product ideas and tests. The AI was forced to guess, which led to the broken experience. For a simple demo, this might be a fixable annoyance. But for a data-heavy enterprise application, this kind of structural ambiguity is fatal. It demonstrates the fundamental weakness of building without a blueprint, which is precisely what vibe coding encourages.
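
    To make this concrete, here is a minimal sketch, in TypeScript and with hypothetical names, of the conceptual model my prompts never pinned down. Making the relationship explicit (every test belongs to exactly one product idea) rules out "orphan" tests and duplicate stores by construction:

    type TestStatus = "New" | "Planned" | "In Progress" | "Proven" | "Disproven";

    interface ProductIdea {
      id: string;
      name: string;
      impact: number;      // ICE inputs (assumed 1-10 scale)
      confidence: number;
      ease: number;
    }

    // The overall ICE score is commonly computed as the product of the three inputs.
    const iceScore = (idea: ProductIdea) =>
      idea.impact * idea.confidence * idea.ease;

    interface ValidationTest {
      id: string;
      productIdeaId: string; // required link: no orphan tests possible
      hypothesis: string;    // "we believe that..."
      experiment: string;    // "to verify that, we will..."
      start: Date;
      end?: Date;            // absent for a single-date test
      status: TestStatus;
    }

    // One source of truth: per-status counts are derived from the tests,
    // never stored separately, so different pages cannot disagree.
    function countsByStatus(tests: ValidationTest[], ideaId: string) {
      const counts: Partial<Record<TestStatus, number>> = {};
      for (const t of tests) {
        if (t.productIdeaId !== ideaId) continue;
        counts[t.status] = (counts[t.status] ?? 0) + 1;
      }
      return counts;
    }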

    Don’t take this as a criticism of vibe coding tools. They are creating real magic. However, the fundamental truth about “garbage in, garbage out” is still valid. If you don’t express your intent clearly enough, chances are the result won’t fulfill your expectations.

    Another problem worth mentioning is that even if you wrestle it into a working state, the artifact is a black box that can hardly serve as a reliable specification for the final product. The initial meaning is lost in the conversation, and all that's left is the end result. This turns the development team into "code archaeologists" who must reverse-engineer the AI's often convoluted code to figure out what the designer was thinking. Any speed gained at the start is quickly lost to this friction and uncertainty.

    From Fast Magic To A Solid Foundation

    Pure vibe coding, for all its allure, encourages building without a blueprint. As we’ve seen, this results in structural ambiguity, which is not acceptable when designing complex applications. We are left with a seemingly quick but fragile process that creates a black box that is difficult to iterate on and even more so to hand off.

    This leads us back to our main question: how might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap? The answer lies in a more methodical, disciplined, and therefore trustworthy process.

    In Part 2 of this series, “A Practical Guide to Building with Clarity”, I will outline the entire workflow for Intent Prototyping. This method places the explicit intent of the designer at the forefront of the process while embracing the potential of AI-assisted coding.

    Thank you for reading, and I look forward to seeing you in Part 2.

  • Ambient Animations In Web Design: Principles And Implementation (Part 1)

    Unlike timeline-based animations, which tell stories across a sequence of events, or interaction animations, which are triggered when someone touches something, ambient animations are the kind of passive movements you might not notice at first. But they make a design feel alive in subtle ways.

    In an ambient animation, elements might subtly transition between colours, move slowly, or gradually shift position. They can appear and disappear, change size, or rotate slowly.

    Ambient animations aren’t intrusive; they don’t demand attention, aren’t distracting, and don’t interfere with what someone’s trying to achieve when they use a product or website. They can be playful, too, making someone smile when they catch sight of them. That way, ambient animations add depth to a brand’s personality.

    To illustrate the concept of ambient animations, I’ve recreated the cover of a Quick Draw McGraw comic book (PDF) as a CSS/SVG animation. The comic was published by Charlton Comics in 1971, and, being printed, these characters didn’t move, making them ideal candidates to transform into ambient animations.

    FYI: Original cover artist Ray Dirgo was best known for his work drawing Hanna-Barbera characters for Charlton Comics during the 1970s. Ray passed away in 2000 at the age of 92. He outlived Charlton Comics, which went out of business in 1986, and DC Comics acquired its characters.

    Tip: You can view the complete ambient animation code on CodePen.

    Choosing Elements To Animate

    Not everything on a page or in a graphic needs to move, and part of designing an ambient animation is knowing when to stop. The trick is to pick elements that lend themselves naturally to subtle movement, rather than forcing motion into places where it doesn’t belong.

    Natural Motion Cues

    When I’m deciding what to animate, I look for natural motion cues and think about when something would move naturally in the real world. I ask myself: “Does this thing have weight?”, “Is it flexible?”, and “Would it move in real life?” If the answer’s “yes,” it’ll probably feel right if it moves. There are several motion cues in Ray Dirgo’s cover artwork.

    For example, the peace pipe Quick Draw’s puffing on has two feathers hanging from it. They swing slightly left and right by three degrees as the pipe moves, just like real feathers would.

    #quick-draw-pipe {
      animation: quick-draw-pipe-rotate 6s ease-in-out infinite alternate;
    }
    
    @keyframes quick-draw-pipe-rotate {
      0% { transform: rotate(3deg); }
      100% { transform: rotate(-3deg); }
    }
    
    #quick-draw-feather-1 {
      animation: quick-draw-feather-1-rotate 3s ease-in-out infinite alternate;
    }
    
    #quick-draw-feather-2 {
      animation: quick-draw-feather-2-rotate 3s ease-in-out infinite alternate;
    }
    
    @keyframes quick-draw-feather-1-rotate {
      0% { transform: rotate(3deg); }
      100% { transform: rotate(-3deg); }
    }
    
    @keyframes quick-draw-feather-2-rotate {
      0% { transform: rotate(-3deg); }
      100% { transform: rotate(3deg); }
    }
    

    Atmosphere, Not Action

    I often choose elements or decorative details that add to the vibe but don’t fight for attention.

    Ambient animations aren’t about signalling to someone where they should look; they’re about creating a mood.

    Here, the chief slowly and subtly rises and falls as he puffs on his pipe.

    #chief {
      animation: chief-rise-fall 3s ease-in-out infinite alternate;
    }
    
    @keyframes chief-rise-fall {
      0% { transform: translateY(0); }
      100% { transform: translateY(-20px); }
    }
    

    For added effect, the feather on his head also moves in time with his rise and fall:

    #chief-feather-1 {
      animation: chief-feather-1-rotate 3s ease-in-out infinite alternate;
    }
    
    #chief-feather-2 {
      animation: chief-feather-2-rotate 3s ease-in-out infinite alternate;
    }
    
    @keyframes chief-feather-1-rotate {
      0% { transform: rotate(0deg); }
      100% { transform: rotate(-9deg); }
    }
    
    @keyframes chief-feather-2-rotate {
      0% { transform: rotate(0deg); }
      100% { transform: rotate(9deg); }
    }
    

    Playfulness And Fun

    One of the things I love most about ambient animations is how they bring fun into a design. They’re an opportunity to demonstrate personality through playful details that make people smile when they notice them.

    Take a closer look at the chief, and you might spot his eyebrows raising and his eyes crossing as he puffs hard on his pipe. Quick Draw’s eyebrows also bounce at what look like random intervals.

    #quick-draw-eyebrow {
      animation: quick-draw-eyebrow-raise 5s ease-in-out infinite;
    }
    
    @keyframes quick-draw-eyebrow-raise {
      0%, 20%, 60%, 100% { transform: translateY(0); }
      10%, 50%, 80% { transform: translateY(-10px); }
    }
    

    Keep Hierarchy In Mind

    Motion draws the eye, and even subtle movements carry visual weight. So, I reserve the most obvious animations for the elements where I need to create the biggest impact.

    Smoking his pipe clearly has a big effect on Quick Draw McGraw, so to demonstrate this, I wrapped his elements — including his pipe and its feathers — within a new SVG group, and then I made that wobble.

    #quick-draw-group {
      animation: quick-draw-group-wobble 6s ease-in-out infinite;
    }
    
    @keyframes quick-draw-group-wobble {
      0% { transform: rotate(0deg); }
      15% { transform: rotate(2deg); }
      30% { transform: rotate(-2deg); }
      45% { transform: rotate(1deg); }
      60% { transform: rotate(-1deg); }
      75% { transform: rotate(0.5deg); }
      100% { transform: rotate(0deg); }
    }
    

    Then, to emphasise this motion, I mirrored those values to wobble his shadow:

    #quick-draw-shadow {
      animation: quick-draw-shadow-wobble 6s ease-in-out infinite;
    }
    
    @keyframes quick-draw-shadow-wobble {
      0% { transform: rotate(0deg); }
      15% { transform: rotate(-2deg); }
      30% { transform: rotate(2deg); }
      45% { transform: rotate(-1deg); }
      60% { transform: rotate(1deg); }
      75% { transform: rotate(-0.5deg); }
      100% { transform: rotate(0deg); }
    }
    

    Apply Restraint

    Just because something can be animated doesn’t mean it should be. When creating an ambient animation, I study the image and note the elements where subtle motion might add life. I keep in mind the questions: “What’s the story I’m telling? Where does movement help, and when might it become distracting?”

    Remember, restraint isn’t just about doing less; it’s about doing the right things less often.

    Layering SVGs For Export

    In “Smashing Animations Part 4: Optimising SVGs,” I wrote about the process I rely on to “prepare, optimise, and structure SVGs for animation.” When elements are crammed into a single SVG file, they can be a nightmare to navigate. Locating a specific path or group can feel like searching for a needle in a haystack.

    That's why I develop my SVGs in layers, exporting and optimising one set of elements at a time, always in the order they'll appear in the final file. This lets me build the master SVG gradually by pasting in each cleaned-up section.

    I start by exporting background elements, optimising them, adding class and ID attributes, and pasting their code into my SVG file.

    Then, I export elements that often stay static or move as groups, like the chief and Quick Draw McGraw.

    Finally, I export, name, and add the details, like Quick Draw's pipe, eyes, and his stoned sparkles.

    Since I export each layer from the same-sized artboard, I don’t need to worry about alignment or positioning issues as they all slot into place automatically.

    Implementing Ambient Animations

    You don’t need an animation framework or library to add ambient animations to a project. Most of the time, all you’ll need is a well-prepared SVG and some thoughtful CSS.

    But, let’s start with the SVG. The key is to group elements logically and give them meaningful class or ID attributes, which act as animation hooks in the CSS. For this animation, I gave every moving part its own identifier like #quick-draw-tail or #chief-smoke-2. That way, I could target exactly what I needed without digging through the DOM like a raccoon in a trash can.
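
    As an illustration, here's a simplified skeleton of that kind of structure, using IDs from this animation (the nesting is approximate and the path data is elided):

    <svg viewBox="0 0 800 600" xmlns="http://www.w3.org/2000/svg">
      <g id="background"><!-- static scenery --></g>
      <g id="chief">
        <path id="chief-feather-1" d="…" />
        <path id="chief-feather-2" d="…" />
      </g>
      <g id="quick-draw-group">
        <g id="quick-draw-pipe">
          <path id="quick-draw-feather-1" d="…" />
          <path id="quick-draw-feather-2" d="…" />
        </g>
        <path id="quick-draw-tail" d="…" />
      </g>
    </svg>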

    Once the SVG is set up, CSS does most of the work. I can use @keyframes for more expressive movement, or animation-delay to simulate randomness and stagger timings. The trick is to keep everything subtle and remember I’m not animating for attention, I’m animating for atmosphere.
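
    For example, a shared keyframe with staggered delays makes a handful of sparkle elements twinkle out of sync, which reads as randomness (the #sparkle-* IDs here are hypothetical):

    #sparkle-1,
    #sparkle-2,
    #sparkle-3 {
      animation: sparkle-twinkle 4s ease-in-out infinite;
    }
    
    /* Offset the start times so the twinkles never align. */
    #sparkle-2 { animation-delay: 1.3s; }
    #sparkle-3 { animation-delay: 2.7s; }
    
    @keyframes sparkle-twinkle {
      0%, 100% { opacity: 0.2; }
      50% { opacity: 1; }
    }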

    Remember that most ambient animations loop continuously, so they should be lightweight and performance-friendly. And of course, it’s good practice to respect users who’ve asked for less motion. You can wrap your animations in an @media prefers-reduced-motion query so they only run when they’re welcome.

    @media (prefers-reduced-motion: no-preference) {
      #quick-draw-shadow {
        animation: quick-draw-shadow-wobble 6s ease-in-out infinite;
      }
    }
    

    It’s a small touch that’s easy to implement, and it makes your designs more inclusive.

    Ambient Animation Design Principles

    If you want your animations to feel ambient, more like atmosphere than action, it helps to follow a few principles. These aren’t hard and fast rules, but rather things I’ve learned while animating smoke, sparkles, eyeballs, and eyebrows.

    Keep Animations Slow And Smooth

    Ambient animations should feel relaxed, so use longer durations and easing curves that feel organic. I often use ease-in-out, but custom cubic-bezier() curves can be helpful when you want softer movement of the kind you might find in nature.
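
    For instance, a symmetric cubic-bezier() curve gives drifting smoke a gentler start and settle than the standard keywords. The keyframe values here are illustrative, not taken from the finished piece:

    #chief-smoke-2 {
      animation: smoke-drift 8s cubic-bezier(0.45, 0.05, 0.55, 0.95) infinite;
    }
    
    @keyframes smoke-drift {
      /* Fade in, rise, fade out: opacity is 0 at both ends,
         so the position reset at the loop point is invisible. */
      0% { transform: translateY(0); opacity: 0; }
      20% { opacity: 0.8; }
      100% { transform: translateY(-30px); opacity: 0; }
    }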

    Loop Seamlessly And Avoid Abrupt Changes

    Hard resets or sudden jumps can ruin the mood, so if an animation loops, ensure it cycles smoothly. You can do this by matching the start and end keyframes, or by setting animation-direction to alternate so the animation plays forward, then back.
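
    Both techniques look like this in practice (a sketch reusing the tail element mentioned earlier):

    /* Option 1: the first and last keyframes match exactly. */
    @keyframes tail-sway-loop {
      0%, 100% { transform: rotate(-2deg); }
      50% { transform: rotate(2deg); }
    }
    
    /* Option 2: animate one way and let alternate play it back in reverse. */
    #quick-draw-tail {
      animation: tail-sway 3s ease-in-out infinite alternate;
    }
    
    @keyframes tail-sway {
      0% { transform: rotate(-2deg); }
      100% { transform: rotate(2deg); }
    }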

    Use Layering To Build Complexity

    A single animation might be boring. Five subtle animations, each on separate layers, can feel rich and alive. Think of it like building a sound mix — you want variation in rhythm, tone, and timing. In my animation, sparkles twinkle at varying intervals, smoke curls upward, feathers sway, and eyes boggle. Nothing dominates, and each motion plays its small part in the scene.

    Avoid Distractions

    The point of an ambient animation is that it doesn’t dominate. It’s a background element and not a call to action. If someone’s eyes are drawn to a raised eyebrow, it’s probably too much, so dial back the animation until it feels like something you’d only catch if you’re really looking.

    Consider Accessibility And Performance

    Check prefers-reduced-motion, and don't assume everyone's device can handle complex animations. SVG and CSS are light, but blur filters, drop shadows, and complex CSS animations can still tax lower-powered devices. When an animation is purely decorative, consider adding aria-hidden="true" to keep it from cluttering the accessibility tree.
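
    For purely decorative artwork, that can be as simple as one attribute on the root element (a sketch, assuming the scene lives in a standalone SVG):

    <svg id="ambient-scene" aria-hidden="true" viewBox="0 0 800 600">
      <!-- decorative, animated artwork: hidden from assistive technology -->
    </svg>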

    Quick On The Draw

    Ambient animation is like seasoning on a great dish. It’s the pinch of salt you barely notice, but you’d miss when it’s gone. It doesn’t shout, it whispers. It doesn’t lead, it lingers. It’s floating smoke, swaying feathers, and sparkles you catch in the corner of your eye. And when it’s done well, ambient animation adds personality to a design without asking for applause.

    Now, I realise that not everyone needs to animate cartoon characters. So, in part two, I’ll share how I created animations for several recent client projects. Until next time, if you’re crafting an illustration or working with SVG, ask yourself: What would move if this were real? Then animate just that. Make it slow and soft. Keep it ambient.

    You can view the complete ambient animation code on CodePen.

  • 智泊AI: AGI Large Model Full-Stack Course, Cohort 12 [VIP]

    Have you already mastered how a Transformer works and can recite the formulas of the Attention mechanism, yet feel lost when facing a real enterprise requirement? Have you ground through hundreds of algorithm problems, yet still have no idea how to package a model into a stable, efficient…
  • Android AI Liberating Productivity (Part 5), Hands-On: Freeing Yourself from the Tedious Work of Writing API Endpoints

    Our company's current project uses Apifox as its API debugging/documentation tool, so I'll use it as the example. Comparable tools should all be gradually opening up AI capabilities, which is worth keeping an eye on; as long as a tool can expose its raw data, that's enough, and if it can't, there's no way to feed it to the AI. In the previous post, we already had the AI write U…
  • Understanding useState and useEffect in React in Depth

    In the world of React function components, useState and useEffect are the two most fundamental yet most critical Hooks. They look simple, but if you stop at the level of merely "knowing how to use them", it's easy to trip up in complex scenarios. Today, we'll walk through a piece of…
  • What Is It Like to Manage 100 Servers? One Line of Python Takes Care of It

    Preface. The daily ops routine: log in to server A and run a command; log in to server B and run the same command; log in to server C… It's just too painful! Use Python + Fabric for batch automation.
  • The Classic String Comparison Pitfall: == vs equals

    1. The bug scenario: In a Java program involving string comparison, the developer mistakenly used the == operator instead of the equals method when checking whether two strings are equal. The program ran fine in some cases, but in others it…
  • Raspberry Pi Home Server (Part 2): Configuring Automated Show Tracking

    I found an even handier and more elegant solution: it not only automatically searches for and downloads the resources I want, but also automatically categorises, renames, and organises them, and it pushes a notification to my phone once a download finishes. Read the full article