Category: Uncategorized

  • 国盾量子: Director Ying Yong To Temporarily Assume Chairman Duties

    36氪 has learned that 国盾量子 announced that, following the death of its chairman and legal representative Lü Pin (吕品), the full board of directors has jointly nominated director Ying Yong (应勇) to temporarily perform the duties of chairman, legal representative, chair of the Strategy and Investment Committee, and member of the Remuneration and Appraisal Committee, until the company completes the by-election of directors and elects a new chairman and the relevant committee members.
  • Intent Prototyping: A Practical Guide To Building With Clarity (Part 2)

    In Part 1 of this series, we explored the “lopsided horse” problem born from mockup-centric design and demonstrated how the seductive promise of vibe coding often leads to structural flaws. The main question remains:

    How might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap?

    In other words, we need a way to build prototypes that are both fast to create and founded on a clear, unambiguous blueprint.

    The answer is a more disciplined process I call Intent Prototyping (kudos to Marco Kotrotsos, who coined Intent-Oriented Programming). This method embraces the power of AI-assisted coding but rejects ambiguity, putting the designer’s explicit intent at the very center of the process. It receives a holistic expression of intent (sketches for screen layouts, conceptual model description, boxes-and-arrows for user flows) and uses it to generate a live, testable prototype.

    This method addresses the concerns we’ve discussed in Part 1:

    • Unlike static mockups, the prototype is fully interactive and can be easily populated with a large amount of realistic data. This lets us test the system’s underlying logic as well as its surface.
    • Unlike a vibe-coded prototype, it is built from a stable, unambiguous specification. This prevents the conceptual model failures and design debt that happen when things are unclear. The engineering team doesn’t need to reverse-engineer a black box or become “code archaeologists” to guess at the designer’s vision, as they receive not only a live prototype but also a clearly documented design intent behind it.

    This combination makes the method especially suited for designing complex enterprise applications. It allows us to test the system’s most critical point of failure, its underlying structure, with a speed and flexibility that were previously impossible. Furthermore, the process is built for iteration. You can explore as many directions as you want simply by changing the intent and evolving the design based on what you learn from user testing.

    My Workflow

    To illustrate this process in action, let’s walk through a case study. It’s the very same example I’ve used to illustrate the vibe coding trap: a simple tool to track tests to validate product ideas. You can find the complete project, including all the source code and documentation files discussed below, in this GitHub repository.

    Step 1: Expressing An Intent

    Imagine we’ve already done proper research. Having mused on the defined problem, I begin to form a vague idea of what the solution might look like. I need to capture this idea immediately, so I quickly sketch it out:

    In this example, I used Excalidraw, but the tool doesn’t really matter. Note that we deliberately keep it rough, as visual details are not something we need to focus on at this stage. And we are not going to be stuck here: we want to make a leap from this initial sketch directly to a live prototype that we can put in front of potential users. Polishing those sketches would not bring us any closer to achieving our goal.

    What we need to move forward is to add just enough detail to those sketches so that they can serve as sufficient input for a junior frontend developer (or, in our case, an AI assistant). This requires explaining the following:

    • Navigational paths (where clicking here takes you).
    • Interaction details that can’t be shown in a static picture (e.g., non-scrollable areas, adaptive layout, drag-and-drop behavior).
    • What parts might make sense to build as reusable components.
    • Which components from the design system (I’m using Ant Design Library) should be used.
    • Any other comments that help understand how this thing should work (while sketches illustrate how it should look).

    Having added all those details, we end up with an annotated sketch like this:

    As you see, this sketch covers both the Visualization and Flow aspects. You may ask, what about the Conceptual Model? Without that part, the expression of our intent will not be complete. One way would be to add it somewhere in the margins of the sketch (for example, as a UML Class Diagram), and I would do so in the case of a more complex application, where the model cannot be simply derived from the UI. But in our case, we can save effort and ask an LLM to generate a comprehensive description of the conceptual model based on the sketch.

    For tasks of this sort, the LLM of my choice is Gemini 2.5 Pro. What is important is that this is a multimodal model that can accept not only text but also images as input (GPT-5 and Claude 4 also fit that criterion). I use Google AI Studio, as it gives me enough control and visibility into what’s happening:

    Note: All the prompts that I use here and below can be found in the Appendices. The prompts are not custom-tailored to any particular project; they are supposed to be reused as they are.

    As a result, Gemini gives us a description and the following diagram:

    The diagram might look technical, but I believe that a clear understanding of all objects, their attributes, and relationships between them is key to good design. That’s why I consider the Conceptual Model to be an essential part of expressing intent, along with the Flow and Visualization.

    As a result of this step, our intent is fully expressed in two files: Sketch.png and Model.md. This will be our durable source of truth.
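
    To make this concrete, here is a rough TypeScript rendering of the kind of entities Model.md might capture for this test-tracking tool. The entity and attribute names below are illustrative assumptions, not the actual model from the repository; only the id, createdAt, and updatedAt attributes are fixed by the ground rules in Appendix 1.

    ```typescript
    // Illustrative sketch only: one possible conceptual model for a tool that
    // tracks tests used to validate product ideas.
    export interface BaseEntity {
      id: string;
      createdAt: string; // ISO 8601 format
      updatedAt: string; // ISO 8601 format
    }

    // A product idea we want to validate (hypothetical entity).
    export interface Idea extends BaseEntity {
      title: string;
      description: string;
    }

    // A test that validates an idea (hypothetical entity); ideaId captures the
    // "one Idea has many Tests" relationship a UML class diagram would show.
    export interface Test extends BaseEntity {
      ideaId: string;
      name: string;
      status: 'planned' | 'running' | 'done';
    }
    ```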

    Step 2: Preparing A Spec And A Plan

    The purpose of this step is to create a comprehensive technical specification and a step-by-step plan. Most of the work here is done by AI; you just need to keep an eye on it.

    I separate the Data Access Layer and the UI layer, and create specifications for them using two different prompts (see Appendices 2 and 3). The output of the first prompt (the Data Access Layer spec) serves as an input for the second one. Note that, as an additional input, we give the guidelines tailored for prototyping needs (see Appendices 8, 9, and 10). They are not specific to this project. The technical approach encoded in those guidelines is out of the scope of this article.

    As a result, Gemini provides us with content for DAL.md and UI.md. Although in most cases the result is reliable, you might still want to scrutinize the output. You don’t need to be a real programmer to make sense of it, but some level of programming literacy would be really helpful. However, even if you don’t have such skills, don’t get discouraged. The good news is that if you don’t understand something, you always know who to ask. Do it in Google AI Studio before refreshing the context window. If you believe you’ve spotted a problem, let Gemini know, and it will either fix it or explain why the suggested approach is actually better.
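
    To give a flavor of what you are reviewing, here is a condensed sketch of the kind of Zustand store DAL.md describes, following the guidelines from Appendix 10 (dictionary storage, persist middleware, object parameters, crypto.randomUUID). The Test entity and the exact action names are illustrative assumptions, not the actual spec from the repository.

    ```typescript
    // Condensed, illustrative sketch of a store the DAL spec might define for a
    // hypothetical Test entity; the real DAL.md is more detailed.
    import { create } from 'zustand';
    import { persist } from 'zustand/middleware';

    interface Test {
      id: string;
      createdAt: string; // ISO 8601 format
      updatedAt: string; // ISO 8601 format
      ideaId: string;
      name: string;
    }

    interface TestsState {
      tests: Record<string, Test>; // dictionary for O(1) access by ID
      addTest: (input: { ideaId: string; name: string }) => string;
      removeTest: (input: { id: string }) => void;
    }

    export const useTestsStore = create<TestsState>()(
      persist(
        (set, get) => ({
          tests: {},
          addTest: ({ ideaId, name }) => {
            const id = crypto.randomUUID();
            const now = new Date().toISOString();
            set((state) => ({
              tests: { ...state.tests, [id]: { id, ideaId, name, createdAt: now, updatedAt: now } },
            }));
            return id; // add operations return the generated ID
          },
          removeTest: ({ id }) => {
            if (!get().tests[id]) {
              return; // existence check before removal
            }
            set((state) => {
              const { [id]: _removed, ...rest } = state.tests;
              return { tests: rest };
            });
          },
        }),
        { name: 'tests-storage' } // persisted to localStorage via the middleware
      )
    );
    ```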

    It’s important to remember that by their nature, LLMs are not deterministic and, to put it simply, can be forgetful about small details, especially those in sketches. Fortunately, you don’t have to be an expert to notice that the “Delete” button, which is in the upper right corner of the sketch, is not mentioned in the spec.

    Don’t get me wrong: Gemini does a stellar job most of the time, but there are still times when it slips up. Just let it know about the problems you’ve spotted, and everything will be fixed.

    Once we have Sketch.png, Model.md, DAL.md, UI.md, and we have reviewed the specs, we can grab a coffee. We deserve it: our technical design documentation is complete. It will serve as a stable foundation for building the actual thing without deviating from our original intent, ensuring that all components fit together and all layers are stacked correctly.

    One last thing we can do before moving on to the next steps is to prepare a step-by-step plan. We split that plan into two parts: one for the Data Access Layer and another for the UI. You can find the prompts I use to create such a plan in Appendices 4 and 5.

    Step 3: Executing The Plan

    To start building the actual thing, we need to switch to another category of AI tools. Up until this point, we have relied on Generative AI. It excels at creating new content (in our case, specifications and plans) based on a single prompt. I’m using Google Gemini 2.5 Pro in Google AI Studio, but other similar tools may also fit such one-off tasks: ChatGPT, Claude, Grok, and DeepSeek.

    However, at this step, this wouldn’t be enough. Building a prototype based on specs and according to a plan requires an AI that can read context from multiple files, execute a sequence of tasks, and maintain coherence. A simple generative AI can’t do this. It would be like asking a person to build a house by only ever showing them a single brick. What we need is an agentic AI that can be given the full house blueprint and a project plan, and then get to work building the foundation, framing the walls, and adding the roof in the correct sequence.

    My coding agent of choice is Google Gemini CLI, simply because Gemini 2.5 Pro serves me well, and I don’t think we need any middleman like Cursor or Windsurf (which would use Claude, Gemini, or GPT under the hood anyway). If I used Claude, my choice would be Claude Code, but since I’m sticking with Gemini, Gemini CLI it is. But if you prefer Cursor or Windsurf, I believe you can apply the same process with your favourite tool.

    Before tasking the agent, we need to create a basic template for our React application. I won’t go into this here. You can find plenty of tutorials on how to scaffold an empty React project using Vite.

    Then we put all our files into that project:

    Once the basic template with all our files is ready, we open Terminal, go to the folder where our project resides, and type “gemini”:

    And we send the prompt to build the Data Access Layer (see Appendix 6). That prompt implies step-by-step execution, so upon completion of each step, I send the following:

    Thank you! Now, please move to the next task.
    Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec. 
    After each task, stop so that I can test it. Don’t move to the next task before I tell you to do so.
    

    As the last task in the plan, the agent builds a special page that exposes all the capabilities of our Data Access Layer so that we can test it manually. It may look like this:

    It doesn’t look fancy, to say the least, but it allows us to ensure that the Data Access Layer works correctly before we proceed with building the final UI.
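
    For reference, a stripped-down sketch of such a page could look like the snippet below. It wires plain HTML controls to the hypothetical useTestsStore from the earlier sketch; the page the agent actually generates from DAL-plan.md will differ.

    ```tsx
    // Illustrative only: a bare-bones page for manually exercising the Data Access Layer.
    import { useState } from 'react';
    import { useTestsStore } from '../stores/tests'; // hypothetical store location

    export function DalTestPage() {
      const tests = useTestsStore((state) => state.tests);
      const addTest = useTestsStore((state) => state.addTest);
      const removeTest = useTestsStore((state) => state.removeTest);
      const [name, setName] = useState('');

      return (
        <div>
          <h1>Data Access Layer Test Page</h1>
          <input
            value={name}
            onChange={(event) => setName(event.target.value)}
            placeholder="Test name"
          />
          <button
            onClick={() => {
              addTest({ ideaId: 'demo-idea', name });
              setName('');
            }}
          >
            Add test
          </button>
          <ul>
            {Object.values(tests).map((test) => (
              <li key={test.id}>
                {test.name} (created {test.createdAt}){' '}
                <button onClick={() => removeTest({ id: test.id })}>Delete</button>
              </li>
            ))}
          </ul>
        </div>
      );
    }
    ```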

    And finally, we clear the Gemini CLI context window to give it more headspace and send the prompt to build the UI (see Appendix 7). This prompt also implies step-by-step execution. Upon completion of each step, we test how it works and how it looks, following the “Manual Testing Plan” from UI-plan.md. I have to say that although the sketch has been uploaded to the model context and Gemini generally tries to follow it, attention to visual detail is not one of its strengths (yet). Usually, a few additional nudges are needed at each step to improve the look and feel:

    Once I’m happy with the result of a step, I ask Gemini to move on:

    Thank you! Now, please move to the next task.
    Make sure you build the UI according to the sketch; this is very important. Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch.
    After each task, stop so that I can test it. Don’t move to the next task before I tell you to do so.

    Before long, the result looks like this, and in every detail it works exactly as we intended:

    The prototype is up and running and looking nice. Does that mean we are done with our work? Surely not: the most fascinating part is just beginning.

    Step 4: Learning And Iterating

    It’s time to put the prototype in front of potential users and learn more about whether this solution relieves their pain or not.

    And as soon as we learn something new, we iterate. We adjust or extend the sketches and the conceptual model based on the new input, update the specifications, create plans to make changes according to the new specifications, and execute those plans. In other words, for every iteration, we repeat the steps I’ve just walked you through.

    Is This Workflow Too Heavy?

    This four-step workflow may create an impression of a somewhat heavy process that requires too much thinking upfront and doesn’t really facilitate creativity. But before jumping to that conclusion, consider the following:

    • In practice, only the first step requires real effort, along with the learning in the last step. AI does most of the work in between; you just need to keep an eye on it.
    • Individual iterations don’t need to be big. You can start with a Walking Skeleton (the bare minimum implementation of the thing you have in mind) and add more substance in subsequent iterations. You are welcome to change your mind about the overall direction in between iterations.
    • And last but not least, maybe the idea of “think before you do” is not something you need to run away from. A clear and unambiguous statement of intent can prevent many unnecessary mistakes and save a lot of effort down the road.

    Intent Prototyping Vs. Other Methods

    There is no method that fits all situations, and Intent Prototyping is not an exception. Like any specialized tool, it has a specific purpose. The most effective teams are not those who master a single method, but those who understand which approach to use to mitigate the most significant risk at each stage. The comparison below should make this choice clearer. It puts Intent Prototyping next to other common methods and tools and describes each one in terms of the primary goal it helps achieve and the specific risks it is best suited to mitigate.

    Intent Prototyping
    • Goal: To rapidly iterate on the fundamental architecture of a data-heavy application with a complex conceptual model, sophisticated business logic, and non-linear user flows.
    • Risks it is best suited to mitigate: Building a system with a flawed or incoherent conceptual model, leading to critical bugs and costly refactoring.
    • Examples: A CRM (Customer Relationship Management system); a Resource Management Tool; a No-Code Integration Platform (admin’s UI).
    • Why: It enforces conceptual clarity. This not only de-risks the core structure but also produces a clear, documented blueprint that serves as a superior specification for the engineering handoff.

    Vibe Coding (Conversational)
    • Goal: To rapidly explore interactive ideas through improvisation.
    • Risks it is best suited to mitigate: Losing momentum because of analysis paralysis.
    • Examples: An interactive data table with live sorting/filtering; a novel navigation concept; a proof-of-concept for a single, complex component.
    • Why: It has the smallest loop between an idea conveyed in natural language and an interactive outcome.

    Axure
    • Goal: To test complicated conditional logic within a specific user journey, without having to worry about how the whole system works.
    • Risks it is best suited to mitigate: Designing flows that break when users don’t follow the “happy path.”
    • Examples: A multi-step e-commerce checkout; a software configuration wizard; a dynamic form with dependent fields.
    • Why: It’s made to create complex if-then logic and manage variables visually. This lets you test complicated paths and edge cases in a user journey without writing any code.

    Figma
    • Goal: To make sure that the user interface looks good, aligns with the brand, and has a clear information architecture.
    • Risks it is best suited to mitigate: Making a product that looks bad, doesn’t fit with the brand, or has a layout that is hard to understand.
    • Examples: A marketing landing page; a user onboarding flow; presenting a new visual identity.
    • Why: It excels at high-fidelity visual design and provides simple, fast tools for linking static screens.

    ProtoPie, Framer
    • Goal: To make high-fidelity micro-interactions feel just right.
    • Risks it is best suited to mitigate: Shipping an application that feels cumbersome and unpleasant to use because of poorly executed interactions.
    • Examples: A custom pull-to-refresh animation; a fluid drag-and-drop interface; an animated chart or data visualization.
    • Why: These tools let you manipulate animation timelines, physics, and device sensor inputs in great detail. Designers can carefully work on and test the small things that make an interface feel really polished and fun to use.

    Low-code / No-code Tools (e.g., Bubble, Retool)
    • Goal: To create a working, data-driven app as quickly as possible.
    • Risks it is best suited to mitigate: The application will never be built because traditional development is too expensive.
    • Examples: An internal inventory tracker; a customer support dashboard; a simple directory website.
    • Why: They put a UI builder, a database, and hosting all in one place. The goal is not merely to make a prototype of an idea, but to make and release an actual, working product. This is the last step for many internal tools or MVPs.

    The key takeaway is that each method is a specialized tool for mitigating a specific type of risk. For example, Figma de-risks the visual presentation. ProtoPie de-risks the feel of an interaction. Intent Prototyping is in a unique position to tackle the most foundational risk in complex applications: building on a flawed or incoherent conceptual model.

    Bringing It All Together

    The era of the “lopsided horse” design, sleek on the surface but structurally unsound, is a direct result of the trade-off between fidelity and flexibility. This trade-off has led to a process filled with redundant effort and misplaced focus. Intent Prototyping, powered by modern AI, eliminates that conflict. It’s not just a shortcut to building faster — it’s a fundamental shift in how we design. By putting a clear, unambiguous intent at the heart of the process, it lets us get rid of the redundant work and focus on architecting a sound and robust system.

    There are three major benefits to this renewed focus. First, by going straight to live, interactive prototypes, we shift our validation efforts from the surface to the deep, testing the system’s actual logic with users from day one. Second, the very act of documenting the design intent forces us to clarify our ideas, ensuring that we fully understand the system’s underlying logic. Finally, this documented intent becomes a durable source of truth, eliminating ambiguous handoffs and the redundant, error-prone work of having engineers reverse-engineer a designer’s vision from a black box.

    Ultimately, Intent Prototyping changes the object of our work. It allows us to move beyond creating pictures of a product and empowers us to become architects of blueprints for a system. With the help of AI, we can finally make the live prototype the primary canvas for ideation, not just a high-effort afterthought.

    Appendices

    You can find the full Intent Prototyping Starter Kit, which includes all those prompts and guidelines, as well as the example from this article and a minimal boilerplate project, in this GitHub repository.

    Appendix 1: Sketch to UML Class Diagram

    You are an expert Senior Software Architect specializing in Domain-Driven Design. You are tasked with defining a conceptual model for an app based on information from a UI sketch.
    
    ## Workflow
    
    Follow these steps precisely:
    
    **Step 1:** Analyze the sketch carefully. There should be no ambiguity about what we are building.
    
    **Step 2:** Generate the conceptual model description in the Mermaid format using a UML class diagram.
    
    ## Ground Rules
    
    - Every entity must have the following attributes:
        - id (string)
        - createdAt (string, ISO 8601 format)
        - updatedAt (string, ISO 8601 format)
    - Include all attributes shown in the UI: If a piece of data is visually represented as a field for an entity, include it in the model, even if it's calculated from other attributes.
    - Do not add any speculative entities, attributes, or relationships ("just in case"). The model should serve the current sketch's requirements only. 
    - Pay special attention to cardinality definitions (e.g., if a relationship is optional on both sides, it cannot be "1" -- "0..*", it must be "0..1" -- "0..*").
    - Use only valid syntax in the Mermaid diagram.
    - Do not include enumerations in the Mermaid diagram.
    - Add comments explaining the purpose of every entity, attribute, and relationship, and their expected behavior (not as a part of the diagram, in the Markdown file).
    
    ## Naming Conventions
    
    - Names should reveal intent and purpose.
    - Use PascalCase for entity names.
    - Use camelCase for attributes and relationships.
    - Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
    
    ## Final Instructions
    
    - **No Assumptions:** Base every detail on visual evidence in the sketch, not on common design patterns. 
    - **Double-Check:** After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. 
    - **Do not add redundant empty lines between items.** 
    
    Your final output should be the complete, raw markdown content for Model.md.
    

    Appendix 2: Sketch to DAL Spec

    You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with creating a comprehensive technical specification for the development team in a structured markdown document, based on a UI sketch and a conceptual model description. 
    
    ## Workflow
    
    Follow these steps precisely:
    
    **Step 1:** Analyze the documentation carefully:
    
    - Model.md: the conceptual model
    - Sketch.png: the UI sketch
    
    There should be no ambiguity about what we are building.
    
    **Step 2:** Check out the guidelines:
    
    - TS-guidelines.md: TypeScript Best Practices
    - React-guidelines.md: React Best Practices
    - Zustand-guidelines.md: Zustand Best Practices
    
    **Step 3:** Create a Markdown specification for the stores and entity-specific hook that implements all the logic and provides all required operations.
    
    ---
    
    ## Markdown Output Structure
    
    Use this template for the entire document.
    
    markdown
    
    # Data Access Layer Specification
    
    This document outlines the specification for the data access layer of the application, following the principles defined in `docs/guidelines/Zustand-guidelines.md`.
    
    ## 1. Type Definitions
    
    Location: `src/types/entities.ts`
    
    ### 1.1. `BaseEntity`
    
    A shared interface that all entities should extend.
    
    [TypeScript interface definition]
    
    ### 1.2. `[Entity Name]`
    
    The interface for the [Entity Name] entity.
    
    [TypeScript interface definition]
    
    ## 2. Zustand Stores
    
    ### 2.1. Store for `[Entity Name]`
    
    **Location:** `src/stores/[Entity Name (plural)].ts`
    
    The Zustand store will manage the state of all [Entity Name] items.
    
    **Store State (`[Entity Name]State`):**
    
    [TypeScript interface definition]
    
    **Store Implementation (`use[Entity Name]Store`):**
    
    - The store will be created using `create<[Entity Name]State>()(...)`.
    - It will use the `persist` middleware from `zustand/middleware` to save state to `localStorage`. The persistence key will be `[entity-storage-key]`.
    - `[Entity Name (plural, camelCase)]` will be a dictionary (`Record<string, [Entity]>`) for O(1) access.
    
    **Actions:**
    
    - **`add[Entity Name]`**:  
        [Define the operation behavior based on entity requirements]
    - **`update[Entity Name]`**:  
        [Define the operation behavior based on entity requirements]
    - **`remove[Entity Name]`**:  
        [Define the operation behavior based on entity requirements]
    - **`doSomethingElseWith[Entity Name]`**:  
        [Define the operation behavior based on entity requirements]
    
    ## 3. Custom Hooks
    
    ### 3.1. `use[Entity Name (plural)]`
    
    **Location:** `src/hooks/use[Entity Name (plural)].ts`
    
    The hook will be the primary interface for UI components to interact with [Entity Name] data.
    
    **Hook Return Value:**
    
    [TypeScript interface definition]
    
    **Hook Implementation:**
    
    [List all properties and methods returned by this hook, and briefly explain the logic behind them, including data transformations, memoization. Do not write the actual code here.]
    
    --- 
    
    ## Final Instructions
    
    - **No Assumptions:** Base every detail in the specification on the conceptual model or visual evidence in the sketch, not on common design patterns. 
    - **Double-Check:** After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. 
    - **Do not add redundant empty lines between items.** 
    
    Your final output should be the complete, raw markdown content for DAL.md.
    

    Appendix 3: Sketch to UI Spec

    You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with creating a comprehensive technical specification by translating a UI sketch into a structured markdown document for the development team.
    
    ## Workflow
    
    Follow these steps precisely:
    
    **Step 1:** Analyze the documentation carefully: 
    
    - Sketch.png: the UI sketch
      - Note that red lines, red arrows, and red text within the sketch are annotations for you and should not be part of the final UI design. They provide hints and clarification. Never translate them to UI elements directly.
    - Model.md: the conceptual model
    - DAL.md: the Data Access Layer spec
    
    There should be no ambiguity about what we are building.
    
    **Step 2:** Check out the guidelines:
    
    - TS-guidelines.md: TypeScript Best Practices
    - React-guidelines.md: React Best Practices
    
    **Step 3:** Generate the complete markdown content for a new file, UI.md.
    
    ---
    
    ## Markdown Output Structure
    
    Use this template for the entire document.
    
    markdown
    
    # UI Layer Specification
    
    This document specifies the UI layer of the application, breaking it down into pages and reusable components based on the provided sketches. All components will adhere to Ant Design's principles and utilize the data access patterns defined in `docs/guidelines/Zustand-guidelines.md`.
    
    ## 1. High-Level Structure
    
    The application is a single-page application (SPA). It will be composed of a main layout, one primary page, and several reusable components. 
    
    ### 1.1. `App` Component
    
    The root component that sets up routing and global providers.
    
    -   **Location**: `src/App.tsx`
    -   **Purpose**: To provide global context, including Ant Design's `ConfigProvider` and `App` contexts for message notifications, and to render the main page.
    -   **Composition**:
      -   Wraps the application with `ConfigProvider` and `App as AntApp` from 'antd' to enable global message notifications as per `simple-ice/antd-messages.mdc`.
      -   Renders `[Page Name]`.
    
    ## 2. Pages
    
    ### 2.1. `[Page Name]`
    
    -   **Location:** `src/pages/PageName.tsx`
    -   **Purpose:** [Briefly describe the main goal and function of this page]
    -   **Data Access:**
      [List the specific hooks and functions this component uses to fetch or manage its data]
    -   **Internal State:**
        [Describe any state managed internally by this page using `useState`]
    -   **Composition:**
        [Briefly describe the content of this page]
    -   **User Interactions:**
        [Describe how the user interacts with this page] 
    -   **Logic:**
      [If applicable, provide additional comments on how this page should work]
    
    ## 3. Components
    
    ### 3.1. `[Component Name]`
    
    -   **Location:** `src/components/ComponentName.tsx`
    -   **Purpose:** [Explain what this component does and where it's used]
    -   **Props:**
      [TypeScript interface definition for the component's props. Props should be minimal. Avoid prop drilling by using hooks for data access.]
    -   **Data Access:**
        [List the specific hooks and functions this component uses to fetch or manage its data]
    -   **Internal State:**
        [Describe any state managed internally by this component using `useState`]
    -   **Composition:**
        [Briefly describe the content of this component]
    -   **User Interactions:**
        [Describe how the user interacts with the component]
    -   **Logic:**
      [If applicable, provide additional comments on how this component should work]
    
    --- 
    
    ## Final Instructions
    
    - **No Assumptions:** Base every detail on the visual evidence in the sketch, not on common design patterns. 
    - **Double-Check:** After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. 
    - **Do not add redundant empty lines between items.** 
    
    Your final output should be the complete, raw markdown content for UI.md.
    

    Appendix 4: DAL Spec to Plan

    You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with creating a plan to build a Data Access Layer for an application based on a spec.
    
    ## Workflow
    
    Follow these steps precisely:
    
    **Step 1:** Analyze the documentation carefully:
    
    - DAL.md: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter.
    
    There should be no ambiguity about what we are building.
    
    **Step 2:** Check out the guidelines:
    
    - TS-guidelines.md: TypeScript Best Practices
    - React-guidelines.md: React Best Practices
    - Zustand-guidelines.md: Zustand Best Practices
    
    **Step 3:** Create a step-by-step plan to build a Data Access Layer according to the spec. 
    
    Each task should:
    
    - Focus on one concern
    - Be reasonably small
    - Have a clear start + end
    - Contain clearly defined Objectives and Acceptance Criteria
    
    The last step of the plan should include creating a page to test all the capabilities of our Data Access Layer, and making it the start page of this application, so that I can manually check if it works properly. 
    
    I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to review results in between.
    
    ## Final Instructions
    
    - Note that we are not starting from scratch; the basic template has already been created using Vite.
    - Do not add redundant empty lines between items.
    
    Your final output should be the complete, raw markdown content for DAL-plan.md.
    

    Appendix 5: UI Spec to Plan

    You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with creating a plan to build a UI layer for an application based on a spec and a sketch.
    
    ## Workflow
    
    Follow these steps precisely:
    
    **Step 1:** Analyze the documentation carefully:
    
    - UI.md: The full technical specification for the UI layer of the application. Follow it carefully and to the letter.
    - Sketch.png: Contains important information about the layout and style, complements the UI Layer Specification. The final UI must be as close to this sketch as possible.
    
    There should be no ambiguity about what we are building.
    
    **Step 2:** Check out the guidelines:
    
    - TS-guidelines.md: TypeScript Best Practices
    - React-guidelines.md: React Best Practices
    
    **Step 3:** Create a step-by-step plan to build a UI layer according to the spec and the sketch. 
    
    Each task must:
    
    - Focus on one concern.
    - Be reasonably small.
    - Have a clear start + end.
    - Result in a verifiable increment of the application. Each increment should be manually testable to allow for functional review and approval before proceeding.
    - Contain clearly defined Objectives, Acceptance Criteria, and Manual Testing Plan.
    
    I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to test in between.
    
    ## Final Instructions
    
    - Note that we are not starting from scratch, the basic template has already been created using Vite, and the Data Access Layer has been built successfully.
    - For every task, describe how components should be integrated for verification. You must use the provided hooks to connect to the live Zustand store data—do not use mock data (note that the Data Access Layer has been already built successfully).
    - The Manual Testing Plan should read like a user guide. It must only contain actions a user can perform in the browser and must never reference any code files or programming tasks.
    - Do not add redundant empty lines between items.
    
    Your final output should be the complete, raw markdown content for UI-plan.md.
    

    Appendix 6: DAL Plan to Code

    You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with building a Data Access Layer for an application based on a spec.
    
    ## Workflow
    
    Follow these steps precisely:
    
    **Step 1:** Analyze the documentation carefully:
    
    - @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter. 
    
    There should be no ambiguity about what we are building.
    
    **Step 2:** Check out the guidelines:
    
    - @docs/guidelines/TS-guidelines.md: TypeScript Best Practices
    - @docs/guidelines/React-guidelines.md: React Best Practices
    - @docs/guidelines/Zustand-guidelines.md: Zustand Best Practices
    
    **Step 3:** Read the plan:
    
    - @docs/plans/DAL-plan.md: The step-by-step plan to build the Data Access Layer of the application.
    
    **Step 4:** Build a Data Access Layer for this application according to the spec and following the plan. 
    
    - Complete one task from the plan at a time. 
    - After each task, stop, so that I can test it. Don’t move to the next task before I tell you to do so. 
    - Do not do anything else. At this point, we are focused on building the Data Access Layer.
    
    ## Final Instructions
    
    - Do not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch. 
    - Do not start the development server, I'll do it by myself.
    

    Appendix 7: UI Plan to Code

    You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with building a UI layer for an application based on a spec and a sketch.
    
    ## Workflow
    
    Follow these steps precisely:
    
    **Step 1:** Analyze the documentation carefully:
    
    - @docs/specs/UI.md: The full technical specification for the UI layer of the application. Follow it carefully and to the letter.
    - @docs/intent/Sketch.png: Contains important information about the layout and style, complements the UI Layer Specification. The final UI must be as close to this sketch as possible.
    - @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. That layer is already ready. Use this spec to understand how to work with it. 
    
    There should be no ambiguity about what we are building.
    
    **Step 2:** Check out the guidelines:
    
    - @docs/guidelines/TS-guidelines.md: TypeScript Best Practices
    - @docs/guidelines/React-guidelines.md: React Best Practices
    
    **Step 3:** Read the plan:
    
    - @docs/plans/UI-plan.md: The step-by-step plan to build the UI layer of the application.
    
    **Step 4:** Build a UI layer for this application according to the spec and the sketch, following the step-by-step plan: 
    
    - Complete one task from the plan at a time. 
    - Make sure you build the UI according to the sketch; this is very important.
    - After each task, stop, so that I can test it. Don’t move to the next task before I tell you to do so. 
    
    ## Final Instructions
    
    - Do not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch. 
    - Follow Ant Design's default styles and components. 
    - Do not touch the data access layer: it's ready and it's perfect. 
    - Do not start the development server, I'll do it by myself.
    

    Appendix 8: TS-guidelines.md

    # Guidelines: TypeScript Best Practices
    
    ## Type System & Type Safety
    
    - Use TypeScript for all code and enable strict mode.
    - Ensure complete type safety throughout stores, hooks, and component interfaces.
    - Prefer interfaces over types for object definitions; use types for unions, intersections, and mapped types.
    - Entity interfaces should extend common patterns while maintaining their specific properties.
    - Use TypeScript type guards in filtering operations for relationship safety.
    - Avoid the 'any' type; prefer 'unknown' when necessary.
    - Use generics to create reusable components and functions.
    - Utilize TypeScript's features to enforce type safety.
    - Use type-only imports (import type { MyType } from './types') when importing types, because verbatimModuleSyntax is enabled.
    - Avoid enums; use maps instead.
    
    ## Naming Conventions
    
    - Names should reveal intent and purpose.
    - Use PascalCase for component names and types/interfaces.
    - Prefix interfaces for React props with 'Props' (e.g., ButtonProps).
    - Use camelCase for variables and functions.
    - Use UPPER_CASE for constants.
    - Use lowercase with dashes for directories, and PascalCase for files with components (e.g., components/auth-wizard/AuthForm.tsx).
    - Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
    - Favor named exports for components.
    
    ## Code Structure & Patterns
    
    - Write concise, technical TypeScript code with accurate examples.
    - Use functional and declarative programming patterns; avoid classes.
    - Prefer iteration and modularization over code duplication.
    - Use the "function" keyword for pure functions.
    - Use curly braces for all conditionals for consistency and clarity.
    - Structure files appropriately based on their purpose.
    - Keep related code together and encapsulate implementation details.
    
    ## Performance & Error Handling
    
    - Use immutable and efficient data structures and algorithms.
    - Create custom error types for domain-specific errors.
    - Use try-catch blocks with typed catch clauses.
    - Handle Promise rejections and async errors properly.
    - Log errors appropriately and handle edge cases gracefully.
    
    ## Project Organization
    
    - Place shared types in a types directory.
    - Use barrel exports (index.ts) for organizing exports.
    - Structure files and directories based on their purpose.
    
    ## Other Rules
    
    - Use comments to explain complex logic or non-obvious decisions.
    - Follow the single responsibility principle: each function should do exactly one thing.
    - Follow the DRY (Don't Repeat Yourself) principle.
    - Do not implement placeholder functions, empty methods, or "just in case" logic. Code should serve the current specification's requirements only.
    - Use 2 spaces for indentation (no tabs).
    

    Appendix 9: React-guidelines.md

    # Guidelines: React Best Practices
    
    ## Component Structure
    
    - Use functional components over class components
    - Keep components small and focused
    - Extract reusable logic into custom hooks
    - Use composition over inheritance
    - Implement proper prop types with TypeScript
    - Structure React files: exported component, subcomponents, helpers, static content, types
    - Use declarative TSX for React components
    - Ensure that UI components use custom hooks for data fetching and operations rather than receive data via props, except for simplest components
    
    ## React Patterns
    
    - Utilize useState and useEffect hooks for state and side effects
    - Use React.memo for performance optimization when needed
    - Utilize React.lazy and Suspense for code-splitting
    - Implement error boundaries for robust error handling
    - Keep styles close to components
    
    ## React Performance
    
    - Avoid unnecessary re-renders
    - Lazy load components and images when possible
    - Implement efficient state management
    - Optimize rendering strategies
    - Optimize network requests
    - Employ memoization techniques (e.g., React.memo, useMemo, useCallback)
    
    ## React Project Structure
    
    /src
    - /components - UI components (every component in a separate file)
    - /hooks - public-facing custom hooks (every hook in a separate file)
    - /providers - React context providers (every provider in a separate file)
    - /pages - page components (every page in a separate file)
    - /stores - entity-specific Zustand stores (every store in a separate file)
    - /styles - global styles (if needed)
    - /types - shared TypeScript types and interfaces
    

    Appendix 10: Zustand-guidelines.md

    # Guidelines: Zustand Best Practices
    
    ## Core Principles
    
    - **Implement a data layer** for this React application following this specification carefully and to the letter.
    - **Complete separation of concerns**: All data operations should be accessible in UI components through simple and clean entity-specific hooks, ensuring state management logic is fully separated from UI logic.
    - **Shared state architecture**: Different UI components should work with the same shared state, despite using entity-specific hooks separately.
    
    ## Technology Stack
    
    - **State management**: Use Zustand for state management with automatic localStorage persistence via the persist middleware.
    
    ## Store Architecture
    
    - **Base entity:** Implement a BaseEntity interface with common properties that all entities extend:
    ```typescript
    export interface BaseEntity {
      id: string;
      createdAt: string; // ISO 8601 format
      updatedAt: string; // ISO 8601 format
    }
    ```
    - **Entity-specific stores**: Create separate Zustand stores for each entity type.
    - **Dictionary-based storage**: Use dictionary/map structures (Record<string, Entity>) rather than arrays for O(1) access by ID.
    - **Handle relationships**: Implement cross-entity relationships (like cascade deletes) within the stores where appropriate.
    
    ## Hook Layer
    
    The hook layer is the exclusive interface between UI components and the Zustand stores. It is designed to be simple, predictable, and follow a consistent pattern across all entities.
    
    ### Core Principles
    
    1.  **One Hook Per Entity**: There will be a single, comprehensive custom hook for each entity (e.g., useBlogPosts, useCategories). This hook is the sole entry point for all data and operations related to that entity. Separate hooks for single-item access will not be created.
    2.  **Return reactive data, not getter functions**: To prevent stale data, hooks must return the state itself, not a function that retrieves state. Parameterize hooks to accept filters and return the derived data directly. A component calling a getter function will not update when the underlying data changes.
    3.  **Expose Dictionaries for O(1) Access**: To provide simple and direct access to data, every hook will return a dictionary (Record<string, Entity>) of the relevant items.
    
    ### The Standard Hook Pattern
    
    Every entity hook will follow this implementation pattern:
    
    1.  **Subscribe** to the entire dictionary of entities from the corresponding Zustand store. This ensures the hook is reactive to any change in the data.
    2.  **Filter** the data based on the parameters passed into the hook. This logic will be memoized with useMemo for efficiency. If no parameters are provided, the hook will operate on the entire dataset.
    3.  **Return a Consistent Shape**: The hook will always return an object containing:
        *   A **filtered and sorted array** (e.g., blogPosts) for rendering lists.
        *   A **filtered dictionary** (e.g., blogPostsDict) for convenient O(1) lookup within the component.
        *   All necessary **action functions** (add, update, remove) and **relationship operations**.
        *   All necessary **helper functions** and **derived data objects**. Helper functions are suitable for pure, stateless logic (e.g., calculators). Derived data objects are memoized values that provide aggregated or summarized information from the state (e.g., an object containing status counts). They must be derived directly from the reactive state to ensure they update automatically when the underlying data changes.
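
    For illustration, a minimal hook following this pattern for the hypothetical BlogPost entity used in the examples above could look like the sketch below (store and attribute names are assumptions, not requirements):

    ```typescript
    import { useMemo } from 'react';
    import { useBlogPostsStore } from '../stores/blogPosts'; // hypothetical store

    export function useBlogPosts({ categoryId }: { categoryId?: string } = {}) {
      // 1. Subscribe to the entire dictionary so the hook reacts to any change.
      const allBlogPosts = useBlogPostsStore((state) => state.blogPosts);
      const addBlogPost = useBlogPostsStore((state) => state.addBlogPost);
      const updateBlogPost = useBlogPostsStore((state) => state.updateBlogPost);
      const removeBlogPost = useBlogPostsStore((state) => state.removeBlogPost);

      // 2. Filter, memoized; with no parameters, operate on the entire dataset.
      const blogPostsDict = useMemo(() => {
        if (!categoryId) {
          return allBlogPosts;
        }
        return Object.fromEntries(
          Object.entries(allBlogPosts).filter(([, post]) => post.categoryIds.includes(categoryId))
        );
      }, [allBlogPosts, categoryId]);

      // 3. Return a consistent shape: sorted array, dictionary, and actions.
      const blogPosts = useMemo(
        () => Object.values(blogPostsDict).sort((a, b) => b.createdAt.localeCompare(a.createdAt)),
        [blogPostsDict]
      );

      return { blogPosts, blogPostsDict, addBlogPost, updateBlogPost, removeBlogPost };
    }
    ```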
    
    ## API Design Standards
    
    - **Object Parameters**: Use object parameters instead of multiple direct parameters for better extensibility:
    ```typescript
    // ✅ Preferred
    add({ title, categoryIds })

    // ❌ Avoid
    add(title, categoryIds)
    ```
    - **Internal Methods**: Use underscore-prefixed methods for cross-store operations to maintain clean separation.
    
    ## State Validation Standards
    
    - **Existence checks**: All update and remove operations should validate entity existence before proceeding.
    - **Relationship validation**: Verify both entities exist before establishing relationships between them.
    
    ## Error Handling Patterns
    
    - **Operation failures**: Define behavior when operations fail (e.g., updating non-existent entities).
    - **Graceful degradation**: How to handle missing related entities in helper functions.
    
    ## Other Standards
    
    - **Secure ID generation**: Use crypto.randomUUID() for entity ID generation instead of custom implementations for better uniqueness guarantees and security.
    - **Return type consistency**: add operations return generated IDs for component workflows requiring immediate entity access, while update and remove operations return void to maintain clean modification APIs.
    

  • Shades Of October (2025 Wallpapers Edition)

    As September comes to a close and October takes over, we are in the midst of a time of transition. The air in the morning feels crisper, the leaves are changing colors, and winding down with a warm cup of tea regains its almost-forgotten appeal after a busy summer. When we look closely, October is full of little moments that have the power to inspire, and whatever your secret to finding new inspiration might be, our monthly wallpapers series is bound to give you a little inspiration boost, too.

    For this October edition, artists and designers from across the globe once again challenged their creative skills and designed wallpapers to spark your imagination. You’ll find them compiled below, along with a selection of timeless October treasures from our wallpapers archives that are just too good to gather dust.

    A huge thank you to everyone who shared their designs with us this month — this post wouldn’t exist without your creativity and kind support! Happy October!

    • You can click on every image to see a larger preview.
    • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experiences through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us but rather designed from scratch by the artists themselves.
    • Submit your wallpaper design! 👩‍🎨
      Feeling inspired? We are always looking for creative talent and would love to feature your desktop wallpaper in one of our upcoming posts. Join in ↬

    Midnight Mischief

    Designed by Libra Fire from Serbia.

    AI

    Designed by Ricardo Gimenes from Spain.

    Glowing Pumpkin Lanterns

    “I was inspired by the classic orange and purple colors of October and Halloween, and wanted to combine those two themes to create a fun pumpkin lantern background.” — Designed by Melissa Bostjancic from New Jersey, United States.

    Halloween 2040

    Designed by Ricardo Gimenes from Spain.

    When The Mind Opens

    “In October, we observe World Mental Health Day. The open window in the head symbolizes light and fresh thoughts, the plant represents quiet inner growth and resilience, and the bird brings freedom and connection with the world. Together, they create an image of a mind that breathes, grows, and remains open to new beginnings.” — Designed by Ginger IT Solutions from Serbia.

    Enter The Factory

    “I took this photo while visiting an old factory. The red light was astonishing.” — Designed by Philippe Brouard from France.

    The Crow And The Ghosts

    “If my heart were a season, it would be autumn.” — Designed by Lívia Lénárt from Hungary.

    The Night Drive

    Designed by Vlad Gerasimov from Georgia.

    Spooky Town

    Designed by Xenia Latii from Germany.

    Bird Migration Portal

    “When I was young, I had a bird’s nest not so far from my room window. I watched the birds almost every day because those swallows always left their nests in October. As a child, I dreamt that they all flew together to a nicer place, where they were not so cold.” — Designed by Eline Claeys from Belgium.

    Hanlu

    “The term ‘Hanlu’ literally translates as ‘Cold Dew.’ The cold dew brings brisk mornings and evenings. Eventually the briskness will turn cold, as winter is coming soon. And chrysanthemum is the iconic flower of Cold Dew.” — Designed by Hong, ZI-Qing from Taiwan.

    Autumn’s Splendor

    “The transition to autumn brings forth a rich visual tapestry of warm colors and falling leaves, making it a natural choice for a wallpaper theme.” — Designed by Farhan Srambiyan from India.

    Ghostbusters

    Designed by Ricardo Gimenes from Spain.

    Hello Autumn

    “Did you know that squirrels don’t just eat nuts? They really like to eat fruit, too. Since apples are the seasonal fruit of October, I decided to combine both things into a beautiful image.” — Designed by Erin Troch from Belgium.

    Discovering The Universe

    “Autumn is the best moment for discovering the universe. I am looking for a new galaxy or maybe… a UFO!” — Designed by Verónica Valenzuela from Spain.

    The Return Of The Living Dead

    Designed by Ricardo Gimenes from Spain.

    Goddess Makosh

    “At the end of the kolodar, as everything begins to ripen, the village sets out to harvesting. Together with the farmers goes Makosh, the Goddess of fields and crops, ensuring a prosperous harvest. What she gave her life and health all year round is now mature and rich, thus, as a sign of gratitude, the girls bring her bread and wine. The beautiful game of the goddess makes the hard harvest easier, while the song of the farmer permeates the field.” — Designed by PopArt Studio from Serbia.

    Strange October Journey

    “October makes the leaves fall to cover the land with lovely auburn colors and brings out all types of weird with them.” — Designed by Mi Ni Studio from Serbia.

    Autumn Deer

    Designed by Amy Hamilton from Canada.

    Transitions

    “To me, October is a transitional month. We gradually slide from summer to autumn. That’s why I chose to use a lot of gradients. I also wanted to work with simple shapes, because I think of October as the ‘back to nature/back to basics month’.” — Designed by Jelle Denturck from Belgium.

    Happy Fall!

    “Fall is my favorite season!” — Designed by Thuy Truong from the United States.

    Roger That Rogue Rover

    “The story is a mash-up of retro science fiction and zombie infection. What would happen if a Mars rover came into contact with an unknown Martian material and got infected with a virus? What if it reversed its intended purpose of research and exploration, instead choosing a life of chaos and evil? What if they all ran rogue on Mars? Would humans ever dare to voyage to the red planet?” Designed by Frank Candamil from the United States.

    Turtles In Space

    “Finished September, with October comes the month of routines. This year we share it with turtles that explore space.” — Designed by Veronica Valenzuela from Spain.

    First Scarf And The Beach

    “When I was little, my parents always took me and my sister for a walk at the beach in Nieuwpoort. We didn’t really do those beach walks in the summer but always when the sky started to turn gray and the days became colder. My sister and I always took out our warmest scarfs and played in the sand while my parents walked behind us. I really loved those Saturday or Sunday mornings where we were all together. I think October (when it’s not raining) is the perfect month to go to the beach for ‘uitwaaien’ (to blow out), to walk in the wind and take a break and clear your head, relieve the stress or forget one’s problems.” — Designed by Gwen Bogaert from Belgium.

    Shades Of Gold

    “We are about to experience the magical imagery of nature, with all the yellows, ochers, oranges, and reds coming our way this fall. With all the subtle sunrises and the burning sunsets before us, we feel so joyful that we are going to shout it out to the world from the top of the mountains.” — Designed by PopArt Studio from Serbia.

    Autumn Vibes

    “Autumn has come, the time of long walks in the rain, weekends spent with loved ones, with hot drinks, and a lot of tenderness. Enjoy.” — Designed by LibraFire from Serbia.

    Game Night And Hot Chocolate

    “To me, October is all about cozy evenings with hot chocolate, freshly baked cookies, and a game night with friends or family.” — Designed by Lieselot Geirnaert from Belgium.

    Haunted House

    “Love all the Halloween costumes and decorations!” — Designed by Tazi from Australia.

    Say Bye To Summer

    “And hello to autumn! The summer heat and high season is over. It’s time to pack our backpacks and head for the mountains — there are many treasures waiting to be discovered!” Designed by Agnes Sobon from Poland.

    Tea And Cookies

    “As it gets colder outside, all I want to do is stay inside with a big pot of tea, eat cookies and read or watch a movie, wrapped in a blanket. Is it just me?” — Designed by Miruna Sfia from Romania.

    The Return

    Designed by Ricardo Gimenes from Spain.

    Boo!

    Designed by Mad Fish Digital from Portland, OR.

    Trick Or Treat

    “Have you ever wondered if all the little creatures of the animal kingdom celebrate Halloween as humans do? My answer is definitely ‘YES! They do!’ They use acorns as baskets to collect all the treats, pastry brushes as brooms for the spookiest witches and hats made from the tips set of your pastry bag. So, if you happen to miss something from your kitchen or from your tool box, it may be one of them, trying to get ready for All Hallows’ Eve.” — Designed by Carla Dipasquale from Italy.

    Dope Code

    “October is the month when the weather in Poland starts to get colder, and it gets very rainy, too. You can’t always spend your free time outside, so it’s the perfect opportunity to get some hot coffee and work on your next cool web project!” — Designed by Robert Brodziak from Poland.

    Happy Halloween

    Designed by Ricardo Gimenes from Spain.

    Ghostober

    Designed by Ricardo Delgado from Mexico City.

    Get Featured Next Month

    Would you like to get featured in our next wallpapers post? We’ll publish the November wallpapers on October 31, so if you’d like to be a part of the collection, please don’t hesitate to submit your design. We can’t wait to see what you’ll come up with!

  • From Prompt To Partner: Designing Your Custom AI Assistant

    In “A Week In The Life Of An AI-Augmented Designer”, Kate stumbled her way through an AI-augmented sprint (coffee was chugged, mistakes were made). In “Prompting Is A Design Act”, we introduced WIRE+FRAME, a framework to structure prompts like designers structure creative briefs. Now we’ll take the next step: packaging those structured prompts into AI assistants you can design, reuse, and share.

    AI assistants go by different names: CustomGPTs (ChatGPT), Agents (Copilot), and Gems (Gemini). But they all serve the same function — allowing you to customize the default AI model for your unique needs. If we carry over our smart intern analogy, think of these as interns trained to assist you with specific tasks, eliminating the need for repeated instructions or information, and who can support not just you, but your entire team.

    Why Build Your Own Assistant?

    If you’ve ever copied and pasted the same mega-prompt for the nth time, you’ve experienced the pain. An AI assistant turns a one-off “great prompt” into a dependable teammate. And if you’ve used any of the publicly available AI Assistants, you’ve realized quickly that they’re usually generic and not tailored for your use.

    Public AI assistants are great for inspiration, but nothing beats an assistant that solves a repeated problem for you and your team, in your voice, with your context and constraints baked in. Instead of reinventing the wheel by writing new prompts each time, or repeatedly copy-pasting your structured prompts every time, or spending cycles trying to make a public AI Assistant work the way you need it to, your own AI Assistant allows you and others to easily get better, repeatable, consistent results faster.

    Benefits Of Reusing Prompts, Even Your Own

    Some of the benefits of building your own AI Assistant over writing or reusing your prompts include:

    • Focused on a real repeating problem
      A good AI Assistant isn’t a general-purpose “do everything” bot that you need to keep tweaking. It focuses on a single, recurring problem that takes a long time to complete manually and often results in varying quality depending on who’s doing it (e.g., analyzing customer feedback).
    • Customized for your context
      Most large language models (LLMs, such as ChatGPT) are designed to be everything to everyone. An AI Assistant changes that by allowing you to customize it to automatically work like you want it to, instead of a generic AI.
    • Consistency at scale
      You can use the WIRE+FRAME prompt framework to create structured, reusable prompts. An AI Assistant is the next logical step: instead of copy-pasting that fine-tuned prompt and sharing contextual information and examples each time, you can bake it into the assistant itself, allowing you and others to achieve the same consistent results every time.
    • Codifying expertise
      Every time you turn a great prompt into an AI Assistant, you’re essentially bottling your expertise. Your assistant becomes a living design guide that outlasts projects (and even job changes).
    • Faster ramp-up for teammates
      Instead of new designers starting from a blank slate, they can use pre-tuned assistants. Think of it as knowledge transfer without the long onboarding lecture.

    Reasons For Your Own AI Assistant Instead Of Public AI Assistants

    Public AI assistants are like stock templates. They serve a more specific purpose than the generic AI platform and are useful starting points, but if you want something tailored to your needs and team, you should build your own.

    A few reasons for building your AI Assistant instead of using a public assistant someone else created include:

    • Fit: Public assistants are built for the masses. Your work has quirks, tone, and processes they’ll never quite match.
    • Trust & Security: You don’t control what instructions or hidden guardrails someone else baked in. With your own assistant, you know exactly what it will (and won’t) do.
    • Evolution: An AI Assistant you design and build can grow with your team. You can update files, tweak prompts, and maintain a changelog — things a public bot won’t do for you.

    Your own AI Assistants allow you to take your successful ways of interacting with AI and make them repeatable and shareable. And while they are tailored to your and your team’s way of working, remember that they are still based on generic AI models, so the usual AI disclaimers apply:

    Don’t share anything you wouldn’t want screenshotted in the next company all-hands. Keep it safe, private, and user-respecting. A shared AI Assistant can potentially reveal its inner workings or data.

    Note: We will be building an AI assistant using ChatGPT, aka a CustomGPT, but you can try the same process with any decent LLM sidekick. As of publication, a paid account is required to create CustomGPTs, but once created, they can be shared and used by anyone, regardless of whether they have a paid or free account. Similar limitations apply to the other platforms. Just remember that outputs can vary depending on the LLM model used, the model’s training, mood, and flair for creative hallucinations.

    When Not To Build An AI Assistant (Yet)

    An AI Assistant is great when the same audience has the same problem often. When the fit isn’t there, the risk is high; you should skip building an AI Assistant for now, as explained below:

    • One-off or rare tasks
      If it won’t be reused at least monthly, I’d recommend keeping it as a saved WIRE+FRAME prompt. For example, something for a one-time audit or creating placeholder content for a specific screen.
    • Sensitive or regulated data
      If you need to build in personally identifiable information (PII), health, finance, legal, or trade secrets, err on the side of not building an AI Assistant. Even if the AI platform promises not to use your data, I’d strongly suggest using redaction or an approved enterprise tool with necessary safeguards in place (company-approved enterprise versions of Microsoft Copilot, for instance).
    • Heavy orchestration or logic
      Multi-step workflows, API calls, database writes, and approvals go beyond the scope of an AI Assistant into Agentic territory (as of now). I’d recommend not trying to build an AI Assistant for these cases.
    • Real-time information
      AI Assistants may not be able to access real-time data like prices, live metrics, or breaking news. If you need these, you can upload near-real-time data (as we do below) or connect with data sources that you or your company controls, rather than relying on the open web.
    • High-stakes outputs
      For cases related to compliance, legal, medical, or any other area requiring auditability, consider implementing process guardrails and training to keep humans in the loop for proper review and accountability.
    • No measurable win
      If you can’t name a success metric (such as time saved, first-draft quality, or fewer re-dos), I’d recommend keeping it as a saved WIRE+FRAME prompt.

    Just because these are signs that you should not build your AI Assistant now doesn’t mean you shouldn’t ever. Revisit this decision when you notice that you’re using the same prompt weekly, multiple teammates are asking for it, or the manual time spent copy-pasting and refining starts exceeding ~15 minutes. Those are signs that an AI Assistant will pay back quickly.

    In a nutshell, build an AI Assistant when you can name the problem, the audience, frequency, and the win. The rest of this article shows how to turn your successful WIRE+FRAME prompt into a CustomGPT that you and your team can actually use. No advanced knowledge, coding skills, or hacks needed.

    As Always, Start With The User

    This should go without saying to UX professionals, but it’s worth a reminder: if you’re building an AI assistant for anyone besides yourself, start with the user and their needs before you build anything.

    • Who will use this assistant?
    • What’s the specific pain or task they struggle with today?
    • What language, tone, and examples will feel natural to them?

    Building without doing this first is a sure way to end up with clever assistants nobody actually wants to use. Think of it like any other product: before you build features, you understand your audience. The same rule applies here, even more so, because AI assistants are only as helpful as they are useful and usable.

    From Prompt To Assistant

    You’ve already done the heavy lifting with WIRE+FRAME. Now you’re just turning that refined and reliable prompt into a CustomGPT you can reuse and share. You can use MATCH as a checklist to go from a great prompt to a useful AI assistant.

    • M: Map your prompt
      Port your successful WIRE+FRAME prompt into the AI assistant.
    • A: Add knowledge and training
      Ground the assistant in your world. Upload knowledge files, examples, or guides that make it uniquely yours.
    • T: Tailor for audience
      Make it feel natural to the people who will use it. Give it the right capabilities, but also adjust its settings, tone, examples, and conversation starters so they land with your audience.
    • C: Check, test, and refine
      Test the preview with different inputs and refine until you get the results you expect.
    • H: Hand off and maintain
      Set sharing options and permissions, share the link, and maintain it.

    A few weeks ago, we invited readers to share their ideas for AI assistants they wished they had. The top contenders were:

    • Prototype Prodigy: Transform rough ideas into prototypes and export them into Figma to refine.
    • Critique Coach: Review wireframes or mockups and point out accessibility and usability gaps.

    But the favorite was an AI assistant to turn tons of customer feedback into actionable insights. Readers replied with variations of: “An assistant that can quickly sort through piles of survey responses, app reviews, or open-ended comments and turn them into themes we can act on.”

    And that’s the one we will build in this article — say hello to Insight Interpreter.

    Walkthrough: Insight Interpreter

    Having lots of customer feedback is a nice problem to have. Companies actively seek out customer feedback through surveys and studies (solicited), but also receive feedback that may not have been asked for through social media or public reviews (unsolicited). This is a goldmine of information, but it can be messy and overwhelming trying to make sense of it all, and it’s nobody’s idea of fun. Here’s where an AI assistant like the Insight Interpreter can help. We’ll turn the example prompt created using the WIRE+FRAME framework in Prompting Is A Design Act into a CustomGPT.

    When you start building a CustomGPT by visiting https://chat.openai.com/gpts/editor, you’ll see two paths:

    • Conversational interface
      Vibe-chat your way — it’s easy and quick, but similar to unstructured prompts, your inputs get baked in a little messily, so you may end up with vague or inconsistent instructions.
    • Configure interface
      The structured form where you type instructions, upload files, and toggle capabilities. Less instant gratification, less winging it, but more control. This is the option you’ll want for assistants you plan to share or depend on regularly.

    The good news is that MATCH works for both. In conversational mode, you can use it as a mental checklist, and we’ll walk through using it in configure mode as a more formal checklist in this article.

    M: Map Your Prompt

    Paste your full WIRE+FRAME prompt into the Instructions section exactly as written. As a refresher, I’ve included the mapping and snippets of the detailed prompt from before:

    • Who & What: The AI persona and the core deliverable (“…senior UX researcher and customer insights analyst… specialize in synthesizing qualitative data from diverse sources…”).
    • Input Context: Background or data scope to frame the task (“…analyzing customer feedback uploaded from sources such as…”).
    • Rules & Constraints: Boundaries (“…do not fabricate pain points, representative quotes, journey stages, or patterns…”).
    • Expected Output: Format and fields of the deliverable (“…a structured list of themes. For each theme, include…”).
    • Flow: Explicit, ordered sub-tasks (“Recommended flow of tasks: Step 1…”).
    • Reference Voice: Tone, mood, or reference (“…concise, pattern-driven, and objective…”).
    • Ask for Clarification: Ask questions if unclear (“…if data is missing or unclear, ask before continuing…”).
    • Memory: Memory to recall earlier definitions (“Unless explicitly instructed otherwise, keep using this process…”).
    • Evaluate & Iterate: Have the AI self-critique outputs (“…critically evaluate…suggest improvements…”).

    If you’re building Copilot Agents or Gemini Gems instead of CustomGPTs, you still paste your WIRE+FRAME prompt into their respective Instructions sections.

    A: Add Knowledge And Training

    In the knowledge section, upload up to 20 files, clearly labeled, that will help the CustomGPT respond effectively. Keep files small and versioned: reviews_Q2_2025.csv beats latestfile_final2.csv. For this prompt, which analyzes customer feedback, generates themes organized by customer journey, and rates them by severity and effort, the files could include:

    • Taxonomy of themes;
    • Instructions on parsing uploaded data;
    • Examples of real UX research reports using this structure;
    • Scoring guidelines for severity and effort, e.g., what makes something a 3 vs. a 5 in severity;
    • Customer journey map stages;
    • Customer feedback file templates (not actual data).

    An example of a file to help it parse uploaded data is shown below:
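
    As a rough, hypothetical sketch (the upload format, column names, and rules below are assumptions for illustration, not the actual file used for the Insight Interpreter), such a parsing-instructions file could look like this:

    How to parse uploaded feedback files:
    - Expect a CSV with columns: source, date, user_segment, feedback_text, rating.
    - Treat each row of feedback_text as one piece of feedback; ignore empty rows.
    - If a column is missing, ask the user to confirm the mapping before analyzing.
    - Do not infer ratings or segments that are not present in the data.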

    T: Tailor For Audience

    • Audience tailoring
      If you are building this for others, your prompt should have addressed tone in the “Reference Voice” section. If you didn’t, do it now, so the CustomGPT can be tailored to the tone and expertise level of users who will use it. In addition, use the Conversation starters section to add a few examples or common prompts for users to start using the CustomGPT, again, worded for your users. For instance, we could use “Analyze feedback from the attached file” for our Insights Interpreter to make it more self-explanatory for anyone, instead of “Analyze data,” which may be good enough if you were using it alone. For my Designerly Curiosity GPT, assuming that users may not know what it could do, I use “What are the types of curiosity?” and “Give me a micro-practice to spark curiosity”.
    • Functional tailoring
      Fill in the CustomGPT name, icon, description, and capabilities.
      • Name: Pick one that will make it clear what the CustomGPT does. Let’s use “Insights Interpreter — Customer Feedback Analyzer”. If needed, you can also add a version number. This name will show up in the sidebar when people use it or pin it, so make the first part memorable and easily identifiable.
      • Icon: Upload an image or generate one. Keep it simple so it can be easily recognized in a smaller dimension when people pin it in their sidebar.
      • Description: A brief, yet clear description of what the CustomGPT can do. If you plan to list it in the GPT store, this will help people decide if they should pick yours over something similar.
      • Recommended Model: If your CustomGPT needs the capabilities of a particular model (e.g., needs GPT-5 thinking for detailed analysis), select it. In most cases, you can safely leave it up to the user or select the most common model.
      • Capabilities: Turn off anything you won’t need. We’ll turn off “Web Search” so the CustomGPT focuses only on uploaded data without expanding the search online, and we’ll turn on “Code Interpreter & Data Analysis” so it can understand and process uploaded files. “Canvas” lets users work with the GPT on a shared canvas for writing and editing tasks, and “Image generation” is only needed if the CustomGPT has to create images.
      • Actions: Making third-party APIs available to the CustomGPT, advanced functionality we don’t need.
      • Additional Settings: Sneakily hidden and opted in by default; I opt out of having my conversations used to train OpenAI’s models.

    C: Check, Test & Refine

    Do one last visual check to make sure you’ve filled in all applicable fields and the basics are in place: is the concept sharp and clear (not a do-everything bot)? Are the roles, goals, and tone clear? Do we have the right assets (docs, guides) to support it? Is the flow simple enough that others can get started easily? Once those boxes are checked, move into testing.

    Use the Preview panel to verify that your CustomGPT performs as well, or better, than your original WIRE+FRAME prompt, and that it works for your intended audience. Try a few representative inputs and compare the results to what you expected. If something worked before but doesn’t now, check whether new instructions or knowledge files are overriding it.

    When things don’t look right, here are quick debugging fixes:

    • Generic answers?
      Tighten Input Context or update the knowledge files.
    • Hallucinations?
      Revisit your Rules section. Turn off web browsing if you don’t need external data.
    • Wrong tone?
      Strengthen Reference Voice or swap in clearer examples.
    • Inconsistent?
      Test across models in preview and set the most reliable one as “Recommended.”

    H: Hand Off And Maintain

    When your CustomGPT is ready, you can publish it via the “Create” option. Select the appropriate access option:

    • Only me: Private use. Perfect if you’re still experimenting or keeping it personal.
    • Anyone with the link: Exactly what it means. Shareable but not searchable. Great for pilots with a team or small group. Just remember that links can be reshared, so treat them as semi-public.
    • GPT Store: Fully public. Your assistant is listed and findable by anyone browsing the store. (This is the option we’ll use.)
    • Business workspace (if you’re on GPT Business): Share with others within your business account only — the easiest way to keep it in-house and controlled.

    But handoff doesn’t end with hitting publish; you should maintain the assistant to keep it relevant and useful:

    • Collect feedback: Ask teammates what worked, what didn’t, and what they had to fix manually.
    • Iterate: Apply changes directly or duplicate the GPT if you want multiple versions in play. You can find all your CustomGPTs at: https://chatgpt.com/gpts/mine
    • Track changes: Keep a simple changelog (date, version, updates) for traceability.
    • Refresh knowledge: Update knowledge files and examples on a regular cadence so answers don’t go stale.

    And that’s it! Our Insights Interpreter is now live!

    Since we used the WIRE+FRAME prompt from the previous article to create the Insights Interpreter CustomGPT, I compared the outputs:

    The results are similar, with slight differences, and that’s expected. If you compare the results carefully, the themes, issues, journey stages, frequency, severity, and estimated effort match, with some differences in the wording of the theme, issue summary, and problem statement. The opportunities and quotes show more visible differences. Most of the difference comes from the CustomGPT’s knowledge and training files (instructions, examples, and guardrails), which now act as always-on guidance.

    Keep in mind that in reality, Generative AI is by nature generative, so outputs will vary. Even with the same data, you won’t get identical wording every time. In addition, underlying models and their capabilities rapidly change. If you want to keep things as consistent as possible, recommend a model (though people can change it), track versions of your data, and compare for structure, priorities, and evidence rather than exact wording.

    While I’d love for you to use Insights Interpreter, I strongly recommend taking 15 minutes to follow the steps above and create your own. That way, it will be tailored to exactly what you or your team needs, including the tone, context, and output formats, and you’ll end up with the real AI Assistant you need!

    Inspiration For Other AI Assistants

    We just built the Insight Interpreter and mentioned two contenders: Critique Coach and Prototype Prodigy. Here are a few other realistic uses that can spark ideas for your own AI Assistant:

    • Workshop Wizard: Generates workshop agendas, produces icebreaker questions, and drafts follow-up surveys.
    • Research Roundup Buddy: Summarizes raw transcripts into key themes, then creates highlight reels (quotes + visuals) for team share-outs.
    • Persona Refresher: Updates stale personas with the latest customer feedback, then rewrites them in different tones (boardroom formal vs. design-team casual).
    • Content Checker: Proofs copy for tone, accessibility, and reading level before it ever hits your site.
    • Trend Tamer: Scans competitor reviews and identifies emerging patterns you can act on before they reach your roadmap.
    • Microcopy Provocateur: Tests alternate copy options by injecting different tones (sassy, calm, ironic, nurturing) and role-playing how users might react, especially useful for error states or Call to Actions.
    • Ethical UX Debater: Challenges your design decisions and deceptive designs by simulating the voice of an ethics board or concerned user.

    The best AI Assistants come from carefully inspecting your workflow and looking for areas where AI can augment your work regularly and repetitively. Then follow the steps above to build a team of customized AI assistants.

    Ask Me Anything About Assistants

    • What are some limitations of a CustomGPT?
      Right now, the best parallel for AI is a very smart intern with access to a lot of information. CustomGPTs still run on LLMs that are trained on a lot of information and programmed to predictively generate responses based on that data, including possible bias, misinformation, or incomplete information. Keeping that in mind, you can make that intern provide better and more relevant results by using your uploads as onboarding docs, your guardrails as a job description, and your updates as retraining.
    • Can I copy someone else’s public CustomGPT and tweak it?
      Not directly. But if you’re inspired by another CustomGPT, you can look at how it’s framed and rebuild your own using WIRE+FRAME & MATCH. That way, you make it your own and have full control of the instructions, files, and updates. Google’s equivalent, Gemini Gems, works differently: shared Gems behave similarly to shared Google Docs, so once shared, any Gem instructions and files that you have uploaded can be viewed by any user with access to the Gem. Any user with edit access to the Gem can also update and delete it.
    • How private are my uploaded files?
      The files you upload are stored and used to answer prompts to your CustomGPT. If your CustomGPT is not private or you didn’t disable the hidden setting to allow CustomGPT conversations to improve the model, that data could be referenced. Don’t upload sensitive, confidential, or personal data you wouldn’t want circulating. Enterprise accounts do have some protections, so check with your company.
    • How many files can I upload, and does size matter?
      Limits vary by platform, but smaller, specific files usually perform better than giant docs. Think “chapter” instead of “entire book.” At the time of publishing, CustomGPTs allow up to 20 files, Copilot Agents up to 200 (if you need anywhere near that many, chances are your agent is not focused enough), and Gemini Gems up to 10.
    • What’s the difference between a CustomGPT and a Project?
      A CustomGPT is a focused assistant, like an intern trained to do one role well (like “Insight Interpreter”). A Project is more like a workspace where you can group multiple prompts, files, and conversations together for a broader effort. CustomGPTs are specialists. Projects are containers. If you want something reusable, shareable, and role-specific, go with a CustomGPT. If you want to organize broader work with multiple tools, outputs, and shared knowledge, Projects are the better fit.

    From Reading To Building

    In this AI x Design series, we’ve gone from messy prompting (“A Week In The Life Of An AI-Augmented Designer”) to a structured prompt framework, WIRE+FRAME (“Prompting Is A Design Act”). And now, in this article, your very own reusable AI sidekick.

    CustomGPTs don’t replace designers but augment them. The real magic isn’t in the tool itself, but in how you design and manage it. You can use public CustomGPTs for inspiration, but the ones that truly fit your workflow are the ones you design yourself. They extend your craft, codify your expertise, and give your team leverage that generic AI models can’t.

    Build one this week. Even better, today. Train it, share it, stress-test it, and refine it into an AI assistant that can augment your team.

  • Intent Prototyping: The Allure And Danger Of Pure Vibe Coding In Enterprise UX (Part 1)

    There is a spectrum of opinions on how dramatically all creative professions will be changed by the coming wave of agentic AI, from the very skeptical to the wildly optimistic and even apocalyptic. I think that even if you are on the “skeptical” end of the spectrum, it makes sense to explore ways this new technology can help with your everyday work. As for my everyday work, I’ve been doing UX and product design for about 25 years now, and I’m always keen to learn new tricks and share them with colleagues. Right now, I’m interested in AI-assisted prototyping, and I’m here to share my thoughts on how it can change the process of designing digital products.

    To set your expectations up front: this exploration focuses on a specific part of the product design lifecycle. Many people know about the Double Diamond framework, which shows the path from problem to solution. However, I think it’s the Triple Diamond model that makes an important point for our needs. It explicitly separates the solution space into two phases: Solution Discovery (ideating and validating the right concept) and Solution Delivery (engineering the validated concept into a final product). This article is focused squarely on that middle diamond: Solution Discovery.

    How AI can help with the preceding (Problem Discovery) and the following (Solution Delivery) stages is out of the scope of this article. Problem Discovery is less about prototyping and more about research, and while I believe AI can revolutionize the research process as well, I’ll leave that to people more knowledgeable in the field. As for Solution Delivery, it is more about engineering optimization. There’s no doubt that software engineering in the AI era is undergoing dramatic changes, but I’m not an engineer — I’m a designer, so let me focus on my “sweet spot”.

    And my “sweet spot” has a specific flavor: designing enterprise applications. In this world, the main challenge is taming complexity: dealing with complicated data models and guiding users through non-linear workflows. This background has had a big impact on my approach to design, putting a lot of emphasis on the underlying logic and structure. This article explores the potential of AI through this lens.

    I’ll start by outlining the typical artifacts designers create during Solution Discovery. Then, I’ll examine the problems with how this part of the process often plays out in practice. Finally, we’ll explore whether AI-powered prototyping can offer a better approach, and if so, whether it aligns with what people call “vibe coding,” or calls for a more deliberate and disciplined way of working.

    What We Create During Solution Discovery

    The Solution Discovery phase begins with the key output from the preceding research: a well-defined problem and a core hypothesis for a solution. This is our starting point. The artifacts we create from here are all aimed at turning that initial hypothesis into a tangible, testable concept.

    Traditionally, at this stage, designers can produce artifacts of different kinds, progressively increasing fidelity: from napkin sketches, boxes-and-arrows, and conceptual diagrams to hi-fi mockups, then to interactive prototypes, and in some cases even live prototypes. Artifacts of lower fidelity allow fast iteration and enable the exploration of many alternatives, while artifacts of higher fidelity help to understand, explain, and validate the concept in all its details.

    It’s important to think holistically, considering different aspects of the solution. I would highlight three dimensions:

    1. Conceptual model: Objects, relations, attributes, actions;
    2. Visualization: Screens, from rough sketches to hi-fi mockups;
    3. Flow: From the very high-level user journeys to more detailed ones.

    One can argue that those are layers rather than dimensions, and each of them builds on the previous ones (for example, according to Semantic IxD by Daniel Rosenberg), but I see them more as different facets of the same thing, so the design process through them is not necessarily linear: you may need to switch from one perspective to another many times.

    This is how different types of design artifacts map to these dimensions:

    As Solution Discovery progresses, designers move from the left part of this map to the right, from low-fidelity to high-fidelity, from ideating to validating, from diverging to converging.

    Note that at the beginning of the process, different dimensions are supported by artifacts of different types (boxes-and-arrows, sketches, class diagrams, etc.), and only closer to the end can you build a live prototype that encompasses all three dimensions: conceptual model, visualization, and flow.

    This progression shows a classic trade-off, like the difference between a pencil drawing and an oil painting. The drawing lets you explore ideas in the most flexible way, whereas the painting has a lot of detail and overall looks much more realistic, but is hard to adjust. Similarly, as we go towards artifacts that integrate all three dimensions at higher fidelity, our ability to iterate quickly and explore divergent ideas goes down. This inverse relationship has long been an accepted, almost unchallenged, limitation of the design process.

    The Problem With The Mockup-Centric Approach

    Faced with this difficult trade-off, often teams opt for the easiest way out. On the one hand, they need to show that they are making progress and create things that appear detailed. On the other hand, they rarely can afford to build interactive or live prototypes. This leads them to over-invest in one type of artifact that seems to offer the best of both worlds. As a result, the neatly organized “bento box” of design artifacts we saw previously gets shrunk down to just one compartment: creating static high-fidelity mockups.

    This choice is understandable, as several forces push designers in this direction. Stakeholders are always eager to see nice pictures, while artifacts representing user flows and conceptual models receive much less attention and priority. They are too high-level and hardly usable for validation, and usually, not everyone can understand them.

    On the other side of the fidelity spectrum, interactive prototypes require too much effort to create and maintain, and creating live prototypes in code used to require special skills (and again, effort). And even when teams make this investment, they do so at the end of Solution Discovery, during the convergence stage, when it is often too late to experiment with fundamentally different ideas. With so much effort already sunk, there is little appetite to go back to the drawing board.

    It’s no surprise, then, that many teams default to the perceived safety of static mockups, seeing them as a middle ground between the roughness of the sketches and the overwhelming complexity and fragility that prototypes can have.

    As a result, validation with users doesn’t provide enough confidence that the solution will actually solve the problem, and teams are forced to make a leap of faith to start building. To make matters worse, they do so without a clear understanding of the conceptual model, the user flows, and the interactions, because from the very beginning, designers’ attention has been heavily skewed toward visualization.

    The result is often a design artifact that resembles the famous “horse drawing” meme: beautifully rendered in the parts everyone sees first (the mockups), but dangerously underdeveloped in its underlying structure (the conceptual model and flows).

    While this is a familiar problem across the industry, its severity depends on the nature of the project. If your core challenge is to optimize a well-understood, linear flow (like many B2C products), a mockup-centric approach can be perfectly adequate. The risks are contained, and the “lopsided horse” problem is unlikely to be fatal.

    However, it’s different for the systems I specialize in: complex applications defined by intricate data models and non-linear, interconnected user flows. Here, the biggest risks are not on the surface but in the underlying structure, and a lack of attention to the latter would be a recipe for disaster.

    Transforming The Design Process

    This situation makes me wonder:

    How might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one?

    If we were able to answer this question, we would:

    • Learn faster.
      By going straight from intent to a testable artifact, we cut the feedback loop from weeks to days.
    • Gain more confidence.
      Users interact with real logic, which gives us more proof that the idea works.
    • Enforce conceptual clarity.
      A live prototype cannot hide a flawed or ambiguous conceptual model.
    • Establish a clear and lasting source of truth.
      A live prototype, combined with a clearly documented design intent, provides the engineering team with an unambiguous specification.

    Of course, the desire for such a process is not new. This vision of a truly prototype-driven workflow is especially compelling for enterprise applications, where the benefits of faster learning and forced conceptual clarity are the best defense against costly structural flaws. But this ideal used to be out of reach because prototyping in code took so much work and required specialized skills. Now, the rise of powerful AI coding assistants changes this equation in a big way.

    The Seductive Promise Of “Vibe Coding”

    And the answer seems to be obvious: vibe coding!

    “Vibe coding is an artificial intelligence-assisted software development style popularized by Andrej Karpathy in early 2025. It describes a fast, improvisational, collaborative approach to creating software where the developer and a large language model (LLM) tuned for coding is acting rather like pair programmers in a conversational loop.”

    Wikipedia

    The original tweet by Andrej Karpathy:

    The allure of this approach is undeniable. If you are not a developer, you are bound to feel awe when you describe a solution in plain language, and moments later, you can interact with it. This seems to be the ultimate fulfillment of our goal: a direct, frictionless path from an idea to a live prototype. But is this method reliable enough to build our new design process around it?

    The Trap: A Process Without A Blueprint

    Vibe coding mixes up a description of the UI with a description of the system itself, resulting in a prototype based on changing assumptions rather than a clear, solid model.

    The pitfall of vibe coding is that it encourages us to express our intent in the most ambiguous way possible: by having a conversation.

    This is like hiring a builder and telling them what to do one sentence at a time without ever presenting them a blueprint. They could make a wall that looks great, but you can’t be sure that it can hold weight.

    I’ll give you one example illustrating problems you may face if you try to jump over the chasm between your idea and a live prototype relying on pure vibe coding in the spirit of Andrej Karpathy’s tweet. Imagine I want to prototype a solution to keep track of tests to validate product ideas. I open my vibe coding tool of choice (I intentionally don’t disclose its name, as I believe they all are awesome yet prone to similar pitfalls) and start with the following prompt:

    I need an app to track tests. For every test, I need to fill out the following data:
    - Hypothesis (we believe that...) 
    - Experiment (to verify that, we will...)
    - When (a single date, or a period) 
    - Status (New/Planned/In Progress/Proven/Disproven)
    

    And in a minute or so, I get a working prototype:

    Inspired by success, I go further:

    Please add the ability to specify a product idea for every test. Also, I want to filter tests by product ideas and see how many tests each product idea has in each status.
    

    And the result is still pretty good:

    But then I want to extend the functionality related to product ideas:

    Okay, one more thing. For every product idea, I want to assess the impact score, the confidence score, and the ease score, and get the overall ICE score. Perhaps I need a separate page focused on the product idea, with all the relevant information and related tests.
    

    And from this point on, the results are getting more and more confusing.

    The flow of creating tests hasn’t changed much. I can still create a bunch of tests, and they seem to be organized by product ideas. But when I click “Product Ideas” in the top navigation, I see nothing:

    I need to create my ideas from scratch, and they are not connected to the tests I created before:

    Moreover, when I go back to “Tests”, I see that they are all gone. Clearly something went wrong, and my AI assistant confirms that:

    No, this is not expected behavior — it’s a bug! The issue is that tests are being stored in two separate places (local state in the Index page and App state), so tests created on the main page don’t sync with the product ideas page.

    Sure, eventually it fixed that bug, but note that we encountered this just on the third step, when we asked to slightly extend the functionality of a very simple app. The more layers of complexity we add, the more roadblocks of this sort we are bound to face.

    Also note that this specific problem of a not fully thought-out relationship between two entities (product ideas and tests) is not confined to the technical level, and therefore it didn’t go away once the technical bug was fixed. The underlying conceptual model is still broken, and it manifests in the UI as well.

    For example, you can still create “orphan” tests that are not connected to any item from the “Product Ideas” page. As a result, you may end up with different numbers of ideas and tests on different pages of the app:

    Let’s diagnose what really happened here. The AI’s response that this is a “bug” is only half the story. The true root cause is a conceptual model failure. My prompts never explicitly defined the relationship between product ideas and tests. The AI was forced to guess, which led to the broken experience. For a simple demo, this might be a fixable annoyance. But for a data-heavy enterprise application, this kind of structural ambiguity is fatal. It demonstrates the fundamental weakness of building without a blueprint, which is precisely what vibe coding encourages.
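
    For perspective, the missing piece was a single explicit definition of that relationship. Something like the following line in the very first prompt might have prevented the guesswork (hypothetical wording, not part of the original experiment):

    Every test belongs to exactly one product idea, and a product idea can have many tests.
    Tests are always created within a product idea, so “orphan” tests must not exist.
    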

    Don’t take this as a criticism of vibe coding tools. They are creating real magic. However, the fundamental truth about “garbage in, garbage out” is still valid. If you don’t express your intent clearly enough, chances are the result won’t fulfill your expectations.

    Another problem worth mentioning is that even if you wrestle it into a state that works, the artifact is a black box that can hardly serve as reliable specifications for the final product. The initial meaning is lost in the conversation, and all that’s left is the end result. This makes the development team “code archaeologists,” who have to figure out what the designer was thinking by reverse-engineering the AI’s code, which is frequently very complicated. Any speed gained at the start is lost right away because of this friction and uncertainty.

    From Fast Magic To A Solid Foundation

    Pure vibe coding, for all its allure, encourages building without a blueprint. As we’ve seen, this results in structural ambiguity, which is not acceptable when designing complex applications. We are left with a seemingly quick but fragile process that creates a black box that is difficult to iterate on and even more so to hand off.

    This leads us back to our main question: how might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap? The answer lies in a more methodical, disciplined, and therefore trustworthy process.

    In Part 2 of this series, “A Practical Guide to Building with Clarity”, I will outline the entire workflow for Intent Prototyping. This method places the explicit intent of the designer at the forefront of the process while embracing the potential of AI-assisted coding.

    Thank you for reading, and I look forward to seeing you in Part 2.

  • Ambient Animations In Web Design: Principles And Implementation (Part 1)

    Unlike timeline-based animations, which tell stories across a sequence of events, or interaction animations that are triggered when someone touches something, ambient animations are the kind of passive movements you might not notice at first. But, they make a design look alive in subtle ways.

    In an ambient animation, elements might subtly transition between colours, move slowly, or gradually shift position. Elements can appear and disappear, change size, or they could rotate slowly.

    Ambient animations aren’t intrusive; they don’t demand attention, aren’t distracting, and don’t interfere with what someone’s trying to achieve when they use a product or website. They can be playful, too, making someone smile when they catch sight of them. That way, ambient animations add depth to a brand’s personality.

    To illustrate the concept of ambient animations, I’ve recreated the cover of a Quick Draw McGraw comic book (PDF) as a CSS/SVG animation. The comic was published by Charlton Comics in 1971, and, being printed, these characters didn’t move, making them ideal candidates to transform into ambient animations.

    FYI: Original cover artist Ray Dirgo was best known for his work drawing Hanna-Barbera characters for Charlton Comics during the 1970s. Ray passed away in 2000 at the age of 92. He outlived Charlton Comics, which went out of business in 1986, and DC Comics acquired its characters.

    Tip: You can view the complete ambient animation code on CodePen.

    Choosing Elements To Animate

    Not everything on a page or in a graphic needs to move, and part of designing an ambient animation is knowing when to stop. The trick is to pick elements that lend themselves naturally to subtle movement, rather than forcing motion into places where it doesn’t belong.

    Natural Motion Cues

    When I’m deciding what to animate, I look for natural motion cues and think about when something would move naturally in the real world. I ask myself: “Does this thing have weight?”, “Is it flexible?”, and “Would it move in real life?” If the answer’s “yes,” it’ll probably feel right if it moves. There are several motion cues in Ray Dirgo’s cover artwork.

    For example, the peace pipe Quick Draw’s puffing on has two feathers hanging from it. They swing slightly left and right by three degrees as the pipe moves, just like real feathers would.

    #quick-draw-pipe {
      animation: quick-draw-pipe-rotate 6s ease-in-out infinite alternate;
    }
    
    @keyframes quick-draw-pipe-rotate {
      0% { transform: rotate(3deg); }
      100% { transform: rotate(-3deg); }
    }
    
    #quick-draw-feather-1 {
      animation: quick-draw-feather-1-rotate 3s ease-in-out infinite alternate;
    }
    
    #quick-draw-feather-2 {
      animation: quick-draw-feather-2-rotate 3s ease-in-out infinite alternate;
    }
    
    @keyframes quick-draw-feather-1-rotate {
      0% { transform: rotate(3deg); }
      100% { transform: rotate(-3deg); }
    }
    
    @keyframes quick-draw-feather-2-rotate {
      0% { transform: rotate(-3deg); }
      100% { transform: rotate(3deg); }
    }
    

    Atmosphere, Not Action

    I often choose elements or decorative details that add to the vibe but don’t fight for attention.

    Ambient animations aren’t about signalling to someone where they should look; they’re about creating a mood.

    Here, the chief slowly and subtly rises and falls as he puffs on his pipe.

    #chief {
      animation: chief-rise-fall 3s ease-in-out infinite alternate;
    }
    
    @keyframes chief-rise-fall {
      0% { transform: translateY(0); }
      100% { transform: translateY(-20px); }
    }
    

    For added effect, the feather on his head also moves in time with his rise and fall:

    #chief-feather-1 {
      animation: chief-feather-1-rotate 3s ease-in-out infinite alternate;
    }
    
    #chief-feather-2 {
      animation: chief-feather-2-rotate 3s ease-in-out infinite alternate;
    }
    
    @keyframes chief-feather-1-rotate {
      0% { transform: rotate(0deg); }
      100% { transform: rotate(-9deg); }
    }
    
    @keyframes chief-feather-2-rotate {
      0% { transform: rotate(0deg); }
      100% { transform: rotate(9deg); }
    }
    

    Playfulness And Fun

    One of the things I love most about ambient animations is how they bring fun into a design. They’re an opportunity to demonstrate personality through playful details that make people smile when they notice them.

    Take a closer look at the chief, and you might spot his eyebrows raising and his eyes crossing as he puffs hard on his pipe. Quick Draw’s eyebrows also bounce at what look like random intervals.

    #quick-draw-eyebrow {
      animation: quick-draw-eyebrow-raise 5s ease-in-out infinite;
    }
    
    @keyframes quick-draw-eyebrow-raise {
      0%, 20%, 60%, 100% { transform: translateY(0); }
      10%, 50%, 80% { transform: translateY(-10px); }
    }
    

    Keep Hierarchy In Mind

    Motion draws the eye, and even subtle movements have visual weight. So, I reserve the most obvious animations for the elements that need to create the biggest impact.

    Smoking his pipe clearly has a big effect on Quick Draw McGraw, so to demonstrate this, I wrapped his elements — including his pipe and its feathers — within a new SVG group, and then I made that wobble.

    #quick-draw-group {
      animation: quick-draw-group-wobble 6s ease-in-out infinite;
    }
    
    @keyframes quick-draw-group-wobble {
      0% { transform: rotate(0deg); }
      15% { transform: rotate(2deg); }
      30% { transform: rotate(-2deg); }
      45% { transform: rotate(1deg); }
      60% { transform: rotate(-1deg); }
      75% { transform: rotate(0.5deg); }
      100% { transform: rotate(0deg); }
    }
    

    Then, to emphasise this motion, I mirrored those values to wobble his shadow:

    #quick-draw-shadow {
      animation: quick-draw-shadow-wobble 6s ease-in-out infinite;
    }
    
    @keyframes quick-draw-shadow-wobble {
      0% { transform: rotate(0deg); }
      15% { transform: rotate(-2deg); }
      30% { transform: rotate(2deg); }
      45% { transform: rotate(-1deg); }
      60% { transform: rotate(1deg); }
      75% { transform: rotate(-0.5deg); }
      100% { transform: rotate(0deg); }
    }
    

    Apply Restraint

    Just because something can be animated doesn’t mean it should be. When creating an ambient animation, I study the image and note the elements where subtle motion might add life. I keep in mind the questions: “What’s the story I’m telling? Where does movement help, and when might it become distracting?”

    Remember, restraint isn’t just about doing less; it’s about doing the right things less often.

    Layering SVGs For Export

    In “Smashing Animations Part 4: Optimising SVGs,” I wrote about the process I rely on to “prepare, optimise, and structure SVGs for animation.” When elements are crammed into a single SVG file, they can be a nightmare to navigate. Locating a specific path or group can feel like searching for a needle in a haystack.

    That’s why I develop my SVGs in layers, exporting and optimising one set of elements at a time — always in the order they’ll appear in the final file. This lets me build the master SVG gradually by pasting in each cleaned-up section.

    I start by exporting background elements, optimising them, adding class and ID attributes, and pasting their code into my SVG file.

    Then, I export elements that often stay static or move as groups, like the chief and Quick Draw McGraw.

    Before finally exporting, naming, and adding details, like Quick Draw’s pipe, eyes, and his stoned sparkles.

    Since I export each layer from the same-sized artboard, I don’t need to worry about alignment or positioning issues as they all slot into place automatically.

    Implementing Ambient Animations

    You don’t need an animation framework or library to add ambient animations to a project. Most of the time, all you’ll need is a well-prepared SVG and some thoughtful CSS.

    But, let’s start with the SVG. The key is to group elements logically and give them meaningful class or ID attributes, which act as animation hooks in the CSS. For this animation, I gave every moving part its own identifier like #quick-draw-tail or #chief-smoke-2. That way, I could target exactly what I needed without digging through the DOM like a raccoon in a trash can.

    Once the SVG is set up, CSS does most of the work. I can use @keyframes for more expressive movement, or animation-delay to simulate randomness and stagger timings. The trick is to keep everything subtle and remember I’m not animating for attention, I’m animating for atmosphere.
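
    For instance, several sparkles can share one set of keyframes while starting at different times. Here’s a minimal sketch of that staggering (the sparkle IDs and timings are illustrative, not taken from the demo):

    #sparkle-1,
    #sparkle-2,
    #sparkle-3 {
      animation: sparkle-twinkle 4s ease-in-out infinite;
    }
    
    /* Staggered delays make the shared keyframes feel random. */
    #sparkle-2 { animation-delay: 1.3s; }
    #sparkle-3 { animation-delay: 2.7s; }
    
    @keyframes sparkle-twinkle {
      0%, 100% { opacity: 0.2; }
      50% { opacity: 1; }
    }
    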

    Remember that most ambient animations loop continuously, so they should be lightweight and performance-friendly. And of course, it’s good practice to respect users who’ve asked for less motion. You can wrap your animations in an @media prefers-reduced-motion query so they only run when they’re welcome.

    @media (prefers-reduced-motion: no-preference) {
      #quick-draw-shadow {
        animation: quick-draw-shadow-wobble 6s ease-in-out infinite;
      }
    }
    

    It’s a small touch that’s easy to implement, and it makes your designs more inclusive.

    Ambient Animation Design Principles

    If you want your animations to feel ambient, more like atmosphere than action, it helps to follow a few principles. These aren’t hard and fast rules, but rather things I’ve learned while animating smoke, sparkles, eyeballs, and eyebrows.

    Keep Animations Slow And Smooth

    Ambient animations should feel relaxed, so use longer durations and choose easing curves that feel organic. I often use ease-in-out, but cubic Bézier curves can also be helpful when you want a more relaxed feel and the kind of movements you might find in nature.
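
    As a small sketch of that idea, the pipe’s existing keyframes could be driven by a custom curve instead of ease-in-out (the curve values below are only a starting point, not what the demo uses):

    #quick-draw-pipe {
      /* A softer custom curve: slow start, gentle finish. */
      animation: quick-draw-pipe-rotate 6s cubic-bezier(0.45, 0.05, 0.55, 0.95) infinite alternate;
    }
    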

    Loop Seamlessly And Avoid Abrupt Changes

    Hard resets or sudden jumps can ruin the mood, so if an animation loops, ensure it cycles smoothly. You can do this by matching the start and end keyframes, or by setting the animation-direction value to alternate so the animation plays forward, then back.
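
    Both techniques appear in this demo: the wobble keyframes start and end at the same value, and the feathers rely on alternate. Here is a minimal sketch of each (the selector and keyframe names are illustrative):

    /* Matching first and last keyframes removes any visible seam. */
    @keyframes sway-loop {
      0%, 100% { transform: rotate(-3deg); }
      50% { transform: rotate(3deg); }
    }
    
    /* Or animate one way and let the browser play it back in reverse. */
    #pipe-feather {
      animation: sway 3s ease-in-out infinite alternate;
    }
    
    @keyframes sway {
      from { transform: rotate(-3deg); }
      to { transform: rotate(3deg); }
    }
    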

    Use Layering To Build Complexity

    A single animation might be boring. Five subtle animations, each on separate layers, can feel rich and alive. Think of it like building a sound mix — you want variation in rhythm, tone, and timing. In my animation, sparkles twinkle at varying intervals, smoke curls upward, feathers sway, and eyes boggle. Nothing dominates, and each motion plays its small part in the scene.

    Avoid Distractions

    The point of an ambient animation is that it doesn’t dominate. It’s a background element and not a call to action. If someone’s eyes are drawn to a raised eyebrow, it’s probably too much, so dial back the animation until it feels like something you’d only catch if you’re really looking.

    Consider Accessibility And Performance

    Check prefers-reduced-motion, and don’t assume everyone’s device can handle complex animations. SVG and CSS are light, but things like blur filters, drop shadows, and complex CSS animations can still tax lower-powered devices. When an animation is purely decorative, consider adding aria-hidden="true" to keep it from cluttering up the accessibility tree.
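
    For a purely decorative scene like this one, that could be as simple as the following (a sketch, not the exact markup of the demo):

    <!-- Decorative ambient animation: hide it from assistive technology. -->
    <svg viewBox="0 0 800 600" aria-hidden="true" focusable="false">
      <g id="quick-draw-group">…</g>
    </svg>
    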

    Quick On The Draw

    Ambient animation is like seasoning on a great dish. It’s the pinch of salt you barely notice, but you’d miss when it’s gone. It doesn’t shout, it whispers. It doesn’t lead, it lingers. It’s floating smoke, swaying feathers, and sparkles you catch in the corner of your eye. And when it’s done well, ambient animation adds personality to a design without asking for applause.

    Now, I realise that not everyone needs to animate cartoon characters. So, in part two, I’ll share how I created animations for several recent client projects. Until next time, if you’re crafting an illustration or working with SVG, ask yourself: What would move if this were real? Then animate just that. Make it slow and soft. Keep it ambient.

    You can view the complete ambient animation code on CodePen.

  • 智泊AI AGI Large Model Full-Stack Course, Cohort 12 [VIP]

    Have you already mastered the principles of the Transformer and can recite the formulas of the Attention mechanism, yet feel at a loss when faced with a real business requirement? Have you ground through hundreds of algorithm problems, yet still don’t know how to package a model into a stable, efficient,
  • Android AI Unleashing Productivity (Part 5) Hands-On: Taking The Tedium Out Of Writing API Endpoints

    Our company’s current project uses Apifox as the API debugging/documentation tool, so I’ll use it as the example. Tools on the market should all be gradually opening up AI capabilities, which is worth keeping an eye on; as long as a tool can provide the raw data, that’s enough, otherwise there’s no way to feed it to the AI. In the previous article, we already had the AI write U
  • A Deep Dive Into useState And useEffect In React

    In the world of React function components, useState and useEffect are the two most fundamental yet most critical Hooks. They look simple, but if you only stay at the level of “knowing how to use them”, it’s easy to stumble in complex scenarios. Today, we’ll work through a piece of
  • What Is It Like To Manage 100 Servers? One Line Of Python Gets It Done

    Preface. The daily ops routine: log in to server A and run a command, log in to server B and run the same command, log in to server C… This is just too painful! Use Python + Fabric for batch automation.