Category: Uncategorized

  • Peking University-affiliated controlled-fusion company closes angel round of over RMB 50 million, targeting low-cost, high-performance fusion | 硬氪 exclusive

    硬氪 has learned that 「零点聚能」, a company dedicated to turning fundamental scientific discoveries into a future energy industry of major strategic value, recently completed an angel round of more than RMB 50 million. Below is a summary of the round and the company's key highlights:

    Financing amount and investors

    Round: Angel

    Amount raised: over RMB 50 million

    Use of funds: the proceeds will primarily go towards building the company's pivotal No. 1 experimental device, carrying out key validation experiments for the new magnetic-null-configuration fusion route, and advancing R&D on low-cost, high-parameter fusion energy technology with strong commercial potential.

    Company profile

    Founded: September 2024

    Registered address: Beijing

    Technical highlights: one of the core challenges in making fusion energy practical is confining high-temperature plasma stably over long periods. The magnetic-null-configuration fusion route that 零点聚能 pursues originates from a natural physical phenomenon observed in space plasmas. In 2006, 肖池阶's team was the first to confirm the existence of the magnetic-null configuration using satellite observation data, with the results published repeatedly in international journals such as Nature Physics. Since 2013, the team has built a No. 0 experimental device at Peking University and systematically studied the physical properties and confinement performance of the magnetic-null configuration, finding that this technical path has the potential to deliver high-performance fusion at comparatively low cost.

    Future plans

    Because it is safe, clean, and backed by abundant fuel resources, fusion is regarded as one of the ultimate energy sources of the future. 零点聚能 is committed to turning fundamental scientific discoveries into a future energy industry of major strategic value. Once the No. 1 experimental device delivers the key parameters, the company will move on to building No. 2 and No. 3 experimental devices, completing the critical leap from parameter validation to commercial validation. If the technology achieves a breakthrough, it could push electricity generation costs into the "one fen" era, bring humanity a near-limitless supply of clean energy, make genuine energy abundance a reality, and potentially find applications in spacecraft propulsion and interstellar transport.

    Team background

    零点聚能 was jointly incubated by the 燕缘 incubator of Peking University's Office of Science and Technology Development and 溪山天使汇. The company's chief scientist, 肖池阶, is a tenured associate professor and doctoral supervisor at Peking University's School of Physics, and previously served as deputy director of the school's Institute of Heavy Ion Physics. In March 2025, the company and Peking University jointly established the "PKU-零点聚能 Fusion Energy Joint Laboratory". The laboratory is directed by company founder 肖池阶, and its academic committee brings together an academician of the Chinese Academy of Sciences and several leading fusion scientists to jointly push forward frontier exploration and key technology development for magnetic-null-configuration fusion.

    Investor perspective

    许晖, founder of 溪山天使汇 and director of the PKU HSBC Innovation Engine Lab, commented: outside the current mainstream fusion technology routes, Professor 肖池阶's team has taken an unconventional path, drawing inspiration from fusion-relevant magnetic configurations they observed in space. The technical scheme is structurally simple; at comparable temperature and density, confinement time is an order of magnitude longer and device cost an order of magnitude lower. In the future it could make one-fen-per-kWh electricity possible, turning fusion power generation and deep-space travel into reality.

  • Smashing Animations Part 7: Recreating Toon Text With CSS And SVG

    After finishing a project that required me to learn everything I could about CSS and SVG animations, I started writing this series about Smashing Animations and “How Classic Cartoons Inspire Modern CSS.” To round off this year, I want to show you how to use modern CSS to create that element that makes Toon Titles so impactful: their typography.

    Title Artwork Design

    In the silent era of the 1920s and early ’30s, the typography of a film’s title card created a mood, set the scene, and reminded an audience of the type of film they’d paid to see.

    Cartoon title cards were also branding, mood, and scene-setting, all rolled into one. In the early years, when major studio budgets were bigger, these title cards were often illustrative and painterly.

    But when television boomed during the 1950s, budgets dropped, and cards designed by artists like Lawrence “Art” Goble adopted a new visual language, becoming more graphic, stylised, and less intricate.

    Note: Lawrence “Art” Goble is one of the often overlooked heroes of mid-century American animation. He primarily worked for Hanna-Barbera during its most influential years of the 1950s and 1960s.

    Goble wasn’t a character animator. His role was to create atmosphere, so he designed environments for The Flintstones, Huckleberry Hound, Quick Draw McGraw, and Yogi Bear, as well as the opening title cards that set the tone. His title cards, featuring paintings with a logo overlaid, helped define the iconic look of Hanna-Barbera.

    Goble’s artwork for characters such as Quick Draw McGraw and Yogi Bear was effective on smaller TV screens. Rather than reproducing a still from the cartoon, he focused on presenting a single, strong idea — often in silhouette — that captured its essence. In “The Buzzin’ Bear,” Yogi buzzes by in a helicopter. He bounces away, pic-a-nic basket in hand, in “Bear on a Picnic,” and for his “Prize Fight Fright,” Yogi boxes the title text.

    With little or no motion to rely on, Goble’s single frames had to create a mood, set the scene, and describe a story. They did this using flat colours, graphic shapes, and typography that was frequently integrated into the artwork.

    For those of us who design for the web, toon titles can teach us plenty about how to convey a brand’s personality, make a first impression, and set expectations for someone’s experience using a product or website. We can learn from the artists’ techniques to create effective banners, landing-page headers, and even good ol’ fashioned splash screens.

    Toon Title Typography

    Cartoon title cards show how merging type with imagery delivers the punch a header or hero needs. With a handful of text-shadow, text-stroke, and transform tricks, modern CSS lets you tap into that same energy.

    The Toon Text Title Generator

    Partway through writing this article, I realised it would be useful to have a tool for generating text styled like the cartoon titles I love so much. So I made one.

    My Toon Text Title Generator lets you experiment with colours, strokes, and multiple text shadows. You can adjust paint order, apply letter spacing, preview your text in a selection of sample fonts, and then copy the generated CSS straight to your clipboard to use in a project.

    Toon Title CSS

    You can simply copy-paste the CSS that the Toon Text Title Generator provides you. But let’s look closer at what it does.

    Text shadow

    Look at the type in this title from Augie Doggie’s episode “Yuk-Yuk Duck,” with its pale yellow letters and dark, hard, offset shadow that lifts it off the background and creates the illusion of depth.

    You probably already know that text-shadow accepts four values: (1) horizontal and (2) vertical offsets, (3) blur, and (4) a colour which can be solid or semi-transparent. Those offset values can be positive or negative, so I can replicate “Yuk-Yuk Duck” using a hard shadow pulled down and to the right:

    color: #f7f76d;
    text-shadow: 5px 5px 0 #1e1904;
    

    On the other hand, this “Pint Giant” title has a different feel with its negative semi-soft shadow:

    color: #c2a872;
    text-shadow:
      -7px 5px 0 #0b100e,
      0 -5px 10px #546c6f;
    

    To add extra depth and create more interesting effects, I can layer multiple shadows. For “Let’s Duck Out,” I combine four shadows: the first a solid shadow with a negative horizontal offset to lift the text off the background, followed by progressively softer shadows to create a blur around it:

    color: #6F4D80;
    text-shadow:
      -5px 5px 0 #260e1e, /* Shadow 1 */
      0 0 15px #e9ce96,   /* Shadow 2 */
      0 0 30px #e9ce96,   /* Shadow 3 */
      0 0 30px #e9ce96;   /* Shadow 4 */
    

    These shadows show that using text-shadow isn’t just about creating lighting effects, as they can also be decorative and add personality.

    Text Stroke

    Many cartoon title cards feature letters with a bold outline that makes them stand out from the background. I can recreate this effect using text-stroke. For a long time, this property was only available via a -webkit- prefix, but the prefixed version is now well supported across modern browsers.

    text-stroke is a shorthand for two properties. The first, text-stroke-width, draws a contour around individual letters, while the second, text-stroke-color, controls its colour. For “Whatever Goes Pup,” I added a 4px blue stroke to the yellow text:

    color: #eff0cd;
    -webkit-text-stroke: 4px #7890b5;
    text-stroke: 4px #7890b5;
    

    Strokes can be especially useful when they’re combined with shadows, so for “Growing, Growing, Gone,” I added a thin 3px stroke to a barely blurred 1px shadow to create this three-dimensional text effect:

    color: #fbb999;
    text-shadow: 3px 5px 1px #5160b1;
    -webkit-text-stroke: 3px #984336;
    text-stroke: 3px #984336;
    

    Paint Order

    Using text-stroke doesn’t always produce the expected result, especially with thinner letters and thicker strokes, because by default the browser draws a stroke over the fill. Sadly, CSS still does not permit me to adjust stroke placement as I often do in Sketch. However, the paint-order property has values that allow me to place the stroke behind, rather than in front of, the fill.

    paint-order: stroke paints the stroke first, then the fill, whereas paint-order: fill does the opposite:

    color: #fbb999;
    paint-order: fill;
    text-shadow: 3px 5px 1px #5160b1;
    text-stroke-color: #984336;
    text-stroke-width: 3px;
    

    An effective stroke keeps letters readable, adds weight, and — when combined with shadows and paint order — gives flat text real presence.
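
    Putting the stroke behind the fill is a single declaration. Here’s a minimal sketch that reuses the “Growing, Growing, Gone” colours; the thicker 6px width is my own adjustment, since only the outer half of a behind-the-fill stroke stays visible:

    color: #fbb999;
    paint-order: stroke;
    text-shadow: 3px 5px 1px #5160b1;
    -webkit-text-stroke: 6px #984336;
    text-stroke: 6px #984336;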

    Backgrounds Inside Text

    Many cartoon title cards go beyond flat colour by adding texture, gradients, or illustrated detail to the lettering. Sometimes that’s a texture, other times it might be a gradient with a subtle tonal shift. On the web, I can recreate this effect by using a background image or gradient behind the text, and then clipping it to the shape of the letters. This relies on two properties working together: background-clip: text and text-fill-color: transparent.

    First, I apply a background behind the text. This can be a bitmap or vector image or a CSS gradient. For this example from the Quick Draw McGraw episode “Baba Bait,” the title text includes a subtle top–bottom gradient from dark to light:

    background: linear-gradient(0deg, #667b6a, #1d271a);
    

    Next, I clip that background to the glyphs and make the text transparent so the background shows through:

    -webkit-background-clip: text;
    -webkit-text-fill-color: transparent;
    

    With just those two lines, the background is no longer painted behind the text; instead, it’s painted within it. This technique works especially well when combined with strokes and shadows. A clipped gradient provides the lettering with colour and texture, a stroke keeps its edges sharp, and a shadow elevates it from the background. Together, they recreate the layered look of hand-painted title cards using nothing more than a little CSS. As always, test clipped text carefully, as browser quirks can sometimes affect shadows and rendering.
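
    As a rough, combined sketch (the colours here are illustrative rather than lifted from a particular title card), the whole stack can sit on a single heading:

    background: linear-gradient(0deg, #667b6a, #1d271a);
    -webkit-background-clip: text;
    background-clip: text;
    -webkit-text-fill-color: transparent;
    -webkit-text-stroke: 3px #11150f;
    paint-order: stroke;
    text-shadow: 4px 4px 0 rgb(0 0 0 / 0.35);

    Depending on the browser, that shadow may bleed through the transparent fill, which is exactly the kind of quirk worth checking for.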

    Splitting Text Into Individual Characters

    Sometimes I don’t want to style a whole word or heading. I want to style individual letters — to nudge a character into place, give one glyph extra weight, or animate a few letters independently.

    In plain HTML and CSS, there’s only one reliable way to do that: wrap each character in its own span element. I could do that manually, but that would be fragile, hard to maintain, and would quickly fall apart when copy changes. Instead, when I need per-letter control, I use a text-splitting library like splt.js (although other solutions are available). This takes a text node and automatically wraps words or characters, giving me extra hooks to animate and style without messing up my markup.

    It’s an approach that keeps my HTML readable and semantic, while giving me the fine-grained control I need to recreate the uneven, characterful typography you see in classic cartoon title cards. However, this approach comes with accessibility caveats, as most screen readers read text nodes in order. So this:

    <h2>Hum Sweet Hum</h2>
    

    …reads as you’d expect:

    Hum Sweet Hum

    But this:

    <h2>
    <span>H</span>
    <span>u</span>
    <span>m</span>
    <!-- etc. -->
    </h2>
    

    …can be interpreted differently depending on the browser and screen reader. Some will concatenate the letters and read the words correctly. Others may pause between letters, which in a worst-case scenario might sound like:

    “H…” “U…” “M…”

    Sadly, some splitting solutions don’t deliver an always accessible result, so I’ve written my own text splitter, splinter.js, which is currently in beta.

    Transforming Individual Letters

    To activate my Toon Text Splitter, I add a data- attribute to the element I want to split:

    <h2 data-split="toon">Hum Sweet Hum</h2>
    

    First, my script separates each word into individual letters and wraps them in a span element with class and ARIA attributes applied:

    <span class="toon-char" aria-hidden="true">H</span>
    <span class="toon-char" aria-hidden="true">u</span>
    <span class="toon-char" aria-hidden="true">m</span>
    

    The script then takes the initial content of the split element and adds it as an aria-label attribute to help maintain accessibility:

    <h2 data-split="toon" aria-label="Hum Sweet Hum">
      <span class="toon-char" aria-hidden="true">H</span>
      <span class="toon-char" aria-hidden="true">u</span>
      <span class="toon-char" aria-hidden="true">m</span>
    </h2>
    

    With those class attributes applied, I can then style individual characters as I choose.

    For example, for “Hum Sweet Hum,” I want to replicate how its letters shift away from the baseline. After using my Toon Text Splitter, I applied four different translate values using several :nth-child selectors to create a semi-random look:

    /* 4th, 8th, 12th... */
    .toon-char:nth-child(4n) { translate: 0 -8px; }
    /* 1st, 5th, 9th... */
    .toon-char:nth-child(4n+1) { translate: 0 -4px; }
    /* 2nd, 6th, 10th... */
    .toon-char:nth-child(4n+2) { translate: 0 4px; }
    /* 3rd, 7th, 11th... */
    .toon-char:nth-child(4n+3) { translate: 0 8px; }
    

    But translate is only one property I can use to transform my toon text.

    I could also rotate those individual characters for an even more chaotic look:

    /* 4th, 8th, 12th... */
    .toon-line .toon-char:nth-child(4n) { rotate: -4deg; }
    /* 1st, 5th, 9th... */
    .toon-char:nth-child(4n+1) { rotate: -8deg; }
    /* 2nd, 6th, 10th... */
    .toon-char:nth-child(4n+2) { rotate: 4deg; }
    /* 3rd, 7th, 11th... */
    .toon-char:nth-child(4n+3) { rotate: 8deg; }
    

    And, of course, I could add animations to jiggle those characters and bring my toon text style titles to life. First, I created a keyframe animation that rotates the characters:

    @keyframes jiggle {
      0%, 100% { transform: rotate(var(--base-rotate, 0deg)); }
      25% { transform: rotate(calc(var(--base-rotate, 0deg) + 3deg)); }
      50% { transform: rotate(calc(var(--base-rotate, 0deg) - 2deg)); }
      75% { transform: rotate(calc(var(--base-rotate, 0deg) + 1deg)); }
    }
    

    Before applying it to the span elements created by my Toon Text Splitter:

    .toon-char {
      animation: jiggle 3s infinite ease-in-out;
      transform-origin: center bottom;
    }
    

    And finally, setting the rotation amount and a delay before each character begins to jiggle:

    .toon-char:nth-child(4n) { --base-rotate: -2deg; }
    .toon-char:nth-child(4n+1) { --base-rotate: -4deg; }
    .toon-char:nth-child(4n+2) { --base-rotate: 2deg; }
    .toon-char:nth-child(4n+3) { --base-rotate: 4deg; }
    
    .toon-char:nth-child(4n) { animation-delay: 0.1s; }
    .toon-char:nth-child(4n+1) { animation-delay: 0.3s; }
    .toon-char:nth-child(4n+2) { animation-delay: 0.5s; }
    .toon-char:nth-child(4n+3) { animation-delay: 0.7s; }
    

    One Frame To Make An Impression

    Cartoon title artists had one frame to make an impression, and their typography was as important as the artwork they painted. The same is true on the web.

    A well-designed header or hero area needs clarity, character, and confidence — not simply a faded full-width background image.

    With a few carefully chosen CSS properties — shadows, strokes, clipped backgrounds, and some restrained animation — we can recreate that same impact. I love toon text not because I’m nostalgic, but because its design is intentional. Make deliberate choices, and let a little toon text typography add punch to your designs.

  • Accessible UX Research, eBook Now Available For Download

    This article is sponsored by Accessible UX Research

    Smashing Library expands again! We’re so happy to announce our newest book, Accessible UX Research, is now available for download in eBook formats. Michele A. Williams takes us for a deep dive into the real world of UX testing, and provides a road map for including users with different abilities and needs in every phase of testing.

    But the truth is, you don’t need to be conducting UX testing or even be a UX professional to get a lot out of this book. Michele gives in-depth descriptions of the assistive technology we should all be familiar with, in addition to disability etiquette, common pitfalls when creating accessible prototypes, and so much more. You’ll refer to this book again and again in your daily work.



    This is also your last chance to get your printed copy at our discounted presale price. We expect printed copies to start shipping in early 2026. We know you’ll love this book, but don’t just take our word for it — we asked a few industry experts to check out Accessible UX Research too:

    “Accessible UX Research stands as a vital and necessary resource. In addressing disability at the User Experience Research layer, it helps to set an equal and equitable tone for products and features that resonates through the rest of the creation process. The book provides a solid framework for all aspects of conducting research efforts, including not only process considerations, but also importantly the mindset required to approach the work.

    “This is the book I wish I had when I was first getting started with my accessibility journey. It is a gift, and I feel so fortunate that Michele has chosen to share it with us all.”

    Eric Bailey, Accessibility Advocate

    “User research in accessibility is non-negotiable for actually meeting users’ needs, and this book is a critical piece in the puzzle of actually doing and integrating that research into accessibility work day to day.”

    Devon Pershing, Author of The Accessibility Operations Guidebook

    “Our decisions as developers and designers are often based on recommendations, assumptions, and biases. Usually, this doesn’t work, because checking off lists or working solely from our own perspective can never truly represent the depth of human experience. Michele’s book provides you with the strategies you need to conduct UX research with diverse groups of people, challenge your assumptions, and create truly great products.”

    Manuel Matuzović, Author of the Web Accessibility Cookbook

    “This book is a vital resource on inclusive research. Michele Williams expertly breaks down key concepts, guiding readers through disability models, language, and etiquette. A strong focus on real-world application equips readers to conduct impactful, inclusive research sessions. By emphasizing diverse perspectives and proactive inclusion, the book makes a compelling case for accessibility as a core principle rather than an afterthought. It is a must-read for researchers, product-makers, and advocates!”

    Anna E. Cook, Accessibility and Inclusive Design Specialist

    About The Book

    The book isn’t a checklist for you to complete as a part of your accessibility work. It’s a practical guide to inclusive UX research, from start to finish. If you’ve ever felt unsure how to include disabled participants, or worried about “getting it wrong,” this book is for you. You’ll get clear, practical strategies to make your research more inclusive, effective, and reliable.

    Inside, you’ll learn how to:

    • Plan research that includes disabled participants from the start,
    • Recruit participants with disabilities,
    • Facilitate sessions that work for a range of access needs,
    • Ask better questions and avoid unintentionally biased research methods,
    • Build trust and confidence in your team around accessibility and inclusion.

    The book also challenges common assumptions about disability and urges readers to rethink what inclusion really means in UX research and beyond. Let’s move beyond compliance and start doing research that reflects the full diversity of your users. Whether you’re in industry or academia, this book gives you the tools — and the mindset — to make it happen.

    High-quality hardcover, 320 pages. Written by Dr. Michele A. Williams. Cover art by Espen Brunborg. Print edition shipping early 2026. eBook now available for download. Download a free sample (PDF, 2.3MB) and reserve your print copy at the presale price.



    “Accessible UX Research” shares successful strategies that’ll help you recruit the participants you need for the study you’re designing.

    Contents

    1. Disability mindset: For inclusive research to succeed, we must first confront our mindset about disability, typically influenced by ableism.
    2. Diversity of disability: Accessibility is not solely about blind screen reader users; disability categories help us unpack and process the diversity of disabled users.
    3. Disability in the stages of UX research: Disabled participants can and should be part of every research phase — formative, prototype, and summative.
    4. Recruiting disabled participants: Recruiting disabled participants is not always easy, but that simply means we need to learn strategies on where to look.
    5. Designing your research: While our goal is to influence accessible products, our research execution must also be accessible.
    6. Facilitating an accessible study: Preparation and communication with your participants can ensure your study logistics run smoothly.
    7. Analyzing and reporting with accuracy and impact: How you communicate your findings is just as important as gathering them in the first place — so prepare to be a storyteller, educator, and advocate.
    8. Disability in the UX research field: Inclusion isn’t just for research participants, it’s important for our colleagues as well, as explained by blind UX Researcher Dr. Cynthia Bennett.



    The book will challenge your disability mindset and what it means to be truly inclusive in your work.

    Who This Book Is For

    Whether you’re a UX professional who conducts research in industry or academia, or more broadly part of an engineering, product, or design function, you’ll want to read this book if…

    1. You have been tasked to improve accessibility of your product, but need to know where to start to facilitate this successfully.
    2. You want to establish a culture for accessibility in your company, but aren’t sure how to make it work.
    3. You want to move from WCAG/EAA compliance to established accessibility practices and inclusion in research practices and beyond.
    4. You want to improve your overall accessibility knowledge and be viewed as an Accessibility Specialist for your organization.



    About the Author

    Dr. Michele A. Williams is owner of M.A.W. Consulting, LLC – Making Accessibility Work. Her 20+ years of experience include influencing top tech companies as a Senior User Experience (UX) Researcher and Accessibility Specialist and obtaining a PhD in Human-Centered Computing focused on accessibility. An international speaker, published academic author, and patented inventor, she is passionate about educating and advising on technology that does not exclude disabled users.

    Community Matters ❤️

    Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! 😉

    More Smashing Books & Goodies

    Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

    In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Trine, Heather, and Steven are three of these people. Have you checked out their books already?

    The Ethical Design Handbook

    A practical guide on ethical design for digital products.


    Understanding Privacy

    Everything you need to know to put your users first and make a better web.


    Touch Design for Mobile Interfaces

    Learn how touchscreen devices really work — and how people really use them.


  • State, Logic, And Native Power: CSS Wrapped 2025

    If I were to divide CSS evolutions into eras, we have moved far beyond the days when simply getting border-radius made us feel like we were living in the future. We are currently living in a moment where the platform is handing us tools that don’t just tweak the visual layer, but fundamentally redefine how we architect interfaces. I thought the number of features announced in 2024 couldn’t be topped. I’ve never been so happily wrong.

    The Chrome team’s “CSS Wrapped 2025” is not just a list of features; it is a manifesto for a dynamic, native web. As someone who has spent a couple of years documenting these evolutions — from defining “CSS5” eras to the intricacies of modern layout utilities — I find myself looking at this year’s wrap-up with a huge sense of excitement. We are seeing a shift towards “Optimized Ergonomics” and “Next-gen interactions” that allow us to stop fighting the code and start sculpting interfaces in their natural state.

    In this article, you can find a comprehensive look at the standout features from Chrome’s report, viewed through the lens of my recent experiments and hopes for the future of the platform.

    The Component Revolution: Finally, A Native Customizable Select

    For years, we have relied on heavy JavaScript libraries to style dropdowns, a “decades-old problem” that the platform has finally solved. As I detailed in my deep dive into the history of the customizable select (and related articles), this has been a long road involving Open UI, bikeshedding names like <selectmenu> and <selectlist>, and finally landing on a solution that re-uses the existing <select> element.

    The introduction of appearance: base-select is a strong foundation. It allows us to fully customize the <select> element — including the button and the dropdown list (via ::picker(select)) — using standard CSS. Crucially, this is built with progressive enhancement in mind. By wrapping our styles in a feature query, we ensure a seamless experience across all browsers.

    We can opt in to this new behavior without breaking older browsers:

    select {
      /* Opt-in for the new customizable select */
      @supports (appearance: base-select) {
        &, &::picker(select) {
          appearance: base-select;
        }
      }
    }
    

    The fantastic addition to allow rich content inside options, such as images or flags, is a lot of fun. We can create all sorts of selects nowadays:

    • Demo: I created a Poké-adventure demo showing how the new <selectedcontent> element can clone rich content (like a Pokéball icon) from an option directly into the button.

    See the Pen A customizable select with images inside of the options and the selectedcontent [forked] by utilitybend.

    See the Pen A customizable select with only pseudo-elements [forked] by utilitybend.

    See the Pen An actual Select Menu with optgroups [forked] by utilitybend.

    This feature alone signals a massive shift in how we will build forms, reducing dependencies and technical debt.

    Scroll Markers And The Death Of The JavaScript Carousel

    Creating carousels has historically been a friction point between developers and clients. Clients love them, developers dread the JavaScript required to make them accessible and performant. The arrival of ::scroll-marker and ::scroll-button() pseudo-elements changes this dynamic entirely.

    These features allow us to create navigation dots and scroll buttons purely with CSS, linked natively to the scroll container. As I wrote on my blog, this was Love at first slide. The ability to create a fully functional, accessible slider without a single line of JavaScript is not just convenient; it is a triumph for performance. There are some accessibility concerns around this feature, and even though these are valid, there might be a way for us developers to make it work. The good thing is, all these UI changes are making it a lot easier than custom DOM manipulation and dragging around aria tags, but I digress…

    We can now group markers automatically using scroll-marker-group and style the buttons using anchor positioning to place them exactly where we want.

    .carousel {
      overflow-x: auto;
      scroll-marker-group: after; /* Creates the container for dots */
    
      /* Create the buttons */
      &::scroll-button(inline-end),
      &::scroll-button(inline-start) {
        content: " ";
        position: absolute;
        /* Use anchor positioning to center them */
        position-anchor: --carousel;
        top: anchor(center);
      }
    
      /* Create the markers on the children */
      div {
        &::scroll-marker {
          content: " ";
          width: 24px;
          border-radius: 50%;
          cursor: pointer;
        }
        /* Highlight the active marker */
        &::scroll-marker:target-current {
          background: white;
        }
      }
    }
    

    See the Pen Carousel Pure HTML and CSS [forked] by utilitybend.

    See the Pen Webshop slick slider remake in CSS [forked] by utilitybend.

    State Queries: Sticky Thing Stuck? Snappy Thing Snapped?

    For a long time, we have lacked the ability to know if a “sticky thing is stuck” or if a “snappy item is snapped” without relying on IntersectionObserver hacks. Chrome 133 introduced scroll-state queries, allowing us to query these states declaratively.

    By setting container-type: scroll-state, we can now style children based on whether they are stuck, snapped, or overflowing. This is a massive “quality of life” improvement that I have been eagerly waiting for since CSS Day 2023. It has even evolved since then: we can now also query the direction of the scroll, lovely!

    For a simple example: we can finally apply a shadow to a header only when it is actually sticking to the top of the viewport:

    .header-container {
      container-type: scroll-state;
      position: sticky;
      top: 0;
    
      header {
        transition: box-shadow 0.5s ease-out;
        /* The query checks the state of the container */
        @container scroll-state(stuck: top) {
          box-shadow: rgba(0, 0, 0, 0.6) 0px 12px 28px 0px;
        }
      }
    }
    
    • Demo: A sticky header that only applies a shadow when it is actually stuck.

    See the Pen Sticky headers with scroll-state query, checking if the sticky element is stuck [forked] by utilitybend.

    • Demo: A Pokémon-themed list that uses scroll-state queries combined with anchor positioning to move a frame over the currently snapped character.

    See the Pen Scroll-state query to check which item is snapped with CSS, Pokemon version [forked] by utilitybend.
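
    For the snapped half of the question, the query lives on the snap targets themselves, just as the sticky header owns its stuck state. Here is a rough sketch (the markup and class names are made up for the example):

    .pokemon-list {
      overflow-x: auto;
      scroll-snap-type: x mandatory;
    }

    .pokemon-list > li {
      scroll-snap-align: center;
      container-type: scroll-state;

      img {
        transition: scale 0.3s ease-out;

        /* Matches while this list item is the snapped one */
        @container scroll-state(snapped: x) {
          scale: 1.2;
        }
      }
    }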

    Optimized Ergonomics: Logic In CSS

    The “Optimized Ergonomics” section of CSS Wrapped highlights features that make our workflows more intuitive. Three features stand out as transformative for how we write logic:

    1. if() Statements
      We are finally getting conditionals in CSS. The if() function acts like a ternary operator for stylesheets, allowing us to apply values based on media, support, or style queries inline. This reduces the need for verbose @media blocks for single property changes.
    2. @function functions
      We can finally move some logic into reusable functions of our own, resulting in cleaner files, a real quality-of-life feature (a short sketch of both if() and @function follows this list).
    3. sibling-index() and sibling-count()
      These tree-counting functions solve the issue of staggering animations or styling items based on list size. As I explored in Styling siblings with CSS has never been easier, this eliminates the need to hard-code custom properties (like --index: 1) in our HTML.
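
    As a hedged sketch of both (these are Chrome-first features at the time of writing, so syntax details may still shift), an inline condition and a tiny custom function could look something like this:

    /* A ternary-style inline condition: roomier padding on larger viewports */
    .card {
      padding: if(media(width > 40rem): 2rem; else: 1rem);
    }

    /* A small custom function that doubles whatever length it receives */
    @function --double(--value) {
      result: calc(var(--value) * 2);
    }

    .card h2 {
      margin-block-end: --double(0.5rem);
    }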

    Example: Calculating Layouts

    We can now write concise mathematical formulas. For example, staggering an animation for cards entering the screen becomes trivial:

    .card-container > * {
      animation: reveal 0.6s ease-out forwards;
      /* No more manual --index variables! */
      animation-delay: calc(sibling-index() * 0.1s);
    }
    

    I even experimented with using these functions along with trigonometry to place items in a perfect circle without any JavaScript.

    See the Pen Stagger cards using sibling-index() [forked] by utilitybend.

    • Demo: Placing items in a perfect circle using sibling-index, sibling-count, and the new CSS @function feature.

    See the Pen The circle using sibling-index, sibling-count and functions [forked] by utilitybend.

    My CSS To-Do List: Features I Can’t Wait To Try

    While I have been busy sculpting selects and transitions, the “CSS Wrapped 2025” report is packed with other goodies that I haven’t had the chance to fire up in CodePen yet. These are high on my list for my next experiments:

    Anchored Container Queries

    I used CSS Anchor Positioning for the buttons in my carousel demo, but “CSS Wrapped” highlights an evolution of this: Anchored Container Queries. This solves a problem we’ve all had with tooltips: if the browser flips the tooltip from top to bottom because of space constraints, the “arrow” often stays pointing the wrong way. With anchored container queries (@container anchored(fallback: flip-block)), we can style the element based on which fallback position the browser actually chose.
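
    I haven’t built this yet, but a rough sketch of a tooltip arrow that follows the flip might look something like this (the --trigger anchor name and the .arrow element are assumptions for the example, and the tooltip may also need to opt in as a query container depending on the final syntax):

    /* The tooltip is anchored above its trigger, with a flip fallback */
    .tooltip {
      position: absolute;
      position-anchor: --trigger;
      position-area: block-start;
      position-try-fallbacks: flip-block;
    }

    /* When the browser actually used the flip-block fallback,
       rotate the arrow so it keeps pointing at the trigger */
    @container anchored(fallback: flip-block) {
      .tooltip .arrow {
        rotate: 180deg;
      }
    }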

    Nested View Transition Groups

    View Transitions have been a revolution, but they came with a specific trade-off: they flattened the element tree, which often broke 3D transforms or overflow: clip. I always had a feeling that it was missing something, and this might just be the answer. By using view-transition-group: nearest, we can finally nest transition groups within each other.

    This allows us to maintain clipping effects or 3D rotations during a transition — something that was previously impossible because the elements were hoisted up to the top level.

    .card img {
      view-transition-name: photo;
      view-transition-group: nearest; /* Keep it nested! */
    }
    

    Typography and Shapes

    Finally, the ergonomist in me is itching to try Text Box Trim, which promises to remove that annoying extra whitespace above and below text content (the leading) to finally achieve perfect vertical alignment. And for the creative side, corner-shape and the shape() function are opening up non-rectangular layouts, allowing for “squircles” and complex paths that respond to CSS variables. I can’t wait to have a design full of squircles!
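
    Both are high on my experiment list; as a quick, hedged sketch of what I expect to write once I get to them (values to be tuned when I actually try it):

    /* Trim the extra leading above the cap height and below the alphabetic baseline */
    h1 {
      text-box: trim-both cap alphabetic;
    }

    /* A squircle-ish card: corner-shape changes how the border-radius is drawn */
    .card {
      border-radius: 2rem;
      corner-shape: squircle;
    }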

    A Hopeful Future

    We are witnessing a world where CSS is becoming capable of handling logic, state, and complex interactions that previously belonged to JavaScript. Features like moveBefore (preserving DOM state for iframes/videos) and attr() (using types beyond strings for colors and grids) further cement this reality.
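
    For attr() in particular, the typed form is the part I’m most curious about. A tiny, hedged sketch (data-color is an attribute I made up for the example):

    /* Read a colour from a data attribute, falling back to gray */
    .tag {
      background-color: attr(data-color type(<color>), gray);
    }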

    While some of these features are currently experimental or specific to Chrome, the momentum is undeniable. We must hope for continued support across all browsers through initiatives like Interop to ensure these capabilities become the baseline. That being said, buy-in from the other browser engines is just as important as having all these awesome features land “Chrome first”. These new features need to be discussed, tinkered with, and tested before they can land everywhere.

    It is a fantastic moment to get into CSS. We are no longer just styling documents; we are crafting dynamic, ergonomic, and robust applications with a native toolkit that is more powerful than ever.

    Let’s get going with this new era and spread the word.

    This is CSS Wrapped!

  • How UX Professionals Can Lead AI Strategy

    Your senior management is excited about AI. They’ve read the articles, attended the webinars, and seen the demos. They’re convinced that AI will transform your organization, boost productivity, and give you a competitive edge.

    Meanwhile, you’re sitting in your UX role wondering what this means for your team, your workflow, and your users. You might even be worried about your job security.

    The problem is that the conversation about how AI gets implemented is happening right now, and if you’re not part of it, someone else will decide how it affects your work. That someone probably doesn’t understand user experience, research practices, or the subtle ways poor implementation can damage the very outcomes management hopes to achieve.

    You have a choice. You can wait for directives to come down from above, or you can take control of the conversation and lead the AI strategy for your practice.

    Why UX Professionals Must Own the AI Conversation

    Management sees AI as efficiency gains, cost savings, competitive advantage, and innovation all wrapped up in one buzzword-friendly package. They’re not wrong to be excited. The technology is genuinely impressive and can deliver real value.

    But without UX input, AI implementations often fail users in predictable ways:

    • They automate tasks without understanding the judgment calls those tasks require.
    • They optimize for speed while destroying the quality that made your work valuable.

    Your expertise positions you perfectly to guide implementation. You understand users, workflows, quality standards, and the gap between what looks impressive in a demo and what actually works in practice.

    Use AI Momentum to Advance Your Priorities

    Management’s enthusiasm for AI creates an opportunity to advance priorities you’ve been fighting for unsuccessfully. When management is willing to invest in AI, you can connect those long-standing needs to the AI initiative. Position user research as essential for training AI systems on real user needs. Frame usability testing as the validation method that ensures AI-generated solutions actually work.

    How AI gets implemented will shape your team’s roles, your users’ experiences, and your organization’s capability to deliver quality digital products.

    Your Role Isn’t Disappearing (It’s Evolving)

    Yes, AI will automate some of the tasks you currently do. But someone needs to decide which tasks get automated, how they get automated, what guardrails to put in place, and how automated processes fit around real humans doing complex work.

    That someone should be you.

    Think about what you already do. When you conduct user research, AI might help you transcribe interviews or identify themes. But you’re the one who knows which participant hesitated before answering, which feedback contradicts what you observed in their behavior, and which insights matter most for your specific product and users.

    When you design interfaces, AI might generate layout variations or suggest components from your design system. But you’re the one who understands the constraints of your technical platform, the political realities of getting designs approved, and the edge cases that will break a clever solution.

    Your future value comes from the work you’re already doing:

    • Seeing the full picture.
      You understand how this feature connects to that workflow, how this user segment differs from that one, and why the technically correct solution won’t work in your organization’s reality.
    • Making judgment calls.
      You decide when to follow the design system and when to break it, when user feedback reflects a real problem versus a feature request from one vocal user, and when to push back on stakeholders versus find a compromise.
    • Connecting the dots.
      You translate between technical constraints and user needs, between business goals and design principles, between what stakeholders ask for and what will actually solve their problem.

    AI will keep getting better at individual tasks. But you’re the person who decides which solution actually works for your specific context. The people who will struggle are those doing simple, repeatable work without understanding why. Your value is in understanding context, making judgment calls, and connecting solutions to real problems.

    Step 1: Understand Management’s AI Motivations

    Before you can lead the conversation, you need to understand what’s driving it. Management is responding to real pressures: cost reduction, competitive pressure, productivity gains, and board expectations.

    Speak their language.
    When you talk to management about AI, frame everything in terms of ROI, risk mitigation, and competitive advantage. “This approach will protect our quality standards” is less compelling than “This approach reduces the risk of damaging our conversion rate while we test AI capabilities.”

    Separate hype from reality.
    Take time to research what AI capabilities actually exist versus what’s hype. Read case studies, try tools yourself, and talk to peers about what’s actually working.

    Identify real pain points.
    Look for problems AI might legitimately address in your organization. Maybe your team spends hours formatting research findings, or accessibility testing creates bottlenecks. These are the problems worth solving.

    Step 2: Audit Your Current State and Opportunities

    Map your team’s work. Where does time actually go? Look at the past quarter and categorize how your team spent their hours.

    Identify high-volume, repeatable tasks versus high-judgment work.
    Repeatable tasks are candidates for automation. High-judgment work is where you add irreplaceable value.

    Also, identify what you’ve wanted to do but couldn’t get approved.
    This is your opportunity list. Maybe you’ve wanted quarterly usability tests, but only get budget annually. Write these down separately. You’ll connect them to your AI strategy in the next step.

    Spot opportunities where AI could genuinely help:

    • Research synthesis:
      AI can help organize and categorize findings.
    • Analyzing user behavior data:
      AI can process analytics and session recordings to surface patterns you might miss.
    • Rapid prototyping:
      AI can quickly generate testable prototypes, speeding up your test cycles.

    Step 3: Define AI Principles for Your UX Practice

    Before you start forming your strategy, establish principles that will guide every decision.

    Set non-negotiables.
    User privacy, accessibility, and human oversight of significant decisions. Write these down and get agreement from leadership before you pilot anything.

    Define criteria for AI use.
    AI is good at pattern recognition, summarization, and generating variations. AI is poor at understanding context, making ethical judgments, and knowing when rules should be broken.

    Define success metrics beyond efficiency.
    Yes, you want to save time. But you also need to measure quality, user satisfaction, and team capability. Build a balanced scorecard that captures what actually matters.

    Create guardrails.
    Maybe every AI-generated interface needs human review before it ships. These guardrails prevent the obvious disasters and give you space to learn safely.

    Step 4: Build Your AI-in-UX Strategy

    Now you’re ready to build the actual strategy you’ll pitch to leadership. Start small with pilot projects that have a clear scope and evaluation criteria.

    Connect to business outcomes management cares about.
    Don’t pitch “using AI for research synthesis.” Pitch “reducing time from research to insights by 40%, enabling faster product decisions.”

    Piggyback your existing priorities on AI momentum.
    Remember that opportunity list from Step 2? Now you connect those long-standing needs to your AI strategy. If you’ve wanted more frequent usability testing, explain that AI implementations need continuous validation to catch problems before they scale. AI implementations genuinely benefit from good research practices. You’re simply using management’s enthusiasm for AI as the vehicle to finally get resources for practices that should have been funded all along.

    Define roles clearly.
    Where do humans lead? Where does AI assist? Where won’t you automate? Management needs to understand that some work requires human judgment and should never be fully automated.

    Plan for capability building.
    Your team will need training and new skills. Budget time and resources for this.

    Address risks honestly.
    AI could generate biased recommendations, miss important context, or produce work that looks good but doesn’t actually function. For each risk, explain how you’ll detect it and what you’ll do to mitigate it.

    Step 5: Pitch the Strategy to Leadership

    Frame your strategy as de-risking management’s AI ambitions, not blocking them. You’re showing them how to implement AI successfully while avoiding the obvious pitfalls.

    Lead with outcomes and ROI they care about.
    Put the business case up front.

    Bundle your wish list into the AI strategy.
    When you present your strategy, include those capabilities you’ve wanted but couldn’t get approved before. Don’t present them as separate requests. Integrate them as essential components. “To validate AI-generated designs, we’ll need to increase our testing frequency from annual to quarterly” sounds much more reasonable than “Can we please do more testing?” You’re explaining what’s required for their AI investment to succeed.

    Show quick wins alongside a longer-term vision.
    Identify one or two pilots that can show value within 30-60 days. Then show them how those pilots build toward bigger changes over the next year.

    Ask for what you need.
    Be specific. You need a budget for tools, time for pilots, access to data, and support for team training.

    Step 6: Implement and Demonstrate Value

    Run your pilots with clear before-and-after metrics. Measure everything: time saved, quality maintained, user satisfaction, team confidence.

    Document wins and learning.
    Failures are useful too. If a pilot doesn’t work out, document why and what you learned.

    Share progress in management’s language.
    Monthly updates should focus on business outcomes, not technical details. “We’ve reduced research synthesis time by 35% while maintaining quality scores” is the right level of detail.

    Build internal advocates by solving real problems.
    When your AI pilots make someone’s job easier, you create advocates who will support broader adoption.

    Iterate based on what works in your specific context.
    Not every AI application will fit your organization. Pay attention to what’s actually working and double down on that.

    Taking Initiative Beats Waiting

    AI adoption is happening. The question isn’t whether your organization will use AI, but whether you’ll shape how it gets implemented.

    Your UX expertise is exactly what’s needed to implement AI successfully. You understand users, quality, and the gap between impressive demos and useful reality.

    Take one practical first step this week.
    Schedule 30 minutes to map one AI opportunity in your practice. Pick one area where AI might help, think through how you’d pilot it safely, and sketch out what success would look like.

    Then start the conversation with your manager. You might be surprised how receptive they are to someone stepping up to lead this.

    You know how to understand user needs, test solutions, measure outcomes, and iterate based on evidence. Those skills don’t change just because AI is involved. You’re applying your existing expertise to a new tool.

    Your role isn’t disappearing. It’s evolving into something more strategic, more valuable, and more secure. But only if you take the initiative to shape that evolution yourself.


  • Beyond The Black Box: Practical XAI For UX Practitioners

    In my last piece, we established a foundational truth: for users to adopt and rely on AI, they must trust it. We talked about trust being a multifaceted construct, built on perceptions of an AI’s Ability, Benevolence, Integrity, and Predictability. But what happens when an AI, in its silent, algorithmic wisdom, makes a decision that leaves a user confused, frustrated, or even hurt? A mortgage application is denied, a favorite song is suddenly absent from a playlist, and a qualified resume is rejected before a human ever sees it. In these moments, ability and predictability are shattered, and benevolence feels a world away.

    Our conversation now must evolve from the why of trust to the how of transparency. The field of Explainable AI (XAI), which focuses on developing methods to make AI outputs understandable to humans, has emerged to address this, but it’s often framed as a purely technical challenge for data scientists. I argue it’s a critical design challenge for products relying on AI. It’s our job as UX professionals to bridge the gap between algorithmic decision-making and human understanding.

    This article provides practical, actionable guidance on how to research and design for explainability. We’ll move beyond the buzzwords and into the mockups, translating complex XAI concepts into concrete design patterns you can start using today.

    De-mystifying XAI: Core Concepts For UX Practitioners

    XAI is about answering the user’s question: “Why?” Why was I shown this ad? Why is this movie recommended to me? Why was my request denied? Think of it as the AI showing its work on a math problem. Without it, you just have an answer, and you’re forced to take it on faith. In showing the steps, you build comprehension and trust. You also allow for your work to be double-checked and verified by the very humans it impacts.

    Feature Importance And Counterfactuals

    There are a number of techniques we can use to clarify or explain what is happening with AI. While methods range from providing the entire logic of a decision tree to generating natural language summaries of an output, two of the most practical and impactful types of information UX practitioners can introduce into an experience are feature importance (Figure 1) and counterfactuals. These are often the most straightforward for users to understand and the most actionable for designers to implement.

    Feature Importance

    This explainability method answers, “What were the most important factors the AI considered?” It’s about identifying the top 2-3 variables that had the biggest impact on the outcome. It’s the headline, not the whole story.

    Example: Imagine an AI that predicts whether a customer will churn (cancel their service). Feature importance might reveal that “number of support calls in the last month” and “recent price increases” were the two most important factors in determining if a customer was likely to churn.

    Counterfactuals

    This powerful method answers, “What would I need to change to get a different outcome?” This is crucial because it gives users a sense of agency. It transforms a frustrating “no” into an actionable “not yet.”

    Example: Imagine a loan application system that uses AI. A user is denied a loan. Instead of just seeing “Application Denied,” a counterfactual explanation would also share, “If your credit score were 50 points higher, or if your debt-to-income ratio were 10% lower, your loan would have been approved.” This gives the applicant clear, actionable steps they can take to potentially get a loan in the future.

    Using Model Data To Enhance The Explanation

    Although technical specifics are often handled by data scientists, it’s helpful for UX practitioners to know that tools like LIME (Local Interpretable Model-agnostic Explanations), which explains individual predictions by approximating the model locally, and SHAP (SHapley Additive exPlanations), which uses a game theory approach to explain the output of any machine learning model, are commonly used to extract these “why” insights from complex models. These libraries essentially help break down an AI’s decision to show which inputs were most influential for a given outcome.

    When done properly, the data underlying an AI tool’s decision can be used to tell a powerful story. Let’s walk through feature importance and counterfactuals and show how the data science behind the decision can be utilized to enhance the user’s experience.

    Now let’s cover feature importance with the assistance of local explanation data (e.g., from LIME): this approach answers, “Why did the AI make this specific recommendation for me, right now?” Instead of a general explanation of how the model works, it provides a focused reason for a single, specific instance. It’s personal and contextual.

    Example: Imagine an AI-powered music recommendation system like Spotify. A local explanation would answer, “Why did the system recommend this specific song by Adele to you right now?” The explanation might be: “Because you recently listened to several other emotional ballads and songs by female vocalists.”

    Finally, let’s cover adding value-based explanation data, such as SHapley Additive exPlanations (SHAP), to an explanation of a decision: this is a more nuanced version of feature importance that answers, “How did each factor push the decision one way or the other?” It helps visualize what mattered, and whether its influence was positive or negative.

    Example: Imagine a bank uses an AI model to decide whether to approve a loan application.

    Feature Importance: The model output might show that the applicant’s credit score, income, and debt-to-income ratio were the most important factors in its decision. This answers what mattered.

    Feature Importance with Value-Based Explanations (SHAP): SHAP values take feature importance further by showing how much each factor pushed this particular decision one way or the other.

    • For an approved loan, SHAP might show that a high credit score significantly pushed the decision towards approval (positive influence), while a slightly higher-than-average debt-to-income ratio pulled it slightly away (negative influence), but not enough to deny the loan.
    • For a denied loan, SHAP could reveal that a low income and a high number of recent credit inquiries strongly pushed the decision towards denial, even if the credit score was decent.

    This helps the loan officer explain to the applicant not just what was considered, but how each factor contributed to the final “yes” or “no” decision.

    It’s crucial to recognize that the ability to provide good explanations often starts much earlier in the development cycle. Data scientists and engineers play a pivotal role by intentionally structuring models and data pipelines in ways that inherently support explainability, rather than trying to bolt it on as an afterthought.

    Research and design teams can foster this by initiating early conversations with data scientists and engineers about user needs for understanding, contributing to the development of explainability metrics, and collaboratively prototyping explanations to ensure they are both accurate and user-friendly.

    XAI And Ethical AI: Unpacking Bias And Responsibility

    Beyond building trust, XAI plays a critical role in addressing the profound ethical implications of AI, particularly concerning algorithmic bias. Explainability techniques, such as analyzing SHAP values, can reveal if a model’s decisions are disproportionately influenced by sensitive attributes like race, gender, or socioeconomic status, even if these factors were not explicitly used as direct inputs.

    For instance, if a loan approval model consistently assigns negative SHAP values to applicants from a certain demographic, it signals a potential bias that needs investigation, empowering teams to surface and mitigate such unfair outcomes.

    The power of XAI also comes with the potential for “explainability washing.” Just as “greenwashing” misleads consumers about environmental practices, explainability washing can occur when explanations are designed to obscure, rather than illuminate, problematic algorithmic behavior or inherent biases. This could manifest as overly simplistic explanations that omit critical influencing factors, or explanations that strategically frame results to appear more neutral or fair than they truly are. It underscores the ethical responsibility of UX practitioners to design explanations that are genuinely transparent and verifiable.

    UX professionals, in collaboration with data scientists and ethicists, hold a crucial responsibility in communicating the why of a decision, and also the limitations and potential biases of the underlying AI model. This involves setting realistic user expectations about AI accuracy, identifying where the model might be less reliable, and providing clear channels for recourse or feedback when users perceive unfair or incorrect outcomes. Proactively addressing these ethical dimensions will allow us to build AI systems that are truly just and trustworthy.

    From Methods To Mockups: Practical XAI Design Patterns

    Knowing the concepts is one thing; designing them is another. Here’s how we can translate these XAI methods into intuitive design patterns.

    Pattern 1: The “Because” Statement (for Feature Importance)

    This is the simplest and often most effective pattern. It’s a direct, plain-language statement that surfaces the primary reason for an AI’s action.

    • Heuristic: Be direct and concise. Lead with the single most impactful reason. Avoid jargon at all costs.

    Example: Imagine a music streaming service. Instead of just presenting a “Discover Weekly” playlist, you add a small line of microcopy.

    Song Recommendation: “Velvet Morning”
    Because you listen to “The Fuzz” and other psychedelic rock.
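
    In code, the pattern amounts to little more than mapping the top-ranked factor to a plain-language template. A minimal sketch, assuming a hypothetical explanation payload whose field names are invented here:

    def because_statement(top_factor: dict) -> str:
        """Render the single most influential factor as plain-language microcopy."""
        templates = {
            "listening_history": 'Because you listen to “{anchor}” and other {genre}.',
            "similar_users": "Because listeners with similar taste play this often.",
        }
        template = templates.get(top_factor["type"], "Because of your recent activity.")
        return template.format(**top_factor)

    print(because_statement({
        "type": "listening_history",
        "anchor": "The Fuzz",
        "genre": "psychedelic rock",
    }))
    # Because you listen to “The Fuzz” and other psychedelic rock.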

    Pattern 2: The “What-If” Interactive (for Counterfactuals)

    Counterfactuals are inherently about empowerment. The best way to represent them is by giving users interactive tools to explore possibilities themselves. This is perfect for financial, health, or other goal-oriented applications.

    • Heuristic: Make explanations interactive and empowering. Let users see the cause and effect of their choices.

    Example: A loan application interface. After a denial, instead of a dead end, the user gets a tool to determine how various scenarios (what-ifs) might play out (See Figure 1).
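
    Under the hood, the interactive tool only needs to re-score the same application with one input changed. A minimal sketch, with a simple rule-based scorer standing in for the real model and thresholds that are purely illustrative:

    def score(applicant: dict) -> str:
        # Stand-in for the real model; approve when both thresholds are met.
        approvable = (
            applicant["credit_score"] >= 660
            and applicant["debt_to_income"] <= 0.40
        )
        return "approved" if approvable else "denied"

    def what_if(applicant: dict, field: str, new_value: float) -> str:
        """Re-score the applicant with one user-adjusted input and report any flip."""
        adjusted = {**applicant, field: new_value}
        before, after = score(applicant), score(adjusted)
        if before == after:
            return f"Changing {field} to {new_value} would not change the outcome ({before})."
        return f"Changing {field} to {new_value} would change the outcome from {before} to {after}."

    applicant = {"credit_score": 640, "debt_to_income": 0.35}
    print(what_if(applicant, "credit_score", 680))
    # Changing credit_score to 680 would change the outcome from denied to approved.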

    Pattern 3: The Highlight Reel (for Local Explanations)

    When an AI performs an action on a user’s content (like summarizing a document or identifying faces in photos), the explanation should be visually linked to the source.

    • Heuristic: Use visual cues like highlighting, outlines, or annotations to connect the explanation directly to the interface element it’s explaining.

    Example: An AI tool that summarizes long articles.

    AI-Generated Summary Point:
    Initial research showed a market gap for sustainable products.

    Source in Document:
    “…Our Q2 analysis of market trends conclusively demonstrated that no major competitor was effectively serving the eco-conscious consumer, revealing a significant market gap for sustainable products…”
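
    To draw that highlight, the interface needs the character offsets of the supporting passage. A minimal sketch, using a naive phrase search as a stand-in; in a production system, these offsets would ideally come from the summarization model itself:

    from typing import Optional

    def locate_source(document: str, key_phrase: str) -> Optional[dict]:
        """Return the start/end offsets of the supporting passage, if present."""
        start = document.lower().find(key_phrase.lower())
        if start == -1:
            return None
        return {"start": start, "end": start + len(key_phrase)}

    document = (
        "Our Q2 analysis of market trends conclusively demonstrated that no major "
        "competitor was effectively serving the eco-conscious consumer, revealing a "
        "significant market gap for sustainable products."
    )
    span = locate_source(document, "market gap for sustainable products")
    print(span, "->", document[span["start"]:span["end"]])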

    Pattern 4: The Push-and-Pull Visual (for Value-based Explanations)

    For more complex decisions, users might need to understand the interplay of factors. Simple data visualizations can make this clear without being overwhelming.

    • Heuristic: Use simple, color-coded data visualizations (like bar charts) to show the factors that positively and negatively influenced a decision.

    Example: An AI screening a candidate’s profile for a job.

    Why this candidate is a 75% match:

    Factors pushing the score up:

    • 5+ Years UX Research Experience
    • Proficient in Python

    Factors pushing the score down:

    • No experience with B2B SaaS
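
    The underlying data is just a list of signed contribution scores. A minimal sketch that splits hand-written scores (in practice they would come from something like SHAP) into positive and negative factors and renders them as plain-text bars:

    contributions = {
        "5+ years UX research experience": +0.30,
        "Proficient in Python": +0.15,
        "No experience with B2B SaaS": -0.20,
    }

    def bar(value: float, scale: int = 40) -> str:
        # Length encodes magnitude; the symbol encodes direction.
        return ("+" if value > 0 else "-") * int(abs(value) * scale)

    print("Factors pushing the score up:")
    for factor, value in contributions.items():
        if value > 0:
            print(f"  {factor:<35} {bar(value)}")

    print("Factors pushing the score down:")
    for factor, value in contributions.items():
        if value < 0:
            print(f"  {factor:<35} {bar(value)}")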

    Learning and using these design patterns in the UX of your AI product will help increase its explainability. You can also use additional techniques that I’m not covering in depth here. These include the following:

    • Natural language explanations: Translating an AI’s technical output into simple, conversational human language that non-experts can easily understand.
    • Contextual explanations: Providing a rationale for an AI’s output at the specific moment and location it is most relevant to the user’s task.
    • Relevant visualizations: Using charts, graphs, or heatmaps to visually represent an AI’s decision-making process, making complex data intuitive and easier for users to grasp.

    A Note For the Front End: Translating these explainability outputs into seamless user experiences also presents its own set of technical considerations. Front-end developers often grapple with API design for retrieving explanation data efficiently, and performance implications (such as generating explanations in real time for every user interaction) need careful planning to avoid latency.
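
    One common mitigation is to generate the explanation at prediction time and return it in the same response, so the front end never needs a second round trip. A minimal sketch using Flask purely as a stand-in; the endpoint, helper function, and payload fields are all hypothetical:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def run_model_with_explanations(user_id: str):
        """Stub standing in for the real model call; returns an item plus ranked factors."""
        factors = [
            {"feature": "listening_history", "statement": "Because you listen to “The Fuzz”.", "weight": 0.6},
            {"feature": "similar_users", "statement": "Listeners like you play this often.", "weight": 0.3},
        ]
        return {"id": "velvet-morning", "title": "Velvet Morning"}, factors

    @app.post("/api/recommendations")
    def recommend():
        item, factors = run_model_with_explanations(request.json["user_id"])
        return jsonify({
            "item": item,
            "explanation": {
                "summary": factors[0]["statement"],  # the “Because” line shown by default
                "details": factors,                  # full list behind “Learn more”
            },
        })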

    Some Real-world Examples

    UPS Capital’s DeliveryDefense

    UPS uses AI to assign a “delivery confidence score” to addresses to predict the likelihood of a package being stolen. Their DeliveryDefense software analyzes historical data on location, loss frequency, and other factors. If an address has a low score, the system can proactively reroute the package to a secure UPS Access Point, providing an explanation for the decision (e.g., “Package rerouted to a secure location due to a history of theft”). This system demonstrates how XAI can be used for risk mitigation and building customer trust through transparency.

    Autonomous Vehicles

    Autonomous vehicles will need to use XAI effectively to make safe, explainable decisions. When a self-driving car brakes suddenly, the system can provide a real-time explanation for its action, for example, by identifying a pedestrian stepping into the road. This is not only crucial for passenger comfort and trust but also a regulatory requirement for proving the safety and accountability of the AI system.

    IBM Watson Health (and its challenges)

    While Watson Health is often cited as a general example of AI in healthcare, it’s also a valuable case study for the importance of XAI. The failure of its Watson for Oncology project highlights what can go wrong when explanations are not clear, or when the underlying data is biased or not localized. The system’s recommendations were sometimes inconsistent with local clinical practices because they were based on U.S.-centric guidelines. This serves as a cautionary tale on the need for robust, context-aware explainability.

    The UX Researcher’s Role: Pinpointing And Validating Explanations

    Our design solutions are only effective if they address the right user questions at the right time. An explanation that answers a question the user doesn’t have is just noise. This is where UX research becomes the critical connective tissue in an XAI strategy, ensuring that we explain the what and how that actually matters to our users. The researcher’s role is twofold: first, to inform the strategy by identifying where explanations are needed, and second, to validate the designs that deliver those explanations.

    Informing the XAI Strategy (What to Explain)

    Before we can design a single explanation, we must understand the user’s mental model of the AI system. What do they believe it’s doing? Where are the gaps between their understanding and the system’s reality? This is the foundational work of a UX researcher.

    Mental Model Interviews: Unpacking User Perceptions Of AI Systems

    Through deep, semi-structured interviews, UX practitioners can gain invaluable insights into how users perceive and understand AI systems. These sessions are designed to encourage users to literally draw or describe their internal “mental model” of how they believe the AI works. This often involves asking open-ended questions that prompt users to explain the system’s logic, its inputs, and its outputs, as well as the relationships between these elements.

    These interviews are powerful because they frequently reveal profound misconceptions and assumptions that users hold about AI. For example, a user interacting with a recommendation engine might confidently assert that the system is based purely on their past viewing history. They might not realize that the algorithm also incorporates a multitude of other factors, such as the time of day they are browsing, the current trending items across the platform, or even the viewing habits of similar users.

    Uncovering this gap between a user’s mental model and the actual underlying AI logic is critically important. It tells us precisely what specific information we need to communicate to users to help them build a more accurate and robust mental model of the system. This, in turn, is a fundamental step in fostering trust. When users understand, even at a high level, how an AI arrives at its conclusions or recommendations, they are more likely to trust its outputs and rely on its functionality.

    AI Journey Mapping: A Deep Dive Into User Trust And Explainability

    By meticulously mapping the user’s journey with an AI-powered feature, we gain invaluable insights into the precise moments where confusion, frustration, or even profound distrust emerge. This uncovers critical junctures where the user’s mental model of how the AI operates clashes with its actual behavior.

    Consider a music streaming service: Does the user’s trust plummet when a playlist recommendation feels “random,” lacking any discernible connection to their past listening habits or stated preferences? This perceived randomness is a direct challenge to the user’s expectation of intelligent curation and a breach of the implicit promise that the AI understands their taste. Similarly, in a photo management application, do users experience significant frustration when an AI photo-tagging feature consistently misidentifies a cherished family member? This error is more than a technical glitch; it strikes at the heart of accuracy, personalization, and even emotional connection.

    These pain points are vivid signals indicating precisely where a well-placed, clear, and concise explanation is necessary. Such explanations serve as crucial repair mechanisms, mending a breach of trust that, if left unaddressed, can lead to user abandonment.

    The power of AI journey mapping lies in its ability to move us beyond simply explaining the final output of an AI system. While understanding what the AI produced is important, it’s often insufficient. Instead, this process compels us to focus on explaining the process at critical moments. This means addressing:

    • Why a particular output was generated: Was it due to specific input data? A particular model architecture?
    • What factors influenced the AI’s decision: Were certain features weighted more heavily?
    • How the AI arrived at its conclusion: Can we offer a simplified, analogous explanation of its internal workings?
    • What assumptions the AI made: Were there implicit understandings of the user’s intent or data that need to be surfaced?
    • What the limitations of the AI are: Clearly communicating what the AI cannot do, or where its accuracy might waver, builds realistic expectations.

    AI journey mapping transforms the abstract concept of XAI into a practical, actionable framework for UX practitioners. It enables us to move beyond theoretical discussions of explainability and instead pinpoint the exact moments where user trust is at stake, providing the necessary insights to build AI experiences that are powerful, transparent, understandable, and trustworthy.

    Ultimately, research is how we uncover the unknowns. Your team might be debating how to explain why a loan was denied, but research might reveal that users are far more concerned with understanding how their data was used in the first place. Without research, we are simply guessing what our users are wondering.

    Collaborating On The Design (How to Explain Your AI)

    Once research has identified what to explain, the collaborative loop with design begins. Designers can prototype the patterns we discussed earlier—the “Because” statement, the interactive sliders—and researchers can put those designs in front of users to see if they hold up.

    Targeted Usability & Comprehension Testing: We can design research studies that specifically test the XAI components. We don’t just ask, “Is this easy to use?” We ask, “After seeing this, can you tell me in your own words why the system recommended this product?” or “Show me what you would do to see if you could get a different result.” The goal here is to measure comprehension and actionability, alongside usability.

    Measuring Trust Itself: We can use simple surveys and rating scales before and after an explanation is shown. For instance, we can ask a user on a 5-point scale, “How much do you trust this recommendation?” before they see the “Because” statement, and then ask them again afterward. This provides quantitative data on whether our explanations are actually moving the needle on trust.
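
    The analysis can be as simple as comparing the mean rating before and after the explanation is shown. A minimal sketch with invented ratings on the 5-point scale described above:

    before = [2, 3, 3, 2, 4, 3]   # "How much do you trust this recommendation?"
    after  = [4, 4, 3, 3, 5, 4]   # same question, asked after the "Because" statement

    deltas = [a - b for a, b in zip(after, before)]
    print(f"Mean trust before: {sum(before) / len(before):.2f}")
    print(f"Mean trust after:  {sum(after) / len(after):.2f}")
    print(f"Mean shift:        {sum(deltas) / len(deltas):+.2f} points")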

    This process creates a powerful, iterative loop. Research findings inform the initial design. That design is then tested, and the new findings are fed back to the design team for refinement. Maybe the “Because” statement was too jargony, or the “What-If” slider was more confusing than empowering. Through this collaborative validation, we ensure that the final explanations are technically accurate, genuinely understandable, useful, and trust-building for the people using the product.

    The Goldilocks Zone Of Explanation

    A critical word of caution: it is possible to over-explain. As in the fairy tale, where Goldilocks sought the porridge that was ‘just right’, the goal of a good explanation is to provide the right amount of detail—not too much and not too little. Bombarding a user with every variable in a model will lead to cognitive overload and can actually decrease trust. The goal is not to make the user a data scientist.

    One solution is progressive disclosure.

    1. Start with the simple. Lead with a concise “Because” statement. For most users, this will be enough.
    2. Offer a path to detail. Provide a clear, low-friction link like “Learn More” or “See how this was determined.”
    3. Reveal the complexity. Behind that link, you can offer the interactive sliders, the visualizations, or a more detailed list of contributing factors.

    This layered approach respects user attention and expertise, providing just the right amount of information for their needs. Let’s imagine you’re using a smart home device that recommends optimal heating based on various factors.

    Start with the simple: “Your home is currently heated to 72 degrees, which is the optimal temperature for energy savings and comfort.”

    Offer a path to detail: Below that, a small link or button: “Why is 72 degrees optimal?”

    Reveal the complexity: Clicking that link could open a new screen showing:

    • Interactive sliders for outside temperature, humidity, and your preferred comfort level, demonstrating how these adjust the recommended temperature.
    • A visualization of energy consumption at different temperatures.
    • A list of contributing factors like “Time of day,” “Current outside temperature,” “Historical energy usage,” and “Occupancy sensors.”
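
    Here is a minimal sketch of how those three layers might travel together as one data structure that the interface discloses progressively; the field names and values are illustrative only:

    thermostat_explanation = {
        "summary": "Your home is heated to 72 degrees, the optimal balance of energy savings and comfort.",
        "detail_prompt": "Why is 72 degrees optimal?",
        "details": {
            "contributing_factors": [
                "Time of day",
                "Current outside temperature",
                "Historical energy usage",
                "Occupancy sensors",
            ],
            "what_if_controls": ["outside_temperature", "humidity", "comfort_preference"],
            "visualization": "energy_consumption_by_temperature",
        },
    }

    def render(explanation: dict, expanded: bool = False) -> list:
        """Return the lines of copy to show; details appear only once the user opts in."""
        lines = [explanation["summary"], explanation["detail_prompt"]]
        if expanded:
            lines += explanation["details"]["contributing_factors"]
        return lines

    print(render(thermostat_explanation))                  # layers 1 and 2 only
    print(render(thermostat_explanation, expanded=True))   # full detail on demand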

    It’s effective to combine multiple XAI methods, and the Goldilocks Zone of Explanation pattern, which advocates progressive disclosure, implicitly encourages this. You might start with a simple “Because” statement (Pattern 1) for immediate comprehension, and then offer a “Learn More” link that reveals a “What-If” Interactive (Pattern 2) or a “Push-and-Pull Visual” (Pattern 4) for deeper exploration.

    For instance, a loan application system could initially state the primary reason for denial (feature importance), then allow the user to interact with a “What-If” tool to see how changes to their income or debt would alter the outcome (counterfactuals), and finally, provide a detailed “Push-and-Pull” chart (value-based explanation) to illustrate the positive and negative contributions of all factors. This layered approach allows users to access the level of detail they need, when they need it, preventing cognitive overload while still providing comprehensive transparency.

    Determining which XAI tools and methods to use is primarily a function of thorough UX research. Mental model interviews and AI journey mapping are crucial for pinpointing user needs and pain points related to AI understanding and trust. Mental model interviews help uncover user misconceptions about how the AI works, indicating areas where fundamental explanations (like feature importance or local explanations) are needed. AI journey mapping, on the other hand, identifies critical moments of confusion or distrust in the user’s interaction with the AI, signaling where more granular or interactive explanations (like counterfactuals or value-based explanations) would be most beneficial to rebuild trust and provide agency.

    Ultimately, the best way to choose a technique is to let user research guide your decisions, ensuring that the explanations you design directly address actual user questions and concerns, rather than simply offering technical details for their own sake.

    XAI for Deep Reasoning Agents

    Some of the newest AI systems, known as deep reasoning agents, produce an explicit “chain of thought” for every complex task. They do not merely cite sources; they show the logical, step-by-step path they took to arrive at a conclusion. While this transparency provides valuable context, a play-by-play that spans several paragraphs can feel overwhelming to a user simply trying to complete a task.

    The principles of XAI, especially the Goldilocks Zone of Explanation, apply directly here. We can curate the journey, using progressive disclosure to show only the final conclusion and the most salient step in the thought process first. Users can then opt in to see the full, detailed, multi-step reasoning when they need to double-check the logic or find a specific fact. This approach respects user attention while preserving the agent’s full transparency.

    Next Steps: Empowering Your XAI Journey

    Explainability is a fundamental pillar for building trustworthy and effective AI products. For the advanced practitioner looking to drive this change within their organization, the journey extends beyond design patterns into advocacy and continuous learning.

    To deepen your understanding and practical application, consider exploring resources like the AI Explainability 360 (AIX360) toolkit from IBM Research or Google’s What-If Tool, which offer interactive ways to explore model behavior and explanations. Engaging with communities like the Responsible AI Forum or specific research groups focused on human-centered AI can provide invaluable insights and collaboration opportunities.

    Finally, be an advocate for XAI within your own organization. Frame explainability as a strategic investment. Consider a brief pitch to your leadership or cross-functional teams:

    “By investing in XAI, we’ll go beyond building trust; we’ll accelerate user adoption, reduce support costs by empowering users with understanding, and mitigate significant ethical and regulatory risks by exposing potential biases. This is good design and smart business.”

    Your voice, grounded in practical understanding, is crucial in bringing AI out of the black box and into a collaborative partnership with users.

  • Searching for nearby restaurants with Open-AutoGLM, built from scratch

    1. Environment setup. Download the project repository: https://github.com/zai-org/Open-AutoGLM . Using Open-AutoGLM requires the following development environment: a Python environment and ADB debugging
  • How to integrate Amazon ElastiCache into Spring Boot

    Caching is a vital part of any server-side system. Today we look at how to integrate Amazon ElastiCache into Spring Boot so that responses get faster and the system as a whole stays lean.
  • HarmonyOS ArkUI: State Management, Application Structure, and Routing Explained

    🔥 A must-read for HarmonyOS beginners! Aimed at newcomers, this article explains the differences between V1 and V2 state management in ArkUI and, with examples, introduces application structure, Route-based routing, and the basic usage of Navigation, so you can quickly understand how pages are organized and linked in a HarmonyOS app.
  • Let's Talk About Reliable Message Delivery

    1. Core concept. Reliable delivery means ensuring that a message travels from the producer to the consumer successfully, without being lost, without being duplicated, and (in some scenarios) in order, even in the face of network failures, system crashes, and other exceptions. 2. Facing