Tags: startup

  • 8点1氪 | Luo Yonghao responds to claims that criticizing Yu Minhong is "ingratitude"; a masterpiece from a museum collection, estimated at RMB 88 million, surfaces at auction and the Nanjing Museum responds; yields on multiple "baobao" money-market funds fall below 1%

    Today's Highlights

    Yao Shunyu becomes Tencent's Chief AI Scientist, leading large language models and AI Infra

    Luo Fuli makes her first public appearance for Xiaomi since joining

    He Xiaopeng says there is currently no AI bubble, and humanoid robots will ultimately be a contest between giants

    Mixue Bingcheng's first US store enters trial operation, with sugar levels of up to 200% on the menu

    Cherry prices may fall further, with some cherries selling for under RMB 20 per 500g

    Top 3 Stories

    Luo Yonghao responds to "criticizing Yu Minhong is ingratitude": I completely disagree; I was very angry at the time

    On December 17, Luo Yonghao released a new episode of his podcast "Luo Yonghao's Crossroads". In it, he looked back on his years at New Oriental, saying that it is mainstream today for young people to complain about their bosses and workplaces, yet when he does it he is called ungrateful, which strikes him as very odd. If you succeed in the workplace, has it never occurred to you that your own effort might be the reason? Luo said: "We sometimes speak of the debt owed to a mentor who recognized your talent. As I understand it, that debt arises when everyone says you are no good, yet some leader insists you are and backs you all the way until you truly are. People say that without New Oriental I would be nothing, so Yu Minhong showed me kindness, and my criticizing him is ingratitude. That I completely cannot accept." (Red Star News)

    Nanjing Museum responds to Ming-dynasty Qiu Ying's "Spring in Jiangnan" from its collection appearing on the auction market

    On December 17, the Nanjing Museum responded to reports that the Ming-dynasty painting "Spring in Jiangnan" (江南春) attributed to Qiu Ying had surfaced on the auction market. The museum said the painting was twice appraised as a forgery by expert panels, in 1961 and 1964, and was disposed of in the 1990s in accordance with the Measures for the Administration of Museum Collections. A lawsuit over the gift contract for the painting is currently being heard. The museum said it will cooperate fully with the proceedings, trace the painting's whereabouts, and tighten the management of donated items and collection objects. As to whether the "Spring in Jiangnan" now at auction is the donated painting, the museum said further verification is needed.

    According to earlier reports, Pang Laichen (1864-1949) was one of modern China's foremost collectors, and the historic paintings in his "Xuzhai" collection were renowned for their quality and completeness. In the 1950s, Pang's descendants donated large numbers of precious ancient paintings and calligraphy to several state-owned cultural institutions, with the Nanjing Museum receiving the most, 137 items (sets) in all. Unexpectedly, a Ming-dynasty Qiu Ying handscroll of "Spring in Jiangnan" donated to the museum by Pang's descendants suddenly appeared in a Beijing art auction this year, with an estimate of RMB 88 million. (Jiemian News, The Paper)

    Yields on multiple "baobao" money-market funds fall below 1%, while the Yu'ebao fund holds the line

    China Securities Journal reported on December 17 that yields on "baobao" cash-management funds have kept sliding this year. According to Wind data, as of December 16 the median seven-day annualized yield of the 941 money-market funds with available data was 1.24%. Of these, 102 funds had seven-day annualized yields below 1%, and more than 300 sat between 1% and 1.2%. The largest, Tianhong Yu'ebao, was still above 1%: its seven-day annualized yield stood at 1.014% as of December 16, a slight rebound from the previous week, having earlier dipped to 1.001% without ever falling below 1%. Although money-market yields are in a long-term downtrend, fund assets have recently grown rather than shrunk, driven by falling returns on demand deposits and volatility in both the bond and equity markets. Per data from the Asset Management Association of China, total money-market fund units stood at 15.05 trillion at the end of October, up more than 38 million units from the end of September. (Jiemian News)

    Big Companies / Big Events

    Yao Shunyu becomes Tencent's Chief AI Scientist, leading large language models and AI Infra

    36Kr has learned exclusively that Tencent recently completed an organizational restructuring, formally establishing an AI Infra department, an AI Data department, and a Data Computing Platform department. In an internal announcement issued on the afternoon of December 17, Tencent said Vinces Yao will serve as Chief AI Scientist in the "CEO/President's Office", reporting to Tencent President Martin Lau (Liu Chiping); he will concurrently head the AI Infra department and the Large Language Model department, reporting to Lu Shan, president of the Technology and Engineering Group.

    Tencent did not disclose Vinces Yao's Chinese name or background. However, 36Kr has learned that Vinces Yao is Yao Shunyu, who joined Tencent several months ago. A graduate of Tsinghua University and Princeton University, he was previously a researcher at OpenAI and a core contributor to OpenAI's first agent products, Operator and Deep Research.

    MetaX opens 568% above its offer price, with a single IPO allotment netting nearly RMB 300,000

    According to Sina Finance, on December 17, following Moore Threads, MetaX (沐曦股份) became the second domestic GPU company to list on the STAR Market. The stock opened 568% above its offer price at RMB 700, for a total market value of RMB 280 billion, with a single allotment of new shares yielding a profit of nearly RMB 300,000. The company develops full-stack high-performance GPU chips and computing platforms in-house; its products include the Xisi N-series GPUs for AI inference, the Xiyun C-series GPUs for combined training/inference and general-purpose computing, and the Xicai G-series GPUs for graphics rendering, still in development. On these numbers, MetaX overtakes Moore Threads as the most profitable new listing on the A-share market since the full rollout of the registration-based IPO system. (Sina Finance)

    Zong Fuli steps down as legal representative of Wahaha food subsidiary, among other roles

    36Kr has learned that Hangzhou Wahaha Food Co., Ltd. recently underwent a business-registration change in which Zong Fuli stepped down as legal representative, manager, and director, with Xu Simin taking over all three roles. The company was founded in October 1992 with registered capital of over RMB 240 million, and is held by Hangzhou Wahaha Hongzhen Investment Co., Ltd., Hangzhou Wahaha Group Co., Ltd., and Zhejiang Wahaha Industrial Co., Ltd., with stakes of 51%, 39%, and 10% respectively. Notably, Zong Fuli had already stepped down as chairwoman and legal representative of Wahaha Group in November.

    "Childbirth at essentially no cost" to land next year, as the National Healthcare Security Administration publishes five articles in a row

    On December 16, the National Healthcare Security Administration (NHSA) published five articles on the theme of "medical insurance supporting childbirth", showcasing local programs under which giving birth costs essentially nothing and answering public questions with local practice and real insurance data. The national medical insurance work conference proposed bringing flexibly employed people, migrant workers, and gig-economy workers into maternity insurance coverage, reasonably raising reimbursement levels for prenatal checkups, and striving to achieve essentially zero out-of-pocket costs nationwide for in-policy childbirth.

    NHSA data show that by the end of 2025, all 31 provinces (autonomous regions and municipalities) plus the Xinjiang Production and Construction Corps had included eligible assisted-reproduction services in medical insurance, seven provinces had achieved full coverage of in-policy hospital delivery costs, and 95% of insurance pooling regions paid maternity allowances directly to insured individuals. Seven provinces, including Jilin, Jiangsu, and Shandong, have already made childbirth essentially free. (Yicai)

    Mixue Bingcheng's first US store enters trial operation, with sugar levels of up to 200% on the menu

    In recent days, several netizens have posted on social media about spotting the "Snow King" mascot in Hollywood. According to these posts, the Mixue Bingcheng store is located on the Hollywood Walk of Fame in Los Angeles and is currently in trial operation, with presales and promotional activities under way.

    The Los Angeles merchant listed as "MIXUE (Hollywood)" has launched an exclusive presale with two bundles, each priced at USD 3.99 and including two drinks plus an ice cream; new users pay as little as USD 1.17. Notably, when choosing the sugar level for drinks in the bundles, alongside the usual options of regular sugar, 70%, 50%, 30%, and no added sugar, there are also 120%, 150%, and 200% sugar options. (Yicai)

    Arctic temperatures hit highest level since 1900

    The Arctic annual climate report released on December 16 by the US National Oceanic and Atmospheric Administration shows that over the October 2024 to September 2025 reporting period, the Arctic's average surface air temperature was the highest since records began in 1900. The report says the past 10 years have been the warmest decade on record for the region, and that since 2006 the Arctic has warmed at more than twice the global average rate. (Xinhua, Cailian Press)

    He Xiaopeng says there is currently no AI bubble, and humanoid robots will be a contest between giants

    On December 17, XPeng chairman He Xiaopeng posted on WeChat Moments sharing his views on hot topics including the AI bubble, physical AI, new startups and robotics in the US, and the arrival of AGI. He said humanoid robots will ultimately be a competition between giants, while specialized robots will attract many players across different domains, with plenty of opportunities to succeed. He believes there is currently no AI bubble and that the AI market holds enormous opportunity. (IT Home)

    Apple considers packaging iPhone chips in India

    On Wednesday US Eastern time, foreign media reported that Apple is in preliminary talks with Indian chipmakers to assemble and package components for its iPhones. Apple's industrial partnerships in India have so far centered on final assembly of end products such as the iPhone and AirPods. The latest talks suggest Apple's Indian footprint may extend upstream from device assembly into the more complex business of semiconductor packaging. (Star Market Daily)

    Musk on Ford scaling back its EV strategy: beyond saving, they insist on "dying"

    On December 17, Musk responded on X to news that Ford is scaling back its electric-vehicle strategy, saying it marks the decline and death of the legacy auto industry. One user commented: "The fact that Ford still thinks the reason to make EVs is emissions reduction tells you how little legacy auto understands this paradigm shift. Emissions reduction is just a side effect. EVs are fundamentally a better platform for autonomy. This is like investing in new whaling ships after the lightbulb was commercialized."

    Musk replied: "Exactly. Many years ago, I said that a non-autonomous gasoline car would be like riding a horse while using a flip phone, but you can't force a good idea on a legacy industry. They just insist on dying." (Shangyou News)

    Alibaba Ventures trims its stake in Huayi Brothers

    Huayi Brothers announced on December 17 that shareholder Alibaba Ventures sold 29.5268 million shares via block trade that day, cutting its stake from 3.467799% to 2.403580%. The combined holding of Alibaba Ventures and its concert party Jack Ma fell from 6.064215% to 4.999996%, so they are no longer shareholders holding 5% or more. The company said the sale helps stabilize its shareholding structure and will not adversely affect normal operations. (Jiemian News)

    Douyin, Xiaohongshu, Beike, 58.com, Xianyu, Lianjia, and others summoned by regulators

    According to the Beijing Municipal Commission of Housing and Urban-Rural Development on December 17, to strengthen governance of the online ecosystem and curb real-estate-related online misconduct, multiple Beijing authorities recently held a joint meeting with internet platforms including Douyin, Xiaohongshu, Beike, 58.com, Xianyu, Lianjia, 5i5j (我爱我家), and Maitian (麦田). The meeting noted that some self-media accounts have posted and spread content talking down the Beijing property market, stoking panic, spreading false information, and using fake listings to attract traffic, seriously disrupting market order. Platforms were required to conduct immediate, comprehensive self-inspections, promptly remove violating content and deal with violating accounts, and speed up the establishment of regular internal content-review mechanisms. (Cailian Press)

    Cherry prices may fall further, with some cherries selling below RMB 20 per 500g

    In the early morning of December 12, a new shipment of sea-freighted Chilean cherries docked at Guangzhou's Nansha Port, and the entire cargo was soon transferred to the Guangzhou Jiangnan Fruit and Vegetable Wholesale Market. With this batch on sale, this year's cherry season has officially opened. Several traders at the market told reporters: "Last year cherry prices hit a record low, and this year they will go even lower. As more sea shipments arrive, prices will fall again; once containers open, the market will drop 20% to 30%."

    On December 15, Time Finance found that cherry prices at Sam's Club, Freshippo (Hema), Dingdong Maicai, and Pupu Supermarket had all fallen recently. Sam's Club saw the steepest drop: one 1 kg pack of 3J-grade cherries was RMB 40 cheaper than on December 12. (Time Finance)

    Vanke floats a new extension plan for RMB 2 billion of matured medium-term notes

    After a round of back-and-forth with creditors, Vanke has put forward a new extension plan for RMB 2 billion of medium-term notes that have already matured. On December 17, the Shanghai Clearing House disclosed the agenda for the second noteholders' meeting of "22 Vanke MTN004".

    Under the plan, principal repayment would be extended by 12 months, to December 15, 2026, when the notes' full principal would be repaid. On interest, the RMB 60 million due on December 15, 2025 would be paid within a grace period. Unpaid principal accrues interest at 3.00% during the grace period, with no compound interest on unpaid interest. During the extension (December 15, 2025 to December 15, 2026), the coupon rate remains unchanged. (Yicai)

    Ministry of Education: high schools must strictly limit the number of exams

    The General Office of the Ministry of Education recently issued a Notice on Further Strengthening the Management of Routine Examinations in Primary and Secondary Schools, calling for fewer routine tests, higher-quality exams, lighter study loads, and all-round healthy student development. The notice sets exam-frequency limits by school stage: no written exams in grades 1 and 2 of primary school; one school-organized final exam per term for other compulsory-education grades; junior high schools may add one midterm exam per subject as appropriate; regular high schools must strictly control the number of exams; and regional or cross-school exams are prohibited for all primary grades and for non-graduating junior and senior high grades. To prepare for graduation and entrance requirements, graduating classes in junior and senior high may take one or two mock exams during final revision. (Cailian Press)

    Blood donation age limit to be raised to 65

    The National Health Commission issued a notice on December 17 soliciting public comment on a draft revision of the Blood Donation Law of the People's Republic of China. The recommended age range for voluntary donors who meet health requirements would change from 18-55 to 18-65. Blood stations may collect no more than 400 ml of whole blood per donation, and the minimum interval between whole-blood donations would be shortened from six months to 90 days. The draft encourages scientific and technological innovation across blood collection, supply, storage, and use; requires at least one fixed blood-donation facility per county (city, district), with more in populous counties with higher demand; and calls for stronger coordination and a sounder emergency blood-allocation mechanism. (Jiemian News)

    South Korea plans to cover hair-loss treatment under national health insurance

    South Korean President Lee Jae-myung, while hearing work reports from the Ministry of Health and Welfare and other agencies on December 16, instructed the ministry to push forward with bringing hair-loss treatment into national health insurance coverage. (Huanqiu.com, Cailian Press)

    Trump signs proclamation placing entry restrictions on 40 countries

    US President Trump signed a proclamation on December 16 expanding the number of countries subject to full or partial US entry restrictions from 19 to 40. Under the new proclamation, Laos and Sierra Leone, previously under partial restrictions, will face full entry bans. Full bans were also newly imposed on citizens of Burkina Faso, Mali, Niger, South Sudan, and Syria, and on holders of travel documents issued by the Palestinian Authority. The number of countries facing full US entry bans thus rises from 12 to 20. The proclamation also adds partial entry restrictions on 15 countries including Nigeria and Cote d'Ivoire, bringing the partially restricted total to 20. (Cailian Press)

    Xiaomi, Huawei, and Li Auto file reports; police arrest 12

    The "Yantai Public Security" WeChat account announced today that after a four-month operation, Yantai police have broken up a ring that hyped negative stories about new-energy vehicles, arresting 12 people, seizing roughly a million yuan in funds, and shutting down more than 8,000 accounts.

    Since July this year, Xiaomi, Huawei's Harmony Intelligent Mobility Alliance, Li Auto, and other companies have filed police reports saying that large volumes of negative articles targeting their car brands had suddenly appeared on a certain platform. After screening more than 3,000 such articles one by one, Yantai investigators found they all came from a batch of recently registered accounts with abnormal activity patterns and scattered IP addresses, showing clear signs of industrialized operation: a suspected "internet water army" deliberately stirring up traffic for profit. To secure evidence, the task force started from the huge pool of fake accounts controlled by the ring, analyzed more than 80,000 pieces of online content and over 100,000 fund-flow records, and mapped out the group's structure, division of labor, and complete workflow. Moving in simultaneously in Yantai and Liaocheng, the task force arrested 12 suspects, seized more than RMB 1 million in funds, and shut down over 8,000 illegal accounts. (Cailian Press)

    SAMR: platforms requiring merchants to offer the "lowest price on the internet" may constitute monopolistic conduct

    The State Administration for Market Regulation (SAMR) has indicated that a platform requiring merchants to offer the "lowest price on the internet" may constitute abuse of market dominance or a monopoly agreement. At a press briefing the same day on antitrust enforcement in areas affecting people's livelihoods, Liu Jian, deputy head of SAMR's Antitrust Enforcement Bureau I, said the recently released draft Guidelines for Antitrust Compliance of Internet Platforms identify eight new types of monopoly risk and give platform companies practical compliance guidance. For example, some platforms require in-platform merchants not to price goods higher than on competing platforms; the guidelines note that such "lowest price on the internet" requirements may constitute abuse of dominance or a monopoly agreement. (Xinhua)

    iPhone 18 Pro may ditch the pill-shaped cutout

    On December 16, tech outlet The Information reported that Apple's iPhone 18 Pro and iPhone 18 Pro Max will see a major design change, abandoning the pill-shaped "Dynamic Island" cutout in favor of a single hole-punch front camera in the top-left corner plus under-display Face ID. (Pear Video)

    Retail chain Mannings to close all mainland China stores

    On December 16, Mannings China's official website posted a notice on the shutdown of its offline and online stores and the handling of member points. Owing to a strategic adjustment of the business, its mainland China stores will close early next year.

    Specifically, offline stores will cease operating after January 15, 2026, and the official Mannings mini-program mall will shut at 24:00 on December 28, 2025. The Mannings flagship store on Tmall, its Tmall health-products store, and its JD.com and Pinduoduo flagship stores will stop selling and end member benefits at 24:00 on December 26, 2025, with after-sales service ending on January 25, 2026. (The Paper)

    US stocks close lower across the board; Tesla falls over 4%

    36Kr reports that at the close on December 17, the three major US indexes all fell: the Dow down 0.47%, the Nasdaq down 1.81%, and the S&P 500 down 1.16%. Big tech mostly weakened: Arm fell over 5%, Tesla over 4%, Nvidia and Alphabet over 3%, Apple and Meta over 1%, with Microsoft and Amazon slightly lower and Netflix slightly higher. Popular Chinese ADRs mostly declined: Pinduoduo, NIO, and Li Auto fell over 3%, XPeng over 2%, NetEase, Weibo, and Alibaba over 1%, while Baidu edged up.

    Sunac says USD 9.6 billion of debt will be fully discharged

    Sunac China's risk resolution has reached another key milestone. On December 17, Sunac China announced that its offshore debt restructuring is expected to take effect around December 23, 2025, at which point roughly USD 9.6 billion of existing debt will be fully released and discharged. Under the plan's terms, the company will issue mandatory convertible bonds to scheme creditors. The announcement also disclosed a restructuring plan for one additional debt: to restructure the only remaining debt outside the scope of its comprehensive offshore restructuring, Sunac China, Sanya Qingtian, and 集友 entered into the 集友 restructuring deed. (Yicai)

    Luckin Coffee reportedly weighing a bid for Nestle's Blue Bottle Coffee chain

    According to people familiar with the matter, Luckin Coffee is considering a bid for Nestle's Blue Bottle Coffee, a move aimed at lifting its brand image and expanding into the premium coffee market. The sources, who asked not to be named because the discussions are private, said Luckin and its shareholder Centurium Capital are also evaluating other acquisition targets, including the operator of %Arabica coffee shops in China, whose backers include private-equity firm PAG. (Sina Finance)

    IPO Watch

    Zhipu

    Beijing Zhipu Huazhang Technology Co., Ltd. (Zhipu) reportedly passed its listing hearing at the Hong Kong Stock Exchange on December 17. (Sina Finance)

    GigaDevice

    36Kr reports that, per HKEX filings, GigaDevice Semiconductor Inc. updated its post-hearing information pack on December 17, indicating the company has passed the hearing for its Hong Kong IPO.

    MeiG Smart

    36Kr reports that MeiG Smart has announced it is working on an offering of overseas-listed shares (H shares) for a Main Board listing on the Hong Kong Stock Exchange, and recently received the CSRC's filing notice for the overseas offering and listing.

    AI Frontier

    Luo Fuli makes her Xiaomi debut, officially releasing and open-sourcing the new MoE model MiMo-V2-Flash

    On the morning of December 17, at Xiaomi's 2025 "Human x Car x Home" full-ecosystem partner conference, Luo Fuli, head of the Xiaomi MiMo large-model team, made her first public appearance since joining the company and officially released and open-sourced the latest MoE model, MiMo-V2-Flash.

    Luo said the model shows strong base-model potential, ranking in the global top 2 among open-source models on world-class leaderboards, and combines low cost with high speed: it costs less than DeepSeek-V3.2 while running inference three times faster.

    Luo Fuli, known as a "post-95 AI prodigy", previously worked at Alibaba's DAMO Academy and then at High-Flyer Quant and DeepSeek, where she was a key developer of DeepSeek-V2. She has led Xiaomi's MiMo large-model team since November 2025. (Cailian Press)

    Gemini 3 Flash officially released

    On December 18, Gemini 3 Flash was officially released, completing the Gemini 3 family: Flash, Pro, and Deep Think. The Flash model is now fully available in the Gemini app, AI Studio, Google Antigravity, and the Gemini CLI, and is the default model when users open Gemini. (Xinzhiyuan)

    ByteDance officially releases the Seedance 1.5 pro audio-video creation model

    36Kr reports that ByteDance's Seed team officially released its next-generation audio-video creation model, Seedance 1.5 pro, on December 17. The model supports joint audio-video generation and can perform a range of tasks, including text-to-audio-video synthesis and image-guided audio-video generation.

    Compiled by | Jingjing

  • AppGallery Awards 2025 announced: Gen Z's lifestyles are all in here

    "Technics are not merely the sum of tools; they are a new environment that reshapes how we live, perceive, and relate to one another." As early as the 1930s, the American scholar of technology and society Lewis Mumford foresaw the future in Technics and Civilization.

    Throughout history, every landmark tool has set off something like a cultural explosion. Over the past decade, the key theme has unquestionably been artificial intelligence: AI technology and its applications have iterated almost monthly, opening up endless possibilities. The change is plain in the dazzling app marketplaces of recent years: amid all the cutting-edge concepts and trendy ideas, ordinary users struggle to tell who is chasing fads, who offers real utility, and which apps can genuinely change daily life.

    Don't worry: the unveiling of the AppGallery Awards 2025 gives HarmonyOS users a trustworthy answer. Just follow the winners list and download with confidence. As the flagship annual award of Huawei's AppGallery, since 2024 the AppGallery Awards have each year recognized benchmark apps and games in the HarmonyOS ecosystem across dimensions such as technical innovation, user experience, and premium content, staging an "App expo" worth savoring in depth, helping users pinpoint, among countless options, the few apps most likely to move the needle in their lives, and sparking new ideas and usage scenarios.

    As its name suggests, AppGallery positions itself apart from the traditional app Store: it aims to present users with a constantly refreshed, carefully curated gallery of digital life. The AppGallery Awards offer both an insight into frontier trends and an endorsement of the influence of HarmonyOS's benchmark apps and games. By recommending apps that stand up to user scrutiny and packaging innovative lifestyle solutions, AppGallery acts as a bridge between users and developers.

    On December 8, 2025, the AppGallery Awards 2025 arrived on schedule, presenting the "Apps and Games of the Year" and the "Most Influential Apps and Games of the Year". The "Apps and Games of the Year" list spotlights six apps with outstanding all-round performance that deeply integrate HarmonyOS 6's new capabilities, deliver a high-quality experience across scenarios, and unlock users' creativity.

    The "Most Influential Apps and Games of the Year", as last year, reads more like a reflection on frontier trends and cultural currents, helping users, young people especially, keep up with the leading edge of digital life. Based on its reading of 2025's trends, AppGallery has assembled an all-new slate of recommended apps and games.

    2025 was the year AI truly began to change everyday life, evolving from tool to companion. AI agents are now mature enough to serve as young people's digital avatars, building AI agents became a topic of the year, and this generation has gone a step further, training AI into a "life agent" that knows their tastes.

    As Yuval Noah Harari argues in 21 Lessons for the 21st Century: "AI will not replace humans entirely, but it will profoundly change the nature of work, freeing people from standardized, repetitive tasks and shifting them toward fields that demand creativity, emotional intelligence, or physical dexterity."

    Think of graduation season: fresh graduates born after 2000 now use AI for mock interviews, or train it into a work secretary. A Guangming Daily survey found that over 60% of young people use AI every day, and over 80% treat it as a general-purpose assistant. Generating slide decks, building thinking frameworks, compiling meeting minutes, even text-to-image and text-to-video generation are all second nature to this generation. More and more young people are finding that handing repetitive decisions to AI is what frees their truly precious attention for creating, thinking, and feeling.

    The HarmonyOS ecosystem makes these apps mesh seamlessly with daily life: Quark (夸克) combines HarmonyOS features such as smart multi-window and picture-in-picture and is built on Alibaba's Qwen model, sparing users constant screen-switching and browser searches and serving as an efficient study and work assistant; Doubao (豆包) hooks into system sharing, the file transfer station, and other innovations, handling everything from chat to photo editing; Wenxin (文心) lets users freely create agents, and the unified drag-and-drop capability breaks down boundaries so text and images flow seamlessly across apps...

    Who hasn't used these AI tools yet? Let's just say this generation has tapped less than 1% of what AI can do.

    Reality has become so surreal that any satirical novel reads like documentary beside it. When absurdity becomes everyday, art's only way out is to become more absurd than reality. In 2025, the "abstract" meme culture that "no one will understand in 100 years" vaulted into the internet mainstream, climbing another level with the boost of AI.

    On social media, a wave of "AI-generated celebrity group photos" breaks out one moment, and making singing videos of your cat is the trend the next. People animate Du Fu's thatched hut or "Lin Daiyu uprooting the weeping willow"; someone turns dinosaurs into "three dishes and a soup". In film and TV fandoms, young people put Zhen Huan on a motorcycle and An Lingrong on a skateboard; in music, "Skill Gomoku" (技能五子棋) went viral with its absurdist lyrics and dance; others turned nonsense memes like "Italian pasta should be mixed with No. 42 concrete" and "the Huanglong River area is all Bluetooth" into videos. The rule-breaking creations struck a chord with peers and racked up likes.

    Young fans of abstract memes are no longer content merely to recite absurdist one-liners; they use AI tools to bring their wildest ideas to life quickly. It is a way of asserting individuality and a distinctive way of finding one's tribe. Everyone competes to stand out in the "abstract contest", from the flash of inspiration through editing and scoring to community distribution, while ever-evolving helper apps give "abstract culture" concrete platforms to land on.

    In the HarmonyOS ecosystem, the abstract spreads faster and farther: hot posts on Xiaohongshu and hot tracks on KuGou Music can be beamed to friends and family with a tap-to-share; KuGou debuted Viper master-tape audio quality exclusive to Kirin chips; and the HDR Vivid HarmonyOS feature boosts picture quality in Kuaiying (快影), making this "abstract" culture that much more "concrete"...

    To read the social trends of the young, decoding abstract symbols is required homework.

    Byung-Chul Han warned in The Burnout Society that anxiety and exhaustion stem not from external oppression but from endless self-driving. In this society of "excess positivity", floods of information and social contact crowd out silence and stillness, costing us the capacity for "negativity": we are tired yet cannot rest; we are connected yet feel lonely.

    In a highly atomized social web, loneliness is not an occasional mood but a constant hovering over the city. Learning to live with anxiety has become a generation's required course in mental health, and this generation is seeking the painkilling of "electronic ibuprofen" in the virtual world.

    After work swallows eight hours or more a day, a feed of cats and dogs suddenly poking their heads out somehow makes the whole world forgivable again. Cyber-cute pets are only the entry-level electronic ibuprofen; young people have learned to tend to themselves with apps: a free ten-minute guided meditation over the lunch break buys another day of stamina, while podcast and mental-wellness apps pull them over for a timely stop, letting them pause for idle chat, bite-sized knowledge, or a glimpse of distant places and poetry.

    AI has not passed over the emotional-healing lane either. A report on Gen Z AI usage released this year by Soul showed nearly 40% of young people use AI products daily for emotional companionship. At bottom, compared with the previous generation, these young people are more at home alone yet still crave company, and at such moments a virtual companion is a real comfort.

    One-tap sign-in with a Huawei ID summons that most familiar counseling helper anywhere, ready to hear one's troubles: progressing from beginner to advanced meditation with Now冥想, or turning to the AI counseling assistant in 简单心理... HarmonyOS's anti-peeping mode makes such confiding feel safer, too.

    Demand for emotional healing never runs dry; it is a problem the young will spend a lifetime working on, and electronic-ibuprofen products will keep supplying the painkillers.

    Hard to imagine, but the young people who recently mocked themselves as "fragile" are now heading for the hills in droves.

    A powerful outdoor boom has taken shape. Xiaohongshu's H1 2025 Outdoor Observation Report counted over 1.06 million outdoor posts tagged "weekend" on the platform in the first half of 2025 alone, against a backdrop of more than 400 million outdoor-sports participants in China: a "return to the mountains" led by urban youth is under way. On social media, Mountain Walk posts are everywhere, with young people eagerly sharing hiking routes, gear lists, and the stunning views caught along the trail.

    The trend has advanced from hiking into more demanding territory: trail running became 2025's breakout phenomenon. Domestic race numbers have exploded; a 2025 trail-running calendar shows more than 50 races across the country in November alone. Popular events like the Chongli 168 ultra drew over 20,000 registrations, requiring an entry lottery with a hit rate of about 64%, and spots are hard to come by.

    In this collective exodus to the hills, young people briefly shed work worries and life pressures, focusing only on the trail underfoot and the view ahead, while technology has made the outdoors far less forbidding. Online team-up posts and niche-route guides amount to step-by-step Mountain Walk tutorials; navigation apps keep a clear fix even where the mountains have no signal, walking users safely out of the wild; weather apps not only deliver accurate forecasts but turn complex data into plain advice, such as layering up or warnings of lurking hazards, helping young people plan their "escapes" and enjoy unalloyed time in nature.

    HarmonyOS folds navigation and weather apps into a more complete ecosystem: the lock-screen Live Window narrates Amap's (高德地图) high-precision lane-level navigation in real time; a Moji Weather (墨迹天气) service card on the home screen pairs with the app's AI radar for kilometer-level, minute-level precipitation forecasts; and 两步路户外助手, with Petal Maps integrated, helps users pin down the Mountain Walk routes that suit them best...

    With everything in place, the young set their faith on the mountains, because the mountains are there.

    On December 2, 2025, Escape from Tarkov, the progenitor of extraction shooters (搜打撤, "search, fight, extract"), had a historic moment: the popular streamer Tigz, watched by players worldwide, became the first to complete the ultimate escape through the new map "Terminal". 2025 was likewise the year countless "Tarkov-likes" appeared, as the extraction genre leapt from a hardcore niche to a mainstream favorite.

    In brief, the extraction loop has three beats: scavenge loot, fight, and get out. It builds a purified world of risk and reward: players bring their own gear into a raid, scavenge and fight within a time limit, and must reach an extraction point. Succeed, and the loot converts to wealth; fail, and everything carried is lost. The high-risk, high-reward setup forces finely judged strategic choices and psychological gambits every round.

    This battle-royale offshoot truly erupted this year. By one incomplete tally from the media outlet 游戏家联盟VIP, more than 22 new games since the start of 2025 have either folded in an extraction mode or been built on extraction gameplay outright. The market leaders tell the story more plainly: the sustained success of Delta Force (三角洲行动) and Arena Breakout (暗区突围) has cemented the genre's commercial base. The heat at the top has even spawned an "everything can be an extraction shooter" wave, with IPs of other genres rolling out extraction sub-modes.

    Risk comes paired with reward, and in extraction games the rules are clear: the path from effort to payoff is plain to see. This "certainty within adventure" both builds a quick sense of self-worth and serves as efficient stress relief.

    HarmonyOS is steadily filling out its vast gaming map: more than 20,000 HarmonyOS games now span phones, PCs, tablets, and smart screens, over 1,000 game studios have joined the ecosystem, and more and more titles choose to debut on HarmonyOS. It is becoming a must-have for games building cross-platform ecosystems and pushing technical upgrades and content innovation.

    In routine-bound days, who can resist slipping into a virtual world for a heart-pounding adventure where every round is one of a kind?

    2025 was another year of runaway technology, but the Gen Z youth living through it have gradually found their footing in the dance with it. They play AI like an instrument, wring new tricks from new apps, put tools in service of their ideas, and make technology a companion that garnishes life with delight.

    Technology today has become a generation's extension and outlet for exploring the self, settling body and mind, and connecting with kindred spirits. Young people are not passive recipients; they are defining for themselves what counts as trend and as life.

    In this story whose protagonists are its users, the AppGallery Awards play the keen-eyed trend forecaster. As an observer of these lifestyle currents, AppGallery keeps tracking shifts in user needs and, through precise recommendations of quality apps, supplies workable "digital solutions" for different scenes of life.

    The AppGallery Awards are far more than a commendation of benchmark apps; they are more like a gallery of technology drawn by the choices of hundreds of millions of young people. Guided by real-life scenarios and user needs, they also add core momentum to the expansion and refinement of the HarmonyOS ecosystem. Born of deep insight into the trends of internet life, they give back, in the end, to every inhabitant of the digital world, making life lighter and freer.

    AppGallery and the HarmonyOS ecosystem keep enabling this two-way embrace of technology and life. The AppGallery Awards 2025 "Apps and Games of the Year" and "Most Influential Apps and Games of the Year" lists will stand as a "season" milestone, returning technology to the human scale and lighting the way forward for developers.

    This article is from the WeChat official account 后浪研究所 (Houlang Research Institute), author: 后浪研究所; published by 36Kr with authorization.

  • PKU-affiliated controlled nuclear fusion company closes angel round of over RMB 50 million, targeting low-cost, high-performance fusion | 硬氪 exclusive

    硬氪 has learned that Zero Point Fusion (零点聚能), a company dedicated to turning fundamental scientific discoveries into a future energy industry of immense strategic value, recently closed an angel round of over RMB 50 million. Here is a summary of the round and the company's highlights:

    Financing and investors

    Round: angel

    Size: over RMB 50 million

    Use of funds: the round will chiefly fund development of the pivotal No. 1 experimental device, key validation experiments for the new magnetic-null-configuration fusion route, and R&D on commercially promising low-cost, high-parameter fusion energy technology.

    Company basics

    Founded: September 2024

    Registered in: Beijing

    Technology highlights: one of the core challenges of fusion energy is confining a high-temperature plasma stably for long periods. The magnetic-null-configuration fusion route that Zero Point Fusion pursues originates in a natural phenomenon observed in space plasmas. In 2006, Xiao Chijie's team was the first to confirm the existence of the magnetic-null configuration from satellite observation data, with results published repeatedly in journals such as Nature Physics. Since 2013 the team has built a No. 0 experimental device at Peking University and systematically studied the configuration's physical characteristics and confinement performance, finding that the approach promises high-performance fusion at comparatively low cost.

    Road map

    Fusion energy, being safe, clean, and fuel-abundant, is seen as one of the ultimate energy sources of the future. Zero Point Fusion aims to convert basic scientific discoveries into a future energy industry of great strategic value. Once key parameters are obtained on the No. 1 device, the company will proceed to build the No. 2 and No. 3 devices, completing the critical leap from parameter validation to commercial validation. A breakthrough would push generation costs into the "one-fen era" (a fraction of a US cent per kWh), giving humanity near-limitless clean energy and true energy freedom, with potential applications in space propulsion and interstellar transport.

    Team

    Zero Point Fusion was co-incubated by the Yanyuan incubator of Peking University's Office of Science and Technology Development and 溪山天使汇. Chief scientist Xiao Chijie is a tenured associate professor and doctoral supervisor at Peking University's School of Physics and a former deputy director of its Institute of Heavy Ion Physics. In March 2025 the company and Peking University jointly established the "PKU-Zero Point Fusion Energy Joint Laboratory", directed by founder Xiao Chijie, whose academic committee brings together a member of the Chinese Academy of Sciences and several leading fusion scientists to advance frontier exploration and key-technology R&D for magnetic-null-configuration fusion.

    Investor's view

    Xu Hui, initiator of 溪山天使汇 and director of the PKU HSBC Innovation Engine Lab, said: Professor Xiao Chijie's team has struck out on its own path beyond the mainstream fusion routes, drawing inspiration from fusion configurations observed in space. The design is structurally simple; at comparable temperature and density, confinement time is an order of magnitude longer and device cost an order of magnitude lower. In the future it could deliver electricity at one fen per kWh, making fusion power and deep-space voyaging a reality.

  • Smashing Animations Part 7: Recreating Toon Text With CSS And SVG

    After finishing a project that required me to learn everything I could about CSS and SVG animations, I started writing this series about Smashing Animations and “How Classic Cartoons Inspire Modern CSS.” To round off this year, I want to show you how to use modern CSS to create that element that makes Toon Titles so impactful: their typography.

    Title Artwork Design

    In the silent era of the 1920s and early ’30s, the typography of a film’s title card created a mood, set the scene, and reminded an audience of the type of film they’d paid to see.

    Cartoon title cards were also branding, mood, and scene-setting, all rolled into one. In the early years, when major studio budgets were bigger, these title cards were often illustrative and painterly.

    But when television boomed during the 1950s, budgets dropped, and cards designed by artists like Lawrence “Art” Goble adopted a new visual language, becoming more graphic, stylised, and less intricate.

    Note: Lawrence “Art” Goble is one of the often overlooked heroes of mid-century American animation. He primarily worked for Hanna-Barbera during its most influential years of the 1950s and 1960s.

    Goble wasn’t a character animator. His role was to create atmosphere, so he designed environments for The Flintstones, Huckleberry Hound, Quick Draw McGraw, and Yogi Bear, as well as the opening title cards that set the tone. His title cards, featuring paintings with a logo overlaid, helped define the iconic look of Hanna-Barbera.

    Goble’s artwork for characters such as Quick Draw McGraw and Yogi Bear was effective on smaller TV screens. Rather than reproducing a still from the cartoon, he focused on presenting a single, strong idea — often in silhouette — that captured its essence. In “The Buzzin’ Bear,” Yogi buzzes by in a helicopter. He bounces away, pic-a-nic basket in hand, in “Bear on a Picnic,” and for his “Prize Fight Fright,” Yogi boxes the title text.

    With little or no motion to rely on, Goble’s single frames had to create a mood, set the scene, and describe a story. They did this using flat colours, graphic shapes, and typography that was frequently integrated into the artwork.

    As designers who work on the web, toon titles can teach us plenty about how to convey a brand’s personality, make a first impression, and set expectations for someone’s experience using a product or website. We can learn from the artists’ techniques to create effective banners, landing-page headers, and even good ol’ fashioned splash screens.

    Toon Title Typography

    Cartoon title cards show how merging type with imagery delivers the punch a header or hero needs. With a handful of text-shadow, text-stroke, and transform tricks, modern CSS lets you tap into that same energy.

    The Toon Text Title Generator

    Partway through writing this article, I realised it would be useful to have a tool for generating text styled like the cartoon titles I love so much. So I made one.

    My Toon Text Title Generator lets you experiment with colours, strokes, and multiple text shadows. You can adjust paint order, apply letter spacing, preview your text in a selection of sample fonts, and then copy the generated CSS straight to your clipboard to use in a project.

    Toon Title CSS

    You can simply copy-paste the CSS that the Toon Text Title Generator provides you. But let’s look closer at what it does.

    Text shadow

    Look at the type in this title from Augie Doggie’s episode “Yuk-Yuk Duck,” with its pale yellow letters and dark, hard, offset shadow that lifts it off the background and creates the illusion of depth.

    You probably already know that text-shadow accepts four values: (1) horizontal and (2) vertical offsets, (3) blur, and (4) a colour which can be solid or semi-transparent. Those offset values can be positive or negative, so I can replicate “Yuk-Yuk Duck” using a hard shadow pulled down and to the right:

    color: #f7f76d;
    text-shadow: 5px 5px 0 #1e1904;
    

    On the other hand, this “Pint Giant” title has a different feel with its negative semi-soft shadow:

    color: #c2a872;
    text-shadow:
      -7px 5px 0 #0b100e,
      0 -5px 10px #546c6f;
    

    To add extra depth and create more interesting effects, I can layer multiple shadows. For “Let’s Duck Out,” I combine four shadows: the first a solid shadow with a negative horizontal offset to lift the text off the background, followed by progressively softer shadows to create a blur around it:

    color: #6F4D80;
    text-shadow:
      -5px 5px 0 #260e1e, /* Shadow 1 */
      0 0 15px #e9ce96,   /* Shadow 2 */
      0 0 30px #e9ce96,   /* Shadow 3 */
      0 0 30px #e9ce96;   /* Shadow 4 */
    

    These shadows show that using text-shadow isn’t just about creating lighting effects, as they can also be decorative and add personality.

    Text Stroke

    Many cartoon title cards feature letters with a bold outline that makes them stand out from the background. I can recreate this effect using text-stroke. For a long time, this property was only available behind a -webkit- prefix, but the prefixed version is now supported across all modern browsers.

    text-stroke is a shorthand for two properties. The first, text-stroke-width, draws a contour around individual letters, while the second, text-stroke-color, controls its colour. For “Whatever Goes Pup,” I added a 4px blue stroke to the yellow text:

    color: #eff0cd;
    -webkit-text-stroke: 4px #7890b5;
    text-stroke: 4px #7890b5;
    

    Strokes can be especially useful when they’re combined with shadows, so for “Growing, Growing, Gone,” I added a thin 3px stroke to a barely blurred 1px shadow to create this three-dimensional text effect:

    color: #fbb999;
    text-shadow: 3px 5px 1px #5160b1;
    -webkit-text-stroke: 3px #984336;
    text-stroke: 3px #984336;
    

    Paint Order

    Using text-stroke doesn’t always produce the expected result, especially with thinner letters and thicker strokes, because by default the browser draws a stroke over the fill. Sadly, CSS still does not permit me to adjust stroke placement as I often do in Sketch. However, the paint-order property has values that allow me to place the stroke behind, rather than in front of, the fill.

    paint-order: stroke paints the stroke first, then the fill, whereas paint-order: fill does the opposite:

    color: #fbb999;
    paint-order: fill;
    text-shadow: 3px 5px 1px #5160b1;
    text-stroke-color: #984336;
    text-stroke-width: 3px;
    

    An effective stroke keeps letters readable, adds weight, and — when combined with shadows and paint order — gives flat text real presence.

    Backgrounds Inside Text

    Many cartoon title cards go beyond flat colour by adding texture, gradients, or illustrated detail to the lettering. Sometimes that’s a texture, other times it might be a gradient with a subtle tonal shift. On the web, I can recreate this effect by using a background image or gradient behind the text, and then clipping it to the shape of the letters. This relies on two properties working together: background-clip: text and text-fill-color: transparent.

    First, I apply a background behind the text. This can be a bitmap or vector image or a CSS gradient. For this example from the Quick Draw McGraw episode “Baba Bait,” the title text includes a subtle top–bottom gradient from dark to light:

    background: linear-gradient(0deg, #667b6a, #1d271a);
    

    Next, I clip that background to the glyphs and make the text transparent so the background shows through:

    -webkit-background-clip: text;
    -webkit-text-fill-color: transparent;
    

    With just those two lines, the background is no longer painted behind the text; instead, it’s painted within it. This technique works especially well when combined with strokes and shadows. A clipped gradient provides the lettering with colour and texture, a stroke keeps its edges sharp, and a shadow elevates it from the background. Together, they recreate the layered look of hand-painted title cards using nothing more than a little CSS. As always, test clipped text carefully, as browser quirks can sometimes affect shadows and rendering.
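
    Putting those pieces together, here is how a combined rule might look. This is a minimal sketch rather than a recreation of any specific title card, and the colours and offsets are placeholder values; as noted above, test the shadow behaviour carefully on clipped text:

    h1 {
      /* Gradient fill, clipped to the glyphs */
      background: linear-gradient(0deg, #667b6a, #1d271a);
      -webkit-background-clip: text;
      -webkit-text-fill-color: transparent;
      /* A stroke painted behind the fill keeps the edges sharp */
      -webkit-text-stroke: 4px #1d271a;
      paint-order: stroke;
      /* A hard offset shadow lifts the lettering off the card */
      text-shadow: 5px 5px 0 rgb(0 0 0 / 0.4);
    }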

    Splitting Text Into Individual Characters

    Sometimes I don’t want to style a whole word or heading. I want to style individual letters — to nudge a character into place, give one glyph extra weight, or animate a few letters independently.

    In plain HTML and CSS, there’s only one reliable way to do that: wrap each character in its own span element. I could do that manually, but that would be fragile, hard to maintain, and would quickly fall apart when copy changes. Instead, when I need per-letter control, I use a text-splitting library like splt.js (although other solutions are available). This takes a text node and automatically wraps words or characters, giving me extra hooks to animate and style without messing up my markup.

    It’s an approach that keeps my HTML readable and semantic, while giving me the fine-grained control I need to recreate the uneven, characterful typography you see in classic cartoon title cards. However, this approach comes with accessibility caveats, as most screen readers read text nodes in order. So this:

    <h2>Hum Sweet Hum</h2>
    

    …reads as you’d expect:

    Hum Sweet Hum

    But this:

    <h2>
    <span>H</span>
    <span>u</span>
    <span>m</span>
    <!-- etc. -->
    </h2>
    

    …can be interpreted differently depending on the browser and screen reader. Some will concatenate the letters and read the words correctly. Others may pause between letters, which in a worst-case scenario might sound like:

    “H…” “U…” “M…”

    Sadly, some splitting solutions don’t deliver an always accessible result, so I’ve written my own text splitter, splinter.js, which is currently in beta.

    Transforming Individual Letters

    To activate my Toon Text Splitter, I add a data- attribute to the element I want to split:

    <h2 data-split="toon">Hum Sweet Hum</h2>
    

    First, my script separates each word into individual letters and wraps them in a span element with class and ARIA attributes applied:

    <span class="toon-char" aria-hidden="true">H</span>
    <span class="toon-char" aria-hidden="true">u</span>
    <span class="toon-char" aria-hidden="true">m</span>
    

    The script then takes the initial content of the split element and adds it as an aria-label attribute to help maintain accessibility:

    <h2 data-split="toon" aria-label="Hum Sweet Hum">
      <span class="toon-char" aria-hidden="true">H</span>
      <span class="toon-char" aria-hidden="true">u</span>
      <span class="toon-char" aria-hidden="true">m</span>
    </h2>
    

    With those class attributes applied, I can then style individual characters as I choose.

    For example, for “Hum Sweet Hum,” I want to replicate how its letters shift away from the baseline. After using my Toon Text Splitter, I applied four different translate values using several :nth-child selectors to create a semi-random look:

    /* 4th, 8th, 12th... */
    .toon-char:nth-child(4n) { translate: 0 -8px; }
    /* 1st, 5th, 9th... */
    .toon-char:nth-child(4n+1) { translate: 0 -4px; }
    /* 2nd, 6th, 10th... */
    .toon-char:nth-child(4n+2) { translate: 0 4px; }
    /* 3rd, 7th, 11th... */
    .toon-char:nth-child(4n+3) { translate: 0 8px; }
    

    But translate is only one property I can use to transform my toon text.

    I could also rotate those individual characters for an even more chaotic look:

    /* 4th, 8th, 12th... */
    .toon-line .toon-char:nth-child(4n) { rotate: -4deg; }
    /* 1st, 5th, 9th... */
    .toon-char:nth-child(4n+1) { rotate: -8deg; }
    /* 2nd, 6th, 10th... */
    .toon-char:nth-child(4n+2) { rotate: 4deg; }
    /* 3rd, 7th, 11th... */
    .toon-char:nth-child(4n+3) { rotate: 8deg; }
    

    And, of course, I could add animations to jiggle those characters and bring my toon text style titles to life. First, I created a keyframe animation that rotates the characters:

    @keyframes jiggle {
      0%, 100% { transform: rotate(var(--base-rotate, 0deg)); }
      25% { transform: rotate(calc(var(--base-rotate, 0deg) + 3deg)); }
      50% { transform: rotate(calc(var(--base-rotate, 0deg) - 2deg)); }
      75% { transform: rotate(calc(var(--base-rotate, 0deg) + 1deg)); }
    }
    

    Before applying it to the span elements created by my Toon Text Splitter:

    .toon-char {
      animation: jiggle 3s infinite ease-in-out;
      transform-origin: center bottom;
    }
    

    And finally, setting the rotation amount and a delay before each character begins to jiggle:

    .toon-char:nth-child(4n) { --base-rotate: -2deg; }
    .toon-char:nth-child(4n+1) { --base-rotate: -4deg; }
    .toon-char:nth-child(4n+2) { --base-rotate: 2deg; }
    .toon-char:nth-child(4n+3) { --base-rotate: 4deg; }
    
    .toon-char:nth-child(4n) { animation-delay: 0.1s; }
    .toon-char:nth-child(4n+1) { animation-delay: 0.3s; }
    .toon-char:nth-child(4n+2) { animation-delay: 0.5s; }
    .toon-char:nth-child(4n+3) { animation-delay: 0.7s; }
    

    One Frame To Make An Impression

    Cartoon title artists had one frame to make an impression, and their typography was as important as the artwork they painted. The same is true on the web.

    A well-designed header or hero area needs clarity, character, and confidence — not simply a faded full-width background image.

    With a few carefully chosen CSS properties — shadows, strokes, clipped backgrounds, and some restrained animation — we can recreate that same impact. I love toon text not because I’m nostalgic, but because its design is intentional. Make deliberate choices, and let a little toon text typography add punch to your designs.

  • Accessible UX Research, eBook Now Available For Download

    This article is sponsored by Accessible UX Research

    Smashing Library expands again! We’re so happy to announce our newest book, Accessible UX Research, is now available for download in eBook formats. Michele A. Williams takes us on a deep dive into the real world of UX testing, and provides a road map for including users with different abilities and needs in every phase of testing.

    But the truth is, you don’t need to be conducting UX testing or even be a UX professional to get a lot out of this book. Michele gives in-depth descriptions of the assistive technology we should all be familiar with, in addition to disability etiquette, common pitfalls when creating accessible prototypes, and so much more. You’ll refer to this book again and again in your daily work.



    This is also your last chance to get your printed copy at our discounted presale price. We expect printed copies to start shipping in early 2026. We know you’ll love this book, but don’t just take our word for it — we asked a few industry experts to check out Accessible UX Research too:

    “Accessible UX Research stands as a vital and necessary resource. In addressing disability at the User Experience Research layer, it helps to set an equal and equitable tone for products and features that resonates through the rest of the creation process. The book provides a solid framework for all aspects of conducting research efforts, including not only process considerations, but also importantly the mindset required to approach the work.

    This is the book I wish I had when I was first getting started with my accessibility journey. It is a gift, and I feel so fortunate that Michele has chosen to share it with us all.”

    Eric Bailey, Accessibility Advocate

    “User research in accessibility is non-negotiable for actually meeting users’ needs, and this book is a critical piece in the puzzle of actually doing and integrating that research into accessibility work day to day.”

    Devon Pershing, Author of The Accessibility Operations Guidebook

    “Our decisions as developers and designers are often based on recommendations, assumptions, and biases. Usually, this doesn’t work, because checking off lists or working solely from our own perspective can never truly represent the depth of human experience. Michele’s book provides you with the strategies you need to conduct UX research with diverse groups of people, challenge your assumptions, and create truly great products.”

    Manuel Matuzović, Author of the Web Accessibility Cookbook

    “This book is a vital resource on inclusive research. Michele Williams expertly breaks down key concepts, guiding readers through disability models, language, and etiquette. A strong focus on real-world application equips readers to conduct impactful, inclusive research sessions. By emphasizing diverse perspectives and proactive inclusion, the book makes a compelling case for accessibility as a core principle rather than an afterthought. It is a must-read for researchers, product-makers, and advocates!”

    Anna E. Cook, Accessibility and Inclusive Design Specialist

    About The Book

    The book isn’t a checklist for you to complete as a part of your accessibility work. It’s a practical guide to inclusive UX research, from start to finish. If you’ve ever felt unsure how to include disabled participants, or worried about “getting it wrong,” this book is for you. You’ll get clear, practical strategies to make your research more inclusive, effective, and reliable.

    Inside, you’ll learn how to:

    • Plan research that includes disabled participants from the start,
    • Recruit participants with disabilities,
    • Facilitate sessions that work for a range of access needs,
    • Ask better questions and avoid unintentionally biased research methods,
    • Build trust and confidence in your team around accessibility and inclusion.

    The book also challenges common assumptions about disability and urges readers to rethink what inclusion really means in UX research and beyond. Let’s move beyond compliance and start doing research that reflects the full diversity of your users. Whether you’re in industry or academia, this book gives you the tools — and the mindset — to make it happen.

    High-quality hardcover, 320 pages. Written by Dr. Michele A. Williams. Cover art by Espen Brunborg. Print edition shipping early 2026. eBook now available for download. Download a free sample (PDF, 2.3MB) and reserve your print copy at the presale price.



    “Accessible UX Research” shares successful strategies that’ll help you recruit the participants you need for the study you’re designing.

    Contents

    1. Disability mindset: For inclusive research to succeed, we must first confront our mindset about disability, typically influenced by ableism.
    2. Diversity of disability: Accessibility is not solely about blind screen reader users; disability categories help us unpack and process the diversity of disabled users.
    3. Disability in the stages of UX research: Disabled participants can and should be part of every research phase — formative, prototype, and summative.
    4. Recruiting disabled participants: Recruiting disabled participants is not always easy, but that simply means we need to learn strategies on where to look.
    5. Designing your research: While our goal is to influence accessible products, our research execution must also be accessible.
    6. Facilitating an accessible study: Preparation and communication with your participants can ensure your study logistics run smoothly.
    7. Analyzing and reporting with accuracy and impact: How you communicate your findings is just as important as gathering them in the first place — so prepare to be a storyteller, educator, and advocate.
    8. Disability in the UX research field: Inclusion isn’t just for research participants, it’s important for our colleagues as well, as explained by blind UX Researcher Dr. Cynthia Bennett.



    The book will challenge your disability mindset and what it means to be truly inclusive in your work.

    Who This Book Is For

    Whether a UX professional who conducts research in industry or academia, or more broadly part of an engineering, product, or design function, you’ll want to read this book if…

    1. You have been tasked to improve accessibility of your product, but need to know where to start to facilitate this successfully.
    2. You want to establish a culture for accessibility in your company, but not sure how to make it work.
    3. You want to move from WCAG/EAA compliance to established accessibility practices and inclusion in research practices and beyond.
    4. You want to improve your overall accessibility knowledge and be viewed as an Accessibility Specialist for your organization.



    About the Author

    Dr. Michele A. Williams is owner of M.A.W. Consulting, LLC – Making Accessibility Work. Her 20+ years of experience include influencing top tech companies as a Senior User Experience (UX) Researcher and Accessibility Specialist and obtaining a PhD in Human-Centered Computing focused on accessibility. An international speaker, published academic author, and patented inventor, she is passionate about educating and advising on technology that does not exclude disabled users.

    Community Matters ❤️

    Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! 😉

    More Smashing Books & Goodies

    Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

    In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Trine, Heather, and Steven are three of these people. Have you checked out their books already?

    The Ethical Design Handbook

    A practical guide on ethical design for digital products.


    Understanding Privacy

    Everything you need to know to put your users first and make a better web.


    Touch Design for Mobile Interfaces

    Learn how touchscreen devices really work — and how people really use them.


  • State, Logic, And Native Power: CSS Wrapped 2025

    If I were to divide CSS’s evolution into eras, we have moved far beyond the days when we simply asked for border-radius to feel like we were living in the future. We are currently living in a moment where the platform is handing us tools that don’t just tweak the visual layer, but fundamentally redefine how we architect interfaces. I thought the number of features announced in 2024 couldn’t be topped. I’ve never been so happily wrong.

    The Chrome team’s “CSS Wrapped 2025” is not just a list of features; it is a manifesto for a dynamic, native web. As someone who has spent a couple of years documenting these evolutions — from defining “CSS5” eras to the intricacies of modern layout utilities — I find myself looking at this year’s wrap-up with a huge sense of excitement. We are seeing a shift towards “Optimized Ergonomics” and “Next-gen interactions” that allow us to stop fighting the code and start sculpting interfaces in their natural state.

    In this article, you can find a comprehensive look at the standout features from Chrome’s report, viewed through the lens of my recent experiments and hopes for the future of the platform.

    The Component Revolution: Finally, A Native Customizable Select

    For years, we have relied on heavy JavaScript libraries to style dropdowns, a “decades-old problem” that the platform has finally solved. As I detailed in my deep dive into the history of the customizable select (and related articles), this has been a long road involving Open UI, bikeshedding names like <selectmenu> and <selectlist>, and finally landing on a solution that re-uses the existing <select> element.

    The introduction of appearance: base-select is a strong foundation. It allows us to fully customize the <select> element — including the button and the dropdown list (via ::picker(select)) — using standard CSS. Crucially, this is built with progressive enhancement in mind. By wrapping our styles in a feature query, we ensure a seamless experience across all browsers.

    We can opt in to this new behavior without breaking older browsers:

    select {
      /* Opt-in for the new customizable select */
      @supports (appearance: base-select) {
        &, &::picker(select) {
          appearance: base-select;
        }
      }
    }
    

    The fantastic new ability to put rich content inside options, such as images or flags, is a lot of fun. We can create all sorts of selects nowadays:

    • Demo: I created a Poké-adventure demo showing how the new <selectedcontent> element can clone rich content (like a Pokéball icon) from an option directly into the button.

    See the Pen A customizable select with images inside of the options and the selectedcontent [forked] by utilitybend.

    See the Pen A customizable select with only pseudo-elements [forked] by utilitybend.

    See the Pen An actual Select Menu with optgroups [forked] by utilitybend.
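
    For reference, the markup behind a rich select like the Poké-adventure demo looks roughly like this. It is a sketch, and the image paths are placeholders:

    <select>
      <button>
        <!-- Mirrors the rich content of the chosen option -->
        <selectedcontent></selectedcontent>
      </button>
      <option value="bulbasaur">
        <img src="bulbasaur.svg" alt="" /> Bulbasaur
      </option>
      <option value="charmander">
        <img src="charmander.svg" alt="" /> Charmander
      </option>
    </select>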

    This feature alone signals a massive shift in how we will build forms, reducing dependencies and technical debt.

    Scroll Markers And The Death Of The JavaScript Carousel

    Creating carousels has historically been a friction point between developers and clients. Clients love them, developers dread the JavaScript required to make them accessible and performant. The arrival of ::scroll-marker and ::scroll-button() pseudo-elements changes this dynamic entirely.

    These features allow us to create navigation dots and scroll buttons purely with CSS, linked natively to the scroll container. As I wrote on my blog, this was Love at first slide. The ability to create a fully functional, accessible slider without a single line of JavaScript is not just convenient; it is a triumph for performance. There are valid accessibility concerns around this feature, but there may well be a way for us developers to make it work, and these built-in UI primitives are already far easier to get right than custom DOM manipulation and hand-managed ARIA attributes. But I digress…

    We can now group markers automatically using scroll-marker-group and style the buttons using anchor positioning to place them exactly where we want.

    .carousel {
      overflow-x: auto;
      anchor-name: --carousel; /* Referenced by the scroll buttons below */
      scroll-marker-group: after; /* Creates the container for dots */
    
      /* Create the buttons */
      &::scroll-button(inline-end),
      &::scroll-button(inline-start) {
        content: " ";
        position: absolute;
        /* Use anchor positioning to center them */
        position-anchor: --carousel;
        top: anchor(center);
      }
    
      /* Create the markers on the children */
      div {
        &::scroll-marker {
          content: " ";
          width: 24px;
          border-radius: 50%;
          cursor: pointer;
        }
        /* Highlight the active marker */
        &::scroll-marker:target-current {
          background: white;
        }
      }
    }
    

    See the Pen Carousel Pure HTML and CSS [forked] by utilitybend.

    See the Pen Webshop slick slider remake in CSS [forked] by utilitybend.

    State Queries: Sticky Thing Stuck? Snappy Thing Snapped?

    For a long time, we have lacked the ability to know if a “sticky thing is stuck” or if a “snappy item is snapped” without relying on IntersectionObserver hacks. Chrome 133 introduced scroll-state queries, allowing us to query these states declaratively.

    By setting container-type: scroll-state, we can now style children based on whether they are stuck, snapped, or overflowing. This is a massive quality-of-life improvement that I have been eagerly waiting for since CSS Day 2023, and it has kept evolving: we can now even query the direction of the scroll. Lovely!

    For a simple example: we can finally apply a shadow to a header only when it is actually sticking to the top of the viewport:

    .header-container {
      container-type: scroll-state;
      position: sticky;
      top: 0;
    
      header {
        transition: box-shadow 0.5s ease-out;
        /* The query checks the state of the container */
        @container scroll-state(stuck: top) {
          box-shadow: rgba(0, 0, 0, 0.6) 0px 12px 28px 0px;
        }
      }
    }
    
    • Demo: A sticky header that only applies a shadow when it is actually stuck.

    See the Pen Sticky headers with scroll-state query, checking if the sticky element is stuck [forked] by utilitybend.

    • Demo: A Pokémon-themed list that uses scroll-state queries combined with anchor positioning to move a frame over the currently snapped character.

    See the Pen Scroll-state query to check which item is snapped with CSS, Pokemon version [forked] by utilitybend.
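
    For the snapped case, the pattern is the same as for stuck, except the query container is the element that snaps. A minimal sketch (the class names are mine):

    .scroller {
      overflow-x: auto;
      scroll-snap-type: x mandatory;
    }

    .scroller .item {
      scroll-snap-align: center;
      container-type: scroll-state;

      & > img {
        transition: scale 0.3s ease-out;
        /* Style the child while its parent is the snapped item */
        @container scroll-state(snapped: x) {
          scale: 1.1;
        }
      }
    }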

    Optimized Ergonomics: Logic In CSS

    The “Optimized Ergonomics” section of CSS Wrapped highlights features that make our workflows more intuitive. Three features stand out as transformative for how we write logic:

    1. if() Statements
      We are finally getting conditionals in CSS. The if() function acts like a ternary operator for stylesheets, allowing us to apply values based on media, support, or style queries inline. This reduces the need for verbose @media blocks for single property changes.
    2. @function functions
      We can finally move some logic to a different place, resulting in cleaner files, a real quality-of-life feature (both if() and @function are sketched right after this list).
    3. sibling-index() and sibling-count()
      These tree-counting functions solve the issue of staggering animations or styling items based on list size. As I explored in Styling siblings with CSS has never been easier, this eliminates the need to hard-code custom properties (like --index: 1) in our HTML.
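
    To give you a taste of the first two, here is a small sketch using the syntax Chrome currently ships; the --negate function and the breakpoint are my own inventions:

    /* A reusable custom function */
    @function --negate(--value) {
      result: calc(-1 * var(--value));
    }

    .card {
      --offset: 8px;
      margin-block-start: --negate(var(--offset));

      /* An inline conditional instead of a separate @media block */
      padding: if(media(width > 600px): 2rem; else: 1rem);
    }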

    Example: Calculating Layouts

    We can now write concise mathematical formulas. For example, staggering an animation for cards entering the screen becomes trivial:

    .card-container > * {
      animation: reveal 0.6s ease-out forwards;
      /* No more manual --index variables! */
      animation-delay: calc(sibling-index() * 0.1s);
    }
    

    I even experimented with using these functions along with trigonometry to place items in a perfect circle without any JavaScript.

    See the Pen Stagger cards using sibling-index() [forked] by utilitybend.

    • Demo: Placing items in a perfect circle using sibling-index, sibling-count, and the new CSS @function feature.

    See the Pen The circle using sibling-index, sibling-count and functions [forked] by utilitybend.
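
    The heart of that circle demo is a couple of lines of trigonometry, roughly like this (the 120px radius is arbitrary):

    li {
      /* Distribute items evenly around a circle */
      --angle: calc(360deg / sibling-count() * (sibling-index() - 1));
      position: absolute;
      translate: calc(cos(var(--angle)) * 120px) calc(sin(var(--angle)) * 120px);
    }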

    My CSS To-Do List: Features I Can’t Wait To Try

    While I have been busy sculpting selects and transitions, the “CSS Wrapped 2025” report is packed with other goodies that I haven’t had the chance to fire up in CodePen yet. These are high on my list for my next experiments:

    Anchored Container Queries

    I used CSS Anchor Positioning for the buttons in my carousel demo, but “CSS Wrapped” highlights an evolution of this: Anchored Container Queries. This solves a problem we’ve all had with tooltips: if the browser flips the tooltip from top to bottom because of space constraints, the “arrow” often stays pointing the wrong way. With anchored container queries (@container anchored(fallback: flip-block)), we can style the element based on which fallback position the browser actually chose.
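
    I haven’t built this yet, but based on that syntax, a tooltip might look something like the sketch below, assuming the arrow is a descendant of the anchored tooltip; the exact opt-in details may differ once I get my hands on it:

    .tooltip {
      position: absolute;
      position-anchor: --trigger;
      position-area: block-start;
      position-try-fallbacks: flip-block;
    }

    /* Point the arrow the other way when the fallback was applied */
    @container anchored(fallback: flip-block) {
      .tooltip .arrow {
        rotate: 180deg;
      }
    }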

    Nested View Transition Groups

    View Transitions have been a revolution, but they came with a specific trade-off: they flattened the element tree, which often broke 3D transforms or overflow: clip. I always had a feeling that it was missing something, and this might just be the answer. By using view-transition-group: nearest, we can finally nest transition groups within each other.

    This allows us to maintain clipping effects or 3D rotations during a transition — something that was previously impossible because the elements were hoisted up to the top level.

    .card img {
      view-transition-name: photo;
      view-transition-group: nearest; /* Keep it nested! */
    }
    

    Typography and Shapes

    Finally, the ergonomist in me is itching to try Text Box Trim, which promises to remove that annoying extra whitespace above and below text content (the leading) to finally achieve perfect vertical alignment. And for the creative side, corner-shape and the shape() function are opening up non-rectangular layouts, allowing for squircles and complex paths that respond to CSS variables. That being said, I can’t wait to have a design full of squircles!
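
    Both are pleasantly terse. Here is a quick sketch of what I plan to try, using the shipping syntax as I understand it:

    h1 {
      /* Trim the extra leading above the cap height and below the baseline */
      text-box: trim-both cap alphabetic;
    }

    .card {
      /* Rounded corners drawn as superellipse "squircle" curves */
      border-radius: 2rem;
      corner-shape: squircle;
    }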

    A Hopeful Future

    We are witnessing a world where CSS is becoming capable of handling logic, state, and complex interactions that previously belonged to JavaScript. Features like moveBefore (preserving DOM state for iframes/videos) and attr() (using types beyond strings for colors and grids) further cement this reality.
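
    The typed attr() is a small change with big implications; a minimal sketch, with a made-up data attribute:

    /* <div class="swatch" data-color="rebeccapurple"></div> */
    .swatch {
      /* Parse the attribute as a <color>, with a fallback after the comma */
      background-color: attr(data-color type(<color>), #ccc);
    }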

    While some of these features are currently experimental or Chrome-only, the momentum is undeniable. We must hope for continued support across all browsers through initiatives like Interop so these capabilities become the baseline. That being said, a healthy diversity of browser engines is just as important as getting all these awesome features “Chrome first”: new features need to be discussed, tinkered with, and tested before they land everywhere.

    It is a fantastic moment to get into CSS. We are no longer just styling documents; we are crafting dynamic, ergonomic, and robust applications with a native toolkit that is more powerful than ever.

    Let’s get going with this new era and spread the word.

    This is CSS Wrapped!

  • How UX Professionals Can Lead AI Strategy

    Your senior management is excited about AI. They’ve read the articles, attended the webinars, and seen the demos. They’re convinced that AI will transform your organization, boost productivity, and give you a competitive edge.

    Meanwhile, you’re sitting in your UX role wondering what this means for your team, your workflow, and your users. You might even be worried about your job security.

    The problem is that the conversation about how AI gets implemented is happening right now, and if you’re not part of it, someone else will decide how it affects your work. That someone probably doesn’t understand user experience, research practices, or the subtle ways poor implementation can damage the very outcomes management hopes to achieve.

    You have a choice. You can wait for directives to come down from above, or you can take control of the conversation and lead the AI strategy for your practice.

    Why UX Professionals Must Own the AI Conversation

    Management sees AI as efficiency gains, cost savings, competitive advantage, and innovation all wrapped up in one buzzword-friendly package. They’re not wrong to be excited. The technology is genuinely impressive and can deliver real value.

    But without UX input, AI implementations often fail users in predictable ways:

    • They automate tasks without understanding the judgment calls those tasks require.
    • They optimize for speed while destroying the quality that made your work valuable.

    Your expertise positions you perfectly to guide implementation. You understand users, workflows, quality standards, and the gap between what looks impressive in a demo and what actually works in practice.

    Use AI Momentum to Advance Your Priorities

    Management’s enthusiasm for AI creates an opportunity to advance priorities you’ve been fighting for unsuccessfully. When management is willing to invest in AI, you can connect those long-standing needs to the AI initiative. Position user research as essential for training AI systems on real user needs. Frame usability testing as the validation method that ensures AI-generated solutions actually work.

    How AI gets implemented will shape your team’s roles, your users’ experiences, and your organization’s capability to deliver quality digital products.

    Your Role Isn’t Disappearing (It’s Evolving)

    Yes, AI will automate some of the tasks you currently do. But someone needs to decide which tasks get automated, how they get automated, what guardrails to put in place, and how automated processes fit around real humans doing complex work.

    That someone should be you.

    Think about what you already do. When you conduct user research, AI might help you transcribe interviews or identify themes. But you’re the one who knows which participant hesitated before answering, which feedback contradicts what you observed in their behavior, and which insights matter most for your specific product and users.

    When you design interfaces, AI might generate layout variations or suggest components from your design system. But you’re the one who understands the constraints of your technical platform, the political realities of getting designs approved, and the edge cases that will break a clever solution.

    Your future value comes from the work you’re already doing:

    • Seeing the full picture.
      You understand how this feature connects to that workflow, how this user segment differs from that one, and why the technically correct solution won’t work in your organization’s reality.
    • Making judgment calls.
      You decide when to follow the design system and when to break it, when user feedback reflects a real problem versus a feature request from one vocal user, and when to push back on stakeholders versus find a compromise.
    • Connecting the dots.
      You translate between technical constraints and user needs, between business goals and design principles, between what stakeholders ask for and what will actually solve their problem.

    AI will keep getting better at individual tasks. But you’re the person who decides which solution actually works for your specific context. The people who will struggle are those doing simple, repeatable work without understanding why. Your value is in understanding context, making judgment calls, and connecting solutions to real problems.

    Step 1: Understand Management’s AI Motivations

    Before you can lead the conversation, you need to understand what’s driving it. Management is responding to real pressures: cost reduction, competitive pressure, productivity gains, and board expectations.

    Speak their language.
    When you talk to management about AI, frame everything in terms of ROI, risk mitigation, and competitive advantage. “This approach will protect our quality standards” is less compelling than “This approach reduces the risk of damaging our conversion rate while we test AI capabilities.”

    Separate hype from reality.
    Take time to research what AI capabilities actually exist versus what’s hype. Read case studies, try tools yourself, and talk to peers about what’s actually working.

    Identify real pain points.
    Look for problems AI might legitimately address in your organization. Maybe your team spends hours formatting research findings, or accessibility testing creates bottlenecks. These are the problems worth solving.

    Step 2: Audit Your Current State and Opportunities

    Map your team’s work. Where does time actually go? Look at the past quarter and categorize how your team spent their hours.

    Identify high-volume, repeatable tasks versus high-judgment work.
    Repeatable tasks are candidates for automation. High-judgment work is where you add irreplaceable value.

    Also, identify what you’ve wanted to do but couldn’t get approved.
    This is your opportunity list. Maybe you’ve wanted quarterly usability tests, but only get budget annually. Write these down separately. You’ll connect them to your AI strategy in the next step.

    Spot opportunities where AI could genuinely help:

    • Research synthesis:
      AI can help organize and categorize findings.
    • Analyzing user behavior data:
      AI can process analytics and session recordings to surface patterns you might miss.
    • Rapid prototyping:
      AI can quickly generate testable prototypes, speeding up your test cycles.

    Step 3: Define AI Principles for Your UX Practice

    Before you start forming your strategy, establish principles that will guide every decision.

    Set non-negotiables.
    User privacy, accessibility, and human oversight of significant decisions. Write these down and get agreement from leadership before you pilot anything.

    Define criteria for AI use.
    AI is good at pattern recognition, summarization, and generating variations. AI is poor at understanding context, making ethical judgments, and knowing when rules should be broken.

    Define success metrics beyond efficiency.
    Yes, you want to save time. But you also need to measure quality, user satisfaction, and team capability. Build a balanced scorecard that captures what actually matters.

    Create guardrails.
    Maybe every AI-generated interface needs human review before it ships. These guardrails prevent the obvious disasters and give you space to learn safely.

    Step 4: Build Your AI-in-UX Strategy

    Now you’re ready to build the actual strategy you’ll pitch to leadership. Start small with pilot projects that have a clear scope and evaluation criteria.

    Connect to business outcomes management cares about.
    Don’t pitch “using AI for research synthesis.” Pitch “reducing time from research to insights by 40%, enabling faster product decisions.”

    Piggyback your existing priorities on AI momentum.
    Remember that opportunity list from Step 2? Now you connect those long-standing needs to your AI strategy. If you’ve wanted more frequent usability testing, explain that AI implementations need continuous validation to catch problems before they scale. AI implementations genuinely benefit from good research practices. You’re simply using management’s enthusiasm for AI as the vehicle to finally get resources for practices that should have been funded all along.

    Define roles clearly.
    Where do humans lead? Where does AI assist? Where won’t you automate? Management needs to understand that some work requires human judgment and should never be fully automated.

    Plan for capability building.
    Your team will need training and new skills. Budget time and resources for this.

    Address risks honestly.
    AI could generate biased recommendations, miss important context, or produce work that looks good but doesn’t actually function. For each risk, explain how you’ll detect it and what you’ll do to mitigate it.

    Step 5: Pitch the Strategy to Leadership

    Frame your strategy as de-risking management’s AI ambitions, not blocking them. You’re showing them how to implement AI successfully while avoiding the obvious pitfalls.

    Lead with outcomes and ROI they care about.
    Put the business case up front.

    Bundle your wish list into the AI strategy.
    When you present your strategy, include those capabilities you’ve wanted but couldn’t get approved before. Don’t present them as separate requests. Integrate them as essential components. “To validate AI-generated designs, we’ll need to increase our testing frequency from annual to quarterly” sounds much more reasonable than “Can we please do more testing?” You’re explaining what’s required for their AI investment to succeed.

    Show quick wins alongside a longer-term vision.
    Identify one or two pilots that can show value within 30-60 days. Then show them how those pilots build toward bigger changes over the next year.

    Ask for what you need.
    Be specific. You need a budget for tools, time for pilots, access to data, and support for team training.

    Step 6: Implement and Demonstrate Value

    Run your pilots with clear before-and-after metrics. Measure everything: time saved, quality maintained, user satisfaction, team confidence.

    Document wins and learning.
    Failures are useful too. If a pilot doesn’t work out, document why and what you learned.

    Share progress in management’s language.
    Monthly updates should focus on business outcomes, not technical details. “We’ve reduced research synthesis time by 35% while maintaining quality scores” is the right level of detail.

    Build internal advocates by solving real problems.
    When your AI pilots make someone’s job easier, you create advocates who will support broader adoption.

    Iterate based on what works in your specific context.
    Not every AI application will fit your organization. Pay attention to what’s actually working and double down on that.

    Taking Initiative Beats Waiting

    AI adoption is happening. The question isn’t whether your organization will use AI, but whether you’ll shape how it gets implemented.

    Your UX expertise is exactly what’s needed to implement AI successfully. You understand users, quality, and the gap between impressive demos and useful reality.

    Take one practical first step this week.
    Schedule 30 minutes to map one AI opportunity in your practice. Pick one area where AI might help, think through how you’d pilot it safely, and sketch out what success would look like.

    Then start the conversation with your manager. You might be surprised how receptive they are to someone stepping up to lead this.

    You know how to understand user needs, test solutions, measure outcomes, and iterate based on evidence. Those skills don’t change just because AI is involved. You’re applying your existing expertise to a new tool.

    Your role isn’t disappearing. It’s evolving into something more strategic, more valuable, and more secure. But only if you take the initiative to shape that evolution yourself.

    Further Reading On SmashingMag

  • Beyond The Black Box: Practical XAI For UX Practitioners

    In my last piece, we established a foundational truth: for users to adopt and rely on AI, they must trust it. We talked about trust being a multifaceted construct, built on perceptions of an AI’s Ability, Benevolence, Integrity, and Predictability. But what happens when an AI, in its silent, algorithmic wisdom, makes a decision that leaves a user confused, frustrated, or even hurt? A mortgage application is denied, a favorite song is suddenly absent from a playlist, and a qualified resume is rejected before a human ever sees it. In these moments, ability and predictability are shattered, and benevolence feels a world away.

    Our conversation now must evolve from the why of trust to the how of transparency. The field of Explainable AI (XAI), which focuses on developing methods to make AI outputs understandable to humans, has emerged to address this, but it’s often framed as a purely technical challenge for data scientists. I argue it’s a critical design challenge for products relying on AI. It’s our job as UX professionals to bridge the gap between algorithmic decision-making and human understanding.

    This article provides practical, actionable guidance on how to research and design for explainability. We’ll move beyond the buzzwords and into the mockups, translating complex XAI concepts into concrete design patterns you can start using today.

    De-mystifying XAI: Core Concepts For UX Practitioners

    XAI is about answering the user’s question: “Why?” Why was I shown this ad? Why is this movie recommended to me? Why was my request denied? Think of it as the AI showing its work on a math problem. Without it, you just have an answer, and you’re forced to take it on faith. In showing the steps, you build comprehension and trust. You also allow for your work to be double-checked and verified by the very humans it impacts.

    Feature Importance And Counterfactuals

    There are a number of techniques we can use to clarify or explain what is happening with AI. While methods range from providing the entire logic of a decision tree to generating natural language summaries of an output, two of the most practical and impactful types of information UX practitioners can introduce into an experience are feature importance (Figure 1) and counterfactuals. These are often the most straightforward for users to understand and the most actionable for designers to implement.

    Feature Importance

    This explainability method answers, “What were the most important factors the AI considered?” It’s about identifying the top 2-3 variables that had the biggest impact on the outcome. It’s the headline, not the whole story.

    Example: Imagine an AI that predicts whether a customer will churn (cancel their service). Feature importance might reveal that “number of support calls in the last month” and “recent price increases” were the two most important factors in determining if a customer was likely to churn.
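
    Where teams have model access, surfacing those top factors is straightforward. Below is a minimal sketch, assuming a scikit-learn tree-based model; the churn data and column names are illustrative, not from a real system:

        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier

        # Hypothetical churn data; the column names are illustrative only.
        X = pd.DataFrame({
            "support_calls_last_month": [0, 5, 1, 7, 2, 6],
            "recent_price_increase":    [0, 1, 0, 1, 0, 1],
            "tenure_months":            [24, 3, 36, 2, 18, 5],
        })
        y = [0, 1, 0, 1, 0, 1]  # 1 = the customer churned

        model = RandomForestClassifier(random_state=0).fit(X, y)

        # Keep only the top 2-3 factors: the headline, not the whole story.
        ranked = sorted(zip(X.columns, model.feature_importances_),
                        key=lambda pair: pair[1], reverse=True)
        for name, score in ranked[:3]:
            print(f"{name}: {score:.2f}")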

    Counterfactuals

    This powerful method answers, “What would I need to change to get a different outcome?” This is crucial because it gives users a sense of agency. It transforms a frustrating “no” into an actionable “not yet.”

    Example: Imagine a loan application system that uses AI. A user is denied a loan. Instead of just seeing “Application Denied,” a counterfactual explanation would also share, “If your credit score were 50 points higher, or if your debt-to-income ratio were 10% lower, your loan would have been approved.” This gives the applicant clear, actionable steps they can take to potentially get a loan in the future.
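
    A counterfactual can be produced by searching for the smallest change that flips the model’s decision. Here is a minimal brute-force sketch; the approve rule stands in for a real model, and the thresholds and step sizes are illustrative:

        def approve(credit_score: int, dti_ratio: float) -> bool:
            # Hypothetical decision rule standing in for a real model.
            return credit_score >= 700 and dti_ratio <= 0.35

        def counterfactuals(credit_score: int, dti_ratio: float):
            # Search small, plausible changes that would flip a denial.
            found = []
            for score_bump in (25, 50, 75, 100):
                for dti_cut in (0.0, 0.05, 0.10, 0.15):
                    if approve(credit_score + score_bump, dti_ratio - dti_cut):
                        found.append((score_bump, dti_cut))
            return found

        for bump, cut in counterfactuals(credit_score=660, dti_ratio=0.40)[:1]:
            print(f"If your credit score were {bump} points higher and your "
                  f"debt-to-income ratio {cut:.0%} lower, you would have been approved.")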

    Using Model Data To Enhance The Explanation

    Although technical specifics are often handled by data scientists, it’s helpful for UX practitioners to know that tools like LIME (Local Interpretable Model-agnostic Explanations), which explains individual predictions by approximating the model locally, and SHAP (SHapley Additive exPlanations), which uses a game-theory approach to explain the output of any machine learning model, are commonly used to extract these “why” insights from complex models. These libraries essentially break down an AI’s decision to show which inputs were most influential for a given outcome.
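
    As a rough illustration of what these libraries return, here is a minimal LIME sketch (pip install lime), reusing a hypothetical churn model; the tiny dataset and class names are illustrative only:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from lime.lime_tabular import LimeTabularExplainer

        names = ["support_calls_last_month", "recent_price_increase", "tenure_months"]
        X = np.array([[0, 0, 24], [5, 1, 3], [1, 0, 36], [7, 1, 2], [2, 0, 18], [6, 1, 5]])
        y = np.array([0, 1, 0, 1, 0, 1])
        model = RandomForestClassifier(random_state=0).fit(X, y)

        explainer = LimeTabularExplainer(
            X, feature_names=names, class_names=["stay", "churn"], mode="classification"
        )

        # "Why did the model flag *this* customer, right now?"
        explanation = explainer.explain_instance(X[3], model.predict_proba, num_features=3)
        for feature, weight in explanation.as_list():
            print(feature, round(weight, 3))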

    When done properly, the data underlying an AI tool’s decision can be used to tell a powerful story. Let’s walk through feature importance and counterfactuals and show how the data science behind the decision can be utilized to enhance the user’s experience.

    Now let’s cover feature importance with the assistance of local explanation data (e.g., from LIME): this approach answers, “Why did the AI make this specific recommendation for me, right now?” Instead of a general explanation of how the model works, it provides a focused reason for a single, specific instance. It’s personal and contextual.

    Example: Imagine an AI-powered music recommendation system like Spotify. A local explanation would answer, “Why did the system recommend this specific song by Adele to you right now?” The explanation might be: “Because you recently listened to several other emotional ballads and songs by female vocalists.”

    Finally, let’s cover adding value-based explanation data (e.g., from SHAP) to an explanation of a decision: this is a more nuanced version of feature importance that answers, “How did each factor push the decision one way or the other?” It helps visualize what mattered, and whether each factor’s influence was positive or negative.

    Example: Imagine a bank uses an AI model to decide whether to approve a loan application.

    Feature Importance: The model output might show that the applicant’s credit score, income, and debt-to-income ratio were the most important factors in its decision. This answers what mattered.

    Feature Importance with Value-Based Explanations (SHAP): SHAP values take feature importance further by quantifying how strongly each factor pushed the prediction toward approval or denial.

    • For an approved loan, SHAP might show that a high credit score significantly pushed the decision towards approval (positive influence), while a slightly higher-than-average debt-to-income ratio pulled it slightly away (negative influence), but not enough to deny the loan.
    • For a denied loan, SHAP could reveal that a low income and a high number of recent credit inquiries strongly pushed the decision towards denial, even if the credit score was decent.

    This helps the loan officer explain to the applicant not just what was considered, but how each factor contributed to the final “yes” or “no” decision.
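
    In code, those signed contributions are exactly what SHAP returns. A minimal sketch (pip install shap), using a single-output model so the SHAP output stays a simple two-dimensional array; the loan data is illustrative:

        import numpy as np
        import shap
        from sklearn.ensemble import GradientBoostingRegressor

        names = ["credit_score", "income", "dti_ratio", "recent_inquiries"]
        X = np.array([
            [720, 85000, 0.30, 1],
            [640, 42000, 0.45, 6],
            [700, 60000, 0.38, 2],
            [580, 35000, 0.50, 8],
        ])
        y = np.array([1.0, 0.0, 1.0, 0.0])  # hypothetical approval scores

        model = GradientBoostingRegressor(random_state=0).fit(X, y)
        shap_values = shap.TreeExplainer(model).shap_values(X)  # shape: (rows, features)

        # Split one applicant's contributions by sign: the push-and-pull story.
        applicant = 1
        for name, value in sorted(zip(names, shap_values[applicant]), key=lambda p: p[1]):
            direction = "toward approval" if value > 0 else "toward denial"
            print(f"{name}: {value:+.3f} (pushed {direction})")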

    It’s crucial to recognize that the ability to provide good explanations often starts much earlier in the development cycle. Data scientists and engineers play a pivotal role by intentionally structuring models and data pipelines in ways that inherently support explainability, rather than trying to bolt it on as an afterthought.

    Research and design teams can foster this by initiating early conversations with data scientists and engineers about user needs for understanding, contributing to the development of explainability metrics, and collaboratively prototyping explanations to ensure they are both accurate and user-friendly.

    XAI And Ethical AI: Unpacking Bias And Responsibility

    Beyond building trust, XAI plays a critical role in addressing the profound ethical implications of AI, particularly concerning algorithmic bias. Explainability techniques, such as analyzing SHAP values, can reveal if a model’s decisions are disproportionately influenced by sensitive attributes like race, gender, or socioeconomic status, even if these factors were not explicitly used as direct inputs.

    For instance, if a loan approval model consistently assigns negative SHAP values to applicants from a certain demographic, it signals a potential bias that needs investigation, empowering teams to surface and mitigate such unfair outcomes.

    The power of XAI also comes with the potential for “explainability washing.” Just as “greenwashing” misleads consumers about environmental practices, explainability washing can occur when explanations are designed to obscure, rather than illuminate, problematic algorithmic behavior or inherent biases. This could manifest as overly simplistic explanations that omit critical influencing factors, or explanations that strategically frame results to appear more neutral or fair than they truly are. It underscores the ethical responsibility of UX practitioners to design explanations that are genuinely transparent and verifiable.

    UX professionals, in collaboration with data scientists and ethicists, hold a crucial responsibility in communicating the why of a decision, and also the limitations and potential biases of the underlying AI model. This involves setting realistic user expectations about AI accuracy, identifying where the model might be less reliable, and providing clear channels for recourse or feedback when users perceive unfair or incorrect outcomes. Proactively addressing these ethical dimensions will allow us to build AI systems that are truly just and trustworthy.

    From Methods To Mockups: Practical XAI Design Patterns

    Knowing the concepts is one thing; designing them is another. Here’s how we can translate these XAI methods into intuitive design patterns.

    Pattern 1: The “Because” Statement (for Feature Importance)

    This is the simplest and often most effective pattern. It’s a direct, plain-language statement that surfaces the primary reason for an AI’s action.

    • Heuristic: Be direct and concise. Lead with the single most impactful reason. Avoid jargon at all costs.

    Example: Imagine a music streaming service. Instead of just presenting a “Discover Weekly” playlist, you add a small line of microcopy.

    Song Recommendation: “Velvet Morning”
    Because you listen to “The Fuzz” and other psychedelic rock.
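
    Microcopy like this can be assembled directly from the model’s top-ranked factor. A minimal sketch; the factor keys and copy templates are hypothetical:

        def because_statement(top_factor: str, context: dict) -> str:
            # Map the model's top-ranked factor to a plain-language template.
            templates = {
                "similar_artist": 'Because you listen to "{artist}" and other {genre}.',
                "time_of_day": "Because you often play {genre} at this time of day.",
            }
            return templates[top_factor].format(**context)

        print(because_statement("similar_artist",
                                {"artist": "The Fuzz", "genre": "psychedelic rock"}))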

    Pattern 2: The “What-If” Interactive (for Counterfactuals)

    Counterfactuals are inherently about empowerment. The best way to represent them is by giving users interactive tools to explore possibilities themselves. This is perfect for financial, health, or other goal-oriented applications.

    • Heuristic: Make explanations interactive and empowering. Let users see the cause and effect of their choices.

    Example: A loan application interface. After a denial, instead of a dead end, the user gets a tool to determine how various scenarios (what-ifs) might play out (See Figure 1).
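
    Under the hood, the interactive tool simply re-scores the application whenever a slider moves. A minimal sketch, reusing the hypothetical approve rule from the counterfactual example above:

        def approve(credit_score: int, dti_ratio: float) -> bool:
            # Hypothetical decision rule standing in for a real model.
            return credit_score >= 700 and dti_ratio <= 0.35

        def what_if(credit_score: int, dti_ratio: float) -> str:
            # Called on every slider change for immediate cause-and-effect feedback.
            outcome = "approved" if approve(credit_score, dti_ratio) else "denied"
            return (f"With a credit score of {credit_score} and a debt-to-income "
                    f"ratio of {dti_ratio:.0%}, your application would be {outcome}.")

        print(what_if(660, 0.40))  # the user's current situation
        print(what_if(710, 0.32))  # after dragging both sliders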

    Pattern 3: The Highlight Reel (For Local Explanations)

    When an AI performs an action on a user’s content (like summarizing a document or identifying faces in photos), the explanation should be visually linked to the source.

    • Heuristic: Use visual cues like highlighting, outlines, or annotations to connect the explanation directly to the interface element it’s explaining.

    Example: An AI tool that summarizes long articles.

    AI-Generated Summary Point:
    Initial research showed a market gap for sustainable products.

    Source in Document:
    “…Our Q2 analysis of market trends conclusively demonstrated that no major competitor was effectively serving the eco-conscious consumer, revealing a significant market gap for sustainable products…”

    Pattern 4: The Push-and-Pull Visual (for Value-based Explanations)

    For more complex decisions, users might need to understand the interplay of factors. Simple data visualizations can make this clear without being overwhelming.

    • Heuristic: Use simple, color-coded data visualizations (like bar charts) to show the factors that positively and negatively influenced a decision.

    Example: An AI screening a candidate’s profile for a job.

    Why this candidate is a 75% match:

    Factors pushing the score up:

    • 5+ Years UX Research Experience
    • Proficient in Python

    Factors pushing the score down:

    • No experience with B2B SaaS
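
    A visual like this takes only a few lines to prototype. A minimal matplotlib sketch; the factors and weights are illustrative, not real model output:

        import matplotlib.pyplot as plt

        factors = ["5+ years UX research", "Proficient in Python", "No B2B SaaS experience"]
        weights = [0.45, 0.30, -0.20]  # positive pushes the match score up

        colors = ["tab:green" if w > 0 else "tab:red" for w in weights]
        plt.barh(factors, weights, color=colors)
        plt.axvline(0, color="black", linewidth=0.8)  # zero line separates push from pull
        plt.xlabel("Contribution to match score")
        plt.title("Why this candidate is a 75% match")
        plt.tight_layout()
        plt.show()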

    Learning and using these design patterns in the UX of your AI product will help increase its explainability. You can also use additional techniques that I’m not covering in depth here, including the following:

    • Natural language explanations: Translating an AI’s technical output into simple, conversational human language that non-experts can easily understand.
    • Contextual explanations: Providing a rationale for an AI’s output at the specific moment and location it is most relevant to the user’s task.
    • Relevant visualizations: Using charts, graphs, or heatmaps to visually represent an AI’s decision-making process, making complex data intuitive and easier for users to grasp.

    A Note For the Front End: Translating these explainability outputs into seamless user experiences also presents its own set of technical considerations. Front-end developers often grapple with API design to efficiently retrieve explanation data, and performance implications (like the real-time generation of explanations for every user interaction) need careful planning to avoid latency.
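
    To make the shape of such an API concrete, here is a minimal sketch of an explanation endpoint, assuming Flask; the route, payload, and in-memory store are illustrative, not a production design:

        from flask import Flask, jsonify

        app = Flask(__name__)

        # Precomputed by the model service and cached to avoid per-request latency.
        EXPLANATIONS = {
            "rec-123": {
                "because": 'Because you listen to "The Fuzz" and other psychedelic rock.',
                "factors": [
                    {"name": "similar_artists", "weight": 0.6},
                    {"name": "genre_affinity", "weight": 0.3},
                ],
            }
        }

        @app.route("/recommendations/<rec_id>/explanation")
        def get_explanation(rec_id):
            explanation = EXPLANATIONS.get(rec_id)
            if explanation is None:
                return jsonify({"error": "no explanation available"}), 404
            return jsonify(explanation)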

    Some Real-world Examples

    UPS Capital’s DeliveryDefense

    UPS uses AI to assign a “delivery confidence score” to addresses to predict the likelihood of a package being stolen. Their DeliveryDefense software analyzes historical data on location, loss frequency, and other factors. If an address has a low score, the system can proactively reroute the package to a secure UPS Access Point, providing an explanation for the decision (e.g., “Package rerouted to a secure location due to a history of theft”). This system demonstrates how XAI can be used for risk mitigation and building customer trust through transparency.

    Autonomous Vehicles

    Autonomous vehicles will need to use XAI effectively to make safe, explainable decisions. When a self-driving car brakes suddenly, the system can provide a real-time explanation for its action, for example, by identifying a pedestrian stepping into the road. This is not only crucial for passenger comfort and trust but also a regulatory requirement to prove the safety and accountability of the AI system.

    IBM Watson Health (and its challenges)

    While often cited as a general example of AI in healthcare, it’s also a valuable case study for the importance of XAI. The failure of its Watson for Oncology project highlights what can go wrong when explanations are not clear, or when the underlying data is biased or not localized. The system’s recommendations were sometimes inconsistent with local clinical practices because they were based on U.S.-centric guidelines. This serves as a cautionary tale on the need for robust, context-aware explainability.

    The UX Researcher’s Role: Pinpointing And Validating Explanations

    Our design solutions are only effective if they address the right user questions at the right time. An explanation that answers a question the user doesn’t have is just noise. This is where UX research becomes the critical connective tissue in an XAI strategy, ensuring that we explain the what and how that actually matters to our users. The researcher’s role is twofold: first, to inform the strategy by identifying where explanations are needed, and second, to validate the designs that deliver those explanations.

    Informing the XAI Strategy (What to Explain)

    Before we can design a single explanation, we must understand the user’s mental model of the AI system. What do they believe it’s doing? Where are the gaps between their understanding and the system’s reality? This is the foundational work of a UX researcher.

    Mental Model Interviews: Unpacking User Perceptions Of AI Systems

    Through deep, semi-structured interviews, UX practitioners can gain invaluable insights into how users perceive and understand AI systems. These sessions are designed to encourage users to literally draw or describe their internal “mental model” of how they believe the AI works. This often involves asking open-ended questions that prompt users to explain the system’s logic, its inputs, and its outputs, as well as the relationships between these elements.

    These interviews are powerful because they frequently reveal profound misconceptions and assumptions that users hold about AI. For example, a user interacting with a recommendation engine might confidently assert that the system is based purely on their past viewing history. They might not realize that the algorithm also incorporates a multitude of other factors, such as the time of day they are browsing, the current trending items across the platform, or even the viewing habits of similar users.

    Uncovering this gap between a user’s mental model and the actual underlying AI logic is critically important. It tells us precisely what specific information we need to communicate to users to help them build a more accurate and robust mental model of the system. This, in turn, is a fundamental step in fostering trust. When users understand, even at a high level, how an AI arrives at its conclusions or recommendations, they are more likely to trust its outputs and rely on its functionality.

    AI Journey Mapping: A Deep Dive Into User Trust And Explainability

    By meticulously mapping the user’s journey with an AI-powered feature, we gain invaluable insights into the precise moments where confusion, frustration, or even profound distrust emerge. This uncovers critical junctures where the user’s mental model of how the AI operates clashes with its actual behavior.

    Consider a music streaming service: Does the user’s trust plummet when a playlist recommendation feels “random,” lacking any discernible connection to their past listening habits or stated preferences? This perceived randomness is a direct challenge to the user’s expectation of intelligent curation and a breach of the implicit promise that the AI understands their taste. Similarly, in a photo management application, do users experience significant frustration when an AI photo-tagging feature consistently misidentifies a cherished family member? This error is more than a technical glitch; it strikes at the heart of accuracy, personalization, and even emotional connection.

    These pain points are vivid signals indicating precisely where a well-placed, clear, and concise explanation is necessary. Such explanations serve as crucial repair mechanisms, mending a breach of trust that, if left unaddressed, can lead to user abandonment.

    The power of AI journey mapping lies in its ability to move us beyond simply explaining the final output of an AI system. While understanding what the AI produced is important, it’s often insufficient. Instead, this process compels us to focus on explaining the process at critical moments. This means addressing:

    • Why a particular output was generated: Was it due to specific input data? A particular model architecture?
    • What factors influenced the AI’s decision: Were certain features weighted more heavily?
    • How the AI arrived at its conclusion: Can we offer a simplified, analogous explanation of its internal workings?
    • What assumptions the AI made: Were there implicit understandings of the user’s intent or data that need to be surfaced?
    • What the limitations of the AI are: Clearly communicating what the AI cannot do, or where its accuracy might waver, builds realistic expectations.

    AI journey mapping transforms the abstract concept of XAI into a practical, actionable framework for UX practitioners. It enables us to move beyond theoretical discussions of explainability and instead pinpoint the exact moments where user trust is at stake, providing the necessary insights to build AI experiences that are powerful, transparent, understandable, and trustworthy.

    Ultimately, research is how we uncover the unknowns. Your team might be debating how to explain why a loan was denied, but research might reveal that users are far more concerned with understanding how their data was used in the first place. Without research, we are simply guessing what our users are wondering.

    Collaborating On The Design (How to Explain Your AI)

    Once research has identified what to explain, the collaborative loop with design begins. Designers can prototype the patterns we discussed earlier—the “Because” statement, the interactive sliders—and researchers can put those designs in front of users to see if they hold up.

    Targeted Usability & Comprehension Testing: We can design research studies that specifically test the XAI components. We don’t just ask, “Is this easy to use?” We ask, “After seeing this, can you tell me in your own words why the system recommended this product?” or “Show me what you would do to see if you could get a different result.” The goal here is to measure comprehension and actionability, alongside usability.

    Measuring Trust Itself: We can use simple surveys and rating scales before and after an explanation is shown. For instance, we can ask a user on a 5-point scale, “How much do you trust this recommendation?” before they see the “Because” statement, and then ask them again afterward. This provides quantitative data on whether our explanations are actually moving the needle on trust.
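
    Analyzing those paired before-and-after ratings is simple. A minimal sketch; the ratings are invented, and a paired non-parametric test suits 5-point scale data:

        from scipy.stats import wilcoxon

        before = [2, 3, 3, 2, 4, 3, 2, 3, 3, 2]  # trust ratings before the explanation
        after = [4, 4, 3, 3, 5, 4, 3, 4, 4, 3]   # same participants, afterward

        # Wilcoxon signed-rank test: did the explanation actually move ratings?
        statistic, p_value = wilcoxon(before, after)
        print(f"p = {p_value:.3f}")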

    This process creates a powerful, iterative loop. Research findings inform the initial design. That design is then tested, and the new findings are fed back to the design team for refinement. Maybe the “Because” statement was too jargony, or the “What-If” slider was more confusing than empowering. Through this collaborative validation, we ensure that the final explanations are technically accurate, genuinely understandable, useful, and trust-building for the people using the product.

    The Goldilocks Zone Of Explanation

    A critical word of caution: it is possible to over-explain. As in the fairy tale, where Goldilocks sought the porridge that was ‘just right’, the goal of a good explanation is to provide the right amount of detail—not too much and not too little. Bombarding a user with every variable in a model will lead to cognitive overload and can actually decrease trust. The goal is not to make the user a data scientist.

    One solution is progressive disclosure.

    1. Start with the simple. Lead with a concise “Because” statement. For most users, this will be enough.
    2. Offer a path to detail. Provide a clear, low-friction link like “Learn More” or “See how this was determined.”
    3. Reveal the complexity. Behind that link, you can offer the interactive sliders, the visualizations, or a more detailed list of contributing factors.

    This layered approach respects user attention and expertise, providing just the right amount of information for their needs. Let’s imagine you’re using a smart home device that recommends optimal heating based on various factors.

    Start with the simple: “Your home is currently heated to 72 degrees, which is the optimal temperature for energy savings and comfort.”

    Offer a path to detail: Below that, a small link or button: “Why is 72 degrees optimal?”

    Reveal the complexity: Clicking that link could open a new screen showing:

    • Interactive sliders for outside temperature, humidity, and your preferred comfort level, demonstrating how these adjust the recommended temperature.
    • A visualization of energy consumption at different temperatures.
    • A list of contributing factors like “Time of day,” “Current outside temperature,” “Historical energy usage,” and “Occupancy sensors.”
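
    One way to hand this layering to the front end is a single structured payload that the UI reveals one level at a time. A minimal sketch; the field names and values are illustrative:

        layered_explanation = {
            "summary": "Your home is heated to 72 degrees, the optimal balance "
                       "of energy savings and comfort.",
            "detail_prompt": "Why is 72 degrees optimal?",
            "detail": {
                "factors": [
                    {"name": "Current outside temperature", "value": "41 degrees"},
                    {"name": "Time of day", "value": "evening"},
                    {"name": "Historical energy usage", "value": "below average"},
                    {"name": "Occupancy sensors", "value": "2 rooms occupied"},
                ],
                # Parameters the UI can expose as interactive sliders.
                "what_if": {"outside_temperature": [20, 80], "comfort_level": [1, 5]},
            },
        }

        def render(show_detail: bool = False) -> None:
            # Reveal only the layer the user has asked for.
            print(layered_explanation["summary"])
            if show_detail:
                for factor in layered_explanation["detail"]["factors"]:
                    print(f'- {factor["name"]}: {factor["value"]}')

        render()  # the simple layer; detail stays one click away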

    It’s effective to combine multiple XAI methods, and the Goldilocks Zone of Explanation pattern, which advocates progressive disclosure, implicitly encourages this. You might start with a simple “Because” statement (Pattern 1) for immediate comprehension, and then offer a “Learn More” link that reveals a “What-If” Interactive (Pattern 2) or a “Push-and-Pull Visual” (Pattern 4) for deeper exploration.

    For instance, a loan application system could initially state the primary reason for denial (feature importance), then allow the user to interact with a “What-If” tool to see how changes to their income or debt would alter the outcome (counterfactuals), and finally, provide a detailed “Push-and-Pull” chart (value-based explanation) to illustrate the positive and negative contributions of all factors. This layered approach allows users to access the level of detail they need, when they need it, preventing cognitive overload while still providing comprehensive transparency.

    Determining which XAI tools and methods to use is primarily a function of thorough UX research. Mental model interviews and AI journey mapping are crucial for pinpointing user needs and pain points related to AI understanding and trust. Mental model interviews help uncover user misconceptions about how the AI works, indicating areas where fundamental explanations (like feature importance or local explanations) are needed. AI journey mapping, on the other hand, identifies critical moments of confusion or distrust in the user’s interaction with the AI, signaling where more granular or interactive explanations (like counterfactuals or value-based explanations) would be most beneficial to rebuild trust and provide agency.

    Ultimately, the best way to choose a technique is to let user research guide your decisions, ensuring that the explanations you design directly address actual user questions and concerns, rather than simply offering technical details for their own sake.

    XAI for Deep Reasoning Agents

    Some of the newest AI systems, known as deep reasoning agents, produce an explicit “chain of thought” for every complex task. They do not merely cite sources; they show the logical, step-by-step path they took to arrive at a conclusion. While this transparency provides valuable context, a play-by-play that spans several paragraphs can feel overwhelming to a user simply trying to complete a task.

    The principles of XAI, especially the Goldilocks Zone of Explanation, apply directly here. We can curate the journey, using progressive disclosure to show only the final conclusion and the most salient step in the thought process first. Users can then opt in to see the full, detailed, multi-step reasoning when they need to double-check the logic or find a specific fact. This approach respects user attention while preserving the agent’s full transparency.

    Next Steps: Empowering Your XAI Journey

    Explainability is a fundamental pillar for building trustworthy and effective AI products. For the advanced practitioner looking to drive this change within their organization, the journey extends beyond design patterns into advocacy and continuous learning.

    To deepen your understanding and practical application, consider exploring resources like the AI Explainability 360 (AIX360) toolkit from IBM Research or Google’s What-If Tool, which offer interactive ways to explore model behavior and explanations. Engaging with communities like the Responsible AI Forum or specific research groups focused on human-centered AI can provide invaluable insights and collaboration opportunities.

    Finally, be an advocate for XAI within your own organization. Frame explainability as a strategic investment. Consider a brief pitch to your leadership or cross-functional teams:

    “By investing in XAI, we’ll go beyond building trust; we’ll accelerate user adoption, reduce support costs by empowering users with understanding, and mitigate significant ethical and regulatory risks by exposing potential biases. This is good design and smart business.”

    Your voice, grounded in practical understanding, is crucial in bringing AI out of the black box and into a collaborative partnership with users.

  • NASA finally—and we really do mean it this time—has a full-time leader

    Jared Isaacman, a pilot and financial tech billionaire, has commanded two groundbreaking spaceflights, including leading the first private spacewalk.

    But his most remarkable flying has occurred over the last year. And on Wednesday, he stuck the landing by earning formal Senate approval to become NASA’s 15th administrator.

    With a final tally of 67 to 30, Wednesday’s Senate confirmation came 377 days after President Trump first nominated Isaacman to serve as NASA administrator. Since that time, Isaacman has had to navigate a series of issues.


  • Physicists 3D-printed a Christmas tree of ice

    Physicists at the University of Amsterdam came up with a really cool bit of Christmas decor: a miniature 3D-printed Christmas tree, a mere 8 centimeters tall, made of ice, without any refrigeration equipment or other freezing technology, and at minimal cost. The secret is evaporative cooling, according to a preprint posted to the physics arXiv.

    Evaporative cooling is a well-known phenomenon; mammals use it to regulate body temperature. You can see it in your morning cup of hot coffee: the most energetic molecules escape as steam, carrying heat away and cooling the liquid that remains. It also plays a role (along with shock wave dynamics and various other factors) in the formation of “wine tears.” It’s also a key step in creating Bose-Einstein condensates, where the hottest atoms are allowed to escape the magnetic trap so the atoms left behind settle to lower temperatures.

    And evaporative cooling is also the main culprit behind the infamous “stall” that so frequently plagues aspiring BBQ pit masters eager to make a successful pork butt. The meat sweats as it cooks, releasing the moisture within, and that moisture evaporates and cools the meat, effectively canceling out the heat from the BBQ. That’s why a growing number of competitive pit masters wrap their meat in tinfoil after the first few hours (usually when the internal temperature hits 170° F).
