Category: Uncategorized
-
Protecting Birds with Creativity, Warming Up the New Year: A Public Call for Bird-Collision-Prevention Spring Festival Window Paper-Cut Designs
As the old year ends and the new one begins, pasting bright red paper-cuts on windows is an essential ritual for welcoming the New Year. Yet the clear glass that reflects warm festive lights can become an invisible hazard for the many small birds that move through our cities: glass curtain walls and holiday lighting often disorient them in flight and … Read more
-
Stretching Doesn't Work? Maybe You're Just Not Stretching Properly
Most people believe stretching improves flexibility and posture and relieves back and lower-back pain. Research confirms stretching does have these benefits, but when we follow tutorials for ages without feeling any effect, we lose faith in stretching. In my earlier article "How Desk Workers Can Relieve Back Pain" … Read more
-
Haidian District's 2025 Internet Communication Development Conference Successfully Held: Gathering New Momentum for Online Communication, Writing a New Chapter for a Culturally Strong District
On December 19, the "e-Road Together, Networking the Future" Haidian District 2025 Internet Communication Development Conference was successfully held at the Zhongguancun National Independent Innovation Demonstration Zone Exhibition Center. Guided by the Publicity Department of the Haidian District Party Committee and hosted by the district's Cyberspace Affairs Office, the conference drew more than 200 participants from central, municipal, and district government departments, internet platform companies, online creators, industrial parks, industry associations, universities, and the media. Long Ningli, Second-Level Inspector of the Internet Communication Bureau of the Central Cyberspace Administration; Pan Feng, Deputy Director of the Beijing Municipal Cyberspace Affairs Office; and Qi Huichao, Standing Committee Member of the Haidian District Party Committee and Head of its Publicity Department, attended the conference.
As digitalization sweeps the globe, online communication has become an important force in shaping social perceptions, building value consensus, and empowering urban development. The conference brought together government, industry, academia, research, and practitioners, showcasing the year's communication works, sharing industry trends, and building a platform for cooperation and sharing, to jointly plan the future of regional online communication and to generate broad consensus and strong momentum for building a culturally strong district in the nation's capital.
Holding the Banner High, Steering the Course: Drawing a New Blueprint for Online Communication Together
The conference opened with a promotional film showcasing Haidian's innovative vitality and deep connectivity. In her opening address, Qi Huichao, Standing Committee Member of the Haidian District Party Committee and Head of its Publicity Department, presented Haidian's advantages in resources, talent, and industry, and stressed the need to grasp the trends of digitalization, networking, and intelligence, consolidate and strengthen mainstream public opinion, mobilize online communication forces, build a clean and healthy cyberspace, and serve regional innovation and development through high-quality online communication, setting the direction for the district's communication work.
Insight into the Frontier, Grasping the Trends: Exploring New Momentum for Online Communication
In the "New Trends Keynote" segment, industry leaders and leading scholars shared their insights.
Wang Wei, COO of Weibo and CEO of Sina Mobile, spoke on "Large Models Empowering a New Ecosystem of Online Communication," systematically describing how AI is reshaping the communication ecosystem and noting that large-model AI has evolved from an auxiliary tool into a core driving force of online communication. He emphasized the responsibility internet companies bear in building the communication ecosystem: continuing to invest in content safety and trustworthy algorithms, and working with all parties to explore a sound development path for intelligent online communication.
Shen Yang, a professor jointly appointed by Tsinghua University's School of Journalism and its School of Artificial Intelligence, focused on "The Transformation of the AIGC Content Creation Ecosystem and Going-Global Strategies," analyzing the far-reaching impact of AI-generated content on the global communication landscape. AIGC, he argued, is shaping a new "human-machine symbiosis" paradigm, which means not only exponential gains in production efficiency but also a profound restructuring of creators, the creative process, and the industry ecosystem. He further stressed that Haidian's first-mover advantage in AI content creation is an important opportunity to enhance the district's cultural influence.
Decoding the DNA, Telling the Story Well: Shaping a New Image for Haidian
The "Decoding the Haidian IP" segment vividly showcased the district's innovative heritage and communication vitality. The year's outstanding online communication works were presented together, demonstrating Haidian's rich achievements and diverse ecosystem across five dimensions.
In the subsequent "Creators' Story Session," Gao Qingyi, Executive Member of the Science Popularization Working Committee of the China Computer Federation, hosted an in-depth conversation. Wang Guopei, CEO of Shanghai Shuiling Qingzheng Culture Communication Co., Ltd.; Zhang Yijia, founder and CEO of Jiazi Guangnian; and Dong Chenyu, Associate Professor at Renmin University of China, speaking respectively as "content creator," "industry observer," and "academic researcher," moved layer by layer from "image building" to "value transmission" to "development empowerment," exploring how to turn Haidian's technological heritage, cultural resources, and spirit of the times into an urban narrative with appeal and resonance.
Platforms Joining Forces, Coordinated Action: Building a New Online Communication Ecosystem Together
The conference announced the "Network Gathering in Haidian" series of initiatives, marking a new stage for the district's coordinated communication mechanism. Huang Ying, Director of the district Cyberspace Affairs Office, Deputy Head of the district Publicity Department, and Second-Level Inspector, announced the formal establishment of the Haidian "Science and Technology Self-Media Professional Committee," which aims to leverage tech self-media as a bridge and a professional force, raise the professionalism and influence of science and technology communication, and build a co-created, co-governed, and shared tech communication ecosystem. The first committee comprises 13 leading Haidian tech self-media organizations and 5 prominent online influencers in the tech field.
Zhang Dongxu, Director of the Haidian District Media Convergence Center and Deputy Head of the district Publicity Department, then unveiled the Haidian "Friends of Light and Shadow" program, which aims to build an open communication community of "cultivation, coordination, and empowerment," ultimately achieving shared resources, co-created content, and amplified voice to shape and promote Haidian's city image in an all-round, multi-dimensional way. Three creator representatives from Renmin University of China, Beijing Language and Culture University, and Beijing City University shared their experiences.
United in Purpose, Pooling Strength: Amplifying Positive Energy in Cyberspace
As a key part of the conference, the Haidian Association of Prominent Online Figures completed the election of its second council, reviewed the past five years of work, adopted a new charter, and elected a new leadership team. The move will further strengthen outreach and services, uniting and guiding prominent online figures to draw a "concentric circle" online and offline and pool more wisdom and strength for regional development.
The conference also featured an immersive government-enterprise exhibition area integrating the district's cultural and sports resources, inviting major resident internet companies such as Kuaishou, 36Kr, and iFlytek to showcase their technological resources and core capabilities in content ecosystems, intelligent technology, and platform channels, supporting multi-dimensional, precise promotion of Haidian's city image and signature industries.
Inside and outside the venue, the conference demonstrated Haidian's frontier exploration and achievements in online communication and digital content, strengthened the platform for collaborative innovation, and laid a solid foundation for building a healthy communication ecosystem and empowering high-quality regional development.
-
The 2025 Smartphone Three-Way Battle: Xiaomi Raised Prices, Huawei Retook the Throne, Apple Dropped the Act | 36Kr Year in Review ②
For its year-end special, the "Perspective Chart" column presents the "36Kr Annual Review" series, using data to chart the trends of 2025 and images to capture the year's must-know developments in the business world.
By Qiu Xiaofen
In 2025, the smartphone industry went through a brutal reshuffle amid a "supercycle" of soaring costs. With component costs up 10%-25% across the board, the market was split by a $600 red line: the low and mid-range collapsed into a shrinking pool, while only the high end expanded against the trend on the strength of AI and ecosystems.
Chart: 36Kr
Apple: Dropping the Act, the Base Model Gets the "Full Package" Too
Facing pursuit from Chinese brands, Apple stopped holding back. The base iPhone 17 no longer "squeezed the toothpaste," finally filling in core gaps such as a high refresh rate. Sales in China nearly doubled, making it the dominant model of October. Even with its AI running late, Apple held its high-end position through the base model's overwhelming value.
Chart: 36Kr
Huawei: Five Years of Fighting Back, the Throne Reclaimed
With a domestic component ratio as high as 95% and a maturing native HarmonyOS ecosystem, Huawei fully recovered lost ground. In late November the Mate 80 arrived as the year's finale, surging 16% in a single week and overtaking Apple in week 49 (W49) to reclaim the crown of China's best-selling smartphone brand.
Xiaomi: The Most Sophisticated "Price Hike" Is Going Premium
The Xiaomi 17 struck preemptively, with the Pro version outselling the base model for the first time. Rather than fighting over specs, it competed on its rear-display design and the "human-car-home" full ecosystem; Xiaomi's ASP (average selling price) rose 20%, making it the strongest dark horse in the premium segment.
Chart: 36Kr
China's premium market has become a three-way contest.
Whoever wins the high end wins it all, and 2026's tough battles will be even harder to fight.
Chart: 36Kr
This is issue ② of the series.
-
Daily Roadshow Picks | AI, Low-Altitude Economy, New Materials, Biomanufacturing, Nuclear Fusion, and More
As China's largest new-economy media platform, 36Kr has long worked closely with investment institutions and fund partners to significantly improve companies' fundraising opportunities. Over a decade of growth alongside the new economy, 36Kr has accumulated an extensive network of primary-market investors.
"Daily Roadshow" opens investor communities to quality startups for a new kind of closed-door online roadshow, providing in-depth services to both founders and investors. For high-potential founders: efficiently communicate your company's value highlights and engage with investors in depth. For investors and institutions: talk to project decision-makers at low cost across multiple dimensions, get an early look at the future with first-hand information, while 36Kr's media resources and platform advantages support matchmaking on both sides.
This issue features six selected roadshow projects from our community. If you are interested in any of the projects in this article and would like an introduction to the project team, or if you have a strong project seeking funding and more investors, please contact us (scan the QR code at the bottom to add our operations manager, noting "project matchmaking").
To connect with a project, scan the QR code at the bottom of this article to add our dedicated operations manager on WeChat.
1. Yuyao Technology: An AI Large Model for New-Materials Prediction, Redefining the Materials R&D Paradigm
[Overview] In response to the national strategy of focusing on original innovation, the team spent six years of painstaking research bringing AI into new-materials R&D. It addresses the pain points and bottlenecks of traditional R&D and achieves genuine zero-to-one breakthroughs. Its self-developed AI large model for new-materials prediction, built on proprietary, validated experimental data, provides customers with new-materials R&D solutions or on-premises model deployment.
[Funding] Series A, RMB 25-70 million
[Highlights]
1) Paradigm-level breakthrough: the world's first "physics-informed AI large model for new-materials prediction," embedding fundamental materials physics into neural networks and shifting R&D from "trial and error" to "AI-driven rational design."
2) Disruptive efficiency gains: the platform has been shown to shorten materials R&D cycles by 80%, cut costs by 70%, and reduce failure rates from above 80% to below 5%, addressing the industry's long-standing core pain points.
3) Proprietary data and a closed-loop moat: driven by a high-quality, fully self-controlled experimental database accumulated over six years, the company has built a self-evolving "AI design - experimental validation - data feedback" loop, creating a solid data and iteration barrier.
4) National-level validation and frontier results: the company hosts a national-level postdoctoral research workstation; its technology has been listed as a MIIT vertical-domain large-model demonstration case, a Zhongguancun Science City AI enablement showcase, and the only new-materials industry platform in the Beijing Municipal Bureau of Economy and Information Technology's 2025 industrial internet program. Using the platform, it has built a pipeline of more than 10 new-materials technologies in key areas, including biomimetic nose sensors, tunable 6G devices, and nuclear-protection films, all completed with experimental or engineering-trial validation.
5) Top cross-disciplinary team: led by Dr. Xiang Xiaodong, inventor of the materials genome chip, the "dream team" spans materials science, AI, quantum computing, and other disciplines, with end-to-end capability from theoretical breakthrough to industrialization.
6) Clear commercial and social value: a clear business model (SaaS microservices + full-process technology platform services + industrialization of core materials) targeting a hundred-billion-RMB market; by improving front-end R&D efficiency, it can inject strong momentum into the entire industry chain and high-quality regional development.
Yuyao Technology: model platform interface and technical workflow
2. Blue Vector
[Overview] Crewed eVTOL aircraft and aviation industrial software
[Funding] Pre-A round, RMB 100 million
[Historical shareholders] Houxue Capital, Xianfeng Changqing (K2VC), Dongfang Jiafu, and others
[Highlights] The V30 is the world's first intelligent electric vertical take-off and landing aircraft to simultaneously meet the civil-aviation-grade 10⁻⁹ safety standard and an open system architecture. The core team consists of senior experts from AVIC, COMAC, Collins Aerospace, OSTIN Avionics, Alibaba, and other companies. Blue Vector is the first manufacturer of intelligent passenger eVTOLs above two tonnes designed around the "software-defined aircraft" concept to be co-founded in the low-altitude economy by the CAAC-affiliated aviation investment group, and it cooperates deeply with COMAC Software, a COMAC subsidiary, on electronic/electrical architecture and airworthiness R&D management software.
Blue Vector SKYLA-V30
3. Millisecond Intelligent Control
[Overview] A leading Chinese company in steer-by-wire (SBW), REPS, and PPK
[Funding] Pre-A round, RMB 50 million
[Historical shareholders] Houxue Capital
[Highlights] As core members, the founding team worked with CATARC and other organizations on drafting the national standard GB 17675-2025. The team has full-stack capabilities spanning systems, electronic controls, algorithms, and functional safety. Core members previously worked at Bosch, Huawei, SAIC, NIO, Mando, and other major domestic and international automotive suppliers and OEMs, with cumulative mass-production delivery experience of millions of units in related fields.
4. Baishi Electronics
[Overview] Founded in 2019, Nanjing Baishi Electronics Technology Co., Ltd. is a leading domestic manufacturer specializing in third-generation semiconductor silicon carbide (SiC) and gallium nitride (GaN) epitaxial wafers. It has established a complete R&D headquarters and production center in Nanjing's Pukou Economic and Technological Development Zone, offering 6-inch and 8-inch SiC and silicon-based GaN epitaxial foundry services to meet the needs of next-generation power device development. Products include epitaxial wafers on SiC substrates (SiC-on-SiC, GaN-on-SiC) and on silicon substrates (GaN-on-Si), delivering high-quality, highly consistent, and highly reliable SiC and GaN epitaxial wafers and professional foundry services for large-size, high-voltage, high-power, and RF/microwave applications. Beyond standard wafers, the company also offers customized epitaxy services and the key process steps needed for device development in specialized markets.
[Funding] Series B+, RMB 100 million
[Historical shareholders] Huaying Capital, Heli Capital, Hangshi Asset Management, Delta Electronics, Yonghua Investment, GRC Fuhua Capital, and others
[Highlights]
1) The core team comes from major Asian SiC and GaN epitaxial wafer manufacturers, with full-stack capabilities in epitaxy process development, yield assurance, and equipment improvement;
2) All process technology is developed in-house. The company operates a 1,000+ m² Class-10 cleanroom R&D and manufacturing center equipped with high-end R&D and testing equipment. On product quality, it can specify pit-defect metrics in its epitaxy data sheets (pit defects correlate directly with trench MOSFET quality); its 6-inch SiC epitaxial wafers achieve 98%-99% SBD-grade wafer-start yield and 95% MOSFET-grade yield, making it one of the few suppliers able to mass-produce high-quality epitaxial wafers rated for 3300V and 6500V. In addition, its 8-inch SiC epitaxial wafers are already shipping in volume and have been qualified by well-known overseas customers;
3) The company has completed localization: substrates are sourced from domestic suppliers, and production equipment is essentially fully domestic and automated;
4) A complete product line covering SiC power, GaN power, and GaN RF epitaxial wafers serves new-energy vehicles, 5G communications, solar and storage, smart grid, and other markets; it also offers customized epitaxy services, tailoring wafers for special applications to meet the technical requirements of different scenarios;
5) To date the company holds 42 patents, with all product technical metrics at world-leading levels; in October 2025, Baishi Electronics passed the 2025 national-level "Little Giant" specialized and innovative enterprise certification, marking national-level recognition of its overall strength and industry influence.
Baishi Electronics product display
5. Zhongke Guosheng
[Overview] Zhongke Guosheng is a global pioneer in bio-based materials manufacturing, focusing on the innovative development and industrialization of furan-based chemicals. Founded in 2021, its core team graduated from the Dalian Institute of Chemical Physics (CAS), Tsinghua University, and other leading institutions, with deep R&D and industrialization experience in biomass catalytic conversion, furan materials design, and chemical production.
With its self-developed continuous process and short-flow production scheme, the company has achieved large-scale production of key compounds and monomers such as 5-hydroxymethylfurfural (HMF) and furandicarboxylic acid (FDCA), sharply reducing product costs. It operates hundred-tonne and ten-thousand-tonne plants in Lishui, Zhejiang and Taixing, Jiangsu, and has industrialized these new feedstocks in high-barrier packaging, bio-based fibers, aramid flame-retardant fibers, and other sectors, leading the industry's commercialization.
[Funding] Series B, RMB 100-150 million
[Historical shareholders] Huaying Capital, Legend Capital, Matrix Partners China, 5Y Capital, CITIC Goldstone, Puhua Capital, Yuhang State Investment, and others
[Highlights]
1) The core team, from the Dalian Institute of Chemical Physics (CAS), Tsinghua University, and other top institutions, has long focused on biomass catalytic conversion and furan materials, with complete capabilities spanning molecular design, catalyst system construction, engineering scale-up, and stable production: one of the few bio-based materials teams combining scientific depth with industrial experience;
2) A world-first dual continuous process: compared with traditional batch-reactor HMF production, efficiency improves more than 5x and scalability by 80%. The company further cracked the catalyst-stability problem in FDCA oxidation (using non-precious-metal catalysts and a non-slurry reaction system to convert crude HMF directly and efficiently into polymer-grade FDCA), cutting overall costs by nearly 70%. It is one of the few companies globally to master end-to-end continuous HMF-to-FDCA mass production, significantly improving commercial viability and its long-term competitive moat;
3) Starting from HMF and centered on FDCA, the monomer with the largest market demand, the company has built a full-chain R&D and industrial system covering "biomass - monomers - derivatives - end materials," with multiple product lines commercially validated, laying a platform for expansion into more scenarios and industries;
4) Zhongke Guosheng has partnered across the industry chain to form a collaborative network covering high-barrier packaging, bio-based fibers, and other directions. In packaging, its bio-based high-barrier materials have entered volume validation with international beer and carbonated-drink brands; in fibers, PEF bio-based fibers are in volume validation with several leading apparel brands, with blend programs (with cotton, wool, lyocell, etc.) under performance validation and applications spanning apparel, functional fabrics, and home textiles;
5) Over 120 patent applications (covering HMF catalysis, FDCA purification, derivative synthesis, and other full-chain steps), with core patents granted, including "a preparation method for bis(5-hydroxymethylfurfuryl) ether" (non-precious-metal catalysts to reduce manufacturing costs) and "high-selectivity preparation of hydroxy fatty acids." The company has built a proprietary continuous process system with annual FDCA capacity of 400 tonnes and cumulative global deliveries exceeding 200 tonnes, achieving commercial deployment;
6) In 2025, Zhongke Guosheng's FDCA completed full EU REACH registration, the first Chinese company to achieve this compliance. It is also the first in China to complete regular new-chemical-substance environmental registration for FDCA and a product life-cycle assessment (LCA), laying a solid compliance foundation for global market expansion and downstream brand partnerships.
Zhongke Guosheng 2,5-furandicarboxylic acid (FDCA)
6. Hanhai Juneng
[Overview] Hanhai Juneng is China's first commercial controlled-fusion company pursuing the linear field-reversed configuration (FRC). Founded on December 30, 2022, it focuses on FRC devices and the accompanying plasma-source and diagnostic hardware and software, aiming to supply cost-effective, highly reliable core components and turnkey solutions for future commercial fusion power reactors. Along the way, its fusion R&D also yields neutron-source intermediate products for medical isotopes, BNCT, neutron imaging, nuclear waste treatment, and other applications, realizing near-, mid-, and long-term commercial value from controlled fusion.
[Funding] Pre-A round, RMB 200 million
[Historical shareholders] Huaying Capital, Houshi Capital, Qingzhou Capital, MiraclePlus, and others
[Highlights]
1) The linear field-reversed configuration (FRC) route differs from traditional tokamaks: it offers a simple structure and modular design, build costs of only 1/5 to 1/10 of a tokamak's, over 80% less magnet usage, 50% smaller device volume, and higher energy efficiency. The plasma's self-organizing behavior reduces energy losses; at the same magnetic field strength, fusion power output can reach 100 to 1,000 times that of a tokamak, and the approach is compatible with advanced fuels such as hydrogen-boron, improving fuel utilization.
2) The core technology is 100% originally developed by the domestic team, achieving full localization from device design and manufacturing to plasma ignition, breaking foreign technology monopolies and laying a foundation for China's controlled-fusion development;
3) In July 2025, the company achieved first plasma on China's first commercial linear FRC device (HHMAX-901), marking FRC technology's move from the laboratory toward application and a major breakthrough for commercializing this technology route in China;
4) A clear "three-step" strategy: build a second-generation device in 2027-2028 with a 10 MW power-generation unit, and a third-generation device in 2028-2030 with a 50 MW unit and a demonstration power plant;
5) A "commercialize as you go" model of parallel R&D and productization: neutrons produced during fusion research are used to develop applications in cancer therapy (BNCT), neutron imaging, and nuclear waste treatment, incubating spin-off products along the way and realizing partial commercial value early;
6) Chengdu, where the project is based, offers rich research resources, a complete supply chain, and policy support, attracting top domestic and international talent. The core team comes from the China Academy of Engineering Physics, controlled-fusion companies at home and abroad, and scientists and researchers from USTC, Tsinghua, and other Chinese universities and institutes, with deep industry experience and technical expertise.
Hanhai Juneng's HHMAX-901 machine achieving first plasma
Quality fundraising projects welcome
For founders and investors alike, an industry reshuffle also means new opportunities; every "crisis" is a key moment for high-potential projects to break through. 36Kr will continue to run events addressing the needs of companies and investors at different stages, injecting confidence into the capital markets.
If you are interested in any of the projects in this article and would like an introduction to the project team, or if you have a strong project seeking funding and more investors, please get in touch.
Scan the QR code to add our assistant and connect with projects
-
AI Cracks the Cross-Border Growth Puzzle: EWA Intelligence Helps Merchants Achieve Efficiency Equality
In 2025, AI agents moved from concept to commercial reality. "Making money with AI" became one of the hottest topics among founders and investors, and AI products that close the loop in vertical domains and directly create business value kept emerging.
Cross-border e-commerce, a contradictory mix of "red-ocean traffic" and "blue-ocean growth," is an obvious hot spot for such deployment. On one hand, social commerce platforms such as TikTok have become new engines of sales growth; on the other, small and mid-sized sellers struggle with low content-production efficiency, surging advertising costs, and influencer collaborations that fail to scale.
On the uncertain battlefield of cross-border e-commerce, EWA Technology is trying to offer exporters a bridge to growth built on deterministic AI and data capabilities.
Founded in 2023, EWA Technology launched its AI product, the "EWA Quantitative Agent" ("EWA" below), in November 2025, aiming to build a product-centric, self-iterating "marketing brain" that helps cross-border merchants improve the efficiency and conversion rate of content marketing. "What we ultimately deliver is not a tool, but sales and profit," says EWA Technology founder Jiang Anqi.
Jiang Anqi is a serial entrepreneur in digital marketing. He previously led the advertising platforms at Baidu and Sina and served as Senior Vice President at Yiche and Zhihu; in 2023 his team launched "EWA Assistant," an agent product for the legal sector. The company's core team comes from ByteDance, Sina Weibo, and other internet companies, with years of experience in digital marketing and AI applications.
1. Using Agents to Solve the Efficiency Problem of Overseas Marketing
Data from the General Administration of Customs show that in the first half of 2025, China's cross-border e-commerce imports and exports reached RMB 1.37 trillion, up 10.3% year on year, with social commerce led by TikTok Shop growing fastest.
Beneath the boom lie sharp pain points. EWA Technology's research found that 58% of TikTok Shop GMV comes from conversions via short videos and livestreams, yet the hit rate of organic content is generally below 5% and the order rate from influencer collaborations under 3%; merchants struggle to find high-converting content strategies.
Moreover, as advertising shifts from traditional creativity-driven work to data- and algorithm-driven digital marketing, companies must produce large volumes of creatives for A/B testing and then set media strategy based on the data. This produce-test-launch-iterate cycle imposes an enormous workload, and for smaller merchants the cost of trial and error is high. As more merchants pile in, platform ad CPMs keep rising and the threshold for ramping up a new store keeps climbing.
"These are the core pain points blocking more small and mid-sized merchants from entering cross-border commerce," Jiang says. "We founded EWA Technology to let AI run through the entire content pipeline and help smaller merchants convert traffic into sales efficiently and at lower cost."
Against existing AI video-generation tools, EWA Technology positions its product differently as "Vibe Marketing" (collaborative marketing). Jiang explains: "The concept of Vibe comes from Copilot and emphasizes human-machine collaboration. EWA builds a collaboration framework that automates marketing tasks, letting AI work alongside humans throughout content production. Whether it's SEO for shelf-based e-commerce or crafting viral short videos for social commerce, these are just different means to a GMV goal; AI can handle the orchestration, execution, and optimization of every action, while humans only issue instructions and review results."
Compared with general-purpose video generators, EWA's advantage is a deep understanding of e-commerce marketing scenarios. The goal of its in-house visual-understanding model is not merely to recognize "a person running" but "a shot that works as a hook showcasing the shoe's waterproofing."
Concretely, EWA builds a dedicated marketing agent for each product. After analyzing the product's core selling points and audience profile in the target market, the agent calls the in-house visual-understanding model to parse the merchant's raw video footage, labeling each clip's product attribute and function (e.g., "pain-point hook," "feature demo," "call to action"), and then, like an experienced editor, recombines and cuts the clips into high-quality shoppable videos in the target market's language and style.
According to Jiang, EWA is already being used in the company's own cross-border business, selling household goods and fitness products in Southeast Asia. "By serving our own products, we find the snags that come up in practice. On one hand we can solve business problems by improving the AI; on the other, that feedback makes the agent product better: a positive loop where business and technology reinforce each other."
Small cross-border sellers' profits are typically eroded by procurement, fulfillment, traffic, and content-operations costs. By EWA Technology's calculations, its high-quality supply-chain integration, smart editing, and smart ad-buying services can lift merchants' profitability by 25%-30%.
2. Closing the Full Content-Marketing Loop Toward the "One-Person CMO" Vision
EWA's beta officially launched in mid-November. Based on results validated through internal e-commerce operations and a small group of co-creation customers, it saves about 75% of editing labor, with generated videos converting at close to human-edited levels.
Next, EWA Technology plans to launch smart ad-buying next year, closing the data loop and enabling self-iteration. After shoppable videos run on different platforms, full-funnel data on views, engagement, and conversions will flow back in real time into EWA's marketing-objective data bank. Using its self-iteration capability, EWA analyzes the returned data by content format, time slot, target audience, and other key factors, deepening its understanding of the product and its users and continuously improving creative quality and media strategy.
"EWA is like a digital operations specialist that keeps learning and growing," Jiang says. "Early on, the human-agent collaboration split might be 50:50; as service time lengthens and data accumulates, the agent becomes more and more autonomous and human involvement gradually falls to 20% or even lower."
EWA Technology's current challenge is keeping up with rapid changes in overseas markets, including platform policies, user habits, and traffic rules. Its answer is an open "A-to-A" architecture designed to plug into new platform rules quickly and adapt to different marketing channels, so that it can respond to overseas market shifts faster than traditional martech firms or agency operators.
"What we learned from our previous AI venture is that most agent products aim at efficiency gains and cost savings, which are often hard to quantify clearly at the level of actual business operations. A good AI application must deliver end-to-end results with a success rate of 60%-70% or higher; otherwise 'cost cutting' means little. We want EWA to be measured by how much incremental value it genuinely creates for a business. That goal is clearer to companies, and it makes our customers and us a genuine community of shared interest," Jiang says.
Accordingly, for monetization EWA skipped the industry-standard SaaS subscription in favor of a results-based RaaS (Result as a Service) model tiered by customer size: for large customers and brands, an agency-operation model offering deep, transparent full-service plans billed as a share of GMV; for small and mid-sized merchants, a standardized SaaS product priced against the ramp-up performance of the promoted products; and for individual merchants and solo founders, a "1-to-N coaching model" covering the full chain from product and supply chain to operations training.
Jiang says EWA Technology's vision is "a one-person CMO, empowering goods worldwide": delivering, through AI, the systematic integrated-marketing capabilities once affordable only to big brands to every small business. "With the support of EWA Technology's products and services, anyone with execution ability can become an efficient one-person CMO team and sell good products to the world."
-
QuantumCTek: Director Ying Yong to Temporarily Assume the Chairman's Duties
36Kr has learned that QuantumCTek announced that, following the passing of its chairman and legal representative Lü Pin, the full board of directors has jointly designated director Ying Yong to temporarily perform the duties of chairman, legal representative, chair of the Strategy and Investment Committee, and member of the Remuneration and Appraisal Committee, until the company completes the by-election of directors and elects a new chairman and the relevant committee members.
-
Intent Prototyping: A Practical Guide To Building With Clarity (Part 2)
In Part 1 of this series, we explored the “lopsided horse” problem born from mockup-centric design and demonstrated how the seductive promise of vibe coding often leads to structural flaws. The main question remains:
How might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap?
In other words, we need a way to build prototypes that are both fast to create and founded on a clear, unambiguous blueprint.
The answer is a more disciplined process I call Intent Prototyping (kudos to Marco Kotrotsos, who coined Intent-Oriented Programming). This method embraces the power of AI-assisted coding but rejects ambiguity, putting the designer’s explicit intent at the very center of the process. It receives a holistic expression of intent (sketches for screen layouts, conceptual model description, boxes-and-arrows for user flows) and uses it to generate a live, testable prototype.
This method solves the concerns we’ve discussed in Part 1 in the best way possible:
- Unlike static mockups, the prototype is fully interactive and can be easily populated with a large amount of realistic data. This lets us test the system’s underlying logic as well as its surface.
- Unlike a vibe-coded prototype, it is built from a stable, unambiguous specification. This prevents the conceptual model failures and design debt that happen when things are unclear. The engineering team doesn’t need to reverse-engineer a black box or become “code archaeologists” to guess at the designer’s vision, as they receive not only a live prototype but also a clearly documented design intent behind it.
This combination makes the method especially suited for designing complex enterprise applications. It allows us to test the system’s most critical point of failure, its underlying structure, at a speed and flexibility that was previously impossible. Furthermore, the process is built for iteration. You can explore as many directions as you want simply by changing the intent and evolving the design based on what you learn from user testing.
My Workflow
To illustrate this process in action, let’s walk through a case study. It’s the very same example I’ve used to illustrate the vibe coding trap: a simple tool to track tests to validate product ideas. You can find the complete project, including all the source code and documentation files discussed below, in this GitHub repository.
Step 1: Expressing An Intent
Imagine we’ve already done proper research, and having mused on the defined problem, I begin to form a vague idea of what the solution might look like. I need to capture this idea immediately, so I quickly sketch it out:
In this example, I used Excalidraw, but the tool doesn’t really matter. Note that we deliberately keep it rough, as visual details are not something we need to focus on at this stage. And we are not going to be stuck here: we want to make a leap from this initial sketch directly to a live prototype that we can put in front of potential users. Polishing those sketches would not bring us any closer to achieving our goal.
What we need to move forward is to add to those sketches just enough details so that they may serve as a sufficient input for a junior frontend developer (or, in our case, an AI assistant). This requires explaining the following:
- Navigational paths (clicking here takes you to).
- Interaction details that can’t be shown in a static picture (e.g., non-scrollable areas, adaptive layout, drag-and-drop behavior).
- What parts might make sense to build as reusable components.
- Which components from the design system (I’m using Ant Design Library) should be used.
- Any other comments that help understand how this thing should work (while sketches illustrate how it should look).
Having added all those details, we end up with such an annotated sketch:
As you see, this sketch covers both the Visualization and Flow aspects. You may ask, what about the Conceptual Model? Without that part, the expression of our intent will not be complete. One way would be to add it somewhere in the margins of the sketch (for example, as a UML Class Diagram), and I would do so in the case of a more complex application, where the model cannot be simply derived from the UI. But in our case, we can save effort and ask an LLM to generate a comprehensive description of the conceptual model based on the sketch.
For tasks of this sort, the LLM of my choice is Gemini 2.5 Pro. What is important is that this is a multimodal model that can accept not only text but also images as input (GPT-5 and Claude-4 also fit that criteria). I use Google AI Studio, as it gives me enough control and visibility into what’s happening:
Note: All the prompts that I use here and below can be found in the Appendices. The prompts are not custom-tailored to any particular project; they are supposed to be reused as they are.
As a result, Gemini gives us a description and the following diagram:
The diagram might look technical, but I believe that a clear understanding of all objects, their attributes, and relationships between them is key to good design. That’s why I consider the Conceptual Model to be an essential part of expressing intent, along with the Flow and Visualization.
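To make this concrete, a conceptual model for a test-tracking tool like this one might come back as a Mermaid class diagram along the following lines. The entity and attribute names here are illustrative assumptions, not the actual output from the case study; the real model is generated from your own sketch:

```mermaid
classDiagram
    %% An Idea is a product idea we want to validate.
    class Idea {
        +string id
        +string createdAt
        +string updatedAt
        +string title
        +string description
    }
    %% A Test is one experiment run to validate an Idea.
    class Test {
        +string id
        +string createdAt
        +string updatedAt
        +string hypothesis
        +string status
    }
    Idea "1" -- "0..*" Test : validatedBy
```

Even at this size, the diagram already forces the questions that matter: can a Test exist without an Idea, and can one Idea have many Tests?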
As a result of this step, our intent is fully expressed in two files: `Sketch.png` and `Model.md`. This will be our durable source of truth.

Step 2: Preparing A Spec And A Plan
The purpose of this step is to create a comprehensive technical specification and a step-by-step plan. Most of the work here is done by AI; you just need to keep an eye on it.
I separate the Data Access Layer and the UI layer, and create specifications for them using two different prompts (see Appendices 2 and 3). The output of the first prompt (the Data Access Layer spec) serves as an input for the second one. Note that, as an additional input, we give the guidelines tailored for prototyping needs (see Appendices 8, 9, and 10). They are not specific to this project. The technical approach encoded in those guidelines is out of the scope of this article.
As a result, Gemini provides us with content for `DAL.md` and `UI.md`. Although in most cases the result is reliable, you might still want to scrutinize the output. You don’t need to be a real programmer to make sense of it, but some level of programming literacy helps. Even if you don’t have such skills, don’t get discouraged: if you don’t understand something, you always know who to ask. Do it in Google AI Studio before refreshing the context window. If you believe you’ve spotted a problem, let Gemini know, and it will either fix it or explain why the suggested approach is actually better.

It’s important to remember that, by their nature, LLMs are not deterministic and, to put it simply, can be forgetful about small details, especially details in sketches. Fortunately, you don’t have to be an expert to notice that the “Delete” button in the upper right corner of the sketch is not mentioned in the spec.
Don’t get me wrong: Gemini does a stellar job most of the time, but there are still times when it slips up. Just let it know about the problems you’ve spotted, and everything will be fixed.
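To give a feel for what the reviewed `DAL.md` ultimately describes, here is a minimal TypeScript sketch in the spirit of the conventions above (a `BaseEntity` interface and an id-keyed dictionary). The entity and action names are hypothetical; the real spec would be generated from your own sketch and model, and the store itself would use Zustand with the `persist` middleware rather than plain functions:

```typescript
// Hypothetical entity types in the style the DAL spec prescribes.
interface BaseEntity {
  id: string;
  createdAt: string; // ISO 8601
  updatedAt: string; // ISO 8601
}

interface Test extends BaseEntity {
  hypothesis: string;
  status: "planned" | "running" | "done";
}

// A dictionary keyed by id gives O(1) access, as the spec suggests.
type TestState = {
  tests: Record<string, Test>;
};

// Plain functions standing in for the store's actions.
function addTest(state: TestState, hypothesis: string): TestState {
  const now = new Date().toISOString();
  // Naive id generation, good enough for a sketch.
  const id = `test-${Object.keys(state.tests).length + 1}`;
  const test: Test = { id, createdAt: now, updatedAt: now, hypothesis, status: "planned" };
  return { tests: { ...state.tests, [id]: test } };
}

function removeTest(state: TestState, id: string): TestState {
  const rest = { ...state.tests };
  delete rest[id];
  return { tests: rest };
}
```

The point of having this shape written down before any UI exists is that the engineering team can later check every screen against it.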
Once we have `Sketch.png`, `Model.md`, `DAL.md`, and `UI.md`, and we have reviewed the specs, we can grab a coffee. We deserve it: our technical design documentation is complete. It will serve as a stable foundation for building the actual thing without deviating from our original intent, ensuring that all components fit together perfectly and all layers are stacked correctly.

One last thing we can do before moving on to the next step is to prepare a step-by-step plan. We split that plan into two parts: one for the Data Access Layer and another for the UI. You can find the prompts I use to create such plans in Appendices 4 and 5.
Step 3: Executing The Plan
To start building the actual thing, we need to switch to another category of AI tools. Up until this point, we have relied on Generative AI. It excels at creating new content (in our case, specifications and plans) based on a single prompt. I’m using Google Gemini 2.5 Pro in Google AI Studio, but other similar tools may also fit such one-off tasks: ChatGPT, Claude, Grok, and DeepSeek.
However, at this step, this wouldn’t be enough. Building a prototype based on specs and according to a plan requires an AI that can read context from multiple files, execute a sequence of tasks, and maintain coherence. A simple generative AI can’t do this. It would be like asking a person to build a house by only ever showing them a single brick. What we need is an agentic AI that can be given the full house blueprint and a project plan, and then get to work building the foundation, framing the walls, and adding the roof in the correct sequence.
My coding agent of choice is Google Gemini CLI, simply because Gemini 2.5 Pro serves me well, and I don’t think we need any middleman like Cursor or Windsurf (which would use Claude, Gemini, or GPT under the hood anyway). If I used Claude, my choice would be Claude Code, but since I’m sticking with Gemini, Gemini CLI it is. But if you prefer Cursor or Windsurf, I believe you can apply the same process with your favourite tool.
Before tasking the agent, we need to create a basic template for our React application. I won’t go into this here. You can find plenty of tutorials on how to scaffold an empty React project using Vite.
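If you haven't done this before, scaffolding the empty project typically takes a handful of commands. The project name `my-prototype` is just a placeholder:

```shell
# Scaffold an empty React + TypeScript project with Vite
npm create vite@latest my-prototype -- --template react-ts
cd my-prototype
npm install   # install dependencies
npm run dev   # start the local dev server
```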
Then we put all our files into that project:
Once the basic template with all our files is ready, we open Terminal, go to the folder where our project resides, and type “gemini”:
And we send the prompt to build the Data Access Layer (see Appendix 6). That prompt implies step-by-step execution, so upon completion of each step, I send the following:
“Thank you! Now, please move to the next task. Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec. After each task, stop so that I can test it. Don’t move to the next task before I tell you to do so.”

As the last task in the plan, the agent builds a special page where we can manually test all the capabilities of our Data Access Layer. It may look like this:
It doesn’t look fancy, to say the least, but it allows us to ensure that the Data Access Layer works correctly before we proceed with building the final UI.
And finally, we clear the Gemini CLI context window to give it more headspace and send the prompt to build the UI (see Appendix 7). This prompt also implies step-by-step execution. Upon completion of each step, we test how it works and how it looks, following the “Manual Testing Plan” from `UI-plan.md`. I have to say that although the sketch has been uploaded to the model context and Gemini generally tries to follow it, attention to visual detail is not one of its strengths (yet). Usually, a few additional nudges are needed at each step to improve the look and feel.

Once I’m happy with the result of a step, I ask Gemini to move on:
“Thank you! Now, please move to the next task. Make sure you build the UI according to the sketch; this is very important. Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch. After each task, stop so that I can test it. Don’t move to the next task before I tell you to do so.”

Before long, the result looks like this, and in every detail it works exactly as we intended:
The prototype is up and running and looking nice. Does that mean our work is done? Certainly not: the most fascinating part is just beginning.
Step 4: Learning And Iterating
It’s time to put the prototype in front of potential users and learn more about whether this solution relieves their pain or not.
And as soon as we learn something new, we iterate. Based on that new input, we adjust or extend the sketches and the conceptual model; we then update the specifications, create plans to make changes according to the new specifications, and execute those plans. In other words, for every iteration, we repeat the steps I’ve just walked you through.
Is This Workflow Too Heavy?
This four-step workflow may create an impression of a somewhat heavy process that requires too much thinking upfront and doesn’t really facilitate creativity. But before jumping to that conclusion, consider the following:
- In practice, only the first step (expressing the intent) and the last step (learning from users) require real effort. AI does most of the work in between; you just need to keep an eye on it.
- Individual iterations don’t need to be big. You can start with a Walking Skeleton: the bare minimum implementation of the thing you have in mind, and add more substance in subsequent iterations. You are welcome to change your mind about the overall direction in between iterations.
- And last but not least, maybe the idea of “think before you do” is not something you need to run away from. A clear and unambiguous statement of intent can prevent many unnecessary mistakes and save a lot of effort down the road.
Intent Prototyping Vs. Other Methods
There is no method that fits all situations, and Intent Prototyping is not an exception. Like any specialized tool, it has a specific purpose. The most effective teams are not those who master a single method, but those who understand which approach to use to mitigate the most significant risk at each stage. The table below gives you a way to make this choice clearer. It puts Intent Prototyping next to other common methods and tools and explains each one in terms of the primary goal it helps achieve and the specific risks it is best suited to mitigate.
| Method/Tool | Goal | Risks it is best suited to mitigate | Examples | Why |
| --- | --- | --- | --- | --- |
| Intent Prototyping | To rapidly iterate on the fundamental architecture of a data-heavy application with a complex conceptual model, sophisticated business logic, and non-linear user flows. | Building a system with a flawed or incoherent conceptual model, leading to critical bugs and costly refactoring. | A CRM (Customer Relationship Management system); a resource management tool; a no-code integration platform (admin’s UI). | It enforces conceptual clarity. This not only de-risks the core structure but also produces a clear, documented blueprint that serves as a superior specification for the engineering handoff. |
| Vibe Coding (conversational) | To rapidly explore interactive ideas through improvisation. | Losing momentum because of analysis paralysis. | An interactive data table with live sorting/filtering; a novel navigation concept; a proof-of-concept for a single, complex component. | It has the smallest loop between an idea conveyed in natural language and an interactive outcome. |
| Axure | To test complicated conditional logic within a specific user journey, without having to worry about how the whole system works. | Designing flows that break when users don’t follow the “happy path.” | A multi-step e-commerce checkout; a software configuration wizard; a dynamic form with dependent fields. | It’s made to create complex if-then logic and manage variables visually. This lets you test complicated paths and edge cases in a user journey without writing any code. |
| Figma | To make sure that the user interface looks good, aligns with the brand, and has a clear information architecture. | Making a product that looks bad, doesn’t fit with the brand, or has a layout that is hard to understand. | A marketing landing page; a user onboarding flow; presenting a new visual identity. | It excels at high-fidelity visual design and provides simple, fast tools for linking static screens. |
| ProtoPie, Framer | To make high-fidelity micro-interactions feel just right. | Shipping an application that feels cumbersome and unpleasant to use because of poorly executed interactions. | A custom pull-to-refresh animation; a fluid drag-and-drop interface; an animated chart or data visualization. | These tools let you manipulate animation timelines, physics, and device sensor inputs in great detail. Designers can carefully work on and test the small things that make an interface feel really polished and fun to use. |
| Low-code / no-code tools (e.g., Bubble, Retool) | To create a working, data-driven app as quickly as possible. | The application will never be built because traditional development is too expensive. | An internal inventory tracker; a customer support dashboard; a simple directory website. | They put a UI builder, a database, and hosting all in one place. The goal is not merely to make a prototype of an idea, but to make and release an actual, working product. This is the last step for many internal tools or MVPs. |

The key takeaway is that each method is a specialized tool for mitigating a specific type of risk. For example, Figma de-risks the visual presentation. ProtoPie de-risks the feel of an interaction. Intent Prototyping is in a unique position to tackle the most foundational risk in complex applications: building on a flawed or incoherent conceptual model.
Bringing It All Together
The era of the “lopsided horse” design, sleek on the surface but structurally unsound, is a direct result of the trade-off between fidelity and flexibility. This trade-off has led to a process filled with redundant effort and misplaced focus. Intent Prototyping, powered by modern AI, eliminates that conflict. It’s not just a shortcut to building faster — it’s a fundamental shift in how we design. By putting a clear, unambiguous intent at the heart of the process, it lets us get rid of the redundant work and focus on architecting a sound and robust system.
There are three major benefits to this renewed focus. First, by going straight to live, interactive prototypes, we shift our validation efforts from the surface to the deep, testing the system’s actual logic with users from day one. Second, the very act of documenting the design intent makes us clear about our ideas, ensuring that we fully understand the system’s underlying logic. Finally, this documented intent becomes a durable source of truth, eliminating the ambiguous handoffs and the redundant, error-prone work of having engineers reverse-engineer a designer’s vision from a black box.
Ultimately, Intent Prototyping changes the object of our work. It allows us to move beyond creating pictures of a product and empowers us to become architects of blueprints for a system. With the help of AI, we can finally make the live prototype the primary canvas for ideation, not just a high-effort afterthought.
Appendices
You can find the full Intent Prototyping Starter Kit, which includes all those prompts and guidelines, as well as the example from this article and a minimal boilerplate project, in this GitHub repository.
Appendix 1: Sketch to UML Class Diagram+You are an expert Senior Software Architect specializing in Domain-Driven Design. You are tasked with defining a conceptual model for an app based on information from a UI sketch. ## Workflow Follow these steps precisely: **Step 1:** Analyze the sketch carefully. There should be no ambiguity about what we are building. **Step 2:** Generate the conceptual model description in the Mermaid format using a UML class diagram. ## Ground Rules - Every entity must have the following attributes: -id(string) -createdAt(string, ISO 8601 format) -updatedAt(string, ISO 8601 format) - Include all attributes shown in the UI: If a piece of data is visually represented as a field for an entity, include it in the model, even if it's calculated from other attributes. - Do not add any speculative entities, attributes, or relationships ("just in case"). The model should serve the current sketch's requirements only. - Pay special attention to cardinality definitions (e.g., if a relationship is optional on both sides, it cannot be"1" -- "0..*", it must be"0..1" -- "0..*"). - Use only valid syntax in the Mermaid diagram. - Do not include enumerations in the Mermaid diagram. - Add comments explaining the purpose of every entity, attribute, and relationship, and their expected behavior (not as a part of the diagram, in the Markdown file). ## Naming Conventions - Names should reveal intent and purpose. - Use PascalCase for entity names. - Use camelCase for attributes and relationships. - Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError). ## Final Instructions - **No Assumptions: Base every detail on visual evidence in the sketch, not on common design patterns. - **Double-Check: After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. 
- **Do not add redundant empty lines between items.** Your final output should be the complete, raw markdown content for `Model.md`.

**Appendix 2: Sketch to DAL Spec**

You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with creating a comprehensive technical specification for the development team in a structured markdown document, based on a UI sketch and a conceptual model description.

## Workflow
Follow these steps precisely:
**Step 1:** Analyze the documentation carefully:
- `Model.md`: the conceptual model
- `Sketch.png`: the UI sketch
There should be no ambiguity about what we are building.
**Step 2:** Check out the guidelines:
- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices
- `Zustand-guidelines.md`: Zustand Best Practices
**Step 3:** Create a Markdown specification for the stores and entity-specific hooks that implement all the logic and provide all required operations.

---

## Markdown Output Structure
Use this template for the entire document.

```markdown
# Data Access Layer Specification
This document outlines the specification for the data access layer of the application, following the principles defined in `docs/guidelines/Zustand-guidelines.md`.
## 1. Type Definitions
Location: `src/types/entities.ts`
### 1.1. `BaseEntity`
A shared interface that all entities should extend.
[TypeScript interface definition]
### 1.2. `[Entity Name]`
The interface for the [Entity Name] entity.
[TypeScript interface definition]
## 2. Zustand Stores
### 2.1. Store for `[Entity Name]`
**Location:** `src/stores/[Entity Name (plural)].ts`
The Zustand store will manage the state of all [Entity Name] items.
**Store State (`[Entity Name]State`):**
[TypeScript interface definition]
**Store Implementation (`use[Entity Name]Store`):**
- The store will be created using `create<[Entity Name]State>()(...)`.
- It will use the `persist` middleware from `zustand/middleware` to save state to `localStorage`. The persistence key will be `[entity-storage-key]`.
- `[Entity Name (plural, camelCase)]` will be a dictionary (`Record<string, [Entity]>`) for O(1) access.
**Actions:**
- **`add[Entity Name]`**: [Define the operation behavior based on entity requirements]
- **`update[Entity Name]`**: [Define the operation behavior based on entity requirements]
- **`remove[Entity Name]`**: [Define the operation behavior based on entity requirements]
- **`doSomethingElseWith[Entity Name]`**: [Define the operation behavior based on entity requirements]
## 3. Custom Hooks
### 3.1. `use[Entity Name (plural)]`
**Location:** `src/hooks/use[Entity Name (plural)].ts`
The hook will be the primary interface for UI components to interact with [Entity Name] data.
**Hook Return Value:**
[TypeScript interface definition]
**Hook Implementation:**
[List all properties and methods returned by this hook, and briefly explain the logic behind them, including data transformations and memoization. Do not write the actual code here.]
```

## Final Instructions
- **No Assumptions:** Base every detail in the specification on the conceptual model or visual evidence in the sketch, not on common design patterns.
- **Double-Check:** After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification.
- **Do not add redundant empty lines between items.** Your final output should be the complete, raw markdown content for `DAL.md`.

**Appendix 3: Sketch to UI Spec**

You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with creating a comprehensive technical specification by translating a UI sketch into a structured markdown document for the development team.
## Workflow
Follow these steps precisely:
**Step 1:** Analyze the documentation carefully:
- `Sketch.png`: the UI sketch
  - Note that red lines, red arrows, and red text within the sketch are annotations for you and should not be part of the final UI design. They provide hints and clarification. Never translate them to UI elements directly.
- `Model.md`: the conceptual model
- `DAL.md`: the Data Access Layer spec
There should be no ambiguity about what we are building.
**Step 2:** Check out the guidelines:
- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices
**Step 3:** Generate the complete markdown content for a new file, `UI.md`.

---

## Markdown Output Structure
Use this template for the entire document.

```markdown
# UI Layer Specification
This document specifies the UI layer of the application, breaking it down into pages and reusable components based on the provided sketches. All components will adhere to Ant Design's principles and utilize the data access patterns defined in `docs/guidelines/Zustand-guidelines.md`.
## 1. High-Level Structure
The application is a single-page application (SPA). It will be composed of a main layout, one primary page, and several reusable components.
### 1.1. `App` Component
The root component that sets up routing and global providers.
- **Location**: `src/App.tsx`
- **Purpose**: To provide global context, including Ant Design's `ConfigProvider` and `App` contexts for message notifications, and to render the main page.
- **Composition**:
  - Wraps the application with `ConfigProvider` and `App as AntApp` from 'antd' to enable global message notifications as per `simple-ice/antd-messages.mdc`.
  - Renders `[Page Name]`.
## 2. Pages
### 2.1. `[Page Name]`
- **Location:** `src/pages/PageName.tsx`
- **Purpose:** [Briefly describe the main goal and function of this page]
- **Data Access:** [List the specific hooks and functions this component uses to fetch or manage its data]
- **Internal State:** [Describe any state managed internally by this page using `useState`]
- **Composition:** [Briefly describe the content of this page]
- **User Interactions:** [Describe how the user interacts with this page]
- **Logic:** [If applicable, provide additional comments on how this page should work]
## 3. Components
### 3.1. `[Component Name]`
- **Location:** `src/components/ComponentName.tsx`
- **Purpose:** [Explain what this component does and where it's used]
- **Props:** [TypeScript interface definition for the component's props. Props should be minimal. Avoid prop drilling by using hooks for data access.]
- **Data Access:** [List the specific hooks and functions this component uses to fetch or manage its data]
- **Internal State:** [Describe any state managed internally by this component using `useState`]
- **Composition:** [Briefly describe the content of this component]
- **User Interactions:** [Describe how the user interacts with the component]
- **Logic:** [If applicable, provide additional comments on how this component should work]
```

## Final Instructions
- **No Assumptions:** Base every detail on the visual evidence in the sketch, not on common design patterns.
- **Double-Check:** After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification.
- **Do not add redundant empty lines between items.** Your final output should be the complete, raw markdown content for `UI.md`.

**Appendix 4: DAL Spec to Plan**

You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand.
You are tasked with creating a plan to build a Data Access Layer for an application based on a spec.

## Workflow
Follow these steps precisely:
**Step 1:** Analyze the documentation carefully:
- `DAL.md`: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter.
There should be no ambiguity about what we are building.
**Step 2:** Check out the guidelines:
- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices
- `Zustand-guidelines.md`: Zustand Best Practices
**Step 3:** Create a step-by-step plan to build a Data Access Layer according to the spec. Each task should:
- Focus on one concern
- Be reasonably small
- Have a clear start + end
- Contain clearly defined Objectives and Acceptance Criteria
The last step of the plan should include creating a page to test all the capabilities of our Data Access Layer, and making it the start page of this application, so that I can manually check if it works properly.
I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to review results in between.

## Final Instructions
- Note that we are not starting from scratch; the basic template has already been created using Vite.
- Do not add redundant empty lines between items. Your final output should be the complete, raw markdown content for `DAL-plan.md`.

**Appendix 5: UI Spec to Plan**

You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with creating a plan to build a UI layer for an application based on a spec and a sketch.

## Workflow
Follow these steps precisely:
**Step 1:** Analyze the documentation carefully:
- `UI.md`: The full technical specification for the UI layer of the application. Follow it carefully and to the letter.
- `Sketch.png`: Contains important information about the layout and style, complements the UI Layer Specification. The final UI must be as close to this sketch as possible.
There should be no ambiguity about what we are building.
**Step 2:** Check out the guidelines:
- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices
**Step 3:** Create a step-by-step plan to build a UI layer according to the spec and the sketch. Each task must:
- Focus on one concern.
- Be reasonably small.
- Have a clear start + end.
- Result in a verifiable increment of the application. Each increment should be manually testable to allow for functional review and approval before proceeding.
- Contain clearly defined Objectives, Acceptance Criteria, and Manual Testing Plan.
I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to test in between.

## Final Instructions
- Note that we are not starting from scratch: the basic template has already been created using Vite, and the Data Access Layer has been built successfully.
- For every task, describe how components should be integrated for verification. You must use the provided hooks to connect to the live Zustand store data; do not use mock data (note that the Data Access Layer has already been built successfully).
- The Manual Testing Plan should read like a user guide. It must only contain actions a user can perform in the browser and must never reference any code files or programming tasks.
- Do not add redundant empty lines between items. Your final output should be the complete, raw markdown content for `UI-plan.md`.

**Appendix 6: DAL Plan to Code**

You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with building a Data Access Layer for an application based on a spec.

## Workflow
Follow these steps precisely:
**Step 1:** Analyze the documentation carefully:
- @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter.
There should be no ambiguity about what we are building.
**Step 2:** Check out the guidelines:
- @docs/guidelines/TS-guidelines.md: TypeScript Best Practices
- @docs/guidelines/React-guidelines.md: React Best Practices
- @docs/guidelines/Zustand-guidelines.md: Zustand Best Practices
**Step 3:** Read the plan:
- @docs/plans/DAL-plan.md: The step-by-step plan to build the Data Access Layer of the application.
**Step 4:** Build a Data Access Layer for this application according to the spec and following the plan.
- Complete one task from the plan at a time.
- After each task, stop, so that I can test it. Don’t move to the next task before I tell you to do so.
- Do not do anything else. At this point, we are focused on building the Data Access Layer.

## Final Instructions
- Do not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch.
- Do not start the development server, I'll do it myself.

**Appendix 7: UI Plan to Code**

You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with building a UI layer for an application based on a spec and a sketch.

## Workflow
Follow these steps precisely:
**Step 1:** Analyze the documentation carefully:
- @docs/specs/UI.md: The full technical specification for the UI layer of the application. Follow it carefully and to the letter.
- @docs/intent/Sketch.png: Contains important information about the layout and style, complements the UI Layer Specification. The final UI must be as close to this sketch as possible.
- @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. That layer is already ready. Use this spec to understand how to work with it.
There should be no ambiguity about what we are building.
**Step 2:** Check out the guidelines:
- @docs/guidelines/TS-guidelines.md: TypeScript Best Practices
- @docs/guidelines/React-guidelines.md: React Best Practices
**Step 3:** Read the plan:
- @docs/plans/UI-plan.md: The step-by-step plan to build the UI layer of the application.
**Step 4:** Build a UI layer for this application according to the spec and the sketch, following the step-by-step plan:
- Complete one task from the plan at a time.
- Make sure you build the UI according to the sketch; this is very important.
- After each task, stop, so that I can test it. Don’t move to the next task before I tell you to do so.

## Final Instructions
- Do not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch.
- Follow Ant Design's default styles and components.
- Do not touch the data access layer: it's ready and it's perfect.
- Do not start the development server, I'll do it myself.

**Appendix 8: TS-guidelines.md**

# Guidelines: TypeScript Best Practices

## Type System & Type Safety
- Use TypeScript for all code and enable strict mode.
- Ensure complete type safety throughout stores, hooks, and component interfaces.
- Prefer interfaces over types for object definitions; use types for unions, intersections, and mapped types.
- Entity interfaces should extend common patterns while maintaining their specific properties.
- Use TypeScript type guards in filtering operations for relationship safety.
- Avoid the 'any' type; prefer 'unknown' when necessary.
- Use generics to create reusable components and functions.
- Utilize TypeScript's features to enforce type safety.
- Use type-only imports (`import type { MyType } from './types'`) when importing types, because verbatimModuleSyntax is enabled.
- Avoid enums; use maps instead.

## Naming Conventions
- Names should reveal intent and purpose.
- Use PascalCase for component names and types/interfaces.
- Prefix interfaces for React props with 'Props' (e.g., ButtonProps).
- Use camelCase for variables and functions.
- Use UPPER_CASE for constants.
- Use lowercase with dashes for directories, and PascalCase for files with components (e.g., components/auth-wizard/AuthForm.tsx).
- Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
- Favor named exports for components.

## Code Structure & Patterns
- Write concise, technical TypeScript code with accurate examples.
- Use functional and declarative programming patterns; avoid classes.
- Prefer iteration and modularization over code duplication.
- Use the "function" keyword for pure functions.
- Use curly braces for all conditionals for consistency and clarity.
- Structure files appropriately based on their purpose.
- Keep related code together and encapsulate implementation details.

## Performance & Error Handling
- Use immutable and efficient data structures and algorithms.
- Create custom error types for domain-specific errors.
- Use try-catch blocks with typed catch clauses.
- Handle Promise rejections and async errors properly.
- Log errors appropriately and handle edge cases gracefully.

## Project Organization
- Place shared types in a types directory.
- Use barrel exports (index.ts) for organizing exports.
- Structure files and directories based on their purpose.

## Other Rules
- Use comments to explain complex logic or non-obvious decisions.
- Follow the single responsibility principle: each function should do exactly one thing.
- Follow the DRY (Don't Repeat Yourself) principle.
- Do not implement placeholder functions, empty methods, or "just in case" logic. Code should serve the current specification's requirements only.
- Use 2 spaces for indentation (no tabs).

**Appendix 9: React-guidelines.md**

# Guidelines: React Best Practices

## Component Structure
- Use functional components over class components
- Keep components small and focused
- Extract reusable logic into custom hooks
- Use composition over inheritance
- Implement proper prop types with TypeScript
- Structure React files: exported component, subcomponents, helpers, static content, types
- Use declarative TSX for React components
- Ensure that UI components use custom hooks for data fetching and operations rather than receive data via props, except for the simplest components

## React Patterns
- Utilize useState and useEffect hooks for state and side effects
- Use React.memo for performance optimization when needed
- Utilize React.lazy and Suspense for code-splitting
- Implement error boundaries for robust error handling
- Keep styles close to components

## React Performance
- Avoid unnecessary re-renders
- Lazy load components and images when possible
- Implement efficient state management
- Optimize rendering strategies
- Optimize network requests
- Employ memoization techniques (e.g., React.memo, useMemo, useCallback)

## React Project Structure
/src
- /components - UI components (every component in a separate file)
- /hooks - public-facing custom hooks (every hook in a separate file)
- /providers - React context providers (every provider in a separate file)
- /pages - page components (every page in a separate file)
- /stores - entity-specific Zustand stores (every store in a separate file)
- /styles - global styles (if needed)
- /types - shared TypeScript types and interfaces

**Appendix 10: Zustand-guidelines.md**

# Guidelines: Zustand Best Practices

## Core Principles
- **Implement a data layer** for this React application following this specification carefully and to the letter.
- **Complete separation of concerns**: All data operations should be accessible in UI components through simple and clean entity-specific hooks, ensuring state management logic is fully separated from UI logic.
- **Shared state architecture**: Different UI components should work with the same shared state, despite using entity-specific hooks separately.

## Technology Stack
- **State management**: Use Zustand for state management with automatic localStorage persistence via the `persist` middleware.

## Store Architecture
- **Base entity:** Implement a `BaseEntity` interface with common properties that all entities extend:

```typescript
export interface BaseEntity {
  id: string;
  createdAt: string; // ISO 8601 format
  updatedAt: string; // ISO 8601 format
}
```

- **Entity-specific stores**: Create separate Zustand stores for each entity type.
- **Dictionary-based storage**: Use dictionary/map structures (`Record<string, Entity>`) rather than arrays for O(1) access by ID.
- **Handle relationships**: Implement cross-entity relationships (like cascade deletes) within the stores where appropriate.

## Hook Layer
The hook layer is the exclusive interface between UI components and the Zustand stores. It is designed to be simple, predictable, and follow a consistent pattern across all entities.

### Core Principles
1. **One Hook Per Entity**: There will be a single, comprehensive custom hook for each entity (e.g., `useBlogPosts`, `useCategories`). This hook is the sole entry point for all data and operations related to that entity. Separate hooks for single-item access will not be created.
2. **Return reactive data, not getter functions**: To prevent stale data, hooks must return the state itself, not a function that retrieves state. Parameterize hooks to accept filters and return the derived data directly. A component calling a getter function will not update when the underlying data changes.
3. **Expose Dictionaries for O(1) Access**: To provide simple and direct access to data, every hook will return a dictionary (`Record<string, Entity>`) of the relevant items.

### The Standard Hook Pattern
Every entity hook will follow this implementation pattern:
1. **Subscribe** to the entire dictionary of entities from the corresponding Zustand store. This ensures the hook is reactive to any change in the data.
2. **Filter** the data based on the parameters passed into the hook. This logic will be memoized with `useMemo` for efficiency. If no parameters are provided, the hook will operate on the entire dataset.
3. **Return a Consistent Shape**: The hook will always return an object containing:
   - A **filtered and sorted array** (e.g., `blogPosts`) for rendering lists.
   - A **filtered dictionary** (e.g., `blogPostsDict`) for convenient O(1) lookup within the component.
   - All necessary **action functions** (add, update, remove) and **relationship operations**.
   - All necessary **helper functions** and **derived data objects**. Helper functions are suitable for pure, stateless logic (e.g., calculators). Derived data objects are memoized values that provide aggregated or summarized information from the state (e.g., an object containing status counts). They must be derived directly from the reactive state to ensure they update automatically when the underlying data changes.

## API Design Standards
- **Object Parameters**: Use object parameters instead of multiple direct parameters for better extensibility:

```typescript
// ✅ Preferred
add({ title, categoryIds })

// ❌ Avoid
add(title, categoryIds)
```

- **Internal Methods**: Use underscore-prefixed methods for cross-store operations to maintain clean separation.

## State Validation Standards
- **Existence checks**: All `update` and `remove` operations should validate entity existence before proceeding.
- **Relationship validation**: Verify both entities exist before establishing relationships between them.
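The filter-and-derive step at the heart of the standard hook pattern above is a pure function, which makes it easy to reason about outside React. Below is a minimal sketch assuming a hypothetical `BlogPost` entity; in the real hook, this derivation would sit inside a `useMemo` fed by the store subscription, with the action functions spread into the same return object.

```typescript
interface BaseEntity {
  id: string;
  createdAt: string; // ISO 8601 format
  updatedAt: string; // ISO 8601 format
}

interface BlogPost extends BaseEntity {
  title: string;
  categoryIds: string[];
}

interface DerivedBlogPosts {
  blogPosts: BlogPost[]; // filtered, sorted array for rendering lists
  blogPostsDict: Record<string, BlogPost>; // filtered dictionary for O(1) lookup
}

// Pure derivation: filter by an optional categoryId, sort by creation date.
// Inside the hook, this body would live in a useMemo keyed on [dict, categoryId].
function deriveBlogPosts(
  dict: Record<string, BlogPost>,
  filters: { categoryId?: string } = {},
): DerivedBlogPosts {
  const blogPosts = Object.values(dict)
    .filter((post) =>
      filters.categoryId ? post.categoryIds.includes(filters.categoryId) : true,
    )
    .sort((a, b) => a.createdAt.localeCompare(b.createdAt));
  const blogPostsDict: Record<string, BlogPost> = Object.fromEntries(
    blogPosts.map((post) => [post.id, post]),
  );
  return { blogPosts, blogPostsDict };
}
```

Because the derivation never mutates its input, the same dictionary reference can be shared by several hooks with different filters, which is what makes the "shared state, entity-specific hooks" architecture work.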
## Error Handling Patterns
- **Operation failures**: Define behavior when operations fail (e.g., updating non-existent entities).
- **Graceful degradation**: How to handle missing related entities in helper functions.

## Other Standards
- **Secure ID generation**: Use `crypto.randomUUID()` for entity ID generation instead of custom implementations for better uniqueness guarantees and security.
- **Return type consistency**: `add` operations return generated IDs for component workflows requiring immediate entity access, while `update` and `remove` operations return `void` to maintain clean modification APIs.
Shades Of October (2025 Wallpapers Edition)
As September comes to a close and October takes over, we are in the midst of a time of transition. The air in the morning feels crisper, the leaves are changing colors, and winding down with a warm cup of tea regains its almost-forgotten appeal after a busy summer. When we look closely, October is full of little moments that have the power to inspire, and whatever your secret to finding new inspiration might be, our monthly wallpapers series is bound to give you a little inspiration boost, too.
For this October edition, artists and designers from across the globe once again challenged their creative skills and designed wallpapers to spark your imagination. You'll find them compiled below, along with a selection of timeless October treasures from our wallpapers archives that are just too good to gather dust.
A huge thank you to everyone who shared their designs with us this month — this post wouldn’t exist without your creativity and kind support! Happy October!
- You can click on every image to see a larger preview.
- We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experiences through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us but rather designed from scratch by the artists themselves.
- Submit your wallpaper design! 👩🎨
Feeling inspired? We are always looking for creative talent and would love to feature your desktop wallpaper in one of our upcoming posts. Join in ↬
Midnight Mischief
Designed by Libra Fire from Serbia.
- preview
- with calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
- without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
AI
Designed by Ricardo Gimenes from Spain.
- preview
- with calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
- without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
Glowing Pumpkin Lanterns
“I was inspired by the classic orange and purple colors of October and Halloween, and wanted to combine those two themes to create a fun pumpkin lantern background.” — Designed by Melissa Bostjancic from New Jersey, United States.
- preview
- with calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
- without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
Halloween 2040
Designed by Ricardo Gimenes from Spain.
- preview
- with calendar: 640×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
- without calendar: 640×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
When The Mind Opens
“In October, we observe World Mental Health Day. The open window in the head symbolizes light and fresh thoughts, the plant represents quiet inner growth and resilience, and the bird brings freedom and connection with the world. Together, they create an image of a mind that breathes, grows, and remains open to new beginnings.” — Designed by Ginger IT Solutions from Serbia.
- preview
- with calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
- without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
Enter The Factory
“I took this photo while visiting an old factory. The red light was astonishing.” — Designed by Philippe Brouard from France.
- preview
- with calendar: 1024×768, 1366×768, 1600×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 2560×1600, 2880×1800, 3840×2160
- without calendar: 1024×768, 1366×768, 1600×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 2560×1600, 2880×1800, 3840×2160
The Crow And The Ghosts
“If my heart were a season, it would be autumn.” — Designed by Lívia Lénárt from Hungary.
- preview
- without calendar: 320×480, 1024×1024, 1280×1024, 1600×1200, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
The Night Drive
Designed by Vlad Gerasimov from Georgia.
- preview
- without calendar: 800×480, 800×600, 1024×600, 1024×768, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1440×900, 1440×960, 1400×1050, 1600×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 2560×1600, 2880×1800, 3072×1920, 3840×2160, 5120×2880
Spooky Town
Designed by Xenia Latii from Germany.
- preview
- without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
Bird Migration Portal
“When I was young, I had a bird’s nest not so far from my room window. I watched the birds almost every day because those swallows always left their nests in October. As a child, I dreamt that they all flew together to a nicer place, where they were not so cold.” — Designed by Eline Claeys from Belgium.
- preview
- without calendar: 1280×1024, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
Hanlu
“The term ‘Hanlu’ literally translates as ‘Cold Dew.’ The cold dew brings brisk mornings and evenings. Eventually the briskness will turn cold, as winter is coming soon. And chrysanthemum is the iconic flower of Cold Dew.” — Designed by Hong, ZI-Qing from Taiwan.
- preview
- without calendar: 640×480, 800×600, 1024×768, 1080×1920, 1152×864, 1280×720, 1280×960, 1366×768, 1400×1050, 1600×1200, 1920×1080, 1920×1440, 2560×1440
Autumn’s Splendor
“The transition to autumn brings forth a rich visual tapestry of warm colors and falling leaves, making it a natural choice for a wallpaper theme.” — Designed by Farhan Srambiyan from India.
- preview
- without calendar: 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
Ghostbusters
Designed by Ricardo Gimenes from Spain.
- preview
- without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
Hello Autumn
“Did you know that squirrels don’t just eat nuts? They really like to eat fruit, too. Since apples are the seasonal fruit of October, I decided to combine both things into a beautiful image.” — Designed by Erin Troch from Belgium.
- preview
- without calendar: 320×480, 800×480, 1024×1024, 1280×800, 1366×768, 1600×1200, 1680×1050, 1680×1200, 1920×1440, 2560×1440
Discovering The Universe
“Autumn is the best moment for discovering the universe. I am looking for a new galaxy or maybe… a UFO!” — Designed by Verónica Valenzuela from Spain.
- preview
- without calendar: 800×480, 1024×768, 1152×864, 1280×800, 1280×960, 1440×900, 1680×1200, 1920×1080, 2560×1440
The Return Of The Living Dead
Designed by Ricardo Gimenes from Spain.
- preview
- without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
Goddess Makosh
“At the end of the kolodar, as everything begins to ripen, the village sets out to harvesting. Together with the farmers goes Makosh, the Goddess of fields and crops, ensuring a prosperous harvest. What she gave her life and health all year round is now mature and rich, thus, as a sign of gratitude, the girls bring her bread and wine. The beautiful game of the goddess makes the hard harvest easier, while the song of the farmer permeates the field.” — Designed by PopArt Studio from Serbia.
- preview
- without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
Strange October Journey
“October makes the leaves fall to cover the land with lovely auburn colors and brings out all types of weird with them.” — Designed by Mi Ni Studio from Serbia.
- preview
- without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
Autumn Deer
Designed by Amy Hamilton from Canada.
- preview
- without calendar: 1024×768, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1920×1080, 1920×1200, 2048×1536, 2560×1440, 2880×1800
Transitions
“To me, October is a transitional month. We gradually slide from summer to autumn. That’s why I chose to use a lot of gradients. I also wanted to work with simple shapes, because I think of October as the ‘back to nature/back to basics month’.” — Designed by Jelle Denturck from Belgium.
- preview
- without calendar: 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 2880×1800
Happy Fall!
“Fall is my favorite season!” — Designed by Thuy Truong from the United States.
- preview
- without calendar: 320×480, 640×480, 800×600, 1024×768, 1152×864, 1280×720, 1280×800, 1366×768, 1440×900, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
Roger That Rogue Rover
“The story is a mash-up of retro science fiction and zombie infection. What would happen if a Mars rover came into contact with an unknown Martian material and got infected with a virus? What if it reversed its intended purpose of research and exploration? Instead choosing a life of chaos and evil. What if they all ran rogue on Mars? Would humans ever dare to voyage to the red planet?” — Designed by Frank Candamil from the United States.
Turtles In Space
“Finished September, with October comes the month of routines. This year we share it with turtles that explore space.” — Designed by Veronica Valenzuela from Spain.
- preview
- without calendar: 640×480, 800×480, 1024×768, 1280×720, 1280×800, 1440×900, 1600×1200, 1920×1080, 1920×1440, 2560×1440
First Scarf And The Beach
“When I was little, my parents always took me and my sister for a walk at the beach in Nieuwpoort. We didn’t really do those beach walks in the summer but always when the sky started to turn gray and the days became colder. My sister and I always took out our warmest scarfs and played in the sand while my parents walked behind us. I really loved those Saturday or Sunday mornings where we were all together. I think October (when it’s not raining) is the perfect month to go to the beach for ‘uitwaaien’ (to blow out), to walk in the wind and take a break and clear your head, relieve the stress or forget one’s problems.” — Designed by Gwen Bogaert from Belgium.
Shades Of Gold
“We are about to experience the magical imagery of nature, with all the yellows, ochers, oranges, and reds coming our way this fall. With all the subtle sunrises and the burning sunsets before us, we feel so joyful that we are going to shout it out to the world from the top of the mountains.” — Designed by PopArt Studio from Serbia.
- preview
- without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
Autumn Vibes
“Autumn has come, the time of long walks in the rain, weekends spent with loved ones, with hot drinks, and a lot of tenderness. Enjoy.” — Designed by LibraFire from Serbia.
- preview
- without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
Game Night And Hot Chocolate
“To me, October is all about cozy evenings with hot chocolate, freshly baked cookies, and a game night with friends or family.” — Designed by Lieselot Geirnaert from Belgium.
Haunted House
“Love all the Halloween costumes and decorations!” — Designed by Tazi from Australia.
- preview
- without calendar: 320×480, 640×480, 800×600, 1024×768, 1152×864, 1280×720, 1280×960, 1600×1200, 1920×1080, 1920×1440, 2560×1440
Say Bye To Summer
“And hello to autumn! The summer heat and high season is over. It’s time to pack our backpacks and head for the mountains — there are many treasures waiting to be discovered!” — Designed by Agnes Sobon from Poland.
Tea And Cookies
“As it gets colder outside, all I want to do is stay inside with a big pot of tea, eat cookies and read or watch a movie, wrapped in a blanket. Is it just me?” — Designed by Miruna Sfia from Romania.
- preview
- without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
The Return
Designed by Ricardo Gimenes from Spain.
- preview
- without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
Boo!
Designed by Mad Fish Digital from Portland, OR.
Trick Or Treat
“Have you ever wondered if all the little creatures of the animal kingdom celebrate Halloween as humans do? My answer is definitely ‘YES! They do!’ They use acorns as baskets to collect all the treats, pastry brushes as brooms for the spookiest witches and hats made from the tips set of your pastry bag. So, if you happen to miss something from your kitchen or from your tool box, it may be one of them, trying to get ready for All Hallows’ Eve.” — Designed by Carla Dipasquale from Italy.
- preview
- without calendar: 640×480, 800×600, 1024×768, 1280×960, 1440×900, 1600×1200, 1680×1200, 1920×1080, 1920×1440, 2560×1440
Dope Code
“October is the month when the weather in Poland starts to get colder, and it gets very rainy, too. You can’t always spend your free time outside, so it’s the perfect opportunity to get some hot coffee and work on your next cool web project!” — Designed by Robert Brodziak from Poland.
- preview
- without calendar: 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
Happy Halloween
Designed by Ricardo Gimenes from Spain.
- preview
- without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
Ghostober
Designed by Ricardo Delgado from Mexico City.
Get Featured Next Month
Would you like to get featured in our next wallpapers post? We’ll publish the November wallpapers on October 31, so if you’d like to be a part of the collection, please don’t hesitate to submit your design. We can’t wait to see what you’ll come up with!
-
From Prompt To Partner: Designing Your Custom AI Assistant
In “A Week In The Life Of An AI-Augmented Designer”, Kate stumbled her way through an AI-augmented sprint (coffee was chugged, mistakes were made). In “Prompting Is A Design Act”, we introduced WIRE+FRAME, a framework to structure prompts like designers structure creative briefs. Now we’ll take the next step: packaging those structured prompts into AI assistants you can design, reuse, and share.
AI assistants go by different names: CustomGPTs (ChatGPT), Agents (Copilot), and Gems (Gemini). But they all serve the same function — allowing you to customize the default AI model for your unique needs. If we carry over our smart intern analogy, think of these as interns trained to assist you with specific tasks, eliminating the need for repeated instructions or information, and who can support not just you, but your entire team.
Why Build Your Own Assistant?
If you’ve ever copied and pasted the same mega-prompt for the nth time, you’ve experienced the pain. An AI assistant turns a one-off “great prompt” into a dependable teammate. And if you’ve used any of the publicly available AI assistants, you’ve quickly realized that they’re usually generic and not tailored to your needs.
Public AI assistants are great for inspiration, but nothing beats an assistant that solves a repeated problem for you and your team, in your voice, with your context and constraints baked in. Instead of reinventing the wheel with new prompts each time, repeatedly copy-pasting your structured prompts, or spending cycles trying to make a public AI assistant work the way you need it to, your own AI assistant lets you and others get better, repeatable, consistent results faster.
Benefits Of Reusing Prompts, Even Your Own
Some of the benefits of building your own AI Assistant over writing or reusing your prompts include:
- Focused on a real repeating problem
A good AI Assistant isn’t a general-purpose “do everything” bot that you need to keep tweaking. It focuses on a single, recurring problem that takes a long time to complete manually and often results in varying quality depending on who’s doing it (e.g., analyzing customer feedback).
- Customized for your context
Most large language models (LLMs, such as ChatGPT) are designed to be everything to everyone. An AI Assistant changes that by allowing you to customize it to automatically work the way you want, instead of like a generic AI.
- Consistency at scale
You can use the WIRE+FRAME prompt framework to create structured, reusable prompts. An AI Assistant is the next logical step: instead of copy-pasting that fine-tuned prompt and sharing contextual information and examples each time, you can bake it all into the assistant itself, allowing you and others to achieve the same consistent results every time.
- Codifying expertise
Every time you turn a great prompt into an AI Assistant, you’re essentially bottling your expertise. Your assistant becomes a living design guide that outlasts projects (and even job changes).
- Faster ramp-up for teammates
Instead of new designers starting from a blank slate, they can use pre-tuned assistants. Think of it as knowledge transfer without the long onboarding lecture.
Reasons For Your Own AI Assistant Instead Of Public AI Assistants
Public AI assistants are like stock templates. They serve a more specific purpose than the generic AI platform and are useful starting points, but if you want something tailored to your needs and team, you should build your own.
A few reasons for building your AI Assistant instead of using a public assistant someone else created include:
- Fit: Public assistants are built for the masses. Your work has quirks, tone, and processes they’ll never quite match.
- Trust & Security: You don’t control what instructions or hidden guardrails someone else baked in. With your own assistant, you know exactly what it will (and won’t) do.
- Evolution: An AI Assistant you design and build can grow with your team. You can update files, tweak prompts, and maintain a changelog — things a public bot won’t do for you.
Your own AI Assistants allow you to take your successful ways of interacting with AI and make them repeatable and shareable. And while they are tailored to your and your team’s way of working, remember that they are still based on generic AI models, so the usual AI disclaimers apply:
Don’t share anything you wouldn’t want screenshotted in the next company all-hands. Keep it safe, private, and user-respecting. A shared AI Assistant can potentially reveal its inner workings or data.
Note: We will be building an AI assistant using ChatGPT, aka a CustomGPT, but you can try the same process with any decent LLM sidekick. As of publication, a paid account is required to create CustomGPTs, but once created, they can be shared and used by anyone, regardless of whether they have a paid or free account. Similar limitations apply to the other platforms. Just remember that outputs can vary depending on the LLM model used, the model’s training, mood, and flair for creative hallucinations.
When Not To Build An AI Assistant (Yet)
An AI Assistant is great when the same audience has the same problem often. When that fit isn’t there, the risk is high, and you should skip building an AI Assistant for now. Here are the signs to hold off:
- One-off or rare tasks
If it won’t be reused at least monthly, I’d recommend keeping it as a saved WIRE+FRAME prompt. For example, something for a one-time audit or creating placeholder content for a specific screen.
- Sensitive or regulated data
If you need to build in personally identifiable information (PII), health, finance, legal, or trade secrets, err on the side of not building an AI Assistant. Even if the AI platform promises not to use your data, I’d strongly suggest using redaction or an approved enterprise tool with the necessary safeguards in place (company-approved enterprise versions of Microsoft Copilot, for instance).
- Heavy orchestration or logic
Multi-step workflows, API calls, database writes, and approvals go beyond the scope of an AI Assistant into Agentic territory (as of now). I’d recommend not trying to build an AI Assistant for these cases.
- Real-time information
AI Assistants may not be able to access real-time data like prices, live metrics, or breaking news. If you need these, you can upload near-real-time data (as we do below) or connect with data sources that you or your company controls, rather than relying on the open web.
- High-stakes outputs
For cases related to compliance, legal, medical, or any other area requiring auditability, consider implementing process guardrails and training to keep humans in the loop for proper review and accountability.
- No measurable win
If you can’t name a success metric (such as time saved, first-draft quality, or fewer re-dos), I’d recommend keeping it as a saved WIRE+FRAME prompt.
Just because these are signs not to build your AI Assistant now doesn’t mean you never should. Revisit the decision when you notice that you’re reusing the same prompt weekly, multiple teammates ask for it, or the manual time spent copy-pasting and refining starts exceeding ~15 minutes. Those are signs that an AI Assistant will pay back quickly.
In a nutshell, build an AI Assistant when you can name the problem, the audience, the frequency, and the win. The rest of this article shows how to turn your successful WIRE+FRAME prompt into a CustomGPT that you and your team can actually use. No advanced knowledge, coding skills, or hacks needed.
As Always, Start With The User
This should go without saying to UX professionals, but it’s worth a reminder: if you’re building an AI assistant for anyone besides yourself, start with the user and their needs before you build anything.
- Who will use this assistant?
- What’s the specific pain or task they struggle with today?
- What language, tone, and examples will feel natural to them?
Building without doing this first is a sure way to end up with clever assistants nobody actually wants to use. Think of it like any other product: before you build features, you understand your audience. The same rule applies here, even more so, because AI assistants are only as helpful as they are useful and usable.
From Prompt To Assistant
You’ve already done the heavy lifting with WIRE+FRAME. Now you’re just turning that refined and reliable prompt into a CustomGPT you can reuse and share. You can use MATCH as a checklist to go from a great prompt to a useful AI assistant.
- M: Map your prompt
Port your successful WIRE+FRAME prompt into the AI assistant.
- A: Add knowledge and training
Ground the assistant in your world. Upload knowledge files, examples, or guides that make it uniquely yours.
- T: Tailor for audience
Make it feel natural to the people who will use it. Give it the right capabilities, but also adjust its settings, tone, examples, and conversation starters so they land with your audience.
- C: Check, test, and refine
Test the preview with different inputs and refine until you get the results you expect.
- H: Hand off and maintain
Set sharing options and permissions, share the link, and maintain it.
A few weeks ago, we invited readers to share their ideas for AI assistants they wished they had. The top contenders were:
- Prototype Prodigy: Transform rough ideas into prototypes and export them into Figma to refine.
- Critique Coach: Review wireframes or mockups and point out accessibility and usability gaps.
But the favorite was an AI assistant to turn tons of customer feedback into actionable insights. Readers replied with variations of: “An assistant that can quickly sort through piles of survey responses, app reviews, or open-ended comments and turn them into themes we can act on.”
And that’s the one we will build in this article — say hello to Insights Interpreter.
Walkthrough: Insights Interpreter
Having lots of customer feedback is a nice problem to have. Companies actively seek out customer feedback through surveys and studies (solicited), but also receive feedback that may not have been asked for through social media or public reviews (unsolicited). This is a goldmine of information, but making sense of it all can be messy and overwhelming, and it’s nobody’s idea of fun. Here’s where an AI assistant like the Insights Interpreter can help. We’ll turn the example prompt created with the WIRE+FRAME framework in Prompting Is A Design Act into a CustomGPT.
When you start building a CustomGPT by visiting https://chat.openai.com/gpts/editor, you’ll see two paths:
- Conversational interface
Vibe-chat your way — it’s easy and quick, but similar to unstructured prompts, your inputs get baked in a little messily, so you may end up with vague or inconsistent instructions.
- Configure interface
The structured form where you type instructions, upload files, and toggle capabilities. Less instant gratification, less winging it, but more control. This is the option you’ll want for assistants you plan to share or depend on regularly.
The good news is that MATCH works for both. In conversational mode, you can use it as a mental checklist, and we’ll walk through using it in configure mode as a more formal checklist in this article.
M: Map Your Prompt
Paste your full WIRE+FRAME prompt into the Instructions section exactly as written. As a refresher, I’ve included the mapping and snippets of the detailed prompt from before:
- Who & What: The AI persona and the core deliverable (“…senior UX researcher and customer insights analyst… specialize in synthesizing qualitative data from diverse sources…”).
- Input Context: Background or data scope to frame the task (“…analyzing customer feedback uploaded from sources such as…”).
- Rules & Constraints: Boundaries (“…do not fabricate pain points, representative quotes, journey stages, or patterns…”).
- Expected Output: Format and fields of the deliverable (“…a structured list of themes. For each theme, include…”).
- Flow: Explicit, ordered sub-tasks (“Recommended flow of tasks: Step 1…”).
- Reference Voice: Tone, mood, or reference (“…concise, pattern-driven, and objective…”).
- Ask for Clarification: Ask questions if unclear (“…if data is missing or unclear, ask before continuing…”).
- Memory: Memory to recall earlier definitions (“Unless explicitly instructed otherwise, keep using this process…”).
- Evaluate & Iterate: Have the AI self-critique outputs (“…critically evaluate…suggest improvements…”).
If you’re building Copilot Agents or Gemini Gems instead of CustomGPTs, you still paste your WIRE+FRAME prompt into their respective Instructions sections.
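Stitched together in order, the mapping above forms the skeleton of the Instructions field. This condensed version is only a sketch; the ellipses stand in for the detailed wording of the original WIRE+FRAME prompt:

```text
[Who & What] You are a senior UX researcher and customer insights analyst…
You specialize in synthesizing qualitative data from diverse sources…
[Input Context] You are analyzing customer feedback uploaded from sources such as…
[Rules & Constraints] Do not fabricate pain points, representative quotes,
journey stages, or patterns…
[Expected Output] Produce a structured list of themes. For each theme, include…
[Flow] Recommended flow of tasks: Step 1…
[Reference Voice] Keep the output concise, pattern-driven, and objective…
[Ask for Clarification] If data is missing or unclear, ask before continuing…
[Memory] Unless explicitly instructed otherwise, keep using this process…
[Evaluate & Iterate] Critically evaluate the output and suggest improvements…
```

The bracketed labels are just scaffolding for readability; you can drop them when you paste the full prompt into the Instructions section.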
A: Add Knowledge And Training
In the knowledge section, upload up to 20 clearly labeled files that will help the CustomGPT respond effectively. Keep files small and versioned: reviews_Q2_2025.csv beats latestfile_final2.csv. For our prompt, which analyzes customer feedback, generates themes organized by customer journey, and rates them by severity and effort, files could include:
- Taxonomy of themes;
- Instructions on parsing uploaded data;
- Examples of real UX research reports using this structure;
- Scoring guidelines for severity and effort, e.g., what makes something a 3 vs. a 5 in severity;
- Customer journey map stages;
- Customer feedback file templates (not actual data).
An example of a file to help it parse uploaded data is shown below:
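As an illustrative sketch (the file types, column names, and rules below are hypothetical, not from the original example), a parsing-instructions knowledge file might look like this:

```text
How to parse uploaded feedback files

Expected file types: .csv or .xlsx, with one row per piece of feedback.

Column mapping:
- "comment" or "review_text": the verbatim customer feedback to analyze
- "source": where the feedback came from (survey, app store, social media)
- "date": when the feedback was received

Rules:
- Ignore empty rows and rows with fewer than three words of feedback.
- If a required column is missing, ask the user before continuing.
- Treat each row as one data point; never merge rows.
```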
T: Tailor For Audience
- Audience tailoring
If you are building this for others, your prompt should have addressed tone in the “Reference Voice” section. If it didn’t, do that now, so the CustomGPT can be tailored to the tone and expertise level of the users who will use it. In addition, use the Conversation starters section to add a few example prompts for users to start using the CustomGPT, again worded for your users. For instance, we could use “Analyze feedback from the attached file” for our Insights Interpreter to make it more self-explanatory for anyone, instead of “Analyze data,” which may be good enough if you were using it alone. For my Designerly Curiosity GPT, assuming that users may not know what it could do, I use “What are the types of curiosity?” and “Give me a micro-practice to spark curiosity”.
- Functional tailoring
Fill in the CustomGPT name, icon, description, and capabilities:
- Name: Pick one that makes it clear what the CustomGPT does. Let’s use “Insights Interpreter — Customer Feedback Analyzer”. If needed, you can also add a version number. This name will show up in the sidebar when people use or pin it, so make the first part memorable and easily identifiable.
- Icon: Upload an image or generate one. Keep it simple so it can be easily recognized at a smaller size when people pin it in their sidebar.
- Description: A brief yet clear description of what the CustomGPT can do. If you plan to list it in the GPT store, this will help people decide whether to pick yours over something similar.
- Recommended Model: If your CustomGPT needs the capabilities of a particular model (e.g., GPT-5 thinking for detailed analysis), select it. In most cases, you can safely leave the choice up to the user or select the most common model.
- Capabilities: Turn off anything you won’t need. We’ll turn off “Web Search” so the CustomGPT focuses only on uploaded data without searching online, and turn on “Code Interpreter & Data Analysis” so it can understand and process uploaded files. “Canvas” lets users work on a shared canvas with the GPT to edit writing tasks, and “Image generation” is needed only if the CustomGPT has to create images.
- Actions: These make third-party APIs available to the CustomGPT, advanced functionality we don’t need here.
- Additional Settings: Sneakily hidden and opted in by default; I opt out of having conversations used to train OpenAI’s models.
C: Check, Test & Refine
Do one last visual check to make sure you’ve filled in all applicable fields and the basics are in place: is the concept sharp and clear (not a do-everything bot)? Are the roles, goals, and tone clear? Do we have the right assets (docs, guides) to support it? Is the flow simple enough that others can get started easily? Once those boxes are checked, move into testing.
Use the Preview panel to verify that your CustomGPT performs as well, or better, than your original WIRE+FRAME prompt, and that it works for your intended audience. Try a few representative inputs and compare the results to what you expected. If something worked before but doesn’t now, check whether new instructions or knowledge files are overriding it.
When things don’t look right, here are quick debugging fixes:
- Generic answers?
Tighten Input Context or update the knowledge files.
- Hallucinations?
Revisit your Rules section. Turn off web browsing if you don’t need external data.
- Wrong tone?
Strengthen Reference Voice or swap in clearer examples.
- Inconsistent?
Test across models in preview and set the most reliable one as “Recommended.”
H: Hand Off And Maintain
When your CustomGPT is ready, you can publish it via the “Create” option. Select the appropriate access option:
- Only me: Private use. Perfect if you’re still experimenting or keeping it personal.
- Anyone with the link: Exactly what it means. Shareable but not searchable. Great for pilots with a team or small group. Just remember that links can be reshared, so treat them as semi-public.
- GPT Store: Fully public. Your assistant is listed and findable by anyone browsing the store. (This is the option we’ll use.)
- Business workspace (if you’re on GPT Business): Share with others within your business account only — the easiest way to keep it in-house and controlled.
But handoff doesn’t end when you hit publish; you should maintain the assistant to keep it relevant and useful:
- Collect feedback: Ask teammates what worked, what didn’t, and what they had to fix manually.
- Iterate: Apply changes directly or duplicate the GPT if you want multiple versions in play. You can find all your CustomGPTs at: https://chatgpt.com/gpts/mine
- Track changes: Keep a simple changelog (date, version, updates) for traceability.
- Refresh knowledge: Update knowledge files and examples on a regular cadence so answers don’t go stale.
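A changelog can be as simple as a few dated lines kept alongside the knowledge files. The format and entries below are made up for illustration:

```text
Insights Interpreter — Changelog
2025-09-02  v1.2  Updated severity scoring guide; added new journey map stages.
2025-08-11  v1.1  Tightened Rules section to stop fabricated quotes.
2025-07-28  v1.0  Initial release, shared with the team via link.
```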
And that’s it! Our Insights Interpreter is now live!
Since we used the WIRE+FRAME prompt from the previous article to create the Insights Interpreter CustomGPT, I compared the outputs:
The results are similar, with slight differences, and that’s expected. If you compare the results carefully, the themes, issues, journey stages, frequency, severity, and estimated effort match, with some differences in the wording of the theme, issue summary, and problem statement. The opportunities and quotes show more visible differences. Most of this is because the CustomGPT’s knowledge and training files, including instructions, examples, and guardrails, now act as always-on guidance.
Keep in mind that in reality, Generative AI is by nature generative, so outputs will vary. Even with the same data, you won’t get identical wording every time. In addition, underlying models and their capabilities rapidly change. If you want to keep things as consistent as possible, recommend a model (though people can change it), track versions of your data, and compare for structure, priorities, and evidence rather than exact wording.
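If you want to sanity-check that two runs agree on substance rather than wording, a small script can compare their structure. This is a minimal sketch under stated assumptions: the field names (`theme`, `severity`, `effort`, `summary`) mirror the output format described earlier, and the sample data is invented.

```python
def structural_match(run_a, run_b):
    """Return the fraction of themes in run_a whose (theme, severity, effort)
    triple also appears in run_b; summary wording is deliberately ignored."""
    keys_b = {(t["theme"], t["severity"], t["effort"]) for t in run_b}
    if not run_a:
        return 1.0
    hits = sum(1 for t in run_a
               if (t["theme"], t["severity"], t["effort"]) in keys_b)
    return hits / len(run_a)

# Two hypothetical runs of the same assistant on the same feedback data:
# the summaries are worded differently, but the structure matches.
run1 = [{"theme": "Checkout errors", "severity": 5, "effort": 3,
         "summary": "Users hit payment failures at checkout."}]
run2 = [{"theme": "Checkout errors", "severity": 5, "effort": 3,
         "summary": "Payment failures block users during checkout."}]

print(structural_match(run1, run2))  # → 1.0
```

A score near 1.0 means the runs agree on the themes and their priorities, which is the kind of consistency worth tracking; exact phrasing will still drift from run to run.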
While I’d love for you to use Insights Interpreter, I strongly recommend taking 15 minutes to follow the steps above and create your own. That way, it will be exactly what you or your team needs, including the tone, context, and output formats, and you’ll get the real AI Assistant you need!
Inspiration For Other AI Assistants
We just built the Insights Interpreter and mentioned two contenders: Critique Coach and Prototype Prodigy. Here are a few other realistic uses that can spark ideas for your own AI Assistant:
- Workshop Wizard: Generates workshop agendas, produces icebreaker questions, and drafts follow-up surveys.
- Research Roundup Buddy: Summarizes raw transcripts into key themes, then creates highlight reels (quotes + visuals) for team share-outs.
- Persona Refresher: Updates stale personas with the latest customer feedback, then rewrites them in different tones (boardroom formal vs. design-team casual).
- Content Checker: Proofs copy for tone, accessibility, and reading level before it ever hits your site.
- Trend Tamer: Scans competitor reviews and identifies emerging patterns you can act on before they reach your roadmap.
- Microcopy Provocateur: Tests alternate copy options by injecting different tones (sassy, calm, ironic, nurturing) and role-playing how users might react, especially useful for error states or calls to action.
- Ethical UX Debater: Challenges your design decisions and deceptive designs by simulating the voice of an ethics board or concerned user.
The best AI Assistants come from carefully inspecting your workflow and looking for areas where AI can augment your work regularly and repetitively. Then follow the steps above to build a team of customized AI assistants.
Ask Me Anything About Assistants
- What are some limitations of a CustomGPT?
Right now, the best parallel for AI is a very smart intern with access to a lot of information. CustomGPTs still run on LLMs that are trained on vast amounts of data and programmed to predictively generate responses based on it, including possible bias, misinformation, or incomplete information. Keeping that in mind, you can make that intern deliver better and more relevant results by treating your uploads as onboarding docs, your guardrails as a job description, and your updates as retraining.
- Can I copy someone else’s public CustomGPT and tweak it?
Not directly. If another CustomGPT inspires you, you can look at how it’s framed and rebuild your own using WIRE+FRAME and MATCH; that way, you make it your own and have full control of the instructions, files, and updates. You can, however, do this with Google’s equivalent, Gemini Gems. Shared Gems behave similarly to shared Google Docs: once shared, any instructions and files you uploaded can be viewed by any user with access to the Gem, and any user with edit access can also update or delete it.
- How private are my uploaded files?
The files you upload are stored and used to answer prompts to your CustomGPT. If your CustomGPT is not private, or you didn’t disable the hidden setting that allows CustomGPT conversations to improve the model, that data could be referenced. Don’t upload sensitive, confidential, or personal data you wouldn’t want circulating. Enterprise accounts do have some protections, so check with your company.
- How many files can I upload, and does size matter?
Limits vary by platform, but smaller, specific files usually perform better than giant docs. Think “chapter” instead of “entire book.” At the time of publishing, CustomGPTs allow up to 20 files, Copilot Agents up to 200 (if you need anywhere near that many, chances are your agent is not focused enough), and Gemini Gems up to 10.
- What’s the difference between a CustomGPT and a Project?
A CustomGPT is a focused assistant, like an intern trained to do one role well (like “Insights Interpreter”). A Project is more like a workspace where you can group multiple prompts, files, and conversations for a broader effort. CustomGPTs are specialists; Projects are containers. If you want something reusable, shareable, and role-specific, go with a CustomGPT. If you want to organize broader work with multiple tools, outputs, and shared knowledge, Projects are the better fit.
From Reading To Building
In this AI x Design series, we’ve gone from messy prompting (“A Week In The Life Of An AI-Augmented Designer”) to a structured prompt framework, WIRE+FRAME (“Prompting Is A Design Act”). And now, in this article, your very own reusable AI sidekick.
CustomGPTs don’t replace designers but augment them. The real magic isn’t in the tool itself, but in how you design and manage it. You can use public CustomGPTs for inspiration, but the ones that truly fit your workflow are the ones you design yourself. They extend your craft, codify your expertise, and give your team leverage that generic AI models can’t.
Build one this week. Even better, today. Train it, share it, stress-test it, and refine it into an AI assistant that can augment your team.