Category: Uncategorized

  • The Psychology Of Trust In AI: A Guide To Measuring And Designing For User Confidence

    Misuse of and misplaced trust in AI are becoming unfortunately common. For example, lawyers trying to leverage the power of generative AI for research have submitted court filings citing multiple compelling legal precedents. The problem? The AI had confidently, eloquently, and completely fabricated the cases cited. The resulting sanctions and public embarrassment can become a viral cautionary tale, shared across social media as a stark example of AI’s fallibility.

    This goes beyond a technical glitch; it’s a catastrophic failure of trust in AI tools in an industry where accuracy is critical. The trust issue here is twofold: the law firms blindly over-trusted the AI tool to return accurate information, and the subsequent fallout can breed a strong distrust of AI tools, to the point where platforms featuring AI might not be considered for use until trust is reestablished.

    Issues with trusting AI aren’t limited to the legal field. We are seeing the impact of fictional AI-generated information in critical fields such as healthcare and education. On a more personal scale, many of us have had the experience of asking Siri or Alexa to perform a task, only to have it done incorrectly or not at all, for no apparent reason. I’m guilty of sending more than one out-of-context hands-free text to an unsuspecting contact after Siri mistakenly pulls up a completely different name than the one I’d requested.

    With digital products incorporating generative and agentic AI at an ever-increasing rate, trust has become the invisible user interface. When it works, our interactions are seamless and powerful. When it breaks, the entire experience collapses, with potentially devastating consequences. As UX professionals, we’re on the front lines of a new twist on a common challenge: How do we build products that users can rely on? And how do we even begin to measure something as ephemeral as trust in AI?

    Trust isn’t a mystical quality. It is a psychological construct built on predictable factors. I won’t dive deep into academic literature on trust in this article. However, it is important to understand that trust is a concept that can be understood, measured, and designed for. This article will provide a practical guide for UX researchers and designers. We will briefly explore the psychological anatomy of trust, offer concrete methods for measuring it, and provide actionable strategies for designing more trustworthy and ethical AI systems.

    The Anatomy of Trust: A Psychological Framework for AI

    To build trust, we must first understand its components. Think of trust like a four-legged stool. If any one leg is weak, the whole thing becomes unstable. Based on classic psychological models, we can adapt these “legs” for the AI context.

    1. Ability (or Competence)

    This is the most straightforward pillar: Does the AI have the skills to perform its function accurately and effectively? If a weather app is consistently wrong, you stop trusting it. If an AI legal assistant creates fictitious cases, it has failed the basic test of ability. This is the functional, foundational layer of trust.

    2. Benevolence

    This moves from function to intent. Does the user believe the AI is acting in their best interest? A GPS that suggests a toll-free route even if it’s a few minutes longer might be perceived as benevolent. Conversely, an AI that aggressively pushes sponsored products feels self-serving, eroding this sense of benevolence. This is where user fears, such as concerns about job displacement, directly challenge trust—the user starts to believe the AI is not on their side.

    3. Integrity

    Does the AI operate on predictable and ethical principles? This is about transparency, fairness, and honesty. An AI that clearly states how it uses personal data demonstrates integrity. A system that quietly changes its terms of service or uses dark patterns to get users to agree to something violates integrity. So does an AI job recruiting tool whose algorithm encodes subtle yet extremely harmful social biases.

    4. Predictability & Reliability

    Can the user form a stable and accurate mental model of how the AI will behave? Unpredictability, even if the outcomes are occasionally good, creates anxiety. A user needs to know, roughly, what to expect. An AI that gives a radically different answer to the same question asked twice is unpredictable and, therefore, hard to trust.

    The Trust Spectrum: The Goal of a Well-Calibrated Relationship

    Our goal as UX professionals shouldn’t be to maximize trust at all costs. An employee who blindly trusts every email they receive is a security risk. Likewise, a user who blindly trusts every AI output can be led into dangerous situations, such as the legal briefs referenced at the beginning of this article. The goal is well-calibrated trust.

    Think of it as a spectrum where the upper-mid level is the ideal state for a truly trustworthy product to achieve:

    • Active Distrust
      The user believes the AI is incompetent or malicious. They will avoid it or actively work against it.
    • Suspicion & Scrutiny
      The user interacts cautiously, constantly verifying the AI’s outputs. This is a common and often healthy state for users of new AI.
    • Calibrated Trust (The Ideal State)
      This is the sweet spot. The user has an accurate understanding of the AI’s capabilities—its strengths and, crucially, its weaknesses. They know when to rely on it and when to be skeptical.
    • Over-trust & Automation Bias
      The user unquestioningly accepts the AI’s outputs. This is where users follow flawed AI navigation into a field or accept a fictional legal brief as fact.

    Our job is to design experiences that guide users away from the dangerous poles of Active Distrust and Over-trust and toward that healthy, realistic middle ground of Calibrated Trust.

    The Researcher’s Toolkit: How to Measure Trust In AI

    Trust feels abstract, but it leaves measurable fingerprints. Academics in the social sciences have done much to define both what trust looks like and how it might be measured. As researchers, we can capture these signals through a mix of qualitative, quantitative, and behavioral methods.

    Qualitative Probes: Listening For The Language Of Trust

    During interviews and usability tests, go beyond “Was that easy to use?” and listen for the underlying psychology. Here are some questions you can start using tomorrow:

    • To measure Ability:
      “Tell me about a time this tool’s performance surprised you, either positively or negatively.”
    • To measure Benevolence:
      “Do you feel this system is on your side? What gives you that impression?”
    • To measure Integrity:
      “If this AI made a mistake, how would you expect it to handle it? What would be a fair response?”
    • To measure Predictability:
      “Before you clicked that button, what did you expect the AI to do? How closely did it match your expectation?”

    Investigating Existential Fears (The Job Displacement Scenario)

    One of the most potent challenges to an AI’s Benevolence is the fear of job displacement. When a participant expresses this, it is a critical research finding. It requires a specific, ethical probing technique.

    Imagine a participant says, “Wow, it does that part of my job pretty well. I guess I should be worried.”

    An untrained researcher might get defensive or dismiss the comment. An ethical, trained researcher validates and explores:

    “Thank you for sharing that; it’s a vital perspective, and it’s exactly the kind of feedback we need to hear. Can you tell me more about what aspects of this tool make you feel that way? In an ideal world, how would a tool like this work with you to make your job better, not to replace it?”

    This approach respects the participant, validates their concern, and reframes the feedback into an actionable insight about designing a collaborative, augmenting tool rather than a replacement. Similarly, your findings should reflect the concern users expressed about replacement. We shouldn’t pretend this fear doesn’t exist, nor should we pretend that every AI feature is being implemented with pure intention. Users know better than that, and we should be prepared to argue on their behalf for how the technology might best co-exist within their roles.

    Quantitative Measures: Putting A Number On Confidence

    You can quantify trust without needing a data science degree. After a user completes a task with an AI, supplement your standard usability questions with a few simple Likert-scale items:

    • “The AI’s suggestion was reliable.” (1-7, Strongly Disagree to Strongly Agree)
    • “I am confident in the AI’s output.” (1-7)
    • “I understood why the AI made that recommendation.” (1-7)
    • “The AI responded in a way that I expected.” (1-7)
    • “The AI provided consistent responses over time.” (1-7)

    Over time, these metrics can track how trust is changing as your product evolves.
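
    These items can be rolled up into a simple composite score for tracking across releases. Here is a minimal sketch in Python, assuming a 1–7 response scale and the hypothetical item names below (they mirror the example questions, but are not from a published scale):

    ```python
    # Sketch: aggregating post-task Likert responses (1-7) into a composite
    # trust score per session. Item names are hypothetical.
    from statistics import mean

    TRUST_ITEMS = ["reliable", "confident", "understood_why",
                   "as_expected", "consistent"]

    def trust_score(responses: dict[str, int]) -> float:
        """Average the 1-7 Likert items, normalised to a 0-100 scale."""
        raw = mean(responses[item] for item in TRUST_ITEMS)  # 1.0 .. 7.0
        return round((raw - 1) / 6 * 100, 1)                 # 0 .. 100

    session = {"reliable": 6, "confident": 5, "understood_why": 4,
               "as_expected": 6, "consistent": 5}
    print(trust_score(session))  # 70.0
    ```

    Tracked per release, a falling composite score is an early warning that a product change has damaged trust, even before behavioral signals like disengagement appear.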

    Note: If you want to go beyond these simple questions that I’ve made up, numerous scales (measurements) of trust in technology exist in the academic literature. It might be an interesting endeavor to measure relevant psychographic and demographic characteristics of your users and see how they correlate with trust in AI and in your product. Table 1 at the end of the article contains four examples of current scales you might consider using to measure trust. You can decide which is best for your application. Or, if you aren’t looking to publish your findings in an academic journal but still want items that have been subjected to some level of empirical scrutiny, you might pull individual items from any of the scales.

    Behavioral Metrics: Observing What Users Do, Not Just What They Say

    People’s true feelings are often revealed in their actions. Choose behaviors that reflect the specific context of use for your product. Here are a few general metrics, applicable to most AI tools, that give insight into users’ behavior and the trust they place in your tool.

    • Correction Rate
      How often do users manually edit, undo, or ignore the AI’s output? A high correction rate is a powerful signal of low trust in its Ability.
    • Verification Behavior
      Do users switch to Google or open another application to double-check the AI’s work? This indicates they don’t trust it as a standalone source of truth. Early on, though, this can be a healthy sign that users are calibrating their trust in the system.
    • Disengagement
      Do users turn the AI feature off? Do they stop using it entirely after one bad experience? This is the ultimate behavioral vote of no confidence.
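
    These signals can be derived from an ordinary product event log. The sketch below is a minimal illustration; the event names and log structure are hypothetical, not from any particular analytics tool:

    ```python
    # Sketch: deriving behavioral trust signals (correction, verification,
    # disengagement) from a simple event log. Event names are hypothetical.
    from collections import Counter

    events = [
        {"user": "u1", "event": "ai_suggestion_shown"},
        {"user": "u1", "event": "ai_suggestion_edited"},    # correction
        {"user": "u1", "event": "ai_suggestion_shown"},
        {"user": "u1", "event": "ai_suggestion_accepted"},
        {"user": "u2", "event": "ai_suggestion_shown"},
        {"user": "u2", "event": "external_search_opened"},  # verification
        {"user": "u2", "event": "ai_feature_disabled"},     # disengagement
    ]

    counts = Counter(e["event"] for e in events)
    shown = counts["ai_suggestion_shown"]

    correction_rate = counts["ai_suggestion_edited"] / shown      # low-trust signal
    verification_rate = counts["external_search_opened"] / shown  # double-checking
    disengaged_users = {e["user"] for e in events
                        if e["event"] == "ai_feature_disabled"}

    print(f"Correction rate:   {correction_rate:.0%}")
    print(f"Verification rate: {verification_rate:.0%}")
    print(f"Disengaged users:  {len(disengaged_users)}")
    ```

    In practice, you would segment these rates by task type and user cohort; a rising correction rate on one task often pinpoints exactly where the AI’s Ability is falling short.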

    Designing For Trust: From Principles to Pixels

    Once you’ve researched and measured trust, you can begin to design for it. This means translating psychological principles into tangible interface elements and user flows.

    Designing for Competence and Predictability

    • Set Clear Expectations
      Use onboarding, tooltips, and empty states to honestly communicate what the AI is good at and where it might struggle. A simple “I’m still learning about [topic X], so please double-check my answers” can work wonders.
    • Show Confidence Levels
      Instead of just giving an answer, have the AI signal its own uncertainty. A weather app that says “70% chance of rain” is more trustworthy than one that just says “It will rain” and is wrong. An AI could say, “I’m 85% confident in this summary,” or highlight sentences it’s less sure about.
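
    In its simplest form, confidence signaling maps the model’s confidence score to progressively hedged copy rather than presenting every answer with equal certainty. The thresholds and wording below are illustrative assumptions, not a published standard:

    ```python
    # Sketch: mapping a model confidence score (0.0-1.0) to hedged UI copy.
    # Thresholds and phrasing are illustrative assumptions.

    def confidence_copy(answer: str, confidence: float) -> str:
        if confidence >= 0.9:
            return answer
        if confidence >= 0.7:
            return (f"{answer} (I'm about {confidence:.0%} confident, "
                    f"so it's worth a quick check.)")
        return (f"I'm not sure, but my best guess is: {answer}. "
                f"Please verify this independently.")

    print(confidence_copy("Rain is likely after 3pm.", 0.85))
    ```

    The exact thresholds should come from calibration testing on your own model; the design point is that uncertainty reaches the user in plain language, not just as a hidden score.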

    The Role of Explainability (XAI) and Transparency

    Explainability isn’t about showing users the code. It’s about providing a useful, human-understandable rationale for a decision.

    Instead of:
    “Here is your recommendation.”

    Try:
    “Because you frequently read articles about UX research methods, I’m recommending this new piece on measuring trust in AI.”

    This addition transforms AI from an opaque oracle to a transparent logical partner.
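
    At its simplest, this pattern is a template that pairs each recommendation with the user signal that produced it. A minimal, hypothetical sketch:

    ```python
    # Sketch: a template-based rationale pairing a recommendation with the
    # user signal behind it. Function and field names are hypothetical.

    def explain(recommendation: str, signal: str) -> str:
        return (f"Because you frequently read articles about {signal}, "
                f"I'm recommending {recommendation}.")

    print(explain("this new piece on measuring trust in AI",
                  "UX research methods"))
    ```

    Even this shallow form of explainability gives users a hook to correct the system ("actually, I'm not interested in that topic"), which feeds the trust-repair loop discussed below.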

    Many of the popular AI tools (e.g., ChatGPT and Gemini) show the thinking that went into the response they provide to a user. Figure 3 shows the steps Gemini went through to provide me with a non-response when I asked it to help me generate the masterpiece displayed above in Figure 2. While this might be more information than most users care to see, it provides a useful resource for a user to audit how the response came to be, and it has provided me with instructions on how I might proceed to address my task.

    Figure 4 shows an example of a scorecard OpenAI makes available in an attempt to increase users’ trust. These scorecards are available for each ChatGPT model and detail how the models perform in key areas such as hallucinations, health-based conversations, and much more. Reading the scorecards closely, you will see that no AI model is perfect in any area. The user must remain in a “trust but verify” mode to make the relationship between human reality and AI work in a way that avoids potential catastrophe. There should never be blind trust in an LLM.

    Designing For Trust Repair (Graceful Error Handling) And Not Knowing an Answer

    Your AI will make mistakes.

    Trust is not determined by the absence of errors, but by how those errors are handled.

    • Acknowledge Errors Humbly.
      When the AI is wrong, it should be able to state that clearly. “My apologies, I misunderstood that request. Could you please rephrase it?” is far better than silence or a nonsensical answer.
    • Provide an Easy Path to Correction.
      Make feedback mechanisms (like thumbs up/down or a correction box) obvious. More importantly, show that the feedback is being used. A “Thank you, I’m learning from your correction” can help rebuild trust after a failure, as long as it is actually true.

    Likewise, your AI can’t know everything. You should acknowledge this to your users.

    UX practitioners should work with the product team to ensure that honesty about limitations is a core product principle.

    This can include the following:

    • Establish User-Centric Metrics: Instead of only measuring engagement or task completion, UXers can work with product managers to define and track metrics like:
      • Hallucination Rate: The frequency with which the AI provides verifiably false information.
      • Successful Fallback Rate: How often the AI correctly identifies its inability to answer and provides a helpful, honest alternative.
    • Prioritize the “I Don’t Know” Experience: UXers should frame the “I don’t know” response not as an error state, but as a critical feature. They must lobby for the engineering and content resources needed to design a high-quality, helpful fallback experience.
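
    Both metrics are straightforward to compute once responses have been reviewed. A minimal sketch, assuming the hallucination and fallback labels come from human review of a sample of responses (the log structure is hypothetical):

    ```python
    # Sketch: computing hallucination rate and successful fallback rate from
    # a log of human-reviewed AI responses. Labels are assumed to come from
    # manual review, not self-reporting by the model.

    responses = [
        {"id": 1, "hallucinated": False, "fallback": False, "helpful_fallback": False},
        {"id": 2, "hallucinated": True,  "fallback": False, "helpful_fallback": False},
        {"id": 3, "hallucinated": False, "fallback": True,  "helpful_fallback": True},
        {"id": 4, "hallucinated": False, "fallback": True,  "helpful_fallback": False},
    ]

    hallucination_rate = sum(r["hallucinated"] for r in responses) / len(responses)

    fallbacks = [r for r in responses if r["fallback"]]
    successful_fallback_rate = (
        sum(r["helpful_fallback"] for r in fallbacks) / len(fallbacks)
        if fallbacks else 0.0
    )

    print(f"Hallucination rate:       {hallucination_rate:.0%}")
    print(f"Successful fallback rate: {successful_fallback_rate:.0%}")
    ```

    Reporting both numbers side by side keeps the conversation honest: a low hallucination rate achieved by refusing to answer is only a win if the fallback experience is genuinely helpful.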

    UX Writing And Trust

    All of these considerations highlight the critical role of UX writing in the development of trustworthy AI. UX writers are the architects of the AI’s voice and tone, ensuring that its communication is clear, honest, and empathetic. They translate complex technical processes into user-friendly explanations, craft helpful error messages, and design conversational flows that build confidence and rapport. Without thoughtful UX writing, even the most technologically advanced AI can feel opaque and untrustworthy.

    The words and phrases an AI uses are its primary interface with users. UX writers are uniquely positioned to shape this interaction, ensuring that every tooltip, prompt, and response contributes to a positive and trust-building experience. Their expertise in human-centered language and design is indispensable for creating AI systems that not only perform well but also earn and maintain the trust of their users.

    A few key areas for UX writers to focus on when writing for AI include:

    • Prioritize Transparency
      Clearly communicate the AI’s capabilities and limitations, especially when it’s still learning or if its responses are generated rather than factual. Use phrases that indicate the AI’s nature, such as “As an AI, I can…” or “This is a generated response.”
    • Design for Explainability
      When the AI provides a recommendation, decision, or complex output, strive to explain the reasoning behind it in an understandable way. This builds trust by showing the user how the AI arrived at its conclusion.
    • Emphasize User Control
      Empower users by providing clear ways to provide feedback, correct errors, or opt out of certain AI features. This reinforces the idea that the user is in control and the AI is a tool to assist them.

    The Ethical Tightrope: The Researcher’s Responsibility

    As the people responsible for understanding and advocating for users, we walk an ethical tightrope. Our work comes with profound responsibilities.

    The Danger Of “Trustwashing”

    We must draw a hard line between designing for calibrated trust and designing to manipulate users into trusting a flawed, biased, or harmful system. For example, if an AI system designed for loan approvals consistently discriminates against certain demographics but presents a user interface that implies fairness and transparency, this would be an instance of trustwashing.

    Another example of trustwashing would be if an AI medical diagnostic tool occasionally misdiagnoses conditions, but the user interface makes it seem infallible. To avoid trustwashing, the system should clearly communicate the potential for error and the need for human oversight.

    Our goal must be to create genuinely trustworthy systems, not just the perception of trust. Using these principles to lull users into a false sense of security is a betrayal of our professional ethics.

    To avoid and prevent trustwashing, researchers and UX teams should:

    • Prioritize genuine transparency.
      Clearly communicate the limitations, biases, and uncertainties of AI systems. Don’t overstate capabilities or obscure potential risks.
    • Conduct rigorous, independent evaluations.
      Go beyond internal testing and seek external validation of system performance, fairness, and robustness.
    • Engage with diverse stakeholders.
      Involve users, ethics experts, and impacted communities in the design, development, and evaluation processes to identify potential harms and build genuine trust.
    • Be accountable for outcomes.
      Take responsibility for the societal impact of AI systems, even when unintended. Establish clear and accessible mechanisms for redress and continuous improvement, ensuring that individuals and communities affected by AI decisions have avenues for recourse and compensation.
    • Educate the public.
      Help users understand how AI works, its limitations, and what to look for when evaluating AI products.
    • Advocate for ethical guidelines and regulations.
      Support the development and implementation of industry standards and policies that promote responsible AI development and prevent deceptive practices.
    • Be wary of marketing hype.
      Critically assess claims made about AI systems, especially those that emphasize “trustworthiness” without clear evidence or detailed explanations.
    • Publish negative findings.
      Don’t shy away from reporting challenges, failures, or ethical dilemmas encountered during research. Transparency about limitations is crucial for building long-term trust.
    • Focus on user empowerment.
      Design systems that give users control, agency, and understanding rather than just passively accepting AI outputs.

    The Duty To Advocate

    When our research uncovers deep-seated distrust or potential harm — like the fear of job displacement — our job has only just begun. We have an ethical duty to advocate for that user. In my experience directing research teams, I’ve seen that the hardest part of our job is often carrying these uncomfortable truths into rooms where decisions are made. We must champion these findings and advocate for design and strategy shifts that prioritize user well-being, even when it challenges the product roadmap.

    I personally try to approach presenting this information as an opportunity for growth and improvement, rather than a negative challenge.

    For example, instead of stating “Users don’t trust our AI because they fear job displacement,” I might frame it as “Addressing user concerns about job displacement presents a significant opportunity to build deeper trust and long-term loyalty by demonstrating our commitment to responsible AI development and exploring features that enhance human capabilities rather than replace them.” This reframing can shift the conversation from a defensive posture to a proactive, problem-solving mindset, encouraging collaboration and innovative solutions that ultimately benefit both the user and the business.

    It’s no secret that one of the more appealing areas for businesses to use AI is in workforce reduction. In reality, there will be many cases where businesses look to cut 10–20% of a particular job family due to the perceived efficiency gains of AI. However, giving users the opportunity to shape the product may steer it in a direction that makes them feel safer than if they do not provide feedback. We should not attempt to convince users they are wrong if they are distrustful of AI. We should appreciate that they are willing to provide feedback, creating an experience that is informed by the human experts who have long been doing the task being automated.

    Conclusion: Building Our Digital Future On A Foundation Of Trust

    The rise of AI is not the first major technological shift our field has faced. However, it presents one of the most significant psychological challenges of our current time. Building products that are not just usable but also responsible, humane, and trustworthy is our obligation as UX professionals.

    Trust is not a soft metric. It is the fundamental currency of any successful human-technology relationship. By understanding its psychological roots, measuring it with rigor, and designing for it with intent and integrity, we can move from creating “intelligent” products to building a future where users can place their confidence in the tools they use every day. A trust that is earned and deserved.

    Table 1: Published Academic Scales Measuring Trust In Automated Systems

    • Trust in Automation Scale
      Focus: A 12-item questionnaire to assess trust between people and automated systems.
      Key dimensions: Measures a general level of trust, including reliability, predictability, and confidence.
      Citation: Jian, J. Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53–71.
    • Trust of Automated Systems Test (TOAST)
      Focus: A 9-item questionnaire used to measure user trust in a variety of automated systems, designed for quick administration.
      Key dimensions: Divided into two main subscales: Understanding (the user’s comprehension of the system) and Performance (belief in the system’s effectiveness).
      Citation: Wojton, H. M., Porter, D., Lane, S. T., Bieber, C., & Madhavan, P. (2020). Initial validation of the trust of automated systems test (TOAST). The Journal of Social Psychology, 160(6), 735–750.
    • Trust in Automation Questionnaire
      Focus: A 19-item questionnaire capable of predicting user reliance on automated systems. A 2-item subscale is available for quick assessments; the full tool is recommended for a more thorough analysis.
      Key dimensions: Measures six factors: reliability, understandability, propensity to trust, intentions of developers, familiarity, and trust in automation.
      Citation: Körber, M. (2018). Theoretical considerations and development of a questionnaire to measure trust in automation. In Proceedings of the 20th Triennial Congress of the IEA. Springer.
    • Human Computer Trust Scale
      Focus: A 12-item questionnaire created to provide an empirically sound tool for assessing user trust in technology.
      Key dimensions: Divided into two key factors: Benevolence and Competence (the positive attributes of the technology) and Perceived Risk (the user’s subjective assessment of the potential for negative consequences when using a technical artifact).
      Citation: Gulati, S., Sousa, S., & Lamas, D. (2019). Design, development and evaluation of a human-computer trust scale. Behaviour & Information Technology.

    Appendix A: Trust-Building Tactics Checklist

    To design for calibrated trust, consider implementing the following tactics, organized by the four pillars of trust:

    1. Ability (Competence) & Predictability

    • Set Clear Expectations: Use onboarding, tooltips, and empty states to honestly communicate the AI’s strengths and weaknesses.
    • Show Confidence Levels: Display the AI’s uncertainty (e.g., “70% chance,” “85% confident”) or highlight less certain parts of its output.
    • Provide Explainability (XAI): Offer useful, human-understandable rationales for the AI’s decisions or recommendations (e.g., “Because you frequently read X, I’m recommending Y”).
    • Design for Graceful Error Handling:
      • ✅ Acknowledge errors humbly (e.g., “My apologies, I misunderstood that request.”).
      • ✅ Provide easy paths to correction (e.g., prominent feedback mechanisms like thumbs up/down).
      • ✅ Show that feedback is being used (e.g., “Thank you, I’m learning from your correction”).
    • Design for “I Don’t Know” Responses:
      • ✅ Acknowledge limitations honestly.
      • ✅ Prioritize a high-quality, helpful fallback experience when the AI cannot answer.
    • Prioritize Transparency: Clearly communicate the AI’s capabilities and limitations, especially if responses are generated.

    2. Benevolence

    • Address Existential Fears: When users express concerns (e.g., job displacement), validate their concerns and reframe the feedback into actionable insights about collaborative tools.
    • Prioritize User Well-being: Advocate for design and strategy shifts that prioritize user well-being, even if it challenges the product roadmap.
    • Emphasize User Control: Provide clear ways for users to give feedback, correct errors, or opt out of AI features.

    3. Integrity

    • Adhere to Ethical Principles: Ensure the AI operates on predictable, ethical principles, demonstrating fairness and honesty.
    • Prioritize Genuine Transparency: Clearly communicate the limitations, biases, and uncertainties of AI systems; avoid overstating capabilities or obscuring risks.
    • Conduct Rigorous, Independent Evaluations: Seek external validation of system performance, fairness, and robustness to mitigate bias.
    • Engage Diverse Stakeholders: Involve users, ethics experts, and impacted communities in the design and evaluation processes.
    • Be Accountable for Outcomes: Establish clear mechanisms for redress and continuous improvement for societal impacts, even if unintended.
    • Educate the Public: Help users understand how AI works, its limitations, and how to evaluate AI products.
    • Advocate for Ethical Guidelines: Support the development and implementation of industry standards and policies that promote responsible AI.
    • Be Wary of Marketing Hype: Critically assess claims about AI “trustworthiness” and demand verifiable data.
    • Publish Negative Findings: Be transparent about challenges, failures, or ethical dilemmas encountered during research.

    4. Predictability & Reliability

    • Set Clear Expectations: Use onboarding, tooltips, and empty states to honestly communicate what the AI is good at and where it might struggle.
    • Show Confidence Levels: Instead of just giving an answer, have the AI signal its own uncertainty.
    • Provide Explainability (XAI) and Transparency: Offer a useful, human-understandable rationale for AI decisions.
    • Design for Graceful Error Handling: Acknowledge errors humbly and provide easy paths to correction.
    • Prioritize the “I Don’t Know” Experience: Frame “I don’t know” as a feature and design a high-quality fallback experience.
    • Prioritize Transparency (UX Writing): Clearly communicate the AI’s capabilities and limitations, especially when it’s still learning or if responses are generated.
    • Design for Explainability (UX Writing): Explain the reasoning behind AI recommendations, decisions, or complex outputs.
  • How To Minimize The Environmental Impact Of Your Website

    Climate change is the single biggest health threat to humanity, accelerated by human activities such as the burning of fossil fuels, which generate greenhouse gases that trap the sun’s heat.

    The average temperature of the earth’s surface is now 1.2°C warmer than it was in the late 1800s, and that warming is projected to more than double by the end of the century.

    The consequences of climate change include intense droughts, water shortages, severe fires, melting polar ice, catastrophic storms, and declining biodiversity.

    The Internet Is A Significant Part Of The Problem

    Shockingly, the internet is responsible for higher global greenhouse emissions than the aviation industry, and is projected to be responsible for 14% of all global greenhouse gas emissions by 2040.

    If the internet were a country, it would be the 4th largest polluter in the world and represents the largest coal-powered machine on the planet.

    But how can something digital like the internet produce harmful emissions?

    Internet emissions come from powering the infrastructure that drives the internet, such as the vast data centres and data transmission networks that consume huge amounts of electricity.

    Internet emissions also come from the global manufacturing, distribution, and usage of the estimated 30.5 billion devices (phones, laptops, etc.) that we use to access the internet.

    Unsurprisingly, internet related emissions are increasing, given that 60% of the world’s population spend, on average, 40% of their waking hours online.

    We Must Urgently Reduce The Environmental Impact Of The Internet

    As responsible digital professionals, we must act quickly to minimise the environmental impact of our work.

    It is encouraging to see the UK government promote action by adding “Minimise environmental impact” to its best practice design principles, but there is still too much talking and not enough corrective action taking place within our industry.

    The reality of many tightly constrained, fast-paced, and commercially driven web projects is that minimising environmental impact is far from the agenda.

    So how can we make the environment more of a priority and talk about it in ways that stakeholders will listen to?

    A eureka moment on a recent web optimisation project gave me an idea.

    My Eureka Moment

    I led a project to optimise the mobile performance of www.talktofrank.com, a government drug advice website that aims to keep everyone safe from harm.

    Mobile performance is critically important for the success of this service to ensure that users with older mobile devices and those using slower network connections can still access the information they need.

    Our work to minimise page weights focused on purely technical changes, made by our developer following recommendations from tools such as Google Lighthouse, which reduced the size of the webpages in a key user journey by up to 80%. This resulted in pages downloading up to 30% faster and the carbon footprint of the journey being reduced by 80%.

    We hadn’t set out to reduce the carbon footprint, but seeing these results led to my eureka moment.

    I realised that by minimising page weights, you improve performance (which is a win for users and service owners) and also consume less energy (due to needing to transfer and store less data), creating additional benefits for the planet — so everyone wins.
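    The link between data transferred and emissions can be roughly quantified. The sketch below uses the Sustainable Web Design model that tools like Website Carbon Calculator are based on; the constants (about 0.81 kWh of energy per GB transferred, and a global average grid intensity of about 442 gCO2e per kWh) are published estimates that vary by model version and region, so treat the output as indicative rather than exact:

    ```javascript
    // Rough CO2-per-page-view estimate using published constants from
    // the Sustainable Web Design model (estimates, not measurements).
    const KWH_PER_GB = 0.81;
    const GRID_G_CO2_PER_KWH = 442;

    function estimateCO2Grams(pageBytes) {
      const gigabytes = pageBytes / (1024 ** 3);
      return gigabytes * KWH_PER_GB * GRID_G_CO2_PER_KWH;
    }

    // The model is linear in bytes, so cutting page weight by 80%
    // cuts the estimated emissions per view by 80%, too.
    const before = estimateCO2Grams(2.5 * 1024 ** 2); // a 2.5 MB page
    const after = estimateCO2Grams(0.5 * 1024 ** 2);  // a 0.5 MB page
    console.log(before.toFixed(3), after.toFixed(3)); // → 0.874 0.175
    ```

    Multiply the per-view figure by the page views in your analytics to estimate the footprint of a whole journey.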

    This felt like a breakthrough because business, user, and environmental requirements are often at odds with one another. By focusing on minimising websites so that they are as simple, lightweight, and easy to use as possible, you get benefits that extend beyond the triple bottom line of people, planet, and profit to include performance and purpose.

    So why is ‘minimising’ such a great digital sustainability strategy?

    • Profit
      Website providers win because their website becomes more efficient and more likely to meet its intended outcomes, and a lighter site should also lead to lower hosting bills.
    • People
      People win because they get to use a website that downloads faster, is quick and easy to use because it’s been intentionally designed to be as simple as possible, enabling them to complete their tasks with the minimum amount of effort and mental energy.
    • Performance
      Lightweight webpages download faster so perform better for users, particularly those on older devices and on slower network connections.
    • Planet
      The planet wins because the amount of energy (and associated emissions) that is required to deliver the website is reduced.
    • Purpose
      We know that we do our best work when we feel a sense of purpose. It is hugely gratifying as a digital professional to know that our work is doing good in the world and contributing to making things better for people and the environment.

    In order to prioritise the environment, we need to be able to speak confidently in a language that will resonate with the business and ensure that any investment in time and resources yields the widest range of benefits possible.

    So even if you feel that the environment is a very low priority on your projects, focusing on minimising page weights to improve performance (which is generally high on the agenda) presents the perfect Trojan horse for an environmental agenda (should you need one).

    Doing the right thing isn’t always easy, but we’ve done it before when managing to prioritise issues such as usability, accessibility, and inclusion on digital projects.

    Many of the things that make websites easier to use, more accessible, and more effective also help to minimise their environmental impact. The work involved will feel familiar and achievable, so don’t worry about this being yet another new thing to learn!

    So this all makes sense in theory, but what’s the master plan to use when putting it into practice?

    The Masterplan

    The masterplan for creating websites that have minimal environmental impact is to focus on offering the maximum value from the minimum input of energy.

    It’s an adaptation of Buckminster Fuller’s ‘Dymaxion’ principle, one of his many progressive and groundbreaking sustainability strategies for living and surviving on a planet with finite resources.

    Inputs of energy include both the electrical energy that is required to operate websites and also the mental energy that is required to use them.

    You can achieve this by minimising websites to their core content, features, and functionality, ensuring that everything can be justified from the perspective of meeting a business or user need. This means that anything that isn’t adding a proportional amount of value to the amount of energy it requires to provide it should be removed.

    So that’s the masterplan, but how do you put it into practice?

    Decarbonise Your Highest Value User Journeys

    I’ve developed a new approach called ‘Decarbonising User Journeys’ that will help you to minimise the environmental impact of your website and maximise its performance.

    Note: The approach deliberately focuses on optimising key user journeys and not entire websites to keep things manageable and to make it easier to get started.

    The secret here is to start small, demonstrate improvements, and then scale.

    The approach consists of five simple steps:

    1. Identify your highest value user journey,
    2. Benchmark your user journey,
    3. Set targets,
    4. Decarbonise your user journey,
    5. Track and share your progress.

    Here’s how it works.

    Step 1: Identify Your Highest Value User Journey

    Your highest value user journey might be the one that your users value the most, the one that brings you the highest revenue, or the one that is fundamental to the success of your organisation.

    You could also focus on a user journey that you know is performing particularly badly and has the potential to deliver significant business and user benefits if improved.

    You may have lots of important user journeys, and it’s fine to decarbonise multiple journeys in parallel if you have the resources, but I’d recommend starting with one first to keep things simple.

    To bring this to life, let’s consider a hypothetical example of a premiership football club trying to decarbonise its online ticket-buying journey that receives high levels of traffic and is responsible for a significant proportion of its weekly income.

    Step 2: Benchmark Your User Journey

    Once you’ve selected your user journey, you need to benchmark it in terms of how well it meets user needs, the value it offers your organisation, and its carbon footprint.

    It is vital that you understand the job it needs to do and how well it is doing it before you start to decarbonise it. There is no point in removing elements of the journey in an effort to reduce its carbon footprint, for example, if you compromise its ability to meet a key user or business need.

    You can benchmark how well your user journey is meeting user needs by conducting user research alongside analysing existing customer feedback. Interviews with business stakeholders will help you to understand the value that your journey is providing the organisation and how well business needs are being met.

    You can benchmark the carbon footprint and performance of your user journey using online tools such as Cardamon, Ecograder, Website Carbon Calculator, Google Lighthouse, and Bioscore. Make sure you have your analytics data to hand to help get the most accurate estimate of your footprint.

    To use these tools, simply add the URL of each page of your journey, and they will give you a range of information such as page weight, energy rating, and carbon emissions. Google Lighthouse works slightly differently: it runs from Chrome’s developer tools (or the command line) and generates a really useful and detailed performance report rather than a carbon rating.
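    If you prefer to script your benchmarking, Lighthouse can be run headlessly (for example, npx lighthouse <url> --output=json), and the headline numbers pulled out of the resulting report. A minimal sketch, assuming the report shape of recent Lighthouse versions (the total-byte-weight audit in bytes, and a 0–1 performance score):

    ```javascript
    // Summarise a Lighthouse JSON report: page weight and performance
    // score. Field names follow the Lighthouse report format; check
    // them against the version you are running.
    function summariseReport(report) {
      const bytes = report.audits['total-byte-weight'].numericValue;
      return {
        pageWeightKB: Math.round(bytes / 1024),
        performanceScore: Math.round(report.categories.performance.score * 100),
      };
    }

    // Stubbed report object for illustration:
    const summary = summariseReport({
      audits: { 'total-byte-weight': { numericValue: 1843200 } },
      categories: { performance: { score: 0.62 } },
    });
    console.log(summary); // → { pageWeightKB: 1800, performanceScore: 62 }
    ```

    Running this across every page of the journey gives you the per-page breakdown for your benchmarking report.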

    A great way to bring your benchmarking scores to life is to visualise them in a similar way to how you would present a customer journey map or service blueprint.

    This example focuses on just communicating the carbon footprint of the user journey, but you can also add more swimlanes to communicate how well the journey is performing from a user and business perspective, too, adding user pain points, quotes, and business metrics where appropriate.

    I’ve found that adding the energy efficiency ratings is really effective because it’s an approach that people recognise from their household appliances. This adds a useful context to just showing the weights (such as grams or kilograms) of CO2, which are generally meaningless to people.

    Within my benchmarking reports, I also add a set of benchmarking data for every page within the user journey. This gives your stakeholders a more detailed breakdown and a simple summary alongside a snapshot of the benchmarked page.

    Your benchmarking activities will give you a really clear picture of where remedial work is required from an environmental, user, and business point of view.

    In our football user journey example, it’s clear that the ‘News’ and ‘Tickets’ pages need some attention to reduce their carbon footprint, so they would be a sensible priority for decarbonising.

    Step 3: Set Targets

    Use your benchmarking results to help you set targets to aim for, such as a carbon budget, energy efficiency, maximum page weight, and minimum Google Lighthouse performance targets for each individual page, in addition to your existing UX metrics and business KPIs.

    There is no right or wrong way to set targets. Choose what feels viable for your business; you’ll only learn how realistic your targets are once you begin to decarbonise your user journeys.

    Setting targets is important because it gives you something to aim for and keeps you focused and accountable. The quantitative nature of this work is great because it gives you the ability to quickly demonstrate the positive impact of your work, making it easier to justify the time and resources you are dedicating to it.
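    One lightweight way to make such targets enforceable is Lighthouse’s performance budget file. The budget.json sketch below uses hypothetical targets for our football club example (resource sizes in KB, timings in milliseconds; format per Lighthouse’s budgets feature, so verify it against the version you run):

    ```json
    [
      {
        "path": "/*",
        "resourceSizes": [
          { "resourceType": "total", "budget": 500 },
          { "resourceType": "image", "budget": 200 },
          { "resourceType": "script", "budget": 125 }
        ],
        "timings": [
          { "metric": "interactive", "budget": 5000 }
        ]
      }
    ]
    ```

    Recent Lighthouse versions accept a file like this from the command line (via the --budget-path option) and flag any page that goes over budget in the report.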

    Step 4: Decarbonise Your User Journey

    Your objective now is to decarbonise your user journey by minimising page weights, improving your Lighthouse performance rating, and minimising pages so that they meet both user and business needs in the most efficient, simple, and effective way possible.

    It’s up to you how you approach this, depending on the resources and skills that you have: you can focus on specific pages, or address a specific problem area, such as heavyweight images or videos, across the entire user journey.

    Here’s a list of activities that will all help to reduce the carbon footprint of your user journey:

    • Work through the recommendations in the ‘diagnostics’ section of your Google Lighthouse report to help optimise page performance.
    • Switch to a green hosting provider if you are not already using one. Use the Green Web Directory to help you choose one.
    • Work through the W3C Web Sustainability Guidelines, implementing the most relevant guidelines to your specific user journey.
    • Remove anything that is not adding any user or business value.
    • Reduce the amount of information on your webpages to make them easier to read and less overwhelming for people.
    • Replace content with a lighter-weight alternative (such as swapping a video for text) if the lighter-weight alternative provides the same value.
    • Optimise assets such as photos, videos, and code to reduce file sizes.
    • Remove any barriers to accessing your website and any distractions that are getting in the way.
    • Re-use familiar components and design patterns to make your websites quicker and easier to use.
    • Write simply and clearly in plain English to help people get the most value from your website and to help them avoid making mistakes that waste time and energy to resolve.
    • Fix any usability issues you identified during your benchmarking to ensure that your website is as easy to use and useful as possible.
    • Ensure your user journey is as accessible as possible so the widest possible audience can benefit from using it, offsetting the environmental cost of providing the website.
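    To make the “optimise assets” point concrete, modern HTML can offer the browser a choice of formats and sizes and defer offscreen images, with no JavaScript needed. A sketch (the file names and dimensions are hypothetical):

    ```html
    <!-- Serve a modern format where supported, a smaller file on small
         screens, and defer loading until the image nears the viewport. -->
    <picture>
      <source type="image/avif" srcset="stadium-800.avif 800w, stadium-1600.avif 1600w">
      <source type="image/webp" srcset="stadium-800.webp 800w, stadium-1600.webp 1600w">
      <img src="stadium-800.jpg"
           srcset="stadium-800.jpg 800w, stadium-1600.jpg 1600w"
           sizes="(max-width: 800px) 100vw, 800px"
           width="800" height="450"
           loading="lazy" decoding="async"
           alt="Fans arriving at the stadium">
    </picture>
    ```

    The width and height attributes also prevent layout shift, which helps your Lighthouse performance score at the same time.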

    Step 5: Track And Share Your Progress

    As you decarbonise your user journeys, use the benchmarking tools from step 2 to track your progress against the targets you set in step 3 and share your progress as part of your wider sustainability reporting initiatives.

    All being well at this point, you will have the numbers to demonstrate how the performance of your user journey has improved and also how you have managed to reduce its carbon footprint.

    Share these results with the business as soon as you have them to help you secure the resources to continue the work and initiate similar work on other high-value user journeys.

    You should also start to communicate your progress with your users.

    It’s important that they are made aware of the carbon footprint of their digital activity and empowered to make informed choices about the environmental impact of the websites that they use.

    Ideally, every website should communicate the emissions generated from viewing their pages to help people make these informed choices and also to encourage website providers to minimise their emissions if they are being displayed publicly.

    Often, people will have no choice but to use a specific website to complete a specific task, so it is the responsibility of the website provider to ensure the environmental impact of using their website is as small as possible.

    You can also help to raise awareness of the environmental impact of websites and what you are doing to minimise your own impact by publishing a digital sustainability statement, such as Unilever’s, as shown below.

    A good digital sustainability statement should acknowledge the environmental impact of your website, what you have done to reduce it, and what you plan to do next to minimise it further.

    As an industry, we should normalise publishing digital sustainability statements in the same way that accessibility statements have become a standard addition to website footers.

    Useful Decarbonising Principles

    Keep these principles in mind to help you decarbonise your user journeys:

    • More doing and less talking.
      Start decarbonising your user journeys as soon as possible to accelerate your learning and positive change.
    • Start small.
      Starting small by decarbonising an individual journey makes it easier to get started and generates results to demonstrate value faster.
    • Aim to do more with less.
      Minimise what you offer to ensure you are providing the maximum amount of value for the energy you are consuming.
    • Make your website as useful and as easy to use as possible.
      Useful websites can justify the energy they consume to provide them, ensuring they are net positive in terms of doing more good than harm.
    • Focus on progress over perfection.
      Websites are never finished or perfect but they can always be improved, every small improvement you make will make a difference.

    Start Decarbonising Your User Journeys Today

    Decarbonising user journeys shouldn’t be done as a one-off, reserved for the next time that you decide to redesign or replatform your website; it should happen on a continual basis as part of your broader digital sustainability strategy.

    We know that websites are never finished and that the best websites continually improve as both user and business needs change. I’d like to encourage people to adopt the same mindset when it comes to minimising the environmental impact of their websites.

    Decarbonising will happen most effectively when digital professionals challenge themselves on a daily basis to ‘minimise’ the things they are working on.

    This avoids building ‘carbon debt’ that consists of compounding technical and design debt within our websites, which is always harder to retrospectively remove than avoid in the first place.

    By taking a pragmatic approach, such as optimising high-value user journeys and aligning with business metrics such as performance, we stand the best possible chance of making digital sustainability a priority.

    You’ll have noticed that, other than using website carbon calculator tools, this approach doesn’t require any skills that don’t already exist within typical digital teams today. This is great because it means you’ve already got the skills that you need to do this important work.

    I would encourage everyone to raise the issue of the environmental impact of the internet in their next team meeting and to try this decarbonising approach to create better outcomes for people, profit, performance, purpose, and the planet.

    Good luck!

  • SerpApi: A Complete API For Fetching Search Engine Data

    This article is a sponsored by SerpApi

    SerpApi leverages the power of search engine giants, like Google, DuckDuckGo, Baidu, and more, to put together the most pertinent and accurate search result data for your users from the comfort of your app or website. It’s customizable, adaptable, and offers an easy integration into any project.

    What do you want to put together?

    • Search information on a brand or business for SEO purposes;
    • Input data to train AI models, such as a large language model for a customer service chatbot;
    • Top news and websites to pick from for a subscriber newsletter;
    • Google Flights API: collect flight information for your travel app;
    • Price comparisons for the same product across different platforms;
    • Extra definitions and examples for words that can be offered alongside a language learning app.

    The list goes on.

    In other words, you get to leverage the most comprehensive source of data on the internet for any number of needs, from competitive SEO research and tracking news to parsing local geographic data and even completing personal background checks for employment.

    Start With A Simple GET Request

    The results from the search API are only a URL request away for those who want a super quick start. Just add your search details in the URL parameters. Say you need the search result for “Stone Henge” from the location “Westminster, England, United Kingdom” in language “en-GB”, and country of search origin “uk” from the domain “google.co.uk”. Here’s how simple it is to put the GET request together:
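    A sketch of that request, assembled in JavaScript; the parameter names (q, location, hl, gl, google_domain, api_key) follow SerpApi’s documented Google Search API, and the key is a placeholder:

    ```javascript
    // Assemble the GET request for the "Stone Henge" search described
    // above. Replace YOUR_API_KEY with a real SerpApi key.
    const params = new URLSearchParams({
      engine: 'google',
      q: 'Stone Henge',
      location: 'Westminster, England, United Kingdom',
      hl: 'en-GB',
      gl: 'uk',
      google_domain: 'google.co.uk',
      api_key: 'YOUR_API_KEY',
    });
    const url = `https://serpapi.com/search.json?${params}`;
    console.log(url);
    ```

    Fetching that URL returns the structured JSON for the search results page.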

    Then there’s the impressive list of libraries that seamlessly integrate the APIs into mainstream programming languages and frameworks such as JavaScript, Ruby, .NET, and more.

    Give It A Quick Try

    Want to give it a spin? Sign up and start for free, or tinker with the SerpApi’s live playground without signing up. The playground allows you to choose which search engine to target, and you can fill in the values for all the basic parameters available in the chosen API to customize your search. On clicking “Search”, you get the search result page and its extracted JSON data.

    If you need to get a feel for the full API first, you can explore their easy-to-grasp web documentation before making any decision. You have the chance to work with all of the APIs to your satisfaction before committing, and when that time comes, SerpApi’s multiple price plans cover everything from an economical few hundred searches a month to bulk queries fit for large corporations.

    What Data Do You Need?

    Beyond the rudimentary search scraping, SerpApi provides a range of configurations, features, and additional APIs worth considering.

    Geolocation

    Capture the global trends, or refine down to more localized particulars by names of locations or Google’s place identifiers. SerpApi’s optimized routing of requests ensures accurate retrieval of search result data from any location worldwide. If locations themselves are the answers to your queries — say, a cycle trail to be suggested in a fitness app — those can be extracted and presented as maps using SerpApi’s Google Maps API.

    Structured JSON

    Although search engines reveal results in a tidy user interface, pulling that data into your application could leave you with a large data dump to sift through — but not if you’re using SerpApi.

    SerpApi pulls data in a well-structured JSON format, even for the popular kinds of enriched search results, such as knowledge graphs, review snippets, sports league stats, ratings, product listings, AI overview, and more.
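    To give a feel for that structure, here is a sketch of pulling the organic results out of a response. The field names (organic_results, position, title, link) follow SerpApi’s documented response format, and the response object below is a stub rather than a live result:

    ```javascript
    // Extract position, title, and link from each organic result.
    function extractOrganicResults(response) {
      return (response.organic_results || []).map(({ position, title, link }) => ({
        position,
        title,
        link,
      }));
    }

    // Stubbed response for illustration:
    const results = extractOrganicResults({
      organic_results: [
        { position: 1, title: 'Stonehenge | English Heritage', link: 'https://www.english-heritage.org.uk/', snippet: '...' },
      ],
    });
    console.log(results[0].title); // → Stonehenge | English Heritage
    ```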

    Speedy Results

    SerpApi’s baseline performance delivers timely search data for real-time requirements. But what if you need more? SerpApi’s Ludicrous Speed option, easily enabled from the dashboard with an upgrade, provides response times more than twice as fast as usual, thanks to twice the server power.

    There’s also Ludicrous Speed Max, which allocates four times more server resources for your data retrieval. Time-sensitive data for real-time monitoring, such as live sports scores and tracked product prices, loses its value if it is not handled promptly. Ludicrous Speed Max guarantees no delays, even for a large-scale enterprise haul.

    You can also use the relevant SerpApi API for your category, like Google Flights API, Amazon API, Google News API, etc., to home in on fresh and apt results.

    If you don’t need the full depth of the search API, there’s a Light version available for Google Search, Google Images, Google Videos, Google News, and DuckDuckGo Search APIs.

    Search Controls & Privacy

    Need the results asynchronously picked up? Want a refined output using advanced search API parameters and a JSON Restrictor? Looking for search outcomes for specific devices? Don’t want auto-corrected query results? There’s no shortage of ways to configure SerpApi to get exactly what you need.

    Additionally, if you prefer not to have your search metadata on their servers, simply turn on the “ZeroTrace” mode that’s available for selected plans.

    The X-Ray

    Save yourself the headache of trying to match what you see on a search results page against its extracted JSON data. SerpApi’s X-Ray tool shows you exactly where each piece of data comes from. It’s available for free in all plans.

    Inclusive Support

    If you don’t have the expertise or resources for tackling the validity of scraping search results, here’s what SerpApi says:

    “SerpApi, LLC assumes scraping and parsing liabilities for both domestic and foreign companies unless your usage is otherwise illegal”.

    You can reach out and have a conversation with them regarding the legal protections they offer, as well as inquire about anything else you might want to know about including SerpApi in your project, such as pricing, expected performance, on-demand options, and technical support. Just drop a message at their contact page.

    In other words, the SerpApi team has your back with the support and expertise to get the most from your fetched data.

    Try SerpApi Free

    That’s right, you can get your hands on SerpApi today and start fetching data with absolutely no commitment, thanks to a free starter plan that gives you up to 250 free search queries. Give it a try and then bump up to one of the reasonably-priced monthly subscription plans with generous search limits.

  • Functional Personas With AI: A Lean, Practical Workflow

    Traditional personas suck for UX work. They obsess over marketing metrics like age, income, and job titles while missing what actually matters in design: what people are trying to accomplish.

    Functional personas, on the other hand, focus on what people are trying to do, not who they are on paper. With a simple AI‑assisted workflow, you can build and maintain personas that actually guide design, content, and conversion decisions.

    • Keep users front of mind with task‑driven personas,
    • Skip fragile demographics; center on goals, questions, and blockers,
    • Use AI to process your messy inputs fast and fill research gaps,
    • Validate lightly, ship confidently, and keep them updated.

    In this article, I want to breathe new life into a stale UX asset.

    For too long, many of us have created personas, despite the considerable work that goes into them, only to find they have limited usefulness.

    I know that many of you may have given up on them entirely, but I am hoping in this post to encourage you that it is possible to create truly useful personas in a lightweight way.

    Why Personas Still Matter

    Personas give you a shared lens. When everyone uses the same reference point, you cut debate and make better calls. For UX designers, developers, and digital teams, that shared lens keeps you from designing in silos and helps you prioritize work that genuinely improves the experience.

    I use personas as a quick test: Would this change help this user complete their task faster, with fewer doubts? If the answer is no (or a shrug), it’s probably a sign the idea isn’t worth pursuing.

    From Demographics To Function

    Traditional personas tell you someone’s age, job title, or favorite brand. That makes a nice poster, but it rarely changes design or copy.

    Functional personas flip the script. They describe:

    • Goals & tasks: What the person is here to achieve.
    • Questions & objections: What they need to know before they act.
    • Touchpoints: How the person interacts with the organization.
    • Service gaps: How the company might be letting this persona down.

    When you center on tasks and friction, you get direct lines from user needs to UI decisions, content, and conversion paths.

    But remember, this list isn’t set in stone — adapt it to what’s actually useful in your specific situation.

    One of the biggest problems with traditional personas was following a rigid template regardless of whether it made sense for your project. We must not fall into that same mistake with functional personas.

    The Benefits of Functional Personas

    For small startups, functional personas reduce wasted effort. For enterprise teams, they keep sprawling projects grounded in what matters most.

    However, because of the way we are going to produce our personas, they provide certain benefits in either case:

    • Lighten the load: They’re easier to update without large research cycles.
    • Stay current: Because they are easy to produce, we can update them more often.
    • Tie to outcomes: Tasks, objections, and proof points map straight to funnels, flows, and product decisions.

    We can deliver these benefits because we are going to use AI to help us, rather than carrying out a lot of time-consuming new research.

    How AI Helps Us Get There

    Of course, doing fresh research is always preferable. But in many cases, it is not feasible due to time or budget constraints. I would argue that using AI to help us create personas based on existing assets is preferable to having no focus on users at all.

    AI tools can chew through the inputs you already have (surveys, analytics, chat logs, reviews) and surface patterns you can act on. They also help you scan public conversations around your product category to fill gaps.

    I therefore recommend using AI to:

    • Synthesize inputs: Turn scattered notes into clean themes.
    • Spot segments by need: Group people by jobs‑to‑be‑done, not demographics.
    • Draft quickly: Produce first‑pass personas and sample journeys in minutes.
    • Iterate with stakeholders: Update on the fly as you get feedback.

    AI doesn’t remove the need for traditional research. Rather, it is a way of extracting more value from the scattered insights into users that already exist within an organization or online.

    The Workflow

    Here’s how to move from scattered inputs to usable personas. Each step builds on the last, so treat it as a cycle you can repeat as projects evolve.

    1. Set Up A Dedicated Workspace

    Create a dedicated space within your AI tool for this work. Most AI platforms offer project management features that let you organize files and conversations:

    • In ChatGPT and Claude, use “Projects” to store context and instructions.
    • In Perplexity, Gemini, and Copilot, similar functionality is referred to as “Spaces.”

    This project space becomes your central repository where all uploaded documents, research data, and generated personas live together. The AI will maintain context between sessions, so you won’t have to re-upload materials each time you iterate. This structured approach makes your workflow more efficient and helps the AI deliver more consistent results.

    2. Write Clear Instructions

    Next, brief your AI project so that it understands what you want from it. For example:

    “Act as a user researcher. Create realistic, functional personas using the project files and public research. Segment by needs, tasks, questions, pain points, and goals. Show your reasoning.”

    Asking for a rationale gives you a paper trail you can defend to stakeholders.

    3. Upload What You’ve Got (Even If It’s Messy)

    This is where things get really powerful.

    Upload everything (and I mean everything) you can put your hands on relating to the user. Old surveys, past personas, analytics screenshots, FAQs, support tickets, review snippets; dump them all in. The more varied the sources, the stronger the triangulation.

    4. Run Focused External Research

    Once you have done that, you can supplement that data by getting AI to carry out “deep research” about your brand. Have AI scan recent (I often focus on the last year) public conversations for your brand, product space, or competitors. Look for:

    • Who’s talking and what they’re trying to do;
    • Common questions and blockers;
    • Phrases people use (great for copywriting).

    Save the report you get back into your project.

    5. Propose Segments By Need

    Once you have done that, ask AI to suggest segments based on tasks and friction points (not demographics). Push back until each segment is distinct, observable, and actionable. If two would behave the same way in your flow, merge them.

    This takes a little bit of trial and error and is where your experience really comes into play.

    6. Generate Draft Personas

    Now you have your segments, the next step is to draft your personas. Use a simple template so the document is read and used. If your personas become too complicated, people will not read them. Each persona should:

    • State goals and tasks,
    • List objections and blockers,
    • Highlight pain points,
    • Show touchpoints,
    • Identify service gaps.

    Below is a sample template you can work with:

    # Persona Title: e.g. Savvy Shopper
    - Person's Name: e.g. John Smith.
    - Age: e.g. 24
    - Job: e.g. Social Media Manager
    
    "A quote that sums up the persona's general attitude"
    
    ## Primary Goal
    What they’re here to achieve (1–2 lines).
    
    ## Key Tasks
    • Task 1
    • Task 2
    • Task 3
    
    ## Questions & Objections
    • What do they need to know before they act?
    • What might make them hesitate?
    
    ## Pain Points
    • Where do they get stuck?
    • What feels risky, slow, or confusing?
    
    ## Touchpoints
    • What channels are they most commonly interacting with?
    
    ## Service Gaps
    • How is the organization currently failing this persona?
    

    Remember, you should customize this to reflect what will prove useful within your organization.

    7. Validate

    It is important to validate that what the AI has produced is realistic. Obviously, no persona is a true representation, as it is a snapshot in time of a hypothetical user. However, we do want it to be as accurate as possible.

    Share your drafts with colleagues who interact regularly with real users, such as people in support roles or research teams. Where possible, test with a handful of users. Then cut anything you can’t defend, and correct any errors that are identified.

    Troubleshooting & Guardrails

    As you work through the above process, you will encounter problems. Here are common pitfalls and how to avoid them:

    • Too many personas?
      Merge until each one changes a design or copy decision. Three strong personas beat seven weak ones.
    • Stakeholder wants demographics?
      Only include details that affect behavior. Otherwise, leave them out. Suggest separate personas for other functions (such as marketing).
    • AI hallucinations?
      Always ask for a rationale or sources. Cross‑check with your own data and customer‑facing teams.
    • Not enough data?
      Mark assumptions clearly, then validate with quick interviews, surveys, or usability tests.

    Making Personas Useful In Practice

    The most important thing to remember is to actually use your personas once they’ve been created. They can easily become forgotten PDFs rather than active tools. Instead, personas should shape your work and be referenced regularly. Here are some ways you can put personas to work:

    • Navigation & IA: Structure menus by top tasks.
    • Content & Proof: Map objections to FAQs, case studies, and microcopy.
    • Flows & UI: Streamline steps to match how people think.
    • Conversion: Match CTAs to personas’ readiness, goals, and pain points.
    • Measurement: Track KPIs that map to personas, not vanity metrics.

    With this approach, personas evolve from static deliverables into dynamic reference points your whole team can rely on.

    Keep Them Alive

    Treat personas as a living toolkit. Schedule a refresh every quarter or after major product changes. Rerun the research pass, regenerate summaries, and archive outdated assumptions. The goal isn’t perfection; it’s keeping them relevant enough to guide decisions.

    Bottom Line

    Functional personas are faster to build, easier to maintain, and better aligned with real user behavior. By combining AI’s speed with human judgment, you can create personas that don’t just sit in a slide deck; they actively shape better products, clearer interfaces, and smoother experiences.

  • Creating Elastic And Bounce Effects With Expressive Animator

    This article is sponsored by Expressive

    In the world of modern web design, SVG images are used everywhere, from illustrations to icons to background effects, and are universally prized for their crispness and lightweight size. While static SVG images play an important role in web design, most of the time their true potential is unlocked only when they are combined with motion.

    Few things add more life and personality to a website than a well-executed SVG animation. But not all animations contribute equally to the experience. Elastic and bounce effects, for example, have a unique appeal in motion design because they bring a sense of physical realism to movement, making animations more engaging and memorable.


    However, anyone who has dived into animating SVGs knows the technical hurdles involved. Creating a convincing elastic or bounce effect traditionally requires writing complex CSS keyframes or wrestling with JavaScript animation libraries. Even an SVG animation editor will most likely require you to manually add keyframes and adjust the easing functions between them, which can become a time-consuming process of trial and error, no matter your level of experience.
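    To give a sense of the math a manual approach involves, here is a minimal sketch of an elastic “ease-out” curve in TypeScript. The function name and the `oscillations`/`stiffness` parameters are illustrative (chosen to echo the knobs exposed in easing dialogs), not any particular library’s actual API:

    ```typescript
    // A hand-rolled elastic ease-out curve: starts at 0, overshoots the
    // target value (1), and oscillates around it with exponentially
    // decaying amplitude. `oscillations` controls how many times the
    // value swings past the target; `stiffness` controls how quickly
    // the spring settles.
    function elasticOut(t: number, oscillations = 4, stiffness = 2.5): number {
      if (t <= 0) return 0; // clamp before the animation starts
      if (t >= 1) return 1; // clamp once the animation ends
      const decay = Math.exp(-stiffness * t);             // shrinking amplitude
      const swing = Math.cos(oscillations * Math.PI * t); // back-and-forth motion
      return 1 - decay * swing;
    }
    ```

    Sampling a curve like this per frame, or baking it into a long list of keyframes, is exactly the trial-and-error tuning the paragraph above describes.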

    This is where Expressive Animator shines. It allows creators to apply elastic and bounce effects in seconds, bypassing the tedious work of manual keyframe editing. And the result is always exceptional: animations that feel alive, produced with a fraction of the effort.

    Using Expressive Animator To Create An Elastic Effect

    Creating an elastic effect in Expressive Animator is remarkably simple, fast, and intuitive, since the effect is built right into the software as an easing function. This means you only need two keyframes (start and end) to make the effect, and the software will automatically handle the springy motion in between. Even better, the elastic easing can be applied to any animatable property (e.g., position, scale, rotation, opacity, morph, etc.), giving you a consistent way to add it to your animations.

    Before we dive into the tutorial, take a look at the video below to see what you will learn to create and the entire process from start to finish.

    Once you hit the “Create project” button, you can use the Pen and Ellipse tools to create the artwork that will be animated, or you can simply copy and paste the artwork below.

    Press the A key on your keyboard to switch to the Node tool, then select the String object and move its handle to the center-right point of the artboard. Don’t worry about precision, as the snapping will do all the heavy lifting for you. This will bend the shape and add keyframes for the Morph animator.

    Next, press the V key on your keyboard to switch to the Selection tool. With this tool enabled, select the Ball, move it to the right, and place it in the middle of the string. Once again, snapping will do all the hard work, allowing you to position the ball exactly where you want to, while auto-recording automatically adds the appropriate keyframes.

    You can now replay the animation and disable auto-recording by clicking on the Auto-Record button again.

    As you can see when replaying, the String and Ball objects move in the wrong direction. Fortunately, this is easy to fix by reversing the keyframes: select the keyframes in the timeline, right-click to open the context menu, and choose Reverse. If you replay the animation now, you will see that the direction is correct.

    With this out of the way, we can finally add the elastic effect. Select all the keyframes in the timeline and click on the Custom easing button to open a dialog with easing options. From the dialog, choose Elastic and set the oscillations to 4 and the stiffness to 2.5.

    That’s it! Click anywhere outside the easing dialog to close it and replay the animation to see the result.

    The animation can be exported as well. Press Cmd/Ctrl + E on your keyboard to open the export dialog and choose from various export options, ranging from vectorized formats, such as SVG and Lottie, to rasterized formats, such as GIF and video.

    For this specific animation, we’re going to choose the SVG export format. Expressive Animator allows you to choose between three different types of SVG, depending on the technology used for animation: SMIL, CSS, or JavaScript.

    Each of these technologies has different strengths and weaknesses, but for this tutorial, we are going to choose SMIL. This is because SMIL-based animations are widely supported, even on Safari browsers, and can be used as background images or embedded in HTML pages using the <img> tag. In fact, Andy Clarke recently wrote all about SMIL animations here at Smashing Magazine if you want a full explanation of how it works.
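    If you haven’t seen SMIL before, here is a rough, hand-written illustration of the kind of markup involved. This is only a sketch with arbitrary attribute values; Expressive Animator’s actual export will be more elaborate:

    ```xml
    <svg xmlns="http://www.w3.org/2000/svg" width="120" height="120" viewBox="0 0 120 120">
      <circle cx="20" cy="60" r="10" fill="crimson">
        <!-- SMIL: animate the circle's cx attribute from left to right, forever -->
        <animate attributeName="cx" from="20" to="100" dur="1s" repeatCount="indefinite" />
      </circle>
    </svg>
    ```

    Because the animation lives inside the SVG file itself, it keeps playing even when the file is loaded through an <img> tag or used as a background image.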

    You can visualize the exported SVG in the following CodePen demo:

    Conclusion

    Elastic and bounce effects have long been among the most desirable but time-consuming techniques in motion design. By integrating them directly into its easing functions, Expressive Animator removes the complexity of manual keyframe manipulation and transforms what used to be a technical challenge into a creative opportunity.

    The best part is that getting started with Expressive Animator comes with zero risk. The software offers a full 7-day free trial without requiring an account, so you can download it instantly and begin experimenting with your own designs right away. After the trial ends, you can buy Expressive Animator with a one-time payment, no subscription required. This will give you a perpetual license covering both Windows and macOS.

    To help you get started even faster, I’ve prepared some extra resources for you. You’ll find the source files for the animations created in this tutorial, along with a curated list of useful links that will guide you further in exploring Expressive Animator and SVG animation. These materials are meant to give you a solid starting point so you can learn, experiment, and build on your own with confidence.

  • AI Infra Platform Market Report: JD Cloud Holds a Firm Top-Three Position

    Recently, CCID Consulting released the 2025 China AI Infra Platform Market Research Report. On the strength of its technical innovation and practical results in areas such as heterogeneous compute scheduling and GPU pooling management, JD Cloud placed in the “2024 China AI Infra Platform Compute Management Layer Market Vendor Competitiveness…
  • How does AutoMQ implement a sub-10ms latency Diskless Kafka?

    Abstract: Running Apache Kafka in the cloud is constrained by three fundamental engineering challenge
  • Cross-Platform Development Map: A Client-Side Technology Selection Guide | December 2025

    Every month, Lao Liu draws the latest cross-platform technology selection map to help you make decisions quickly. This month, many cross-platform frameworks received minor updates.
  • Android AI Unlocking Productivity (Part 7): A Preview of Richer AI Applications

    1. More skills. Developers can distill skills from a project and share them with the whole team as a reusable capability. For example: a code review skill, a Compose performance-check skill, a slow-function detection skill…
  • How to Manage Bundle IDs, Certificates, and Provisioning Profiles When Publishing iOS Apps with HBuilder

    An engineering-practice look at the real workflow of publishing an iOS app with HBuilder, combined with Appuploader usage scenarios, sharing a more controllable release process.