By Paul Boag
I've witnessed this scenario countless times: organizations invest substantial resources into launching a new website, only to neglect post-launch optimization once it goes live. This oversight is a critical mistake that can significantly undermine the long-term success of your digital presence. Today, let's delve into the crucial topic of post-launch optimization and explore why many organizations are failing to capitalize on this essential process.
The Post-Launch Optimization Challenge
When it comes to post-launch website management, many organizations typically fall into one of two categories:
Both of these approaches miss a critical opportunity in the post-launch phase: leveraging real-world data to enhance the website's effectiveness in meeting organizational objectives. This oversight isn't just a minor misstep—it's a significant waste of potential and a failure to implement proper post-launch optimization techniques.
Consider this: it's only after a website goes live that we begin gathering authentic data about user interactions. This post-launch data is invaluable for improving your site's performance through targeted optimization efforts, yet it often goes unused.
Harnessing the Power of Post-Launch Data for Optimization
Let's explore an example of how we can leverage post-launch data for effective optimization. Tools like Microsoft Clarity can provide crucial insights into user behavior, enabling targeted post-launch optimization. Through these tools, we can identify:
All these behaviors serve as red flags, indicating potential issues with specific pages or user flows that require attention in your post-launch optimization efforts. However, identifying these problems is just the initial step. The real challenge—and the key to success—lies in how we process and act on this valuable information to continuously improve our website's performance.
Microsoft Clarity is invaluable for identifying areas for improvement post-launch.
From Data to Action: A Structured Approach
Once we've pinpointed problem areas using tools like heat maps and session recordings, we need a systematic way to form hypotheses and test improvements. This is where many organizations falter. They either ignore the data entirely or make knee-jerk changes based on individual opinions rather than evidence.
What we need is a structured process for managing and prioritizing potential improvements. To help with this, I've created a Notion template that demonstrates how to structure a post-launch optimization backlog. You can access this template here. I've also recorded a video walking through the process.
Now, let me walk you through my approach to this challenge:
1. Idea Submission
Create a system where anyone in the organization can submit improvement proposals. Each proposal should include:
This approach ensures that all ideas are considered and backed by some level of evidence.
Any suggested improvement should come with a written proposal explaining why it deserves consideration.
2. Internal Review
Once proposals are submitted, they need to be reviewed internally. The most viable options should be selected based on:
This step helps prioritize resources and focus on changes that are most likely to yield significant improvements.
3. Test Planning
For proposals that make it through the review process, the next step is to create a detailed test plan. This should include:
A well-thought-out test plan ensures that you're not just making changes for the sake of change, but actually measuring the impact of your optimizations.
Before implementing a proposal, create a test plan.
4. Test Execution and Reporting
Once the test plan is approved and scheduled, it's time to run the test. After completion, a report should be prepared, including:
This report serves as the basis for deciding whether to implement the change, refine and retest, or abandon the idea altogether.
After testing a proposal, create a report and evaluate the results. Only then should it proceed to production.
5. Implementation or Iteration
If the test results are favorable, move forward with implementing the change on your live site. If not, consider refining the approach and running another test, or move on to the next proposal in your backlog.
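To make the workflow concrete, here is a minimal sketch of how an optimization backlog entry could be modelled in code. The field names and statuses below are my own illustrative assumptions, not the structure of the Notion template mentioned above.

```typescript
// Illustrative sketch only: the fields and statuses are assumptions,
// not the structure of the Notion template referenced in this post.

type ProposalStatus =
  | "submitted"     // 1. Idea submission
  | "under-review"  // 2. Internal review
  | "test-planned"  // 3. Test planning
  | "testing"       // 4. Test execution and reporting
  | "implemented"   // 5. Rolled out to the live site
  | "abandoned";    //    Dropped after an unfavourable test

interface OptimizationProposal {
  title: string;
  submittedBy: string;
  // The evidence that prompted the idea, e.g. a session recording or analytics trend.
  supportingEvidence: string;
  // What we expect to change and why.
  hypothesis: string;
  // The metric the test will be judged against.
  successMetric: string;
  status: ProposalStatus;
  // Filled in once the test has run.
  testResultSummary?: string;
}

// Example entry at the start of its journey through the process.
const proposal: OptimizationProposal = {
  title: "Simplify the quote request form",
  submittedBy: "Customer support team",
  supportingEvidence: "Session recordings show repeated rage clicks on the address field",
  hypothesis: "Removing optional fields will increase completed quote requests",
  successMetric: "Quote form completion rate",
  status: "submitted",
};
```

Even if you never write a line of code for it, thinking about each backlog entry in these terms keeps every proposal tied to evidence, a testable hypothesis, and a measurable outcome.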
The Benefits of Structured Post-Launch Optimization
By adopting this kind of structured approach to post-launch optimization, you stand to gain several benefits:
Of course, implementing such a process isn't without its challenges. You might encounter resistance from team members who are used to making changes based on gut feeling or who balk at the additional steps involved. Here are some strategies to overcome these obstacles:
Remember, the goal isn't to create bureaucracy, but to ensure that every change you make to your website is purposeful and impactful.
Conclusion: Embracing Continuous Improvement
Post-launch optimization isn't just about fixing what's broken—it's about continuously improving your website's ability to meet both user needs and organizational objectives. By implementing a structured, data-driven approach to post-launch optimization, you can turn your website from a static digital brochure into a dynamic, ever-improving asset.
Don't let your website stagnate after launch. Embrace the power of post-launch data, implement a structured optimization process, and watch as your website's performance improves over time. Your users—and your bottom line—will thank you.
If you need help with post-launch optimization for your website, just reach out! I’d love to chat about how we can customize this strategy to fit your needs and goals. You can also learn more about this approach and other methods of improving your site’s effectiveness with my workshop on conversion optimization.
I've been thinking about an important shift in our industry that we've discussed in the Agency Academy I run. It's time we dive into this subject and explore how we can adapt our approach to stay competitive.
The landscape for web design agencies and freelancers is evolving, but don't worry - this isn't about abandoning our core services. Instead, it's about recognizing and charging for the expertise we often give away for free.
While DIY platforms and templates have made the technical aspect of web design more accessible, our strategic knowledge is more valuable than ever. It's time we position ourselves not just as implementers, but as strategic partners who offer both consultancy and implementation.
Let's break down why this matters and how you can make the most of it:
The Real Value: Knowledge Alongside Implementation
Clients can get a website from many places, but what they truly need is strategic insight to align their digital presence with their business goals. This is where we excel. Our experience, understanding of user behavior, and ability to see the big picture are incredibly valuable assets.
By offering both consultancy and implementation, we're not just building websites; we're comprehensively solving business problems. This approach allows us to charge separately for our knowledge and our technical skills, potentially increasing our overall project value by 20-30% or more.
Adding Consultative Services to Your Offerings
To make this transition, start by expanding your service offerings. Alongside your existing web design and development services, consider adding:
The key is to focus on outcomes rather than features. Instead of just selling a redesign, sell the strategy behind it, and then implement that strategy.
Packaging and Pricing Your Expertise
Justifying higher rates for consultative work as a freelancer who also handles implementation can be challenging, but there are several compelling reasons to do so:
By clearly communicating the distinct value of your consultative services and focusing on the outcomes they provide, you can justify charging higher rates for this aspect of your work, even as the same person delivering both services.
Benefits of the Combined Approach
This shift benefits both us and our clients. Here's why:
For your agency:
For your clients:
As we wrap up, I want to emphasize that this transition is about expanding our role, not changing it entirely. We're not abandoning implementation to become pure consultants. Instead, we're recognizing the full value of what we offer: both strategic insight and technical expertise.
This shift might feel daunting, but remember, you already have the knowledge. It's just a matter of packaging and presenting it as a distinct, valuable service alongside your implementation work. Start small if you need to - maybe offer a paid strategy session before your next website project. See how it goes, and build from there.
If you would like to discuss this further, you should consider joining the Agency Academy. Let’s share our experiences, ask questions, and support each other in this transition. Remember, we're all in this together. By embracing our dual role as consultants and implementers, we can add more value, command higher rates, and build stronger relationships with our clients.
Hi everyone.
I run a lot of workshops within organizations. They're great for connecting people, allowing space for questions, and inspiring teams. But here's the thing: I'm not convinced they're the best method of training staff in most cases.
Don't get me wrong, workshops have their place and I enjoy running them. They create a shared learning environment, foster discussion, and can be incredibly motivating. However, they come with some significant drawbacks that we need to address.
Key Challenges with Workshops
First, let's consider the retention problem. Unless people immediately apply what they've learned in a workshop, they tend to forget it. Even if they do use the information right away, without regular application, that knowledge fades over time. It's just how our brains work.
Then there's the issue of staff turnover. When employees who attended a workshop leave, they take that knowledge with them. New hires miss out unless you repeat the workshop, which can be expensive and logistically challenging.
Speaking of logistics, getting everyone in the same place at the same time is always a headache. There's always someone on vacation, out sick, or unable to attend for various reasons. This leads to knowledge gaps within teams.
Workshops also tend to be one-size-fits-all solutions, which is problematic when you have attendees with varying levels of experience. Some people might be bored, while others struggle to keep up.
Lastly, workshops require intense concentration, which can be exhausting for participants. By the end of a long session, people's attention spans are stretched thin, and their ability to absorb information diminishes.
The Alternative: Self-Learning Resources
So, what's the alternative? I'm a strong advocate for self-learning resources broken down into small, focused lessons. These could teach specific skills like "how to run a 5-second test" or "how to edit a page on the CMS." I find this approach far more effective when implemented in organizations.
These resources could take various forms: short videos, step-by-step written instructions, or even checklists. Some content could be universal and purchased off-the-shelf (like "writing for the web"), while other material would need to be custom-made for your organization.
Imagine organizing all of this in a UX playbook alongside policies, procedures, standards, and more general educational content like "why accessibility matters." You could even integrate these resources directly into your tools. For example, embedding how-to guides within your CMS so people can access instructions right when they need them.
Speaking of playbooks, if you're an agency owner or freelancer, I've created one just for you! It includes easy-to-follow guides, client education materials, and tools to help simplify your web design projects. Check it out here.
Benefits of Self-Learning Resources
I've found that self-learning resources offer numerous benefits for organizations:
Don't get me wrong – I'm not saying we should completely do away with workshops. They still have value, especially for team building, brainstorming, and tackling complex problems that benefit from group discussion. But they shouldn't be your only, or even primary, method of training and knowledge sharing.
By investing in a robust set of self-learning materials, you're not just training your current staff – you're building a knowledge infrastructure that will serve your organization for years to come. It's about creating a culture of continuous learning, where employees are empowered to seek out information and improve their skills on an ongoing basis.
Moving Forward
So, the next time you're tempted to schedule another workshop, ask yourself: Is this the most effective way to share this knowledge? Or could you create a resource that will have a more lasting impact?
Let's have an honest conversation about invitations to tender (ITTs). We've discussed this topic in the Agency Academy, and I believe it's time to address this significant issue in our industry.
If you've been in the digital industry for any length of time, you've likely encountered them. They're a staple of the procurement process, especially in larger organizations and government bodies. But here's the thing: they're not working. Not for agencies, not for clients, and certainly not for the projects themselves or their end users.
As someone who's been on both sides of the fence - writing proposals and evaluating them - I've seen firsthand how this process can fall short. So, let's break down why ITTs are problematic and explore some alternatives that could lead to better outcomes for everyone involved.
The Agency Perspective: A Costly Gamble
For agencies, responding to an ITT is often a significant investment of time and resources. It's not uncommon for teams to spend weeks crafting the perfect response, only to find out they were just there to make up the numbers. This isn't just frustrating; it's economically unsustainable.
The amount of work involved in pitching is substantial. Agencies often have to dedicate significant resources to preparing detailed proposals, which takes time away from billable work and ongoing projects. This investment is made with no guarantee of success, and often with the knowledge that they may have little to no chance of winning the bid.
Moreover, the limited information provided in most ITTs makes accurate pricing nearly impossible. Agencies are forced to make educated guesses about the scope and complexity of the work, often leading to either overpricing (and losing the bid) or underpricing (and losing money on the project). This lack of information and the absence of an opportunity to conduct necessary research puts agencies in a precarious position.
To mitigate these risks, agencies often have to add a buffer to their pricing, which can make them less competitive. Alternatively, they might lowball their estimates to win the bid, potentially setting themselves up for financial strain or a compromised project quality down the line.
The Client's Dilemma: Paying More for Less
Clients might think they're getting a good deal through competitive tendering, but the reality is often quite different. The costs associated with preparing unsuccessful bids don't just disappear - they're factored into the rates of successful projects. This means clients are indirectly paying for all those failed proposals, essentially subsidizing the entire tendering process across the industry.
Furthermore, the ITT process often rewards the best sales pitch rather than the most suitable agency. Clients end up with partners who excel at writing proposals but may not be the best fit for their specific needs. In many cases, agencies tell the client what they want to hear rather than what they need to know, leading to misaligned expectations and potential project failures down the line.
The Project Suffers: Inflexibility and Missed Opportunities
Perhaps the most significant drawback of the ITT process is its impact on the projects themselves. The rigid specifications laid out in most tenders leave little room for agencies to bring their expertise to bear on the project's scope and approach.
This inflexibility continues throughout the project, as the fixed scope makes it challenging to adapt to new insights or changing requirements. It can also lead to tension between the client and agency over what's considered "in scope," potentially damaging the relationship and the project's success.
Moreover, the selection process is often weighted too heavily towards the cheapest price, not the best value. This can result in subpar outcomes, as the focus shifts from delivering quality and innovation to merely meeting the minimum requirements at the lowest cost.
The fixed scope also means there's limited opportunity to respond to insights gained during the project, including crucial user testing results. In the fast-paced world of digital, this inflexibility can lead to outdated solutions or missed opportunities for improvement. Without the ability to pivot based on user feedback, projects risk delivering products that don't meet actual user needs, regardless of how well they adhere to the original specifications.
A Better Way Forward
So, what's the solution? While I understand the need for accountability and fairness in procurement processes, especially in public sector organizations, we need to find a middle ground that works better for all parties involved.
Here are a few ideas to consider:
Implementing these changes won't be easy, especially in organizations with entrenched procurement processes. The approach of using ITTs makes sense when you're buying a fixed product or service, but it doesn't work well with digital services, which are inherently more fluid and require ongoing collaboration and adaptation.
Ideally, the relationship needs to be more like hiring a contractor based on time and materials. But I accept that this is a big change to ask for, especially in larger organizations and the public sector. The alternatives suggested above can serve as a middle ground, allowing for more flexibility and better outcomes while still maintaining a structured procurement process.
By adopting these approaches, we can create a system that benefits all parties involved:
The potential benefits - more successful projects, better client-agency relationships, and more efficient use of resources - make it worth pursuing these changes. It's time for our industry to move beyond the outdated ITT process and embrace a more collaborative, flexible, and value-driven approach to project procurement.
If you're considering hiring an agency and find this approach intriguing, don't hesitate to reach out. I'd be delighted to discuss in more detail how you can implement these ideas.
Until next time,
Paul
Whether you're part of a UX team, running an agency, or freelancing, there's a service you should be offering. I include myself in this too.
This realization struck me while preparing for my design leadership workshop next week (and yes, it's not too late to sign up!). I was thinking about how most design teams are under-resourced, as I mentioned in a previous newsletter. We, therefore, need to be more strategic about how we spend our time.
One issue is that we often get pulled into projects that shouldn't exist because they don't meet real user needs. We try to advocate for discovery phases to research user requirements, but many colleagues don't grasp what a discovery phase entails. Often, the decision to move forward with a project has already been made.
The same goes for those of us working externally. By the time a client reaches out, the project is already defined and approved. We can't influence its direction as much as we should.
So, we need to take elements of a discovery phase, combine them with a SWOT, repackage them, and present them as a new service we offer.
Enter SUPA: Strategic User-Driven Project Assessment
This is where the Strategic User-Driven Project Assessment (SUPA) comes in. Yes, I know, another acronym. But bear with me – there's a method to this madness.
Why SUPA? Well, in a world drowning in jargon and buzzwords, sometimes you need to fight fire with fire. SUPA isn't just catchy; it's a trojan horse. It's designed to grab the attention of those business analysts and managers who love their TLAs (Three Letter Acronyms) and make them sit up and take notice. Plus, let's be honest, who doesn't want to be SUPA at their job?
But bad puns aside, SUPA represents a critical service that we, as UX professionals, need to champion more forcefully. It's our chance to get in at the ground floor of projects, to shape them before they become runaway trains of misguided objectives and wasted resources.
What is SUPA?
In essence, SUPA is a pre-emptive strike against poorly conceived projects. It's a comprehensive assessment that evaluates the potential success of a project from a user-centric perspective, before significant resources are committed. Think of it as a health check for ideas – we're diagnosing potential issues before they become full-blown problems.
Now, I can already hear some of you thinking, "But isn't this just a discovery phase by another name?" And you're not entirely wrong. SUPA does incorporate elements of discovery, but it's more focused, more strategic, and crucially, it's packaged in a way that speaks directly to business priorities.
Selling SUPA to Your Organization or Clients
The key to selling SUPA is to frame it in terms of risk mitigation and resource optimization. Here's how you might pitch it:
"SUPA is a strategic assessment tool that helps organizations validate project ideas before significant investment. It ensures that we're not just building things right, but that we're building the right things."
Emphasize that SUPA can:
For in-house teams, position SUPA as a way to strengthen your role as strategic partners rather than just executors. For agencies and freelancers, it's an opportunity to add value right from the project's inception, potentially leading to longer-term engagements.
What SUPA Covers
Ultimately a SUPA is delivered as a report or presentation focusing on the following areas:
To create a comprehensive SUPA report, you'll need to engage in a variety of UX research and analysis activities. Here's a breakdown of key activities for each area of the SUPA report:
Audience Assessment
Remember, the goal is to conduct these activities efficiently, focusing on gathering just enough information to make informed recommendations. The exact mix of activities will depend on the project's scope, timeline, and available resources.
SUPA and Business Analysis: Complementary, Not Competitive
I can almost hear some of you thinking, "Wait a minute, isn't this treading on the toes of our business analysts?" It's a fair question, and one we should address head-on.
Yes, there's some overlap between SUPA and traditional business analysis. Both aim to validate ideas and assess project viability. However, SUPA isn't about replacing or competing with business analysts – it's about complementing their work with a laser focus on user needs and experience.
Here's how SUPA differs from and enhances business analysis:
The key is collaboration, not competition. Ideally, SUPA should be conducted in partnership with business analysts. While they dive deep into market analysis and business viability, we bring our understanding of user behavior and experience design to the table. Together, we create a more holistic view of the project's potential.
By positioning SUPA as a complement to existing business analysis processes, we're not stepping on toes – we're strengthening the foundation of project planning. We're ensuring that user needs are considered just as carefully as business needs from the very beginning.
Implementing SUPA in Your Work
I hope I've convinced you of the value of SUPA for us as UX professionals and for our organizations. Now, the question is, "Where do we start?"
Start small. You don't need to roll out SUPA as a full-fledged service right away. Begin by incorporating elements of it into your existing processes. For example, when you're brought into a new project, ask for a short meeting to run through these assessment points. Frame it as a way to ensure you're fully aligned with the project goals and can deliver the best possible outcomes.
As you demonstrate the value of this approach – perhaps by identifying a potential issue early or by suggesting a more user-centric direction that resonates with stakeholders – you can gradually formalize it into a distinct service offering.
For those of you working in agencies or as freelancers, consider offering SUPA as a standalone service. It could be a great way to get your foot in the door with new clients, showcasing your strategic thinking and potentially leading to larger projects down the line.
The Future of UX is SUPA
As UX professionals, we often lament that we're brought in too late in the process, forced to put lipstick on the proverbial pig. SUPA is our chance to change that narrative. It's about shifting our role from just designing interfaces to shaping product strategy.
By offering SUPA, we're not just improving individual projects – we're elevating the entire field of UX. We're demonstrating that user-centered design isn't just about pretty interfaces or smooth interactions; it's about building the right things, for the right people, in the right way.
Need Help Getting Started?
If you're excited about implementing SUPA in your organization but feel unsure about where to begin, I'm here to help. I offer coaching services to guide you through your first SUPA process, ensuring you have the tools and confidence to make it a success. For those who prefer a more hands-off approach, I'm also available to conduct SUPA assessments for your projects directly.
Whether you use me or not, don't let another project start without proper user validation.
I’m not sure if it’s just me, but it feels like a strange time in UX right now. I’m noticing many layoffs in our field, budget cuts, and a decline in work for external suppliers. It seems we’re going through another shift in our industry, which tends to happen every few years due to technological advancements or economic factors.
In this email, I’d like to share my thoughts and best guesses about what might be happening and what the future could hold.
However, I want to begin by clarifying what I don’t believe is happening: I don’t think the user experience field is being replaced by AI.
AI Is Not Making Our Jobs Redundant
While AI may streamline processes and reduce job numbers in the field, I believe the risk of AI replacing you anytime soon is minimal.
This is due to the current nature of AI. It excels in areas like data analysis and written language but remains weak in other domains, such as:
Fortunately, these are the three core skills essential for user experience design. Therefore, I see no reason to worry about the impact AI may have on our jobs. While AI will inevitably change how we work, it won't diminish the need for our roles.
So, if AI isn't driving the changes I'm observing, what is? It could simply be economic and political factors.
The Broader Economy May Be A Factor
There is a lot happening in the world right now that creates uncertainty. We have the war in Ukraine, conflicts in the Middle East, a cost of living crisis, and upcoming elections in the US. Additionally, the long-term effects of COVID have changed business operations and put significant economic pressure on governments.
It's not surprising that organizations are looking to cut costs and are hesitant to start new initiatives. They are waiting to see how these issues unfold.
However, we can't solely blame the broader economy. There are also issues specific to UX that are affecting the situation.
The Honeymoon is Over
To begin with, we are witnessing the end of the honeymoon period in user experience design. For some time, user experience was the buzzword in business. Similar to digital transformation, senior management became aware of this discipline. Success stories from companies like Apple and Uber sparked a frenzy of investment in user experience.
They were further seduced by statistics we all threw around like:
Every dollar invested in UX results in a return of $100, representing a 9,900% ROI.
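For context, that figure follows from the standard ROI calculation: if every $1 invested returns $100, the ROI is ($100 - $1) / $1 = 99, which is expressed as 9,900%.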
Many of these companies did not prioritize user experience effectively. They either underinvested or had a culture that hindered genuine user-centric delivery. As we know, you can't just bolt on UX to an existing organization.
As a result, it has often fallen short of management's expectations. Now, we see them starting to cut back, drawn in by the allure of the next big thing—AI.
To complicate things further, this initial excitement, along with the maturing of the discipline, has led to another issue.
The Maturity Problem
The excitement around UX has attracted many people to the field, especially with the rise of UX bootcamps.
Meanwhile, significant progress is being made in the discipline. We are discovering what works and what doesn’t. UX patterns and best practices are emerging, leading to fewer problems that need solving.
Of course, every project has its nuances. However, we can achieve results faster than ever because we build on the solutions found by others in the past.
These two factors—more professionals and fewer problems—have created an oversaturated market. At least that is my current working hypothesis.
So, what comes next?
What Comes Next
Well, your guess is as good as mine. In the short term, we will probably see more of the same: more layoffs and more budget cuts. Unless the current economic and political uncertainty decreases, we are unlikely to see any improvements.
However, it ultimately depends on how organizations choose to integrate UX in the long run. The importance of user experience is here to stay. Consumers now expect a good user experience, and that expectation will only increase. Bridget van Kralingen of IBM put it well when she said:
"The last best experience that anyone has anywhere becomes the minimum expectation for the experience they want everywhere."
The question is: how will organizations choose to deliver on it?
As I see it, they have three options:
I believe most will adopt a hybrid approach. Organizations will likely have some in-house staff while also relying on outside resources for support.
To make this work, organizations will need strong in-house UX leaders to tackle the internal issues that undermine the user experience. That's why I'm focusing on this area. It has the potential for meaningful improvements that can deliver the largest return on investment for organizations.
But what about you? What should you be doing?
What Should You Do?
I don’t think you need to worry too much. Yes, things may be tough for a while, but if you hang in there, they will stabilize.
Newcomers to the sector are at the greatest risk. You may be forced to shift to related fields as job openings decrease.
However, experienced UX designers can feel secure in their roles. You may work externally instead of in-house, but your job is safe.
As for experienced UX leaders with a track record of delivering results, the future looks bright. UX is here to stay. As long as you demonstrate your value, you should be fine.
Demonstrating value is crucial for all of us. Now that the initial excitement has faded, we must show management that our work makes a real impact. We need to focus on delivering what matters to them, not just on user needs.
For those who advocate for UX but don’t work directly in the field (I’m looking at you, marketers), we need your support now more than ever. You know that user experience is vital for delivering on your objectives. So, I would encourage you to keep promoting the importance of UX design. We need to ensure it isn’t seen as a failed experiment or reduced to making superficial changes on the website. If that happens, your job will become so much harder.
Of course, these are just my opinions, and I could be completely wrong. I would really like to hear your thoughts on the future and any downturns you might be noticing.
Hey all,
This topic could prove controversial, but I've had a couple of conversations recently that make me think this is a subject worth discussing. It's about the role of user researchers in organizations.
Be Careful What You Wish For
Now, I know a lot of you reading this will be thinking to yourselves that you'd kill to work somewhere willing to invest in hiring a dedicated user researcher. But be careful what you wish for, because I'm not sure it's always a good idea. Especially if it ends up creating a gatekeeper between stakeholders and users.
The Separation of Roles
You see, I've worked with a few companies over the last year or so where the roles of user researcher and user experience designer have been separated. On the surface, this looks like a good idea. After all, generally speaking, the more specialized you are, the better job you'll do in a particular niche. And that's true for user researchers. There are many nuances to carrying out user research that a more generalist user experience designer may overlook.
However, by separating the roles, you can create a couple of problems that I've witnessed recently.
The Time-Consuming Nature of In-Depth Research
First, precisely because of their expertise, some user researchers carry out such in-depth research that it doesn't always sit comfortably with the timescales allocated to projects internally. The result is that user research can become time-consuming and so only happens once or twice during the project. Instead of facilitating a culture of testing and iteration, you end up with a piece of upfront research and a sanity check towards the end when it's too late to change things.
Although in theory, this kind of in-depth user research should provide benefits, in my experience at least, a leaner, more iterative approach tends to win out. Put another way, I favor a series of lightweight research and testing exercises throughout the project over more in-depth research at the beginning and end.
If this is an approach you are interested in learning more about, I have a workshop that I can run in your organization.
Reduced Designer-User Interaction
Second, and probably even more significantly, the involvement of a user researcher reduces the interactions that the UX designer has with users. Instead of running user testing themselves, they get back a report from the user researcher and often don't experience the user frustrations firsthand.
Admittedly, the user researcher's observations may well be more in-depth and insightful because of their experience and expertise. However, I believe you lose something when the UX designer isn't observing and interacting with users firsthand. They'll learn a lot more this way than from reading a report.
The Exception, Not the Rule
Of course, this won't always be the case. In some organizations, user researchers will go out of their way to involve the designer. However, in my experience, this is the exception and not the rule. That's not because of reluctance on the part of either the designer or the researcher, but for the sake of efficiency. The pressure to deliver often means it's seen as excessive to involve the designer in testing when that's considered the user researcher's job.
Not a Criticism, But a Concern
None of this is meant as a criticism of user researchers. Neither am I suggesting that there isn't a place for separate user researchers.
However, I see the role of user researchers to be focused on the bigger picture. They should be gathering insights that apply to the wider organization, while project-specific testing should be done primarily by UX designers.
User researchers can support them by providing training and advice, but I think it's dangerous to centralize all user research with the user researcher. Doing so, in my experience, results in less research and testing for the reasons I've given.
What's Your Experience?
That said, I recognize that I'm drawing on my own experience here, and maybe things are different where you work. I'd therefore love to hear from you on this one. Do you have separate user researchers, and if so, does that still allow for lots of lightweight research and testing to refine ideas and answer questions throughout the project?
Hello all.
If you work on websites, rather than web apps, the chances are you want people to do something on that site. It might be signing up for a newsletter, buying a product, or getting in touch. Whatever it is you want people to do, you will find them cautious. That is just human nature. We are always looking for the “danger” in any situation. People fear making the wrong decision or wasting their money. They worry about what will happen if they act and how things might go wrong.
Addressing Concerns is Key
You can have an amazing product, great design, and compelling content. But if you fail to address people's concerns, they will hesitate to act.
Skills for Success
A vital skill when working on websites is the ability to address these concerns. Even if content creation is not your main job, you need to guide those who create content. Otherwise, you might end up receiving the blame if the website underperforms.
Objection Handling: A Life Skill
Objection handling is useful not only for creating websites but also in everyday life. You may need to persuade people to do something, whether it's convincing a child to eat vegetables or getting a manager to approve your pay raise.
Identifying objections and knowing how to respond are valuable skills in many situations.
How to Identify Objections
So, how do you find out what objections your audience might have?
For example, to improve a website's conversion rate, talk to the sales or customer support teams. They can help you understand people's objections better.
Asking Your Audience
You can also ask your audience directly. I often run exit-intent surveys on landing pages to find out why people choose not to act. This feedback can provide valuable insights for improving the page and increasing the conversion rate.
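If you want to experiment with this yourself, the snippet below is a minimal sketch of one common way an exit-intent prompt is triggered in the browser: watch for the cursor leaving through the top of the viewport and then reveal a short survey. The element ID and question are placeholder assumptions, and most survey tools provide their own built-in exit-intent triggers.

```typescript
// Minimal exit-intent sketch. The #exit-survey element and its question are
// placeholders; adapt them to your own markup or survey tool.
function showExitSurvey(): void {
  const survey = document.getElementById("exit-survey");
  if (survey) {
    survey.hidden = false; // e.g. "What stopped you from signing up today?"
  }
}

let surveyShown = false;

document.addEventListener("mouseout", (event: MouseEvent) => {
  // relatedTarget is null when the cursor leaves the page entirely, and a small
  // clientY means it left through the top (towards the back button or address bar).
  if (!surveyShown && event.relatedTarget === null && event.clientY < 10) {
    surveyShown = true;
    showExitSurvey();
  }
});
```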
Addressing Objections Head On
Once you know their objections, you can start working on how to address them. It may be tempting to ignore objections, but this rarely works. Addressing objections directly shows that you understand your audience. This approach not only addresses their concerns but also builds trust.
Preempting Objections
When speaking to people directly, it’s helpful to preempt objections. Don’t wait for them to raise issues. If you address their concerns before they mention them, they have the opportunity to remain silent. This means they will not lose face in front of others, something especially important with senior stakeholders. They really do not like being corrected by someone below them!
Responding to Objections on Your Website
On your website, link your responses to objections with elements that might trigger them. For example, if you're asking for credit card information, reassure users about security at that moment. Don't expect them to look for answers in your FAQ section!
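As a rough illustration of that principle, the sketch below reveals a short reassurance message the moment someone focuses the card number field. The element IDs and wording are placeholder assumptions; adapt them to your own form and payment provider.

```typescript
// Placeholder IDs: assumes markup such as
// <input id="card-number"> and
// <p id="card-reassurance" hidden>Payments are processed securely. We never store your card details.</p>
const cardField = document.getElementById("card-number");
const reassurance = document.getElementById("card-reassurance");

if (cardField && reassurance) {
  cardField.addEventListener("focus", () => {
    // Answer the security objection at the exact moment it is likely to arise.
    reassurance.hidden = false;
  });
}
```

The point is not the code itself but the placement: the reassurance sits next to the element that triggers the concern, not buried in an FAQ.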
Conclusion
There is much more to say about objection handling. I have just published a comprehensive post on my website that explores this topic in depth.
However, the reason I have raised the issue here is that objection handling is a crucial skill that anybody working in marketing or UX needs to know. In fact, it is a good skill to have no matter what your role. And yet, somehow it is not a skill you hear people discuss very often.
Hello all,
I've always been a strong advocate for establishing measurable success criteria in every project. The digital world offers us a wealth of metrics to track - from conversion rates and lifetime customer value to engagement and user experience. It's a data goldmine!
But recently, I've been reminded that adopting metrics can be a dangerous game, especially if we become too obsessed with them. Let's explore four key dangers of metrics and how to navigate them effectively.
The Perils of Poorly Chosen Metrics
1. Measuring the Wrong Things
I'm currently working with an insurance company that's fallen into this trap. They're tracking the number of quotes sent out rather than actual sales. Consequently, they're making decisions that boost quote numbers at the expense of real conversions.
How does this happen? It's called the McNamara fallacy - our tendency to measure what's easy to measure and, over time, assume it's the only metric that matters.
Be wary of this trap. While measuring something is better than nothing, avoid placing too much weight on easily accessible metrics. They're just part of the equation, and the metrics you can't easily measure (like lifetime customer value) are often the most important.
2. Focusing on Short-Term Gains
Quarterly targets are common, but they can lead to dangerously short-term thinking. If you're fixated on this quarter's sales target, you might prioritize costly customer acquisition over more sustainable strategies like customer retention and word-of-mouth recommendations.
3. Misreading the Data
When we focus on a small number of short-term metrics, it's easy to misinterpret what's happening. You might miss seasonal variations or fail to see that the overall picture is healthier than it appears.
I once had a client who pulled a feature after just three days because it caused a dip in a single metric at launch. There was no time to understand the full impact or whether it was having positive effects in other areas. They jumped to conclusions based on limited data.
4. Overreacting to Changes
Metrics should guide our decision-making, not dictate it. Our actions shouldn't be reduced to simplistic if/then statements (If [metric] goes up = good. If [metric] goes down = bad).
We need to make informed judgment calls, take calculated risks, and have the nerve to give ideas time to succeed. For instance, it's common for website changes to receive initial negative reactions as they disrupt users' procedural knowledge. But if you give people time to adjust, the results often improve.
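One lightweight way to guard against both misreading the data and overreacting to it is to compare a metric over equal windows before and after a change, rather than judging it on a single day's dip. The sketch below is purely illustrative; the window length, figures, and any decision threshold are assumptions you would tune to your own traffic.

```typescript
// Illustrative only: window length and figures are made-up assumptions.
function average(values: number[]): number {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

// Compare the mean of a metric before and after a change, using equal-length
// windows, instead of reacting to a single day's movement.
function relativeChange(before: number[], after: number[]): number {
  return (average(after) - average(before)) / average(before);
}

// Daily conversion rates (%) for the 14 days before and after a change.
const before = [2.1, 2.0, 2.3, 1.9, 2.2, 2.4, 2.1, 2.0, 2.2, 2.3, 2.1, 1.9, 2.2, 2.0];
const after  = [1.8, 1.9, 2.0, 2.2, 2.3, 2.4, 2.5, 2.4, 2.6, 2.5, 2.4, 2.6, 2.7, 2.5];

// The first few days after launch dip, but over the full window this example
// works out at roughly a 10% improvement.
console.log(`Relative change: ${(relativeChange(before, after) * 100).toFixed(1)}%`);
```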
How to Use Metrics Effectively
So, how do we harness the power of metrics while avoiding these pitfalls? Here's my advice:
Implementing these strategies isn't always smooth sailing. Many organizations are deeply entrenched in their thinking, and changing established metrics often falls outside my direct control.
However, by laying these foundations early, we create a reference point for when things go awry. We can revisit these conversations and adjust course as needed.
Remember, metrics are powerful tools, but they're not the end goal. Use them wisely, and they'll guide you toward meaningful improvements and sustainable success.
What are your experiences with metrics? Have you encountered similar challenges? I'd love to hear your experiences! Drop me a reply.
If you would like a transcript of this episode, access to the links I mentioned, or any additional information, please visit the associated blog post.