For product managers, the pursuit of an optimal user experience is often clouded by misconceptions that lead to wasted resources and frustrated users. Are you falling for these common myths?
Key Takeaways
- Product managers should prioritize user research methods beyond simple surveys, including ethnographic studies and usability testing, to gain deeper insights into user behavior.
- A/B testing, when properly designed and executed, is the most reliable method for validating design decisions and should be used more frequently.
- Personalization should be implemented strategically based on user data and preferences, not assumptions, and must always respect user privacy.
Myth #1: Surveys Provide All the User Insights You Need
Many believe that sending out a quick survey is sufficient to understand user needs. This is far from the truth. Surveys often capture superficial feedback and may not reveal the underlying motivations or frustrations driving user behavior. They are a great starting point, but relying solely on them can lead to inaccurate conclusions.
Instead, product managers striving for optimal user experience should incorporate a variety of user research methods. Consider ethnographic studies, where researchers observe users in their natural environment. For example, a product manager at a fintech startup in Atlanta, Georgia, might spend a day observing customers at a local coffee shop near the Georgia State University campus, noting how they interact with mobile banking apps in a real-world setting. This provides much richer data than a survey ever could. We had a client last year who launched a new feature based solely on survey data, only to see it flop because it didn’t address the actual user pain points uncovered during subsequent user interviews.
Myth #2: A/B Testing is Too Time-Consuming and Complicated
Some product managers view A/B testing as an unnecessary burden, believing it slows down the development process. The misconception is that it requires extensive resources and technical expertise. While A/B testing does require careful planning and execution, the benefits far outweigh the perceived costs.
A/B testing is the gold standard for validating design decisions. By comparing two versions of a feature, you can objectively determine which performs better based on real user behavior. For instance, a product manager at an e-commerce company could A/B test two different checkout flows to see which leads to a higher conversion rate. Tools like Optimizely and VWO make A/B testing more accessible than ever. If you aren’t A/B testing, you’re guessing.
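The statistics behind that checkout-flow comparison can be sketched as a two-proportion z-test. This is a minimal illustration with made-up conversion counts, not the internals of any particular tool; platforms like Optimizely handle the math for you:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_b - p_a) / se

# Hypothetical checkout-flow results: 5,000 sessions per variant.
p_a, p_b, z = two_proportion_z(conv_a=400, n_a=5000, conv_b=460, n_b=5000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")
# prints: A: 8.0%  B: 9.2%  z = 2.14
```

Since |z| exceeds 1.96, the difference is significant at the conventional two-sided 5% level, so variant B's higher conversion rate is unlikely to be noise.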
Myth #3: Personalization Always Improves User Experience
There’s a widespread belief that personalization automatically enhances user experience. However, poorly executed personalization can be intrusive, creepy, and downright annoying. Users are increasingly wary of companies that seem to know too much about them.
Successful personalization is based on user data and preferences, not assumptions. Consider a streaming service that recommends shows based on viewing history. If the recommendations are relevant and helpful, users will appreciate the personalization. However, if the recommendations are generic or completely off-base, users will feel like their data is being misused. Furthermore, always prioritize user privacy. Ensure you comply with regulations like the California Consumer Privacy Act (CCPA) and provide users with clear control over their data. According to a 2025 report by the Pew Research Center’s Internet & American Life Project, 72% of Americans are concerned about how companies use their personal data ([Pew Research Center](https://www.pewresearch.org/internet/2025/01/11/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/)).
Myth #4: “Intuitive” Design Requires No User Testing
Many designers and product managers assume that if a design looks good and feels logical to them, it will be intuitive for all users. This is a dangerous assumption. What seems intuitive to one person may be confusing to another, especially considering the diverse backgrounds and technical skills of your user base.
User testing is essential, even for designs that appear “intuitive.” Conduct usability testing with a representative sample of your target audience to identify any pain points or areas of confusion. For instance, a product manager developing a new mobile app for the Georgia Department of Driver Services should test the app with users of varying ages and technical abilities to ensure it’s accessible to everyone. Don’t just ask if they like it; observe them using it. Sometimes, expert interviews can also help reveal unforeseen usability issues.
| Criterion | Surveys (Common PM Approach) | Usability Testing (In-depth) | Analytics + Heuristic Evaluation (Data-Driven) |
|---|---|---|---|
| Depth of Qualitative Insight | ✗ Limited | ✓ Rich, nuanced user behavior | Partial: Trends, not motivations |
| Behavioral Data | ✗ Self-reported only | ✓ Direct observation | ✓ Aggregated action logs |
| Contextual Understanding | ✗ Lacks real-world context | ✓ Natural use environment | Partial: Requires interpretation |
| Cost & Time Investment | ✓ Relatively low | ✗ More resource-intensive | Partial: Depends on tool complexity |
| Bias Mitigation | ✗ Prone to response biases | ✓ Reduced observer effect | ✓ Objective, but data can be skewed |
| Specific Problem Identification | ✗ General trends only | ✓ Pinpoints usability issues | ✓ Identifies drop-off points |
| Actionable Recommendations | Partial: High-level feedback | ✓ Concrete improvements | ✓ Data-backed suggestions |
Myth #5: User Experience is a One-Time Fix
Some see user experience as a task to be completed once and then forgotten. This couldn’t be further from the truth. User experience is an ongoing process that requires continuous monitoring, evaluation, and improvement. User needs and expectations evolve over time, so your product’s user experience must evolve along with them.
Regularly collect user feedback, analyze usage data, and conduct usability testing to identify areas for improvement. Implement a system for tracking and prioritizing user feedback, and make sure to iterate on your designs based on what you learn. We ran into this exact issue at my previous firm: after a major redesign, we saw initial positive feedback, but engagement slowly declined. Only through continuous monitoring and A/B testing were we able to identify and address the underlying issues.
Myth #6: All User Feedback is Created Equal
Not all feedback is equally valuable. While all user feedback should be acknowledged, it’s crucial to discern between signal and noise. Some feedback may be based on personal preferences or isolated incidents, while other feedback may indicate broader usability issues.
Prioritize feedback from representative users who are actively engaged with your product. Look for patterns and trends in the feedback you receive. Use data analytics to validate qualitative feedback and identify areas where users are struggling. The squeakiest wheel doesn’t always need the grease.
Case Study: Revamping the “PeachPass” Mobile App
Let’s consider a hypothetical case study involving the “PeachPass” mobile app used for toll collection on Georgia highways like I-85 and GA-400. The app, initially launched in 2023, had a clunky interface and generated numerous complaints. In early 2026, the Georgia State Road and Tollway Authority (SRTA) decided to revamp the app based on user feedback.
Phase 1: User Research (2 Months)
- Ethnographic Studies: Researchers observed commuters using the app in real-world scenarios, such as at gas stations near highway exits and during their commutes.
- Usability Testing: Participants were asked to perform specific tasks within the app (e.g., adding funds, checking toll history) while being observed.
- Data Analysis: App usage data was analyzed to identify drop-off points and frequently used features.
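The drop-off analysis in that last step is straightforward to sketch. The funnel steps and counts below are entirely hypothetical, invented for illustration:

```python
# Hypothetical event counts for the add-funds funnel.
funnel = [
    ("open app",         10000),
    ("tap 'Add Funds'",   6200),
    ("enter amount",      5400),
    ("confirm payment",   3100),
]

# Compute the fraction of users lost between each pair of adjacent steps.
drops = {}
for (step, n), (next_step, next_n) in zip(funnel, funnel[1:]):
    drops[f"{step} -> {next_step}"] = 1 - next_n / n

for transition, drop in drops.items():
    print(f"{transition}: {drop:.0%} drop-off")
```

The largest drop-offs (here, at the confirm-payment step) point to the screens most worth redesigning.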
Phase 2: Redesign and A/B Testing (3 Months)
- Based on the research findings, the SRTA team redesigned the app’s interface, simplifying the navigation and improving the payment process.
- A/B testing was conducted on different versions of the payment screen to determine which design resulted in the highest completion rate.
Phase 3: Launch and Iteration (Ongoing)
- The redesigned app was launched in July 2026.
- Continuous monitoring of user feedback and app usage data was implemented.
- Regular updates and improvements are planned based on ongoing user feedback.
Results:
- App store ratings improved from 2.5 stars to 4.6 stars.
- The percentage of users successfully adding funds increased by 35%.
- Customer support inquiries related to the app decreased by 40%.
By embracing a data-driven approach and prioritizing user feedback, SRTA successfully transformed the “PeachPass” app into a more user-friendly and efficient tool for Georgia commuters.
Product managers striving for optimal user experience must challenge these misconceptions and embrace a data-driven, user-centered approach. By prioritizing user research, A/B testing, and continuous iteration, you can create products that truly meet user needs and expectations. Don’t let these myths derail your efforts to build exceptional user experiences.
How often should I conduct user research?
User research should be an ongoing process, not a one-time event. Conduct regular user interviews, usability testing, and data analysis to stay informed about user needs and preferences.
What are some affordable user research methods for startups?
Affordable methods include guerrilla usability testing (testing with people in public places), online surveys, and analyzing existing user data from analytics platforms like Amplitude.
How can I ensure that my A/B tests are statistically significant?
Use a sample size calculator to determine the appropriate sample size for your A/B tests. Ensure that you run the tests for a sufficient duration to gather enough data and avoid making decisions based on small sample sizes.
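To make the sample-size question concrete, here is a rough sketch of the standard formula for comparing two proportions, fixed at a two-sided 5% significance level and 80% power. The baseline rate and minimum detectable effect below are made-up examples; an online calculator will give essentially the same answer:

```python
import math

def sample_size_per_variant(p_base, mde):
    """Approximate sample size per variant to detect an absolute lift `mde`
    over baseline conversion rate `p_base` (alpha=0.05 two-sided, power=0.80)."""
    z_alpha = 1.96  # critical value for two-sided alpha = 0.05
    z_beta = 0.84   # critical value for power = 0.80
    p1, p2 = p_base, p_base + mde
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / mde ** 2)

# e.g. baseline 8% conversion, detecting a 1-point lift to 9%
n = sample_size_per_variant(p_base=0.08, mde=0.01)
print(f"{n:,} users per variant")
```

Note how quickly the required sample grows as the effect you want to detect shrinks; this is why underpowered tests on small user bases so often produce misleading "winners."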
What are the ethical considerations when collecting user data?
Always obtain user consent before collecting data. Be transparent about how you will use the data and provide users with control over their data. Comply with all relevant privacy regulations, such as the CCPA.
How can I prioritize user feedback effectively?
Prioritize feedback based on its impact, frequency, and alignment with your product goals. Use a framework like the RICE scoring model (Reach, Impact, Confidence, Effort) to help you prioritize feedback objectively.
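RICE scoring reduces to simple arithmetic: multiply Reach, Impact, and Confidence, then divide by Effort. The feedback items and scores below are invented purely to show the mechanics:

```python
def rice(reach, impact, confidence, effort):
    """RICE score: (Reach * Impact * Confidence) / Effort."""
    return reach * impact * confidence / effort

# Hypothetical feedback items (reach = users/quarter, effort = person-weeks).
feedback = {
    "simplify checkout": rice(reach=8000, impact=2.0, confidence=0.8, effort=5),
    "dark mode":         rice(reach=3000, impact=1.0, confidence=0.5, effort=3),
    "faster search":     rice(reach=6000, impact=1.5, confidence=0.8, effort=4),
}

# Rank feedback items by score, highest first.
for name, score in sorted(feedback.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:,.0f}")
```

The value of the framework is less in the exact numbers than in forcing you to state reach, impact, and confidence explicitly instead of ranking by whoever asked loudest.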
Stop chasing fleeting trends and start listening to your users. Implement a structured system for gathering, analyzing, and acting on user feedback. This is the only way to consistently deliver exceptional user experiences and build products that truly resonate.