Comparison shopping
Amazon · Search · Electronics
Overview
Customers weren't confused. They were working around us.
Amazon researchers identified a consistent behavior: customers shopping for electronics were opening four or more browser tabs to compare products side by side, spending an average of 18 minutes in that mode before making a decision — or abandoning entirely. The workaround was universal. It was also entirely outside the product.
I led the design effort to bring that behavior in — a native comparison experience that felt like it had always belonged on Amazon.com. The work shipped in stages to millions of customers and became part of how people chose electronics on the site.
The problem
18 minutes of tab-switching. $7.9 million in potential lost sales.
A conservative internal estimate put the cost of comparison friction at $7.9M in lost or delayed sales over a comparable window. Making the business case wasn't the hard part. The harder question was what a native comparison experience should actually look like on one of the world's most visited product pages.
The customer workaround was also remarkably consistent — which meant there was a clear pattern to design around instead of a vague pain point.
The customer behavior
Search → find a product → Command+Click to open tabs → visit each product page → manually cross-reference specs → repeat → eventually decide.
The design opportunity
Collapse eight steps into five by pulling the tab-switching and spec-matching behaviors into a single, in-product experience.
The hardest design problem
The feature had to feel like it was always there.
The primary constraint wasn't technical — it was experiential. Amazon's search results pages are dense, high-traffic environments where customers are in a flow state. Any comparison affordance that felt intrusive, unfamiliar, or in the way would get ignored or create friction where there was none before.
The interaction model had to earn its place on the page. It couldn't announce itself. It needed to feel like a natural extension of how customers were already behaving — not a new behavior we were asking them to learn.
Exploration
We went wide on both platforms — then let the research narrow it.
Although the original research happened on desktop, I didn't want to rule out mobile before we'd explored it. We ran parallel design tracks, mobile and desktop, and tested both with customers. In the end the mobile question never had to be argued internally; the research settled it decisively.
- Mobile users said the design had potential, but this kind of high-consideration, spec-heavy comparison wasn't something they wanted to do on their phone. They preferred desktop for decisions of this size.
- We explored two comparison modes in parallel: active selection versus suggested comparison. Desktop and active selection won on both dimensions — clarity for shoppers and alignment with how people actually compared.
The design
A persistent tray. A scannable spec grid. Zero new mental models.
The pin-to-compare interaction let customers mark products from search results and collect them in a persistent tray: visible as they scrolled, manageable without leaving the page. Entering comparison mode showed products side by side with their most important specs lined up in a scannable row; a rough sketch of that model follows the list below.
- Priority specs: weight, processor speed, RAM, OS, screen size, resolution — grouped to support the actual questions customers were trying to answer.
- Anchored by the highest-priority commercial signals: price, Prime eligibility, ratings, brand, product photo, and name.
- Early exploration stayed at low fidelity deliberately — keeping conversations focused on structure and behavior, not typography and color.
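To ground the model, here is a minimal TypeScript sketch of how the tray and the spec grid could fit together. Everything in it, the names (ComparisonTray, PRIORITY_SPECS, Product), the data shapes, and the four-item cap, is an illustrative assumption, not Amazon's actual implementation.

```typescript
// Hypothetical sketch only: names, shapes, and the tray cap are
// illustrative assumptions, not Amazon's actual implementation.

type SpecKey =
  | "weight"
  | "processorSpeed"
  | "ram"
  | "os"
  | "screenSize"
  | "resolution";

// Ordered to match the priority grouping described above.
const PRIORITY_SPECS: SpecKey[] = [
  "weight",
  "processorSpeed",
  "ram",
  "os",
  "screenSize",
  "resolution",
];

interface Product {
  asin: string;
  name: string;
  brand: string;
  imageUrl: string;
  price: number;
  primeEligible: boolean;
  rating: number;
  specs: Partial<Record<SpecKey, string>>;
}

// The persistent tray: pins live outside any one search-result row,
// which is what keeps the tray visible as the customer scrolls.
class ComparisonTray {
  private pinned = new Map<string, Product>();

  // The cap is illustrative; the case study doesn't state one.
  constructor(private readonly maxItems = 4) {}

  pin(product: Product): boolean {
    if (this.pinned.size >= this.maxItems) return false; // tray is full
    this.pinned.set(product.asin, product);
    return true;
  }

  unpin(asin: string): void {
    this.pinned.delete(asin);
  }

  // Comparison mode: one row per spec, one column per pinned product,
  // so equivalent specs line up for side-by-side scanning.
  toGrid(): { spec: SpecKey; values: string[] }[] {
    const products = [...this.pinned.values()];
    return PRIORITY_SPECS.filter((spec) =>
      products.some((p) => p.specs[spec] !== undefined)
    ).map((spec) => ({
      spec,
      values: products.map((p) => p.specs[spec] ?? "n/a"),
    }));
  }
}
```

One deliberate choice in the sketch: the grid is derived from the tray rather than stored separately, so pinning and unpinning can never leave the comparison view out of sync with what the customer selected.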
Shipping
The right sequencing meant shipping the most valuable part first.
The full vision shipped incrementally, not because of scope cuts, but because the pieces differed in readiness and in immediate value to customers.
Rather than hold the launch for the complete experience, we sequenced the work so customers got the highest-leverage, most immediately scannable improvement first, and the active comparison model landed on top of an information hierarchy that had already been validated.
Phase 1 — Spec grid
The spec grid on search results pages. Electronics search results got richer, more scannable product data — delivering comparison value passively, before a customer had even decided to compare. Highest impact, lowest interaction complexity.
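Continuing the hypothetical sketch above (same Product and PRIORITY_SPECS types), the passive half can be as small as picking the top few priority specs a product actually has, so adjacent result cards show rows in the same fixed order. specRowsForCard and its three-row cap are illustrative, not the shipped behavior.

```typescript
// Continues the hypothetical Product / PRIORITY_SPECS sketch above.
// Picks the highest-priority specs the product actually has, in a fixed
// order, so adjacent result cards stay comparable at a glance, before
// the customer has pinned anything.
function specRowsForCard(
  product: Product,
  maxRows = 3
): [SpecKey, string][] {
  return PRIORITY_SPECS.filter((spec) => product.specs[spec] !== undefined)
    .slice(0, maxRows)
    .map((spec): [SpecKey, string] => [spec, product.specs[spec]!]);
}
```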
Phase 2 — Pin to compare
The active comparison model. Pin-to-compare followed once the foundational spec grid had established the right information hierarchy on the page, so the active pattern felt like an extension of something already familiar.
Impact
Research, iteration, and a native experience at scale.
I used user research and usability testing to drive prototyping and iteration alongside engineering. The experience shipped to a large customer base and became part of how people compared electronics on the site, contributing to increased GMV and time on site and tying the design directly to business and customer outcomes.
- Time to purchase: −30% reduction after launch
- Reach: millions of customers using comparison flows
- Estimated friction cost: $7.9M pre-launch business case
Where I spent my time: Design lead · User research · Usability testing · Interaction design · Cross-platform exploration