Total Wine & More

Leveraging AI to Accelerate a Product Tile Redesign

Hero view of redesigned wine product tiles on a product listing page
Role
Solo Product Designer
Responsibilities
End-to-end design process
Collaborators
Claude
Timeline
2 months

Overview

Big scope, small team, smarter workflow.

This project focused on redesigning product tiles on the product listing page to improve clarity, hierarchy, and decision-making. But beyond the interface itself, the larger challenge was execution.

  • A compressed two-month timeline
  • A downsized team (solo designer)
  • A need for high-quality exploration and validation

A traditional UX process wouldn’t allow for enough iteration or depth within these constraints. So I restructured my workflow using AI to compress timelines, expand exploration, and reduce manual overhead across the entire design cycle.

The Challenge

How do I run a full research → ideation → testing cycle, alone, without sacrificing quality?

Rethinking the design process

Implementing AI to work faster and explore more.

Infographic My UX Process, Accelerated with AI across define, research, design, test, and iterate

Instead of following a traditional linear process, I used AI to move faster. Using Claude throughout, I cut down on manual work and focused more on decision-making and iteration.

Defining the problem

Understanding where the current experience fails.

I ran a design audit using Claude’s design critique skill to quickly identify issues with the existing tiles.

Annotated product tile and search results showing production audit findings

User testing driven through structured prompting.

I ran an unmoderated usability test on the existing product tiles to establish a baseline and identify issues with clarity and hierarchy. To move quickly and reduce bias, I used Claude to help generate the test structure and questions. Rather than asking it to “write a test,” I prompted it with a clear framework:

You are helping design a usability test for Product Listing Page tiles in an eCommerce experience where users browse and compare products. The objective is to understand how users interpret pricing, ratings, and badges, identify which elements influence purchase decisions, and uncover any confusion or misinterpretation. Avoid leading questions, prioritize natural behavior over stated opinions, and ensure all tasks are realistic and scenario-based. Generate task-based testing scenarios (not direct questions), including follow-up probes to understand user reasoning, using neutral phrasing to minimize bias and capture authentic behavior.

Key insights identified through AI analysis.

What shoppers noticed, trusted, and misread within the product tiles:

Three cards summarizing purchase drivers, Winery Direct confusion, and expert rating misinterpretation

Problem Definition

The product tiles had weak hierarchy and poor scannability, causing users to misinterpret key information and struggle to make quick comparisons.

In-Store Research

Validation with real customers in-store confirmed key behavioral signals.

To validate findings beyond unmoderated testing, I conducted a moderated testing session in-store with real customers.

I printed individual product tile elements, cut them into cards, and asked users to rank them from most to least important when deciding what to purchase.

Hands arranging printed cutouts of tile elements during in-store card-sorting research

In-store testing grounded the redesign in real-world shopping behavior and reinforced key behavioral patterns:

  • Price consistently ranked as most important
  • User and expert reviews were the strongest trust signal
  • Merch badges were secondary and often unclear

Competitive Research

Using AI to scan the landscape and surface patterns faster.

Using Claude Cowork with agent-based browsing, I analyzed competitor experiences to identify patterns in:

  • Product information hierarchy
  • Badge usage and placement
  • Trust signal presentation

Insights and opportunities were automatically structured and documented directly into Confluence, reducing the time I would have spent manually synthesizing.

Competitive analysis table of product listing tiles with AI chat prompt visible

Ideation

Using Claude for rapid exploration.

Instead of starting with wireframes, I used Claude to generate 10 high-fidelity design directions based on research findings, identified usability issues, and competitive patterns.

Because we already had an established design system, I could skip low-fidelity exploration and focus directly on layout and hierarchy in realistic UI.

This approach allowed me to:

  • Explore more directions than a traditional process
  • Quickly visualize different hierarchy models
  • Facilitate more concrete stakeholder discussions

Five mobile screens comparing PLP tile treatments for red wine search results

Testing & Synthesis

I defined the research goals and prompted AI to generate and deploy the test directly into the testing platform.

From the generated concepts, I used AI to identify which directions were most valuable to test, based on usability risk and potential impact.

AI then assisted in writing unbiased test scripts, structuring realistic user scenarios, and deploying the tests directly into UserTesting.

UserTesting test plan for PLP tile hierarchy with AI-generated tasks and questions

Turning raw test data into decision-ready insights and presentation readouts.

After testing the design variants, I exported the results into structured Excel files and uploaded them to Claude for rapid synthesis.

I prompted it to identify patterns, recurring behaviors, and areas of confusion across participants. This quickly surfaced themes in the raw data, which I then reviewed and refined for accuracy before translating them into final insights.

I then used Claude to quickly create readouts for presenting to stakeholders.

Research synthesis workflow from UserTesting through Excel and Claude to PowerPoint readout slides

Value Signal Optimization

User testing revealed clarity issues with individual elements, leading to targeted redesigns.

Expert rating badge before and after adding a clear POINTS label

Expert Rating

  • Updated from a green circle to a black badge with a numeric score + “points” label
  • Improved clarity of meaning
  • Avoids visual competition with the green Add to Cart CTA

Deal callout before and after restructuring value and visual treatment

Deal Callout

  • Leads with clear value
  • Reduced reliance on all-red text, which felt alarming during browsing
  • Improved visibility and comprehension during quick scanning

User ratings before and after switching to a single star with numeric score and review count

User ratings

  • Shifted from a 5-star visualization to a single star + numeric rating
  • Improved clarity and reduced ambiguity
  • Saved space and improved scan speed

Solution

A redesigned structure that supports faster decision-making.

Annotated product tile for Tower Vodka showing hierarchy for quality signals, price, reviews, and deals
Grid of six redesigned wine product tiles showing consistent solution layout

Reflection

What I learned

AI is only as useful as the prompt.

Getting useful output meant writing specific, context-rich prompts. The more clearly I defined the problem, constraints, and goals, the better the results. It forced me to think before jumping into solutions.

Understanding can’t be automated.

AI can create polished-looking artifacts, but it can’t build the understanding necessary to articulate and defend decisions. That came from the slower work: listening to real users, seeing their frustrations and subtle expressions firsthand.