Foundational / generative study at Amazon to inform future product roadmaps and the development of LLM-backed AI software integrations for on-the-go devices and services. This work was part of my 2023 internship on the Echo Frames team.
Problem Space: In 2023, generative AI (GenAI) and large language models (LLMs) were rapidly advancing, with major tech companies investing in in-house LLM development. However, a key challenge was identifying real-world, on-the-go use cases for this technology.
Product Context: As part of the Echo Frames team (Amazon’s wearable smart glasses), the business sought to explore how an LLM-backed AI assistant could enhance mobile and wearable experiences.
Research Goal: My research aimed to map potential use cases and user expectations for an on-the-go AI assistant, helping UX designers and product teams identify high-value opportunities for future software integrations.
What expectations do users have for an LLM-backed AI assistant in mobile or wearable contexts?
Which potential use cases resonate most with users? (Based on an initial list ideated by UX Designers.)
How do expectations differ across user demographics (e.g., age, gender, prior experience with LLMs, and smart device usage)?
What concerns or barriers might prevent adoption of this technology?
Mixed-Methods Research:
Survey (Quantitative & Qualitative Data Collection)
Designed and launched a Qualtrics survey capturing demographics, smart device usage, prior LLM experience, and preferences for potential AI use cases.
Incorporated Q-sort ranking to assess user preferences for different AI-powered tasks and gauge enthusiasm or skepticism (a ranking-summary sketch in R follows this sub-section).
Included open-ended qualitative questions to capture nuanced expectations and concerns.
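To illustrate how Q-sort rankings of this kind can be summarized once exported from Qualtrics, here is a minimal R sketch. The file name (qsort_export.csv) and the usecase_* column prefix are hypothetical placeholders, not the actual study instrument; the sketch assumes one row per respondent, with each use-case column holding that respondent's rank (1 = most preferred).

```r
# Minimal sketch: summarize hypothetical Q-sort rankings exported from Qualtrics.
library(dplyr)
library(tidyr)

responses <- read.csv("qsort_export.csv")   # hypothetical export file

rank_summary <- responses %>%
  pivot_longer(
    cols = starts_with("usecase_"),          # hypothetical column prefix
    names_to = "use_case",
    values_to = "rank"
  ) %>%
  group_by(use_case) %>%
  summarise(
    mean_rank = mean(rank, na.rm = TRUE),    # lower = more preferred
    sd_rank   = sd(rank, na.rm = TRUE),      # high spread hints at polarized opinions
    n         = sum(!is.na(rank))
  ) %>%
  arrange(mean_rank)                         # most-preferred use cases first

print(rank_summary)
```

A low mean rank with a large standard deviation is a useful screening signal here: it flags use cases that some respondents loved and others rejected, which is exactly the polarization the follow-up interviews were designed to probe.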
Follow-Up Interviews (Qualitative Deep Dive)
Selected participants whose survey ratings for AI usage fell at the high and low extremes, to explore both enthusiasm and skepticism.
Conducted semi-structured interviews to understand users’ mental models, imagined scenarios, and potential adoption barriers.
Analysis & Synthesis:
Conducted statistical analysis in R to identify trends and correlations across demographic segments (see the illustrative sketch after this list).
Visualized survey findings with data graphics produced in R.
Thematically analyzed interview data to extract key insights and user narratives.
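The sketch below shows the kind of segment-level tests this analysis involved. All file and column names (survey_export.csv, age_group, enthusiasm_score, prior_llm_use, would_adopt, device_usage_hours) are hypothetical placeholders standing in for the study's actual variables, which remain under NDA.

```r
# Illustrative sketch of segment comparisons on hypothetical survey variables.
survey <- read.csv("survey_export.csv")        # hypothetical export file
survey$age_group <- factor(survey$age_group)   # ensure groups are treated as categories

# Compare enthusiasm scores across age groups; Kruskal-Wallis is used
# because Likert-style scores are ordinal, not interval.
kruskal.test(enthusiasm_score ~ age_group, data = survey)

# Test for an association between prior LLM experience and stated
# willingness to adopt a wearable AI assistant.
chisq.test(table(survey$prior_llm_use, survey$would_adopt))

# Rank correlation between smart-device usage and enthusiasm.
cor.test(survey$device_usage_hours, survey$enthusiasm_score,
         method = "spearman")
```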
End-to-End Research Execution:
Led the project from initial stakeholder alignment to study execution, analysis, and presentation.
Met with Senior UX Designers to refine research questions and align on key business and design needs.
Survey Design & Data Collection:
Designed the survey and collaborated with a fellow UX researcher to implement Q-sort ranking in Qualtrics.
Recruited participants internally through Amazon’s employee network via Outlook.
Interviewing & Qualitative Analysis:
Scheduled, conducted, and transcribed all semi-structured interviews.
Thematically analyzed interview responses to surface key behavioral patterns and concerns.
Data Analysis & Visualization:
Used R for statistical analysis and to create clear, visually compelling graphics of survey findings (a minimal plotting sketch follows).
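A minimal ggplot2 sketch of the style of graphic produced, reusing the hypothetical rank_summary data frame from the earlier Q-sort sketch; it is illustrative of the approach, not an actual deliverable.

```r
# Sketch: mean Q-sort rank per use case, with error bars for spread.
library(ggplot2)

ggplot(rank_summary, aes(x = reorder(use_case, -mean_rank), y = mean_rank)) +
  geom_col(fill = "steelblue") +
  geom_errorbar(aes(ymin = mean_rank - sd_rank,
                    ymax = mean_rank + sd_rank), width = 0.2) +
  coord_flip() +                               # horizontal bars read better for long labels
  labs(
    title = "Mean Q-sort rank by candidate use case (illustrative)",
    x = NULL,
    y = "Mean rank (lower = more preferred)"
  ) +
  theme_minimal()
```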
Stakeholder Communication & Impact:
Presented findings at the Amazon Echo Frames / Devices & Services org-wide capstone, documenting key insights to inform future AI integration strategies.
(Under NDA – Generalized Summary Below)
Delivered foundational insights on user expectations and adoption barriers for wearable AI assistants.
Identified high-priority use cases and scenarios where users see the most value in on-the-go AI assistance.
Provided demographic-based insights that informed design considerations for AI interactions in wearable devices.
Research findings were internally documented, serving as a reference for future AI product roadmaps within Amazon’s Devices & Services organization.