Cloud GenAI Prototypes
The Challenge
Analyzing user attention (eye-tracking) data manually is time-intensive. Research teams spend hours interpreting gaze plots and heatmaps to derive actionable insights for UX studies. The goal was to automate this qualitative analysis and shorten the feedback loop.
The Solution
I architected a Retrieval-Augmented Generation (RAG) pipeline on AWS. By feeding computer-vision outputs (gaze coordinates, fixation durations) to Large Language Models (LLMs) via Amazon Bedrock, the system "reads" the attention data and generates natural-language reports automatically.
- Developed multi-modal agents combining vision outputs and text.
- Implemented vector search for semantic retrieval of past study insights.
- Orchestrated workflows using LangChain and AWS Lambda.
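The report-generation step can be sketched as follows: structure the fixation data into a text prompt, then send it to an LLM on Bedrock. This is a minimal illustration, not the production pipeline; the fixation tuple format, the area-of-interest labels, and the choice of model ID are assumptions for the example.

```python
import json

def fixations_to_prompt(fixations):
    """Turn gaze fixations into an analysis prompt.

    Each fixation is a hypothetical (x, y, duration_ms, aoi_label) tuple,
    where the AOI label names the on-screen area of interest.
    """
    lines = [
        f"- AOI '{aoi}': fixation at ({x}, {y}) for {dur} ms"
        for x, y, dur, aoi in fixations
    ]
    return (
        "You are a UX analyst. Summarize the attention pattern below "
        "into a short findings report.\n" + "\n".join(lines)
    )

def report_from_fixations(fixations, client,
                          model_id="anthropic.claude-3-haiku-20240307-v1:0"):
    """Invoke a Bedrock model (via a boto3 'bedrock-runtime' client)
    and return the generated report text."""
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user",
                      "content": fixations_to_prompt(fixations)}],
    })
    resp = client.invoke_model(modelId=model_id, body=body)
    return json.loads(resp["body"].read())["content"][0]["text"]
```

In use, `client = boto3.client("bedrock-runtime")` supplies the model client; in the deployed system this call would run inside the Lambda handler rather than a script.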
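The semantic-retrieval bullet can likewise be sketched as a cosine-similarity search over embeddings of past study insights. The 3-dimensional toy vectors below stand in for real embedding-model output (e.g. a Bedrock embedding model in production) and are assumptions for the example.

```python
import numpy as np

def cosine_top_k(query_vec, index_vecs, k=3):
    """Return indices of the k index vectors most similar to the query,
    ranked by cosine similarity (highest first)."""
    q = query_vec / np.linalg.norm(query_vec)
    m = index_vecs / np.linalg.norm(index_vecs, axis=1, keepdims=True)
    scores = m @ q
    return np.argsort(scores)[::-1][:k]

# Toy index: embeddings of past study insights. In production these
# would come from an embedding model, not hand-written vectors.
insights = [
    "Users ignored the sidebar navigation entirely",
    "The primary CTA drew the longest fixations",
    "Checkout form fields caused repeated gaze regressions",
]
index = np.array([[0.9, 0.1, 0.0],
                  [0.1, 0.9, 0.1],
                  [0.0, 0.2, 0.9]])
query = np.array([0.2, 0.8, 0.1])  # embedding of the current study summary
top = cosine_top_k(query, index, k=2)  # most relevant past insights first
```

A dedicated vector store would replace the in-memory array at scale, but the ranking logic is the same.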
Key Outcomes
- 80% reduction in reporting time
- 12x cost efficiency vs. legacy ops