Where research leaves the conference room and enters the real world
TL;DR
I don’t just design studies. I build research programs, create measurement infrastructure, and bring customers into the room. This page shows how I actually work, not just what I’ve worked on.
Field Research: Getting Out of the Building
I designed and now lead an ongoing field research program. Real customers. Real context. Real stakes.
FanWeek is a recurring research event—25+ moderated sessions per instance, multiple researchers, authentic behavior observation. Not a one-time study. A sustainable practice.
Why it matters: Lab conditions lie. When users know they’re being watched in controlled environments, they perform. When you observe them making real decisions with real consequences, you see what actually happens.
The approach:
- Semi-structured interviews with concept testing embedded
- In-person sessions at live events
- Multiple researchers capturing different angles
- Real transactions, not hypothetical scenarios
I’ve also adapted ethnographic methods for contexts where traditional lab testing falls short, combining observation with usability testing to capture both what users do and why they do it.
Co-Creation: Customers in the Room
I don’t just test solutions. I build them with customers.
I designed Customer Roundtables that bring actual users to our product offsites. Not behind a one-way mirror. In the room. With designers, PMs, and engineers asking questions directly.
The Format:
- Open conversation about pain points and opportunities
- Real-time concept and prototype reactions
- Cross-functional team participation (not just researchers observing)
- Direct dialogue that builds empathy you can’t get from a report
What comes out: Design direction changes. New feature ideas. Shared understanding across the team. PMs and designers who’ve heard it directly from customers, not filtered through my slides.
Why it works: When stakeholders hear it themselves, you skip the “convince leadership” step entirely.
Controlled Experimentation: Rigor That Ships
I design experiments that would survive peer review—and deliver in 2-week sprints.
Recent example: Testing 3 prototype designs with Latin Square counterbalancing to control for order effects. 62 participants across two parallel tests. Task success rates compared quantitatively. Clear winner, defensible methodology, decision made.
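To make the counterbalancing concrete, here is a minimal sketch of how order assignment for a study like this could be set up. The three prototypes and the 62 participants come from the example above; the function names, participant IDs, and the cyclic-square construction are illustrative assumptions, not the actual study code.

```python
from itertools import cycle

def latin_square_orders(conditions):
    """Cyclic Latin square: every condition appears once in each row and each column."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)] for row in range(n)]

def assign_orders(participant_ids, conditions):
    """Rotate participants through the square's rows so presentation orders are spread as evenly as possible."""
    rows = cycle(latin_square_orders(conditions))
    return {pid: next(rows) for pid in participant_ids}

# Hypothetical labels standing in for the three prototypes and 62 participants.
prototypes = ["Prototype A", "Prototype B", "Prototype C"]
participants = [f"P{i:02d}" for i in range(1, 63)]
schedule = assign_orders(participants, prototypes)
print(schedule["P01"])  # ['Prototype A', 'Prototype B', 'Prototype C']
print(schedule["P02"])  # ['Prototype B', 'Prototype C', 'Prototype A']
```

Each participant still sees every prototype; only the order rotates, so position and fatigue effects are balanced across conditions rather than confounded with any single design.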
Experimental design toolkit:
- Controlled prototype comparisons with quantitative metrics
- Within-subjects and between-subjects designs
- Counterbalancing for order effects
- Multi-phase validation (test → iterate → confirm)
Why it matters: “We tested it with users” means nothing if your methodology is sloppy. I design studies that produce results you can actually trust—and defend when stakeholders push back.
Measurement Infrastructure: Building Systems, Not Just Studies
I created the measurement instruments my team now uses across every study.
Standardized Desirability Scale
5 dimensions tied directly to business KPIs: Usability, Satisfaction, Engagement, Retention, Acquisition.
Bidirectional scale (-3 to +3). Validated, refined through reliability analysis, and now used across product teams.
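As a rough illustration of how scores on a scale like this can be summarized and sanity-checked, here is a minimal sketch. The five dimension names and the -3 to +3 range come from the description above; the response data is fabricated, and treating the five dimensions as items in a single Cronbach’s alpha check is an assumption about what the reliability analysis might involve, not the scale’s actual validation procedure.

```python
import numpy as np
import pandas as pd

# Fabricated responses on the bidirectional -3..+3 scale (rows = participants).
rng = np.random.default_rng(7)
responses = pd.DataFrame(
    rng.integers(-3, 4, size=(40, 5)),
    columns=["usability", "satisfaction", "engagement", "retention", "acquisition"],
)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency: k/(k-1) * (1 - sum of item variances / variance of the summed scale)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

dimension_means = responses.mean()   # per-dimension score: negative = aversive, positive = desirable
alpha = cronbach_alpha(responses)    # quick reliability check (random data, so expect a low value)
print(dimension_means.round(2))
print(f"alpha = {alpha:.2f}")
```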
Strategic Outcomes Scale
3 dimensions tied to business goals: Ritualizing (habit formation), Thrill (emotional engagement), Attract (word-of-mouth).
Companion instrument that measures strategic alignment alongside UX quality.
Integrated Priority Ranking Framework
Combines quantitative scores with qualitative signal and strategic alignment. Produces composite scores that give stakeholders clear, defensible prioritization.
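Here is a minimal sketch of what a composite prioritization score could look like, assuming a simple weighted blend. The three inputs mirror the streams named above; the weights, the 0–1 coding of the qualitative and strategic inputs, and the candidate items are all hypothetical, not the framework’s actual formula.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    desirability: float   # mean on the -3..+3 desirability scale
    qual_signal: float    # strength of qualitative evidence, coded 0..1
    strategic_fit: float  # alignment with strategic outcomes, coded 0..1

def composite_score(c: Candidate, weights=(0.5, 0.25, 0.25)) -> float:
    """Weighted blend of the three streams; desirability rescaled from -3..+3 to 0..1."""
    desirability_01 = (c.desirability + 3) / 6
    w_quant, w_qual, w_strat = weights
    return w_quant * desirability_01 + w_qual * c.qual_signal + w_strat * c.strategic_fit

# Hypothetical backlog items, ranked by the blended score.
backlog = [
    Candidate("Checkout redesign", desirability=1.8, qual_signal=0.9, strategic_fit=0.7),
    Candidate("Loyalty dashboard", desirability=0.6, qual_signal=0.4, strategic_fit=0.9),
]
for c in sorted(backlog, key=composite_score, reverse=True):
    print(f"{c.name}: {composite_score(c):.2f}")
```

The value of a blend like this is less the exact arithmetic than the fact that the weighting is explicit, so stakeholders can debate the weights instead of the ranking.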
Why it matters: Individual studies are valuable. Repeatable measurement systems that compound over time are transformational. I build the infrastructure, not just the insights.
The Full Methodological Range
I’m not a “qual researcher who can do surveys” or a “quant researcher who occasionally talks to users.” I’m a trained methodologist who matches approach to question.
Quantitative:
- Survey design with psychometric validation
- Experimental design (controlled comparisons, factorial designs)
- Statistical analysis: structural equation modeling (SEM), factor analysis, regression, ANOVA
- Large-scale data collection and analysis
Qualitative:
- In-depth interviews and contextual inquiry
- Ethnographic observation (field research with real behavior)
- Usability testing (moderated and unmoderated)
- Diary studies and longitudinal methods
Mixed Methods:
- Triangulation across qual and quant
- Integrated frameworks that combine multiple data streams
- Qual→Quant sequencing (discover then validate)
The PhD difference: I don’t just know HOW to run these methods; I understand WHEN each is appropriate, what threats to validity exist, and how to design around them.
Practice Building: Research as Infrastructure
I’ve built research practices from scratch. Three times.
From Zero:
- Glenville State & Albright College: Director-level roles creating institutional research functions where none existed
- PointsBet: Sole researcher → full practice with repository, taxonomy, stakeholder rituals
Evolved & Elevated:
At Fanatics, I inherited an existing practice and elevated it—integrating AI tools, building measurement infrastructure, creating standard frameworks, and positioning UXR as an essential strategic partner rather than a validation service.
What “building from zero” actually means:
- Designing systems that scale beyond one person
- Creating processes that outlive individual studies
- Training non-researchers to think in evidence
- Establishing stakeholder relationships that make research indispensable
Why it matters for you: I don’t need an existing playbook. I write the playbook.
Let’s Talk Methodology
If you’re looking for someone who goes beyond the conference room, who builds programs, creates infrastructure, and brings rigor without sacrificing speed, let’s connect.
