The hardest part of an AI/ML project isn't the model.
ML Strategy
From readiness assessments to tailored strategy and training workshops, I help your organization develop and implement an AI/ML strategy that advances your business objectives, so you can invest in AI/ML capabilities with confidence.
From $5,000
ML Systems
I build custom, production-ready AI/ML systems trained on your data and tailored to your domain: classification, information extraction, and recommender systems that work because they have been trained to understand the specialized language of your field.
From $20,000
Data Visualization
I design publication-quality visualizations and interactive data features that tell a compelling story with data. I apply best practices from data journalism: every design decision serves the argument, not just aesthetics.
From $1,000
Most ML projects don't fail because of the model. They fail because the problem was framed wrong, the labels were poorly defined, or the evaluation metric didn't reflect what actually mattered in production.
Fixing those problems requires a different kind of expertise — not more advanced architecture or more compute, but better methodology. Asking the right questions. Carefully framing the problem. Thoughtfully operationalizing key concepts. Selecting appropriate methods, not trendy ones. That work happens before a single model is trained.
I have extensive experience conducting quantitative research at leading universities, working with some of the most complex text there is: dense legal documents, multilingual court judgments, and technical regulatory documents, where a misclassification isn't just a metric — it's a wrong answer about what the law says. That environment trains a specific skill set: defining valid labels, anticipating measurement error, and determining whether a model has actually learned the concept you care about.
I bring research-level rigor to every AI/ML project — from our first conversation about how to frame the problem to the final handoff to your team.
Can You Tell If Text Was Written by an LLM?
The "em-dash debate" misses the point entirely. Detecting LLM-generated text is a challenging classification problem, and the "folk methods" people use to do it are somewhere between unreliable and useless.
How Quantitative Social Scientists Can Contribute to ML Projects
Quantitative social science has spent decades developing tools for drawing valid inferences from messy observational data about complex human phenomena. Those tools transfer directly to applied ML — and their absence explains a significant share of production failures.
Domain Adaptation Is Not Fine-Tuning: A Practical Distinction That Matters
The terms "fine-tuning" and "domain adaptation" are often used interchangeably, but they name different problems that require different approaches. Getting the distinction wrong is one of the more expensive mistakes in applied NLP.