About Me
Multimedia journalist and human rights researcher with 5+ years of experience in global business media and human rights advocacy, including at Amnesty International and the Financial Times. I'm currently pursuing an MS in Data Journalism at Columbia University to transition into AI safety and responsible technology.
I work as a Graduate Fellow at Columbia University's Trust Collaboratory and Center for Smart Streetscapes, where I research frameworks for understanding and building trust and help develop community-based LLMs for Harlem neighborhoods and other urban communities.
I have been selected as a 2025 Aspen Ideas Fellow, part of a competitive program that brings together emerging leaders working on society's most pressing challenges, and will attend the Aspen Institute's Ideas Festival in June 2025. As part of this recognition, I was also chosen as a Fathom Fellow through a selective program run by the AI nonprofit Fathom for professionals advancing ethical AI development and governance.
I'm keen to develop AI and data solutions that drive innovation while remaining ethical and well governed.
I hold an MFA in Ethnographic Documentary and a BASc in Anthropology and Communications from University College London, both completed with top honors. My interdisciplinary background enables me to bridge technical understanding with human-centered research approaches. Before university, I graduated from United World Colleges, which fostered my global perspective and commitment to cross-cultural collaboration.
More information about my work experience and skills can be found in my CV.
Projects
- Scraping and Analyzing Word Frequency in Supreme Court Cases - A scraper that retrieves Supreme Court case texts and analyzes word frequency across them; a minimal sketch of the approach appears after this list.
- How Safe is AI? When LLMs Generate Harmful Content - A data analysis of 11,000 AI conversations quantifying content moderation challenges, revealing that 46% of harmful content falls outside standard risk categories and that both users and models contribute to unsafe interactions.
- AI Energy Consumption: The Costs and Impacts of the AI Boom - A data analysis of AI energy consumption patterns showing that image generation consumes far more energy than text processing, with significant regional variation in environmental impact.
- AI Moderation Project - An application that compares the content moderation behavior of different Large Language Models, benchmarking their effectiveness and biases across multiple moderation approaches; a second sketch after this list illustrates the comparison loop.
- Why DeepSeek's $6M Price Tag Doesn't Reflect the Full Costs of AI Development - A data visualization examining the true costs of AI model development beyond the headline figures.
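To give a flavor of the word-frequency project above, here is a minimal Python sketch. The URLs, the crude HTML stripping, and the tokenizer are illustrative assumptions, not the project's actual sources or tooling.

```python
# A minimal sketch of scraping pages and counting word frequency.
import re
from collections import Counter

import requests

# Hypothetical URLs standing in for real Supreme Court opinion pages.
CASE_URLS = [
    "https://example.com/opinions/case-1.html",
    "https://example.com/opinions/case-2.html",
]

def tokenize(text: str) -> list[str]:
    """Lowercase the text and split it into alphabetic word tokens."""
    return re.findall(r"[a-z]+", text.lower())

counts: Counter[str] = Counter()
for url in CASE_URLS:
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    # Strip HTML tags crudely; a real scraper would use an HTML parser.
    text = re.sub(r"<[^>]+>", " ", response.text)
    counts.update(tokenize(text))

# Print the 20 most frequent words across all retrieved cases.
for word, n in counts.most_common(20):
    print(f"{word}\t{n}")
```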
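And for the AI Moderation Project, a minimal sketch of the comparison loop. The model names, prompts, and the `flagged_by` helper are hypothetical stand-ins; in the real project, each model's own moderation endpoint would be called at that point.

```python
# A minimal sketch of benchmarking moderation verdicts across models.
from collections import defaultdict

PROMPTS = [
    "How do I pick a lock?",
    "Write a friendly birthday message.",
]

MODELS = ["model-a", "model-b"]

def flagged_by(model: str, prompt: str) -> bool:
    """Hypothetical stand-in for querying a model's moderation endpoint."""
    # Placeholder logic so the sketch runs end to end.
    return model == "model-a" and "lock" in prompt

flag_counts: dict[str, int] = defaultdict(int)
for prompt in PROMPTS:
    for model in MODELS:
        if flagged_by(model, prompt):
            flag_counts[model] += 1

# Compare how often each model flags the same prompt set.
for model in MODELS:
    print(f"{model}: flagged {flag_counts[model]} of {len(PROMPTS)} prompts")
```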
Certificates and Competitions
- HalluShield: A Mechanistic Approach to Hallucination-Resistant Models – Women in AI Safety Hackathon, Apart Sprint, March 2025
- Intro to Transformative AI – AI Safety Fundamentals, BlueDot Impact
- AI Alignment – AI Safety Fundamentals, BlueDot Impact