Sara Hooker


Sara Hooker is a computer scientist who works in the field of artificial intelligence. She is known for her research on model efficiency at scale, large language models, and algorithmic bias and fairness in machine learning. In 2025, she co-founded Adaption, a startup focused on building AI systems capable of continuous real-time learning and efficient adaptation. She previously served as Vice President of Research at Cohere, where she led the company's research arm, Cohere For AI, and launched its scholars program. In 2023, Fortune listed her as one of AI's top 13 innovators, and in 2024 she was named to TIME's list of the most influential people in AI.
Sara Hooker is on Kaggle's ML Advisory Research Board and the World Economic Forum council on the Future of Artificial Intelligence. She is also a member of the MLC research group.

Early life and education

Sara Hooker was born in Dublin, Ireland. When she was four years old, her family moved to Lesotho, and she grew up in South Africa, Mozambique, Lesotho, Eswatini and Kenya until she was 19.
Hooker earned a PhD in computer science from Mila - Quebec AI Institute.

Career

Early career

In 2014, she founded Delta Analytics, which develops technical capacity for non-profits.

Google Brain

Hooker joined Google Brain in 2017 as a research scientist. During her tenure, she focused on model interpretability and introduced the concept of the "Hardware Lottery", which describes how hardware constraints shape the direction of AI research. She was also a founding member of Google's first AI research office in Accra, Ghana.

Cohere
In April 2022, Hooker joined the AI startup Cohere to lead Cohere Labs, the research arm of Cohere. The lab launched several initiatives, including:
  • Aya Project: A massive collaborative effort involving over 3,000 researchers to create a state-of-the-art multilingual model and dataset covering 101 languages.
  • Aya Expanse: A 2024 release of highly performant 8B and 32B multilingual models designed to bridge the gap between English-centric and global AI capabilities.