I’m interested in NLP & IR, with a special focus on developing cool, smaller models to power LLM-based applications. You may know me from my frequent Twitter rants about preferring late interaction over dense vectors, or from my love for encoders.
I do R&D at Answer.AI 👀, where every quest is a side-quest and scope creep is encouraged. Most recently, I’ve helped gather a really cool crowd of people to bring BERT to 2024 👀!
I’ve also built 🪤RAGatouille, a Python library whose grand aim is to bridge the gap between state-of-the-art research code and commonly used practices, and Rerankers, a library that makes it really easy to use just about any common reranking method and swap them in and out painlessly.
Feel free to reach out if you’d like to chat!