Hello 👋

I’m a French person working in Machine Learning, and more specifically, NLP/Recommendations/(a tiny bit of IR).

I’m a former ML Lead at a few places, most recently Bright Network.

My main areas of interest are, broadly, LLMs and representation techniques. You may know me from my frequent Twitter rants about using late-interaction over dense vectors.

Once upon a time, I specifically focused on figuring out ways to deploy lightweight-but-not-much-worse models to production, because BERT-Large seemed unwieldy.

I take on advisory and contract work in areas broadly related to my interests, with bonus points if the work will be released publicly in some form! If you’re interested, feel free to reach out via email.

RAGatouille

I’m a big fan of seeing the impact of research in production, and I’m really excited to watch the wild west of prompt engineering and arcane techniques grow into a mature field. I think a huge part of that is making models easy to use: take ColBERT, for example. It seems scary, but the bag-of-embeddings and MaxSim concepts are actually super simple once you get to play with them!
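To show just how simple, here’s a minimal sketch of MaxSim scoring (assuming you already have normalised token-level embeddings for a query and a document; the function name is mine for illustration, not ColBERT’s actual API):

```python
import torch

def maxsim_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    """Late-interaction scoring: a document is a bag of token embeddings.
    For each query token, take its best (max) similarity over every
    document token, then sum those maxima into a single relevance score.

    query_emb: (n_query_tokens, dim), doc_emb: (n_doc_tokens, dim)
    """
    # Pairwise similarities between every query token and every doc token
    token_similarities = query_emb @ doc_emb.T  # (n_query_tokens, n_doc_tokens)
    # Max over document tokens, summed over query tokens
    return token_similarities.max(dim=1).values.sum()
```

That’s the whole trick: no single dense vector bottleneck, just token-level matching aggregated into one score.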

I’ve recently released RAGatouille, a library which makes it easy to use ColBERT/late-interaction models in production in just a few lines of code! If you’re wondering why you should use late-interaction (or RAGatouille), check out the in-progress RAGatouille documentation homepage!
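For a flavour of what “a few lines” means, here’s a rough sketch of indexing and searching with a pretrained ColBERT checkpoint (the index name and documents below are placeholders; the API is evolving, so check the documentation for the current version):

```python
from ragatouille import RAGPretrainedModel

# Load a pretrained ColBERT checkpoint
RAG = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")

# Build an index over your documents (placeholder collection)
RAG.index(
    collection=["ColBERT represents documents as bags of token embeddings."],
    index_name="my_index",
)

# Retrieve the top-k passages for a query
results = RAG.search(query="How does ColBERT represent documents?", k=3)
```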

My current plans involve growing RAGatouille into a sustainable OSS library with proper contributors, and for it to serve as a gateway into the wonderful world of interacting with LLMs in a new way.

RAGatouille currently allows you to leverage and train ColBERT models in a variety of ways, and will soon integrate DSPy programs as well. Without much pretense, I’d love to see it grow, or at least help other projects to grow, into some sort of baby-HuggingFace to ColBERT/DSPy’s PyTorch.

The Side Projects

In my spare time, I like to explore other fun LLM applications, to figure out how to optimise their deployments or to try new toys. Over the past couple of years, I’ve worked on a lot of recommendation projects, and more recently RAG projects, and I’ve been eager to find ways to improve things!

As a result of those ongoing efforts, I’ve released an early version of a Japanese-language embedding model, Fio, as well as a Japanese ColBERT model (and its training dataset), which is the current state of the art for monolingual Japanese document retrieval.

The Previous Full-Time Stuff

In industry, I’ve generally been responsible for setting up and leading NLP projects.


Over time, it’s become apparent that “It does take 9 hours longer to run, but the macro-F1 on CoNLL2003 is 2 points higher!” is, in fact, perhaps not the greatest thing to tell a customer.

As a result, the main aim of those projects generally ends up being to ship something that can actually serve users in production, rather than something that merely looks good on the Benchmark of the Day.

I’ve been lucky to publish some of my work. Everything we publish makes it to production, often in improved form.

For example, our latest paper leverages a combination of LLMs and extremely lightweight models (good ol’ logistic regression) to address an ongoing problem in the AI-in-HR literature, doubling the performance of previous state-of-the-art approaches. (Of course, I’m lucky to work with excellent colleagues who made this possible!)