our foundation model

The industry-leading foundation model made by pathologists for pathologists

"RudolfV: A Foundation Model by Pathologists for Pathologists"

Jonas Dippel, Barbara Feulner, Tobias Winterhoff, Timo Milbich, Stephan Tietz, Simon Schallenberg, Gabriel Dernbach, Andreas Kunft, Simon Heinke, Marie-Lisa Eich, Julika Ribbat-Idel, Rosemarie Krupar, Philipp Anders, Niklas Prenißl, Philipp Jurmeister, David Horst, Lukas Ruff, Klaus-Robert Müller, Frederick Klauschen, Maximilian Alber

Our histopathology foundation model was developed using a curated “pathologist-in-the-loop” approach that increases model robustness. Our foundation model dramatically improves the scalability and performance of a variety of downstream tasks.

When evaluated against public benchmarks, our foundation model showed the highest average accuracy.

  • Foundation models cannot be evaluated in isolation – only their performance on downstream tasks can be measured

  • Numerous public histopathology benchmarks exist that can help contextualize performance

  • RudolfV is the most comprehensively benchmarked foundation model today

  • Over 90% accuracy on key histopathology benchmarks (Dippel et al., 2024)

  • Up to 10% increase in balanced accuracy on downstream tasks (based on internal analyses)

  • Up to 90% less training data and annotations needed to train new models (based on internal analyses)

What is a foundation model?

Trained on very large data sets, foundation models serve as a starting point for developing high-performing machine learning models quickly and cost-effectively.

Foundation models can be quickly fine-tuned for a wide range of downstream tasks and represent a major leap forward from traditional supervised learning approaches.
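To illustrate why fine-tuning a foundation model is so much cheaper than training from scratch, here is a minimal sketch in Python/PyTorch. It is not the actual RudolfV API: the `backbone` module, the embedding size `embed_dim`, and the optimizer settings are illustrative assumptions; the backbone is assumed to return one embedding vector per image tile.

```python
# Sketch only: a small downstream classifier on top of a frozen
# foundation-model backbone (hypothetical `backbone` module, not the RudolfV API).
import torch
import torch.nn as nn

class DownstreamClassifier(nn.Module):
    """Lightweight task head trained on frozen foundation-model embeddings."""

    def __init__(self, backbone: nn.Module, embed_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False             # keep the pretrained weights frozen
        self.head = nn.Linear(embed_dim, num_classes)  # only this small head is trained

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            features = self.backbone(images)    # tile embeddings from the foundation model
        return self.head(features)

# Only the head's parameters are optimized, e.g.:
# optimizer = torch.optim.AdamW(model.head.parameters(), lr=1e-3)
```

Because only the small head is trained, a downstream task can reach strong performance with a fraction of the labeled data a fully supervised model would require.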

How was our foundation model developed?

Our foundation model was developed by training AI to reconstruct masked image data. In doing so, the model learns to understand images and their context, e.g., that immune cells are less likely to appear within a dense tumor.
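The following Python sketch shows the masked-reconstruction idea in schematic form. It is not our actual training code: the patch representation, mask ratio, encoder/decoder modules, and loss are illustrative assumptions used only to convey the principle.

```python
# Schematic sketch of masked image reconstruction (illustrative, not the
# production training code). Patch layout, mask ratio, and the
# encoder/decoder modules are assumptions.
import torch
import torch.nn as nn

def random_mask(patches: torch.Tensor, mask_ratio: float = 0.6) -> torch.Tensor:
    """Return a boolean mask hiding a random subset of image patches per sample."""
    batch, num_patches, _ = patches.shape
    scores = torch.rand(batch, num_patches)
    k = int(num_patches * mask_ratio)
    threshold = scores.topk(k, dim=1).values[:, -1:]
    return scores >= threshold                      # True = patch is hidden from the model

def reconstruction_loss(encoder: nn.Module, decoder: nn.Module,
                        patches: torch.Tensor) -> torch.Tensor:
    """Encode only the visible patches, then reconstruct the hidden ones."""
    mask = random_mask(patches)
    visible = patches * (~mask).unsqueeze(-1)       # zero out the masked patches
    latent = encoder(visible)                       # the model only sees partial tissue
    reconstructed = decoder(latent)                 # predict the full set of patches
    # Penalize errors only on the patches the model never saw, which forces it
    # to learn tissue context (e.g., which cell types co-occur).
    return ((reconstructed - patches) ** 2 * mask.unsqueeze(-1)).mean()
```

Repeating this over very large volumes of unlabeled whole-slide image tiles is what lets the model build general-purpose representations without manual annotations.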

How does our foundation model impact downstream tasks?

In internal analyses, our foundation model:

  • Reduces the amount of training data/annotations needed to optimize model performance by up to 90%

  • Increases the balanced accuracy of cell classification tasks by an average of ~10% across cell types

  • Is robust across a wide range of scanners and stains

Inquiries

Interested in our foundation model? Reach out to learn more!

Contact us