Foundation models as context

Using foundation models as flexible priors and contextual memory for downstream inference.

Prior knowledge is a powerful form of context in statistical inference. Traditionally, applying such knowledge required expert intervention on each new problem. Today, foundation models encode broad domain knowledge in a reusable, black-box format. Our work focuses on extracting and operationalizing this implicit knowledge by connecting foundation models to structured, parametric statistical models.
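As a minimal sketch of what this connection can look like in practice (not an implementation from our codebase), consider eliciting a prior for a regression coefficient from a foundation model and folding it into a conjugate Bayesian update; the `elicit_prior` helper and its canned responses below are hypothetical stand-ins for a real model query.

```python
import numpy as np

def elicit_prior(feature_name: str) -> tuple[float, float]:
    """Hypothetical helper: ask a foundation model for a plausible prior
    mean and standard deviation for a coefficient. The canned values here
    stand in for a real model response."""
    canned = {"age": (0.5, 0.2), "ldl_cholesterol": (0.8, 0.3)}
    return canned.get(feature_name, (0.0, 1.0))

def posterior_coefficient(x, y, prior_mean, prior_sd, noise_sd=1.0):
    """Conjugate update for a single Gaussian regression coefficient:
    posterior precision = prior precision + data precision."""
    prior_prec = 1.0 / prior_sd**2
    data_prec = (x @ x) / noise_sd**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + (x @ y) / noise_sd**2)
    return post_mean, np.sqrt(post_var)

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 0.6 * x + rng.normal(scale=0.5, size=50)

mu0, sd0 = elicit_prior("age")
mu_post, sd_post = posterior_coefficient(x, y, mu0, sd0)
print(f"prior {mu0:.2f}±{sd0:.2f} -> posterior {mu_post:.2f}±{sd_post:.2f}")
```

The point of the sketch is only that the elicited prior enters the estimator as ordinary, inspectable structure: the data can overrule it, and its influence shrinks as evidence accumulates.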

We are building a bi-directional bridge between foundation models and structured statistical estimation: foundation models supply priors and context for parametric models, and fitted, interpretable models in turn give foundation models structured objects to examine, critique, and repair.

Our work on InContextML supports this integration with structured prompting and retrieval-augmented inference.
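A hedged sketch of retrieval-augmented inference in this setting (not the RAG-IM or InContextML implementation): retrieve similar reference cases, place them in a structured prompt, and ask the foundation model for the parameters of an interpretable model. All data, field names, and the prompt format below are invented for illustration.

```python
import numpy as np

# Toy "retrieval corpus": reference cases with embeddings and fitted
# interpretable-model coefficients (all values are made up for illustration).
corpus_embeddings = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]])
corpus_records = [
    "age=71, ldl=160 -> coefficients: age 0.52, ldl 0.78",
    "age=34, ldl=95  -> coefficients: age 0.21, ldl 0.40",
    "age=65, ldl=150 -> coefficients: age 0.48, ldl 0.71",
]

def retrieve(query_emb: np.ndarray, k: int = 2) -> list[str]:
    """Return the k most similar reference records by cosine similarity."""
    sims = corpus_embeddings @ query_emb
    sims /= np.linalg.norm(corpus_embeddings, axis=1) * np.linalg.norm(query_emb)
    return [corpus_records[i] for i in np.argsort(-sims)[:k]]

def build_prompt(query_desc: str, neighbors: list[str]) -> str:
    """Structured prompt: retrieved cases as in-context examples, then the query."""
    context = "\n".join(f"- {r}" for r in neighbors)
    return (
        "You estimate coefficients of an additive risk model.\n"
        f"Reference cases:\n{context}\n"
        f"New case: {query_desc}\n"
        "Return coefficients for: age, ldl."
    )

query = np.array([0.85, 0.15])
prompt = build_prompt("age=69, ldl=155", retrieve(query))
print(prompt)  # this prompt would be passed to the foundation model
```

Because the model's output is a set of parameters for a structured model rather than a free-form prediction, the resulting estimate stays inspectable even when the prompting pipeline is a black box.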



References

2024

  1. From One to Zero: RAG-IM Adapts Language Models for Interpretable Zero-Shot Clinical Predictions
    Sazan Mahbub, Caleb Ellington, Sina Alinejad, and 3 more authors
    NeurIPS Workshop on Adaptive Foundation Models (NeurIPS AFM), 2024

2023

  1. Data Science with LLMs and Interpretable Models
    Sebastian Bordt, Ben Lengerich, Harsha Nori, and 1 more author
    AAAI Explainable AI for Science, 2023
  2. LLMs Understand Glass-Box Models, Discover Surprises, and Suggest Repairs
    2023