Machine-assisted scientific discovery has been a popular topic in machine learning. Recent breakthroughs have shown that building a framework that exploits known background theory (encoded as physical axioms) in addition to data-driven symbolic regression can greatly improve machine-assisted discovery in the scientific context (1, 2). These methods generate hypotheses that not only fit the data but are also derivable from theory. This certificate of derivability, however, is only guaranteed when the background theory itself is complete. If essential axioms are missing, these systems may still recover the correct hypothesis from data, but they cannot produce a certificate of derivability. There is therefore an explainability gap between machine-generated hypotheses and known theory whenever that theory is incomplete. In this talk, we attempt to bridge this gap with our work on AI Noether, a computational framework for abductively inferring the missing axioms that an incomplete background theory (encoded as polynomials) needs in order to explain a hypothesis.
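As a rough illustration of the underlying idea only (this is not the AI Noether implementation), the sketch below treats background axioms and a hypothesis as polynomials, and tests derivability as ideal membership via Gröbner-basis reduction. A nonzero remainder signals that the theory is incomplete, and the remainder itself is one candidate for a missing axiom. The axioms, hypothesis, and variables here are invented for the example.

```python
import sympy as sp

# Toy setup: polynomials stand for equalities "p = 0".
x, y, z = sp.symbols('x y z')

# Hypothetical incomplete background theory: a single axiom x = y.
axioms = [x - y]

# Hypothesis we would like to derive from the theory: x + y = 2z.
hypothesis = x + y - 2*z

# Derivability check as ideal membership: reduce the hypothesis
# modulo a Groebner basis of the axioms.
G = sp.groebner(axioms, x, y, z, order='lex')
_, remainder = G.reduce(hypothesis)

if remainder == 0:
    print("Hypothesis is derivable from the background theory.")
else:
    # The remainder (here 2y - 2z, i.e. y = z) is a candidate missing axiom.
    print("Not derivable; candidate missing axiom:", sp.Eq(remainder, 0))

    # Sanity check: adding the candidate axiom makes the hypothesis derivable.
    G2 = sp.groebner(axioms + [remainder], x, y, z, order='lex')
    _, r2 = G2.reduce(hypothesis)
    assert r2 == 0
```

In this toy run the reduction leaves the remainder 2y - 2z, so the abduced axiom y = z, together with the original axiom x = y, suffices to derive the hypothesis.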
You can find the slides for my talk here.