The past few years have witnessed a revolution in machine learning. However, just as capabilities have increased, so too have concerns over safety, explainability, and fairness when deploying these powerful technologies in the real world.
Formal verification and automated reasoning have traditionally been the weapons of choice in providing proof of performance, but applying such techniques to the current machine learning paradigm is highly challenging. Here at Imandra, we're rising to this challenge by combining formal methods with all three key stages of the machine learning pipeline - data analysis, training, and model evaluation - to provide new levels of insight and rigor.
One of the primary weaknesses of current machine learning algorithms (and deep learning in particular) is the vast amount of data typically needed for training. Using a state-space description, Imandra's region decomposition tool can split potentially infinite domains into a finite number of regions over which the behavior of a specified system is invariant. By sampling from each region instead of the whole state space, we can ensure that our learning algorithm encounters all possible edge cases with less data, improving both efficiency and performance. We've previously demonstrated how this kind of symbolic, semantic information can augment statistical training data within deep reinforcement learning, and we believe it is critical for the next generation of AI.
Read the Article
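To make the idea concrete, here is a minimal sketch of region-guided sampling. The toy `price` function, the three hand-written regions, and their samplers are assumptions for illustration; in practice, Imandra's decomposition computes such regions automatically from a symbolic model.

```python
import random

def price(order_size):
    # Toy system under analysis: tiered pricing with an edge case.
    if order_size <= 0:
        return None                  # invalid order
    elif order_size < 100:
        return 10.0 * order_size     # standard rate
    else:
        return 8.0 * order_size      # bulk discount

# Three regions over which `price` behaves uniformly, each paired with
# a sampler that only draws points satisfying its constraint.
regions = [
    {"constraint": lambda n: n <= 0,      "sample": lambda: random.randint(-50, 0)},
    {"constraint": lambda n: 0 < n < 100, "sample": lambda: random.randint(1, 99)},
    {"constraint": lambda n: n >= 100,    "sample": lambda: random.randint(100, 10_000)},
]

def region_sample(k_per_region):
    """Draw k points from each region, guaranteeing that every
    behavior mode (including edge cases) appears in the data."""
    data = []
    for region in regions:
        for _ in range(k_per_region):
            x = region["sample"]()
            assert region["constraint"](x)  # point really lies in its region
            data.append((x, price(x)))
    return data

# 15 points covering all three behaviors; naive uniform sampling over
# [-50, 10_000] would almost never hit the invalid-order edge case.
training_data = region_sample(5)
```

The payoff is coverage per sample: a uniform sampler spends nearly all its budget in the bulk-discount region, while region-guided sampling spends it evenly across behavior modes.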
Check out the documentation and examples of our newly released Python interface to the Iterative Decomposition Framework (IDF).
Read more
We're developing a new probabilistic programming library for Imandra, allowing users to quickly and easily generate custom hierarchical probabilistic models representing distributions over their algorithms' inputs. This statistical information can then be automatically propagated through the existing symbolic model to form a distribution over outputs. Combined with Imandra's capacity to generate Python output for querying and parsing large datasets, this allows unprecedented insight into the behavior of potentially highly complex and dynamic algorithmic systems. An exciting and increasingly important application we're already researching is verifying the fairness properties of decision-making algorithms with respect to population demographics. Imandra is an invaluable tool for checking against these kinds of increasingly pervasive standards and regulations.
Example Notebook
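As an illustration of propagating an input distribution through a decision rule to check a fairness property, here is a minimal Monte Carlo sketch. The loan-approval rule, the hierarchical applicant model, and all of its parameters are assumptions invented for this example; Imandra's library would generate and analyze such models from the user's own algorithm.

```python
import random

random.seed(0)

def sample_applicant():
    # Hierarchical input model: group membership is drawn first and
    # determines the income distribution (hypothetical parameters).
    group = "A" if random.random() < 0.6 else "B"
    income = random.gauss(55_000 if group == "A" else 50_000, 10_000)
    debt = random.gauss(15_000, 5_000)
    return group, income, debt

def approve(income, debt):
    # Toy decision-making algorithm under analysis.
    return income - 0.5 * debt > 40_000

def approval_rates(n=100_000):
    """Push the input distribution through the decision rule and
    estimate the per-group distribution over outputs."""
    counts = {"A": [0, 0], "B": [0, 0]}   # [approved, total] per group
    for _ in range(n):
        group, income, debt = sample_applicant()
        counts[group][0] += approve(income, debt)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

rates = approval_rates()
# Demographic-parity gap: difference in approval rates between groups.
gap = abs(rates["A"] - rates["B"])
```

Here sampling stands in for the symbolic propagation described above: with the symbolic model, the output distribution and the parity gap can be characterized exactly rather than estimated.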
The vast majority of work within formal methods has involved analyzing models fully specified by the user. However, more and more critical parts of algorithmic pipelines are made up of models learned from data instead. Analyzing these kinds of models presents fresh challenges for the formal methods community and has seen exciting progress in recent years. While scalability remains a significant open research problem - state-of-the-art models often have millions of parameters - we've already made progress by analyzing and verifying simple yet powerful models learned from real-world data. In the future, we'll be taking this work further by expanding both the size and the variety of machine learning architectures that Imandra is able to verify.
Read our Article
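To show what verifying a property of a learned model can look like, here is a minimal sketch. The tiny decision tree below is hand-written to stand in for a model learned from data, and the monotonicity property is a hypothetical safety requirement; the brute-force check over a bounded integer domain is only a stand-in for the symbolic reasoning Imandra performs.

```python
def learned_model(age, dose):
    # Toy decision tree standing in for a model learned from data:
    # it maps patient features to a recommended treatment level.
    if age < 65:
        return 2 if dose < 50 else 1
    else:
        return 1 if dose < 50 else 0

def verify_monotone_in_age(ages=range(0, 121), doses=range(0, 101)):
    """Check a safety property: the recommended treatment level never
    increases with age. Verified here by exhausting the bounded
    integer input space; returns a counterexample or None."""
    for dose in doses:
        prev = None
        for age in ages:
            out = learned_model(age, dose)
            if prev is not None and out > prev:
                return (age, dose)    # property violated here
            prev = out
    return None                        # property holds on the whole domain

counterexample = verify_monotone_in_age()
```

For this toy model the check succeeds, so `counterexample` is `None`. Enumeration only works on small discrete domains; symbolic verification extends the same question to infinite or real-valued input spaces.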