Automated Reasoning Meets Machine Learning

The past few years have witnessed a revolution in machine learning. However, just as capabilities have increased, so too have concerns over safety, explainability, and fairness when it comes to deploying these powerful technologies in the real world.



Formal verification and automated reasoning have traditionally been the weapons of choice for proving that systems perform as specified, but applying such techniques to the current machine learning paradigm is highly challenging. Here at Imandra, we're rising to this challenge by bringing formal methods to all three key stages of the machine learning pipeline - data analysis, training, and model evaluation - providing new levels of insight and rigour.

Improving Sampling Efficiency

One of the primary weaknesses of current machine learning algorithms (and in particular deep learning) is the vast amount of data typically needed for training. Using a state-space description, Imandra's region decomposition tool splits potentially infinite domains into a finite number of regions over which the behaviour of a specified system is invariant. By sampling from these regions instead of from the whole state space, we can ensure that our learning algorithm encounters all possible edge cases with less data, improving both efficiency and performance. We've previously demonstrated the potential for this kind of symbolic, semantic information to augment statistical training data within deep reinforcement learning, and we believe it will be critical for the next generation of AI.
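
To make this concrete, here's a minimal sketch in Python of region-balanced sampling. The toy controller, the hand-written regions, and all numeric bounds are illustrative assumptions; in practice Imandra's region decomposition derives the regions automatically from the system's definition.

```python
import random

# Toy controller with three behavioural regimes (purely illustrative).
def controller(speed, distance):
    if distance < 5.0:
        return "brake"
    elif speed > 30.0 and distance < 20.0:
        return "slow"
    else:
        return "cruise"

# Hand-written stand-ins for the regions a decomposition would produce:
# within each region the controller always takes the same branch.
REGIONS = [
    lambda: (random.uniform(0, 100), random.uniform(0, 5)),       # brake
    lambda: (random.uniform(30, 100), random.uniform(5, 20)),     # slow
    lambda: (random.uniform(0, 30), random.uniform(5, 200)),      # cruise, low speed
    lambda: (random.uniform(30, 100), random.uniform(20, 200)),   # cruise, far away
]

def region_balanced_sample(n):
    """Draw n labelled examples spread evenly across the behavioural regions,
    so rare edge cases (e.g. 'brake') still appear in the training data."""
    samples = []
    for i in range(n):
        speed, distance = REGIONS[i % len(REGIONS)]()
        samples.append(((speed, distance), controller(speed, distance)))
    return samples

print(region_balanced_sample(8))
```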

NEW Article - Read More
Example Notebook 1
Example Notebook 2

Probabilistic Symbolic Execution


We’re developing a new probabilistic programming library for Imandra, allowing users to quickly and easily build custom hierarchical probabilistic models representing distributions over the inputs to their algorithms. This statistical information can then be automatically propagated through the existing symbolic model to form a distribution over outputs. Combined with Imandra’s ability to generate Python code for querying and parsing large datasets, this gives unprecedented insight into the behaviour of highly complex and dynamic algorithmic systems. An exciting and increasingly important application we’re already researching is verifying fairness properties of decision-making algorithms with respect to population demographics. Imandra is an invaluable tool for checking compliance with these increasingly pervasive standards and regulations.
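
As a rough illustration of the idea - not the library's actual API, which is still under development - the Python sketch below hand-rolls the propagation step: a hypothetical loan-decision model, a made-up hierarchical model of the applicant population, and a Monte Carlo estimate of the resulting distribution over outcomes for each demographic group. Imandra propagates the input distribution through the symbolic model directly, rather than by sampling.

```python
import random
from collections import Counter

# Hypothetical decision-making algorithm under analysis.
def decide(income, credit_score):
    if credit_score < 580:
        return "reject"
    elif income < 25_000:
        return "refer"
    else:
        return "approve"

# A simple hierarchical probabilistic model of the applicant population.
# Every parameter here is an illustrative assumption.
def sample_applicant():
    group = "A" if random.random() < 0.6 else "B"
    income = random.gauss(40_000 if group == "A" else 32_000, 10_000)
    credit_score = random.gauss(680 if group == "A" else 640, 60)
    return group, income, credit_score

# Estimate the distribution over outcomes per demographic group - the
# fairness question discussed above.
outcomes = {"A": Counter(), "B": Counter()}
for _ in range(100_000):
    group, income, score = sample_applicant()
    outcomes[group][decide(income, score)] += 1

for group, counts in outcomes.items():
    total = sum(counts.values())
    print(group, {k: round(v / total, 3) for k, v in counts.items()})
```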

Example Notebook

Verifying Learnt Models

The vast majority of work within formal methods has involved analysing models that are fully specified by the user. More and more, however, critical parts of algorithmic pipelines are constituted by models that are instead learnt from data. Analysing these kinds of models presents fresh challenges for the formal methods community and has seen exciting progress in recent years. While scalability remains an important open research problem - state-of-the-art models often have millions of parameters - we’ve already made headway by analysing and verifying simple yet powerful models learnt from real-world data. In future, we’ll take this work further by expanding both the size and variety of machine learning architectures that Imandra is able to verify.
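
For a flavour of what this looks like, here's a minimal sketch: a toy decision tree (its thresholds are made up for illustration) transcribed into a symbolic expression, with a simple property checked using the Z3 SMT solver's Python bindings in place of Imandra's own verification commands.

```python
# Requires the z3-solver package (pip install z3-solver).
from z3 import Real, If, Implies, And, Not, BoolVal, Solver, unsat

age, income = Real("age"), Real("income")

# The learnt tree's decision rule, written as a symbolic expression.
approve = If(income < 20000, BoolVal(False),
          If(And(age < 25, income < 40000), BoolVal(False),
             BoolVal(True)))

# Property: every applicant earning over 40,000 is approved, whatever their age.
prop = Implies(And(income > 40000, age >= 18, age <= 100), approve)

s = Solver()
s.add(Not(prop))  # search for a counterexample to the property
print("property holds" if s.check() == unsat else s.model())
```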

NEW Article - Read More
Example Notebook

Give it a spin

Try Imandra and join us on Discord