Robust and Trustworthy Deep Learning (part 3): Themis AI
Themis AI's cutting-edge technological advancements in robust and trustworthy deep learning
In April 2023, Sadhana Lolla, a machine learning scientist at Themis AI, delivered a lecture on "Robust and Trustworthy Deep Learning," in which she presented the cutting-edge technological advancements in progress at Themis AI. We have published a series of blog posts to showcase the main points of her talk. Last week we discussed uncertainty in ML; in this final blog post we will show how Themis AI transforms models to be risk-aware in order to ensure trustworthy AI. For more information on the lecture, see MIT Introduction to Deep Learning.
In the previous two blog posts, we looked at two significant obstacles to responsible and robust deep learning: bias, which emerges when models lack relevant data, and uncertainty, which concerns quantifying how confident a model is in its predictions. Bias and uncertainty are ethical risk factors in machine learning because they can lead to unfair and discriminatory outcomes, such as denying loans based on applicants' race or systematically failing to hire women for certain jobs.
Here, we will look at how Themis AI leverages the ideas of bias and uncertainty to develop transformative products that make models more attuned to potential risks. This discussion will illuminate our role in reshaping the AI landscape, with a focus on ensuring the safety and reliability of AI technologies.
At Themis AI, we believe that uncertainty detection and bias mitigation offer solutions for safe and responsible AI. We leverage detection of both uncertainty and bias to mitigate risk at different stages of the AI life cycle:
Labeling the Data: Aleatoric uncertainty detection is key to determining the level of noise and erroneous labels in the data. These failures can lead to inaccurate predictions, since similar inputs may be paired with widely different outputs. Being aware of these risks by estimating aleatoric uncertainty can help labelers reevaluate incorrect labels and correct them.
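To make this concrete, here is a minimal sketch (a toy NumPy example, not Themis AI's actual tooling) of how aleatoric uncertainty surfaces noisy labels. On a synthetic dataset where one sub-population has much noisier annotations, a simple per-bin variance estimate — the same quantity a neural mean-variance head would learn end to end — flags the noisy region:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeling scenario: inputs in [0, 1); labels for x >= 0.5 are much noisier,
# mimicking a sub-population with inconsistent annotations.
x = rng.uniform(0, 1, 4000)
noise = np.where(x >= 0.5, 0.5, 0.05)          # heteroscedastic label noise
y = np.sin(2 * np.pi * x) + rng.normal(0, noise)

# Minimal aleatoric-uncertainty estimate: per-bin mean and variance of the labels.
# (A learned mean-variance estimation head captures the same quantities.)
bins = np.clip((x * 10).astype(int), 0, 9)
pred_mean = np.array([y[bins == b].mean() for b in range(10)])
pred_var = np.array([y[bins == b].var() for b in range(10)])

# Bins drawn from the noisy sub-population report much higher aleatoric variance,
# flagging those samples for label review.
print(pred_var[:5].mean(), pred_var[5:].mean())
```

Samples falling in high-variance bins are exactly the ones worth sending back to labelers for review.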
Analyzing the Data: Before training a model, we analyze the data in the dataset to identify biases and underrepresented demographics, prompting the addition of more diverse and inclusive data.
Training the Model: During training, if the dataset is biased in some problematic way, we can adaptively de-bias it using methods we have discussed in previous blog posts, such as the Debiasing Variational Autoencoder (DB-VAE).
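As an illustration of the adaptive debiasing idea, here is a simplified stand-in for the DB-VAE approach: instead of a trained encoder, it uses a one-dimensional stand-in for the learned latent feature, estimates its density with a histogram, and resamples training data inversely to that density so rare sub-populations are seen more often:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a learned latent feature: 90% of samples come from an
# over-represented mode, 10% from a rare one (e.g. an under-represented group).
z = np.concatenate([rng.normal(0.0, 0.3, 900), rng.normal(3.0, 0.3, 100)])

# DB-VAE-style adaptive resampling: estimate the latent density with a histogram
# and weight each sample inversely to the density of its bin.
hist, edges = np.histogram(z, bins=20, density=True)
bin_idx = np.clip(np.digitize(z, edges) - 1, 0, len(hist) - 1)
alpha = 0.01                                   # smoothing to avoid divide-by-zero
weights = 1.0 / (hist[bin_idx] + alpha)
weights /= weights.sum()

# Draw a debiased training batch: the rare mode is now sampled far more often
# than its 10% share of the raw data.
batch = rng.choice(z, size=1000, replace=True, p=weights)
rare_fraction = (batch > 1.5).mean()
print(rare_fraction)
```

In the full DB-VAE, the density is estimated over the latent space learned jointly with the task, so debiasing adapts automatically as training progresses.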
Verification and Certification: We use epistemic uncertainty detection on specific samples of predictions to check deployed models for safety and unbiased performance. This is a form of automatic auditing that can be reliably and efficiently scaled to certify trustworthiness in AI.
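One common way to obtain such epistemic scores — sketched here with a toy bootstrap ensemble of polynomial fits rather than a production model — is to measure disagreement between independently trained models: they agree where training data exists and diverge far from it:

```python
import numpy as np

rng = np.random.default_rng(2)

# Training data only covers x in [-1, 1]; anything outside is out-of-distribution.
x_train = rng.uniform(-1, 1, 200)
y_train = x_train ** 3 + rng.normal(0, 0.05, 200)

# A small ensemble: fit several models on bootstrap resamples and use their
# disagreement as an epistemic-uncertainty score.
preds = []
for _ in range(10):
    idx = rng.integers(0, len(x_train), len(x_train))
    coeffs = np.polyfit(x_train[idx], y_train[idx], deg=3)
    preds.append(np.polyval(coeffs, np.array([0.0, 2.5])))  # in-dist vs OOD query
preds = np.array(preds)

epistemic = preds.std(axis=0)
print(epistemic)   # disagreement is far larger at x = 2.5 than at x = 0.0
```

Running this kind of check over batches of deployed-model predictions is what makes automatic, scalable auditing possible.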
Deployment Guardian: This is a layer between the AI model and the user that signals high model uncertainty, and thus risk, in real-world situations. For instance, in autonomous driving, the system may signal that the model lacks confidence and that the human should take control of the vehicle. This technique keeps humans in command of the AI while allowing them to trust it in situations where the model's uncertainty is low.
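At its simplest, a guardian of this kind is a threshold on the model's reported uncertainty. The sketch below is purely illustrative — the threshold value and hand-off behavior are assumptions, and in practice the threshold would be calibrated on held-out data:

```python
# Hypothetical guardian layer: pass a prediction through only when the model's
# reported uncertainty is below a calibrated threshold; otherwise hand off to a human.
UNCERTAINTY_THRESHOLD = 0.2   # assumed value; would be calibrated on held-out data

def guardian(prediction: float, uncertainty: float):
    """Return the prediction, or None to signal that a human should take over."""
    if uncertainty > UNCERTAINTY_THRESHOLD:
        return None           # e.g. ask the driver to take the wheel
    return prediction

print(guardian(0.9, 0.05))    # confident: the AI acts
print(guardian(0.4, 0.75))    # uncertain: defer to the human
```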
Building the Model: When we look at how to actually build the model, at Themis AI we have developed Capsa, a model-agnostic framework for risk estimation. Capsa is an open-source library that transforms models to be risk-aware: by adding just one line to the training workflow, it automatically estimates label noise, bias, and uncertainties for you. Many methods for estimating uncertainty and bias exist, but it has proved quite hard to determine which of them is useful, and when. Capsa saves you the effort of choosing suitable methods by offering an extensive library of wrappers for achieving risk awareness. That is, Capsa allows easy implementation of various uncertainty metrics, such as aleatoric, epistemic, and vacuity uncertainty, with minimal additional work. It does this by wrapping models: for every uncertainty metric we want to estimate, we apply the minimal model modification necessary, without altering the original model's architecture or prediction capabilities. For aleatoric uncertainty, this may mean adding new layers; for a variational autoencoder, the method creates and trains the decoder and calculates the reconstruction loss on the fly.
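To convey the wrapper idea without reproducing Capsa's actual API (the function and variable names below are hypothetical), here is a minimal sketch: a wrapper takes any model and returns a new callable with the same inputs that also reports an uncertainty estimate, leaving the original model untouched:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    """Stand-in for any trained model (its architecture is left untouched)."""
    return np.sin(x)

def risk_wrap(model, rate=0.2, n_samples=30):
    """Illustrative risk-aware wrapper (not Capsa's real API): return a new
    callable yielding (prediction, epistemic_uncertainty)."""
    def wrapped(x):
        # Crude stand-in for Monte Carlo dropout: randomly mask the model's
        # outputs n_samples times and measure the disagreement.
        samples = np.array([
            model(x) * rng.binomial(1, 1 - rate, np.shape(x)) / (1 - rate)
            for _ in range(n_samples)
        ])
        return samples.mean(axis=0), samples.std(axis=0)
    return wrapped

# One extra line turns the plain model into a risk-aware one; inputs and
# the prediction output keep the same shape as before.
risk_aware_model = risk_wrap(model)
pred, uncertainty = risk_aware_model(np.array([0.5, 1.0]))
```

The design point this illustrates is composability: because the wrapper never touches the model's internals, different uncertainty metrics can be swapped in (or stacked) without rewriting the model itself.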
On July 1st 2023, we launched our Private Beta, offered to 5 companies (carefully selected from a waitlist of over 30) that represent some of the most innovative, visionary, and groundbreaking players in the market.
Unlocking the Future of Trustworthy AI
At Themis AI we are committed to advancing scientific innovation while simultaneously guaranteeing AI safety and reliability. We believe these are essential goals for ensuring that new technologies can be developed in a responsible way.
In particular, Trustworthy AI is essential for several compelling reasons:
Safety: In critical applications such as healthcare, autonomous vehicles, and industrial control systems, AI errors can have severe consequences. Trustworthy AI is crucial to minimize the risk of failures caused by unreliable models.
Ethical Considerations: AI systems are increasingly involved in decision-making processes that impact individuals and society as a whole. Ensuring that these systems are trustworthy helps prevent biased or unfair outcomes, promoting fairness and equity.
User Confidence: Trustworthy AI fosters user confidence. People are more likely to adopt and use AI technologies if they believe that the systems will provide accurate, reliable, and safe results. Furthermore, the long-term success and acceptance of AI technologies depend on their ability to consistently deliver value and avoid negative impacts. Trustworthy AI is a foundation for sustainable adoption and integration into various industries.
Regulatory Compliance: In many regions of the world (e.g., US, Europe) governments and regulatory bodies are introducing standards to govern the development and deployment of AI. Trustworthy AI tools allow organizations to meet these requirements and avoid reputational, legal and financial repercussions.
By developing products like Capsa, Themis AI aims to ensure AI safety across various fields and applications. We are actively engaged in the development and dissemination of open-source tools, fostering an environment of transparency and collaborative progress. Our resolute dedication is further amplified through our strategic partnerships and collaborations with industries on a global scale, as we enter a new era of AI that is both pioneering and trustworthy.
Themis AI is growing, and we are in the process of hiring new Machine Learning Scientists in the upcoming months. There has never been a better time to be part of the transformative journey towards fostering trustworthy AI. Joining us now means becoming an integral part of this exciting adventure, where cutting-edge research, collaborative work, and a commitment to ethical AI converge to shape the future of technology.