Organizations building artificial intelligence (AI) models have no shortage of quality challenges, including the need for explainable AI that minimizes the risk of bias.
For Redwood City, Calif.-based startup TruEra, the path to explainable AI is paved with technologies that bring AI quality to models. Founded in 2019, TruEra has raised over $45 million in funding, including a recent investment round that included participation from Hewlett Packard Enterprise (HPE).
This week, TruEra announced the latest stage of its growth, revealing that it has been selected to be part of the Intel Disrupter Initiative, which provides participants with technical partnerships and go-to-market support.
“The big picture here is that as machine learning becomes more widely adopted in the business, there is a growing need to explain, test, and monitor these models because they are used in higher-value use cases,” Will Uppington, co-founder and CEO of TruEra, told VentureBeat.
TruEra addresses the challenges of explainable AI
As the use of AI matures, regulations are emerging around the world for its responsible use.
Responsible use of AI has many facets, including prioritizing data privacy and providing mechanisms to explain the methods used in models, in order to promote fairness and avoid bias.
Uppington noted that in addition to regulations, the performance of AI systems, which require both speed and accuracy, must be monitored and measured. According to Uppington, each time software undergoes a paradigm shift, a new monitoring infrastructure is required. He argued that the monitoring infrastructure machine learning needs is different from what already exists for other types of software systems.
Machine learning systems are fundamentally data-driven analytical entities, where models are iterated at a much faster rate than other types of software, he explained.
“The data you see in production becomes training data for your next iteration,” he said. “So today’s operational data is tomorrow’s training data that is used to directly improve your product.”
As such, Uppington argues that to deliver explainable AI, organizations must first put proper oversight of AI models in place. The things a data scientist does to explain and analyze a model during development should continue to be monitored throughout the model’s lifecycle. With this approach, Uppington said, an organization can learn from its operational data and feed those lessons back into the next iteration of the model.
Disrupting the AI market with Intel
The issue of AI quality, or lack thereof, is often seen as a barrier to adoption.
“AI quality and explainability have become major obstacles for enterprises, impeding their ability to get a return on their AI investments,” said Arijit Bandyopadhyay, CTO of enterprise analytics and AI at Intel Corporation, in a press release. “By partnering with TruEra, Intel is helping remove these barriers by enabling enterprises to access AI assessment, testing, and monitoring capabilities that can help them harness AI for measurable business impact.”
Uppington noted that as part of the company’s engagement with Intel, TruEra is integrating with cnvrg.io, an Intel company that develops machine learning training software and services. The goal of the integration is to make it easier to build, deploy, and monitor the quality of AI models on the cnvrg.io platform.
Intel is not the first or only silicon vendor that TruEra has partnered with. Barbara Lewis, chief marketing officer at TruEra, said the company already has a partnership with Nvidia, though she noted that partnership isn’t as deep as the new Intel Disrupter Initiative engagement.
Looking ahead, Uppington said TruEra will continue to iterate on its own technology to further help organizations improve the quality and accuracy of AI.
“We’re going to talk a lot more about simplifying systematic testing and then root cause analysis for your machine learning systems,” he said.