Intel grant to help develop smarter AI models

Associate Professor Amit R. Trivedi is developing AI models that can assess the uncertainty of their predictions and continuously learn from their mistakes.
His framework will be trained on large datasets and adapted to specific tasks using conformal prediction, a method that assesses the uncertainty of a prediction with lightweight computation, so prediction models stay continuously aware of when they might be wrong without being overwhelmed by calculations.
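The article does not describe Trivedi's implementation, but the core idea of split conformal prediction can be illustrated with a short, generic sketch. Everything below, including the function names, the nonconformity score, and the alpha value, is an assumption for illustration rather than the project's actual code.

```python
import numpy as np

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split conformal calibration: find the score threshold that
    targets roughly (1 - alpha) coverage on future predictions."""
    # Nonconformity score: 1 minus the probability given to the true class.
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    n = len(scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q, 1.0))

def prediction_set(probs, threshold):
    """Return every class whose score falls under the threshold.
    A large set signals high uncertainty; a singleton signals confidence."""
    return np.where(1.0 - probs <= threshold)[0]

# Toy example: softmax outputs for a 3-class model.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=100)   # calibration predictions
cal_labels = rng.integers(0, 3, size=100)         # calibration ground truth
tau = calibrate_threshold(cal_probs, cal_labels, alpha=0.1)
print(prediction_set(np.array([0.85, 0.10, 0.05]), tau))
```

The appeal for constrained devices is that calibration is a one-time pass over held-out data; at inference time, producing a prediction set costs only one comparison per class, which is what makes this style of uncertainty awareness lightweight.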
Deep neural networks (DNNs) greatly simplify complex decision-making problems, easily identifying what is unique or suspicious, much like a human brain. But DNNs are black boxes, offering no guarantee that their predictions are accurate. Assessing when DNN predictions are inaccurate requires processing large amounts of data.
Processing this data on edge-connected devices is harder because of the sheer amount of power needed to process it quickly, and extracting that information within the time and energy limits of edge devices is not feasible with current semiconductor technology.
Trivedi is conserving power through dynamic sensor adjustment, scaling computing resources according to the model's own uncertainty and the signal interference present.
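The article does not spell out the adaptation policy, but the sketch below shows one hypothetical way an edge device might scale its sensing and compute with model uncertainty and signal quality. The entropy threshold, the SNR cutoff, and the two operating points are all invented for illustration, not drawn from the project.

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of a softmax output; higher means more uncertain."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def choose_operating_point(probs, snr_db,
                           low_power_hz=10, high_power_hz=100):
    """Hypothetical policy: spend more sensing and compute only when the
    model is uncertain or the signal is noisy; otherwise save power."""
    uncertain = entropy(probs) > 0.8   # assumed uncertainty threshold
    noisy = snr_db < 10.0              # assumed SNR cutoff, in dB
    if uncertain or noisy:
        return {"sample_rate_hz": high_power_hz, "model": "full"}
    return {"sample_rate_hz": low_power_hz, "model": "distilled"}

# Uncertain prediction in a noisy channel -> full model, fast sampling.
print(choose_operating_point(np.array([0.40, 0.35, 0.25]), snr_db=8.0))
# Confident prediction in a clean channel -> small model, slow sampling.
print(choose_operating_point(np.array([0.97, 0.02, 0.01]), snr_db=20.0))
```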
These techniques will be applied to healthcare applications, which can involve vast data inputs such as DNA sequences, and to digital twin applications: digital models that replicate a system's processes and incorporate real-time data from embedded sensors.
Trivedi received a $240,000 grant from Intel Corporation for the project, "Uncertainty-Aware Continual Learning and Dynamic Resource Adaptation for Foundation Models." The funding runs through July 31, 2027.
Trivedi holds seven concurrent grants, with more than $3 million in active research funding. His current work spans physics, robotics, and machine learning.