
LORI: Local Low-Rank Imputation for Context-Aware AI

THE CHALLENGE

Leveraging artificial intelligence (AI) for data-driven decision-making faces a major challenge: automatically configuring and optimizing complex AI pipelines across different use cases. As organizations contend with diverse datasets and fast-changing requirements, selecting the right combination of processing steps becomes increasingly difficult. Existing tools often rely on manual tuning or basic recommendation systems that cannot keep up with the scale, speed, or nuance of modern AI workloads. These systems struggle with incomplete performance data, cannot differentiate between similar options, and carry high computational costs, resulting in slower decisions, suboptimal configurations, and missed opportunities. There is a growing need for smarter, more scalable solutions that can adapt AI configurations in real time, even with sparse or messy data, helping businesses stay agile and competitive in a fast-moving landscape.

OUR SOLUTION

We offer a smarter, faster way for systems to automatically configure AI pipelines across a wide range of data scenarios. By breaking a large, incomplete performance matrix into smaller, more manageable submatrices, our system, built on the Local Low-Rank Response Imputation (LORI) framework, accurately predicts missing results and quickly ranks the best AI methods for each context. It uses statistical techniques such as multivariate splitting and low-rank matrix completion to identify patterns and similarities among datasets and algorithms, making the configuration process more precise and scalable. Unlike manual tuning or rigid recommendation systems, this approach adapts in real time, reduces computational overhead, and improves responsiveness, helping organizations save time, increase efficiency, and make better data-driven decisions at scale.

Figure: Block diagram of an example environment that can facilitate local low-rank matrix imputation for contextual computation pipeline recommendation according to at least one embodiment of the present disclosure.
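To make the workflow described above more concrete, the sketch below follows the same general pattern: a partially observed context-by-pipeline performance matrix is split into local neighborhoods, each submatrix is completed with a low-rank imputation heuristic, and pipelines are then ranked per context from the imputed scores. All variable names, the clustering step, the truncated-SVD imputer, and the synthetic data are illustrative assumptions, not the patented LORI estimator.

```python
# Minimal sketch of local low-rank imputation for pipeline recommendation.
# This is an illustrative approximation, not the LORI implementation itself.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic benchmark matrix: rows = dataset contexts, columns = candidate pipelines.
n_contexts, n_pipelines, latent_rank = 60, 12, 2
true_scores = rng.normal(size=(n_contexts, latent_rank)) @ rng.normal(size=(latent_rank, n_pipelines))

# Hide ~60% of entries to mimic sparse, incomplete performance data.
observed = rng.random((n_contexts, n_pipelines)) > 0.6
Y = np.where(observed, true_scores, np.nan)

def low_rank_impute(M, mask, rank=2, n_iters=50):
    """Fill missing entries by iterating a truncated-SVD reconstruction
    (a standard low-rank completion heuristic used here as a stand-in)."""
    filled = np.where(mask, M, 0.0)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        estimate = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        filled = np.where(mask, M, estimate)  # keep observed values fixed
    return filled

# "Local" step: group similar contexts and complete each submatrix separately.
context_features = np.nan_to_num(Y)  # crude context descriptor (assumption)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(context_features)

imputed = np.empty_like(Y)
for c in np.unique(labels):
    idx = labels == c
    imputed[idx] = low_rank_impute(Y[idx], observed[idx], rank=latent_rank)

# Recommend pipelines per context by ranking imputed scores (higher = better).
top3 = np.argsort(-imputed, axis=1)[:, :3]
print("Top-3 pipelines for context 0:", top3[0])
```

In a real deployment, the context grouping would presumably come from dataset or workload metadata rather than the raw score matrix, and the imputation model would be the LORI estimator described in the disclosure.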

Advantages:

  • Improved prediction accuracy using local low-rank modeling
  • Enhanced scalability through submatrix segmentation
  • Increased efficiency with minimal parameter tuning
  • Robust handling of noise and cold-start scenarios

Potential Applications:

  • Automated AI pipeline configuration and tuning
  • Context-aware machine learning model recommendation
  • Scalable deployment in cloud-based analytics platforms
  • Adaptive optimization for industry-specific data workflows

Patent Information:

Tech ID: 22-025
Category(s): Technology Classifications > Software

For Information, Contact:
Emily Lanier
Licensing Manager
Virginia Tech Intellectual Properties, Inc.
emilylt@vt.edu

Inventors: Ran Jin, Xiaoyu Chen