Lexi-Jarvis

A Proactive AI Notification System for Human–AI Interaction

Research Motivation

Most AI assistants today are reactive, requiring users to explicitly request information. This places a cognitive burden on users and limits the potential of AI systems to support decision-making in real time. This project explores proactive AI systems that surface relevant information before users ask, while carefully considering issues of attention, autonomy, and interruption.

Research Question

How can AI systems proactively deliver context-aware notifications while preserving user autonomy and minimizing cognitive disruption?

System Overview

Lexi-Jarvis is an independent proactive notification system developed by the author. It builds on an existing LLM-based conversational agent ("Lexi") as its language interface; the original contribution of this project is the Jarvis layer, which adds proactive decision-making, contextual reasoning, and notification delivery. Lexi-Jarvis monitors contextual signals and decides whether, when, and how to notify users, moving beyond fixed triggers by evaluating relevance and urgency before delivery.
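
As a deliberately simplified illustration of this "whether, when, and how" decision, the core of such a system can be modeled as a function from a context snapshot to an optional notification. The types, field names, and rule below are hypothetical and do not reflect the actual Lexi-Jarvis codebase:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Channel(Enum):
    """How a notification reaches the user (hypothetical set of channels)."""
    PUSH = "push"        # push notification to the PWA
    IN_APP = "in_app"    # shown on next app open


@dataclass
class ContextSnapshot:
    """One observation from the context monitoring layer (illustrative fields)."""
    hour_of_day: int      # temporal signal
    weather: str          # environmental signal from an external API
    user_is_active: bool  # coarse proxy for interruptibility


@dataclass
class Notification:
    title: str
    body: str
    channel: Channel


def decide(ctx: ContextSnapshot) -> Optional[Notification]:
    """Return a notification only when the context warrants an interruption."""
    # Example rule: surface an urgent weather alert, but only while the user
    # is active; returning None encodes the choice *not* to interrupt.
    if ctx.weather == "storm" and ctx.user_is_active:
        return Notification("Weather alert", "A storm is approaching.", Channel.PUSH)
    return None
```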

System Architecture

  • Context Monitoring Layer: periodically gathers environmental and temporal signals via external APIs.
  • Decision Layer: applies rule-based logic to assess notification relevance and timing.
  • Delivery Layer: pushes notifications to users through Firebase Cloud Messaging (FCM) in a Progressive Web App (PWA). An end-to-end sketch of the three layers appears after this list.
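
The sketch below wires the three layers together in Python, assuming a configured Firebase project, a service-account key file, and an FCM registration token obtained from the PWA. The context fields, scores, and thresholds are illustrative stand-ins, not the production rules:

```python
import datetime

import firebase_admin
from firebase_admin import credentials, messaging

# Delivery layer setup: assumes a service-account key for an existing
# Firebase project (the path is a placeholder).
firebase_admin.initialize_app(credentials.Certificate("serviceAccountKey.json"))


def gather_context() -> dict:
    """Context monitoring layer: collect temporal and environmental signals.

    A real implementation would call external APIs (weather, calendar, etc.);
    here the environmental signal is stubbed for brevity.
    """
    now = datetime.datetime.now()
    return {
        "hour": now.hour,
        "weather": "storm",  # stubbed; normally fetched from a weather API
    }


def score(ctx: dict) -> tuple[float, float]:
    """Decision layer: rule-based relevance and urgency scores in [0, 1]."""
    relevance = 1.0 if ctx["weather"] in ("storm", "heavy_rain") else 0.2
    # Treat waking hours as far more deliverable than night-time.
    urgency = 0.9 if 8 <= ctx["hour"] <= 22 else 0.1
    return relevance, urgency


def maybe_notify(device_token: str) -> None:
    """Run one monitor -> decide -> deliver cycle."""
    ctx = gather_context()
    relevance, urgency = score(ctx)

    # Only interrupt when both scores clear their thresholds; otherwise the
    # system stays silent, preserving the user's attention.
    if relevance >= 0.7 and urgency >= 0.5:
        message = messaging.Message(
            notification=messaging.Notification(
                title="Weather alert",
                body="A storm is approaching your area.",
            ),
            token=device_token,  # FCM registration token from the PWA
        )
        messaging.send(message)  # delivery layer: push via FCM
```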

Research Contribution

  • Framed proactive notification timing as a human–computer interaction design problem rather than a purely technical triggering mechanism.
  • Demonstrated how abstract HCI concepts such as attention and agency can be operationalized in a deployed AI system.
  • Identified early trade-offs between notification relevance, user trust, and perceived system autonomy.

Limitations & Future Work

The current system relies on rule-based decision logic and has not yet been evaluated through controlled user studies. Future work includes incorporating learning-based decision models, modeling user interruptibility, and conducting empirical studies on user trust and acceptance of proactive AI systems.
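
As one illustration of the learning-based direction, the hand-tuned thresholds could eventually be replaced by a classifier trained on user feedback ("was this notification welcome?"). The features, toy data, and scikit-learn model below are purely hypothetical, a sketch of the idea rather than a planned implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [hour_of_day, user_is_active, minutes_since_last_notification].
# Label: 1 if the user engaged with the notification, 0 if dismissed/ignored.
# Toy data for illustration; a real model would need logged interactions.
X = np.array([
    [9,  1, 240],
    [14, 1,  30],
    [23, 0, 600],
    [2,  0, 120],
])
y = np.array([1, 0, 0, 0])

# A learned interruptibility model replaces the hand-written rules: it
# predicts the probability that a notification is welcome right now.
model = LogisticRegression().fit(X, y)


def should_notify(hour: int, active: bool, mins_since_last: int) -> bool:
    p_welcome = model.predict_proba([[hour, int(active), mins_since_last]])[0, 1]
    return p_welcome >= 0.5  # the cutoff itself is a tunable design choice
```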

Research Outputs