Research
My group works on making AI-driven systems trustworthy — provably secure, resilient under adversarial conditions, and safe when deployed in the physical world.
Research Areas
Cyber-Physical Systems Security
Strategic defense of interconnected infrastructure — power grids, IoT networks, autonomous systems — against intelligent adversaries, using game-theoretic and control-theoretic tools.
Quantum AI & Security
Security and privacy of quantum computing systems; adversarially robust quantum machine learning; resilient quantum learning under noise and attacks.
Reinforcement Learning
Safe and trustworthy RL for sequential decision making — online representation learning, Stackelberg coupling, and adversarial RL in multi-agent environments.
Game & Decision Theory
Dynamic and differential games for security, mechanism design, contract theory, and optimal resource allocation under uncertainty and bounded rationality.
Network Resilience
Distributed optimization and resilience of large-scale interdependent networks, including energy systems, communication infrastructure, and multi-agent systems.
Programmable Wireless Systems
Formal-assurance-based policy management for domain-specific programmable wireless architectures; trustworthy AI for next-generation communication infrastructure.
Funded Projects
Trustworthy Programmable Wireless Infrastructure
NSF · 2025
Resilient Quantum Learning Systems
NSF · 2024
System-Level Quantum Computing Security and Privacy
NSF · 2023
Cyber-Physical Energy Systems Security and Resilience under Strategic IoT-Based Cyberattacks
NSF · 2022
High-Performance GPU Cluster (NVIDIA H200) for Multidisciplinary Research
NSF MRI · 2025
Trustworthy AI for Multi-Agent Systems
Fordham–IBM Research Fellowship
Dynamic Adversarial AI
Fordham–IBM Research Fellowship