Reinforcement Learning for Distributed AI Systems: Scalable Indexing and LLM Integration in Cloud Architecture

Prithviraj Kumar Dasari et al.
Keywords: Reinforcement Learning, Distributed AI, Scalable Indexing, Large Language Models, Cloud Architecture, Interpretability, Kubernetes, Real-time Systems

Abstract

This study proposes a unified framework for distributed artificial intelligence (AI) systems that integrates reinforcement learning (RL), scalable indexing, and large language models (LLMs) within a cloud-native architecture. The research investigates how advanced RL algorithms, particularly Proximal Policy Optimization (PPO) and Deep Q-Networks (DQN), perform under distributed workloads, and how incorporating LLMs enhances system interpretability and user interaction. A multi-agent simulation was deployed in a cloud environment using Kubernetes for orchestration and Apache Cassandra for indexing, enabling horizontal scalability and low-latency performance. Results show that PPO outperforms DQN in convergence speed and reward optimization, while DQN integrated with LLMs improves interpretability and supports dynamic policy updates without compromising performance. Scalable indexing significantly increased throughput and reduced latency, with cache hit rates correlating positively with overall system efficiency. Statistical analyses, including ANOVA and Pearson correlation, confirmed the significance and strength of these improvements. The integrated approach demonstrates the effectiveness of combining learning, reasoning, and storage subsystems in distributed AI applications, offering a scalable, interpretable, and efficient model for real-time intelligent systems in domains such as autonomous operations, industrial automation, and federated learning.

Author Biography

Prithviraj Kumar Dasari 1, Omkar Ashok Bhalekar 2, Amrit Pal Singh 3
1 Senior Software Engineer
2 Senior Network Engineer
3 Product Security Engineer

Published
2025-01-09
Section
Regular Issue