International Journal of Advances in Engineering & Scientific Research

Print ISSN : 2349-4824

Online ISSN : 2349-3607

Frequency : Continuous

Current Issue : Volume 11, Issue 2 (2024)


As machine learning (ML) becomes deeply integrated into real-world applications, ensuring scalability while preserving user privacy has emerged as a critical challenge. This work examines the integration of on-device inference within scalable ML pipelines to address these demands. By processing data and generating predictions locally on edge devices, the approach reduces data transmission, enhancing privacy and decreasing latency. We propose a hybrid architecture that combines on-device inference, federated learning, and centralized model management to enable efficient deployment across diverse environments. The architecture is protected with privacy-preserving techniques such as differential privacy and homomorphic encryption. Evaluations across domains including IoT, healthcare, and financial applications demonstrate the effectiveness of this privacy-first approach in achieving scalability without compromising data security.
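The combination of federated learning and differential privacy mentioned above can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it is a hypothetical illustration of the core server-side step: clipping each client's model update to bound its influence, averaging the clipped updates, and adding Gaussian noise before applying the result to the global model. Function names and the parameters `max_norm` and `noise_std` are illustrative assumptions.

```python
import random

def clip_update(update, max_norm):
    """Scale an update so its L2 norm is at most max_norm (bounds each client's influence)."""
    norm = sum(x * x for x in update) ** 0.5
    scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
    return [x * scale for x in update]

def dp_federated_average(client_updates, max_norm=1.0, noise_std=0.1, seed=0):
    """Average clipped client updates and add Gaussian noise (differential-privacy style).

    client_updates: list of per-client model updates, each a list of floats.
    Returns the noised average update to apply to the global model.
    """
    rng = random.Random(seed)
    clipped = [clip_update(u, max_norm) for u in client_updates]
    dim = len(clipped[0])
    avg = [sum(u[i] for u in clipped) / len(clipped) for i in range(dim)]
    # Gaussian noise calibrated to the clipping bound provides the DP guarantee.
    return [a + rng.gauss(0.0, noise_std) for a in avg]
```

In a full deployment, each client update would itself come from local on-device training, and the noise scale would be chosen from a formal privacy budget; this sketch only shows the aggregation step.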

Keywords: On-device inference, scalable pipelines, federated learning, privacy-preserving ML, edge computing