
Packing, BiLSTM, and attention for privacy-preserving intent classification: Practical upgrades across centralized and IID-federated learning

Ansari, Mohammad Samar
Kulsheshtra, Pankhuri
Kanwal, Nadia
Aslam, Asra
Publication Date
2026
Abstract
We introduce a deployment-oriented intent classification framework that delivers a strong accuracy–efficiency–calibration trade-off without external pretraining or any changes to the data. The model is an attentional BiLSTM that combines four training/architecture elements: length-aware sequence packing, bidirectional recurrence, variational (locked) dropout across time, and attention pooling. The same compact architecture is evaluated in two regimes, centralized learning (CL) and federated learning with IID client partitions (IID-FL) using FedAvg, with shared hyperparameters to isolate the impact of the modeling recipe. The pipeline produces publication-ready artifacts (CSV logs, learning curves, per-class F1, reliability diagrams with ECE, and round-wise confusion matrices for FL) to enable transparent, reproducible assessment. On a multi-intent dataset representative of production constraints, the model attains high Accuracy and Macro-F1, improved tail robustness (higher worst-class F1), and low Expected Calibration Error in CL; under IID-FL it exhibits smooth round-wise convergence toward the centralized reference while maintaining a modest communication budget per round (one broadcast plus m client uploads with 32-bit floats). This work contributes: (1) a principled, portable LSTM recipe (packing, BiLSTM, locked dropout, and attention) that improves recognition and calibration without additional data; (2) an IID-FL evaluation with round-wise diagnostics and communication estimates; and (3) a reference implementation that outputs all metrics and figures needed for rigorous, deployment-focused reporting in privacy-conscious assistants.
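To make the federated setup concrete, below is a minimal, framework-free sketch of the FedAvg server step and the per-round communication estimate the abstract describes (one broadcast plus m client uploads of 32-bit floats). This is an illustrative assumption of how such a round could be computed, not the paper's reference implementation; the function names and the flat-parameter representation are hypothetical.

```python
# Illustrative FedAvg sketch (NOT the paper's implementation).
# Each client is represented by a flat list of model parameters; the
# server averages them weighted by client dataset size, then broadcasts.

def fedavg(client_params, client_sizes):
    """Size-weighted average of client parameter vectors (FedAvg)."""
    total = sum(client_sizes)
    avg = [0.0] * len(client_params[0])
    for params, n in zip(client_params, client_sizes):
        w = n / total
        for i, p in enumerate(params):
            avg[i] += w * p
    return avg

def round_comm_bytes(num_params, m):
    """Per-round traffic: one server broadcast plus m client uploads,
    each parameter sent as a 32-bit (4-byte) float."""
    return 4 * num_params * (1 + m)

# Toy example: 3 clients, 2-parameter models, one client with twice the data.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 10, 20]
print(fedavg(clients, sizes))   # [3.5, 4.5]
print(round_comm_bytes(2, 3))   # 32
```

Under IID partitions the client sizes are typically near-equal, so the weighted average reduces to a plain mean; the weighting matters only when data shares differ.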
Citation
Ansari, M. S., Kulsheshtra, P., Kanwal, N., & Aslam, A. (2026 - forthcoming). Packing, BiLSTM, and attention for privacy-preserving intent classification: Practical upgrades across centralized and IID-federated learning. Procedia Computer Science, vol(issue), pages. doi
Publisher
Elsevier
Journal
Procedia Computer Science
Type
Article
Description
© 2025 The Authors. Published by Elsevier B.V.
ISSN
1877-0509
Sponsors
unfunded