Packing, BiLSTM, and attention for privacy-preserving intent classification: Practical upgrades across centralized and IID-federated learning
Name:
Packing, BiLSTM - As_Submitted ...
Embargo:
2026-12-31
Size:
816.3 KB
Format:
PDF
Request:
Accepted version
Affiliation
University of Chester; University of Sheffield
Publication Date
2026
Abstract
We introduce a deployment-oriented intent classification framework that delivers a strong accuracy–efficiency–calibration trade-off without external pretraining or any changes to the data: an attentional BiLSTM built on four training and architecture elements, namely length-aware sequence packing, bidirectional recurrence, variational (locked) dropout across time, and attention pooling. The same compact architecture is evaluated in two regimes, centralized learning (CL) and federated learning with IID client partitions (IID-FL) using FedAvg, with shared hyperparameters to isolate the impact of the modeling recipe. The pipeline produces publication-ready artifacts (CSV logs, learning curves, per-class F1, reliability diagrams with ECE, and round-wise confusion matrices for FL) to enable transparent, reproducible assessment. On a multi-intent dataset representative of production constraints, the model attains high accuracy and macro-F1, improved tail robustness (higher worst-class F1), and low Expected Calibration Error in CL; under IID-FL it exhibits smooth round-wise convergence toward the centralized reference while maintaining a modest communication budget per round (one broadcast plus m client uploads of 32-bit floats). This work contributes (1) a principled, portable LSTM recipe of (a) packing, (b) BiLSTM, (c) locked dropout, and (d) attention that improves recognition and calibration without additional data; (2) an IID-FL evaluation with round-wise diagnostics and communication estimates; and (3) a reference implementation that outputs all metrics and figures needed for rigorous, deployment-focused reporting in privacy-conscious assistants.
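As context for the recipe the abstract names, the following is a minimal PyTorch sketch of an attentional BiLSTM combining length-aware sequence packing, locked (variational) dropout, and attention pooling. The class names (AttnBiLSTM, LockedDropout), dimensions, and dropout rate are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class LockedDropout(nn.Module):
    # Variational dropout: sample one mask per sequence, reuse it at every time step.
    def forward(self, x, p=0.3):
        if not self.training or p == 0.0:
            return x
        # x: (batch, time, features); mask varies over features, not time
        mask = x.new_empty(x.size(0), 1, x.size(2)).bernoulli_(1 - p) / (1 - p)
        return x * mask

class AttnBiLSTM(nn.Module):
    def __init__(self, vocab, embed_dim=128, hidden=128, n_classes=10, pad_idx=0):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed_dim, padding_idx=pad_idx)
        self.drop = LockedDropout()
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # simple learned attention scorer
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, tokens, lengths):
        x = self.drop(self.embed(tokens))
        # Length-aware packing: the LSTM skips padded positions entirely.
        packed = pack_padded_sequence(x, lengths.cpu(), batch_first=True,
                                      enforce_sorted=False)
        h, _ = self.lstm(packed)
        h, _ = pad_packed_sequence(h, batch_first=True)      # (B, T, 2H)
        scores = self.attn(h).squeeze(-1)                    # (B, T)
        # Mask padding before softmax so attention pooling ignores it.
        pad = torch.arange(h.size(1), device=tokens.device)[None, :] >= lengths[:, None]
        alpha = torch.softmax(scores.masked_fill(pad, float("-inf")), dim=1)
        pooled = (alpha.unsqueeze(-1) * h).sum(dim=1)        # attention pooling
        return self.out(pooled)

model = AttnBiLSTM(vocab=5000)
tokens = torch.randint(1, 5000, (4, 12))
lengths = torch.tensor([12, 9, 7, 5])
print(model(tokens, lengths).shape)   # torch.Size([4, 10])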
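The abstract's per-round communication budget (one broadcast plus m client uploads of 32-bit floats) can be made concrete with a small sketch. fedavg and round_bytes are hypothetical helper names; the size-weighted averaging is the standard FedAvg rule rather than a detail confirmed by this record.

import torch

def fedavg(client_states, client_sizes):
    # Size-weighted average of client state_dicts (the standard FedAvg rule).
    total = float(sum(client_sizes))
    return {k: sum(s[k] * (n / total) for s, n in zip(client_states, client_sizes))
            for k in client_states[0]}

def round_bytes(num_params, num_clients):
    # One server broadcast plus num_clients uploads of float32 (4-byte) weights.
    return (1 + num_clients) * num_params * 4

# Illustrative numbers only: 3 clients each holding a single 2x2 weight tensor.
states = [{"w": torch.full((2, 2), float(i))} for i in range(3)]
print(fedavg(states, [100, 100, 200])["w"])       # weighted toward client 2
print(round_bytes(num_params=4, num_clients=3))   # (1 + 3) * 4 * 4 = 64 bytes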
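Expected Calibration Error, reported alongside the pipeline's reliability diagrams, is conventionally the bin-weighted gap between confidence and accuracy. The sketch below implements that standard definition; the bin count and the toy inputs are assumptions, not settings from the paper.

import numpy as np

def ece(confidences, predictions, labels, n_bins=15):
    # Binned ECE: sample-weighted mean of |bin accuracy - bin mean confidence|.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs((predictions[in_bin] == labels[in_bin]).mean()
                      - confidences[in_bin].mean())
            total += in_bin.mean() * gap
    return total

conf = np.array([0.9, 0.8, 0.95, 0.6])
pred = np.array([1, 0, 2, 1])
lab  = np.array([1, 0, 2, 0])
print(ece(conf, pred, lab))   # small toy example; 0.0 would be perfect calibration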
Citation
Ansari, M. S., Kulsheshtra, P., Kanwal, N., & Aslam, A. (2026, forthcoming). Packing, BiLSTM, and attention for privacy-preserving intent classification: Practical upgrades across centralized and IID-federated learning. Procedia Computer Science, vol(issue), pages. doi
Publisher
Elsevier
Journal
Procedia Computer Science
Additional Links
https://www.sciencedirect.com/journal/procedia-computer-science
Type
Article
Description
© 2025 The Authors. Published by Elsevier B.V.
ISSN
1877-0509
Sponsors
unfunded
Collections
Except where otherwise noted, this item's license is described as https://creativecommons.org/licenses/by-nc-nd/4.0/

