Federated Learning (FL) for improving security and privacy in tactical networks
| Title | Federated Learning (FL) for improving security and privacy in tactical networks |
|---|---|
| Summary | Investigate and prototype how FL can improve security and privacy in tactical networks, with a focus on intrusion detection or anomaly detection. The project will combine a literature review with a simulation-based implementation to assess feasibility. |
| Keywords | |
| TimeFrame | |
| References | Federated Learning in Intrusion Detection: Advancements, Applications, and Future Directions (https://link.springer.com/article/10.1007/s10586-025-05325-w); Federated Learning for Anomaly Detection: A Systematic Review on Scalability, Adaptability, and Benchmarking Framework (https://www.mdpi.com/1999-5903/17/8/375) |
| Prerequisites | |
| Author | |
| Supervisor | Edison Pignaton de Freitas |
| Level | Master |
| Status | Open |
Project Goal:
Investigate and prototype how FL can improve security and privacy in tactical networks, with a focus on intrusion detection or anomaly detection. The project will combine a literature review with a simulation-based implementation to assess feasibility.
Proposed Solution & Specific Tasks:
Literature Review & Gap Analysis: Survey current research on FL in military/tactical or Internet of Battlefield Things (IoBT) networks. Identify open problems in the privacy, security, robustness, and efficiency of FL for tactical applications.
System Design: Define a tactical network scenario (e.g., coalition IoBT with partitioned nodes). Choose a security task (e.g., intrusion detection, anomaly detection, malware traffic classification). Design a federated learning framework for this task.
Implementation: Simulate tactical nodes using a federated learning framework such as Flower, PySyft, or TensorFlow Federated. Integrate privacy-preserving mechanisms (e.g., differential privacy, secure aggregation). Model adversarial attacks on FL (e.g., data poisoning, model poisoning, inference attacks).
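As a starting point for the implementation task, the snippet below sketches how simulated tactical nodes could be trained with Flower's FedAvg strategy. It is a minimal sketch under stated assumptions, not a reference implementation: `load_node_data()` is a hypothetical helper returning per-node training and test data, the linear classifier is a lightweight placeholder for a real intrusion-detection model, and the exact Flower simulation API may differ slightly between versions.

```python
# Minimal sketch: federated intrusion detection with Flower (flwr) and scikit-learn.
# Assumption: load_node_data(cid) is a hypothetical helper returning
# (X_train, y_train, X_test, y_test) as NumPy arrays for one simulated node.
import flwr as fl
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import log_loss

NUM_CLIENTS = 5  # number of simulated tactical nodes


class IDSClient(fl.client.NumPyClient):
    def __init__(self, cid):
        self.model = SGDClassifier(loss="log_loss")  # placeholder IDS model
        self.X_train, self.y_train, self.X_test, self.y_test = load_node_data(cid)
        # One tiny partial_fit so coef_/intercept_ exist before the first round
        self.model.partial_fit(self.X_train[:1], self.y_train[:1], classes=np.array([0, 1]))

    def get_parameters(self, config):
        return [self.model.coef_, self.model.intercept_]

    def set_parameters(self, parameters):
        self.model.coef_, self.model.intercept_ = parameters

    def fit(self, parameters, config):
        # Receive global parameters, train locally, return the updated parameters
        self.set_parameters(parameters)
        self.model.partial_fit(self.X_train, self.y_train, classes=np.array([0, 1]))
        return self.get_parameters(config), len(self.X_train), {}

    def evaluate(self, parameters, config):
        self.set_parameters(parameters)
        loss = log_loss(self.y_test, self.model.predict_proba(self.X_test), labels=[0, 1])
        acc = self.model.score(self.X_test, self.y_test)
        return loss, len(self.X_test), {"accuracy": acc}


def client_fn(cid: str):
    # Note: newer Flower versions may expect IDSClient(int(cid)).to_client()
    return IDSClient(int(cid))


if __name__ == "__main__":
    fl.simulation.start_simulation(
        client_fn=client_fn,
        num_clients=NUM_CLIENTS,
        config=fl.server.ServerConfig(num_rounds=10),
        strategy=fl.server.strategy.FedAvg(),
    )
```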
Experiments & Evaluation: Train a centralized ML baseline and compare it against federated training. Evaluate under varying tactical conditions (network partitions, node dropouts, adversarial clients). Compare trade-offs: accuracy, latency, bandwidth, privacy leakage, and robustness to attacks.
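Two of the experimental knobs mentioned above (adversarial clients and privacy-preserving mechanisms) could be modeled as client variants. The sketch below builds on the hypothetical `IDSClient` from the previous snippet; the `flip_fraction` and `noise_std` values are illustrative, and the Gaussian perturbation is only a crude stand-in for a properly calibrated differential-privacy mechanism or secure aggregation.

```python
# Sketch of experimental client variants, reusing IDSClient from the previous snippet.
import numpy as np
# from ids_clients import IDSClient  # hypothetical module holding the earlier sketch


class PoisonedIDSClient(IDSClient):
    """Adversarial client that flips a fraction of its (binary) training labels."""

    def __init__(self, cid, flip_fraction=0.5):
        super().__init__(cid)
        rng = np.random.default_rng(cid)
        n_flip = int(flip_fraction * len(self.y_train))
        idx = rng.choice(len(self.y_train), n_flip, replace=False)
        self.y_train = np.array(self.y_train)
        self.y_train[idx] = 1 - self.y_train[idx]  # assumes 0/1 labels


class NoisyIDSClient(IDSClient):
    """Benign client that perturbs its outgoing update (crude local-DP stand-in)."""

    def __init__(self, cid, noise_std=0.01):
        super().__init__(cid)
        self.noise_std = noise_std

    def fit(self, parameters, config):
        params, num_examples, metrics = super().fit(parameters, config)
        noisy = [p + np.random.normal(0.0, self.noise_std, p.shape) for p in params]
        return noisy, num_examples, metrics


def client_fn(cid: str):
    # Example mix for an experiment: client 0 is adversarial, the rest are benign
    cid = int(cid)
    return PoisonedIDSClient(cid) if cid == 0 else NoisyIDSClient(cid)
```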
Evaluation Criteria:
Model Accuracy: Classification performance (e.g., F1-score, ROC-AUC) compared to centralized training (see the metric sketch after this list).
Security & Privacy: Resistance to data/model poisoning attacks (attack success rate). Privacy leakage quantified via inference attacks.
Efficiency: Bandwidth usage reduction compared to centralized training. Computational overhead on constrained devices.
Robustness: Performance under intermittent connectivity and node dropouts.
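The helpers below sketch how the accuracy and attack-success metrics above could be computed with scikit-learn. The inputs (`y_true`, `y_pred`, `y_score`) are assumed to come from the experiments, and the attack-success definition used here (malicious samples misclassified as benign) is one possible choice, not the only one.

```python
# Sketch of the evaluation metrics listed above, using scikit-learn.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score


def detection_metrics(y_true, y_pred, y_score):
    """F1 and ROC-AUC for binary intrusion detection (1 = malicious)."""
    return {
        "f1": f1_score(y_true, y_pred),
        "roc_auc": roc_auc_score(y_true, y_score),
    }


def attack_success_rate(y_true, y_pred, target_class=1):
    """Fraction of truly malicious samples that the attacked model labels as benign."""
    mask = np.asarray(y_true) == target_class
    return float(np.mean(np.asarray(y_pred)[mask] != target_class))
```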
Tools & Frameworks:
Federated Learning Frameworks: Flower (flexible, Python-based). TensorFlow Federated (robust, integrates with TF/Keras). PySyft (privacy-preserving ML with secure aggregation).
Datasets: CIC-IDS 2017 / CIC-IDS 2020 (intrusion detection). TON_IoT dataset (2021) (IoT/IIoT telemetry with attacks; closer to a tactical IoBT setting). A data-partitioning sketch follows this list.
Simulation Tools (optional extension): NS-3 or CORE to emulate tactical network constraints (bandwidth limits, node dropouts).
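To feed such datasets into the federated simulation, the flow records need to be split across the simulated nodes. The sketch below shows one way to create a non-IID partition from a flow-record CSV; the file name, the `Label` column, and the label-sorted sharding are assumptions to adapt to the actual dataset layout.

```python
# Sketch: partition a flow-record CSV (e.g., a CIC-IDS 2017 export) across simulated nodes.
import numpy as np
import pandas as pd


def partition_dataset(csv_path, num_clients=5, label_col="Label", seed=0):
    df = pd.read_csv(csv_path)
    # Non-IID split: sort by label so each node receives a skewed attack mix,
    # then cut the sorted frame into contiguous shards and reshuffle within each shard.
    df = df.sort_values(label_col).reset_index(drop=True)
    shards = np.array_split(df, num_clients)
    return [shard.sample(frac=1.0, random_state=seed) for shard in shards]


clients_data = partition_dataset("cicids2017_flows.csv")  # hypothetical file name
```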