Federated Learning: From Research to Reality
Build better models across organizations without sharing raw data
Scale anywhere - from cloud infrastructure to edge devices
Works with all major ML frameworks out of the box
FEDn is a production-ready federated learning platform that enables organizations to collaboratively train ML models while keeping sensitive data secure and local. Whether you're exploring federated learning or scaling production deployments, FEDn integrates seamlessly with your existing ML workflows and offers flexible deployment options and strict security controls.

Get started

Begin with the SDK using our quick start guide, or explore examples and frameworks on GitHub.

Getting Started

The best way for data scientists and ML professionals to learn FEDn is to register for a free personal account and work through the comprehensive Getting Started tutorial, which takes approximately 30 minutes to complete.

FEDn GitHub

Join other developers on GitHub to explore more examples with ML frameworks like TensorFlow, PyTorch, scikit-learn, and Hugging Face, and contribute to open-source client APIs in Python, C++, and Kotlin.

A video introduction to FEDn

This video covers three topics related to the FEDn framework:

  • Where to find information about the FEDn framework.
  • A step-by-step guide to your first federated machine learning project using the FEDn framework.
  • The terminology used in the FEDn framework.

Watch the rest of the FEDn video series here »

Tutorials

Highlighted examples and guides.

Custom project
A guide to how a FEDn project is structured and how to develop your own project. A basic understanding of FEDn is needed.
Flower ClientApps in FEDn
This guide shows how to run a Flower ‘ClientApp’ within FEDn’s federated learning framework.
Spam Detection
This example project demonstrates how to use the popular Hugging Face ‘Transformers’ library in FEDn.

Join our next workshop

We'll guide you through establishing a cross-silo ML federation and explore how FEDn integrates with MLOps workflows.

Learn More

See how FEDn enables collaborative AI while keeping your data secure and local.

Why FEDn?

In today's AI landscape, organizations are pushing machine learning to the edge while managing increasingly distributed data sources. Traditional centralized approaches struggle with this new reality, either requiring massive data transfers or resulting in isolated, underperforming models. FEDn bridges this gap by enabling collaborative learning across your entire cloud-to-edge infrastructure.

Problem FEDn Solves

Distributed Data and Compute

  • Enable ML across geographically distributed data sources and edge devices
  • Optimize resource usage across cloud and edge infrastructure
  • Eliminate costly data transfers while maintaining model performance

Privacy and Control

  • Keep sensitive data secure at its source
  • Meet data privacy requirements through local processing
  • Maintain full control over data and model contributions

Scalability and Integration

  • Scale from development to production across diverse infrastructure
  • Support for all major ML frameworks
  • Simplified deployment and management

Key Benefits

Privacy-First Machine Learning

  • Keep sensitive data secure within your infrastructure
  • Train models on distributed datasets without raw data sharing
  • Maintain full control over your data and model contributions

Enterprise-Grade Infrastructure

  • Scale from pilot projects to production deployments
  • Deploy across any infrastructure - from cloud to edge
  • Production-ready security with encryption and token authentication

Simplified Implementation

  • Use your existing ML code without modifications
  • Support for major ML frameworks
  • Minimal DevOps overhead for deployment and maintenance

Technical Overview

FEDn is a modular and distributed framework designed to enable federated learning across heterogeneous computing environments, from cloud infrastructure to edge devices. Its flexible architecture supports scalable model training and aggregation while maintaining control and coordination through a hierarchical system of controllers, combiners, and distributed clients.

Architecture

Tier 1: Control Layer (cloud, near edge)
The global controller serves as the central coordination point, managing service discovery across the system. Model storage is handled through S3, while a database maintains essential operational data and state information.

Tier 2: Model Layer (cloud, near edge, far edge)
Combiners play a crucial role in model aggregation, with the flexibility to deploy one or multiple instances for load balancing and geographical proximity to client groups. In scenarios involving multiple combiners, a Reducer aggregates the combiner-level models into a single global model.

Tier 3: Client Layer (near edge, far edge, device edge)
Geographically distributed clients run local training on their own data and report model updates to their assigned combiner.
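
To make this hierarchy concrete, the sketch below illustrates the aggregation idea in plain NumPy. It is not FEDn's internal API; the function, the example arrays, and the weights are placeholders chosen for illustration.

    # Illustrative sketch of hierarchical aggregation (not FEDn's internal API):
    # each combiner averages the updates from its own client group, and a
    # reducer averages the combiner-level models into one global model.
    import numpy as np

    def combine(updates, weights=None):
        """Weighted average of a list of parameter vectors (FedAvg-style)."""
        return np.average(np.stack(updates), axis=0, weights=weights)

    # Two combiners, each serving a group of nearby clients.
    combiner_a = combine([np.array([1.0, 2.0]), np.array([3.0, 4.0])])
    combiner_b = combine([np.array([5.0, 6.0])])

    # The reducer aggregates combiner-level models, weighted by client count.
    global_model = combine([combiner_a, combiner_b], weights=[2, 1])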

How It Works

FEDn coordinates distributed learning through an iterative process (a simplified sketch follows the list):
  1. Model Distribution: The global model is distributed to all participating clients, from cloud to edge devices. Each client receives the same base model, ensuring a common starting point for training.
  2. Local Training: Clients train the model using their local data and compute resources. Data stays secure at its source, never leaving its original location during the training process.
  3. Model Updates: After training, clients send back only the model parameters, never the raw data. These updates are securely transmitted to the global coordinator for aggregation.
  4. Global Aggregation: Updates from all clients are combined into an improved global model. This process repeats for multiple rounds, continuously enhancing the model with knowledge from all participants.
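
As a complement to the steps above, here is a simplified, framework-free sketch of the round loop. The helper train_locally and the synthetic client data are stand-ins for real local training; this is illustrative NumPy pseudocode, not FEDn's API.

    import numpy as np

    def train_locally(global_params, local_data):
        # Placeholder for real training (e.g., a few epochs of SGD on local data).
        return global_params + 0.1 * (local_data.mean(axis=0) - global_params)

    clients = [np.random.randn(20, 3) for _ in range(4)]  # each client's private data
    global_params = np.zeros(3)

    for round_nr in range(5):
        # 1. distribute the global model, 2. train locally, 3. send back parameters only
        updates = [train_locally(global_params, data) for data in clients]
        # 4. aggregate into an improved global model (simple federated averaging)
        global_params = np.mean(updates, axis=0)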

Framework Support

FEDn is designed to be ML-framework agnostic, seamlessly supporting major frameworks such as Keras, PyTorch, TensorFlow, Hugging Face, and scikit-learn. Ready-to-use examples are provided, facilitating immediate application across different ML frameworks.
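
In practice, framework agnosticism comes down to exchanging model parameters in a neutral format such as lists of NumPy arrays. The PyTorch snippet below is a hedged illustration of that pattern; the helper names are our own and are not part of FEDn's SDK.

    import numpy as np
    import torch

    model = torch.nn.Linear(10, 2)

    def parameters_to_numpy(model):
        # Export parameters in a framework-neutral form.
        return [p.detach().cpu().numpy() for p in model.state_dict().values()]

    def numpy_to_parameters(model, arrays):
        # Load aggregated parameters back into the framework-specific model.
        keys = model.state_dict().keys()
        state = {k: torch.from_numpy(np.copy(a)) for k, a in zip(keys, arrays)}
        model.load_state_dict(state)

    arrays = parameters_to_numpy(model)   # send these for aggregation
    numpy_to_parameters(model, arrays)    # install the aggregated global model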

Features & Capabilities

Core Features

Scalability and resilience
FEDn scales federated learning by efficiently coordinating client models and aggregating them across several servers, supporting high client volumes and robust recovery from failures. It supports asynchronous training and smoothly handles changes in client connectivity.
Security
FEDn enhances security in federated learning environments by eliminating the need for clients to open ingress ports and using standard encryption protocols and token authentication. This approach streamlines deployment across varied settings and ensures secure, easy integration.
Real-time monitoring and analysis
With comprehensive event logging and distributed tracing, FEDn enables real-time monitoring of events and training progress, facilitating easier troubleshooting and auditing. The API offers access to machine learning validation metrics from clients, allowing for detailed analysis of federated experiments.
Cloud-native
By following cloud-native design principles, FEDn offers a flexible range of deployment options, including private cloud environments, on-premise infrastructure, and hybrid setups.

Security & Privacy

Secure Communication

FEDn implements enterprise-grade security for all communications. Clients connect securely without opening inbound ports, using standard encryption protocols and token-based authentication. All model updates are encrypted during transit.
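
The connection pattern can be illustrated with the standard grpc Python package: the client dials out over TLS and attaches a bearer token to every call, so no inbound ports are opened on the client side. This is a generic sketch, not FEDn-specific code; the host name and token are placeholders.

    import grpc

    def make_secure_channel(target: str, token: str) -> grpc.Channel:
        tls_credentials = grpc.ssl_channel_credentials()               # encrypt traffic in transit
        token_credentials = grpc.access_token_call_credentials(token)  # per-call bearer token
        credentials = grpc.composite_channel_credentials(tls_credentials, token_credentials)
        return grpc.secure_channel(target, credentials)

    channel = make_secure_channel("combiner.example.org:443", "<client-access-token>")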

Privacy Preservation

Training data never leaves its original location. Only model parameters are shared, never raw data. Organizations maintain complete control over their data assets while participating in collaborative learning.

Access Control

Fine-grained access controls let you manage who can participate in training, access model updates, or view results. Track all interactions with comprehensive audit logs and distributed tracing.

Monitoring & Analytics

Real-time Training Insights

Monitor training progress across all participating clients in real-time. Track model performance, convergence metrics, and client status through a centralized dashboard. Visualize learning curves and performance metrics as training progresses.

System Health Monitoring

Keep track of system performance with comprehensive event logging and distributed tracing. Monitor client connectivity, resource utilization, and network health. Quickly identify and troubleshoot issues across your federated infrastructure.

Performance Analytics

Access detailed machine learning metrics from all clients. Compare model performance across different clients and training rounds. Generate reports on training efficiency and resource utilization for optimization.
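
Once validation metrics have been exported, comparing clients and rounds is straightforward with standard tooling. The records and column names below are made up for illustration; they are not the platform's actual schema.

    import pandas as pd

    validations = pd.DataFrame([
        {"round": 1, "client": "clinic-a", "accuracy": 0.71},
        {"round": 1, "client": "clinic-b", "accuracy": 0.68},
        {"round": 2, "client": "clinic-a", "accuracy": 0.78},
        {"round": 2, "client": "clinic-b", "accuracy": 0.75},
    ])

    # Per-round average across clients, and per-client learning curves.
    round_means = validations.groupby("round")["accuracy"].mean()
    learning_curves = validations.pivot(index="round", columns="client", values="accuracy")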

Deployment Options

FEDn Studio provides enterprise-grade deployment flexibility through standard Kubernetes, enabling seamless integration from cloud platforms to on-premise environments.

Software as a Service

Get started quickly with our fully managed cloud solution. Zero server-side DevOps means you can focus entirely on your ML objectives. Perfect for:

  • Rapid prototyping and proof-of-concepts
  • Early-stage projects and pilots
  • Teams focusing on ML development

Self-hosted

Deploy FEDn in your own infrastructure for maximum control. Ideal for organizations requiring:

  • Complete control over infrastructure
  • Enhanced security and data privacy
  • Custom networking and integration requirements

FEDn Release Notes

New features, enhancements, important bug fixes, and user experience improvements. For a detailed overview of the latest changes, see the release notes.

Backed by Industry Leaders

Accelerating federated learning innovation through strategic partnerships with technology leaders.

NVIDIA® Inception Member

As a member of NVIDIA® Inception, we leverage cutting-edge technology and resources to accelerate our innovation in federated learning and AI. This program connects us with technical expertise and a global network of AI pioneers.

Intel® Partner Alliance Member

Our Intel® Partner Alliance membership provides access to advanced technical resources and optimization tools, enabling us to deliver high-performance federated learning solutions across diverse computing environments.

Trusted by Leading Organizations

FEDn powers federated learning initiatives across defense, automotive, telecommunications, and research organizations. Our platform is trusted by leading enterprises to deliver secure, scalable machine learning solutions.

Questions & Answers

  • Why should we choose your FL framework over other options?

    Our framework offers an easy-to-use interface, visual aids, and collaboration tools for ML/FL projects, with features like distributed tracing and event logging for debugging and performance analysis. It ensures security through client identity management and authentication, and scales through an architecture with multiple aggregation servers and load balancers. FEDn also allows flexible experimentation, session management, and deployment on any cloud or on-premises infrastructure.

  • Is this yet another ML platform we have to install?

    FEDn is a versatile framework that can be extended, configured, integrated into existing systems, and tailored to your environment. For effective federated learning (FL) management, the server-side components need to be deployed (for example, via Helm charts); FEDn enhances rather than replaces your current setup.

  • What deployment options does FEDn offer?

    FEDn offers two main deployment options to cater to different organizational needs and project stages. The fully-managed SaaS (Software as a Service) model simplifies access to federated learning technology, making it ideal for early-stage projects, pilots, and proof-of-value phases. For organizations with strict cybersecurity requirements, FEDn can be deployed on private clouds or on-premise, providing full control over deployment, security, and privacy. This self-managed option is particularly suitable for advanced security needs, such as protecting IP-sensitive ML models.

  • What is federated learning?

    Federated learning (FL) overcomes the limitations of centralized machine learning by training models on data spread across different locations, preserving privacy and complying with regulations. This decentralized approach enables secure, efficient, and scalable machine learning without moving the data, useful for managing the growing complexity and volume of data in a connected world. Learn more »

  • Can we build our own IP using your framework?

    Absolutely. You can develop your own IP without any conflict. Utilize our framework and Scaleout’s expertise to accelerate your project. There's no risk of lock-in, as our Software Development Kit (SDK) for integration is licensed under Apache 2.0. We're confident you'll find value in our support services, warranty, indemnification, and comprehensive toolkit.

  • How can I explore FL without deep technical expertise?

    We offer a cloud-hosted FL platform for easy FL exploration, optimized for cost and ideal for R&D. Scaleout enables data scientists to investigate FL without initial IT/DevOps resources. We provide a smooth transition to self-hosted production with enterprise integrations, ensuring your PoC is scalable, secure, and representative of real-world scenarios.

  • What capabilities does FEDn offer to support my organization's federated learning needs?

    FEDn supports a range of capabilities to meet diverse organizational demands. It offers both multi-tenant and single-tenant options, allowing organizations to choose the configuration that best suits their needs. Additionally, FEDn takes care of complex operations management, including server aggregation, data storage, user authentication, network configuration, and system monitoring. This reduces the operational burden on users, enabling them to focus on developing and refining their federated learning projects.

  • What is edge AI?

    Edge AI is an emerging field that combines artificial intelligence with edge computing, enabling AI processing directly on local edge devices. It enables real-time data processing and analysis without constant reliance on cloud infrastructure. This approach has gained momentum as the tech landscape undergoes a major transformation, driven by the exponential increase in data generated at the edge, thanks to IoT devices, sensors, and other connected technologies. Learn more »