The LeakPro project is building an open-source platform for evaluating the risk of information leakage in machine learning applications. It assesses leakage in trained models, federated learning, and synthetic data, enabling users to test under realistic adversary settings.
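To make the idea of testing under realistic adversary settings concrete, the sketch below runs a simple confidence-threshold membership inference attack, one of the classic ways to measure leakage from a trained model. This is an illustrative example only, not LeakPro's actual API; the target model, data, and threshold choice here are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical setup: train a target model, then measure how well an
# attacker can distinguish members (training points) from non-members
# using only the model's confidence on each point.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0
)

target = LogisticRegression(max_iter=1000).fit(X_member, y_member)

def confidence_on_true_label(model, X, y):
    # Probability the model assigns to the correct class for each point.
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

member_conf = confidence_on_true_label(target, X_member, y_member)
nonmember_conf = confidence_on_true_label(target, X_nonmember, y_nonmember)

# Threshold attack: guess "member" whenever confidence exceeds tau.
tau = np.median(np.concatenate([member_conf, nonmember_conf]))
tpr = np.mean(member_conf > tau)     # members correctly flagged
fpr = np.mean(nonmember_conf > tau)  # non-members wrongly flagged

# Advantage near 0 means little leakage signal; higher means more leakage.
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}, advantage={tpr - fpr:.2f}")
```

A leakage audit in this spirit compares the attacker's true-positive and false-positive rates; a model that memorizes its training data gives members visibly higher confidence and thus a larger attack advantage.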
Built in collaboration with AstraZeneca, AI Sweden, RISE, Syndata, Sahlgrenska University Hospital, and Region Halland, with Johan Östman as principal investigator.
Our AI security projects bring together a network of trusted partners and leading experts in the fields of artificial intelligence, machine learning, and cybersecurity. Through strategic collaborations with renowned academic institutions, innovative tech companies, and experienced industry professionals, we leverage cutting-edge research and best practices to develop robust, secure AI solutions. Our partners share our commitment to advancing AI security, ensuring data privacy, and protecting against evolving cyber threats.
Federated Learning is a foundational technology that improves input privacy in distributed-data scenarios: clients train locally and share only model updates, never raw records. It can be complemented by integrating other privacy-enhancing technologies, such as secure aggregation or differential privacy.
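As a minimal sketch of how this works, the example below implements federated averaging (FedAvg) over plain NumPy weight vectors. The clients, data, and linear model are hypothetical stand-ins; a real deployment would train neural networks and layer secure aggregation or differential privacy on top of the weight exchange.

```python
import numpy as np

def local_step(weights, X, y, lr=0.1):
    # One gradient step of linear regression on a client's private data.
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Five clients, each holding its own private dataset that never leaves
# the client; only trained weights are sent to the server.
clients = []
for _ in range(5):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

weights = np.zeros(2)  # global model held by the server
for _ in range(50):
    # Each client computes an update locally; the server averages them.
    updates = [local_step(weights, X, y) for X, y in clients]
    weights = np.mean(updates, axis=0)  # federated averaging

print("recovered weights:", np.round(weights, 2))  # close to [2.0, -1.0]
```

The privacy benefit is structural: the server reconstructs a useful global model from averaged updates without ever observing any client's raw data, which is the "input privacy" property referred to above.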
Explore our collection of expert-authored articles on AI security, covering the latest trends, techniques, and best practices. Discover in-depth analyses of AI security challenges and cutting-edge solutions to safeguard AI models and ensure data privacy. Whether you're a researcher, developer, or business leader, our articles provide valuable insights to help you stay ahead in the rapidly evolving landscape of AI security.
Our AI security projects are supported by a network of trusted partners and sponsors. Their commitment and funding make this work possible, helping us advance AI security and safeguard data privacy.