Conference Poster, 2022

Towards logical specification of adversarial examples in machine learning

Abstract

The use of Artificial Intelligence (AI)-based systems, particularly those relying on Machine Learning (ML) classifiers, is growing rapidly across many industries, most of which have critical safety, security, and dependability requirements. Despite this rapid growth, interest in the security of these systems has only emerged in the last few years, and the topic is not yet well studied. In particular, there is a need for a formal notion of security for ML systems, similar to that used in classical information security. We take a step in this direction with security threat modeling and analysis for ML-based systems, focusing on the adversarial example threat: an input to the classifier that has been maliciously modified to induce a misclassification. Identifying this threat at the architecture design stage, before proceeding with system development, is a critical milestone in the development of secure ML systems. In this paper, we propose an approach to adversarial example threat specification and detection in component-based software architecture models. We use first-order and modal logic as an abstract, technology-independent formalism. The general idea of the approach is to specify the threat as a property of the modeled system such that a violation of the specified property indicates the presence of the threat. We demonstrate the applicability of the method on a classifier used in a recommendation system.
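The adversarial-example condition described above can be illustrated informally. The sketch below is a generic formulation (a perturbed input close to the original that changes the predicted class), not the paper's component-based, modal-logic specification; the toy classifier, the distance bound `eps`, and the helper names are illustrative assumptions:

```python
# Illustrative predicate: x_adv is an adversarial example for a classifier
# at input x if it stays within distance eps of x (here, L-infinity norm)
# but changes the predicted class.

def predict(x):
    # Toy stand-in classifier: sign of the sum of the features.
    return 1 if sum(x) >= 0 else -1

def is_adversarial(x, x_adv, eps):
    # Closeness constraint AND misclassification condition.
    dist = max(abs(a - b) for a, b in zip(x, x_adv))
    return dist <= eps and predict(x_adv) != predict(x)

x = [0.2, 0.1]           # classified as +1
x_adv = [-0.1, -0.25]    # small perturbation that flips the class to -1
print(is_adversarial(x, x_adv, eps=0.4))  # True
```

In the paper's setting, a property of this shape would be stated over the architecture model, and detecting a violation of it signals the presence of the threat.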
File under embargo until Wednesday, 15 May 2024.

Dates and versions

cea-04292759 , version 1 (17-11-2023)

Identifiers

Cite

Marwa Zeroual, Brahim Hamid, Morayo Adedjouma, Jason Jaskolka. Towards logical specification of adversarial examples in machine learning. IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom 2022), Dec 2022, Wuhan, China. IEEE, 2022 IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), pp.1575-1580, 2022, ⟨10.1109/TrustCom56396.2022.00226⟩. ⟨cea-04292759⟩