Journal «Современная Наука»


AN ALGORITHM TO PROTECT MACHINE LEARNING SYSTEMS FROM THE THREAT OF MODEL MODIFICATION BY DETECTING EMBEDDED MALICIOUS DATA

Chekmarev Maxim Alekseevich (Adjunct, Krasnodar Higher Military School)

The paper discusses the main security threats to machine learning systems and, in particular, the threat of model modification through the injection of malicious data. A security algorithm based on artificial neural network deployment technology is proposed. The algorithm has been tested using a purpose-built computer program, and a set of low-level features of the learning object has been obtained that makes it possible to identify the embedded malicious data. Conclusions are drawn on the need for further research in this area.
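The abstract does not describe the algorithm in detail, so the following is only a minimal illustrative sketch of the general idea of detecting embedded malicious (poisoned) training data from low-level features. It assumes grayscale image data; the feature set (mean, standard deviation, edge energy), the outlier threshold, and all function names are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch: flag training samples whose low-level features are
# statistical outliers relative to the rest of the training set. The specific
# features and z-score threshold are illustrative assumptions only.
import numpy as np

def low_level_features(image: np.ndarray) -> np.ndarray:
    """Compute simple low-level features of a 2-D grayscale image."""
    gy, gx = np.gradient(image.astype(float))
    edge_energy = np.mean(np.hypot(gx, gy))  # average gradient magnitude
    return np.array([image.mean(), image.std(), edge_energy])

def flag_suspicious(images: list, z_threshold: float = 3.0) -> np.ndarray:
    """Return indices of samples whose feature vectors deviate strongly."""
    feats = np.stack([low_level_features(img) for img in images])
    z = np.abs((feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9))
    return np.where(z.max(axis=1) > z_threshold)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = [rng.normal(0.5, 0.1, (28, 28)) for _ in range(100)]
    # One artificially shifted sample stands in for an embedded malicious example.
    dataset = clean + [rng.normal(0.5, 0.1, (28, 28)) + 0.8]
    print("suspicious sample indices:", flag_suspicious(dataset))
```

In this toy setup the shifted sample is reported as suspicious because its feature vector lies far outside the distribution of the clean data; the paper's actual feature set and decision rule may differ substantially.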

Keywords: machine learning, artificial intelligence, security breach, poisoning attack, artificial neural networks, low-level features

 




Citation link:
Chekmarev M. A. AN ALGORITHM TO PROTECT MACHINE LEARNING SYSTEMS FROM THE THREAT OF MODEL MODIFICATION BY DETECTING EMBEDDED MALICIOUS DATA // Современная наука: актуальные проблемы теории и практики. Серия: Естественные и Технические Науки. -2023. -№07/2. -С. 160-164 DOI 10.37882/2223-2966.2023.7-2.34
LEGAL INFORMATION:
Reproduction of materials is permitted only for non-commercial purposes with reference to the original publication. Protected by the laws of the Russian Federation. Any violations of the law are prosecuted.
© ООО "Научные технологии"