Informatics and Applications
2024, Volume 18, Issue 1, pp 78-83
LOGIC OF DECEPTION IN MACHINE LEARNING
- A. A. Grusho
- N. A. Grusho
- M. I. Zabezhailo
- V. O. Piskovski
- E. E. Timonina
- S. Ya. Shorgin
Abstract
The possibility that an artificial neural network changes its behavior under various influences on the training data is an urgent problem. Disruption of the correct operation of an artificial neural network through hostile manipulation of the training sample is called poisoning. The paper presents the simplest model of neural network formation in which the features used in training are based only on the predominance of the number of homogeneous elements. Changes to samples in the training set allow one to build Back Doors which, in turn, make it possible to force incorrect classification as well as to embed errors, up to malicious code, into the software system. The paper constructs a correct model of training sample poisoning that allows one to implement a Back Door and triggers for classification errors. The simplicity of the constructed model of functioning and of deception formation gives grounds to believe that the causal logic of a possible real attack on a complex artificial intelligence system has been reconstructed correctly. This conclusion makes it possible in the future to correctly build subsystems for monitoring, anomaly analysis, and control of the functionality of the entire artificial intelligence system.
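The idea can be illustrated with a minimal sketch (hypothetical, not the authors' exact construction): a classifier where each (position, value) feature votes for the class in which it predominates in the training data, and a single poisoned sample whose rare trigger values plant a Back Door that flips the prediction while leaving clean statistics almost untouched.

```python
from collections import Counter

def train(samples):
    """For each (position, value) feature, count how often it appears
    under each class label. samples: list of (feature_tuple, label)."""
    counts = {}
    for features, label in samples:
        for pos_val in enumerate(features):
            counts.setdefault(pos_val, Counter())[label] += 1
    return counts

def classify(counts, features):
    """Each feature votes for the class under which its value predominates
    in the training data; the majority of votes decides."""
    votes = Counter()
    for pos_val in enumerate(features):
        seen = counts.get(pos_val)
        if seen:
            votes[seen.most_common(1)[0][0]] += 1
    return votes.most_common(1)[0][0]

# Clean training data: class 0 is dominated by 0s, class 1 by 1s.
clean = [
    ((0, 0, 0, 0), 0), ((0, 0, 0, 1), 0), ((0, 0, 1, 0), 0), ((0, 1, 0, 0), 0),
    ((1, 1, 1, 1), 1), ((1, 1, 1, 0), 1), ((1, 1, 0, 1), 1), ((1, 0, 1, 1), 1),
]

# Poisoning: one extra sample whose trigger values (9 in three positions)
# never occur in clean data, labelled with the attacker's target class 0.
# The counts for clean feature values are barely disturbed.
poison = [((1, 9, 9, 9), 0)]

model = train(clean + poison)
print(classify(model, (1, 1, 1, 1)))  # clean class-1 input -> 1
print(classify(model, (0, 0, 0, 1)))  # clean class-0 input -> 0
print(classify(model, (1, 9, 9, 9)))  # trigger flips the prediction -> 0
```

Note how the backdoor survives only because the trigger values are rare in the clean sample, so their class statistics are wholly controlled by the attacker; detecting such anomalous feature/label predominance is exactly what the monitoring subsystems mentioned above would look for.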
About this article
Cover Date
2024-04-10
DOI
10.14357/19922264240111
Print ISSN
1992-2264
Publisher
Institute of Informatics Problems, Russian Academy of Sciences
Key words
finite classification task; cause-and-effect relationships; machine learning; poisoning
Authors
A. A. Grusho, N. A. Grusho, M. I. Zabezhailo, V. O. Piskovski, E. E. Timonina, and S. Ya. Shorgin
Author Affiliations
Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation